Build Node.js token-based Authentication REST APIs with JWT (Express.js, MongoDB, Docker, GCP)

Bassam Rubaye, Full-Stack Software Engineer

20 Lessons (2h 12m)
  • 1. Introduction (3:05)
  • 2. What is JWT? (6:06)
  • 3. Debug JWT (5:22)
  • 4. Signed JWT and JWT Validation (5:17)
  • 5. Generate JWT using Node.js core module (Crypto) (8:51)
  • 6. Session-based Authentication (5:50)
  • 7. Token-based Authentication (with JWT) (4:18)
  • 8. Understand the code architecture (1:49)
  • 9. Build Authentication server with JWT using Node.js (20:29)
  • 10. Test the Authentication APIs using Postman (7:41)
  • 11. Use HTTP Authorization request header to pass token (1:47)
  • 12. Setup MongoDB cloud service (5:47)
  • 13. Setup Mongoose to connect to MongoDB database (9:11)
  • 14. Setting Security HTTP Response Headers with Helmet (12:05)
  • 15. What is a docker container? (3:28)
  • 16. Dockerize our Authentication service (6:35)
  • 17. Google Cloud Platform (GCP) (5:46)
  • 18. Container Registry (GCP) (7:46)
  • 19. Cloud Run (GCP) (6:06)
  • 20. Test the new service (Postman) (4:51)

51 Students · -- Projects

About This Class

JSON Web Token (JWT) is popular and talked about all the time, but what exactly is a JWT, what is its structure, and how does it work? In this course, we will answer these questions, use JWT to build token-based authentication REST APIs with Node.js, and see how that compares to traditional cookie-based authentication.

This course also dives into defining and creating an Authentication API, exposing API endpoints over HTTP, and handling HTTP requests. It also covers testing the API endpoints using an HTTP client (Postman).

This course will give you the skills you need to advance as a Node.js developer. It is practical and applicable, focused on skills you can use right away: web API authentication, hashing, encryption, JWT, Docker, and deployment to Google Cloud Platform.

Even though we will be using Node.js, the concepts learned here can be applied with any other framework!

By the end of this course, you will be able to:

  • Explain the JWT structure
  • Use the Node.js core module (crypto) to generate a JWT
  • Build Node.js authentication RESTful APIs that use JWT
  • Work with Express, Bcrypt, and NeDB
  • Set up a new MongoDB cluster and connect the Auth API to it
  • Wire the application modules using dependency injection (DI)
  • Use Docker to dockerize the authentication service
  • Deploy the authentication service to Google Cloud Platform

Meet Your Teacher

Bassam Rubaye

Full-Stack Software Engineer

Hello, I'm Bassam. I am a full-stack software engineer with more than a decade of experience working at tech companies across different industries. I love building software applications, with a particular focus on web applications!


Transcripts

1. Introduction: Do you want to learn how to build authentication REST APIs with JWT using Node.js? Then this course is for you. Hello and welcome. My name is Bassam, and I'm a full-stack software engineer with over 12 years of experience. By the end of this course, you will have learned the fundamentals of JWT: what JWT is and what the JWT structure looks like. We will see how we can generate a JWT token using the built-in Node.js crypto module, which provides cryptography functionality. Then we will take a look at how JWT is used, comparing token-based authentication with session-based authentication. Then we will see how to build our own token-based authentication REST API server using the Node.js ecosystem: Express for the server, bcrypt for hashing the passwords, and NeDB for saving our data, before moving to MongoDB using its cloud-based service. The fundamentals you will learn here about JWT can be applied to platforms other than Node.js. We will also learn about design patterns and concepts like dependency injection to wire our modules together. Then, for deployment, we will use Docker to generate our deployment image and then use Google Cloud Platform for serverless deployment, focusing on two services: Container Registry and Cloud Run. By learning these concepts, you can apply them to any other platform besides Node.js. Since we are using Node.js here, knowledge of JavaScript is required to understand the code. So what are you waiting for? See you soon.

2. What is JWT?: So what is JWT? JSON Web Token (JWT) is an open standard that defines a secure, compact, and self-contained way of exchanging information between two parties as a JSON object. The information in a JWT is digitally signed, which makes it verifiable. Here is a sample JWT token: you can see that it is composed of three parts, encoded in base64url and separated by dots. The first part (in red) is the header, the second part (in purple) is the payload, and the third part (in blue) is the signature. There are two notable properties of JWT. The first is that it is compact: because it is a base64url-encoded string and small in size, a JWT can be sent in the URL, in HTTP POST data, or even in an HTTP header. The second property is that a JWT is self-contained: the payload contains the required claim information, which can help avoid the need to query a database for that information. Now let's understand the structure of a JWT token in more detail. The first part is the header, which contains the token type and the hashing algorithm that is used to sign the token. For example, a typical header contains two attributes: the algorithm, as I mentioned, and the type, which is JWT. What we do with it is simply encode it using base64url encoding. The second part is the payload. The payload contains the claims, which are statements about an entity; this entity is normally the user. There are three types of claims. The first type is called registered claims. These claims are predefined or reserved, and you can use them directly. For example, some of the registered claims are: iss, which is the issuer of the JWT; sub, which is the subject of the JWT; and aud, the audience the JWT is intended for.
Exp is expiration time that you shouldn't use the JWT after the time is expired. Let's move to the second type, which is called public. Public claims. Are those that can be created at all. They should be defined as a URI with a namespace to avoid collisions. And the final type, private claims, are claims that are neither public or registered that the two parties agree to use in order to share the information. And the last part is the signature part. Now the signature. The signature ensures that the token has not been changed. It combines the header and the payload and then hashing them using the hashing algorithm that is specified in the ADA with a secret key. For example, this is how we can generate a signature if we use HMac, for example. And then we take the encoded header plus a dot plus the encoded payload, and then the secret key. Now we can also use, instead of secret key, we can use a public and a private key. If we use RSA, for example, and sate of H MAC, which we will go in details later. Now because you WT is base-64 URL encoded, it's not human-readable. So it's better to use a tool to help us to decode it, which I will show you now. 3. Debug JWT: Now let's go to JWT dot IO, which is a great tool if you want to debug or decode a JWT token or even to encode one, to generate one if you want. So let's get familiar with this tool. Now. It has, you can see here's the encoded part on the left side and the decoded part on the right side. So this is the JWT token that is generated based on these attributes in the header. And the attributes are the claims in the payload and the signature. So for example, if we come to the first part, which is the header and a set of the 256. We can change it to hs 5.1.2. Now you can see two things has changed here. The header, because the string gear change, the attribute here is change. So the encoding part is changed. The second type that has been changed is also the signature part. Because we change the hashing algorithm which the signature is using from the previous one. And also the encoding part of the header has been changed. Now, again, we can use the Euro. And instead of a secret key, we can use a public and private key, which is great for distributed systems. Since the consumers can share basically the, the public key between them. Going back again, this is our original JWT token. Now for example here, the same thing. If we changed anything here. It's like the subject. And instead of 9-0, we can make it 91. Can see the second part, which is this part, that purple color is changed here because that corresponds to the base-64 URL encoding of the of the payload. And also the signature is changed for the same reason as I explained for the header. So for example, if we change also the, the name to John and can see the same thing happens again. So you can see you can play with these attributes. Here under you can generate a new JWT token. And at the same time, if you have from your system a JWT, you can pay sit here. And it will automatically generate the decode, sorry, the parts that you want here that correspond to the encoded part. Which is a great way to debug if there's any, if any problem within your system. Another nice thing we can do is it gives a help or a hints about the claims. We are using an availability for example here. And this sub is a register claim. So because it's registered is a reserved claim. So it can give you this as a subject for IAT which is issued at, also provide a hint about it explanation. 
For example, since it's also timestamps, so can't see they give you the exact date and time for when this token has been issued. We can use the same thing if we use EXP or expired. At. Now if you notice for the name, we don't get any. Hence, the reason for that because a name here is moreover, private or custom claim that's not registered. Now, let's add, for example, another claim. Let's add the audience. For example. Let's use 12344. Now, if we go again, you can see because it's a registered claim, we get a hand for that, which is audience. 4. Signed JWT and JWT Validation: By signing the JWT, its integrity is maintained, which will ensure that other parties can only view the encoded data. And this case the header and the payload, but they cannot change it. Now there are two mechanisms to sign a JWT token. First one is called the symmetric signature. The other one is the asymmetric signature. Now, let's understand both. For symmetric signature and this type is done when there is only one server that signs and validates the token. It uses. A secret key and a hashing function. It's called HMac. Now let's see how this will work. We have the header and the payload. Then we will use the HMac. Hmac stands for hashing, for mass, for message authentication code. It uses hashing functions like MD5, SHA-1, and SHA-2, And it uses a secret key with that. Now, would this secret key, we generate the signature. Now, suppose that there are multiple applications that need to validate a given JWT. If we use the secret key to sign the JWT, then that secret key is needed by all of these applications to validate that JWT. But it's not a good idea really to share that secret key between all of these applications because it may result in leaking that secret key. That's why we can look at the other type of signing a JWT, which is called a symmetric signature. Asymmetric signature uses a private public key pair to sign the token. Now this type is very suitable for distributed systems where you have many applications that can validate JWT token. Where there is only one server that has the private key. And the server generates the token and signs them using that private key and share it with the client. Now, we have the data, the header and payload. Then we use here an RSA. This type of I'm in encryption that we will use the private key. And we, after that, we generate the signature based on this private key and the algorithm. Now the client can send this token to any application. And then the other application can validating that using the public key. Jwt validation. When a server receives a token, first thing, it fetches the header and payload from the token. Then it uses a secret key or the public key in case we are using asymmetric signature, or in this case we are using the cemetery signature. Then we will use the secret key to generate a signature from that header and payload. Now, if we generate a signature matches the signature provided in the token, then it will considered valid. Now, I wanted to mention also, It's not enough to just use the signature to validate the JWT. It's better to combine that with validating the claims in the payloads at well, for example, we can check the expiration date of the token using the EXP claim that you can find in the payload. We can also check that like if the token is actually meant for us through the audience claim, which called like a Udi. 5. Generate JWT using Node.js core module (Crypto): Okay, so now let's see how we can generate a JWT token using NodeJS. 
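Before walking through the course file, here is a minimal sketch of the validation flow lesson 4 describes: the server recomputes the HMAC-SHA256 signature over the header and payload with the shared secret, compares it to the signature carried in the token, and also checks the exp claim. The helper names (isTokenValid, base64url) are illustrative and not taken from the course repository.

```js
const crypto = require('crypto');

// Re-encode a Buffer as base64url, the encoding JWT uses.
const base64url = (buf) =>
  buf.toString('base64').replace(/=/g, '').replace(/\+/g, '-').replace(/\//g, '_');

// Symmetric (HS256) validation sketch: recompute the signature and compare.
function isTokenValid(token, secret) {
  const [encodedHeader, encodedPayload, signature] = token.split('.');

  const expected = base64url(
    crypto.createHmac('sha256', secret)
      .update(`${encodedHeader}.${encodedPayload}`)
      .digest()
  );
  if (expected !== signature) return false; // header or payload was altered

  // The signature alone is not enough: also validate claims such as exp.
  const payload = JSON.parse(Buffer.from(encodedPayload, 'base64').toString());
  if (payload.exp && payload.exp < Date.now() / 1000) return false; // expired

  return true;
}
```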
So I wanted to mention that this is by no means something like a production ready or you should use this in production. Like later, I will show you some more production ready modules. You can use it when we belong the authentication server. But this is more avoid to give you kind of deeper understanding about this structure. For JWT can also kinda see how we can use the crypto modules to do this. So this is all good. Like leave the, the, the repo and the description in case you, you're interested in kind of tick and locate. It also contains more examples about other core modules like the streams, the events, the HTTP. So here some kind of more examples of all the crypto itself. So moving on here, the first line, which is basically just importing the module, the crypto module. Now if you don't know what the crypto modules, it's basically deals with the cryptography, provides lucky cryptography, utilities, things like for hashing, for HMac, for cipher or the cipher. And normally that's the pattern used is to have a set of factory methods to generate the in a sense, for, for each of these k in a sensing. In a sense, it's sorry, and it also supports stream spaced interface. So moving on, on my three, I'm just defining a method, this method called encode base-64 URL. It takes a string and it returns a string that it's encoded with base 64 URL. Now, the way to use it is by first converted a string back to our buffer. Now a string is a sequence of characters, while a buffer here is a sequence of bytes. Now, after I converted to a buffer, I converted back to a string with the using the base-64 encoding here. And then replace some of the characters and the slab for a slash and so forth like to make it friendly with the to make it like in the same format as the base-64 URL format. Moving on, on line 11. Using the factory method to create an HMAC, which takes the first argument is the algorithm type which is shot 256. And then the second argument is basically the secret key. Because as I mentioned, like HMac is hashing. Or this hashing has some added layer of security by using a secret key. And in this case, I'm just using a string called Secret and encoded back to make it base-64 euro. Now, if you remember, as we mentioned, that JWT format is a header and then dot and then the payload. And then final part is the signature. And this is what the HMac would be used here, is to January the signature in the header. We, we are just referring to the algorithm we are going to use, which is HMac, we shout to 56, and then the type is JWT. Now for the payload, we are saying that we have this claim which is this subject with this ID, and then it has a name and it's admin as false. So we have these two objects, the header and the payload. Now what's remaining for us is these signature. Now, moving on to live 30, which is like the core method here, which is called january JWT. And it will take the header object and then the payload object, and then the algorithm, which for simplicity, I'm assuming that it will always as like expect the ACH Mac with that, with the Chateau 56. Now the first part is to generate the encoded header, as well as I mentioned, like using the basic C4 URL. And so taking the header objects and then just make it converted to a string using the JSON dot stringify. And then on line 32, I'm just logging this just to see the difference between the original object and the second one. Moving on here, to the same thing applies to the encoded payload, urinary being coded payload. 
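For orientation, here is a compact sketch of the kind of script this lesson walks through; the signing and joining steps it performs are described next in the transcript. The values and function names are illustrative rather than the exact contents of the course repository.

```js
const crypto = require('crypto');

// Convert a string (or Buffer) to base64url, as discussed above.
const encodeBase64Url = (input) =>
  Buffer.from(input).toString('base64')
    .replace(/=/g, '').replace(/\+/g, '-').replace(/\//g, '_');

const header = { alg: 'HS256', typ: 'JWT' };
const payload = { sub: '1234567890', name: 'John Doe', admin: false };

// HMAC-SHA256 keyed with a secret string, as the lesson describes.
function generateJWT(header, payload, secret) {
  const encodedHeader = encodeBase64Url(JSON.stringify(header));
  const encodedPayload = encodeBase64Url(JSON.stringify(payload));

  const signature = crypto.createHmac('sha256', secret)
    .update(`${encodedHeader}.${encodedPayload}`)
    .digest(); // a Buffer, since no output encoding is given

  return [encodedHeader, encodedPayload, encodeBase64Url(signature)].join('.');
}

console.log(generateJWT(header, payload, 'secret'));
// Paste the printed token into the debugger at jwt.io to inspect it.
```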
Finally, we generate the signature using the Mac and then etch Mac has two methods, update and digest. Update basically will take whatever string and have added to the contents so that it can be hashed in when you call digest, you're saying yes, now I have the content ready. I want to generate the hash for this. So after you call digests, you cannot call it again, it will throw an error. So here I'll have the signature. Since I'm not providing a good encoding for Digest at all, Janet as a buffer as you see here on line 36. And then finally, I'll encode the signature to make it also a base-64 URL signature. And then finally, I'll generate the JWT by adding these parts together, joining them together, and using kind of the dot between each word and returning this JWT. And let's now see how we can test this. So if we say here, you can see that the first part, which is the the header, the encoded header, and then the encoded payload is easy, easy on line 34. And then the signature before recording that, which is a buffer. And then finally the JWT token. We complete one here. So now if we take this part JWT and go back to the JWT Debugger dot IO, that's has the debugger and pasted there. You can see now I, I decoded back my original information. So for just two. If we change this, for example, now, sampling like true and rerun, that we should have a different JWT token. And you can see here like the parts for the header census not changed as the same. But when you look on the payload, which is this part here, it's the same thing for the, for the signature part. Changing one kind of value from true to false, changed the whole JWT basically. And so if I take this and go back here, you can see we decode back the original values that we have, which is here Admin true. Okay? 6. Session-based Authentication: Authentication is an important part of any application. Be it web-based, desktop or mobile application. Each kind of application has certain best practices when it comes to handling user authentication. In web-based application, this process is very important as it acts as a thin red line between the application being secure and insecure. Web application. Use SCTP protocol and HTTP is a stateless protocol. This means that each HTTP request is considered an independent requests and no information from the previous request is saved. For example, let's say that we are shopping on Amazon.com. If we add certain items to our shopping cart, then we should be able to see all the items even after we navigate to a different page on Amazon. In this case, each time a request sent to the Amazon server from the client, decline needs to send its credentials. So we need to send our login information each time an HTTP request is made to the server. As you can see, this is not really a good practice to solve this issue. Session-based authentication comes into the picture. It is also known as cookie-based authentication. Now in this sequence diagram, we will see the steps involved in the cookie based authentication. The first step, the user, which normally is the browser, sends a request to the server. The request contains the login credentials, the username, and password. For that user. The web server authenticates the user. If the user is valid, it's creates a session and stores all the information about the user in memory or in database. And returns a session ID to the user using something called a cookie. In the HTTP response. This session ID is stored then by the user and the browser cookies. 
The next time the user makes a request, it sends the cookies as well in the HTTP header. Now the web server validates this, this session ID and the cookie and checks it with the in-memory session ID. If it's valid, then the webserver recognize the user and returns the requested information. For example, it will return wherever response from the API, in this case API foo. So there are few limitations when it comes to cookie-based authentication. First thing is if we are using a distributed systems. In a distributed system, there are many servers handling the same service using a load balancer. Which means it's not necessary that a request from a given user will always go to the same server. It is quite possible that one request is handled by one particular server and the next request is handled by another server. In this case, using session-based authentication is very uncommon because we can't see the session info on both servers memory. Next look, limitation, it comes to the performance because storing and retrieving this session information from the database or Memory is a costly process. Each time a new user authenticates, we need to store their information. And whenever a user sends a session ID with the request, then we need to validate it from the database or memory, which leads to a lot of back and forth. Finally, since we are using as saving the session ID in the user's browser cookie. It is possible that an attacker or a website could gain access to the user's cookie and then perform some malicious operations on the website, which is also known as cross-site request forgery attack. 7. Token-based Authentication (with JWT): In this video, I will continue talking about the JSON web token and how we can use JSON web token with the authentication system. So we will build an authentication server that uses JWT using NodeJS. So let's get started. So first of all, let me give some kind of theory about what a modern token-based authentication server look like. So imagine you have an application, and this application require some authentication from the user. Normally, these, this, these authentication systems require the user to pass credentials, username, and password. So here the actor or try to use the client and try to login. So it passes the username and password. The client then passes this by calling the kind of the login API from the server. Normally this is a post request passing these credentials. Now the server, it will try to verify these username and password by calling a database. And if there is a match in the username and password are valid. And then it will return back a token, which is basically JWT token back to the client. Now if the client that try to call another API, but kind of protected API, there is no need to pass the the credentials again. Instead, you use the authorization header. And basically by putting the token here using the mirrors schema. And you send that to the, to the server. Then the server will only try to validate. If this token is valid or not. If it is valid, then it will return back the success response to the user. If it's not, then it will kind of error out. So this is kind of the basic idea about a token-based authentication server that uses a JWT. Now, when you try to compare this to the traditional cookie-based authentication, you notice that this is completely stimulus. There is no need to manage these sessions here. And because you only the credentials are only passed on the 1s m, this subsequent requests, we'll use the token instead. 
And the talk itself kind of represents the claims. So the advantage here is that you can also see the server doesn't need to make subsequent calls to the database. It only needs to happen once. Which gives a kind of performance gain. Since the now to validate the token is what's required rather than just calling the database again. And also like there is no need really to use cookies, to pass cookies between the client and server. And since cookies are bound to domains. So we know there is no need to worry about the cross origin issue anymore, especially with mobile devices. So this is kind of the popular design, especially with with a graph q l. So let's see how we can bold one. Such servers using NodeJS. 8. Understand the code architecture: Now let's talk about the authentication server we're going to weld. So it basically follow the three tiers or three layer architecture. Going from the bottom one, which is the database layer. Over here, we will use the database, store our credentials like these and am passer for. Since as I mentioned, we're going to use NodeJS here. I'm going to use some and EDV, which is an embedded database, and to store these data. Then the next layer, which is the service layer. Now the service layer will call, will basically have some methods like login register and which we'll call the methods from the database layer. Now, the top layer we will have the controller, which will represent kind of the handling the request and response for each of the routes, which is, as I mentioned, we will have a route for login. We will have a route for unregister, which will accept kind of username and password and the requests and then return badly detail can l, All of these will, all of these methods here and the controller will basically we'll call the methods in the service, as you will see. So each layer is dependent on the layer below it. So the surface depends on the in-database, the controller depends on this service. 9. Build Authentication server with JWT using Node.js: Okay, so now we will move to the authentication server we're going to build. So this one, we will expose three web APIs. The first one is to register the user, which will accept a post request. The second API will be login API, which will also accept a post request for the username and password and will return back the the JWT token back. And then a third API will be a GET request. We will call it luck check token, which will verify the token if it's valid or not. So here I'm using NodeJS for this in I'm using the express framework to make it easier to all these APIs. So here you can look into the dependencies which just like started with kind of your typical NodeJS application and install these dependencies. The first one is the Express, which is our kind of MVC based or framework to follow application NodeJS, the second dependency, bodyParser basically to accept kind of the post parameters to parse those. And then the B crypt library to hash if passwords so that for security reasons is safer. And then finally, the JSON Web Token to generate the tokens and the NADP, as I mentioned earlier, is basically we've all kind of store the database, the username and password in the database. And so an ADB, it's, it's kinda of a native embedded database that provide you with persistent or in memory. That's kinda a 100, 100% JavaScript-based would know binary dependencies. Json web token. It's basically to generate the and verify the tokens. And as I mentioned for decrypt basically to, to hashing the passwords. 
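In package.json, the dependencies just listed would look roughly like this; the version ranges are illustrative, not the exact ones pinned in the course:

```json
{
  "name": "auth-service",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.17.1",
    "body-parser": "^1.19.0",
    "bcrypt": "^5.0.0",
    "jsonwebtoken": "^8.5.1",
    "nedb": "^1.8.0"
  }
}
```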
So I start with the first file here, which is our main file, the server. I'm just loading the dependency for Express, then the second line for the body parser. And then I have a file for the route called the user router. And then here's your basically typical express, a worm just creating a new instance, new server instance run Express. Then using this middleware for the JSON, for the body parser. And then this is just a test middleware and just to log the request that I'm receiving here. And next means that you move to the next middleware. So that you don't block the pipeline requests. Finally, here I am saying that anything, all of the APIs will be within this route called slash user. I'm just from a perspective. And finally, we will, on line 13, we will listen on port 3 thousand. And next we will have the files, the modules, the database and the router, and the controller and the service, all under the Source folder here. So starting from the db folder are here. And here I'm on the first line, I'm just loading the NADP dependency, which basically expose this up. This class called datastore. And then on the next line, I'm importing the promise phi function from the util core module. What is this like? The reason for that is most of the kind of methods that expose by an ADB Datastore are using the callback pattern. But for me personally I like to work with the promises. And that's why this utility function here. It's very helpful to convert any kind of yep, callback pattern to be promise in instead. And just I want to mention that you need to have NodeJS like acting they added after NodeJS eight. So you need to use like NodeJS eight and above. For me, like I tested this with NodeJS version 12. And moving on here, I'm just an online forum just creating the database from the datastore. On line 56. These are basically the methods that will first one call and so Promise. So just return a promise and we'll accept the object that you want to insert to the database. And then the fine one promise. Well basically, as the main suggest, the name suggests is that you will find the record by Wherever four arguments you are providing. I want to mention here is that the NADP kind of very similar to MongoDB. If you work with MongoDB before, kind of follow kinda similar gaming's, similar structures here. And that's it. That's our DB module. When finally we just exported this object. Also like I want to mention here that I'm using memory, so I'm not persist into a file. You can easily convert this to be persistent to file. By adding just one line. You can look into the documentation for an ADV, how to do that. But for me, just for simplicity, I put everything in memory. And next thing we will move to the service layer. So again, for the service, we are going to need three methods. The first one to register, second one for login, and the third one to check the token. Now, I wanted to mention here something is that I'm using here. I'm creating a factory method, which you see here on line seven, which is called CREATE_USER service. The reason behind it is I wanted to inject the dependency here. This is kind of a simple pattern for doing dependency injection without the need to kind of heart tight cupola equity dependency or DDB here since this service, if you recall from the diagram I mentioned, it depend, it depends on the DB module. I'm here passing the DDB here. So that in case you want to use a different implementation for the DB, you really don't need to change anything here. 
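Stepping back to the db module described a moment ago, a minimal sketch of it could look like this; the file path and export names approximate the structure described and are not a verbatim copy of the course file:

```js
// src/db/index.js (sketch)
const Datastore = require('nedb');
const { promisify } = require('util');

// In-memory datastore; NeDB can also persist to a file if configured to.
const db = new Datastore();

// NeDB's methods use callbacks; promisify wraps them so we can use promises.
// Binding to db keeps the correct `this` inside the datastore methods.
const insertPromise = promisify(db.insert.bind(db));
const findOnePromise = promisify(db.findOne.bind(db));

module.exports = { insertPromise, findOnePromise };
```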
Since you are passing, passing it as a dependency. There is no really tight coupling. Here. I will show you where we are passing the DB, which is going to be end-user router for file. So I'm going to use the same pattern with the other, with the controller shoe. So this is kind of just two of y i. Now there is couple of dependencies here, which is on line one, just using the JSON Web Token module into January the token and also to sign this token and to verify the token decrypts. Module is also two, as I mentioned, to hash the passwords when we save them so that we don't save them as plain text. And like the secret and the cell salt rounds. Like the secret, will you use it with the with the JWT generation and verification sold around? We'll use it for 3dB crept. I will go through that now in a minute. So on line seven, as I mentioned, you create the method I factor method that will return an object, this object called the user service. Which and the first method call register here, and already asserted. This accepts a username and password. Just to keep it simple. You can have more properties if you like, more fields. But just for the sake of simplicity here, going t depends only on username and password. And on line 11, we will call the secret module. It has a method called hash. And what it takes, it takes a password, and then it will take also salt rounds. Now for the salt rounds like that, GV and idea, it's basically kind of considered the cost factor. So the higher the number, the higher the number of rounds required two kind of for the hashing or the passwords. And that's why we need assaults around here, that B correct, you will use. So hash will return a promise. So here that's what we use, that then if everything is successful, we'll get the hash. Then we call the DB module, assuming we have the insert promise method there. And then we pass the username and the password which will be hashed password. Then we will create the document, and we'll just return that document as simple as that. And the second method here is called login. And what logan do is takes a username and password and try to see if the username and password is valid or not. So on line 19, I'm calling the defined one record promise from the DB module, passing the username. If there is a record that that match this username. I'm just like logging into your for testing. And then on line 21, I'm just check if there's a record, then I calling DB crypt again to compare if the password on that record is similar to the to the password patch passed, which is also should be hash with the hash password as there is a match. Then I am basically calling the which is what match here, going to reflect on line 24. If the match is true, then I will go ahead and generate the token. Now the JWT module has a method called sign. The sign will take the claim object, which in this case is just b. I chose to kinda return the username and then some expiration date for an hour here, kind of chose for this and then the secret, a secret again, we'll use. Will be used to for the signature of this token. And then we return the token back. If something goes wrong, which means like username password didn't found for some reason, then just throw this error. That's it for the login. 
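Put together, the service factory described above might look roughly like this. It is a sketch: the secret, salt rounds, and expiry are placeholders, and the checkToken method included here is the one the transcript describes next.

```js
// src/service/userService.js (sketch)
const jwt = require('jsonwebtoken');
const bcrypt = require('bcrypt');

const SECRET = 'change-me'; // used to sign and verify tokens
const SALT_ROUNDS = 10;     // bcrypt cost factor

// Factory: the db module is injected, so the service is not tied to NeDB.
const createUserService = (db) => ({
  register(username, password) {
    // Hash the password before storing it.
    return bcrypt.hash(password, SALT_ROUNDS)
      .then((hash) => db.insertPromise({ username, password: hash }));
  },

  login(username, password) {
    return db.findOnePromise({ username }).then((record) => {
      if (!record) throw new Error('username or password could not be found');
      return bcrypt.compare(password, record.password).then((match) => {
        if (!match) throw new Error('username or password could not be found');
        // Sign a token carrying the username, expiring in one hour.
        return jwt.sign({ username }, SECRET, { expiresIn: '1h' });
      });
    });
  },

  checkToken(token, callback) {
    // Verify the signature and expiry; error-first callback style.
    jwt.verify(token, SECRET, (err, decoded) => {
      if (err) return callback(err);
      callback(null, decoded.username);
    });
  },
});

module.exports = createUserService;
```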
Finally, for the check token method here, it's simply takes a token and a callback and multiple do is we will call the verify method from the JWT module just to make sure, like we're the verifier will do is check if this token is valid, which I can engage you are by checking the expiration date. All of these things like That's refers that the JWT is still valid. And so it takes the token, it take the same secret we used to sign this token. And it will return kind of the, the callback for it is the, the, the error because the kind of This Is there an error first pattern for NodeJS and then the decoded result in there's an error. We just return that you are to the callback. Otherwise, we just return the username. And that's it. This is our service. Now the next file is the controller. The controller will have the same methods, which is for login, for registering for check token, except that it will accepts different arguments, since the controller will basically handle the request and return back the responsible, it will accept the request and response and then the next and afford the middleware that matches the Express. So again, as a talked earlier, going to use the same pattern like the factory. Again, since this controller is dependent on the user service, so I'm just injecting this service here. And what this factory method will return a user controller object for us. Now online for, for the register, I'm like This method is accepts the request and then the response, and then the next method function. And then on line five, just logging this body here. I'm typo. And then on line six, just calling the user service register, passing the username and password, which we takes from the body. Because this one a is going to be a post request. So it's going to be in the body of the request. And then since register is returns, returns a promise. If everything is correct, then we return 200 with this response which is OK. And then user with the document, and that's it. And if there's anything kind of Wrong for some reason. Then we return like a 404. And then when the OPS falls and what type of error messages there. For the next method is the login method, which is a kind of follow accepting arguments, the request and response. And the next, in calling the user service called the login, I pass the user name and password from the body requests because this one is also postdoc, post, post requests. So username and password will be part of the body of the request. And like the login will basically return a token back for us if everything is successful. So I'm just returning the token with this within this object with okay, Cultural. If the username password didn't match, then our return 400. And so with the ok. falls and then what's the type of error message? And finally, user controller method, which is called check token. This is, as I mentioned, a GET request. So it will it will accept the same kind of requests and response in we will call the check token from the user service passing the token. So since this one is a GET request, so we'll pass the token as a query parameter. Now I wanted to mention, like in real life, in real applications, you need to pass it as inside the authorization header. But for simplicity, I just put it here and the query parameter. Let me know if you want me to implement also in the authorization header by, for example, welding in another service for that, another API for that to demonstrate this. Now moving on. 
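The controller, and the router that wires everything together, might be sketched as follows. The checkToken handler and the router file are the parts the transcript covers next; the module paths and response shapes here are approximations rather than the exact course code.

```js
// src/controller/userController.js (sketch)
const createUserController = (userService) => ({
  register(req, res, next) {
    const { username, password } = req.body; // POST body, parsed as JSON
    userService.register(username, password)
      .then((user) => res.status(200).json({ ok: true, user }))
      .catch((err) => res.status(400).json({ ok: false, error: err.message }));
  },

  login(req, res, next) {
    const { username, password } = req.body;
    userService.login(username, password)
      .then((token) => res.status(200).json({ ok: true, token }))
      .catch((err) => res.status(400).json({ ok: false, error: err.message }));
  },

  checkToken(req, res, next) {
    const token = req.query.token; // later moved to the Authorization header
    userService.checkToken(token, (err, username) => {
      if (err) return res.status(400).json({ ok: false, error: err.message });
      res.status(200).json({ ok: true, username });
    });
  },
});

module.exports = createUserController;
```

The router then injects each dependency into the factory above it (db into the service, the service into the controller) and maps the routes:

```js
// src/routes/userRouter.js (sketch)
const express = require('express');
const db = require('../db');
const createUserService = require('../service/userService');
const createUserController = require('../controller/userController');

// Dependency injection: db -> service -> controller.
const userService = createUserService(db);
const userController = createUserController(userService);

const router = express.Router();
router.get('/checkToken', userController.checkToken);
router.post('/login', userController.login);
router.post('/register', userController.register);

module.exports = router; // mounted under /user in server.js
```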
But since this one is accepted callback, like with the error first and then the results on the data. If there's an error, we'll return 400. In saying that something went wrong. What's the error for that? Otherwise, we just return to a 100 with user data in down. Finally, our user router is really simple. It doesn't do anything other than injecting the, the, the dependencies for each other. So here I'm using the router from the Express. And on line one, line two. Here I'm basically create kind of importing the user module, controller module. And then next I'm importing the user service module. And finally I'm importing the DB module in the sense, as you saw, like the service depends on the db and it all depends on the service. And here it is essential that the user service passing the DB to the factory method, doing the same thing for the controller, bypassing the user service to that. And finally, I'll create the router. And four thereafter I'm saying for the, for the check token is going to be a get request. And we're going to use this detector O'Kane from user controller to handle this request. For the login is going to be opposed and is going to be using the login method to handle this request. Finally, for the register, this is same thing. And up here I'm just creating a test username password just to test things in there. That's it. 10. Test the Authentication APIs using Postman: Now let's run our server. And you can see here I created an npm task here. So to run the server file here. So easiest way to do is to do npm, start, npm run start. What we're saying here is that we are, it's running the server on line 3 thousand. So how we test, how we can test this? Like for me, I prefer to use postman for this. So we can use curl of you like, and there is like different ways. It's really just an API server. So it's up to you what you use. So for me I would just like use the the post man. So what spokesman is kinda we provide you a UI to simplifies the ways to make requests to your username, to your API. So basically, so the, the first API we have is the register API. Now this is the path for it, our server slash, user slash register. And then here you can see we are, I'm saying it's supposed request him here. I'm saying pass the body. Now when I do, since that's supposed to require like Hawaii and the format of this body is Rowe. And I'm here selecting its need to be a JSON. Since I'm using the JSON parser for it. As simple as that. Now, if you recall that the parameters we need to pass is the username and password. So let's create this user 1-2-3, 4-5-6. Here. Yeah, so this is our our response basically on the the bottom part, you see we have a 200 response and then we have OK, as true. And then the document that we received back from the NADP in there, you can see like the document has the username that we pass, and then the password which is hashed, and then ID. That's what any in EDV will provide this kind of lock, your primary key. So please don't use this in production just a year. And just to demonstrate this, like I, normally you don't return back the password and username to the, back to the response. So this is our first API. Now, let's test our login. Now the way we do this is M. Ok, I already kind of tested here. So again, it's a post request Passing the login path slash users slash login. Same thing by passing the request in the body of the post requests and in the JSON format. And then I'm going to say Joe with 1-2-3-4-5. Remember we created the username and password. 123456. 
And see like the response here for, for U1 which is unauthorized, so your Unauthorized because the password in a match. So here like this is our responses back username and password could not be found. Now, if I change this to six Bank here, we will get back the 200th request response and then kind of the ok. saying true. And then the most important part, which is our token, our JWT token. And if we take this token now, what we will use it, we will use it to test our last API, which is called the Czech token. And again, this, this, this method, this API basically it's a GET request. So it accepts, this is the path for it and this one is the query parameter. So here, the query parameter, if you recall from the code, is called token. So here that's like I have some sample token here. See, since like the soak in, I think it's all like I use it way before. So it's not valid. So this is what I returned back as four over four or one. And let's make sure we decode this. So if I go to I0 here, go to the debugger. And yes, so I think this is our newest JWT, but I wanted to copy this one and instead, I go back here, paste it. Yes, he like the expiration date is to all then probably even the password is wrong for this one. Now going back to two postmen, again, I'll take the JWT here. I paste there in the query parameter. And now you can see like we have 200 response, ok is true. And the username as Joe, which is kind of the record in the, in the NADP database. So in case you don't want to use postman, you would like to use Carlin said, I'll leave also likely curl request in the end report solve. But here for example, from postmen, if you click on the code section, you can copy the kernel requests here. And then if we go to the command line, paste this, we get the same result, which is here. And as I mentioned, I'll leave this in the repo for you case. You prefer this just copy paste that has changed the, for example, the parameters that you're passing, in this case, it okay. So I hope this was helpful and please let me know if you have any question or if I have missed anything. Thank you for your time. 11. Use HTTP Authorization request header to pass token: Now let's modify this end point to use the HTTP authentication framework. And now the way it works is instead of passing the token and through the query, we pass it using the, the header, the request header with the format for it is bearer and then the token, which is a better pattern definitely to use. So let's do that. We define the token parameter here. And using the request DO headers and then the authorization header. And since there is two keywords, you can just split it by the space and take the second element. That's best there as to the service, check token service. Now we can run the server. Now. It says they're using Postman. It's run it as it is without the header. Now it's failing because we are not passing the token in the header. We're still using the query parameter. So let's change this. Let's first remove the query parameter. And then we move to the header section. And then here we can just pad type the authorization key is take the token, face it with the mirror scheme. Now the send request. And see, now it's success since we are passing the token successfully. Now if we change with this token just to make it invalid by deleting. Now it's saying an error because we changed the token. 12. Setup MongoDB cloud service: Now we will switch to a different database in a set of an ADB, We will use MongoDB. 
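Before moving on to MongoDB, here is a sketch of lesson 11's change: reading the token from the Authorization request header instead of the query string. It is shown as the body of the controller's checkToken handler, where userService is the injected service; the exact code in the repository differs.

```js
// Updated checkToken handler inside the controller factory (sketch)
checkToken(req, res, next) {
  // "Authorization: Bearer <token>" -> split on the space, keep the token part.
  const authHeader = req.headers.authorization || '';
  const token = authHeader.split(' ')[1];

  userService.checkToken(token, (err, username) => {
    if (err) return res.status(400).json({ ok: false, error: err.message });
    res.status(200).json({ ok: true, username });
  });
},
```

From the command line, the equivalent of the Postman request would be something like `curl -H "Authorization: Bearer <JWT>" http://localhost:3000/user/checkToken` (the route path here is illustrative).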
Mongodb is a general purpose document based database that you use wildly by different companies. I used it myself to handle millions of records in scales really well. And it has a format like the document format similar to JSON, which is called BSON Binary JSON. Now, in a set of us managing the nodes in solid locally, I will use a cloud-based service of R by MongoDB called MongoDB Atlas. And that will make it much easier and more realistic. There's a free tier that we will use. I think it's up to one gig. And so go ahead and sign up for that for free so that we can set up the cluster and the next. Ok, so now after signing up for the MongoDB Atlas, we are, we see this dashboard. Now in the first thing is the projects. So according to this case, we have project 0. And the next thing we need is to create cluster. The cluster will have the notes like the MongoDB servers, where each node will have the MongoDB instance. I'm normally it has a three nodes as a replica set. So let's go ahead and click on the building cluster to create our first cluster here. So here we need to choose between our type of cluster will go with the free one, the shared clusters. So click on the create cluster. Now here we need to decide where, which cloud provider and region we need to provision this cluster. Now it's support is the friend cloud providers. For me, I will go with the AWS and as far as the origin, the region, I'll go with the Oregon in here, like you can see the different cluster type information like the ramen and et cetera. So now we can create a cluster. It will take a few minutes until it's provisioned the cluster for us. Now, if you look here and the current dashboard for the clusters, the current cluster name is cluster 0. Can see it has this cluster tier information. The region is in AWS and the type which is like a three, a replica set that has the three nodes. And this is all within the free tier that we created. Now our cluster provisioned successfully. This is the updated dashboard. It has charts about E operations, number of connections, ie, the size on the documents. This is more detail. The viewable in the cluster are the nodes that we have, the three nodes available. Next thing, now, we need to enable access to this cluster from outside through our application. In order to do that, we need to create a username and password. So go ahead and click on the button. Here we will create a new database user. The type here we'll choose the password. Now for the, for the password authentication, just enter the, the username and password. And then from the dropdown, we need to choose it privileges. So we have different choices. Now, probably a better choice is to create a read to a specific database. For here I'll just go with the read and write to any database. For simplicity, I am not going to create one since I already have this username password created earlier. Here we can see even if we want to create custom roles, the next step to enable the axis is to whitelist your IP address. You can whitelist specific IP address, may creating the ad current IP address. But for our use case, we enable any IP address to be connected to this cluster from outside. In case we want to deploy this to the cloud. So in order to do that, we need to create the kind of the general any IP address, which is 0 dot 000. This will enable any IP address to connect to this cluster from outside. You can even make it luck temporarily to be deleted afterwards. Here we can create that. Again. 
I already created this waitlisted, this IP address. So now we should be able to connect our application to this cluster. 13. Setup Mongoose to connect to MongoDB database: So now we have our Mongo DB cluster up and running. It's time to connect to the MongoDB server and create our database. In order to do that, we will use an npm package called Mongoose that's provided as syntactic layer between our application and the MongoDB. It's basically a document object mapper that allow us to run the MongoDB commands using the object oriented structure of our application. Here is the Mongo's website we'll be using. So even though we are going to use a completely different database, we're going to use a completely different MPM package to connect to it. We, the only place to make that code changes is going to be in the DB module. Because the way we structure the current code, so it's really maintainable and it's just a dependency that will inject into the service module. So let's comment out that code and this install the Mongoose MPM package. Now, the first thing we need to do is to acquire the Mongoose module and to connect to our MongoDB. So we need to go back to our cluster that we created earlier. And then click on the connect button here. What do we need to do here is to connect our application to the cluster. So we have a couple of choices here. But what we need to do here is to click on the connect your application. Since we have a NodeJS application, now there are different types of Sir driver that it's provides. We will go with the NodeJS since that's the framework we are using. And what do we need? We just need to copy the connection string. Now Mongoose has this method called Connect, where we are going to pass. So we will use the mongoose connect method and we will pass the connection string. Here, we will just change the username and password to match wherever we created. Then for the database that we will be creating, recall it off db. So to create this database for us since it doesn't exist yet. The second parameter is an object where we set the ease in URL and parser to true. Next, we will need to use it to create our model. Mongoose also provide another method that's called model. Now it accepts the first parameter is the name of our model, which will be user. The second parameter will be the schema is just an object that has the name of the field and then the type of that field. So like similar to any database, you need to provide the name of the field that you will be using for our documents, which is going to use her name, in this case, the type for it is string. The second field is going to be the password. And the type. It's also string. Next, we'll create a db object to keep it the same as before so that we don't break the dependency from the service layer. So it has two methods as before, one for inserting the insert promise and one for finding the document, which is find one promise. So the first method, the answer promise, as a function that accepts two parameters, the username and password. Here what we will need to do is to create a new record using the username and password at passed towards from the service. Now in order to do this, first, we will need to create a new object from the user model. So we create the new user document and pass the username and password to the user model. So far, this will not save the, the document to the database. In order to save them, we would need to call the save method of the model, which is the user model. 
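A condensed sketch of the Mongoose-based db module being described (including the findOnePromise method the transcript adds next); the connection string, database name, and credentials are placeholders you would replace with your own Atlas values:

```js
// src/db/index.js, MongoDB version (sketch)
const mongoose = require('mongoose');

// Placeholder Atlas connection string: substitute your own user, password and cluster.
mongoose.connect(
  'mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/authdb',
  { useNewUrlParser: true }
);

// Model: the collection's field names and types, as in the transcript.
const User = mongoose.model('User', new mongoose.Schema({
  username: String,
  password: String,
}));

// Same interface as the NeDB module, so the service layer stays unchanged.
const db = {
  insertPromise(doc) {
    return new User(doc).save();       // save() returns a promise
  },
  findOnePromise(query) {
    return User.findOne(query).exec(); // exec() returns a promise
  },
};

module.exports = db;
```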
And this user, like the save method, returns a promise. And that's what we expecting here. That said I should save our document. Next. Let's create the define one promise. Which again, it's a function that accepts my user name parameter. And it will return the document if it found that the database. So let's first I want you to take a look at the Mongoose documentation here. It has the friend guides for querying for the schemas, for inserting for validation. So we will use the fine one method to to return back the document that we are searching for. Okay, so again, we will use the user model, which provide us with defined one method that we'll pass the parameter, the username so that we can find the document by the username and then call the exec to run the query. And again, this will also return a promise, so that's should not break our code. Finally, we will need two exports, our DB object. Now this is our DB module. Let's test it. Let's run our server to test if everything works as before. Running the dev environment. So this should connect. Now let's test our register endpoint. Passing the username password would look like document created correctly. Now for the login endpoint, nice, we also receiving the JWT back. Now let's confirm that the data has been created in our MongoDB. So let's click on the collections. Collections are like tables in relational databases. Will take some time to load. Nice. Now we see our old DB has been created and the users collection that has two documents. Maybe you're asked What we only created. Register one user because I already have like a test user here. So that's where you see two documents has been created. Now let's test the last endpoint, the check data, BUT endpoint, so let's copy this JWT and paste it here in the authorization header. Send the request. Okay, it works correctly. Now that's invalidate this data VT by removing part of it. Send the request, nice, it's failing. So we successfully use MongoDB with our application. 14. Setting Security HTTP Response Headers with Hemlet: Express is a very good framework for building RESTful API server or any web service. However, it does not include all the security best practices out of the box. And that's why we need to configure that ourselves. One of the safety measures we can take is to set certain security-related HTTP response headers. There are a number of security related headers that can be returned in the HTTP responses to tell the browser to act in a specific way. However, some of these headers are intended to be used with HTML responses. And search may provide little or no security benefits on our API server, which does not return HTML by returns JSON data. So let's take a look at some of these headers that we have. So if we go to our local host and make a request to the login API, passing the username and password, we get the JWT token. Now I'm using postmen. And if we go, if we look down to the headers section, this section, and the bottom one is the response and the headers and the response corresponds to the HTTP response headers. Now, postmen has excellent features which is giving you kind of explanation about each of these headers. For example, the X power by, if you hover over the hint here, tell me like what does this mean? Which is here saying the technology, for example, ASP.net, PHP, J boss. In this case the value is express, which is what we are using. Content type. 
Then there is Content-Type, which shows the media type of the response, in this case JSON; Content-Length; an ETag for the resource version; the Date of the request; and the Connection header. So we have around six response headers.

Now, the first one, X-Powered-By, is telling anyone reading the response which server technology or tech stack we are using, which is Express. Malicious users can use this information to look for known exploits against that stack, which could introduce a weakness into our server. So the first thing is to hide this header. Express provides a way to do this itself, but what I will do instead is use a package called Helmet. The Helmet module provides middleware to set security-related headers on our HTTP responses. Helmet sets HTTP headers to reasonable and secure defaults, which we can also extend and customize as needed.

Let's see how we can use Helmet. Go to the terminal — for example, here in VS Code I will just create a new terminal — and run npm install -S helmet. This installs the Helmet module. The second thing is to require it in our server module. Since Helmet is a middleware, we can use it like any of the other middlewares we are using right now. Out of the box, as I mentioned, it provides secure defaults, so let's see how that works: we say server.use(helmet()) and save. Back in Postman, send the request again. You can see we now get a lot more HTTP response headers back, and if we look at the body, everything still looks good. Helmet by default gave us a set of response headers that are considered secure defaults, and you can see it also removed X-Powered-By, which used to be set to Express.

This is great, but since we are building an API server and not returning HTML, not all of these headers are useful to us. Let's see how we can configure Helmet to set only certain security-related headers that make sense for our API server — and that can also be used on any other web server, whether it serves a UI or not. So let's remove the default behavior. Helmet consists of around eleven modules that we can use individually, and that's what we're going to do.

First, we call the hidePoweredBy middleware: server.use(helmet.hidePoweredBy()). Call the API from Postman again, and the X-Powered-By header has disappeared, because this middleware took care of it.

Next, to prevent browsers from performing MIME sniffing (media type sniffing) and inappropriately interpreting responses as HTML, we can set the X-Content-Type-Options header to nosniff. In Helmet this is really simple: server.use(helmet.noSniff()). Save it, go back to Postman, and voilà, the header is now returned with that value.

Next, let's see how we can protect against clickjacking attacks, including the drag-and-drop style. One option is to set a Content Security Policy. In Helmet we just add server.use(helmet.contentSecurityPolicy()), save, go back to Postman, send the request, and we can see the new header has been added. We can configure it further by setting specific directives — for example, a defaultSrc directive and a frameAncestors directive, each set to the sources we want to allow.
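As a rough sketch of how these individual Helmet middlewares could be wired up — assuming the Express app is stored in a variable called server, as it is in this course, and using placeholder directive values rather than the exact sources from the video — the configuration might look like this:

```javascript
// Sketch of the per-header Helmet configuration described above.
const express = require('express');
const helmet = require('helmet');

const server = express();

// Hide the X-Powered-By header so responses no longer advertise Express.
server.use(helmet.hidePoweredBy());

// X-Content-Type-Options: nosniff stops browsers from MIME-sniffing responses.
server.use(helmet.noSniff());

// Content-Security-Policy; the directive values below are placeholders.
server.use(
  helmet.contentSecurityPolicy({
    directives: {
      defaultSrc: ["'self'"],
      frameAncestors: ["'none'"],
    },
  })
);

// X-Frame-Options: DENY (the frameguard option covered next in the lesson).
server.use(helmet.frameguard({ action: 'deny' }));

server.listen(3000);
```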
After saving this and sending the request again, you can see the header is now configured with those specific directives.

Also, to help protect against clickjacking, we can set the X-Frame-Options header to DENY. In Helmet we use the frameguard middleware with the action set to deny: server.use(helmet.frameguard({ action: 'deny' })). Save it, go back to Postman, and send the request again. Great — we can see the header has now been set correctly to DENY.

So these are some of the basic security headers we can set using Helmet. Now let's try another endpoint just to make sure everything still works as before — for example, the check-token endpoint. Send the request; it says the JWT is expired. So we copy a fresh JWT from the login response, replace it in the Authorization header, and send again. The response is correct, and if we look at the headers, the same security headers we set before are returned. I suggest you go to helmetjs.github.io and take a look at the documentation to see the other middlewares that are available and the details of each of these headers. The great part is that you can go from there to the documentation about the HTTP headers themselves, to get a deeper understanding of these security headers.

15. What is a docker container?: Okay, so now we have our authentication service up and running, but we have tested it only locally. Can we just run it on a different machine, or even in a different cloud computing environment? This is what we will discuss now, using containers.

So what is a container? That's the first question. We can imagine a container as similar to a shipping container: it doesn't matter what is inside it or what boxes it holds, you can ship it anywhere — by train, by ship, by truck, it doesn't matter — because it is built in a standard way with a standard locking system. Software containers are similar. We can define a container as a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. That gives us portability, because every container holds all of the application's dependencies and will run as expected regardless of the environment it runs in. Second, containers run almost as fast as running the native application directly on the operating system; that performance is what sets them apart from something like virtual machines, because a container doesn't take the memory and CPU that a virtual machine does. Also, a program running inside a container sees no other programs running on the machine, so with containers you can run a well-defined, isolated process on the operating system.

With all of these advantages that we get from containers, adopting them is a step forward for us. There are many tools we can use to create containers, but the most popular of them is Docker, and this is what we are going to use in this course. To dockerize our authentication service, we first need to create a Docker image of the service and then run a container instance from that Docker image. To do this, we first create a Dockerfile. A Dockerfile is like a recipe: it defines the container image for our application. We then build this Dockerfile using the docker build command, which creates a Docker image for us.
The Docker image is the actual package that contains the application source code with all the necessary dependencies, and it describes the command that should be used to run the application. Once we have the Docker image, we run it using docker run, which starts a container instance from that image. This is what we will implement next.

16. Dockerize our Authentication service: The first step is to go to docker.com and install Docker for your OS. Go to Get Started and download it for Windows or Mac — I'm using macOS. After you download and install it, you can run docker -v in a terminal and you should see the version.

Now it's time to write the Dockerfile, so let's create a new file at the root of the project called Dockerfile, with a capital D. I also suggest installing an extension that supports Dockerfiles in your IDE; for example, I installed one for VS Code.

Let's go through the contents of our Dockerfile. The first line is FROM node, then a version number, then -stretch-slim. This is the base image we want to build on top of; we are saying: build on top of an image that has Node version 12.18 in it, where stretch-slim is a slim version of the Debian Linux system. This way we don't have to care about how Node is set up, because we just requested a Linux image with Node 12 inside it.

Next is WORKDIR /usr/src/app, which creates this directory and changes into it. Next is the COPY command, where we copy package.json and package-lock.json from our source code into the container's working directory. Then the RUN command, where we run npm install --production. This installs all of the dependencies in package.json, excluding any dev dependencies — that's what the --production flag does for us — which keeps the size of the image, and therefore of the container, smaller. Next we copy everything else in the source code of the application, except node_modules, because I added node_modules to the .dockerignore file; any path listed there is excluded from the COPY command.

Next is ENV, the environment variable, where I set PORT to 8080. This environment variable is read by Express in server.js; when it is set, the server listens on that port. Next I use the EXPOSE command with 8080, which says that the container will listen on port 8080 for TCP connections. Finally, the CMD command describes what will be executed when the container runs our application: npm start. npm start is defined as a task in the npm scripts to run server.js, which starts our Express server.

Okay, it's time to build this Dockerfile and generate the Docker image. We run docker build with a dot for the build context (where the Dockerfile is) and tag it with -t, passing a name; it's good practice to use your username, a slash, and then the name of the Docker image. It takes a few seconds to finish, and you can see that every command in the Dockerfile corresponds to a layer in the build process. The final layer is the image itself, and this is the ID of the image.
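Putting those instructions together, a Dockerfile matching this description might look roughly as follows; the Node tag and working directory follow the narration, while the image name used later for tagging is a placeholder:

```dockerfile
# Sketch of the Dockerfile described above (Node 12.18 on Debian stretch-slim).
FROM node:12.18-stretch-slim

# Create the working directory inside the image and change into it.
WORKDIR /usr/src/app

# Copy only the manifests first so the dependency install is cached as its own layer.
COPY package.json package-lock.json ./

# Install runtime dependencies only; dev dependencies are skipped.
RUN npm install --production

# Copy the rest of the source code (node_modules is excluded via .dockerignore).
COPY . .

# Express reads PORT in server.js; the container listens on 8080 for TCP traffic.
ENV PORT=8080
EXPOSE 8080

# npm start is the npm script that runs server.js.
CMD ["npm", "start"]
```

From the project root, it could then be built with something like docker build . -t <your-username>/auth-service, where the image name is just an example.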
We can verify that by running the docker images command: the first image listed has the same ID and the same name as the Docker image we just created. Now, to create a container instance from that image, we run docker run -p 3000:8080 — mapping port 3000 on our machine to port 8080 in the container — and then pass the name (the tag) of the image we just created. Hit Enter, and our Docker container is up and running.

Let's go to Postman and verify the check-token endpoint. It's working, but the JWT fails because it's expired. If we test the login, it's successful and this is the JWT we get back for that login. If we paste this new JWT into the Authorization header and send the request, it works correctly. With this, we are able to run the application from a Docker container and test it.

17. Google Cloud Platform (GCP): Cloud computing is a big deal in the world of technology. It's also a big deal for those who are not in technology: cloud computing, machine learning, and AI will affect the lives of people far beyond the world of technology. There are many cloud providers out there, including Google, Amazon, Microsoft, and more. Although each provides many similar products, the implementation details of how these products work tend to vary. Google Cloud Platform, often abbreviated as GCP, is a collection of products that allows the world to use some of Google's internal infrastructure. This collection includes virtual machines via Google Compute Engine, object storage via Google Cloud Storage, and APIs to some of Google's built-in technology such as Bigtable, Kubernetes, and Cloud Datastore.

Before we can start using any GCP services, we first need to sign up for an account. If you already have a Google account, such as a Gmail account, you can use that to log in, but you still need to sign up for a Cloud account. So the first thing we need to do is go to cloud.google.com, and you'll see this page. The first step is to sign up for a free account, so click on Get started for free. If you are eligible for the free trial, it gives you $300 to be spent within the next 90 days, as of the recording of this video. Select your country and agree to the terms. This $300 should be more than enough to explore everything in this course; in addition, some products on GCP have a free tier of usage. Clicking Continue will ask you to enter some credit card information, which you can go ahead and do.

After you have signed up, you are automatically taken to the Cloud console, and a new project has been created automatically for you. You can think of a project as a container for your work, where all the resources in a single project are isolated from those in all other projects. You can click on the projects drop-down list; what I will do here is create a new project and call it NodeJS JWT Auth. You can select a location; I'll just keep the default.

Next, if you look at the categories of services offered by GCP, navigate to the Compute section and go to a service called Cloud Run. Cloud Run is a service that allows you to develop and deploy highly scalable containerized applications on a fully managed serverless platform, which is ideal for our service, since we already made a Docker container out of it. So click on Cloud Run.
Make sure to switch to the right project here — the one we created earlier. The first thing is to enable the Cloud Run API, so we wait here for the API to be enabled. Every service needs to be enabled before it can be used; we will do the same when we use the Container Registry service. It may take some time until the service is enabled.

18. Container Registry (GCP): Okay, so now we are in the Cloud console. To manage your resources you can use the Cloud console, but another way to manage Google Cloud resources and services is the Cloud SDK, a suite of tools and libraries for creating and managing resources. In general, anything you can do in the Cloud console you can also do with the Cloud SDK. The primary CLI tool is called gcloud, which provides commands and scripts for automation.

To use the gcloud tool we can use Cloud Shell, an interactive shell environment that lets you manage the GCP resources for your project; it comes with the gcloud SDK pre-installed. If you open it and click Continue, you can type gcloud version, for example, and you'll see it also has Docker already installed. There is another option as well: to install the Cloud SDK locally, go to cloud.google.com/sdk/docs/install and follow the instructions for your platform. The instructions are pretty straightforward — just follow them and install whichever tools you like. When you are done, you should see a similar result to this.

After you have installed everything, you have to tell the SDK who you are by logging in to your account. To do this, run gcloud auth login. Google made this really easy: it connects the terminal to your browser so you can log in and click Allow.

The next step is to enable the Container Registry service. Container Registry is a private registry that runs on Google Cloud Platform and allows us to host our Docker images there. So the first step is to enable it if it is not already enabled for your account; you can see we don't have any repositories yet. Next, we want to push our local Docker image to Container Registry so that we can use it from Cloud Run. Before we can push or pull images, we must configure Docker to use the gcloud CLI to authenticate requests to the Container Registry service. To do this, we run gcloud auth configure-docker in the terminal, which updates our Docker configuration.

Next we need to tag the image with the registry name. Here is our Docker image, so we run docker tag with our image name, and then specify the target, which starts with gcr.io — that's the registry host name, which stores the image in the United States by default — then the project ID, which you can take from the console here, then a name for the image, which we can call my-app, and optionally a version tag, such as v1. We can see the newly tagged image has been created for us. Next, we need to push it, so we run docker push with the gcr.io image name we just tagged.
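As a concrete sketch of those steps — assuming a local image called <your-username>/auth-service and a hypothetical project ID of my-gcp-project-id — the tag-and-push commands might look like this:

```bash
# Let Docker authenticate to Container Registry through the gcloud CLI.
gcloud auth configure-docker

# Tag the local image with the gcr.io registry host, the project ID, a name, and a version.
docker tag <your-username>/auth-service gcr.io/my-gcp-project-id/my-app:v1

# Push the tagged image to Container Registry.
docker push gcr.io/my-gcp-project-id/my-app:v1
```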
The docker push command completes successfully, and the new Docker image has been pushed to Container Registry. If we refresh, we can see our Docker image has been uploaded there.

19. Cloud Run (GCP): Now, from the Cloud console, we can search for Cloud Run and select it. Let's talk about what Cloud Run is: Cloud Run is a service that allows you to develop and deploy highly scalable containerized applications on a fully managed serverless platform. Since we now have our Docker image pushed to Container Registry, it's time to deploy it here and access the service publicly.

The first thing is to make sure the service is enabled; then we can click on Create Service. Here are the service settings. First, select the region that's closest to you — I will go with a US region — then pick a name for the service; I'll go with nodejs-jwt-auth-service. Next, we look up the container image that we just pushed to Container Registry: if you click Select here, you can browse the images we have, and this is the tag we pushed earlier. After selecting it, click Next.

In the next settings, we specify how the service is invoked. We have different options: under Ingress we can make it accessible to the public, or we can add a trigger so it is invoked by specific events. For our use case we will allow all traffic without any triggers, so choose the first option, Allow all traffic, for Ingress, and for Authentication choose Allow unauthenticated invocations, then create the service. The service is now being provisioned based on our settings, so give it a minute or so until it finishes.

Okay, our service has now been provisioned successfully. If you look on the right side, the Container tab shows information about the image we used for this deployment: the image URL is the image on Container Registry, along with the port, the service account, the CPU and memory allocated, the concurrency, and so on. At the top you see the name of the service, which is nodejs-jwt-auth-service, the region, and the URL, which is the public URL we are going to use to test our service from Postman.

Now let's go through some of the tabs available on the Cloud Run dashboard. The first is Metrics, which gives you stats such as the request count and the request latencies; you can adjust the time window here, and since we haven't made any requests yet, there is no data. Later, when we make requests, we should see the counts update here. Revisions shows the revisions of the deployments we make — we have only one. Logs, as the name suggests, displays the logs produced by the service. Triggers lets us add triggers if we want to, but since this service is publicly accessible we don't have any for now. Details again shows the region associated with the deployment and the public URL. YAML is a file view of the configuration and settings we made for the service. And lastly there is the Permissions tab.
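For reference, the same deployment could also be made from the gcloud CLI instead of the console; a rough equivalent of the settings chosen above — with the service name, image tag, and region as assumed placeholder values from this walkthrough — would be:

```bash
# Deploy the pushed image to Cloud Run as a publicly invokable, fully managed service.
gcloud run deploy nodejs-jwt-auth-service \
  --image gcr.io/my-gcp-project-id/my-app:v1 \
  --platform managed \
  --region us-east1 \
  --allow-unauthenticated
```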
20. Test the new service (Postman): Next is to test the service. Now that we have a publicly accessible service, we can just copy its URL from here, and I will use Postman as before. Go to Postman and paste the URL — as you can see, this is the URL of the deployment we just made — and start with the first API, which is register, to create a new user. I pass the body, since it's a POST request, with the same headers as before, and send. The user has been created successfully and the password has been hashed too, so that's the expected response.

Next, we will try to invoke the login endpoint with that username and password. I go to the same URL with /login appended, then in the body I enter the newly created user and its password. The headers are the same ones we used locally. If I click Send, the response is successful, and this is the JWT token. I copy it and try to use it with the last endpoint we have: in the headers section, for the Authorization HTTP header, I just replace the old token, paste the new one, and send. Nice — we get the same username back.

Now let's check the failure case. For login, for example, we just change the password to something wrong. Because we used the wrong password, we get a failure response back. So it seems our endpoints are working as expected with the newly deployed service.

Now, if we go back to the Cloud Run dashboard, go to Metrics, and click Refresh, you can see some updated data about the requests we just made: here are the latencies, and the instance count this time reflects the number of requests we made. If we go to the Logs section, you can see it reflects the requests we just made to the service; these are basically the console.log statements I have in the code base, displayed here. This Logs tab can be really helpful for debugging errors or unexpected results that happen in production.
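If you prefer the command line over Postman, the same smoke test could be done with curl; the /register and /login paths and the username/password fields follow the earlier lessons, while the service URL, the credentials, and the check-token path are placeholders:

```bash
# Placeholder: replace with the URL shown on the Cloud Run service page.
SERVICE_URL="https://nodejs-jwt-auth-service-xxxxxxxx-uc.a.run.app"

# Register a new user (username/password body as used earlier in the course).
curl -X POST "$SERVICE_URL/register" \
  -H "Content-Type: application/json" \
  -d '{"username": "testuser", "password": "secret123"}'

# Log in with the same credentials; the response should contain the JWT.
curl -X POST "$SERVICE_URL/login" \
  -H "Content-Type: application/json" \
  -d '{"username": "testuser", "password": "secret123"}'

# Call the protected check-token endpoint (path is a placeholder) with the JWT
# in the Authorization header, as done in Postman above.
curl "$SERVICE_URL/<check-token-path>" \
  -H "Authorization: <paste-the-jwt-here>"
```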