Python Serverless Apps with AWS and Terraform | Tony Truong | Skillshare


Python Serverless Apps with AWS and Terraform

Taught by Tony Truong

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

16 Lessons (1h 45m)
    • 1. Introduction

    • 2. Course progression and architecture

    • 3. Environment Setup

    • 4. Deploying your first backend application

    • 5. Serverless Framework YAML abstractions and common configs

    • 6. Creating RESTful API functions

    • 7. Initial Look at AWS Console Resources

    • 8. Creating VPCs and Subnets - Networking for Lambdas

    • 9. Terraform Apply and what gets created on AWS

    • 10. Update back end to use new infrastructure

    • 11. Creating an AWS Cognito User Pool in Terraform

    • 12. Bearer Authentication for the back end

    • 13. Look at AWS Resources for the User Pool and begin authentication code flow

    • 14. Complete Implicit Grant flow

    • 15. Chat App with Lambda Websockets

    • 16. Thanks






About This Class

This course takes you through building Serverless applications quickly and securely on Amazon Web Services by leveraging:

        - Terraform for infrastructure code

        - Serverless Framework (Python) for quickly deploying and structuring our code (REST & Websockets)

    We go beyond the defaults: we construct infrastructure as code, create our own VPCs and subnets, and tackle OAuth 2.0 authentication.

    By the end of the course, you should be able to comfortably understand how to create a REST and websocket application from scratch and leverage the boilerplate to start generating business value immediately.

Meet Your Teacher


Tony Truong





1. Introduction: Hi, everyone. Welcome to the fast start course for Serverless Python. I'll lay out what this course is about, but before that, a quick introduction. My name is Tony Truong and I work in the consulting industry, where I've been fortunate enough to experience many domains, ranging from a couple of major airlines to market research, media, and even the energy sector. I helped found five startups in the past year and a half, and serverless has been a quick and lean way to get started. As a startup, you need to move quickly but not compromise on scalability and testability as you grow. You also want to keep costs down early and not pay too much technical debt when you do expand. This course aims to get you bootstrapped very quickly and start contributing business value immediately by leveraging infrastructure as code, serverless technology, and AWS for the public cloud. Finally, don't worry about memorizing the code, because the course description and this intro video will link you to the boilerplate code. That's all the reusable components we build here, as well as some extras. I hope this course provides you with lots of value, and thank you for watching. 2. Course progression and architecture: Here is an architecture diagram of what you will be able to do. By the end of the course, you will be able to deploy your serverless code in custom VPCs, segment your services into public and private subnets, and understand how to use plugins to perform log aggregation or keep your Lambda functions warm. The reason we structure it this way is so that you can expand the number of resources on AWS. You need private subnets anyway for certain services like RDS, and it also adds extra layers of security if you need to be compliant. This course takes an iterative approach, and each section depends on the knowledge before it.
You don't have to know Python syntax, as it's fairly easy to understand and read, but the course does assume that you're capable enough to search for the little things and to install the tooling. Your focus is more on the bigger picture, and I would place the course somewhere between beginner and intermediate. So we start with the defaults, like this: when a serverless application gets deployed, it uses the default VPC and you have no isolation of your functions. This is fine for experimentation, but not really maintainable or useful when you have multiple developers sharing the same space. Also, as I mentioned earlier, certain resources require private subnets. This leads us to our next diagram, where we start writing the infrastructure as Terraform code. We isolate our Lambda functions into private subnets and have replication in more than one zone for high availability. Networking through NATs is done through the public subnets; that way our functions have outgoing egress access, but they're safe from ingress. Finally, we talk authentication. One of the most basic things in any application is security for your users. Strangely enough, I've seen this messed up plenty of times when people don't take the proper time to really understand OAuth flows. This should be enough to structure your code and get going quickly as you build out business value for your customers. Happy watching. 
3. Environment Setup: All right, to get started we'll create the my-app directory, and inside it we'll create a backend directory and an infrastructure directory. The infrastructure directory will house our Terraform code and the backend directory will house our serverless code. For the purposes of this course we'll keep the infrastructure with the backend code, but ideally you would have them separated. We'll check our Node version and see if there's something newer on the long term support line; we do that with --lts. There is, so we'll switch to 10.16.3. I already have it installed, so we can just switch to that version. We'll check our Serverless version, 1.52.2; we'll need that later, because we'll lock down the Serverless version to make sure that our deployments are consistent and the plugins that we use are consistent. We want to use a newer version of Python, in this case 3.7. I've already got it installed through conda, so we just switch to it, and then we'll initiate a pipenv environment using Python 3.7. Pipenv will make sure that any dependencies we install are installed under this environment, so it's an isolated Python 3.7 environment; that way we don't conflict with any of the global packages. We create our serverless template by running serverless create --template aws-python, which will initiate a few different files for us. These are just starter files; we'll heavily modify them in the next part of the video. 4. Deploying your first backend application: What we'll do now is install the serverless-python-requirements plugin. What this plugin does is use a Docker container. If you don't know what Docker is, it's just a lightweight VM that runs on the Linux kernel, but it can work on Windows and Mac as well. The plugin will look at the Pipfile or requirements.txt, install all the Python requirements and dependencies that we need, and package them up into a final deployable that we can push to AWS. If we look at the package.json, we can see that the serverless plugin is no more than a node module. So in the future we can just package it up, leave this in the dev dependencies, and use npm install, and it will achieve the same effect. Taking a quick peek at the Pipfile, there aren't many requirements, just Python, and in the handler.py all we're returning is a 200 success. We'll take a more detailed look at this later.
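The generated handler returns a 200 success. As a rough sketch (the exact body the template generates differs slightly), it looks something like this:

```python
import json


def hello(event, context):
    """Minimal Lambda handler: echo back a 200 success with a JSON body."""
    body = {"message": "Your function executed successfully!"}
    return {
        "statusCode": 200,
        "body": json.dumps(body),
    }
```

API Gateway passes `event` (request data) and `context` (runtime metadata) into the handler; here both are ignored.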
The more important file to look at here is the serverless.yml. This file defines everything that we need in the Serverless Framework: plugins, the provider such as AWS, GCP or Azure, and the frameworkVersion. I highly recommend locking down the framework version. If the framework version or any dependency version ever changes, you could get a breaking change at any moment, so it's just bad practice not to pin it. We'll plug in the version number that we got from the Serverless install earlier, 1.52.2, and we'll accept anything before 2. We'll also modify the Python version to match our system version, so 3.7 instead of 2.7. You can also see here that the function is defined as hello. We'll remove all the commenting and heavily modify this file later, in such a way that it can scale as your application gets a little bit larger and becomes a little more readable and maintainable. If you leave everything in the serverless.yml, the file can get rather unwieldy and large. We'll start by moving a custom.yml file outside. Instead of putting it inline, what we can do is say custom and use a special syntax to reference an external file. Let's call it resources/custom.yml, so we create a resources folder here and put the custom.yml in there. What the framework allows us to do when we reference these external files, especially under the custom tag, is put in configurations for plugins or our own custom variables. We can also build our own custom plugins as well; you'll see more of this later when we put in custom variables based on different environments. Our plugin requires some configurations to be set, so that's the space for it. You'll notice there's a dockerizePip element; we can set that to either true or non-linux.
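Putting the pieces just described together, a sketch of the serverless.yml and the external custom file might look like this (the service name and file paths are illustrative, not the course's exact values):

```yaml
# serverless.yml (sketch)
service: my-app-backend

# Lock the framework so a breaking change can't sneak in
frameworkVersion: ">=1.52.2 <2.0.0"

provider:
  name: aws
  runtime: python3.7

plugins:
  - serverless-python-requirements

# Reference an external file instead of inlining plugin config
custom: ${file(resources/custom.yml)}
```

```yaml
# resources/custom.yml (sketch)
pythonRequirements:
  dockerizePip: non-linux   # build dependencies in Docker only when not on Linux
```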
If you're running on Windows or Mac, set it to non-linux. The last thing we'll do here is deploy the application. I have a virtual MFA device set on my AWS account, and what I need to do is sign in using aws-mfa. I've set up a profile in my ~/.aws/credentials file called personal, where I have my access key ID and secret access key. Right now I'm using Authy on my smartphone; I put in the authentication code, and what that will do is hit AWS and give me a temporary token which I can use. It will expire in about a day or so. aws-mfa needs to be run: if you don't run this and you have a virtual MFA device set, the application won't deploy. So all we have to do is run serverless deploy. The default stage is set to dev, and in this case stage and environment have equivalent meanings, so let it run. At the moment, it will put everything in the default VPC. What we'll do later in the Terraform scripts is put more security best practices into play and find a way to isolate our application just a little bit more. It's more important if you have a shared environment with other developers, but in general it's good practice to isolate your applications and functions. We'll need those VPCs and subnets anyway if we're going to deploy a Postgres database or some other service that needs to sit behind a firewall. If you look carefully at the terminal, you can see that it's pulling in a Docker image called lambci/lambda:build-python3.7, so it's pulling down that Docker container to build our dependencies for us. Once it's deployed, you can see that it outputs our service name, our stage, and the region the application sits in, and we can execute a serverless command to invoke the function. At this point in time we don't have any REST endpoints yet, but we'll do that in the next section of the video. 5. 
Serverless Framework YAML abstractions and common configs: Let's restructure our folder a little bit so that we can add some customizations and scale it a little easier. We don't want the serverless.yml file to get too large. So let's start with setting the stage to explicitly be dev. Even though that's the default, we just want to make sure it's very clear. We'll set our region to ap-southeast-2, which is in Sydney, and we'll make the default memory size of the Lambda functions 128 megabytes. You can override this in each individual function, but this sets the minimum size. Then we'll set some common environment variables for each Lambda function. These will be injected into the Lambda functions themselves, and because we're in the provider section, it'll apply to every single Lambda function. You'll see me use some special syntax here for the region: we can self-reference previously defined tags such as the provider, and this allows us to inject dynamic variables. For the log level, we'll actually reference a custom element and, at the same time, reference the stage. So as the stage changes from dev to staging to production, it will pick up the right log level; we'll see that a little bit later when we look at the custom.yml file. Another important element to consider is tagging all of our AWS resources. If we do that under the provider tag, we can add in the tags element with name, stage and owner. That way, when we or any of our colleagues go into the AWS console, it's very clear and very visible which functions belong to whom and what stage they're in. I'm just gonna quickly fix up these colons here; they should be dots. What we'll do next is specify an external package.yml file, and this allows us to specify what goes into the final distributable.
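The provider section described in this lesson might be sketched roughly like this (the environment variable names, tag values, and log-level wiring are illustrative assumptions, not the course's exact config):

```yaml
# serverless.yml provider section (sketch)
provider:
  name: aws
  stage: ${self:custom.stage}       # resolved from custom.yml (falls back to dev)
  region: ap-southeast-2
  memorySize: 128                   # default for every function; override per function
  environment:
    REGION: ${self:provider.region} # self-reference a previously defined value
    LOG_LEVEL: ${self:custom.logLevel.${self:provider.stage}}
  tags:
    name: my-app
    stage: ${self:provider.stage}
    owner: tony
```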
There are certain files that we don't need for the Lambda function to actually work: the package.json, node modules, the READMEs, and the infrastructure code itself can all be removed, and then the external resources as well. Resources here are AWS resources; in this case, the only thing we're going to specify is the default API Gateway error responses. By default, API Gateway does not return the proper CORS headers for cross-origin requests, so we need to specify the allowed origins. In this case, we will also specify external files for one specific function. We'll go into more detail on how to refactor the function in the next part of the video. You'll notice that I've added the api-gateway-errors.yml, the package.yml, and now the hello.yml as placeholders. One quick thing I do need to do is change the stage environment variable. Earlier we switched that to custom.stage; what that does is look for the stage option on the command line, and if it's not there, it'll use dev as the default. As mentioned earlier, we'll add in the files we want to include and the paths we want to exclude. Here, IDE files, gitignores and READMEs get excluded, and we only want to include the hello directory, which contains our function. We'll need to do that for every single function that we specify; for now, it's just the one. What we'll paste in next is the default API Gateway response errors for 4XX and 5XX cross-origin requests. If we don't have this, the browser will return a not-allowed cross-origin request error. Lambdas on AWS suffer from a cold start problem when they haven't been executed in a while, before they've been allocated resources and actual servers by AWS, and API Gateway might return that error. Once the Lambda has been warmed up properly, we shouldn't get that error anymore. 6. 
Creating RESTful API functions: As we prepare for our bigger application, let's change the hello.yml file into a user profile; we'll call it profile.yml. Ultimately, what we want to set up is some kind of user login using Cognito and have a little bit more complexity in our application, just to demonstrate something more concrete. The previous deploy was a simple function that echoed something back, but we actually had no way to invoke it. What we're preparing for now is a RESTful API, and this is just one way of organizing it that I've found to be quite scalable and useful. I like to create one folder per function, or a folder for a group of functions, so we'll move the profile.yml under the profile folder and call the handler file main.py. You don't have to call it main.py, but I've found naming it this way rather useful. We also need to add in the __init__.py. This tells Python that it's a module, so that we can include it in other functions. You also need the __init__.py in order to execute it as a module on the command line, so we can run these locally. Let's rename the function to get_profile, and we'll reference the profile folder, the main.py file, and the method we're going to execute, which we'll also call main. We need to attach an event, API Gateway, to tell it how we execute it — basically the REST endpoint. So that's going to be an event, the type is http, and we should define the path. Let's call this profile, with the id in the path. The id is a URL parameter, and the curly braces act as a placeholder. The method we're gonna use is get, and we can enable CORS just by setting it to true. We want to make the id a mandatory parameter, and we do that with request parameters and paths: id should be set to true.
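The function definition just described might be sketched like this (the event shape follows the Serverless Framework's http event syntax; names are illustrative):

```yaml
# resources/profile.yml (sketch)
get_profile:
  handler: profile/main.main      # folder / file . method
  events:
    - http:
        path: profile/{id}        # {id} is a placeholder for the URL path parameter
        method: get
        cors: true
        request:
          parameters:
            paths:
              id: true            # make the id path parameter mandatory
```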
The reason we use this is because the URL looks a little bit nicer to read than setting a query parameter. So I'll modify the handler name and change the method to main. We'll put an empty placeholder in there for now, just an empty return, and we need a way to execute this locally. So we'll write: if __name__ equals main, execute the main method, and we'll put in empty arguments for now. The event and context arguments actually come from API Gateway, so when we execute it with curl, it will give us information about the arguments — things like the context ID, any headers, things like that. The body also sits in the event. So let's execute this on the command line to make sure it works. We'll run this in pipenv, and we're gonna run the Python command, which will execute Python 3.7 with profile.main — profile is the folder, main is the file we're executing — and it will run the if __name__ equals main part. Cool. The function as it is now won't actually return properly if we deployed it. What we need to do is return proper HTTP status codes and a proper JSON response message. We can also return an empty response with just a status code; that's perfectly valid. Since we're gonna be doing that fairly often, let's make an api folder, make it a module with an __init__.py, and call the file responses.py. There are some common headers that our Lambda functions need to return, so we just specify those here. You need to specify the Access-Control headers to enable CORS; without them, the browser would actually get an error. We import HTTPStatus, and we specify our content type as application/json as well. One way to return properly is to just return a status code, as mentioned earlier, but we still need to provide the headers, so let's go ahead and do that.
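A minimal sketch of these response helpers, assuming illustrative function names (the course's exact names may differ):

```python
import json
from http import HTTPStatus

# Common headers every response needs: CORS plus the JSON content type.
CORS_HEADERS = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Credentials": True,
    "Content-Type": "application/json",
}


def generate_empty_response(status_code):
    """Return a body-less response: just a status code and the CORS headers."""
    return {"statusCode": int(status_code), "headers": CORS_HEADERS}


def generate_response(status_code, body):
    """Serialize a Python object to JSON and attach it to a response."""
    response = generate_empty_response(status_code)
    response["body"] = json.dumps(body)
    return response


def ok_response(body):
    """Convenience shortcut: 200 OK with a JSON body."""
    return generate_response(HTTPStatus.OK, body)


def bad_request_response(body):
    """Convenience shortcut: 400 Bad Request with a JSON body."""
    return generate_response(HTTPStatus.BAD_REQUEST, body)
```

Each helper builds on the one before it, so the headers only have to be defined once.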
The generate empty response method just takes a status code; we can pass one in as an HTTPStatus, and we pass along the CORS headers. The other thing we can do is pass in a JSON body, so we make a generic method called generate response, which takes the body and turns it into a JSON string using json.dumps, and since that's done, we can reuse our generate empty response function to include the headers as well. The OK response is the exact same thing; it's a shortcut that takes a body but returns an HTTP OK, just as a convenience function. So I think you get the idea now; we reuse everything. Let's put in some common methods for errors using the generate response and generate empty response functions we defined before. These are just nice helper functions that I hope you find useful. Let's use what we just created in the profile function. The first thing we'll do is include our function: we import the ok_response function from api.responses, remove the print, and call our function, returning the body as just the hello message. It takes a Python object — all the key-value pairs, as you recall, will be json.dumps'd into a proper JSON string — saying hello world and the message. One last thing we'll need to do is include our new api folder and our profile function in the packages. Okay, so we're just about ready to deploy. What I like to do next is actually go through some error scenarios where something might not have gone right. The first thing is our Serverless version. Right now I'm using my system's default Node version, which is not the one we saw earlier. What you'll see is a framework version mismatch, because the Serverless Framework version that I have installed on this Node version is not the latest version on the latest Node. So here we'll do a serverless deploy, and you'll see that we get a framework error.
So my current Node version has an older framework installed; we'll switch our Node version to 10.16.3, and this Node version should have the proper Serverless version installed. We'll do another serverless deploy, and we get another error. So what's wrong this time? We've actually got two errors, but we'll address the first one: we're missing a config, and we need to add in the log level. The second error we'll address is due to MFA. Previously I mentioned that if we didn't sign in with MFA, and we have virtual MFA enabled on our account, the deploy wouldn't work. So we'll use our personal profile again; after signing in, we'll do a serverless deploy, and everything should be fine. We'll grab our URL, and you'll notice that there's an endpoint now that we can actually hit. Just run a curl on that, and we should get a hello world message. 7. Initial Look at AWS Console Resources: Let's take a look into the AWS console to see what actually gets uploaded into S3. The bucket gets automatically created using the app name that we defined earlier in the serverless.yml file. Generally I don't even look at this, but it's a good idea to check it in case you're not sure what actually gets uploaded, in case you're using a plugin that has issues, or you're not sure what's actually in there. The cloudformation-template JSON contains the resources and definitions that actually get provisioned, and we can take a look into CloudFormation itself. CloudFormation just defines all the different resources that we want to use, and it's usually done in very large JSON files. The nice thing about the framework is that it does all this for us, but by having the CloudFormation template here, you don't need to maintain state across machines. That means anybody who's running the source code can actually deploy, and it will take a look at this CloudFormation template. The only caveat is that the backend application name needs to be the same.
Otherwise, it can't find the CloudFormation template. After we take a look at the defaults here, we'll write some Terraform scripts so that they supplement our serverless code. It's good to establish a baseline, though, so we know what we're dealing with. All right, let's take a look at some of the networking. The first thing we'll do is go into VPCs. A VPC is a virtual private cloud: a logical separation that AWS resources go into. We've only got one in this case, which is the default one, and it's kind of open to everything at the moment. Then we'll take a look at the subnets as well. In this case, we've got three different subnets. A subnet is another way to section off the VPC, which has a larger list of IP addresses, and we break them down into three different networks. We'll do more of that later when we hit the Terraform scripts. There are no NAT gateways, and there's one Internet gateway so that there's outgoing access. These are the security groups; we can see that it's still the default VPC they're attached to. Then we'll take a look at API Gateway. API Gateway is the entry point; it sits between the Lambda functions and us when we actually invoke them through the terminal. We've got a profile function, and it takes one argument. I'll open that up in another tab and come back to it later. You can test the function here if you want to; it's just good to take a look, but we're not really gonna use anything in there. Then we go into gateway responses and double check that our 4XX and 5XX faults have the CORS header responses. In production, we'll change the asterisk to whatever domains we want to support. Here's our Lambda function. It's attached to an API Gateway and a security group, and the IAM roles give it access to write CloudWatch logs when we execute the function.
If you want to see what kind of output it has, any errors or things like that, we go into CloudWatch and hit Logs on the left-hand side. So here's our Lambda function. We'll hit the endpoint real quick to generate some logs, then hit the refresh button. From the time you actually invoke the function to when you actually get the logs there is a slight delay, so just refresh to see what's happening. You can see the request ID and how much we're billed for: it took 100 milliseconds to run at 128 megs. So that's it for the logging, and we'll improve upon this methodology in a little bit. 8. Creating VPCs and Subnets - Networking for Lambdas: Okay, cool. Let's take a look at what gets deployed today. The functions get invoked through API Gateway. They sit on a default VPC and a default public subnet. They can make outgoing calls because those go through a router and Internet gateway, which is attached to the default VPC, and all the logs go through CloudWatch. Where we want to end up is something like this: all calls still go through API Gateway; we'll create a new VPC and two private subnets, so the functions get replicated into both subnets. Logs still get pushed to CloudWatch, and all outgoing Internet connections go through NAT gateways, which sit in two public subnets and go to an Internet gateway. Now, the good thing about putting the Lambda functions in the private subnets is that if we have more AWS resources, such as a Postgres server, a MySQL server or something like that, they're actually required to sit inside private subnets. So for the Lambda functions to access those resources, they need to sit in the private subnets as well. On top of that, we have an increased level of isolation, so there's more security. One last thing to do is refactor our get profile function here: instead of returning an OK response with the static message of hello world, let's actually return the path parameter.
Let's remove this, and we also want to do some error checking, so we return an error response if there's an improper request. The event that API Gateway brings in has the path parameters in it, so we'll set that to a path params variable, and we return an error if we can't get the path parameters. If the path parameters are empty, we just return an invalid request. Then we extract the user ID — this is in the path params, so we'll just use a get as well — and similarly we'll do the error checking. Now we return a custom object that echoes back whatever ID was passed in, and we can do that with the OK response. We'll call it a user object, and if you recall, what it's gonna do is just json.dumps it in the OK response method from the previous video; we'll echo the ID. Cool. So now we need a way to actually execute this locally. We'll simulate an AWS API Gateway response — I mean, sorry, an event here. So we create an event and give it a path parameters element; we'll just call the value test-id or something, so when we run this locally we can test it as a smoke test. If we wanted to write unit tests, we could, using the mock library that's built in; we can write one of those later. What we want to do next is declare how to deploy the Lambda functions into these private subnets and VPCs that we saw earlier. The first thing we'll do is go to our serverless.yml file, and under provider we're gonna use the tag vpc. This is built into the framework, and in order to specify the VPC, we actually need to give it the VPC's security group ID. We'll get that from the Terraform output in a bit. The difference is in the subnets: we're going to have two, and we put in the subnet IDs — not the security group ID, just the subnet IDs for the subnets. That's a very important factor for the framework. Okay, finally, we're ready to move on to some actual Terraform files.
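The refactored handler just described might be sketched as follows; the `_response` helper here stands in for the api.responses module built earlier and is purely illustrative:

```python
import json


def _response(status_code, body=None):
    """Illustrative stand-in for the api.responses helpers built earlier."""
    result = {
        "statusCode": status_code,
        "headers": {
            "Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json",
        },
    }
    if body is not None:
        result["body"] = json.dumps(body)
    return result


def main(event, context):
    """Echo back the user id from the path, or a 400 on a bad request."""
    path_params = event.get("pathParameters")
    if not path_params:
        return _response(400, {"message": "invalid request"})
    user_id = path_params.get("id")
    if not user_id:
        return _response(400, {"message": "invalid request"})
    # Echo the id back as a user object.
    return _response(200, {"id": user_id})


if __name__ == "__main__":
    # Smoke test: simulate the event API Gateway would send.
    print(main({"pathParameters": {"id": "test-id"}}, None))
```

Running `python -m profile.main` inside pipenv triggers the `__main__` block as the local smoke test.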
The first thing we'll do is go to the infrastructure folder and put some placeholders in: we'll create a main.tf, an outputs.tf, and a variables.tf. This is kind of the standard practice. variables.tf contains the variables that you substitute into the Terraform scripts. main.tf is where we'll compose the modules and write the Terraform code; we can also split that into various different files, and Terraform is smart enough to work out the proper order. outputs.tf is what we want to display at the end, when we finish running the script. The first thing we'll do is lock down the Terraform version, as with everything. We can even specify an external state; when you run Terraform, it needs to know what AWS resources you've provisioned and how you provisioned them, so that next time it can do a diff. By default, it'll store that state on your local machine. The provider tag is the public cloud provider; Terraform supports many public cloud providers, such as GCP, AWS and Azure, and we can specify a version here. We can also specify an external credentials file, but we'll use whatever is built into our system. Next we'll specify a module; we'll call it the VPC module, kind of like an instance of what we're using, and in the source I'm gonna specify an external resource. This can be Git or another repository, and the module happens to be called vpc/aws. I'll lock down that version as well. We could build everything manually and by hand, as we saw earlier in the diagram, but this verified module will save us a ton of time. So I'm just gonna open up the Terraform Registry, where we can see where that module lives, just so you know what I'm talking about. I'll search it up real quick since I don't remember where it was: we go into the Terraform registry by HashiCorp, search for the module called vpc, and we're looking at AWS and verified modules.
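A sketch of the main.tf described so far; the pinned versions are illustrative assumptions, not the exact ones used in the course:

```hcl
# main.tf (sketch)
terraform {
  required_version = "~> 0.12"   # lock down the Terraform version
}

provider "aws" {
  region  = "ap-southeast-2"
  version = "~> 2.0"
  # No explicit credentials: use whatever is configured on the system
}

module "vpc" {
  # Verified module from the Terraform Registry
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 2.0"             # lock down the module version as well

  # Name the VPC from the project variable and the current workspace
  name = "${var.project}-${terraform.workspace}"
}
```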
So the module does a lot more than what we're using it for here, and it lists very clearly what inputs and outputs you can get from it. If your architecture is a little more complicated, this can help you with that. Okay, cool. The next thing we want to specify for the architecture is just the name. In this case we're going to reference a variable — we're gonna call it project — plus the workspace. The workspace is kind of like the stage or the environment, so if you wanted a dev, a staging, and a production environment, you can dynamically allocate those resources here and switch between different workspaces. I'm gonna open an external view of the variables file so, as I fill in the variables, you can see what I'm referring to. In order to refer to a variable, you just go var.project, and so we have to declare the variable project in here and give it a description. We could also give it a default value — sorry, I mean the default value — but we won't, so when we run Terraform it's gonna prompt us for what that variable is. The next thing I'm going to do is the CIDR range. It's a networking term, and what we want to do in our VPC is specify how many IP addresses we get and what the default IP range is. I'll put in some networking commentary in case the variables file gets large — I like to do a little bit of commenting in there. So we declare cidr and go with a reasonable default. In this case I've chosen the very popular 10.0.0.0/16. With /16, the most significant bits come first: an IPv4 address is four sets of eight bits, and we're gonna use the first 16 bits for networking, with the rest as the actual IP addresses available to us. That gives us sixty-something thousand addresses from a /16, I think — we can use an online calculator for that. So here I'm going to highlight the first 16 bits; that's what we're going to use for the network part.
And then the last 16 bits are for the IP addresses. So that whole range — that entire range — is for our VPC, and our subnets will take sections of the VPC's range. The availability zones are what we're going to specify next, and in the Sydney region there are three zones: a, b, and c. We won't use all three zones, but we'll specify a couple here. Let's go with a and c: ap-southeast-1a — sorry, 2a — and we'll also use 2c. We could also create another file called something.tfvars; those contain secrets and fill out these variables, but because we only have one dynamic variable, we won't make one of those files. The next thing to do is specify the private subnets. This is also another CIDR range; I'll explain it in a little bit. If you recall, there are gonna be two private subnets and two public subnets, and subnets are just another logical grouping of resources. The thing you do have to be careful of when you allocate these subnets is that they have a limited number of IP addresses, so you're either going to need to make more subnets, or you're gonna need to allocate the right number of IP addresses when you do your subnetting. Okay, so we declare the variable private_subnets with a reasonable default — each one is gonna be about 256 IP addresses. Remember, the first 16 bits are for the network, and we're going to split the rest. In this case I just go 10.0.1.0/24, and the second availability zone is gonna have the same number of IP addresses. We do the same for the public subnets but give them a slightly higher range: instead of 1 and 2, it's gonna be 101 and 102, so in case I want to make more private subnets later, I can go from 1 up to 100 — and the maximum number there is 255. This module also takes enable_nat_gateway; by default it'll create one NAT per subnet. There are other configurations in this module, such as one NAT gateway that all the subnets share.
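A sketch of the variables declared across this section — the values follow the narration, so treat them as illustrative:

```hcl
variable "project" {
  description = "Project name; Terraform prompts for this since there is no default"
}

# Networking: the first 16 bits are the network, the rest are host addresses
variable "cidr" {
  description = "CIDR range for the VPC"
  default     = "10.0.0.0/16"
}

variable "azs" {
  description = "Availability zones in the Sydney region"
  default     = ["ap-southeast-2a", "ap-southeast-2c"]
}

# Each /24 gives roughly 256 addresses; private subnets use the low range
variable "private_subnets" {
  default = ["10.0.1.0/24", "10.0.2.0/24"]
}

# Public subnets start at 101 so more private subnets can be added below
variable "public_subnets" {
  default = ["10.0.101.0/24", "10.0.102.0/24"]
}
```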
But I would prefer to have one NAT per subnet, for scalability reasons. This is a boolean variable, so we just set it to true — the default is true anyway, but I'd rather be explicit about it. The last thing we want to add in is some common tags, so when AWS creates these resources, we can see those tags. That way we know again who owns what, and what is automatically managed versus what was created manually. For the environment tag, I like to stick in the workspace, so when we switch workspaces or create new workspaces, it will deploy an entirely different set of resources for us. That way we don't have to duplicate our code for each environment. 9. Terraform Apply and what gets created on AWS: To initialize our Terraform project, we need to run terraform init. Here we get an error because I actually misspelled the vpc/aws module, as the error tells us, so we'll go back in and change it — it's vpc/aws — and keep in mind this module is called vpc; we'll need to remember that for the outputs. Now run terraform init, and what it will do is notice that we're using a module, go fetch it from the registry, and store it on our local machine. If you recall, we needed to fill out outputs.tf: in our serverless.yml we needed the VPC security group ID as one of the arguments, and the subnet IDs as well. To request data from AWS resources, we can use this data element and apply a filter on it. What we'll do is get the module's VPC ID and put it into the filter. So that's module.vpc — vpc was the name we were supposed to remember earlier — and .vpc_id is one of the output variables. To actually use it, we need to specify an output element. We can give it any kind of name, and the value will be what the data filter returns. Then the private subnet IDs; and here I was gonna output the AWS region as well, but we don't really need it.
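A sketch of the data filter and outputs this describes — the resource and output names are assumptions based on the narration:

```hcl
# Look up the security group belonging to the module's VPC
data "aws_security_group" "default" {
  filter {
    name   = "vpc-id"
    values = [module.vpc.vpc_id]  # "vpc" is the module name from main.tf
  }
}

output "vpc_security_group_id" {
  value = data.aws_security_group.default.id
}

output "private_subnet_ids" {
  value = module.vpc.private_subnets
}
```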
I'll go back and fix this later, but it doesn't really affect the Terraform code. Okay, the next thing I need to do is declare a new workspace, and we do that simply with terraform workspace new and the name of the workspace. In this case I just use the workspace as the environment, or the stage — those three terms all tend to get mixed up interchangeably. So: terraform workspace new dev. In order to see what resources will actually get made, we can just do a terraform plan. I'll call it my-app. It will prompt us for any variables that we didn't give a default value. Scrolling up here, we can see that all the pluses mean it's going to create all these new resources. There's a security group; some values are calculated after the resource is made, because we don't know what they are until AWS generates them. The module does a lot of work for us: it's going to create an Elastic IP for the NATs, it's going to do all the routing, it's gonna put in the internet gateway, and if you notice, it does all the tagging as well. Some of the elements have a timeout, so we can time out at five minutes if there's an issue — if a resource is taking way too long to create, there might be a problem. There are route tables, and Terraform is smart enough to create resources that depend on each other in the correct order. It's also a best practice to lay out all the resources in a linear fashion in your folder organization. All right: terraform apply, and we actually have to say yes. It's gonna ask us for the project name every time; like I said, we could use a .tfvars file so that we don't get asked, but we'd need to store that somewhere safe in case it holds sensitive information. Here it doesn't, so let's let that run, and we'll be back when it completes. 10. Update back end to use new infrastructure: All right, now that the Terraform scripts have run, we see the outputs here. So there's our filter on the VPC ID —
the VPC ID being that large number, vpc-0...8d7, and the ID that we got back — the one that starts with sg-, for security group. We'll copy that and put it into the serverless.yml file. Just to correct this from earlier, I'll remove the data element that I'm not using. Okay, just paste in the security group; that way Serverless knows to associate the Lambda functions with that VPC. I've tried putting the VPC ID in here in the past, but that doesn't work. Then I'll just go and fetch the subnet IDs — that is, the private subnet IDs; none of the Lambda functions go in the public ones. Okay. You'll notice I switched Node versions to make sure I'm using the latest one. I'm gonna do a serverless deploy now — actually, write the functions first and then do a proper deploy — and I'll let this run and come back. Okay, we're now back, and we have a different endpoint. What I've been doing is deleting the stack and re-creating it. So now we get the curl results, and our path parameter is an ID. I'm just gonna put in a placeholder ID called my-id; it doesn't really matter. This is the startup time that I was talking about: it took a while, but it echoes back the user object with the ID we're expecting. Let's put in another one just to populate the logs. Okay, so the second time was a lot faster. Let's open up the AWS console to see what actually got created. First thing, we'll go into VPCs; we should have a new one. There are two now in the default region — you'll notice the cursor is a little bit off because I've cropped the browser a bit, since it shows some sensitive information. Okay, so we've got the new VPC. We can see the CIDRs, the IPv4 range 10.0.0.0/16, and all of our tagging is there. Great. Next we'll take a look at the subnets; we should have four new ones: the two private and the two public, spread across 2a and 2c.
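The serverless.yml wiring from this step looks roughly like the following — the IDs are placeholders, not real outputs:

```yaml
provider:
  name: aws
  runtime: python3.7
  vpc:
    securityGroupIds:
      - sg-0123456789abcdef0     # the sg- value from the Terraform output
    subnetIds:                   # private subnet IDs only, never public
      - subnet-0123456789abcdef0
      - subnet-0fedcba987654321f
```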
And the VPC ID on the right-hand side is the new one. Let's look at the route tables to see where the traffic goes. Here we've got the two NATs — the two NATs sit in the public subnets. And here in the route tables, the routes for the private subnets go straight into the NATs, and the public ones go straight to the internet gateway — our new internet gateway for that VPC. Okay, so let's take a look at our Lambda function to make sure everything's okay with it: Python 3.7, the API Gateway resource. We can actually see the code from here — what it was packaged with — our environment variables, tags, and here's the important part: our new VPC ID and the two subnets it belongs to. Cool, everything seems in order. Now we have an isolated Lambda function that's replicated across availability zones, and it can access the external internet. Any resources that are required to be in the private subnets can be accessed by this Lambda function. 11. Creating an AWS Cognito User Pool in Terraform: So this is our application so far. Our web app, or curl, goes through API Gateway, which then invokes one of the Lambda functions sitting in both private subnets, inside the VPC. Unfortunately, right now it's public, so what we want to do is add a little bit of authentication. AWS provides a service called Cognito, and inside Cognito there's a user pool. When users register for our service, they'll create accounts in the user pool. The pool does things like register the user and do verification through email or phone number by sending an authorization code. So the way this works is: when the request goes to API Gateway, API Gateway will execute the authentication function first. It does that through JWTs, or JSON Web Tokens. If the authentication raises an error, it's an invalid request; if it returns true, then it'll actually execute the get-profile function.
Some alternatives to Cognito are Okta or Auth0, but since we're using Terraform and AWS here anyway, we'll just put it into our Terraform scripts. Pricing for Cognito is free — cheap early on, at least; we only pay for the users in our pool, nothing else. Okay, so here's our Terraform code — well, here's all of our code, really — and what we're gonna do is create cognito.tf in the infrastructure code. I'll just move over here; there's enough separation that it warrants its own file. It's quite a bit — it will take up the entire screen, really — and on the right side we'll have our variables, where we'll add some things. I'm gonna explain it line by line. I won't necessarily type it all out line by line, but I think it's good to have an understanding. Our first resource is called aws_cognito_user_pool; we'll just name it user_pool, and we need to give it a name. You'll recall I like to use Terraform workspaces for the different environments, so it's gonna be the project, the workspace, and then user-pool — a fairly standard format. The next thing we want is a count. It goes off a variable called enable_cognito_user_pool. What this does is handle the fact that we don't necessarily always want a user pool, and what I've done here is an inline if statement: if the user pool is enabled — the boolean is true — then we set the count to 1. That means for this resource we'll have one instance of it; if count is set to 0, we won't have a user pool. We'll use this count for a couple of things, and by default it will be false. There's one thing to be aware of: whenever we use the count attribute, it's not easy to refer to the resource or its properties when we need them for outputs and things. I'll show you a couple of functions later on for how we can address that issue. So let's continue. The next thing we need to set is the properties that we want to collect.
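Before moving on to the attributes, here's a sketch of the conditional resource just described — Terraform 0.12-style syntax, with names following the narration:

```hcl
variable "enable_cognito_user_pool" {
  default = false
}

resource "aws_cognito_user_pool" "user_pool" {
  # Inline if: create one instance when enabled, zero otherwise
  count = var.enable_cognito_user_pool ? 1 : 0

  name = "${var.project}-${terraform.workspace}-user-pool"
}
```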
In this case, that's the alias attributes and preferred_username. And we want to have Cognito verify the email for us, so we're gonna use the auto-verified attributes. If we were collecting phone numbers — which we could — Cognito would send an SMS with a verification code if we chose that method, but in this case we'll use email. We'll use another variable here, the verification subject. This will be part of our email template: when Cognito sends out the email, this is the actual subject. It's gonna be the project name, a dash, and the subject, which we'll add into the configuration. We haven't used this variable type before; it's called a map. It's a nested object, so we can group related values together. So: the variable cognito_config, set as a map, with defaults. The first property we're gonna add is the subject — say "Device verification code", just for demo purposes. Then we need the body as well, so add that in here. For Terraform it's the verification message; it's under the same config but a different property, a different key: the verification code. There's a special placeholder, four hashes inside curly braces — {####} — and Cognito will inject the verification number inside that placeholder. The last thing to do is tell it to use that template, and we want the template to use the verification type CODE. The final config for this one is the default email option, confirm with code. You can check the different types in the Terraform documentation and the AWS documentation. Okay, now we want to add a password policy. This is for users who sign up, and these are fairly self-explanatory: minimum length, and require all sorts of stuff — lowercase, numbers, symbols, and uppercase — to make a fairly strong password. There's an option to enable multi-factor authentication as well; we'll skip that for today — just look it up in the documentation. Then the schema for the email property: this says email is required, and the email has to have a minimum length of seven characters.
This training is called e mails required, and the email has to have minimum length of seven characters. So that's a fairly short email max of 32 close that it will add some common takes, just like we do it for other resources. That way, when we bring them up or delete them, we know who it belongs to. So application requires Ah, user pool client. The client identifies which application is using this pool. You could have more than one used the pool, but for our back end code here, the the user application will create one specifically for it todo with the resource and use their pool client user poll. Quiet and we only want to create this if we have ah cognito enabled. So we named it the same way and we'll create accounts for it as well. Copy that from the top. Okay, now we need the music pool I d. And like I mentioned before, it requires a special syntax because we have the count. So we do a joint. So we're joining nothing to the user pool and the estrous there means take everything from the user pool list because we have account it's a list, but because we only have one item in the user pool is going to join it with an empty item and we just take that default value so that ends up being just the user pool. So we're not going to generate a secret in order to use this user pool. Just said it to false. So clients can opt to generate a secret, and terra form will put a secret. But that means you have to pass that secret onto every call that you make to the cognito ah user pool, and we disable SRP off SRP, a secure, remote password. It's a way of authentication without sending the pass it over the wire in this case, will to set it to know it actually requires a lot of work to do that. Ah, So what we're gonna do is rely on https. So the TLS certificates through the browser to to handle the encryption for us. 
So when we authenticate, the password goes through the encrypted connection just like everything else. Next we set the allowed OAuth flows — the flows can be implicit, code, any of the OAuth2 flows. I called the variable allowed_oauth_scopes, but in hindsight the better variable name would have been flows. So we set that variable. What flows by default? Let's assume it's a SPA — an app that's purely JavaScript. The newer recommended method is to use the code OAuth flow — I recommend you look that up to really get a good understanding of it if you don't already — plus implicit. This is why we didn't bother generating a secret: you can't hide secrets in pure JavaScript code, especially with all the new front ends. Okay, and we set allowed OAuth flows to true. If you recall, in the OAuth2 flows, code means once the user authenticates they get a code back and exchange that code for a token; with implicit, after they authenticate they get a token immediately. Now the callback URL: once the user logs in, they get redirected back to your website. You can set that to whatever your domain is, and the callback will contain the code or the JWT token. Then we set the supported identity providers — in this case Cognito; this is where you could set it to Google, Facebook, or Login with Amazon. Then we set the allowed OAuth scopes: openid and email. That means in the JWT and the claims we're allowed to access the email. You also need openid to be ticked in order to get access to the sub and all the other claims. Okay, now we need to set up some outputs. We need the client ID that gets auto-generated, and the user pool endpoint — we'll need those for our back-end application in order for it to work. Here we need something else: the element and concat functions built into Terraform.
element just selects the item at a certain index — the index is 0, so the first element — and with concat we're taking the entire user pool list again (the asterisk represents everything), combining it with an empty list, and taking the first item. We have to do this again because of the count attribute; there's no easy way around it in Terraform. It's a requested feature, but until it's there, we have to use this kind of workaround. We don't strictly need the empty list in there, but I like it because it helps when the pool is disabled as well. Okay, now we can do a terraform apply once we have everything we need — the user pool client ID is the last thing — and we'll be right back once we've done the apply. 12. Bearer Authentication for the back end: These are the results of our terraform output. You'll notice that there's a new VPC ID and a couple of new subnet IDs as well. Don't worry about that — it's because I've torn down the stack and brought it back up, so the values in the code won't match. I'll actually replace them as we go along. Now you'll notice that there's the Cognito endpoint for the front end, the pool ID, and we also have the app client ID for the application. We'll need these later for our back-end code in order to actually verify the JWTs. To start off, we create an auth folder, and it will have a couple of different files in it, the first one being a utils.py Python file. What this houses is the common functions for our verification code. I prepared this earlier, so I'll just paste it in and explain it line by line. Don't worry about memorizing the lines, because the template provided on GitHub will have this code for you. Okay, so Cognito does JWTs for authentication using OAuth flows, and you'll notice there's a keys URL variable in there. It takes a placeholder region — the AWS region — and a user pool ID, which we had earlier. We'll pass those in in a little bit. The common functions are:
get_known_keys, and verify and get claims. We can take a look at what the keys look like — I'll paste in an example here. AWS posts the known keys at a public URL following this template, and it can have more than one item in it. It tells us the algorithm that was used for signing and the key ID, in case they rotate to a different key — they have more than one, so one of them can expire and you use the other one. The key ID is passed to us in the headers when a request is made, and we just look that key ID up against the user pool's well-known keys; you can find the URL in the documentation. So we load the keys and read them. Cognito uses a private key to generate the signature for the JWT; we can verify it with one of the public keys at the well-known URL. So what we do here is get the token — in the Cognito event, we just take it from the authorizationToken element — and we split it on a blank space. Then we loop through the keys that you see on the right-hand side — it's an array — and we search for the kid property, because we'll need to know which key and what algorithm they used, in this case, for example, RS256. If we don't find a proper key, we return false, and that will raise an authentication error. Then we construct the key and split the token — the token looks like a bunch of base64-encoded segments with a couple of dots in between. Then we get the unverified claims, and we compare digests. Comparing digests here is checking the client ID claim, making sure this Cognito pool is what our app is using, and compare_digest makes sure we're protected against timing attacks. There's a get_sub function — you can see that it takes it from the requestContext identity and Cognito authentication. These are convenience functions I've included, all pulled from the documentation as well. The sub is also the user ID. And then you've got generate_lambda_policy.
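Before looking at generate_lambda_policy, here's a sketch of the key lookup and claim check just described. Real signature verification needs the python-jose library; this only shows the kid lookup and the timing-safe client ID comparison, with a made-up JWKS document for illustration:

```python
import hmac

# Example of the shape AWS publishes at
# https://cognito-idp.<region>.amazonaws.com/<pool-id>/.well-known/jwks.json
EXAMPLE_JWKS = {
    "keys": [
        {"kid": "key-1", "alg": "RS256", "kty": "RSA", "n": "...", "e": "AQAB"},
        {"kid": "key-2", "alg": "RS256", "kty": "RSA", "n": "...", "e": "AQAB"},
    ]
}

def find_key(jwks, kid):
    # Look up the public key whose "kid" matches the token header's kid
    for key in jwks["keys"]:
        if key["kid"] == kid:
            return key
    return None  # caller treats this as an authentication failure

def client_id_matches(claims, expected_client_id):
    # compare_digest guards against timing attacks
    return hmac.compare_digest(claims.get("aud", ""), expected_client_id)
```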
This generates a policy — an AWS IAM policy. If we've properly authenticated, we generate this policy so that the actual function we're trying to call — in this case get-profile — can run. So we generate an allow policy saying, hey, you're allowed to execute, or invoke, this function, as you can see in the action, the execute-api:Invoke line there on line 69. Cool. So those are the utilities, and we'll also need to create something that uses them. We'll create bearer_auth.py, because we're using a bearer token, and this function is fairly straightforward — I'll just paste it in. Okay, so we see the environment variables where we pass in the user pool ID and the client ID, and we get the keys using the utils function get_known_keys. We check our claims, and if the claims are false — any time we return false in utils.py — we raise an error, "Unauthorized", and that will actually return an unauthorized error. Otherwise, we generate a Lambda policy: for that user, whose ID is in the sub, we say allow, and run this method. Now, we need to pass the environment variables for the pool ID and the client ID to every single function, and you may have more than one user pool, so we'll put them into our custom section of the YAML, based on the stage. If you change the Terraform workspace, we can deploy a different user pool for a different environment with a different set of users, so we need to pass a different one each time. For now we'll just create a dev one. Since we created a set of new code in the auth directory, we'll need to add it to the package section of the YAML so that it gets packaged up and pushed into S3; otherwise, our auth function wouldn't exist. So just add auth to the includes. So this is our get-profile function, and what we also need to do is add the authenticator into the YAML file. We need to declare our auth Lambda function. This one has no endpoint;
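The generate_lambda_policy helper described above returns an IAM policy document. The field names follow the document shape that API Gateway custom authorizers expect, but the function name and signature here are assumptions:

```python
def generate_lambda_policy(principal_id, effect, method_arn):
    # API Gateway custom authorizers return an IAM policy document
    # that allows or denies invoking the target method
    return {
        "principalId": principal_id,  # the authenticated user's sub
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Action": "execute-api:Invoke",
                    "Effect": effect,      # "Allow" or "Deny"
                    "Resource": method_arn,
                }
            ],
        },
    }
```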
it just gets executed by API Gateway first, which then goes to Cognito to double-check everything else. We'll just call it bearerAuth, and we reference the bearer_auth main function in the auth folder. Now, for any function that requires authentication, we go to the YAML file, and under http we just add an authorizer element at the same level as cors and method, and we give it the name of the auth function. It's as easy as that. Next we'll run the functions unauthenticated and authenticated, and we'll see how that looks. 13. Look at AWS Resources for the User Pool and begin authentication code flow: Just quickly, before we do a serverless deploy, we'll double-check in the YAML that the profile function is part of the functions, and then this is the result after running serverless deploy. So we've got a new endpoint here, a bearerAuth function, and our get-profile function. Okay, let's take a look at what's actually in the AWS console. Again, I've clipped the URL bar, so the cursor will be slightly off, but that's all right. We see the my-app-dev user pool — so there's the naming convention — the pool ID, the ARN for the pool, and some of the things we set last time. Right now we've got no users, and we've got our password policies — everything is checked off, minimum of 10 characters — and we created one app client ID, and we've got the callback URL. The callback, once we authenticate, is going to hit our localhost machine. This is for testing; normally what I do is have the callback URLs be localhost plus my domain for that environment. Since we're not building a front end, what I'm gonna do is check for an existing domain; if it's not there, it will create one so we can use it for a hosted UI. Amazon provides a hosted UI for us to do login and registration, just for the purposes of this demo.
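The serverless.yml wiring from this section, sketched — the function, path, and custom-variable names are assumptions matching the narration:

```yaml
custom:
  userPoolId:
    dev: ap-southeast-2_XXXXXXXXX   # placeholder; one per stage/workspace
  clientId:
    dev: xxxxxxxxxxxxxxxxxxxxxxxxxx

functions:
  bearerAuth:
    handler: auth/bearer_auth.main  # no http event; API Gateway invokes it

  getProfile:
    handler: handler.get_profile
    events:
      - http:
          path: users/{id}
          method: get
          cors: true
          authorizer: bearerAuth    # same level as cors and method

package:
  include:
    - auth/**   # otherwise the auth code never gets packaged to S3
```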
We could actually use their SDK on the front end in order to log in, but we don't need to do that. On this page you can customize the hosted UI. For a streamlined experience, I suggest you actually use the SDK — that way you can customize the flows how you like; every experience is different, and the experience is part of the application. Cool. So when I do a curl to this endpoint now — now that we've set an authorizer on it — it should come back with an unauthenticated error. There we go: we got an unauthorized message. That's what we expect. In this case, there are two ways to authenticate, but we first need to register a new user. In this first part of the video, I'm gonna do the authorization code path, and we'll do the implicit path later, which is response type token, as you'll see here. This is the AWS documentation; we just fill in our domain — the one we created earlier — our app client ID, and our callback URL. Now, the callback URL has to match, and in the code I put localhost instead of mydomain.com today — that was the other change that you didn't see behind the scenes. Okay, so we have no users, as we saw earlier. What I need to do now is sign in — or sorry, register — a new user. I'll just put in my name, Tony Truong, my email address, and some password. If you recall from the previous code, it's gonna send me an email message with the verification code. I'll pull that up from my phone and stick it in here. Okay, so that actually hit localhost — that's fine, that's the redirect. If you look at the bottom, we've got the localhost GET request in the network tab, and we've got a code in the OAuth flow. What we can do is make a POST request with that code. I'm not gonna do it here, but if we send in the code with that POST request, we get back an access token. So we have implicit grant and code grant flows.
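The code-grant steps mentioned above can be sketched with curl — the domain, client ID, and code are placeholders, and the endpoint shapes are taken from the Cognito hosted UI documentation rather than the course itself:

```shell
# Hosted UI login URL (authorization code flow); open this in a browser:
#   https://<domain>.auth.<region>.amazoncognito.com/login?response_type=code&client_id=<app-client-id>&redirect_uri=http://localhost

# Exchange the code from the callback for tokens
curl -X POST "https://<domain>.auth.<region>.amazoncognito.com/oauth2/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=authorization_code&client_id=<app-client-id>&code=<code>&redirect_uri=http://localhost"
```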
If you take a look in the user pool, you can see that I've actually confirmed and verified — and this is where we actually pay for the Cognito service. So when I sign in again — sign in again properly — you'll see localhost at the bottom of the network tab; the callback echoes code equals some number. That's the initial request on the right-hand side, but at the bottom there you see a localhost GET, and it's got the code in there. Okay, so let's look at the other one next. 14. Complete Implicit Grant flow: All right, for the implicit grant flow, first we can do a little bit of housecleaning. We've got to change the package to python-jose, and the urllib request import needs to change to import as request — this is a Python 3.7-specific detail. Okay, cool. So let's sign in again, and this time I've changed the URL: response_type equals token instead of code. This will give us an access token once we sign in. Okay, and our callback is to localhost, and we've got an ID token and an access token. So for our testing in any environment, if you want to log in and get a token to make API calls to the back end, this is what you do. I'm just going to search for the access token bit and pull it out. It expires in a day, and it's separated by a couple of dots, one per section — you can use a JWT decoder online to see what's in it; it's what we saw before on the right-hand side of the code screen. In order to actually use it, we do a curl with a header: Authorization, colon, Bearer, space. The space is important, because the utils actually parse on the space. We paste in our token and call our endpoint with any ID, and we've got a response — it's no longer unauthorized. 15. Chat App with Lambda Websockets: Let's take a look at how we can use the boilerplate to handle WebSocket connections. WebSockets are live streams, and API Gateway now has support for handling the connections for us.
It used to be a downfall of Lambda, but now it's fully supported. I prepared some code earlier, and what I'm going to do now is walk through it. The first thing we'll do is open up serverless.yml, and if you look at the environment variables, you can see that I've added in the DynamoDB host — this is for the Southeast-2 region. We need to add these two tags here: the WebSocket API name is whatever you want to name it, and then how do we determine the actions — the routing? What we do is look at body.action. I've also moved the IAM role statements up so that all the functions have this effect: they can all manage connections, and they can all write to a specific DynamoDB table. I did it here for all Lambda functions to simplify the code a bit, just for demonstration purposes, but you can use the IAM-roles-per-function plugin, and that will allow you to manage the permissions more finely, per function. Now take a look at websockets.yml. There's the disconnect and connect handler — that's run when the user connects or disconnects and does certain things; there's a default message handler, in case we can't handle the connect or disconnect, which sends an error message; and our chat function. What we're gonna build today is an echoing chat app: send a message, and it echoes to everybody else. Okay, so we need to import pynamodb.models. PynamoDB is a library for us to write to DynamoDB, and we'll need DynamoDB to handle the connection IDs for us when a user connects. To create the table, we take a look at this model here, where we have different Boolean types — we're essentially defining the schema of the DynamoDB table. We extend the Model class from PynamoDB and specify the table name and the region. We get the region from the environment variable, if it ever changes; otherwise we default to southeast-2. We can also specify the throughput of the table and the host.
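Stepping back, the serverless.yml pieces described here look roughly like the following — the key names follow the Serverless Framework docs, and the values are illustrative:

```yaml
provider:
  name: aws
  runtime: python3.7
  environment:
    DYNAMODB_HOST: https://dynamodb.ap-southeast-2.amazonaws.com
  websocketsApiName: chat-app
  # Route incoming messages by looking at body.action in the JSON payload
  websocketsApiRouteSelectionExpression: $request.body.action
  iamRoleStatements:
    - Effect: Allow
      Action:
        - execute-api:ManageConnections
        - dynamodb:*
      Resource: "*"   # demo only; scope this per function in real use
```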
So there's an npm package that we can use that will allow us to run DynamoDB locally in case we want to write tests, but we override that with the environment variable. Okay, so for every connection there's a connection manager, and from the event request context we can get the connection ID — every connection has a connection ID. If the user is trying to connect, we add them to the connection table with that connection ID. If they're trying to disconnect, we just remove them from the DynamoDB table. Otherwise we generate an error. If you take a look at the add-connection method, it's very simple: we use our connection model and add in the connection ID that we got earlier. You can ignore the is_ready — that's just a demonstration of the Boolean property — and remove-connection does a similar thing. The PynamoDB library is extremely easy to use; I highly recommend it and suggest you look at the documentation for it. Okay, so let's take a look at the actual chat application. I've used the boto3 library to create a connection to apigatewaymanagementapi — the API Gateway Management API. That's a new thing in boto3. What we're gonna do is a scan on the table — we want to send a message to every single user, so we get all the IDs. Then we load the body element, which will have a message in it, and we send that to every single connection using the post_to_connection method. And then the Lambda returns successfully. That's all it really takes — API Gateway does the heavy lifting for us, and we just deal with the logic itself. So in the scripts folder, I've created a method to create the table manually in case the table doesn't exist. If I want to handle a bunch of different idempotent DB schemas, that's where I do it. You can create it in Terraform, or you can create it manually, or you can create it in CloudFormation, but I believe that schema changes should be managed inside the code itself.
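The broadcast logic just described — scan the connection table, then post the message body to every stored connection ID — can be sketched with an in-memory set standing in for the DynamoDB table and a stub standing in for the API Gateway management client. Both stand-ins are assumptions for illustration; the boilerplate uses PynamoDB's scan and boto3's post_to_connection:

```python
import json

# In-memory stand-ins for the DynamoDB connection table and the
# apigatewaymanagementapi client (illustrative only)
connection_table = set()
sent_messages = []  # records (connection_id, data) pairs that would be posted

def post_to_connection(connection_id, data):
    sent_messages.append((connection_id, data))

def connect(connection_id):
    connection_table.add(connection_id)

def disconnect(connection_id):
    connection_table.discard(connection_id)

def chat(event):
    """Broadcast the incoming message to every stored connection ID."""
    body = json.loads(event["body"])
    for connection_id in sorted(connection_table):  # "scan" the table
        post_to_connection(connection_id, body["message"])
    return {"statusCode": 200}

connect("abc")
connect("def")
chat({"body": json.dumps({"action": "chat", "message": "Hello, world"})})
print(sent_messages)  # → [('abc', 'Hello, world'), ('def', 'Hello, world')]
```

Connect and disconnect only touch the table; chat is the only handler that fans out to every connection.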
And then in CI/CD, to run it properly, just make sure your scripts are idempotent. So here we cat the .env file, and we override the DynamoDB host to be the region endpoint — we're going to connect to our AWS account. If you change the .env to use localhost, then you can use a number of libraries out there to simulate DynamoDB locally, but it's never quite the same. Okay, so I'm gonna run my migration script, just using the Python module runner, and that should connect to the ap-southeast-2 endpoint and create my table. Then we're gonna do a deployment. In the default configuration here there are no private VPCs and no private subnets, for demonstration purposes — we'll definitely tackle that in a more secure manner in a little bit. Let's let this deploy; we'll get back. Okay, so if you go to the README in the boilerplate, you'll see that, if you remember, we had the JSON object where the action is the route, and if you install wscat, we can actually connect directly to the endpoint and test it out. It's a neat little tool on Node. Once we've deployed, you'll see a wss:// endpoint — a secure WebSocket endpoint — right there under endpoints. So I'm gonna connect to it in two different terminals, and once we send a message in one of them, we'll see it in the other one. If you recall, in the code the action is the route, and we enabled that as chat, and it's looking for the message body. We send "Hello, world"; let's check the other terminal — we got "Hello, world". That's it. Let's see how we can make that a little bit more secure in the next section.

16. Thanks: All right, if you've managed to make it this far, I think you're now fully armed to the teeth to start creating your own serverless applications. What I wanted to do now is focus on the boilerplate code that I have publicly available on GitHub. Feel free to use this as you please.
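On the idempotent migration script mentioned a moment ago: "safe to re-run" just means check first, create only if missing. A minimal sketch, with an in-memory stand-in for the "does this DynamoDB table exist" check (PynamoDB's real equivalents are Model.exists() and Model.create_table()):

```python
existing_tables = set()  # stand-in for the tables that exist in a region

def table_exists(name):
    return name in existing_tables

def create_table(name):
    existing_tables.add(name)

def migrate(table_name):
    """Idempotent migration: safe to run on every CI/CD deployment."""
    if table_exists(table_name):
        return "skipped"
    create_table(table_name)
    return "created"

print(migrate("connections"))  # → created
print(migrate("connections"))  # → skipped  (the second run is a no-op)
```

Because the second run is a no-op, the script can sit unconditionally in the deployment pipeline.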
There are a couple of additional things in here that are not in the course, and I'll walk through them really quickly right now. What we did cover was Cognito authentication — we had bearer auth in — but now I have some authorization in too. This piece of code will check if the user belongs to a certain group. You see that here in the custom group auth: we check the claims' cognito:groups and make sure that they belong to that group. We also have a post-confirmation hook. Now, a Lambda hook is something that fires automatically without you explicitly calling it; once the user has signed up, we can fire off some custom logic, like joining them to a group. We also have WebSockets. In the websockets folder, we take a look at the main functionality: we handle the users connecting and disconnecting, and if it's not one of those and it's not in one of the routes, they'll get a bad request. There's an example echo in here as well; feel free to read through that in your own time. What we do need to look at is the serverless.yml file. Here we name the WebSocket API, and we give it a route. So in the JSON request, once they've connected through a WebSocket connection, they need to have the action item in their JSON object. Take a look down here at the WebSocket connection example: action equals echo. So in our main.py in the websockets folder there's an echo action, and it echoes back "Hello, world", and we can test that here. Finally, our API Gateway: we might not necessarily want AWS to name it the random name it has, so we can use a plugin here called domain manager. Once we've purchased the domain — you know, we check out custom.yml and set a certificate for it — we can use this plugin to set a custom domain for the API Gateway endpoint. For example, dev.mydomain.com.au/api — and you'll see /api here and whatever.
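The group check described near the start of this lesson — read the token's cognito:groups claim and reject users who aren't in the required group — can be sketched like this. The function name and claim shapes are illustrative; the real decoded Cognito token carries the group list in its cognito:groups claim:

```python
def user_in_group(claims: dict, required_group: str) -> bool:
    """Authorization check: does the decoded token's cognito:groups
    claim include the required group?"""
    groups = claims.get("cognito:groups", [])
    return required_group in groups

# Illustrative decoded claims for a user in two groups
admin_claims = {"sub": "user-123", "cognito:groups": ["admins", "users"]}
print(user_in_group(admin_claims, "admins"))   # → True
print(user_in_group({"sub": "u"}, "admins"))   # → False
```

Note this is authorization on top of authentication: the bearer-token check from the earlier lessons proves who the user is, and this check decides what they're allowed to do.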
So the domain is, for example, dev or staging or production, and the instructions for running that are also in the README. All you need to do, once you've purchased the domain and created the certificate, is run serverless create_domain and serverless deploy, and that should set up your API Gateway. I've already got some Terraform code in there as well to help you handle that. Just be careful that there's currently a bug in Terraform; when that gets fixed, I'll update this code to handle it completely. And that's it for this course. Again, thank you very much for taking the time to go through it, and I hope this helps you.