Scaling WebSockets with NGINX | Hussein Nasser | Skillshare


Scaling WebSockets with NGINX

Hussein Nasser, Author, Software Engineer

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

7 Lessons (1h 16m)
    • 1. Introduction and Agenda

    • 2. Introduction to WebSockets

    • 3. Layer 4 vs Layer 7 WebSockets Proxying

    • 4. Spin up a WebSocket Server

    • 5. Configure NGINX as Layer 4 WebSocket Proxying

    • 6. Configure NGINX as Layer 7 WebSocket Proxying

    • 7. Class Summary






About This Class

The WebSockets protocol is a bidirectional communication protocol, and having NGINX proxy and load balance it can be tricky. In this class I discuss the fundamentals of WebSockets and how to scale the protocol with NGINX.


  • Quick Introduction to WebSockets
  • Layer 4 vs Layer 7 WebSocket Proxying
  • Spin up a WebSocket Server without NGINX
  • Configure NGINX as a Layer 4 WebSocket Proxy/Load Balancer
  • Configure NGINX as a Layer 7 WebSocket Proxy/Load Balancer
  • Summary

Meet Your Teacher


Hussein Nasser

Author, Software Engineer


My name is Hussein and I'm a software engineer. Ever since my uncle gave me my first programming book in 1998 (Learn Programming with Visual Basic 2), I discovered that software is my passion. I started my blog and YouTube channel as an outlet to talk about software.

Using software to solve interesting problems is one of the fascinating things I really enjoy. Feel free to contact me on my social media channels to tell your software story, ask questions or share interesting problems. I would love to hear it!

I also specialize in the field of geographic information systems (GIS). Since 2005 I have helped many organizations in different countries implement GIS technology and wrote custom apps to fit their use cases and streamline their workflows. I wrote fiv…




1. Introduction and Agenda: WebSockets has become a very important and popular protocol for full-duplex communication between two parties. What makes it attractive is its compatibility with the HTTP protocol, which lets it run easily in the browser. So web apps can take advantage of it to build chat apps, logging, live feeds, and anything else that benefits from two-way communication in the browser or any other client that supports WebSockets. In this course, I'd like to talk about how you scale WebSocket connections. Because the protocol is full duplex and two-way, scaling is not easy: when things are stateful like WebSockets, you have no idea when the server is going to send you a message, so you can't just spread the traffic around; you have to pin the connection down to one machine. This is unlike HTTP, which is stateless: you can throw a request at any backend and it can serve you, because each request is independent and carries its own state. That's why REST is called representational state transfer; you transfer the state with every request. WebSockets, however, is stateful. So in this course I'd like to talk about how you can use NGINX, and that is really the main focal point of this course, to scale the WebSockets protocol. Let's go through the agenda. First, a quick introduction to WebSockets: what it is, why it's important, and how it works at a very low level. I like to talk about the fundamentals of things, how they really work, and build up from there. Then we're going to talk about two major concepts, which I've covered in my engineering courses before but are worth repeating here: layer 4 and layer 7 WebSocket proxying.
Any backend engineer really should understand those parts of the OSI model, at least layer 4 and layer 7, because your app can fit into either of them. What you need to understand is what is visible to you at layer 7 versus what is visible at layer 4; that's basically what it boils down to. We're going to learn how to spin up a WebSocket server without NGINX first, using Node.js, because that's the easiest option and the one I'm comfortable with. I'm going to show you the code, share it with you, and write it from scratch, so we have a WebSocket server communicating with the browser. Only then are we going to start scaling it up and load balancing it with NGINX. We'll spin up an NGINX instance and do layer 4 proxying to the WebSocket server, and I'll explain what that means. Then we'll do layer 7 proxying and see what the difference is, what the pros and cons are, and what each gives us. Finally, we'll summarize the entire section. Let's jump in.

2. Introduction to WebSockets: In this lecture I'd like to talk about what the WebSockets protocol is and how it works. These two pieces are critical to understand before scaling, because you can't scale something if you don't understand how it works. Some people get bored of this, but I always like to go back to the basics. Think about how HTTP/1.0 really used to work, the original widely shipped version (there were versions before it). There's a client and there's a server. The client wants to send a request, so it opens a TCP connection and then sends a GET request.
Say the client asks for the index.html page over HTTP/1.0, and the server says: sure, here it is; I read it from disk or from an in-memory cache and respond to you. And here's what happened: since the protocol is stateless, the original design saw no reason to keep the TCP connection open. This was the 1990s, when web pages didn't have much in them; you would request one thing, get it, and there was no need to keep the TCP connection around anymore. Unfortunately, that's no longer the case. When you want to fetch another resource, you go ahead and open another connection: hey, I just fetched index.html and there are images in it I want to download, so give me image 1. OK, here's image 1; close the connection, then do the same thing again. You get the idea: this is very expensive. Opening and closing TCP connections has an overhead, and you can add an extra overhead on top of it: we didn't have encryption back then, but with SSL and TLS every handshake is very expensive. It's a SYN, SYN-ACK, ACK, and then the TLS handshake on top of that. So we quickly abandoned this model and added something called the keep-alive header to solve this problem. One side says: OK, you open a connection and we keep it alive until one of us decides to close it. So: give me the index.html page; here it is; we don't close it; over the same TCP connection I then ask for image 1; here it is; then image 2, image 3, and so on. Only after we're absolutely done do we close everything. Now let's take a look at what the WebSockets protocol is and what it does in the same scenario as HTTP. We have the same configuration here: a client and a server, and this server has WebSocket capability.
We're going to explain what that means in a minute. What happens here is we open a TCP connection, very similar to the HTTP case, and then we do something called a WebSocket handshake, which we'll dive into shortly. It is effectively a negotiation in an HTTP context, on top of that TCP connection. This handshake is very critical to understand, because NGINX needs to understand what it is in order to proxy effectively. Once the handshake finishes, the TCP connection becomes a dumb pipe that just carries messages back and forth. There are no HTTP rules anymore: no request/response cycle, no sending a request and waiting for a reply. The server can push information to the client, the client can send something to the server, and the client can send something else without expecting any response. (There is some sort of acknowledgment at the TCP stack, which is the layer 4 stack, but at the application level there is no expectation.) The server can send something, the client can send something back, and there is no order: it's bidirectional, full-duplex communication. That's very useful in cases like chat, where you can't predict who will send the next message, and the communication can obviously overlap, which is absolutely fine. There are specific headers and framing rules for WebSocket messages, but those are out of the scope of this video. And once we're done, we close the connection. Let's take a deep look at the WebSocket handshake. You might have seen this before.
The ws:// protocol prefix indicates that a URL or URI is a WebSocket resource, and for the secure variant, the equivalent of HTTPS with TLS, it's wss://. Really, everything starts with a TCP connection open request, followed by this particular special request: we send a GET over HTTP/1.1, because that's the minimum version WebSockets works with, for the obvious reason that we need the connection to stay alive to enable full-duplex communication. Then we send a special header, Connection: Upgrade; that header comes with other information as well that I just didn't find space to write here, and it indicates: I want to upgrade this to a WebSocket connection. If the server supports WebSockets and understands that this is actually a request to upgrade the connection, it responds with 101 Switching Protocols; that's a special HTTP status code, 101, meaning: all right, we're going to switch protocols. You might ask why we're "upgrading" at all. The reason is that this starts out as just a normal connection, and at any time you can upgrade an existing HTTP connection over TCP to a WebSocket connection. That's what we do, and once you do that, the entire pipe for that particular connection becomes just a WebSocket connection. That's very critical, because now we have a stateful session: we've just tethered this connection to this backend. When I send information now, I can't really consider it stateless anymore, because that particular connection is now stateful. Anything you send must always go to the same server, even the same process, because that's where the state is saved. We're no longer speaking the HTTP protocol; we're speaking the stateful WebSockets protocol.
Here are more details of what the WebSocket handshake looks like on the wire. We send GET /chat, where /chat is the URI on which we support the WebSocket connection. (We'll see that this path only matters if you have layer 7 proxying, because with layer 4 proxying and load balancing we can't even read what /chat is.) The request is HTTP/1.1 and includes the Host header, since multiple websites can listen on the same IP address; Upgrade: websocket, saying I want to upgrade this connection to WebSocket; and the Connection header set to upgrade. There's also a key, Sec-WebSocket-Key. Then there's the subprotocol header, Sec-WebSocket-Protocol, which is internal to you, the user: the backend says which subprotocols it supports over the WebSocket, so you can have multiple WebSocket servers where one supports chat, one supports logging, one supports a live feed. Then there's the version, and you specify the Origin. Some of these are honestly optional, but the key is critical. The server must respond with HTTP 101, the Switching Protocols status code, saying: OK, I'll upgrade that connection. It sends back Sec-WebSocket-Accept, which is a derivation of the client's key: the server took your key and did some arithmetic on it, so the client knows it's talking to a server that really processed the handshake. And the server answers with the subprotocols it supports: I only support chat, I don't support superchat. All of this you can write yourself from scratch on top of a normal HTTP server, or you can just use a library that does it correctly and securely. So what are the WebSocket use cases?
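Put together, the handshake described above looks roughly like this on the wire. The /chat path, key, and subprotocols here follow the sample exchange in RFC 6455; the hostname is a placeholder.

```http
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Protocol: chat, superchat
Sec-WebSocket-Version: 13
Origin: http://example.com

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Protocol: chat
```

Notice the server picked only `chat` from the two subprotocols the client offered; after the blank line following the 101 response, the connection carries WebSocket frames, not HTTP.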
One of the main WebSocket use cases is chat. Chat is full-duplex communication where each party can send text, images, or media at any moment; we have no idea who will send data next. You might say: I can build a chat app on vanilla HTTP. Sure, but you're going to have to do polling: do you have a chat message for me? Do you have one for me? Do you have one for me? If the server has something new, it responds with it; if it doesn't, it responds with a "nope", which increases the bandwidth on the network and causes all sorts of latency. So bidirectional is preferred if you want instant, push-style chat notifications. Another use case is a live feed, say on a blog. (Live video is a bad idea over WebSockets, just because it's TCP, but you get the idea: use it for things that aren't bandwidth-hungry.) Think of being at a conference where there's a feed of messages, a feed of news, that is constantly being refreshed. You could replace that with Server-Sent Events, since it's technically one-directional, from server to client, but you can still use WebSockets. Multiplayer gaming follows a similar idea: especially if the game is built around sending inputs and the state of the game, WebSockets is a good fit. Otherwise, I would think twice before using WebSockets, just because pushing large bandwidth over a WebSocket is not a good idea in general, and the reason is really TCP. For high-bandwidth traffic where you're OK with it being lossy, you're better off with UDP, with something like WebRTC.
Other examples along the same lines include logging. These are some of the use cases, at least.

3. Layer 4 vs Layer 7 WebSockets Proxying: All right, let's get to the meat of this course. In this section we're going to discuss layer 4 and layer 7 WebSocket proxying. We'll take a few slides to talk about layer 4 versus layer 7 in general, then add WebSockets into it and see the difference between the two as we get into it. At layer 4 of the OSI model we find the TCP/IP content. TCP stands for Transmission Control Protocol, and IP stands for Internet Protocol. Let me translate this into software-engineering speak; a network engineer looks at this completely differently than a backend software engineer. Here's how I see it: think of IP, which is layer 3, as a packet that has a destination IP address, a source IP address, and some data. The TCP stack, which sits higher up on top of IP, adds two additional pieces of useful information (plus so much other stuff): the source port and the destination port, and then shoves all that into an IP packet. So we call them TCP segments inside IP packets. What do we see in TCP/IP? We see ports, IP addresses, and the idea of connections, because TCP establishes its notion of SYNs and ACKs and sequences; it's a very stateful protocol. In a layer 4 context we really shouldn't look at the data inside the IP packet or the TCP segment; we should only look at this metadata: IP addresses, ports, connections, sequences, and so on. We really shouldn't look at the data, and even if we tried, the data might simply be encrypted, because a TLS handshake happened earlier and the content is actually encrypted.
You really cannot see it. And even if it's unencrypted, say it's port 80 and you're sending plain data, we still "should not" look, and I put "should not" in quotes because people, and routers, still do these kinds of things. But as a general rule of thumb, we don't look at the content; those are the rules. Otherwise, if you start looking at the content, you're classified as another kind of layer, because you're digging into things that aren't really your concern. At layer 7, however (I'm skipping layers 5 and 6 because they're irrelevant to this discussion), we're at the application layer; layer 4 is the transport layer. Look at it this way: at layer 7 you have access to everything you see at layer 4 plus the application-layer content. You can see the content, and guess what: you can decrypt the content. By definition, if you're a layer 7 application you have to decrypt the content: you have to terminate TLS, you have to serve your own certificate, you have to look at the content completely. This is unlike layer 4, where I only look at what's visible to me by default, the port and the IP address and so on; those are never encrypted. The beauty of layer 7 is that seeing the content means seeing more useful information, not just ports and IP addresses. You have the context of the application: you have access to the HTTP headers, you have access to the path, like /chat, or /media, or /image.jpeg. At layer 4 you can't do that, because you can't look at the data. At layer 7 you see all of that, so you can do clever things.
For example: if someone goes to /chat, send them to this server; if they go to /media, and media is heavier than chat, send them to that server, a beefier server. You can implement these kinds of smart rules in NGINX as a layer 7 proxy, and that's the beauty of it. Now that we understand layer 4 and layer 7 and what we can see at each, let's go back to WebSockets and the layer 4 proxy. When I say proxying here, I really mean reverse proxying, to be specific, and that includes load balancing as a subset; you could also add an API gateway and similar things on top, but it's simpler to just say proxy, and you get the idea. At layer 4, proxying a WebSocket is done as a simple tunnel. The moment you send me a TCP request, I, the proxy, NGINX, know I'm supposed to just take your word for it, pick a backend, and tunnel everything you send me to that backend, always. NGINX intercepts the SYN request for a connection and creates another connection to the backend. Now, this is one possible implementation; I'm not sure exactly how NGINX does it internally, and some implementations are smarter than that, because stateful proxying like this, always reserving a connection to the backend, can be expensive, so NGINX and other proxies take shortcuts and try to optimize these kinds of things. But this is the general idea: if the client connects to NGINX on a port that we know is a layer 4 proxying front-end port, NGINX creates a backend connection, and any future data sent on the front-end connection is tunneled to the backend connection, blindly. Whether it's HTTP, gRPC, WebSockets, anything: I don't care.
I'm going to tunnel you to the same backend, always. And just like that, as long as your backend supports WebSockets, NGINX doesn't even need to understand the WebSocket protocol. In this configuration NGINX doesn't really need to understand any protocol, because it's a dumb tunnel, and the backend connection remains private and dedicated to that client. Here's a second example of layer 4 proxying with WebSockets. We have a client, NGINX in the middle as a layer 4 proxy, and a server that is a WebSocket server listening on port 443 for this particular example; I made it secure to show you exactly what's happening. First, the client opens a connection, and the moment that connection-open request reaches NGINX on that port, NGINX knows: this is layer 4, I'm going to tunnel, and it opens a backend connection. Then the client sends a TLS handshake. What does NGINX do? In this configuration it's a layer 4 proxy, which means anything you send it just goes all the way to the backend. So the TLS handshake, which is a request to encrypt, goes all the way to the backend, and the backend negotiates it: oh, you want to encrypt? Sure. The backend responds through NGINX: here's my information, here's my certificate, here are my Diffie-Hellman parameters for the encryption keys. And what does NGINX do? It says: oh, you're responding on this tunnel, and this tunnel belongs to this client; boom, I'm just sending it back. Here's what happens: the client gets a key, and the server gets the same key. That's the purpose of the handshake: the goal is to establish a symmetric key that both sides use to encrypt the communication. And guess what?
NGINX has no idea what that key is. NGINX absolutely cannot get that key unless it did something shady, which we're not going to talk about; in this configuration it cannot get it. So this is complete end-to-end encryption: my middle box cannot read my data. Now when the client sends data, say an upgrade request to open a WebSocket connection (I've stripped out the details for simplicity), what does NGINX do? It doesn't even know it's an upgrade request. It receives data on this port, it's encrypted, and NGINX says: I don't even know what this is, I'm not supposed to look, and even if I wanted to, I cannot decrypt anything. So it blindly forwards it to the backend. The backend understands the WebSocket protocol, so it replies with the switching-protocols response, and NGINX just forwards the packets all the way back to the client. That's it. From then on it's a tunnel: anything the client sends, NGINX blindly (not randomly, blindly) forwards to the backend; the backend responds, and NGINX forwards the response to the client. It's a full bidirectional channel of communication, and that's fine. Here's what's very important, though: you have to configure the timeouts so that NGINX doesn't decide to close the connection just because nobody has sent anything for a while. This is where it gets really tricky, and you have to work a little bit harder on the configuration. Once the client closes the connection, the server can safely close that private backend connection. Let's take a look at a layer 7 proxy on WebSockets.
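Before moving on, the layer 4 passthrough just described can be sketched with NGINX's stream module. This is a minimal illustration: the backend addresses are made up, and `proxy_timeout` is the knob for the idle-timeout issue mentioned above (its default of 10 minutes can silently drop quiet WebSocket tunnels).

```nginx
# Layer 4 (stream) passthrough: NGINX never parses the TLS or
# WebSocket traffic; it just tunnels bytes to a chosen backend.
stream {
    upstream ws_backend {
        server 10.0.0.2:443;   # placeholder backend addresses
        server 10.0.0.3:443;
    }

    server {
        listen 443;
        proxy_pass ws_backend;
        # Keep idle tunnels open; the 10-minute default can
        # close a WebSocket that is merely quiet.
        proxy_timeout 1h;
    }
}
```

Because nothing is terminated here, the TLS session and the WebSocket handshake both happen end to end between client and backend, exactly as in the walkthrough above.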
Layer 7 proxying is the same exact scenario, so let's see what's different. The client opens the connection, and this time NGINX actually accepts it itself: I'm a layer 7 proxy, this connection terminates with me. You send me a TLS handshake? I respond to it. NGINX responds to the client with NGINX's certificate, NGINX's public key, NGINX's Diffie-Hellman parameters: you are establishing a TLS session between you and NGINX. NGINX gets to decrypt anything between you and whatever server you want to connect to, because it needs to see what you're sending. Every API gateway does exactly that in order to see what you're sending, and most people don't know that, unfortunately, which is why I want to clarify this picture. That's why, in this configuration, you have to put certificates and the private key on NGINX. Sometimes you even have to share the certificate and private key of the server with NGINX, and that rubs a lot of people the wrong way: some people are fine sharing the same certificate, while others prefer to create a unique certificate and unique TLS keys for NGINX, so that NGINX has its own keys and the origin server has its own. Now the client sends: OK, I want to upgrade to a WebSocket connection, here's all the information. And NGINX sees that in full. Remember, NGINX hasn't touched a backend yet. NGINX is saying: all right, let me see for myself first what you're trying to do, then I'll connect to a backend. And it's not just one backend; it could be seven or eight, any number, and NGINX will load balance across them accordingly.
So when the client sends an upgrade request, NGINX says: all right, I got your request; let me see. I'm configured on this port to load balance and proxy to this particular set of servers, so let me open a backend connection, and let me send a TLS handshake, because this backend is secure too: we have another secure session between NGINX and the backend. If the backend doesn't support TLS, that leg would be unencrypted. Encrypting the backend leg is the preferred configuration, though, especially in the cloud. If it's a private LAN, maybe nobody else has access to it and who cares; but in the cloud, I still wouldn't trust an unencrypted connection, because you never know who your data is shared with. Providers use software-defined networking and whatnot; everything is shared. It depends how sensitive the data is at the end of the day, but the cloud provider could, if they wanted, sniff your traffic, and you might not want that. Anyway, once NGINX reaches the backend, it sends the upgrade request, a brand-new upgrade request of its own. The server responds to NGINX: OK, switching protocols. And only when NGINX receives that switching-protocols response does NGINX issue its own switching-protocols response to the client. These are completely separate exchanges, because the client-side connection is different from the backend-side connection. Here's what you need to understand: now, if you go to /chat, or /superchat, or /feed, NGINX can take you to exactly the server you want to connect to. It can do smart proxying: if you go to /index.html, that's just a normal HTML page, let me fetch that page; if you go to /chat, that's a WebSocket request, let me route it there. Layer 4 cannot do any of that fancy stuff.
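A layer 7 setup along these lines can be sketched as an NGINX http config. This is an illustrative sketch, not the course's exact config: the upstream addresses, paths, and certificate locations are placeholders, while the Upgrade/Connection forwarding follows NGINX's documented WebSocket proxying pattern.

```nginx
# Layer 7 WebSocket proxying: NGINX terminates TLS, inspects
# the request, and forwards the Upgrade to a chosen backend.
http {
    # Forward "Connection: upgrade" only when the client asked
    # for an upgrade; otherwise signal a normal close.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    upstream chat_backend {
        server 10.0.0.2:8080;   # placeholder backends,
        server 10.0.0.3:8080;   # round-robin by default
    }

    server {
        listen 443 ssl;
        ssl_certificate     /etc/nginx/certs/example.crt;
        ssl_certificate_key /etc/nginx/certs/example.key;

        # WebSocket endpoint: path-based routing, impossible
        # at layer 4 because the path is invisible there
        location /chat {
            proxy_pass http://chat_backend;
            proxy_http_version 1.1;   # Upgrade needs HTTP/1.1
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 1h;    # don't drop quiet sockets
        }

        # Everything else is served as plain HTTP
        location / {
            proxy_pass http://chat_backend;
        }
    }
}
```

Note that NGINX re-issues the Upgrade on its own backend connection, which is exactly the two-handshake behavior described above.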
So now, unlike the previous configuration where the whole layer 4 thing is just one end-to-end tunnel, we technically have two WebSocket connections tagged to each other. Anything you send on this one goes out on that one, always, and each leg is encrypted with its own keys. Let's walk through it again. The client wants to send a WebSocket message. It encrypts it with its session key and sends it to NGINX. NGINX takes it, decrypts it with that key, and looks at the content; maybe it does something with it. Say you want to ban bad words: you can implement that logic in NGINX; look at the content after decrypting it and filter out those words. Very easy. Once NGINX has processed the message, it encrypts it with the backend session key and sends it to the backend. The backend decrypts it with its own key, which only those two sides have, does its own thing, and sends a response back, encrypted with that same backend key. NGINX picks it up, and once it has the message destined for that client, it encrypts it with the client's session key and sends it all the way back to the client. You get the idea. When the client closes, the backend connection is also closed. All right. Now that we know what happens on the wire with WebSockets, tunneling, layer 4, and layer 7, let's zoom out a little and see what a normal HTTP request looks like under layer 7 load balancing. What's a load balancer versus a reverse proxy? They're almost exactly identical: a load balancer is just a smarter reverse proxy.
Every load balancer is a reverse proxy, but not every reverse proxy is a load balancer, because a reverse proxy just terminates the connection and sends it to the backend, while a load balancer terminates the connection and sends it to a backend in a smart way: "okay, I'm going to send this one to this backend." That's the idea of a load balancer. Now, what happens here is NGINX pre-heats the backend connections. This is an HTTP Layer 7 load balancer, so it says: let me just open a bunch of connections to the backend, let's warm them up. Not every proxy does that — sometimes it does, sometimes it doesn't; it depends on the situation, the memory, so many other things, even on the configuration. Some configurations say "hey, I want you to pre-heat this many connections, leave this many connections open." So NGINX pre-heats a bunch of TCP connections to the backend, ready for use. A client establishes a brand-new TCP connection, does a TLS handshake, encrypts — all that jazz — and sends a request. This is a normal HTTP request, which means it's request/response: we send a request, we expect back a response. It enters NGINX; NGINX decrypts it, looks at it, decides what to do with it. We're doing load balancing — say round robin. So this request goes to the first warmed-up connection; NGINX sends it to that server, the server responds, and NGINX responds back to the client. The client sends another request; we remember that last time we picked this server, so this time we pick that server — boom, the server responds back to us, and we respond back to the client. Now let's take an example of how Layer 7 load balancing with WebSockets really works.
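To make the round-robin rotation concrete, here's a tiny sketch in JavaScript (purely illustrative — this is not how NGINX implements it internally; the backend names are made up):

```javascript
// Minimal round-robin picker: each call returns the next backend, wrapping around.
function roundRobin(backends) {
  let i = 0;
  return () => backends[i++ % backends.length];
}

const next = roundRobin(["server-2222", "server-3333"]);
console.log(next(), next(), next());
// → server-2222 server-3333 server-2222
```

With HTTP, the picker runs per request; with WebSockets, as we'll see, it can only run once per connection.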
Once we start NGINX and it says "okay, we're going to be a WebSocket load balancer," it may just wait and see what's going on — it doesn't necessarily open backend connections up front. It might, it might not; it really depends, because WebSocket connections are more expensive than normal HTTP ones, and once we open one it's going to become private to a client. So let's say it works this way: the client opens a TCP connection and does a TLS handshake, and NGINX follows up with its own TCP connection and TLS handshake to the backend — okay, let's prepare a connection on the backend. Then the client sends a request to upgrade that connection. And here's what happens: immediately, we tunnel to the backend, because this is a WebSocket request. From now on, anything we get back from that server goes to that client, and anything the client sends goes to that backend connection. The server responds back; then the server sends more data — and this, by the way, is not another request/response pair; it's just WebSocket traffic. In this animation I happened to draw it that way, but the server can send random data whenever it wants and we just pass it along. This connection is now tunneled to that one, and NGINX should never use that backend connection for anything else except that client. So if you have n clients, you're going to have n backend connections reserved. There is no pooling, unfortunately; there is no sharing or anything like that. If the client ever establishes a new connection, that's when load balancing happens — the load balancing is at the connection level, and a new session may be tunneled to a different server. But we don't send every WebSocket message to a different server — that's going to fail, because the server would say "okay, what is that? I can't do that. I don't understand what you just sent me."
Because if you take a WebSocket message and all of a sudden send it to another server, that server is like: what is that? You just sent me something I don't understand in the middle of a connection. There is an order that needs to be maintained, and this order is stateful, so all bets are off — we stick to the same server. Now, you could build a true message-by-message load balancer at the WebSocket level, but you'd have to build it from scratch yourself. If the context allows it — if each message really doesn't depend on the previous one, say you have a centralized backend where all the messages pour in — you can do something like that, and it's actually more effective that way. But again, it depends on the use case. You can do so many tricks if you understand what's going on first, and you really understand what you want. 4. Spin up a WebSocket Server: Alright, finally, the actual demo work. What I'm going to do here is spin up a normal WebSocket server without NGINX — we're going to build it from scratch. You can totally skip this lecture if you don't want to know how a WebSocket server is spun up; you can just use the code — I'm going to reference the source code in the lecture notes for sure. Now, let's jump to the code and run our own WebSocket servers, and once we do that, we're going to load balance those WebSockets with NGINX. Alright guys, how about we jump into it and start spinning up our first WebSocket server using Node.js. Obviously, to do this exercise you need Node.js installed — just go to nodejs.org and install the latest Node.js and you should be up and running. You really don't need anything else. Pick your favorite IDE — I like Visual Studio Code, it's just really nice — then create a folder, and we'll get started.
I'm going to share the source code at the end of the video for everything, so you'll find it in the lecture notes, I believe. Let's go ahead and create an index.js file here. The first thing I'll do is import the http library. That's important because WebSocket is built on top of HTTP, so we really need an HTTP server to get a WebSocket server. Let's create an HTTP server object by doing http.createServer — simple stuff. We're also going to need the WebSocket server, so let's require the websocket library and pull the server class out of it. You don't really need semicolons in JavaScript, but I'm just used to them. So we have the WebSocketServer class; now we need a WebSocketServer object, just like we did with the HTTP server. The way we do it — and let's be consistent with the camel case — is websocketServer = new WebSocketServer, and then we just pass in the HTTP server. Kids these days do shorthand tricks like { httpServer } — it's exactly equivalent to writing { httpServer: httpServer }, but believe it or not, I like the explicit version better. I just find it difficult to parse these new syntaxes that the JavaScript language comes up with every single day; could just be me being a boomer. But yeah — we have a WebSocket server, we have an HTTP server, and away we go. The next step is to listen on the HTTP server. Let's pick port 8080 for now — we're going to change that later. And when we're done, we'll just do console.log("listening on port 8080"). We're going to change that later and make it dynamic; I just want to make sure my WebSocket server is running first. So up until now we just have a normal HTTP server that is listening, right?
But what we don't have is an event handler that listens for the upgrade request we talked about. To do that, we do websocketServer.on("request", ...), and when you receive a request you get a request object. What we're going to do with that request object is accept the upgrade. You do request.accept, and then you pass in which subprotocol — remember when we said "chat" or "superchat", those were subprotocols; null accepts basically everything — and then the origin. You can configure which origins and cookies you accept, so you can effectively reject some requests — say you don't want to accept WebSocket requests from a particular origin. In our situation we don't really care, so we accept any origin by passing the request's origin (or you could do a star — it accepts that too). The act of accepting that request gives us a beautiful connection object, and I'm going to declare a variable for it, let connection = null, because I want to reuse this connection as the full-duplex channel — this is what we really work with. So now all we have to do is handle an event on the connection: if I receive a WebSocket message, then I want to do something with it. All our server will do in this case is console.log it. And what is this message, really? It's an object, and there's a piece of data we want: the UTF-8 data. Because it could really be anything — not just ASCII; with emojis you can send anything — it gets converted to UTF-8, and then you can print it. What our server does is: if you send it a message, it replies back immediately — "hey, I received the message." And how do you reply? You just do connection.send.
The client itself will have a corresponding connection that it listens on and that we'll send data to. So let's say: "okay, hey client, I received your message — on this port." And this is very critical: I want to include the port because I want to know which server we're hitting. So here's what we're going to do: let's declare the port right here as 8080 and replace it everywhere, because I want to make the port dynamic. I'll paste it here, here, and — where else did we use the port? — obviously right here in the console.log. Okay, and we don't need these hardcoded values anymore. Now that I have the port variable, this should be it, to be honest. Before we test it, we have to initialize npm and all that jazz: do npm init -y, and then npm install websocket — we only need to install websocket; http is there by default. Then just do node index.js, and you can see that we're listening on port 8080 by default. I'm going to change that to make it a parameter, because I want to listen on multiple servers. But first, let's test our server. How do we test it? Just go to the browser. Alright guys, in order to test this, you could build an actual client, but I like to do it from the browser itself. Just open a blank browser page and go to DevTools. In DevTools, create a variable called ws and create a WebSocket object — this is built into every browser. I'm going to use the ws:// protocol, hit localhost because that's my server, on port 8080, because that's what we're listening on. And once you do that, the connection seems to be accepted.
And if we take a look at the Network tab — look at that, a 101, we get a Switching Protocols response. We requested the upgrade, we said we want a WebSocket — awesome — and we got a response saying the connection has been established successfully. So our server is actually working correctly. The next thing we do is wire up an onmessage handler — this is how you wire an event — so when the client receives a message, I just want to print it. And how do we send a message? Very simple: ws.send("hello"). Immediately we get a response from the server — "received your message hello on 8080" — and on the client: "hi, I received the message hello." Cool stuff. So let's change the port to make it a little more dynamic, as an argument. We do process.argv[2], because argv is the array of all the arguments you pass to node: the first one, I believe, is node itself; the second one is the file, which is index.js; and the third is whatever we want — in this case, a port. So I'm just passing in the port. If I now do node index.js 2222, it says the exact same thing, but we're listening on port 2222. And I can do this with any port — 3333, say. How about I actually want multiple servers? How can I do that? Very simple: do node index.js 2222, then node index.js 3333, then node index.js 4444, and another one, node index.js 5555. So we have 2222, 3333, 4444, 5555 — four servers, all listening. Let's just test one of them, and then we'll jump to the next lecture. Now if I connect to port 8080 it's going to give me an error. So let's connect to 2222 instead, wire up ws.onmessage again, and send something: "I received your message" on 2222. Awesome.
So if I do the same on 3333 and send a message, now the reply says: "hi, I received the message" from 3333. You might say this is silly, but it's going to be very useful when we put NGINX on top. Once you put NGINX in front as the reverse proxy, you have no idea which backend you're talking to; this port in the reply is going to give us a clue which backend we connected to, and based on that we'll know how the load balancing is actually working. So now we have our WebSocket servers up and running — all good. How about we jump into the Layer 4 configuration of NGINX. 5. Configure NGINX as Layer 4 WebSocket Proxying: Alright, now that we have a lot of WebSocket servers running, let's configure NGINX so that it is a Layer 4 WebSocket proxy to the WebSocket servers we just spun up. Here's what we're going to do for Layer 4: NGINX will listen on port 80 — we're not going to do encryption, for simplicity — and any TCP connection request gets a tunnel that goes to a WebSocket app. We already know that, but we're going to actually demonstrate that paths don't matter in Layer 4; they are a Layer 7 concept. So /chat, /superchat — the path doesn't mean anything. In Layer 4, a WebSocket connection will always go to the same server for its lifetime, and every connection goes to the same set of backends — the same application, technically. Different connections may go to different servers, for sure, but with Layer 4 proxying NGINX doesn't really understand what it's forwarding. So if you do ws://localhost, this goes to the WebSocket app; if you do ws://localhost/blah-blah-blah, this also goes to the WebSocket app. NGINX doesn't care — the path doesn't mean anything to it. Only the backend might care.
Yeah, on the backend you could do smarter things and say "okay, I don't care about this path, I want to drop this request" — but NGINX doesn't care; Layer 4 proxying blindly tunnels everything to the backend. Any connection to port 80 will be tunneled to the WebSocket app backends. Let's jump into it. Alright guys, let's configure NGINX to be a Layer 4 reverse proxy. I'm going to create a tcp.conf configuration file for NGINX. This is something I didn't show before: you can create a configuration file and pass its path to NGINX, and that's what we'll do today. Since we're building a Layer 4 proxy, we start with a stream context. And since NGINX for some reason needs an events block, we give it an empty events block. Next, we build an upstream backend — let's call it wsbackends — and in it: server 127.0.0.1 on port 2222. These are my backends. How many do we have? We have 2222, we have 3333, we have 4444, and we have 5555. Now — wsbackends, what else? How do I stream to my backends? I need a server block. So we create a server that listens on port 80, sure, and then proxy_pass — there's no protocol prefix here; it's a blind pass — proxy_pass everything to wsbackends. Let's save the changes, and now let's run NGINX with this configuration. I'll right-click, copy the path of the configuration file, and here is how you run NGINX. I assume you've already installed NGINX — it's a really straightforward process.
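Put together, the Layer 4 configuration described above looks roughly like this (a sketch; the file name tcp.conf and the loopback addresses follow the demo):

```nginx
# tcp.conf — Layer 4 (stream) WebSocket proxying
# NGINX requires an events block even if it's empty.
events {}

stream {
    upstream wsbackends {
        server 127.0.0.1:2222;
        server 127.0.0.1:3333;
        server 127.0.0.1:4444;
        server 127.0.0.1:5555;
    }

    server {
        listen 80;
        # No http:// prefix here: bytes are tunneled blindly to the chosen backend.
        proxy_pass wsbackends;
    }
}
```

Because this is the stream module, selection happens once per TCP connection, which is exactly the stickiness we'll observe in the browser.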
You can download it directly from nginx.org; if you have a Mac like me, you can do brew install nginx; on Linux, you can just do sudo apt-get install nginx. Once you have NGINX, you'll have access to the nginx binary. You run nginx -c with the configuration path — I assume -c stands for configuration — and just paste it. And just like that, we have NGINX listening on port 80 and acting as a Layer 4 reverse proxy. Let's go to the browser and test it. Alright guys, NGINX is listening on port 80 and ready to accept our WebSocket requests. So let's declare a new WebSocket object this time pointing at ws://localhost — which defaults to what? Port 80, right. If I do this, wire up the usual onmessage, and send something — would you look at that, we got a response from server 3333. Remember we have four WebSocket servers running on the backend: 2222, 3333, 4444 and 5555. Okay, so now another request. If I send another message, which server do you expect to respond? It's going to be the same one. Why? Because it's a tunnel: this ws connection is tunneled to one backend connection. If I now establish a new WebSocket connection, that's a brand-new connection, and if I send on it now, we hit another server. And as long as I keep that session open, I'm always connected to the same server. The thing is, I can put really anything in the path here and it won't matter, because this is Layer 4 proxying: we don't check the path, we don't do anything with it — the path is effectively useless; we literally take whatever we get and blindly send it to the backend.
So you can see that every new connection to NGINX is load balanced — by default, round robin — to a backend, but as long as that connection stays open, any data sent on it always goes to the same server. This is unlike HTTP, which I talked about and showed in my previous section. Now that we have NGINX listening on port 80 with the Layer 4 (stream) configuration, we can jump into the next lecture to see how to do this with a Layer 7 configuration — and then we're going to do cool stuff with that. Let's jump into it. 6. Configure NGINX as Layer 7 WebSocket Proxying: Alright, we know how to configure NGINX as a Layer 4 WebSocket proxy and load balancer, and we've seen how it works. How about we actually do a Layer 7 WebSocket proxy? First of all, what does that mean? In this case, we're going to intercept the path and route appropriately. Here's the plan: if you go to http://localhost, that's an index.html page — we're going to actually serve a page. And if you go to the same server, localhost, but you request the WebSocket app path, we're going to route you to a WebSocket app. This is not possible with Layer 4, because there, port 80 is used exclusively for WebSockets: if you do http://localhost against the Layer 4 setup, the backend is going to freak out — "wait a second, this isn't what I expect; this is supposed to be a WebSocket connection." But in Layer 7 you can actually do these kinds of tricks: you can intercept the route and route appropriately. You could even — we're not going to do it here, but you could — spin up another set of WebSocket servers running a completely different WebSocket application, and if you go to /chat, you go to that set of backends.
You can't do this in Layer 4, because port 80 is blindly tunneling, and we only know how to tunnel to these WebSocket backends — we cannot differentiate. There is not enough metadata in Layer 4 to tell us "I want this app versus that app"; all we have is ports. You could spin up another port if you wanted, but it's going to be ugly in Layer 4: port 80 becomes HTTP, port 8081 becomes the WebSocket app, port 8082 becomes the WebSocket chat app. You can do that, but some people don't like to. Let's jump into the code and do this. Alright guys, in the previous lecture we learned how to deploy NGINX as a Layer 4 proxy, and we've seen how sticky it is. So first of all, let's stop NGINX. You can send a signal to do it: nginx -s stop. This way the NGINX sessions will be stopped — but sometimes stale NGINX processes remain, so you can use killall, at least on Mac: killall nginx, which makes sure everything is gone. I don't have any matching processes. You can do the same thing for Node — I definitely did have some stale Node.js processes, and you can kill them this way too if you want. On Windows, I believe it's taskkill /IM nginx.exe /F — that's how you kill it. Obviously that's not going to work here, because this is a Mac. Alright, let's create a new configuration and call it ws.conf, which indicates that we are Layer 7. And as we discussed, Layer 7 proxying goes through an http context. Let's just copy the entire Layer 4 config, really, because it's very, very similar.
The only thing we need to change: instead of stream — which indicates that we stream to the backend immediately — we actually want Layer 7 proxying, so http. The other difference: when we listen on port 80 and proxy, we can't do a bare proxy_pass anymore; we have to specify what protocol the backend speaks. So let's delete that line. And now that we have an actual Layer 7 proxy, we can add location blocks: "hey, if you're going to /wsapp, execute this; if you're going to /wschat, execute that." Let's split the backends into two upstreams, wsapp and wschat, and say for simplicity that wsapp is these two servers and wschat is these two. So wsapp gets 2222 and 3333, and wschat — copy this block and rename it — gets 4444 and 5555. Now we have two sets of backends, so we can actually play some games here, which is awesome. If you go to /wsapp, I want you to proxy_pass to http://wsapp, and if you go to /wschat, proxy_pass to http://wschat. The name after http:// has to be an upstream that exists right there — and obviously both of these do. But this is not enough, because here's what we're missing: so far we're proxying just HTTP stuff, and now that NGINX is acting as its own Layer 7 proxy, we need NGINX to understand how to do the WebSocket upgrade. Here are the lines we need, so I just copied them in — let me explain them one by one. proxy_http_version 1.1 — that means: hey, I want to speak HTTP/1.1 to the backend. Very nice. Then we forward the Upgrade header we talked about, using the $http_upgrade variable — whatever upgrade protocol the client asked for.
So this forwards any protocol, which can actually be dangerous if you do it this way; the better way is to effectively hardcode it to "websocket". I remember there was an attack — h2c smuggling, over HTTP/2 cleartext — that was enabled by exactly this line, because sometimes you don't want to upgrade just any request; you want NGINX to only upgrade WebSockets, for example. So you may want to hardcode this: as written, this line will upgrade from HTTP to any protocol the client names. HTTP/2 cleartext — that h2c thing, which you should not really use, to be honest — is one such value, WebSocket is another, and there are other protocols as well, so make sure you only allow what you absolutely need. But since this is a test, let's upgrade everything. The next line is proxy_set_header for the Connection header, setting it to "Upgrade" — this mirrors exactly the handshake we talked about. And then proxy_set_header Host: whatever Host we received, just pass it through. Now we copy the same lines into the other location block — insert, paste. Awesome. Now that we have all this in place, let me save, copy the path, and run it: nginx -c, paste. Did we get any errors? Oh yeah, an error: unknown directive "localhost". Okay, where are we getting the error? On line 27. Oh — what did I call it, localhost? It's supposed to be location. Alright, I'm just used to writing localhost, I guess. There you go, it's running. Let's test it — let's go to the browser. Now, in the browser: remember, we only have two locations here, /wschat and /wsapp. So let's try /wsapp.
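Assembled, the Layer 7 configuration described above looks roughly like this (a sketch; the upstream names and ports follow the demo, the file name ws.conf is from the lecture):

```nginx
# ws.conf — Layer 7 (http) WebSocket proxying with path-based routing
events {}

http {
    upstream wsapp {
        server 127.0.0.1:2222;
        server 127.0.0.1:3333;
    }

    upstream wschat {
        server 127.0.0.1:4444;
        server 127.0.0.1:5555;
    }

    server {
        listen 80;

        location /wsapp {
            proxy_pass http://wsapp;
            proxy_http_version 1.1;
            # $http_upgrade forwards whatever protocol the client asked for;
            # hardcoding "websocket" here is stricter and avoids upgrading
            # protocols you never intended to allow (e.g. h2c).
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
        }

        location /wschat {
            proxy_pass http://wschat;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_set_header Host $host;
        }
    }
}
```

Run it with nginx -c /full/path/to/ws.conf; note that -c expects an absolute path.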
And then I wire up onmessage, listening for my message, and I send something — we got a response from 2222. Let's do it again with a new connection; this time I should get a response from 3333. Awesome. Notice that we then went back to 2222 — we never went to 4444, because that's not the app: per our configuration, wsapp only load balances between these two servers. So it sounds like everything is working correctly. Let's go ahead and test /wschat. Look at that — with chat, we're hitting 4444, and if I make another connection, we go to 5555. So we are now load balancing between 4444 and 5555, right? And /blah-blah-blah gives you an error, because it's not a valid location — NGINX only routes these two paths. Let's add an index.html page. Alright, now we have two WebSocket backends, and we demonstrated how important routing is in Layer 7 load balancing, but we don't have any HTML page — you can do more than that, right? So what we're going to do is: when someone goes to location /, we serve them an index.html page — a very simple one. Create index.html — I think you can use the html:5 snippet — and just put "hello world" in it. Very simple. And how do we serve HTML pages? I had to remember this: you do root, and then the path. So I'll just go ahead and copy the path — and I keep forgetting whether it's the directory or the path to the actual HTML page, so let's try both; let's try the directory itself first. I'll save, and now that we've made a change to the configuration, we can do one of two things: stop NGINX altogether, or just reload it with nginx -s reload. Now that NGINX knows about this, if I go to localhost on port 80 — look at that, the localhost page, my page.
And we can still use the /wschat app, that normal app we have, and we have the other WebSocket app on /wsapp. Isn't that cool, guys? Yep — this is the other app, and both of them are load balanced. So finally, what I'm going to do is show you a small HTML page I wrote that does everything we've been doing with the WebSocket stuff, but through a page. I'm just going to show you the code directly — I wrote it beforehand because there was no point showing you the plain HTML typing. It's really the exact same code we've written, except it's a ready-made app; just plain HTML. It has a text box and a div element where we show the output, and if I write anything in the text box and press Enter — that's key code 13 — we send the message to the server and render the response back. Really, really simple stuff. So let's refresh. Now, if I type "test" and hit Enter — this is much easier than typing all that code in the browser console, right? If I refresh and type something else, now we're hitting the other server; refresh again, and we keep load balancing between these two. Now I want to shift to the other two — the chat app. Very simple: change the path in the page to /wschat, save, refresh, and voilà — now we're load balancing between 5555 and 4444. And you can see, as long as I keep typing, I'm using the same session, the same socket, so it always replies from the same server; when I hit refresh, I'm hooked into another server. So it is still load balancing, right, at the connection level. Alright guys, that's all the demos we have today. Let's go back and summarize this course. 7.
Class Summary: Finally, we're almost at the end of the course — how about we summarize? We did a quick introduction to WebSockets and learned what the WebSocket protocol is, in a fair bit of detail. I like to go into deep details, guys, in my lectures, courses and videos in general — I always like to unearth the fundamentals and bubble them up, because a lot of people find it useful, and I find it useful myself: every time I dig deep and try to find more information, I learn something new. So we learned about WebSockets and understand how they work — and that's very critical for a backend engineer. We learned about Layer 4 versus Layer 7 WebSocket proxying and how they differ. We learned how to spin up a WebSocket server without NGINX — just a simple thing. Then we deployed NGINX as a Layer 4 WebSocket load balancer, and then we stepped it up a little bit and deployed NGINX as a Layer 7 WebSocket load balancer. And now you tell me: which one do you think is more expensive? Which do you think is more efficient? Which gives you more features? It's clear as day once you understand how things work — you don't need a pros-and-cons list anymore, you don't need anyone to tell you how things work or what to do, because you understand the basic fundamentals and you get to pick for yourself. You can rebuild this entire architecture yourself if you understand how these pieces really work. And feel free to ask questions in the Q&A section. And that's it — that's the end of Scaling WebSockets with NGINX. I really enjoyed making this course; I hope you enjoy the other sections as well. Let me know what you think, guys, and enjoy the course. This is your instructor, Hussein Nasser, signing out.
Thank you so much and enjoy the course.