Introduction to NGINX | Hussein Nasser | Skillshare



Lessons in This Class

27 Lessons (2h 15m)
    • 1. Introduction to NGINX (2:42)
    • 2. What is NGINX? (4:21)
    • 3. NGINX Use Cases (6:32)
    • 4. Layer 4 and Layer 7 Load Balancing (9:50)
    • 5. TLS Termination and TLS Passthrough (10:22)
    • 6. NGINX Frontend Timeouts (1:37)
    • 7. client_header_timeout (3:25)
    • 8. client_body_timeout (3:39)
    • 9. send_timeout (2:30)
    • 10. keepalive_timeout (3:51)
    • 11. lingering_timeout (2:08)
    • 12. resolver_timeout (1:26)
    • 13. NGINX Backend Timeouts (0:35)
    • 14. proxy_connect_timeout (3:03)
    • 15. proxy_send_timeout (4:41)
    • 16. proxy_read_timeout (3:34)
    • 17. proxy_next_upstream_timeout (1:45)
    • 18. keepalive_timeout (backend) (2:13)
    • 19. Working with NGINX (1:51)
    • 20. Installing NGINX (1:07)
    • 21. NGINX as a Layer 7 Load Balancer (18:12)
    • 22. NGINX as a Layer 4 Reverse Proxy (9:35)
    • 23. Enable SSL/TLS on NGINX (13:51)
    • 24. Enabling Fast and Secure TLS 1.3 on NGINX (3:47)
    • 25. Enabling HTTP/2 on NGINX (3:08)
    • 26. Class Summary (2:38)
    • 27. Bonus - Run applications in a docker container (12:41)

179 Students · -- Projects

About This Class

NGINX is an open-source web server written in C that can also be used as a reverse proxy and a load balancer.

In this course, you will learn how to deploy layer 4/layer 7 load balancing, HTTPS, HTTP/2, and TLS 1.3 with NGINX. Here are the topics we will go through:

  • What is NGINX?

  • NGINX Use Cases

  • Layer 4 and Layer 7 Proxying in NGINX

  • NGINX Timeouts
  • Example

    • Install NGINX (Mac)

    • NGINX as a Web Server

      • Static content

      • Regular expressions in NGINX

      • proxy_pass

    • NGINX as a Layer 7 Proxy

      • Proxy to 4 backend NodeJS services (Docker)

      • ip_hash load balancing

      • Split load to multiple backends (app1/app2)

      • Block certain requests (/admin)

    • NGINX as a Layer 4 Proxy

    • Create DNS record

    • Enable HTTPS on NGINX (Let's Encrypt)

    • Enable TLS 1.3 on NGINX

    • Enable HTTP/2 on NGINX

Meet Your Teacher


Hussein Nasser

Author, Software Engineer

Teacher

My name is Hussein and I'm a software engineer. Ever since my uncle gave me my first programming book in 1998 (Learn Programming with Visual Basic 2), I have known that software is my passion. I started my blog and YouTube channel as an outlet to talk about software.

Using software to solve interesting problems is one of the things I find most fascinating. Feel free to contact me on my social media channels to tell your software story, ask questions, or share interesting problems. I would love to hear them!

I also specialize in the field of geographic information systems (GIS). Since 2005 I have helped many organizations in different countries implement GIS technology and have written custom apps to fit their use cases and streamline their workflows. I wrote fiv...



Transcripts

2. What is NGINX?: NGINX can be looked at as a web server. When we say web server, we mean it serves content, and that content can be static or even dynamic (CGI, FastCGI, and the like). It listens on an HTTP endpoint, understands the HTTP protocol, and as a result serves web content for you; a lot of people use it just for that. But NGINX is also a reverse proxy. What does that mean? It means you can make it face the internet, take the requests it receives from the internet, and move them across your backends appropriately. For instance, you can use it for load balancing: everybody hits NGINX, everybody talks to one thing, but NGINX cannot possibly answer all these requests itself, so it load balances them across multiple backends. That's one reverse proxy use case. Another is backend routing: if you're going to /app1, go to this set of servers; if you're going to /app2, go to a different set of backend servers; same for /version1 versus /version2. I can look at where you're going and route to the appropriate backend. Everybody still talks to one thing, the reverse proxy, and the reverse proxy turns around and talks to the backend; it transforms the request, authenticates it, does so much other stuff, and routes it appropriately. It also checks: if this backend is down, I'm going to route to somebody else; maybe I have to talk to another backend. Very, very useful use cases. Caching: because now we have one thing to talk to, the reverse proxy can in some cases cache responses to repetitive requests. If client A makes a request and then client B makes the exact identical request, then, given there are obviously no security issues, NGINX can be configured to cache the backend's response, so when some client repeats that same exact request it says: I already have an answer for this, I don't need to go to the backend. Every trip to the backend incurs some latency penalty, and we want to minimize that as much as possible. An API gateway is another use case, really very similar to backend routing; if you think about it, all of these are just reverse proxy use cases. I'm talking to one thing, my API gateway, and it can rate limit me: you've been making a lot of requests, so you have to stop; you may only make this number of requests. It can route by API version: if you're going to version 1, I'm going to point you to these backends; version 2 goes to those backends, and so on. So this is, in a nutshell, what NGINX is. See you in the next lecture.
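To make the routing idea concrete, here is a minimal sketch of an NGINX config doing path-based backend routing. The upstream names, addresses, and ports are illustrative assumptions, not the course's exact demo:

    http {
        upstream app1_servers { server 127.0.0.1:2222; }
        upstream app2_servers { server 127.0.0.1:3333; }

        server {
            listen 80;

            # Backend routing: pick a backend pool based on the URL path.
            location /app1 { proxy_pass http://app1_servers; }
            location /app2 { proxy_pass http://app2_servers; }
        }
    }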
3. NGINX Use Cases: In this lecture we're going to discuss my current architecture, the problem with it that I want to solve, and the desired architecture I want to reach. So this is what I have today: I have a database, and it looks like it's Postgres because it's on port 5432. I have built my application; it listens on port 3001 as a pure, unencrypted HTTP endpoint, and I want to deploy it to the world. Now my clients start hitting my server: they hit the /employees endpoint, my code executes, hits the database, and gets back some sort of JSON. Then I start getting more requests, my server starts slowing down, the number of connections starts exceeding limits, and so on. One might think: why complicate things? I'll just spin up more instances on the same machine and expose them on ports 3002, 3003, and so on. Sure, you can do that, but now the clients have to be aware of these ports, which is kind of yucky if you think about it. Plus your endpoints are unsecured, so you have to serve certificates, and now you have to copy the certificate to all the servers. Okay, you can do that too. Plus all of this is on one machine; so you spin up multiple machines, but each machine can go down, and how do you tell the client: hey, by the way, this machine is down, go to that machine? There is a lot of work that needs to be done here. That's why a lot of people introduce an extra layer called the reverse proxy, and one use case of a reverse proxy is a load balancer. So now I'm going to use NGINX: I'll enable HTTPS, put the certificate there, enable HTTP/2, all that beautiful stuff, and completely hide my backend. The backend can remain unsecured plain HTTP, or it can also be HTTPS; there are advantages and disadvantages to both. My clients now talk only to NGINX; there is one endpoint they talk to. If you make a request, it will be load balanced. NGINX has already established communication with the backend servers; I have three servers here, and NGINX checks each one: are you alive? Are you alive? It makes sure everything is up and running. So if you make a GET request, NGINX says: what's my load balancing rule? Round robin, so I'll talk to this first server and give you back the result. When another client makes a request, NGINX might decide to go to the second server, which goes all the way to the database, gets a response, and sends it back. You can see we're going through an extra layer, so there is an additional cost; that's exactly why NGINX, or any reverse proxy, has to be as efficient as possible in the rules it applies and the inspection it does to the packets. It should not be an obstacle; it should be performant. Another request comes in; it can go to the third server and come back the same way. So that's what we want to reach: one thing to talk to, and a backend that can scale independently. As a client, I have no idea that my destination is actually one of these servers.
I talk to NGINX, and NGINX talks to the backend. So as a client I am not aware of my final destination; to me, my final destination is NGINX, because that's what I talk to. Compare this with a (forward) proxy: with a proxy, the server doesn't know the origin client; you talk to a server through the proxy, and the proxy makes the request on your behalf. Here it's the reverse: the server doesn't know that this client made the request, it only knows that NGINX made it. So that is the current architecture, its problems, the desired architecture, and the benefits it gets us. When we reach the example and demo sections, we're actually going to build this. One more very critical piece: NGINX has the concepts of a frontend and a backend. Anything related to communication with the client is called the frontend, and anything related to communication with the backend servers is called the backend. So whenever I mention the NGINX frontend in this class, I mean the client-facing side; and when I say NGINX backend, I mean how NGINX communicates with, and proxies requests to, the backend servers. Note that by frontend I do not mean the app running on the client; I mean the interface listening on NGINX and accepting requests from the client. Very, very critical to understand. That was the current and the desired architecture, and we'll learn more when we actually implement this. Let's jump to the next lecture.
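As a rough sketch of this desired architecture (a TLS-terminating front with three plain-HTTP backends, using the default round-robin strategy), the config could look like the following; the addresses and certificate paths are assumptions:

    http {
        # Three backend app servers; round robin is the default strategy.
        upstream backend_apps {
            server 10.0.0.1:3001;
            server 10.0.0.2:3001;
            server 10.0.0.3:3001;
        }

        server {
            # TLS terminates here; the backends stay plain HTTP.
            listen 443 ssl;
            ssl_certificate     /etc/nginx/certs/example.crt;
            ssl_certificate_key /etc/nginx/certs/example.key;

            location / {
                proxy_pass http://backend_apps;
            }
        }
    }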
4. Layer 4 and Layer 7 Load Balancing: NGINX layer 4 versus layer 7 proxying. These two OSI layers are among the most critical for a backend engineer to really understand: certain things happen at layer 4 and certain things happen at layer 7, and we have completely different visibility at one layer versus the other. Layer 4 and layer 7 refer to the OSI model: layer 1 is the physical signal (electrical, light, or radio), layer 2 is MAC addresses and frames, layer 3 is IP, layer 4 is TCP (and UDP), layer 5 is the session layer, layer 6 is presentation, and layer 7 is basically where the application runs. At layer 4, what we see is only the TCP/IP stack and nothing about the app. The app could be a web app, but imagine you have a magnifier pointed at layer 4: the TCP/IP stack is the only thing you see. So what do we have access to at that layer? The source IP: where is this segment or packet coming from? In the TCP section, the source port: which application on the client is this coming from? (Usually a random port the client generates.) And, very critically, the destination IP and port: where are you going? And this stuff, by the way, is never encrypted. If you sniff in the middle, at your router, at your ISP, or anywhere really, you can see this information in plain text. Since you know the destination, NGINX or any other proxy can do clever things: oh, you're going to this destination? Sorry, this is blocked. Oh, you're going to this port? This port has actually been redirected to that other port. But that is all you have access to. Sometimes proxies and reverse proxies do a little bit of inspection of the content, though not all the time; one main reason is to detect what the traffic really is, for example a SYN ("hi, this is a request to establish a connection") or a TLS Client Hello ("hey, I'm about to establish an encrypted connection"). These are interesting entry points; it's not really application parsing, it's still layer 4, but NGINX can detect them and raise events that can be helpful. That's pretty much it, though: a layer 4 proxy doesn't usually look at the content and derive, say, "this is a GET request." It could, with deep packet inspection, but there's no real point; at that point you're just a layer 7 reverse proxy. Speaking of which, at layer 7 we see the application: an HTTP GET, a POST, a gRPC call, a WebSocket request, the complicated app-level things that sit on top of our transport layer. The transport layer merely carries the content; what the content means belongs to the application, and that's where layer 7 comes in. You have more context, more knowledge at that level. You know where the client is going, not just which IP: you know which URL they're visiting (the URL is not visible at layer 4), which page, the headers, the cookies, so many things. But this stuff is encrypted, right? Usually, yes. That's why any time you do proper layer 7 proxying, you have to decrypt the traffic. Some engineers do not like that the reverse proxy decrypts their traffic, and you have to understand that proxies do exactly that; any CDN decrypts your traffic. That's why I always say: engineers, you've got to know what the cloud services are doing for you; don't just use things blindly. Maybe you understand it and say: I'm okay with them seeing my content and caching it. Because if you want to cache it, you have to decrypt it; you can't cache encrypted bytes, that's useless. So, very critical: at layer 7 we have more context; we can do routing, API gateway features, much fancier stuff, and we can even share backend connections. Let's see how layer 4 and layer 7 proxying work in NGINX. You can operate at layer 7, which is the HTTP context, or you can operate at layer 4, which is the TCP (stream) context.
We're going to see that in the example: whenever you see the stream (TCP) context, that's layer 4; the http context is layer 7. Layer 4 proxying is useful when NGINX doesn't really understand the protocol, for example the Postgres or MySQL database protocols. While Postgres and MySQL are application-layer protocols, NGINX doesn't understand how to parse them and, most importantly, how to turn around and talk to a backend that understands them; it doesn't have that capability. So if you find yourself with a protocol NGINX doesn't understand, say gRPC or WebRTC, you can opt for a layer 4 proxy and tell it: blindly take whatever I give you in this packet or segment and just forward it to the backend. Layer 7 proxying is useful when you want NGINX to share backend connections, cache results, and understand and rewrite headers, doing more with the content itself. In layer 7 proxying, NGINX understands HTTP, so it knows how to read the headers, take the request, read it in full, modify it, cache it, then rewrite it to the backend, add more headers, and so on. For a MySQL stream it doesn't know how to do that; it would break the whole thing. You might say: can I use HTTP at layer 4? Sure, absolutely. You can tell NGINX: I know you understand HTTP, but please don't look at my content; merely forward it to the backend, and make it sticky. That's a property of layer 4: because we don't understand the content there, it's very dangerous to start spraying and load balancing packets across backends, so with layer 4 load balancing the TCP connection is pegged to one and only one backend; you cannot share it or switch to another backend connection mid-stream. With layer 7, NGINX says: I understand this, it's HTTP or HTTPS, HTTP is stateless, so I can spread requests across any of these backend connections. Load balancing becomes more efficient at layer 7, but there's the cost of decryption and the cost of sharing the certificate, which we'll talk about in the TLS lecture. So: use the stream context if you want NGINX to be a layer 4 proxy, and the http context if you want a layer 7 proxy. We'll see both in the example. Let's jump to the next lecture.
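A side-by-side sketch of the two contexts; the Postgres and app addresses are illustrative assumptions:

    # Layer 4: the stream context blindly forwards TCP bytes,
    # e.g. to a Postgres backend whose protocol NGINX cannot parse.
    stream {
        upstream postgres_backend {
            server 10.0.0.5:5432;
        }
        server {
            listen 5432;
            proxy_pass postgres_backend;
        }
    }

    # Layer 7: the http context parses HTTP and can rewrite and route it.
    http {
        server {
            listen 80;
            location / {
                proxy_pass http://10.0.0.6:3001;
            }
        }
    }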
5. TLS Termination and TLS Passthrough: Now let's talk about TLS termination versus TLS passthrough, starting by defining TLS. TLS stands for Transport Layer Security, and it is the de facto way of encrypting traffic between two parties; every time you see HTTPS when visiting a site in your browser, that is TLS. It works by using symmetric encryption for the communication; symmetric means the client and the server have the same key. Now you might ask: how do we share that key? I can't let one party generate the key and send it in plain text, because anyone in the middle can see it; that's just bad. And I can't encrypt it either, because how would the other party decrypt it? It would need some other key to decrypt the key we encrypted. That is where a key exchange algorithm like Diffie-Hellman comes in; another flavor is asymmetric (public/private key) encryption. TLS uses this initially to exchange the symmetric key; strictly speaking, it exchanges a master secret from which the symmetric keys are derived. You might ask: why the mix? Why can't we just use asymmetric encryption for everything, since then we don't have to exchange keys at all? Because asymmetric encryption is very slow and symmetric encryption is very fast; asymmetric encryption is not designed for encrypting large content over the long term. So that is TLS. But symmetric and asymmetric encryption alone do not give you authentication: you cannot guarantee who you are talking to. Anyone in the middle could intercept the key exchange, reply claiming to be the server, intercept the connection, and decrypt everything by performing a sort of unauthorized TLS termination. So we needed a way to authenticate that the server really is the server, and that's where certificates come in. During the TLS handshake, the server replies with its own certificate, signed by a certificate authority. Google.com has a certificate signed by a certificate authority, and no certificate authority should ever sign a certificate for, say, me claiming to be google.com; unless the certificate authority turns out to be shady, which, guess what, has happened, and a couple of them got banned. This is just server authentication; clients can be authenticated too, and that fancier setup is called mTLS, mutual TLS, but it's outside the scope of this video. So let's talk about TLS termination, which is the legitimate, authorized version of what that man in the middle was doing. Say NGINX has HTTPS enabled but the backend is pure HTTP; I simply decided not to secure my backend. You can choose to do that, and there's nothing wrong with it at all on a private, on-premise network. It becomes questionable if your backend is HTTP and it lives in the cloud, where the networking is shared; unless it's truly virtualized and properly secured, you can start questioning it. But yes, you can definitely run HTTP on the backend. What NGINX does here is terminate the TLS: hey client, negotiate the key exchange with me, and I will decrypt your traffic. Anyone sniffing from the client all the way to NGINX cannot see anything; it's all encrypted. But NGINX decrypts the traffic and sends it unencrypted to the backend. Anyone sniffing between NGINX and the backend can see it; but who is that? Someone sitting on my own premises, my admin; you might be fine with that. That's one way to do it. The other way: NGINX does TLS and the backend is also TLS (HTTPS). This is what is recommended in the cloud.
If you have everything in the cloud, then yes, you should secure even the backend. Now NGINX terminates TLS, decrypts the traffic, optionally rewrites it, adds headers, adds cookies, whatever, then turns around and re-encrypts the content to the backend, because that is a different TLS channel; the front channel and the back channel are completely different encryption sessions. So there is decryption plus re-encryption, which means additional latency, which means whatever ciphers you pick for your TLS had better be fast. I think ChaCha is now the fastest, as per a report I read; AES is good; don't use RSA for key exchange at all. There's the SSL Labs test, which we'll go through as well; it will yell at you: this is a bad algorithm, remove this one, add that one. We'll go through that during the exercise. With termination, NGINX can look at layer 7 data, rewrite headers, and cache, but it needs to share the backend's certificate, or at least have its own, because now NGINX is hosting the TLS session and needs to prove itself: hey, this is me. How do you do that? Well, if you have a single domain, you kind of have to share the backend's certificate with NGINX, and that is sometimes a no-no for people: what, share the certificate? That means giving you my private key? I don't trust you. Some people say: hey NGINX, if you really want to look at the traffic, generate your own certificate, I don't care, but I'm not giving you my private key; as a backend, you generate your own private key, get your own certificate signed by some certificate authority, use your own domain. Nothing wrong with that either, but understand that the alternative is sharing private keys, and that is just a no: never share private keys with anything except yourself. That's something to watch when you do TLS termination; it gets a little cumbersome with multiple domains, but that's the price we pay for privacy. Now, TLS passthrough. If I don't trust NGINX, or I don't trust the person hosting NGINX for me, then I do TLS passthrough: my backend does TLS; NGINX, you are just a dumb pipe, pipe everything through yourself all the way to me. NGINX now proxies packets directly to the backend. It sees: okay, this is a TCP handshake, I'll stop here and establish the TCP handshake. But anything TLS that comes through, like a TLS Client Hello, NGINX won't answer: I'm not authorized to terminate TLS, I'm a TLS passthrough, so I'll take that TLS request and forward it to the backend, a pure pass-through. The backend receives the Client Hello and responds back through NGINX. NGINX is effectively a man in the middle here, but it cannot see anything; the encryption is end to end, and the TLS handshake is forwarded all the way to the backend. It's really just a tunnel at the end of the day. But when you do this, you can no longer cache: the moment you establish end-to-end encryption between the client and the backend, NGINX cannot read the content anymore. It cannot see layer 7 content; it is now purely a layer 4 proxy. It can look at source IPs, destination IPs, and ports, but the content is encrypted, so it cannot cache it. On the other hand it is more secure, and NGINX doesn't need the backend's certificate. One more disadvantage: because NGINX has no layer 7 context, it cannot share backend connections; it has to make a dedicated backend connection for every client connection, and that can be costly. Let's jump into the next lecture, which is NGINX timeouts.
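Hedged sketches of both modes; the certificate paths, domain, and backend addresses are assumptions for illustration:

    # TLS termination: NGINX holds the certificate, decrypts,
    # and forwards plain HTTP (or re-encrypted HTTPS) to the backend.
    http {
        server {
            listen 443 ssl;
            ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
            ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
            location / {
                proxy_pass http://10.0.0.6:3001;   # or https:// to re-encrypt
            }
        }
    }

    # TLS passthrough: a layer 4 pipe; the handshake reaches the backend,
    # NGINX never sees plaintext and needs no certificate.
    stream {
        server {
            listen 443;
            proxy_pass 10.0.0.6:443;
        }
    }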
6. NGINX Frontend Timeouts: NGINX timeouts, or any proxy timeouts for that matter, are very critical to controlling the performance and security of the reverse proxy or load balancer. They ensure efficient use of resources: you don't let a runaway request hang around for a very long time, consuming all the resources while the client keeps waiting. I'm going to take some time to explain each of these timeouts separately; each has its own lecture, so you can jump to the one you're interested in. I categorize them into frontend timeouts (remember, the frontend is anything between the client and NGINX) and backend timeouts (where NGINX talks to the backend servers). There are six frontend timeouts, which we'll discuss in the coming lectures: client_header_timeout, client_body_timeout, send_timeout, keepalive_timeout, lingering_timeout, and finally resolver_timeout. Let's explain them one after the other.

7. client_header_timeout: The first frontend timeout is client_header_timeout, defined as a timeout for reading the client request header. If the client does not transmit the entire header within this time, the request is terminated with a 408 (Request Timeout) error; the default is 60 seconds. Any time you see a 4xx, it is kind of the client's fault: the client took too long, or the client made an error. The client_header_timeout applies only to the headers the HTTP client sends. For instance: here is an NGINX server with some backends; the client establishes its TCP connection and sends GET / HTTP/1.1, wanting the root page. Okay, we received that. But there are a bunch of headers the client is responsible for sending: content length, content type, cookies, and other stuff like that. If the client sends, say, a Content-Length header and then all of a sudden just pauses and doesn't send the remaining headers for a long time, how long am I allowed to wait? From that first read, the header timer starts, and I'm allowed to wait that many seconds, after which I kill the connection. And note that "kill" here really means initiating a graceful close, not a forceful reset; we'll talk about that in the lingering section as well. Say it's a POST with Content-Type: application/json, and the client took forever to deliver these headers: sorry, we're not going to accept it; you took too much time. You might ask: in what case would a client take its sweet time sending headers? There is an attack called Slowloris that does exactly that: a client sends the headers byte by byte, just to occupy resources on NGINX or the proxy. If you send one byte per second, a naive per-read timer would reset on every byte; but the header timeout covers the whole header: you get 60 seconds to send the entire set of headers, and if you don't, you get a 408 and the connection is killed. That's basically how Slowloris is prevented: by setting the client header timeout, plus the other timeouts we'll discuss next. Let's jump to that.
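A minimal sketch of tightening this directive; the 10-second value is an illustrative choice, not a recommendation from the class:

    http {
        # Client must deliver all request headers within 10 seconds,
        # otherwise NGINX replies 408 and closes the connection.
        client_header_timeout 10s;
    }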
8. client_body_timeout: The client_body_timeout is another timeout used to prevent attacks such as Slowloris, and also to deal with slow clients that are just taking their time to upload, say, a large file. This one is specific to the body, not the header. Say you're uploading a file or an image, or posting something that takes very long to send; we want to protect the resources on the NGINX side so you don't end up hogging them. The NGINX docs define it as follows: it sets a timeout for reading the client request body, and the timeout is set only for a period between two successive read operations, not for the transmission of the whole request body. The whole body can take a long time; this timer only covers the gap between two reads. Every time the client sends something, the timer resets. If the gap between two successful reads exceeds the client_body_timeout, we respond with a 408 (Request Timeout); the default is 60 seconds. So the client sends a POST, HTTP/1.1, Content-Length one megabyte, Content-Type text: I'm about to send a large body. (NGINX may also have started forwarding to one of the backends already, which I'm not showing here.) The client starts sending the body: a large segment arrives, the timer resets; another segment, reset again. Then it stops, and NGINX waits, waits, waits; once the default 60 seconds is exceeded, the connection is closed. You can lower this to as little as one, two, or three seconds if, as an engineer, you decide: I don't want to serve a client that takes more than two seconds between body chunks. Maybe your uploading clients are usually on a high-bandwidth network, and anyone doing shady business and taking that long is probably an attacker, so you set a very low timeout so nobody can pull this stuff. Once the connection is closed, whatever body arrives afterward will be "blocked", and by blocked I really mean it will be read but discarded, never sent to the backend; that's part of the lingering behavior I'll talk about later. So that is the client_body_timeout, another very important frontend timeout to set correctly in NGINX. You might never need to touch it, but you should understand what it does; that fundamental understanding is what we need as backend engineers. Let's jump to the next lecture.
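A minimal sketch; the 5-second value is an illustrative assumption:

    http {
        # Allow at most 5 seconds of silence between two successive
        # reads of the request body; slower clients get a 408.
        client_body_timeout 5s;
    }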
9. send_timeout: The next timeout to discuss is send_timeout, defined in the NGINX docs as a timeout for transmitting a response to the client; the timeout is set only between two successive write operations, not for the transmission of the whole response. So we've received the request successfully from the client, and now we're responding. Even in the responding phase we have a certain amount of time, not for the entire response, just between two writes to the client. The default is 60 seconds. Let's take an example. The client sends GET / HTTP/1.1 with a Content-Length of 13. NGINX immediately forwards it to the backend, and the backend starts responding with a very large body; NGINX takes each piece and writes it back to the client, and the write timer starts. The backend keeps responding with continuations, NGINX keeps writing them back. Now the backend starts taking a long time to give NGINX content to send to the client. I don't want a bad request like this holding resources both on the frontend and on the backend. So if I can't make the next write to the client within 60 seconds, I close the frontend connection: sorry, whatever request you sent me, maybe it's malicious (that's one example), or maybe my backend is just not happy with this request; either way, I have to close this connection. Closing it also stops the corresponding timers and work on the backend side, which we'll talk about later; anything that arrives from the backend after that is simply ignored. So that is send_timeout; kind of important to understand as well. Let's jump to the next timeout.
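A minimal sketch; the 30-second value is an illustrative assumption:

    http {
        # If NGINX cannot make progress writing the response to the
        # client for 30 seconds, the connection is closed.
        send_timeout 30s;
    }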
10. keepalive_timeout: The keepalive_timeout, another frontend timeout, and this one is easy to understand. In the earlier days of HTTP, every request/response pair was accompanied by a dedicated TCP connection, and that was expensive: for every request you do a three-way handshake, send the GET or POST, the server responds, and then you close the connection. Do this on a modern website that makes thousands of requests, and you spend most of your time opening and tearing down TCP connections. So keep-alive was introduced in the HTTP protocol to keep the connection alive: when you send GET / HTTP/1.1, you also say, by the way, keep the connection alive (which I believe is now the default); the connection stays up until either the client or the server decides to close it. Now the question becomes: how long should I keep an idle connection alive before I say, you know what, that's too many idle connections? Idle connections also take resources: you need to check them for idleness, and on the proxy side you need to watch this very carefully. It doesn't matter if it's one client, but if it's thousands and thousands of clients doing the same thing, that's when you need to revisit the keepalive timeout and maybe lower it. Ask yourself: what is the nature of my requests, of my website? How often are users active on it? Do they refresh, do something for a few seconds, and then they're done? Then maybe keep the connection alive for just five seconds; as long as the user stays active, the keepalive timer keeps resetting anyway. It becomes a matter of measuring the use case: a connection sat idle for 10 seconds, at the 11th second someone wanted to use it and it was closed, so they establish a new connection; maybe that's fine. The default is 75 seconds. You can set 0, which disables keep-alive entirely: sorry, clients, you have to establish a new connection every time. Kind of bad if you think about it, but I have seen use cases where people disable keep-alive. If the client actually stays idle for more than the 75-second default, NGINX just closes the connection. There is also a second argument to keepalive_timeout for the client side: the first value sets the frontend server's keep-alive timeout, while the other sets the Keep-Alive response header field, telling the client how long it should keep the connection open before closing it. I can't think of an example where these should differ; to be honest, I think they should be the same, but I might be wrong about that. So that was the keepalive_timeout. Let's jump to the next one.
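A minimal sketch using the two-argument form mentioned above; the 30-second values are illustrative assumptions:

    http {
        # Keep idle client connections for 30 seconds, and advertise the
        # same value to the client via the Keep-Alive response header.
        keepalive_timeout 30s 30s;
    }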
11. lingering_timeout: The lingering_timeout is defined in the NGINX docs as: when lingering_close is in effect, this directive specifies the maximum waiting time for more client data to arrive. If data are not received during this time, the connection is closed; otherwise, the data are read and ignored, and NGINX starts waiting for more data again. The wait-read-ignore cycle repeats, but no longer than specified by the lingering_time directive. Remember when I said that when NGINX decides to close a connection, it doesn't just forcefully reset it? It initiates the close and gives the client a lingering period to close gracefully. This is better for the HTTP protocol: you can't just slam the connection shut with a reset, because that causes a lot of problems and bugs on the client side. So if NGINX decides to close the connection for any of the reasons we've discussed (a timeout was exceeded, NGINX just didn't like the client, anything), the client can still send content to NGINX, but all NGINX does is read that content and discard it; it will not respond, it will not forward it anywhere. NGINX waits up to the lingering timeout for more data before it physically closes the connection: I gave you all the time you need, that should be enough to cleanly close. That way the TCP session ends in a clean state. This is critical to get right; otherwise connections end up being reset, and that just raises flags for monitoring systems and firewalls. Let's jump to the final frontend timeout.

12. resolver_timeout: The resolver_timeout matters because, when you configure NGINX to connect to backends by name, NGINX needs to resolve the DNS for those names. If a request comes in, NGINX may have to ask: what is the IP address of this backend, server1.test.com? It goes to a DNS server, which could be local or public, to resolve it. How long should it wait before giving up on connecting to this backend? That is the resolver_timeout; the default is 30 seconds. If resolution fails within that window, you effectively can't do anything with that backend: you either move to another backend or drop the connection altogether. I think 30 seconds is far too long as a default for resolving a DNS name; I would set this to a very, very low number.
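A sketch combining the last two directives; the resolver address and the values are assumptions:

    http {
        resolver 127.0.0.1 valid=30s;  # assumed local DNS resolver
        resolver_timeout 5s;           # give up on DNS after 5 seconds

        lingering_timeout 5s;          # wait at most 5s for stray client
                                       # data while closing gracefully
    }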
13. NGINX Backend Timeouts: Alright, now that we've discussed the frontend timeouts, let's discuss the NGINX backend timeouts. They are proxy_connect_timeout, proxy_send_timeout, and proxy_read_timeout; you can see everything is prefixed with proxy_. There is also a keepalive timeout, like the one we saw on the frontend, but this is the backend version: we want to keep connections alive on the frontend, and we also need to keep connections alive on the backend, which is a different thing. And finally, the proxy_next_upstream_timeout. How about we jump into these?

14. proxy_connect_timeout: The first backend timeout is defined as: proxy_connect_timeout defines a timeout for establishing a connection with a proxied server. It should be noted that this timeout usually cannot exceed 75 seconds. Say NGINX is configured with seven backends, or however many; when it starts up, it needs to connect to these backends and check: do you exist, are you alive? How long should NGINX wait before it considers a backend dead? Say server1.test.com is very far from NGINX, which is a bad idea by the way, but say I set it up that way because I want to use geolocation and keep NGINX close to the users for caching reasons. How long should I wait then: 60 seconds, 75 seconds? You can configure this: if I can't connect to this backend in two seconds, this backend is dead. You can make this timeout as short as possible; on a local area network, waiting 75 seconds is just wrong, and you're not going to wait 75 seconds to learn that the backend is down: hey, if I can't connect to you in one second, you're down, that's it. There are pros and cons to tuning this aggressively. NGINX spends resources asking each backend whether it's alive, and we want to know as fast as possible; but make it too small and, if the backend is far away or overwhelmed, or the network latency and hops add up beyond your limit, you'll mark a backend as unhealthy and down just because it didn't respond in that amount of time, when maybe it would have responded in five seconds. So this is something you have to play with; it really depends on the latency, how overwhelmed the server is, and many other factors. That's also why it doesn't make sense to wait more than 75 seconds, and NGINX prevents you from going higher; if a backend takes 75 seconds to accept a connection, that's just a bad backend in my opinion. Let's move on to the next timeout.
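A minimal sketch; the 2-second value and the backend_apps upstream (from the earlier sketch) are assumptions:

    http {
        server {
            location / {
                # Fail fast if a backend does not accept the TCP
                # connection within 2 seconds.
                proxy_connect_timeout 2s;
                proxy_pass http://backend_apps;
            }
        }
    }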
15. proxy_send_timeout: The proxy_send_timeout is defined in the NGINX docs as: sets a timeout for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request. So now we're sending, not from the client to NGINX (we already received that; we have timeouts that deal with that side), but from NGINX to the backend server; we're controlling this leg now. If the client POSTs a very large body, NGINX takes each piece and immediately forwards it to the backend; it doesn't wait to buffer the whole thing, it forwards as it goes. The client sends the next segment of the body; NGINX forwards that too. So now we're waiting on the client side, and we're also writing on the backend side. Remember the client_body_timeout we set on the frontend: that one can expire before this one, and that's totally fine. If NGINX cannot perform the next write to the backend within 60 seconds, it closes the connection to that backend: we no longer want to spend resources on this connection, it's taking too long, so close it and move on. Maybe the client would eventually have given us the rest and it would all have resolved, but, as we said, the body timeout may trigger even before that; and sometimes one timeout fires while the other doesn't, which can be a bad thing: you eventually send the whole request only to get an error because the backend connection is gone. You might ask: why do we need this if we already have the client-side timeout? Because the backend is under more pressure than the frontend, if you think about it. Anything holding a backend connection for those 60 seconds could have been serving other clients, so you're wasting precious resources on something that is the client's fault. You want to terminate this connection, reset it, and let someone else establish a new connection and use it. If it were me, I'm not sure I would close the connection; I would rather just clear the pipe, because the connection itself is still good, right? Especially with layer 7 proxying it could be shared; I could use it for something else. Why close it? But thinking it through, there may be no way to reset or clean up mid-request: once you've started sending, all bets are off; you have to tell the backend, hey, I'm no longer sending you anything, destroy whatever memory you allocated for this. And there is no HTTP request that says "clean everything up but keep the connection", unfortunately. So I change my mind: I would actually close the connection, so the backend destroys everything it needs to. Maybe one of you engineers listening to this can build something like that and push an RFC: clean the connection, don't close it; an official request the server takes as a reset at the HTTP level, not the TCP level; keep the connection, just nullify everything I did. Pretty good idea, actually. Let's move to the next timeout.
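A minimal sketch; the 15-second value and upstream name are assumptions:

    http {
        server {
            location / {
                # Abort the upstream request if NGINX stalls for more
                # than 15 seconds between two successive writes.
                proxy_send_timeout 15s;
                proxy_pass http://backend_apps;
            }
        }
    }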
16. proxy_read_timeout: The proxy_read_timeout is defined as a timeout for reading a response from the proxied server. This is the other way around: now the server is responding to us. How long should I wait between two successive reads? Exactly the same idea as the send timeout, just flipped. Say I send a GET request of 13 bytes, forward it over, and it generates a huge response. NGINX reads a piece of the response from the backend socket and writes it back to the client: one side is a read, the other is a write. Now the backend takes its sweet time sending the next portion. You know what? I'm sorry: if you're taking this much time, I don't want to waste 60 seconds waiting for you to send me something; I'd rather kill the connection after a shorter period and fail the request than keep holding the precious resources that managing this connection costs. You, as a backend engineer, decide this timeout. If the response depends on something real-time, like a WebSocket connection, this has to be really large. Think of server-sent events (SSE): that's exactly how SSE works; you send your request and the server keeps pushing responses whenever it wants; you have no idea when the server will actually respond. It might take one second, might take ten, might take 120; you just have to wait. So for SSE you must set this as large as possible. But if you're just downloading a simple HTML page, there's no reason for that particular request to take more than a second; if it does, it's a bad backend, close it. That's where you really want to tune these timeouts to your use case: for server-sent events, as large as possible, and I suppose the same applies on the client side. That is the proxy_read_timeout; when it's exceeded, we close the connection, and note this might also trigger the send_timeout on the frontend, the other half of the pipe. Again: with server-sent events, configure both send_timeout and proxy_read_timeout to be as large as possible.
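A sketch for the SSE case discussed above; the /events path and the one-hour value are illustrative assumptions:

    http {
        server {
            location /events {
                # Long-lived server-sent events: tolerate long gaps
                # between reads from the backend.
                proxy_read_timeout 1h;
                proxy_pass http://backend_apps;
            }
        }
    }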
17. proxy_next_upstream_timeout: This is defined as the time during which a request can be passed to the next server; the value 0 turns off the limitation, and 0 is the default. So what does it do? Say a GET request goes from the client to the proxy, NGINX picks an upstream and tries to connect, but that server isn't responding for whatever reason: read timeout, unreachable, anything. What should NGINX do next? Just fail the request, or try another server? There's really no right or wrong answer; it depends on your use case. Sometimes the request is precious enough that you say: fine, try another guy, try this server instead. proxy_next_upstream_timeout bounds how long that retry loop is allowed to run. Say I have a thousand backend servers: should NGINX loop through all of them? If I've been in this loop for 10 seconds, maybe just give up and close. That's all this directive is: spend at most this amount of time trying other upstreams, and past that, kill the request. It's really only meaningful once you have more than four or five backends; if you had 20 backends, you probably don't want NGINX looping through the whole fleet. And note that by default this is turned off, which means the retry loop itself has no time limit. Be careful with that. 18. keepalive_timeout (backend): Finally, the backend keepalive timeout. This is defined as the timeout during which an idle keepalive connection to an upstream server will stay open. So we're talking about the backend side again: I connected to the backend server, and now that connection is sitting idle. How long should I leave it idle? Five seconds, six, seven? Should I close it, or leave it? The default is 60 seconds: after 60 idle seconds, the connection is closed. That's the simple backend keepalive mechanism. You can configure this to be very high, as in: "it's okay if idle connections take up resources; when a client makes a request, I want a connection hot and ready, I don't want to pay for a handshake every time a client comes in." And there's an argument for that: if you have a busy server, the connections won't stay idle anyway, they'll keep getting used, so they're kept alive naturally and the 60-second idle limit rarely bites. But here's the problem: if you get a flux of requests, a flux of connections, and then they go idle, and you set the keepalive timeout to something like 30 minutes, those idle connections are consuming precious resources on the backend server and on NGINX itself. So you need to think about this one too: you don't want to burn that many resources, and you want idle connections shut down reasonably fast so the capacity can move to wherever it's needed. And that concludes all the NGINX timeouts. Let's jump to the next lecture.
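A sketch of both backend directives together. The names, counts, and 10s value are illustrative; note that keepalive_timeout in the upstream context exists in the NGINX versions used in this course (1.15.3+), and that upstream keepalive only works for HTTP if you also set the HTTP version and clear the Connection header, as shown.

```nginx
upstream allbackend {
    server 127.0.0.1:2222;
    server 127.0.0.1:3333;
    keepalive 16;              # cache up to 16 idle connections per worker
    keepalive_timeout 60s;     # close an idle upstream connection after 60s (the default)
}

server {
    listen 80;
    location / {
        proxy_pass http://allbackend;
        proxy_http_version 1.1;            # required for upstream keepalive
        proxy_set_header Connection "";    # don't forward "Connection: close"
        proxy_next_upstream_timeout 10s;   # stop retrying other upstreams after 10s (0 = no limit, the default)
    }
}
```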
19. Working with NGINX: Alright, this is everybody's favorite part: now we actually dive into an example, guys. We're going to install NGINX and configure it as a static web server. We're going to configure it as a layer 7 reverse proxy, and technically NGINX is not a forward proxy, it's only a reverse proxy, which is something I learned fairly recently. We'll put NGINX in front of four backend Node.js services running in Docker, because that's the simplest possible setup for testing. We're going to split the load between multiple backends, app1 and app2: layer 7 proxying, where if you go to this path you land here, and if you go to that path you land there. We're going to block certain requests, and yes, you can do that in a layer 7 proxy: if someone hits /admin, block it, so it can only be reached from where you allow, not from the outside. Then NGINX as a layer 4 proxy: we'll show how NGINX can perform as a layer 4 proxy and what that actually means. We're going to enable HTTPS on NGINX using Let's Encrypt, fancy stuff. We're going to enable TLS 1.3, because TLS 1.2 is the default in NGINX, I believe. And finally we'll enable HTTP/2 on NGINX, because HTTP/2 is a really interesting technology, though you should only enable it on your backend when you absolutely need it. I might make another class just about HTTP/2, because there is so much to say about it. Alright, let's jump into the example, guys. 20. Installing NGINX: Our first step is to install NGINX, so let's see how to do that. On a Mac, make sure you have Homebrew, which lets us install pretty much anything very easily. So I'll go ahead and run brew install nginx. Once you do that, it downloads the package, installs it, and drops a default configuration where NGINX will run. I'm not going to use that configuration, because I don't want to rely on default stuff; you know that if you've seen my videos. One thing worth noticing in the install output: it tells you the default port has been set to 8080, so that you can run NGINX without sudo. All default stuff. What I'm going to do instead is delete that file and start from scratch. How about we do that?
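For reference, the install-and-run steps on macOS, roughly as performed in this lesson:

```bash
brew install nginx   # Homebrew installs nginx with a default config listening on 8080
nginx                # start it (no sudo needed while it listens on 8080)
nginx -s reload      # reload after editing the config
nginx -s stop        # stop it
```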
21. NGINX as a Layer 7 Load Balancer: How about we run NGINX as a layer 7 proxy, guys, and proxy it to four Node.js services? First of all, we need to start the four services. How do we do that? docker run -p ... and I'm not going to walk through how I built these; I'll reference the bonus video, which goes into the details of how I created the application, Dockerized it, and made it into a container. They're very simple applications with a few simple REST endpoints. The first one runs on host port 2222, mapped to the container's port 9999. I assign each container a name based on its port so I can address them easily, run it detached, and the image is called nodeapp. Spin up one. Then we spin up another on port 3333; we don't strictly need names for these, but sure, you can add them if you want. Then 4444 (we fumbled that one, guys: it's 4444 mapped to 9999), and finally 5555. So: four Node.js applications. Let's try them out. localhost:2222: the home page literally says "I am app 2222, hello." If you go to /app1, it says "application 1 says hello," and /app2 says the same for application 2. The reason I set it up this way is so that when we proxy, you can see exactly which application is actually serving the request. And the final endpoint is /admin, and I want that admin page to have as few visitors as possible; very few people should ever see it. 3333 is exactly the same, 4444 exactly the same, and so on up to 5555. Four applications. You got that so far? Here's what I want to do: I want my NGINX to listen on port 80, not 8080, and load balance these four applications. Every time I visit localhost, which is just port 80, I want it to go to one of 2222, 3333, 4444, 5555 and round-robin through them. That's what we're going to build.
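Before we wire up NGINX, here are the four containers roughly as started a moment ago. The container names are illustrative (the lesson names at least the first one after its port); the image name nodeapp comes from the bonus lesson at the end of the class, and each container's internal port 9999 maps to a distinct host port.

```bash
docker run -d --name nodeapp2222 -p 2222:9999 nodeapp
docker run -d --name nodeapp3333 -p 3333:9999 nodeapp
docker run -d --name nodeapp4444 -p 4444:9999 nodeapp
docker run -d --name nodeapp5555 -p 5555:9999 nodeapp
```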
Now let's go back to the NGINX config, and how about we do the whole thing again, because it's always good to redo things. Before I delete anything, I'll make a copy of the existing nginx.conf (I'll have a GitHub repo where you can grab all this); I'm saving the web server example as webserver.conf so you can play with it and see what we did there. Then I remove the old nginx.conf and create a brand new one, because it's always better to start from scratch. Same thing, very similar: you create an http block, because we're playing at one layer here, guys, layer 7, the HTTP protocol, and an events block. NGINX stubbornly requires the events block for some reason; I think it should be optional, and I have no idea why it's mandatory. Maybe it wants people to pay attention to these settings, the worker_connections and all that, but I still don't understand why; I think it's just a quirk, in my opinion. Okay, http: what exactly do we want? If someone goes to this page, I want to proxy it to the backend, the applications running on 2222, 3333, 4444, and 5555. How do we do that? Incredibly simple. First, let's listen this time on port 80, which means we'll kind of need to run NGINX as root here, sudo, because those low ports are system reserved. And we forgot something, guys, as always: we need a server block. So: a server, listening on port 80. And if someone goes to location /, then immediately proxy_pass, like we learned in the web server section, to a special backend that doesn't exist yet. I'm going to call it allbackend. There is nothing called allbackend yet; it's completely bogus. But guess what: it's no longer bogus if you actually create it down below. So how about we create that backend? It's very simple: in NGINX this is called an upstream. I create an upstream named allbackend, and here's what happens: if you go to / on port 80, NGINX proxy_passes to this upstream, and here's where the magic happens. This upstream doesn't have any backends yet, so how do I add them? You use the server directive, and this time it's not a block directive, it's a terminal leaf directive: server 127.0.0.1:2222, that's the first server. The second one, you guessed it: 127.0.0.1:3333. Then 127.0.0.1:4444, and finally 127.0.0.1:5555. So what happens here? By default, when you create an upstream, a load balancing algorithm is assigned automatically, and it's called round robin: 1, 2, 3, 4, 1, 2, 3, 4, it just goes through them one by one. You make a request, it takes the first; another request goes to the second; a third request to the third; a fourth to the fourth. That's round robin. We'll show another algorithm, ip_hash, and there's also least connections, where you can get sticky sessions if you want. Let's try it: nginx -s reload. Let's see if this actually worked; we might get yelled at... it worked, and I didn't have to use sudo, I think, because my user is already an admin. Okay. Now go to localhost, and I want you to pay attention to what happens here, guys: refresh, refresh, refresh... 1, 2, 3, 4, 1, 2, 3, 4. That's round robin: every request takes me to the next backend. And if you're interested in what exactly is happening: I, the browser, have established one TCP connection with NGINX, because one is enough for now, and NGINX on the backend side has established four TCP connections with the four backends. I'm terminating my connection at NGINX, because it's layer 7: I'm playing at layer 7, so I send it a request, "GET localhost/," NGINX parses it and says, "/ means proxy_pass to one of the allbackend servers; I'll go through them one by one. This request goes to this server, over this backend TCP connection," and so on. Cool. That's exactly what's happening, and that's why you see the rotation.
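Putting the pieces together, the config at this point looks roughly like this. This is a sketch of what the lesson builds, not the verbatim file from the course's GitHub repo.

```nginx
events {}                          # required by nginx even when empty

http {
    upstream allbackend {
        server 127.0.0.1:2222;
        server 127.0.0.1:3333;
        server 127.0.0.1:4444;
        server 127.0.0.1:5555;
        # round robin is the default algorithm; uncomment for sticky sessions:
        # ip_hash;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://allbackend;
        }
    }
}
```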
Now, suppose I want to use a different load balancing algorithm. If you go to the upstream block, you can add one, and one of them is called ip_hash. I'm not going to go through all of the algorithms, but here's what ip_hash does: it takes the client's IP address, hashes it (consistent hashing, more or less), picks one of the four servers, and sticks to it. That means all requests coming from a given client IP will always go to one and only one backend. And people love this for some reason, especially for stateful connections; they love the sticky session thing. (If you don't know what stateful and stateless applications are, I made a video on it, go watch it.) But a stateful application is generally a bad idea, because you're storing state in memory and relying on that state being there for the next request. There are good reasons to do it sometimes, but generally we're moving more and more toward stateless architectures, where your container can get destroyed at any second, especially in a Kubernetes cluster. Your application can die at any moment; you cannot rely on anything staying resident. So the idea of sticky in-memory state is less and less desirable. I'm not here to judge, though: there are legitimate use cases, which is why the algorithm exists; otherwise it wouldn't. So: add ip_hash to the upstream, nginx -s reload, and if I refresh now, no matter how many times, I stick to server 2222. That's it. /app1, same thing; /admin, same thing: the TCP connection is established, I have one client IP address, my algorithm is ip_hash, so I'm pinned to that backend. Personally I like my round robin, because my applications are stateless and I don't care which backend I hit. Cool? Cool. Alright, how about some fancier stuff? Let's assume my application one, app1, is the heavy one, and I want to assign two servers to it, and two other servers to application two. I don't want to split the load evenly across everything; maybe app2 is a different application entirely, and I want specific instances, two or more, dedicated to each path. So different paths should go to different backends, and you can create as many upstreams as you want. I'll create an upstream called app1backend, and another upstream called app2backend (it doesn't have to be lowercase, by the way; you can be fancy if you want). app1backend is essentially 127.0.0.1:2222 and 127.0.0.1:3333, and app2backend is 127.0.0.1:4444 and 127.0.0.1:5555. So now we have the backends, but we haven't actually wired things together yet. How do we do the rerouting we talked about? And notice: I can only do this because we're a layer 7 proxy. I want you to pay attention to that. All of this is possible because of the http block: because I'm using HTTP, playing at the HTTP layer, which is layer 7, I can do all this fancy path-based stuff. At layer 4 you could never do this.
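The two extra upstreams just described, as a sketch: 2222 and 3333 will serve app1, and 4444 and 5555 will serve app2.

```nginx
upstream app1backend {
    server 127.0.0.1:2222;
    server 127.0.0.1:3333;
}

upstream app2backend {
    server 127.0.0.1:4444;
    server 127.0.0.1:5555;
}
```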
So we wire it up with locations: if someone goes to location /app1, I want to proxy_pass it to the app1backend upstream, and if someone goes to location /app2, proxy_pass to http://app2backend. Let's see if we got this right; I'm not sure whether I need a trailing slash at the end, or whether I need to repeat the path, but I think it will work. Reload and refresh. If I go to /, I'm still round-robining through all four, because we removed the ip_hash. But going to /app1, I'm round-robining through only 2222 and 3333. Sweet. And if I go to /app2, I'm round-robining between 4444 and 5555. Isn't that sweet? And you could even use a different load balancing algorithm for app2 versus app1. How cool is this? You can do all of that stuff. So, so cool. Let's do one more thing: I want to block admin connections. If anyone tries to reach the admin interface from the outside internet, on port 80, don't let them in, because right now they can: if they just type /admin, NGINX round-robins it like everything else, since we never told it what to do with that path. So I want to block /admin on port 80, and have it reachable only by people with access to the internal backends. I know everything here says localhost, but think of ports 2222 through 5555 as an internal network that nobody outside has access to. The problem is that the proxy has just exposed the admin API, and we don't want the admin API exposed to the public internet. What do we do? NGINX to the rescue: if someone requests /admin, then return 403. That means forbidden, baby: you're forbidden from reaching the admin API. You can still access it directly on 2222, sure, or on 3333, but through localhost/admin? Sorry, nope, not coming in. And /app1 still round-robins just fine. So that's how you block things, if you want to block things. Isn't that amazing, guys? Alright, that wraps up the layer 7 material; as usual, I'm saving a copy of this config, something like nginx_http.conf, and this was my layer 7 tutorial.
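Collected in one place, the routing and blocking rules from this lesson look roughly like this, inside the same server block that listens on port 80:

```nginx
location /app1 {
    proxy_pass http://app1backend;
}
location /app2 {
    proxy_pass http://app2backend;
}
location /admin {
    return 403;    # forbidden: don't expose the admin API on the public port
}
```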
22. NGINX as a Layer 4 Reverse Proxy: Let's do a layer 4 tutorial. I'm going to delete a lot of stuff here, because I can't do any of the path-based tricks at layer 4: delete the /app1 and /app2 locations, delete the app1backend and app2backend upstreams, delete this and this and this (I'm narrating the deletions in case you're interested). Alright, let's check things still work: yep, still a layer 7 proxy, still round-robining; I just wanted to make sure I didn't break anything. Now here's the thing, guys: I want NGINX to act as a pure layer 4 load balancer. What does that mean? Let's talk about it a little. With a layer 7 load balancer, the browser communicates with NGINX over one TCP connection. It can be more, obviously, since with HTTP/1.1 the browser can open up to six connections; that changes when we implement HTTP/2, where one connection is multiplexed. But let's assume one for simplicity: one TCP connection between the client, the browser, and NGINX. And on the back side there are at least four more; I don't know NGINX's internals, I've never read the source code, so it may open four or more, but it needs at least one to each of those backends, kept alive and stateful. So five connections in total (and you can control the maximums); with more load it grows as well. Now, if I use a layer 4 load balancer, layer 4 proxying, what happens is the proxy essentially streams the connection through to the backend. The browser connects to the proxy, but the proxy doesn't terminate the request the same way; one implementation, at least, is to keep a NAT table and say: this client IP and port maps to that backend. It picks one backend, establishes the TCP connection to it, maps the addresses (this IP on this port goes to that IP on that port), and from then on just maps the TCP traffic across. So technically it's one TCP connection end to end. Let me show you, and it's so simple: you change http to stream. That's it. (I don't know why it's called stream; calling it tcp would have been much clearer.) The moment you're a layer 4 proxy, you cannot use location blocks, because, guess what, you can't read anything at this layer. In fact, let's prove it: if I leave a location in and reload, NGINX complains, "location directive is not allowed here." It's basically saying: what even is a location? Sorry man, you're a layer 4 load balancer, a layer 4 proxy; I can't read anything. There is no slash-anything, no HTTP-anything. All you get is all or nothing: inside the server you just proxy_pass, and that's it. You don't even write http:// in front of the upstream name, because NGINX has no idea what protocol you're speaking. Which can be a good thing: if you're proxying at layer 4, any layer 7 protocol works, WebSockets, SMTP, WebRTC, anything, because NGINX is just dealing with TCP segments and blindly forwarding them. And I'm still using the round robin algorithm on the upstream here. But you're about to notice something weird, and we'll explain it in a minute.
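The layer 4 version of the config, sketched out: http becomes stream, the locations disappear, and proxy_pass takes a bare upstream name with no scheme. (This requires NGINX to be built with the stream module, which the Homebrew build includes.)

```nginx
events {}

stream {
    upstream allbackend {
        server 127.0.0.1:2222;
        server 127.0.0.1:3333;
        server 127.0.0.1:4444;
        server 127.0.0.1:5555;
    }

    server {
        listen 80;
        proxy_pass allbackend;    # no http:// here; nginx doesn't know the protocol
    }
}
```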
Save, reload: no complaints this time. Let's test it. I'll close the browser and start fresh, just to clear any cached connections, and go to localhost. Enter: 2222. Refresh: still 2222. Refresh again: still 2222. What the heck is going on? I thought I was using round robin! You told us this was round robin; this round robin algorithm sucks! No: NGINX is doing exactly what you asked it to do. Remember, you are a layer 4 proxy now. What does the browser do? It establishes a TCP connection to NGINX on port 80. NGINX says: wait a second, I'm a layer 4 proxy, so I'm supposed to forward this connection to a backend. I have four; which one? This is round robin, so pick one: 2222. So I'll wire this connection through to that backend. It establishes the TCP connection to the backend and does something like NAT, Network Address Translation (that's one implementation, at least; I don't know its internals): "this client IP on this port maps to port 2222 on that address," it adds the entry, and guess what: any future traffic on this TCP connection, from this client, goes to 2222. Why? Because it's one TCP connection, guys, one-to-one, end to end. So when you refresh, you're not establishing a new TCP connection; you're sending more content down the same one, and GET / is just content inside it. You will see things change occasionally, and here's why: because this is HTTP/1.1, the browser sometimes opens up to six TCP connections, and if you refresh fast enough or make a lot of requests, it may use another connection, which lands on another backend. You can't really control that from the browser; it keeps using one connection until it decides to open another. That's why you sometimes see 2222, then 3333, then 4444; it's not very obvious in the browser. It becomes very clear with telnet, so let's do that. telnet 127.0.0.1 80, type a GET request: served from 5555, and the connection closes right after. Do it again: 2222. Again: 3333. Again: 4444. Now that does look like round robin to me. It is round robin, just at the layer 4 level, per connection. Hope that's clear. And it is very, very powerful: you proxy the data as-is without actually inspecting anything, and you can still do certain things, like look at source IP addresses and block traffic, so much cool stuff.
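A quick way to see the per-connection rotation without telnet: curl opens a fresh TCP connection on each invocation, so a loop like this should walk through the backends (a sketch; exact output depends on your setup).

```bash
for i in 1 2 3 4; do curl -s http://localhost/; echo; done
```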
23. Enable SSL/TLS on NGINX: Time to get serious. I want to flip back: I'll keep a copy of this layer 4 config as something like nginx_tcp.conf, and bring my layer 7 HTTP configuration back, because that's what we want now. nginx -s reload; make sure it works. It works, nice. Next, I want to go to my router and add two rules, for port 80 and port 443, pointing at this machine, so I can make the site public under my public IP, create a domain for it, enable HTTPS with a certificate, and then TLS 1.3 and all that jazz. So the first step is making the website public. How? I have localhost working, but if I take my public IP address and hit it over HTTP, it fails, because that public IP is my router, and the router isn't listening on port 80 (or 443) on the public side. So I go to my router's admin page and add two port forwarding rules: I create an application called HTTP, where anything arriving on port 80 is forwarded to this machine, my Mac, on port 80, very simple; and a very similar rule for HTTPS, 443 to 443. The moment I add those, I have two rules, and the router will do a nice port forward: anything that hits my public IP on port 80 gets forwarded to my internal MacBook, which is listening on port 80, which is exactly what we set up, and yes, it's running. (That's kind of dangerous, guys; I'm just demonstrating, and I'll remove these rules after the video. If you know what you're doing, then by all means.) So now, if I take my public IP address and hit Enter, you can see it's working, and that's the state we wanted to reach: if I gave you this public IP, you would hit this server. The next step is no-ip.com. Essentially, you create a hostname with this service that points to your public IP address. It's a free service, so you get an ugly domain, but who cares. I'll create, I don't know, nginxtest.ddns.net... and it's available. So this points to my public IP, which we know works (pasting the IP works), and in a few minutes the hostname should work too: it just has to update the DNS entries so everyone on the internet gets the mapping. Alright guys, it took a while, but now my public DNS name is locked and loaded: the website is public, which is the state we needed so we can enable HTTPS. Right now it's plain HTTP, and obviously that means HTTP/1.1 only; HTTP/2 doesn't really work over unencrypted connections. Technically the spec allows h2 over cleartext, but most browsers don't support it, because upgrading the connection without TLS is hard.
We rely on ALPN, essentially, to negotiate the new protocol during the TLS handshake, which is why browsers said: h2 over secure connections only. And I love that about them; I think it's a great decision. Alright, what's next? We want to use the Let's Encrypt certificate authority to get a certificate for my beautiful shady website. How do we do that? We install the Let's Encrypt tooling on my Mac (on Linux it's very similar, and I'll reference the instructions below for Windows and other platforms; it's basically the same once you know it). What we really need is the certbot CLI, which talks to Let's Encrypt and obtains the certificate material: the public key, inside the full chain, and the private key, which together identify your server. So: brew install letsencrypt. It pulls in its dependencies (using Python, sure, fine) and gives us certbot. Beautiful. Now we ask Let's Encrypt for a certificate, and to do that, first of all, we have to stop NGINX: clear, nginx -s stop, please. Why? Because the bot we're about to run, certbot, listens on port 80 itself; it communicates with Let's Encrypt, performs the challenge, negotiates the certificate, and then securely hands us the beautiful public key and private key. We'll tell NGINX about both: the public key is part of the certificate the server presents, and the private key is, essentially, what the server uses to decrypt the traffic. Let's do it. We have to use sudo because it binds port 80: sudo certbot certonly --standalone. I want the exact same approach I used with HAProxy: cert only, standalone, meaning I do not want certbot touching my config. I know there's another mode where it goes and edits your nginx.conf for you; I don't like that. I want my config touched only by me, because I like to understand what's going on. Run it, enter the password; it asks a few questions, including your domain, which is nginxtest.ddns.net, I think. Alright: shady website, obtaining a new certificate, listening on port 80, all of that... and just like that, we have the full chain, the certificate with the public key, and we have the private key. Those are exactly the two things we need. Note the file locations, guys, because we're about to use both.
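The certificate steps from this lesson, as commands. certbot's standalone mode binds port 80 itself, which is why NGINX must be stopped first; the prompts will ask for your email and the domain.

```bash
brew install letsencrypt             # provides the certbot CLI on macOS
sudo nginx -s stop                   # free up port 80 for the challenge
sudo certbot certonly --standalone   # prompts for the domain, e.g. nginxtest.ddns.net
```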
Back in my config: I copy the first path because we're going to need it (the public key, the full chain; the private key we'll come back for). I go to the same server block and make it listen on an additional port. Which one, guys? The HTTPS default: 443. And to enable TLS we add ssl to the listen directive (yes, it's still called SSL everywhere; that's legacy naming from the old protocol). Once we listen on 443 with ssl, NGINX requires two things: a public key and a private key. The public key is ssl_certificate, where you paste the path to the full chain; then I save and exit so I can copy the private key path and come back. Open nginx.conf again, go down, and add ssl_certificate_key, and paste the private key path. I'm not expecting you to memorize any of this; I certainly don't. But I do try to make sense of everything I see: you need to understand every single line, which is exactly why I don't like using default configurations at all. I need to understand what the heck everything does. So now, when a request hits port 443, we're secure, all that jazz. Save, and let's run NGINX. There's nothing to reload since it's stopped, so just nginx... and we get a permission error. That means it's time to run NGINX as root, which I don't like, but at this point I have to: we're reading from the Let's Encrypt directory, and only root can access it. (You could copy the files to another directory, create a dedicated user; there are lots of options. This is not production-ready, guys. I'm not giving you production-ready setups here; you have to understand what you're doing, and there are always best practices for everything.) Alright, it's running: listening on port 80, listening on 443, certificate loaded. Refresh the plain-HTTP side: still the insecure part. But go to https:// and we are secure, baby. Look at that: "Connection is secure," a legit certificate right there, issued by Let's Encrypt, and it's NGINX serving it. Beautiful stuff. Now, you can and arguably should change the ciphers: the default key exchange here is plain RSA, and that's bad. Not "breakable with a shady laptop" bad, no; what I mean is it's not perfect forward secrecy. There's much better out there, Diffie-Hellman, and you can force Diffie-Hellman. Also, this is definitely not TLS 1.3 yet, and you can't tell whether a site is on TLS 1.3 versus 1.2 just by looking at the certificate; you use one of the TLS checker websites. Such a site checks whether your server is running the latest and greatest. I paste my domain in, run it, and it yells at me: your web server is not running the latest and greatest TLS 1.3; it's using TLS 1.2, rated "moderate." Unfortunately. So TLS 1.3 is not enabled, which is bad, and the next thing we'll do is enable it.
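For reference, the TLS additions from this lesson, sketched. The certificate paths shown are certbot's typical layout on Linux; on a Homebrew Mac they may live elsewhere, so check certbot's own output for the real paths.

```nginx
server {
    listen 80;
    listen 443 ssl;
    ssl_certificate     /etc/letsencrypt/live/nginxtest.ddns.net/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/nginxtest.ddns.net/privkey.pem;

    location / {
        proxy_pass http://allbackend;
    }
}
```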
Sadly enough, guys, if you go to the official nginx.org website, they are also not using the latest TLS 1.3, which is weird, kind of embarrassing if you think about it: NGINX, the holy grail, not on the latest TLS. I don't know why; it's beyond me. So let's go enable TLS 1.3, guys; let me show you how. 24. Enabling Fast and Secure TLS 1.3 on NGINX: (Quick recap: the TLS checker just told us this server is on TLS 1.2 with TLS 1.3 not enabled, rated "moderate.") Back to my config, and it's really, really simple stuff. In the same server block you add one more directive: ssl_protocols (I always have to double-check the exact name), with the value TLSv1.3, added just like that. Written this way, I accept only TLS 1.3: if some shady client wants to come in at 1.2, honestly, I'd rather it not touch my web server at all. If something like Internet Explorer 6 tries to connect to me, I'd rather that client fail than consume my resources. But that's just me: plenty of people support backward compatibility; if you want to be strictly secure, though... And if you watch me at all, you know I make a big deal of TLS 1.3 versus 1.2, because this is really serious stuff. TLS 1.2 is slow: the handshake takes two round trips instead of one. And by default it still negotiates older cipher suites, which is another real problem: it's more easily susceptible to downgrade attacks. If you're running production stuff, you cannot afford that. TLS 1.3 gives you ephemeral Diffie-Hellman, perfect forward secrecy, the latest and greatest encryption algorithms. (I made a whole video about TLS; go watch it if you want more detail.) I could also specify the ciphers here, but this video would get too long, so I'll just set the protocol. Save, then sudo nginx -s reload; obviously I have to use sudo now, every single time. Alright, where's my site? Re-run the check: we're now TLS 1.3 only, with everything older effectively disabled. Beautiful. That's the most secure setup; that's how you want it.
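The one directive from this lesson, which goes inside the TLS-enabled server block (TLSv1.3 support requires NGINX built against OpenSSL 1.1.1 or later):

```nginx
ssl_protocols TLSv1.3;    # accept TLS 1.3 only; older clients are refused
```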
25. Enabling HTTP/2 on NGINX: Final thing I want to do, guys: if I inspect the site now, I'm secure, but my protocol is still HTTP/1.1. Not cool, because, as we discussed, the browser will open a pile of connections and play those silly pipelining games and all that unnecessary stuff. So how do I use the beautiful, latest-and-greatest HTTP/2 (or maybe HTTP/3 in the future)? It is extremely simple. Go back to the config, find the listen directive, and right after "443 ssl" add "http2," with a space. That's it: you're telling NGINX, hey, I want to speak HTTP/2. Save, quit, then sudo nginx -s reload. Let's try it: now if I refresh, hopefully the browser tears down the old TCP connection and establishes a brand new one... there it is. Beautiful, beautiful h2. So now we're on TLS 1.3 and HTTP/2, with everything older than TLS 1.3 disabled, which is admittedly a little harsh, if you think about it, for a shady website like mine, but you can do it if you want to. You can also pin specific cipher configurations; I'm not going to go through it, but go to the docs and read up if you want specific ciphers, because mine are, honestly, still a bit disappointing: if I look at the connection, I think the key exchange is still RSA, so not perfect forward secrecy per se. If you want ephemeral Diffie-Hellman, you may need to specify Diffie-Hellman parameters as well. That's why I like HAProxy more, to be honest: I don't like these defaults that are just old stuff; HAProxy forces you forward by default, always using the latest and greatest security. Which version of NGINX am I on, by the way? People may have started yelling at me about running something ancient: I'm on 1.17, which I think is almost the latest, and in 1.17 the default protocol is TLS 1.2 and the default ciphers are essentially RSA and Diffie-Hellman. Alright guys, I think we did everything we needed to do.
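The change from this lesson boils down to one token on the listen directive (this is the pre-1.25 syntax, which matches the 1.17 build used in the course):

```nginx
listen 443 ssl http2;
```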
26. Class Summary: Thank you so much, guys, for checking out this class; I hope you enjoyed it. Let's wrap up. We talked about what NGINX is, and about this dual identity of reverse proxy and web server, and how people love NGINX precisely because it can do both. Some people prefer their reverse proxy to be just a reverse proxy, because it's simpler that way: the fewer features a component has, the better it tends to be at them. If you're doing everything, then what's your identity, really? That's why a lot of people prefer, say, HAProxy over NGINX as the proxy ("I know this is just a reverse proxy, I don't want anything else") with a dedicated web server like Apache behind it. But again, NGINX is very, very popular exactly because it does both, and now it has F5 as the company behind it, powering it, with all the money in the world to pour into the web server and the reverse proxy. I suspect that's also why the forward-proxy idea never went anywhere: "let's just do these two, we're not going to do another one." We talked through the kinds of architectures, and how we arrived at the design in the example. Obviously, we talked about layer 4 and layer 7 in general, and reverse proxying in NGINX, and the differences between the two. We talked about TLS, TLS termination, and TLS passthrough, and when to pick one over the other: very, very critical stuff. We spent a lot of time on the NGINX timeouts and when to configure each one, and how critical each is based on your use case: WebSockets, Server-Sent Events, all of that. You really have to configure the backend timeouts and the frontend timeouts carefully to make your backend efficient. And finally we did the example: we spun up NGINX as a web server, a layer 7 proxy, and a layer 4 proxy; we enabled HTTPS with a Let's Encrypt certificate; we enabled TLS 1.3 with the TLS checker happy with us; and we enabled HTTP/2. I hope you guys enjoyed this class; I'll see you in the next one. Stay fantastic, stay awesome. My name is Hussein Nasser; this class was recorded in September 2021. Thank you so much, goodbye. 27. Bonus - Run applications in a docker container: Hey guys, I'm bringing you a quick video to show how you can spin up a lightweight Node runtime container that holds your application in a fully encapsulated way, so you can statelessly spin up a container, execute your application, and spin it down at any time. This is very useful if you're, say, running Jenkins jobs or a CI/CD pipeline, or you're part of a Kubernetes cluster: you want your application packaged as an image or container so you can say, "spin up this container, execute, do something, and spin down." So let me show you how to do that. There are two steps to the process: first we build the Docker image, then we spin up the container. To build the image we'll write our own Dockerfile, pulling from the lightweight official node image that's already out there, and we'll write our own little application, an Express app; then we build the image and spin up a container listening on a certain port. Very simple. And you can do the exact same thing with Python, the exact same thing with Go, any runtime you like; I'm showing Node, but the pattern is identical. So let's jump into it. I have VS Code open and, obviously, Docker running. I open a brand-new folder: in my JavaScript directory I create a folder called docker, and the code will be available in the description below, guys. Here's the first thing: since we're building an image, we need to create a Dockerfile, literally named Dockerfile, and I believe the casing has to match as well. That file is where we write the instructions to build our image, and the image will inherit from the node runtime at a specific version. The other thing I want to build here is the application itself: I create a folder called app, and that application will contain index.js. Let's build the application first, and then the Dockerfile; how about that? So, we have the app folder with index.js inside. In the terminal, cd into app (make sure I really am in app), and run npm init, accepting the defaults. Great: that gives us a package.json.
Now let's write the application: const app = require('express')(), very simple stuff. Then an app.get for / so that when someone visits, we res.send something like "I am a node app from a lightweight container." Simple. Then app.listen, and I'm going to listen on port 9999 just for fun, with a console.log saying we're listening on 9999. Cool; that's the whole application right there. Next, let's quickly install what we need, which is express: npm install express. And crucially, it gets added to the dependencies in package.json, and that's a very important detail: my package.json now declares that it requires express. So you saw what I did: I wrote my application, then ran npm install express. And now if I just run a bare npm install, it reads package.json and says, "hey, you require express, I'll install express for you." That's the cool part, and it's what we'll rely on inside the container. Alright, let's run the application. How? I'll add a script to package.json, call it app, that just runs index.js. So: npm run app... and I'm listening on port 9999. Sweet. Over to the browser: localhost:9999 works. That's obviously a bit of a lie, though: we're not serving from a container, we're running straight on the machine. But we're going to change that real soon. So, on to the Dockerfile.
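A reconstruction of the app dictated in this lesson; the response string is approximate, not verbatim from the video. With a package.json script like "app": "node index.js", the npm run app command below starts it.

```javascript
// index.js
const app = require("express")();

// Home page: identifies the app, so later we can tell which backend served us.
app.get("/", (req, res) => res.send("I am a node app from a lightweight container"));

// Listen on 9999, the port the container will expose.
app.listen(9999, () => console.log("listening on 9999"));
```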
So, the Dockerfile. What I want is to build a beautiful image. First instruction: FROM. We inherit from the node image, and I want a specific version, so that if you're watching this video three years from now, you don't install the latest Node and have it break the application for some reason. So: FROM node:12. Next, the working directory: I'm just going to make up a directory inside the container itself, and I'll call it /home/node/app; it's a completely made-up path that doesn't exist yet, and that working directory is where my application will live. Now, up to this point the container has none of my code, because we never copied it in. So what do I do? Imagine the build running right here next to my project: I want to copy the app folder into the container. Literally: COPY app /home/node/app. The app on the left is the location I'm building from, since I'm running from the directory with the Dockerfile; the path on the right is inside the image. That copies the contents, index.js and package.json (ideally we shouldn't copy node_modules, because we really don't need to, but this is just a test). What's next? Once the code is copied, imagine yourself inside this otherwise blank container: what do you do next, guys? We need to install the dependencies. And here there are two kinds of instructions, one called RUN and one called CMD, and each executes at a certain stage: RUN executes when you build the image from the Dockerfile, and CMD executes when you actually run an instance of the container from that image. So for installing, that's what we want at build time: RUN npm install. And honestly, I don't need to say npm install express, because, from package.json, if I just say npm install, it looks at the dependencies: "what do you want installed? You want express," and it installs it for us. Sweet; that's all we need there. And the next thing: we want to EXPOSE port 9999, the port we're listening on. Sometimes you don't really need to expose any ports: if your application, I don't know, computes the first 70 prime numbers, writes them somewhere, and shuts down, that's a completely self-contained, stateless workload. But in this case I am listening on a port and I want it exposed to the outside world, so expose 9999. Alright, but here's the thing, guys: something is still missing. As written, the application will never actually run when I start the container. So how do I run my application when the container spins up? That's the CMD: CMD npm run app, the command we literally ran by hand earlier; that's what will execute. I think we have everything we need. Back in the terminal, I go one level up so I'm next to the Dockerfile, and: docker build -t nodeapp . with -t giving it a name (nodeapp, though anything works, really) and that important dot at the end being the build context, the current directory. It builds, installs, everything looks good, guys. So now we have an image called nodeapp; how do we spin up a container from it? We've done this many times on this channel: docker run --name nodeapp -p 9999:9999 nodeapp, giving it a name (it happens to be the same as the image name; you can call it anything you want). Now, when I run, the CMD executes; the RUN already executed when I built the image. I know, the naming is confusing. The -p maps ports: the left 9999 is the host port, and it could be anything you want; the right one has to be 9999, because that's the port inside the container. And there we go: we're listening on 9999 again, but this time, guys, this is a container, a completely stateless container, entirely self-contained. I can destroy it; I can give you the image, or just the Dockerfile. You clone the repo, run the same docker build I did, and you'll be able to do exactly the same thing. You could even spin up a Jenkins instance and run this docker run as part of a job. And obviously, when you don't care about the container anymore, you can destroy it: docker stop nodeapp, then docker rm nodeapp, and the application is gone, just like that; it's killed. And you can spin up multiple containers if you want. Let's do that. Let's do that.
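The Dockerfile assembled in this lesson, reconstructed from the narration; the image tag and paths are the ones mentioned in the video.

```dockerfile
# Pin the runtime version so future Node releases don't break the app.
FROM node:12
# A made-up working directory inside the container.
WORKDIR /home/node/app
# Copy index.js and package.json (and, for this test, node_modules too) into the image.
COPY app /home/node/app
# RUN executes at image build time; npm reads express from package.json.
RUN npm install
# The port the app listens on.
EXPOSE 9999
# CMD executes when a container is started from the image.
CMD ["npm", "run", "app"]
```

And the build-and-run commands, roughly as typed in the lesson:

```bash
docker build -t nodeapp .                         # run from the directory containing the Dockerfile
docker run --name nodeapp -p 9999:9999 nodeapp    # host port 9999 -> container port 9999
```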
Alright guys, here's the power of this: now that we have the image running this application, you can spin up as many containers as you want. So: docker run -d -p 8000:9999 nodeapp, mapping host port 8000 to the container's 9999, detached, because I just want the application running in the background; that's my first container, and I'll name it for fun. Then I create another on port 8001, and another on 8002. Now let's see if 8000, 8001, and 8002 are actually working. They'd better be... and they are: localhost:8000 works, 8001 works, 8002 works. How easy is this, guys? With a few lines I've spun up three web servers running my application, essentially three containers, and you can put them behind a load balancer, whether that's Caddy, or NGINX, or HAProxy, which we've made a lot of videos about on this channel, and load balance the whole thing. We might do that in another video and start building a whole microservices architecture with containers. This is really powerful stuff, if you think about it. Alright guys, I hope you enjoyed this video; I'll see you in the next one. Stay awesome.