Learn AWS Development | Wayne Ye | Skillshare



Lessons in This Class

27 Lessons (5h 44m)
    • 1. Course Introduction

      3:12
    • 2. AWS Platform Overview

      5:29
    • 3. Console Access and API Key Access

      3:23
    • 4. Install and Configure Your AWS CLI SDK

      2:10
    • 5. Exploring Fundamental AWS services

      1:40
    • 6. Identity and Access Management with IAM

      7:10
    • 7. Store and Retrieve Data Anywhere Using S3

      9:12
    • 8. Mastering Server Fleet with EC2

      12:49
    • 9. AWS Networking Fundamental Virtual Private Cloud (VPC)

      13:38
    • 10. Databases in the Cloud RDS and DynamoDB

      13:25
    • 11. AWS Lambda Fundamentals and Benefits

      2:35
    • 12. Build Your Own Serverless Service

      13:17
    • 13. Managing Your Serverless Service Using SAM Serverless Framework

      6:19
    • 14. Best Practices for Lambda and Serverless Service

      2:08
    • 15. Container and ECS Overview

      5:20
    • 16. Orchestrating Your ECS Repositories and Clusters

      3:36
    • 17. Fargate Worry Free Solution for Your Container Based Services

      3:43
    • 18. Best Practices for ECS

      2:13
    • 19. SQS The Ultimate Message Queuing Service

      9:46
    • 20. Cognito Managed User Sign Up, Sign In, and Access Control

      11:44
    • 21. Content Delivery by CloudFront

      10:02
    • 22. Using Kinesis for Data Streaming

      9:59
    • 23. Cloud Service Design Principles

      8:24
    • 24. Manage Infrastructures by Code

      4:09
    • 25. Cost Control

      3:49
    • 26. Resilience Design Patterns

      4:06
    • 27. AWS Development

      170:16

88 Students

About This Class

In this AWS Development class, you'll learn the fundamental concepts of cloud computing, what Amazon Web Services (AWS) is, and how it can help companies in many different ways. You'll learn about the value proposition of AWS, the core services, pricing models and options, security in the cloud, and much more.

What exactly will you learn?

The class covers the following topics:

  • Course Introduction
  • AWS Platform Overview
  • Console Access and API Key Access
  • Install and Configure Your AWS CLI/SDK
  • Exploring Fundamental AWS services
  • Identity and Access Management with IAM
  • Store and Retrieve Data Anywhere Using S3
  • Mastering Server Fleet with EC2
  • AWS Networking Fundamental - Virtual Private Cloud (VPC)
  • Databases in the Cloud - RDS and DynamoDB
  • AWS Lambda Fundamentals and Benefits
  • Build Your Own Serverless Service
  • Managing Your Serverless Service Using SAM/Serverless Framework
  • Best Practices for Lambda and Serverless Service
  • Container and ECS Overview
  • Orchestrating Your ECS Repositories and Clusters
  • Fargate - Worry-free Solution for Your Container Based Services
  • Best Practices for ECS
  • SQS - The Ultimate Message Queuing Service
  • Cognito - Managed User Sign-Up, Sign-In, and Access Control
  • Content Delivery by CloudFront
  • Using Kinesis for Data Streaming
  • Cloud Service design principles
  • Manage Infrastructures by Code
  • Cost control
  • Resilience Design patterns

You will build a lot of things in this class. For instance, you will launch server instances and learn how to manage them, troubleshoot them, and administer the related AWS products.

I have added a lot of practical exercises to improve memory retention and contextualize the knowledge, which will help you understand the topics much better. You can also apply this knowledge when studying for the AWS certification exams.


Who is this class for?

  • Beginners welcome! This course was designed with non-techies and newcomers to the cloud in mind.
  • No need for previous AWS cloud experience as we'll teach you the foundations of cloud computing.
  • A free-tier AWS account is recommended to follow along with the practice labs - I will show you step by step how to create one.

I am confident that by the end of the course you will have enough knowledge of and experience with AWS that you will no longer call yourself a newcomer to AWS.

Meet Your Teacher


Wayne Ye

Software Engineer, Tech Lead, Geek


Transcripts

1. Course Introduction: Hello and welcome to the AWS Development video course. My name is Wayne Ye. I am a software engineer, tech lead, and also a geek. I have been immersed in software and web development for over 14 years, and I have solid experience in a dozen programming languages such as C#, Python, Ruby, JavaScript, and Golang. I am an AWS Certified Solutions Architect. So now let's take a look at our course structure. We have six sections in total. In section 1, we will have an overview of the AWS platform and set up our AWS account with API key access. I will also demonstrate how to install and configure the AWS command line interface. In section 2, we will have a deep dive into AWS. We will see how to use IAM to manage access to AWS services and resources. We will explore how to use S3 to store and retrieve any amount of data from anywhere. We will deep dive into EC2 and learn how to provision our EC2 servers and provide secure, scalable compute capacity in the cloud. We will also learn how to use RDS and DynamoDB to effectively store our relational and non-relational data, and how we can leverage their reliable performance and great scalability. In section 3, we will learn how to run code on Lambda without provisioning or managing any virtual servers, with zero administration. We will learn how to build a serverless service by leveraging API Gateway, and finally we will go through several best practices for Lambda and serverless. Section 4 describes the container solutions provided by AWS: Elastic Container Service, or ECS, and Fargate. We will learn how to create and manage container repositories and control the running cluster, and we will learn how ECS and Fargate make our development cycle easier and faster. We will also cover several best practices for ECS and Fargate. In section 5, we will learn a few other popular AWS services: SQS, the fully managed message queuing service used to decouple and scale microservices; AWS Cognito, which lets us add user sign-up, sign-in, and access control quickly and easily; and CloudFront, which provides high availability and high performance by distributing the service spatially relative to the end users. In section 6, we will go through a suite of best practices for working with AWS, with topics including cloud service design principles, managing infrastructure by code, cost control, and common resilience patterns. All right, this is the overview of the Mastering AWS Development video course. Let's get started.

2. AWS Platform Overview: In this section, we'll be looking at an overview of the AWS platform: what the advantages of using AWS are and how we can benefit from using it. We will also cover how to set up console and API key access, and how to install and configure the AWS command line tools, so that we are prepared before learning this course. First of all, AWS is right now the number one cloud computing platform. According to the most recent cloud market report from CNBC, AWS is leading the cloud market with a 33 percent market share, followed by the Microsoft Azure platform with a 13 percent share and Google Cloud Platform with a 6 percent share. AWS provides a large set of infrastructure cloud services. This screenshot is taken from the official AWS website, which lists some of the most popular services. As of April 2018, there are 142 services categorized into 19 different types. For example, in the Compute category AWS provides EC2, Lambda, and Elastic Container Service; storage services such as S3, Glacier, and Storage Gateway.
Networking services include Route 53, API Gateway, CloudFront, and Virtual Private Cloud; databases include relational databases such as MySQL, PostgreSQL, and SQL Server, and non-relational databases such as MongoDB, Cassandra, and DynamoDB; and there are in-memory caching services such as Redis and Memcached. AWS keeps releasing new services, so this number is still increasing rapidly. As a worldwide cloud platform, AWS has data centers in many countries. In AWS terminology, they are called regions. A region is a geographic location with a collection of availability zones mapped to physical data centers. This picture shows the AWS global network of regions and edge locations. As of August 2018, AWS has 18 geographic regions and a total of 55 availability zones around the planet. There are many benefits of using AWS. First of all, it provides a large set of cloud services, as we have just discussed, so it should not be challenging at all for you to find an appropriate service according to your demand. AWS services are well documented, with tons of hands-on tutorials and articles on the Internet, so it should be pretty easy for you to use them. The second benefit of using AWS is that many of the services or resources you create will be available in seconds. For example, you can launch a WordPress website online within a few minutes, or you can have a fully featured RESTful API running on API Gateway plus Lambda within seconds. And within only a few minutes, you can have a super powerful NoSQL database running on DynamoDB, which can support one million concurrent queries and can be scaled up ten times whenever requested. As a world-class cloud platform, AWS provides incredible scalability and high performance. Its auto-scaling feature for EC2 instances, containers, and databases gives you full flexibility to customize your triggers for both scaling up and down in a completely automatic fashion. You can also schedule a scaling activity before a special timeframe, for example your company's anniversary sale event or the Christmas holiday season. Nowadays, reliability and high availability are critical to our business, and most AWS services have built-in high availability features. For example, whenever you create an S3 bucket or object, it will be automatically spanned across multiple availability zones to eliminate the possibility of a single point of failure. Another example is the Aurora database, which stores data in six copies across three availability zones at all times. Finally, AWS is cost effective. The billing model is always pay-as-you-go, so you will pay only for what you have used, and the actual cost is accurately calculated down to the second.

3. Console Access and API Key Access: In this video, we are going to create an AWS account so that we can log on to the AWS console and manage our cloud resources. First, we can register an AWS account at this page for free. Once we complete the registration steps and sign in, we see this landing dashboard, which shows the categorized AWS services in the main section. On the top right, we can see a region drop-down menu from which you can choose an appropriate region based on your geographic location. For now, we scroll down and try to find IAM under the Security, Identity and Compliance category. IAM stands for Identity and Access Management. It is the fundamental service for securely managing access and permissions to your AWS services and resources. We will have an IAM deep-dive video in section 2.
Please note that after registering, we are logged in as the root account user, so the first thing we should do is create a new IAM user under this root user, just like what is shown here in the security status - it indicates that we should create individual IAM users. You might ask, why should we do that? Well, the biggest difference between a root user and an IAM user is that the root account has full access to all resources in the account; you cannot use policies within your account to explicitly deny access to the root user. So AWS strongly recommends that you always create IAM users and use group permissions, roles, or policies to explicitly grant appropriate permissions. Okay, so now let's go ahead and switch to the Users tab, and we want to add our first IAM user. In the username field, we want to give it the name admin, and we want to grant it both programmatic access as well as management console access. We can choose an auto-generated password, and we also want to make sure the user must create a new password at next sign-in. In the permissions tab, we want to attach an AdministratorAccess policy to this admin user so it can be our super-powerful admin. We can skip the tags for now. In the review section, we click Create user. All right, so now we have this admin user created, and we can see there is an access key ID shown here and a hidden secret access key. Please be very sure that after you click Show, you store this access key and secret in a secure place; once we close this wizard, we won't be able to retrieve it anymore. Okay, so now we have our AWS account registered and our first IAM user, admin, created. We also obtained a pair of access key and secret key, which will be used in the future. With this, we completed this video.

4. Install and Configure Your AWS CLI SDK: We are going to install the AWS CLI command line tools, and we're going to configure them. In order to install it, we can simply Google "AWS CLI" and follow the installing command hyperlink. Okay, so now we are on the "Installing the AWS Command Line Interface" documentation page. The AWS command line interface is very powerful; we can use it to manipulate almost all AWS resources, for example create and delete EC2 instances, create an S3 bucket and upload an object, et cetera. What we should prepare is a Python 2 or Python 3 runtime, on whatever operating system you have: Windows, Linux, macOS, or Unix. The AWS CLI tool is a standard Python package, so we can install it with this single line: pip install awscli. After installing the AWS CLI, we open our terminal, and our first step is to type aws configure. It will prompt us to enter our access key ID as well as the secret key that we retrieved in our previous video where we created the admin user. For the default region name, the suggested one works for now, and we can change it anytime in the future. JSON as the output format is good. We can also run aws ec2 describe-instances - obviously right now we don't have any EC2 instances. And at any time we can simply type aws help; it will show verbose documentation so that you can browse which commands you can use. Okay, so now we have the AWS CLI installed and successfully configured. With this, we completed the first section of this video course.

5. Exploring Fundamental AWS services: In section 2, we will have a deep dive into AWS.
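Before moving on, the aws configure step from the CLI lesson above can be sanity-checked from code as well. This is a minimal sketch, assuming boto3 is installed and aws configure has already been run; the calls only read back the stored credentials:

```python
import boto3

# boto3 picks up the credentials and default region written by `aws configure`
# (typically ~/.aws/credentials and ~/.aws/config).
session = boto3.Session()
print("Region:", session.region_name)

# STS GetCallerIdentity works with any valid credentials and needs no extra permissions.
identity = session.client("sts").get_caller_identity()
print("Account:", identity["Account"])
print("Caller ARN:", identity["Arn"])

# Equivalent of `aws ec2 describe-instances` from the lesson.
reservations = session.client("ec2").describe_instances()["Reservations"]
print("Instances found:", sum(len(r["Instances"]) for r in reservations))
```

If the credentials or region are wrong, the STS call fails immediately, which makes it a quick smoke test before the later hands-on demos.
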
In this video, we will deep dive into several fundamental AWS services, including IAM, S3, EC2, Virtual Private Cloud, and the databases. In the IAM video, we will go through the key IAM entities, including users and groups, IAM roles, and policies. We will work through several hands-on demos and learn how we can use IAM to manage permissions. In the S3 video, we will cover bucket and object basics, access control, storage classes, S3 versioning, lifecycle, and replication. In the EC2 video, we will learn how to create an instance and access it, control instance network access via security groups, and explain the differences between instance types, AMI images, EBS volumes, auto scaling, et cetera. In the VPC video, we will learn important concepts such as CIDR blocks, subnets and availability zones, network ACLs, NAT gateways, and VPC peering. In the databases video, we will introduce the AWS Aurora database and why it is more favorable than MySQL or PostgreSQL. We will also cover DynamoDB and why it is such a great option for NoSQL storage. Okay, with this, we complete this video.

6. Identity and Access Management with IAM: We are now on the IAM dashboard. The very first thing we should know is that IAM operates globally, and your account resources in all regions are controlled by IAM; we can see that it shows IAM does not require region selection. I'm now going to the Groups dashboard, and let's create a new group. We want to simply name this group DBA, representing our database administrators. In this group, I'm going to attach a policy, say RDS, so that all our DBA engineers will have RDS data full access as well as RDS full access. In the Review tab, we see we have chosen both of these two policies to attach to this DBA group, and we go ahead and create the DBA group. So we have this group available right now, and we can review the policies shown here: RDS full access and RDS data full access. Next we want to create a new user under this group. In the Users tab, click Add user and give the username as, say, DBA John. We want to give this John both programmatic access and console access. This time we want to specify a custom password so that DBA John can log in, and for testing purposes I want to uncheck the must-change-password setting. In the next permissions step, we simply add this user into the DBA group. Okay, review, and we create this user. Okay, we have this DBA John user created, and we can review that DBA John belongs to the DBA group. So if we sign in as this John user, he will only have access to our databases. Okay, so now I'm signed in as this DBA John user, as you can see shown here. Since I have only RDS access, what if I navigate to S3? Let's see what happens. Okay, since this John has only database access, when we click into the S3 dashboard we can see this access denied error. Okay, now let's take a look at roles. In the Roles dashboard, AWS gives us a hint: a role is a secure way to grant permissions to entities that you trust. The typical use cases include: a role can be associated with an IAM user in another account; a role can be assigned to a running EC2 instance that needs to perform actions on other AWS resources; a role can be assigned to an AWS service that needs to act on resources in your account to provide its features; and finally, a role can be associated with a corporate directory that uses identity federation with SAML. So for now we're going to create a role.
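As an aside, the DBA group and user created in the console above can also be scripted. This is a sketch, assuming an admin profile is configured; the policy ARNs are the AWS-managed RDS policies referenced in the lesson, and the user name mirrors the demo:

```python
import boto3

iam = boto3.client("iam")

# Create the DBA group, mirroring the console demo.
iam.create_group(GroupName="DBA")

# Attach the two AWS-managed RDS policies mentioned in the lesson.
for policy_arn in (
    "arn:aws:iam::aws:policy/AmazonRDSFullAccess",
    "arn:aws:iam::aws:policy/AmazonRDSDataFullAccess",
):
    iam.attach_group_policy(GroupName="DBA", PolicyArn=policy_arn)

# Create the user and drop them into the group; console access would be added
# separately with iam.create_login_profile (password handling omitted here).
iam.create_user(UserName="dba-john")
iam.add_user_to_group(GroupName="DBA", UserName="dba-john")
```
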
This role will allow an EC2 instance to call other AWS services on our behalf; we will choose this option for now. In the permissions policies step, we want to give this role simply S3 read-only access, which means that if in the future we create an EC2 instance and that instance attaches this role, it will only have S3 read-only access. I will leave the tags empty for now. For the role name, I want to give it something like ec2-s3-readonly, and I'll go ahead and create this role. So now I have it ready, and it will be used in the near future. Finally, let's take a look at how a policy works. A policy is a permission definition document stored in JSON format. The permissions in policies determine whether a request is allowed or denied. For example, if we search for S3, we can take a look at the S3 read-only access policy we just used. We can switch to the JSON tab, and here we can see that a policy is typically composed of statements with an Effect, Allow or Deny. The Actions field is an array that lists all the allowed operations in this policy; in this concrete example, the policy only allows S3 Get and S3 List. The Resource is given an asterisk, which means any resource. So any principal - a user or role - with this policy attached will only have S3 read-only access. As a comparison, we can take a look at the full access policy. Clearly, it simply says allow S3 anything, which means S3 full access. So let's take a look at some IAM best practices. First of all, we should never use the root user to manage our AWS resources, which we mentioned in our previous video. We should always adhere to the least-permission principle: we should avoid things like simply giving a user or role administrator access, or DynamoDB, S3, or EC2 full access. Likewise, we should grant granular permissions whenever possible. For example, a web application running on an EC2 instance may only need permission to read data from the database and occasionally write, plus read-only access to S3; we should granularly grant exactly these permissions in our policy definition. The next thing is that we should enable multi-factor authentication, shortened as MFA. This is particularly important for root account users. And finally, we should rotate our credentials regularly. With this, we completed this video.

7. Store and Retrieve Data Anywhere Using S3: Okay, so now I am on the S3 dashboard. One thing to note is that S3 operates globally: on the top right corner, if we click the regions section, we cannot choose a region while we are operating on S3. Okay, let's create our first bucket now. For the bucket name, we should enter a globally unique name. For example, if I enter a pretty common name like foobar, it won't work - let's give it a try. Okay, so it won't work; as we can see, it prompts that the bucket name already exists. Let's fix this by entering a very unique name. And for the region - please note that a bucket does belong to a specific region, and we can specify a region here. For now I chose Singapore. In the next configure options step, we say yes, we want to enable versioning - we will talk about this soon - and we can ignore the following options for now. Go ahead and click Next. We keep the recommended permissions for now, which simply means only the owner IAM user can operate on this bucket, and all other users will not be able to access this bucket.
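Returning briefly to the policy walkthrough earlier in this lesson block: the Effect / Action / Resource structure can also be expressed as a customer-managed policy. This is an illustrative sketch only - the policy name is made up, and the AWS-managed AmazonS3ReadOnlyAccess policy already covers this exact case:

```python
import json
import boto3

iam = boto3.client("iam")

# One statement with Effect, Action, and Resource, as described in the lesson:
# only Get* and List* operations are allowed, on any S3 resource ("*").
read_only_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:Get*", "s3:List*"],
            "Resource": "*",
        }
    ],
}

response = iam.create_policy(
    PolicyName="my-s3-read-only",  # hypothetical policy name
    PolicyDocument=json.dumps(read_only_doc),
)
print("Created policy ARN:", response["Policy"]["Arn"])
```
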
Here we can review the options we have chosen for this bucket, and we go ahead and create the bucket. Okay, so now we have our first bucket created, and we can see the dashboard shows we have one bucket in one region, Asia Pacific (Singapore). If we click on this bucket, we can see an overview of the bucket, its properties, permissions, and management; we are going to talk about each of these soon. Okay, if we enter the bucket, we can see that it is empty and we can upload a new object. In my local folder, I have prepared a simple text file; I call it version.txt, and its content is fairly simple - it says 2, indicating a version number. I can simply drag this file onto the bucket and it will prompt me to upload the object, so I continue here to the set permissions step. Yes, we want to make sure the owner has read and write permissions on both the object and the object permissions, and we do not want to grant public access to this object. If your object is private, please do use the recommended way; otherwise, we can demonstrate how to make it public later. Right now we click Next. Here we can see an interesting dashboard that shows the storage class. Basically, a storage class determines how this object will be stored. By default, S3 uses Standard, which means the object will be available in more than three zones. There are also Intelligent-Tiering, Standard-IA, One Zone-IA, and Glacier, and the last one is Reduced Redundancy, which is not recommended. Let's go through them one by one. Standard is the standard tier of S3 and the mostly recommended storage class; it has the best availability and durability. Intelligent-Tiering provides an intelligent way to store data if you're not sure about your access patterns but the data will be long-lived. Standard-IA is for storing data that is long-lived but infrequently accessed; you can choose this storage class to get a good trade-off between cost and availability. They also provide One Zone-IA, which has a lower storage cost compared to Standard-IA. Glacier is another service, for data archiving; you would prefer this option if your data is accessed very infrequently, and it is extremely cheap. The only drawback is that retrieving data from Glacier typically takes around three to five hours. Finally, Reduced Redundancy is actually a good option for easily regenerated data; for example, if your website has some screenshots or thumbnails, you can use this storage class. For now, we just choose Standard and click Next. Okay, looks good - we upload this object. All right, we have this version.txt uploaded. Let's spend a few seconds reviewing the storage classes we have just walked through. This is a global comparison table. Basically, the durability figures are very similar: they all provide eleven nines of durability, except Reduced Redundancy, which provides only four nines of durability. Another big difference is that the Standard storage class provides the best availability, which is four nines; roughly speaking, four nines means less than an hour of downtime a year. Standard-IA, Intelligent-Tiering, and One Zone-IA provide three nines of availability, which means roughly nine hours of downtime a year. Okay, let's begin with another workshop.
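A quick aside before that workshop: the storage class comparison above is just an upload-time parameter. A hedged boto3 sketch, with a placeholder bucket name, shows how an object would land in a specific class:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# Upload version.txt into a chosen storage class; STANDARD is the default,
# and the other values mirror the classes discussed in the lesson.
s3.put_object(
    Bucket=bucket,
    Key="version.txt",
    Body=b"2\n",
    StorageClass="STANDARD_IA",  # or STANDARD, INTELLIGENT_TIERING,
                                 # ONEZONE_IA, GLACIER, REDUCED_REDUNDANCY
)

# Confirm which class the object landed in.
head = s3.head_object(Bucket=bucket, Key="version.txt")
print(head.get("StorageClass", "STANDARD"))  # STANDARD is omitted in the response
```
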
In this workshop I would like to talk about a very useful S3 feature called versioning. Here we can see the version.txt we uploaded seconds ago. What if we update this file and upload it again? We can go to the terminal and edit this file, changing the version from two to four, and save it. This time, instead of drag and drop, I would like to use a terminal command to upload it: we can simply run aws s3 cp version.txt into this bucket. Okay, it says uploaded. If we go back to the console, refresh the page, and click on this file, we can see from the version drop-down that there are two versions: this is the latest version and this is the previous version. If we download the latest version, we can see that the version has been changed to 4. So this is S3 versioning: we can store all versions of a particular object, which is pretty useful. Okay, here is our last workshop, and I would like to introduce an S3 feature called lifecycle management. In the bucket we created, if we go to Management, we can see there is a Lifecycle button and we can add a lifecycle rule. Let's say we name this rule wayne-lifecycle and click Next. A lifecycle rule means we can choose storage class transitions for our objects, for the current version and for previous versions. For the current version, we want to add a transition, and we can choose, say, to transition our Standard objects into Intelligent-Tiering after 30 days. And for our previous versions, which we probably won't be using too much, we can choose, say, One Zone-IA after 30 days. With these settings, we will have a pretty good trade-off between cost and object accessibility. If we click Next, we can configure the expiration as well. For the current version, we do not want to expire. For previous versions, we can permanently delete them after a given number of days. We also check clean up expired object delete markers and incomplete multipart uploads. Click Next, review everything, and yes, we want to save this lifecycle rule. So now we have this lifecycle rule activated on our bucket. With this, we complete this video on S3.

8. Mastering Server Fleet with EC2: Let's first take a look at the fundamental components of AWS EC2. First of all, let's take a look at the Amazon Machine Image, commonly known as AMI. AMIs are preconfigured templates for your instances, bundling the operating system and the additional software packages you need to run your web services or applications. There is also the Elastic Block Store, known as EBS. These are persistent block storage volumes that can be attached to your EC2 instance or detached at any time when requested. The third important component is called a security group. It is an instance-level firewall where you can specify which protocols, ports, and IP ranges can access your instance. AWS EC2 provides a lot of instance types for you to choose from. For example, they have the T level, which provides a baseline level of performance with reasonable CPU and memory; it is good for starting a project or for ad hoc servers. They have the M instance type, which is general purpose and typically used for a wide variety of situations. They have the C level instance types - C stands for compute; if you have compute-intensive workloads, you should choose this one, and they are relatively cost effective. EC2 also has the R level of instances - R stands for memory.
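Circling back to the S3 workshops above: the versioning and lifecycle settings clicked through there can also be applied with boto3. This is a sketch with a placeholder bucket name, using the same transition periods as the demo:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# Enable versioning, as done in the bucket-creation wizard.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Lifecycle rule: current versions to Intelligent-Tiering after 30 days,
# previous (noncurrent) versions to One Zone-IA after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "wayne-lifecycle",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Transitions": [
                    {"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}
                ],
                "NoncurrentVersionTransitions": [
                    {"NoncurrentDays": 30, "StorageClass": "ONEZONE_IA"}
                ],
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```
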
If you have memory-intensive applications, you can choose this one. They also have the X level - X stands for extreme - which is for large-scale, enterprise-class, in-memory applications. And finally, they have G for graphics, if you have graphics-intensive applications. The link below is the official documentation listing all the instance types. Choosing an appropriate instance type is very important for your services or web applications to run with decent performance at a reasonable cost. Now let's take a look at the EC2 pricing model. AWS provides basically four types of pricing models for you to choose from, which is very flexible. The first one is On-Demand, which is the most popular choice for most users. It is very good for short-term, spiky, or unpredictable workloads, and the actual cost is charged by the hour or by the second. The second one is called Spot Instances, which allows you to request spare computing capacity. It is very good for services that have flexible start and end times, or for interruptible applications that you want to run at very low cost. The third one is called Reserved Instances. They are good for long-running virtual machines with a significant discount, that is, for steady, uninterruptible services. The last one is called Dedicated Hosts. These are physical EC2 machines; if you have server-bound software licenses which require you to run them on physical machines, such as Windows Server, SQL Server, or SUSE Linux Enterprise Server, you should choose this one. Okay, let's see some hands-on demos. I'm currently on the EC2 dashboard, so go ahead and click Launch Instance. I'm going to choose the Amazon Linux AMI based on HVM virtualization; this is the most recent release of the standard free-tier-eligible AMI. I click Next, and I'm going to choose t2.micro for now. In the Configure Instance Details tab, I will leave everything as default except the auto-assign public IP setting: yes, I do want a public IP address so that I can access the instance more easily later. In the advanced details, there is a user data field where I want to put something. I'm going to paste a bash script here, which will install the NGINX server on this EC2 machine and start it as soon as it is installed. This user data simply means that the very first time this EC2 instance boots, it will run the given user data, which is typically a bash script. Let's go ahead and click Next: Add Storage - the default eight gigabytes is okay. In the Add Tags wizard, we give it a name, say my-web-server. Okay, so now we are on the Configure Security Group wizard. We want to create a new security group, and for SSH, which is how we will connect to this instance, go ahead and restrict the source to my IP so that after it has booted, I can SSH onto it. Okay, now we can review everything, and we are okay - let's click Launch. We go ahead and create a new key pair, my-ssh, and download this key pair. As we have just downloaded the PEM key, we want to make some changes to it: I want to move it to my home directory and rename it my-ssh-key.pem. And the next important thing: I should change its file mode to read only, so I run chmod 400 on this PEM file. Okay, now if we go to the Instances tab, we can see this instance has been created and it has a public IP address. Let's go ahead and SSH onto it. We go back to our terminal and type ssh -i
with my PEM key and ec2-user at the public IP address of the machine I just created. We are prompted and answer yes. Okay, so now we are logged in. Since we specified user data to install NGINX and have it run, the NGINX process should be running; let's check with ps aux. In order to view the web page hosted by NGINX, we should do one more thing. Let's go back to our instance and the security group we just created. If we view its inbound rules, we can see that since we only configured port 22, the SSH port, and only for my IP address, port 80 hasn't been exposed yet. So let's go into that security group, edit the inbound rules, and add a rule, say HTTP, and the source should be anywhere, which means every place in the world can access this port. Yes, we're sure about it. Okay, now if we copy this IP address and navigate to it, we can see the official NGINX welcome page. In the cloud computing era, most of the time we want to gain automatic scale-up and scale-down capability - some people call it elastic cloud - and EC2 supports this perfectly. We can scroll down and go to the Auto Scaling section, which uses Auto Scaling groups. Let's go ahead and create an Auto Scaling group. We first create a new launch configuration; this is very similar to the create-a-new-instance steps. We continue to choose t2.micro, and we configure a name for the launch configuration. We leave the storage as-is for now, and for the security group we can pick the one we created just now. Yes, create this launch configuration and reuse my PEM key. So here, since we have already created a launch configuration, it's time to configure the Auto Scaling group. First we name it, say my-asg, which stands for my Auto Scaling group. It uses the launch configuration, starts with one instance, and we want to spread it across all three subnets. We go ahead and configure scaling policies. Here, we would like to use scaling policies to adjust the capacity of this group. We can name the policy, and for the metric type we specify average CPU utilization; we can give it, say, 70 percent. The instances-need warm-up value simply means the amount of time your instance normally needs to warm up; we can give it, say, 300 seconds. The disable scale-in option controls whether, when your traffic or CPU utilization goes down for a certain period, we want to scale in back to the normal capacity - most of the time we do want this, so I will not check it. Let's go ahead to configure notifications; we will leave them for now. Review - now we have this confirmation, and we go ahead and create our Auto Scaling group. All right, with this Auto Scaling group created and working in action, we expect our instances to be intelligently scaled up and down whenever the CPU usage changes. So now let's take a look at some EC2 best practices. First of all, we should carefully open up only certain protocols or ports to an IP range under control; we should never open up any port or protocol to the whole world. We should always use IAM roles to control instance permissions; the least-permission principle also applies here. We use EBS volumes as our storage; they can be attached or detached, and we should choose an appropriate EBS volume type - for example, SSD for better performance, or HDD for larger storage at a lower price.
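For reference, the launch-wizard flow from earlier in this lesson maps fairly directly onto the EC2 API. This is a boto3 sketch with placeholder IDs - the AMI, key pair, and security group would be your own:

```python
import boto3

ec2 = boto3.client("ec2")

# Same idea as the wizard's user data: install and start NGINX on first boot.
user_data = """#!/bin/bash
yum install -y nginx
service nginx start
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",             # placeholder Amazon Linux AMI ID
    InstanceType="t2.micro",
    KeyName="my-ssh-key",                        # the key pair downloaded in the demo
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group ID
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "my-web-server"}],
    }],
)
print("Launched:", response["Instances"][0]["InstanceId"])
```
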
We should use resource tags to tag our EC2 instances for easier management; for example, we can tag our servers as web-server-1, streaming-server-2, etc. We should also regularly back up our instances into AMI images so that we can restore them, if disaster happens, with minimal data loss. And finally, very importantly, we do want auto scaling most of the time; use it wisely and we gain a very good trade-off between performance and cost. With this, we complete this video.

9. AWS Networking Fundamental Virtual Private Cloud (VPC): In a nutshell, a VPC is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the cloud. A VPC contains the following important components. A subnet is a range of IP addresses in your VPC; a subnet can be public or private. Resources inside a public subnet have an Internet connection and can be accessed publicly. For example, we will put web servers in public subnets so that they can be visited via HTTP or HTTPS, while a private subnet is used to protect resources that should not be accessed publicly - for example, we usually create our database clusters inside a private subnet. Another component is called a route table. A route table contains a set of rules, called routes, that are used to determine where network traffic is directed. The next component is called an Internet gateway. An Internet gateway operates at the VPC level; it provides the Internet connection between the VPC and the instances inside it. An Internet gateway is designed to be redundant, highly available, and scalable. Next is the NAT gateway. NAT stands for Network Address Translation. A NAT gateway enables instances in a private subnet to connect to the Internet or other AWS resources, but prevents the Internet from initiating a connection with those instances. Please note that a NAT gateway must reside in a public subnet. The final component is the network access control list, shortened as network ACL. An ACL is like a firewall for associated subnets, controlling both inbound and outbound traffic at the subnet level. Let's take a look at a concrete VPC example. This picture shows the topological structure of the default VPC within an AWS region. The VPC CIDR block is 172.31.0.0/16. There are two subnets inside this VPC, each of them residing in one availability zone. If you remember, availability zones are physically separated and isolated locations within an AWS region. Please note that a subnet must sit inside one availability zone; it can never span multiple availability zones. Both of these two subnets are public subnets, because in their main route table all outgoing traffic, except traffic within the VPC CIDR, goes through the Internet gateway, highlighted in green. Whenever instances inside them initiate an outbound request, the request will be routed to the Internet gateway right here. And we can see that if the destination IP address belongs to the VPC CIDR block, the target will be local; this ensures that resources inside these two subnets can access other resources in other subnets. Finally, we can see there are two EC2 instances inside these two subnets. Both of these EC2 instances have public and private IP addresses, hence they can be accessed from outside of the VPC as well as inside the VPC. In the Create VPC wizard, we can name our VPC my-vpc, and for the CIDR block we can give it 10.0.0.0/16. And that's it - we're going to create it.
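For orientation, the networking pieces assembled over the next few steps - the VPC, two subnets, an Internet gateway, and a default route - look roughly like this in boto3. This is a sketch using the same CIDR blocks as the demo, not the console walkthrough itself:

```python
import boto3

ec2 = boto3.client("ec2")

# VPC with the CIDR block from the demo.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Two subnets: an app subnet and a data subnet.
app_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/23")["Subnet"]["SubnetId"]
data_subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.2.0/23")["Subnet"]["SubnetId"]

# Internet gateway, attached to the VPC.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Default route (0.0.0.0/0) to the Internet gateway in the VPC's main route table.
route_tables = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
)["RouteTables"]
ec2.create_route(
    RouteTableId=route_tables[0]["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
```
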
All right, we have our VPC created. Let's close this, and we can see my-vpc listed here with its IPv4 CIDR block. Now let's switch to the Subnets dashboard. We see the three subnets associated with the default VPC. Now we go ahead and create our own subnets. For the first subnet, I would like to give it a name, say app-subnet. Let's set the VPC to my-vpc. We can see the CIDR block; we choose an availability zone and give the subnet a CIDR block like 10.0.0.0/23. Let's go ahead and create it. Okay, and then let's create another subnet. We can name it data-subnet, also belonging to the VPC we just created, and for its IPv4 CIDR this time we can give it 10.0.2.0/23. Go ahead and create it. Okay, so now we have two subnets created. Our next step is to create an Internet gateway; without an Internet gateway, we won't have Internet connectivity. So let's go ahead and create an IGW, name it my-igw, and create it. Okay, pretty straightforward. Right now it is in the detached state, so let's attach it to our VPC. We can see our VPC here, and attach. Great, so now our VPC has an Internet gateway. Next we should go to our route tables. One thing to notice: this is the default route table, and if we go to the Subnet Associations tab, we can see there are no subnet associations. So let's go back to the Subnets module. Now if we select one of the subnets we just created and go to its Route Table section, we can see it is associated with this route table. Let's click into it. Now we can see this route table in the Route Tables section, and one thing we're going to do is switch to Routes and click Edit. We're going to add a route for 0.0.0.0/0, and the target goes to an Internet gateway - we are prompted with the IGW we just created. Let's go ahead and save the routes. Okay, so with this, both of our subnets associated with this route table will have Internet access. We now have our VPC and subnets created, so let's create a new EC2 instance to test our VPC configuration. Go ahead and quickly create an EC2 instance. In the Network section, yes, we want to put this EC2 instance into our VPC, my-vpc, and for the subnet we want it located in app-subnet, with auto-assign public IP enabled. Then in the advanced details, let's give it user data so that we can have NGINX automatically installed and started. Okay, let's create a new security group for this, say my-sg, with SSH limited to my IP, and we're going to open HTTP to anywhere. Okay, we'll go ahead and launch this instance. All right, our instance is up and running. Let's copy its public IP address, and then we're going to SSH onto it: ssh -i, the key, ec2-user at this IP address. All right, we verify the NGINX process is running - it is up and running. So if we navigate to this IP address, yes, we see the NGINX default landing page. Great. So now let's go ahead and check one thing. Here we have the network ACL, which is the one controlling the VPC we just created. Let's do one thing: if we switch to the inbound rules, we can see that by default everything is allowed. Let's change this. We edit the inbound rules, change rule 100 to, say, HTTP, and allow it from anywhere. And let's go ahead and add another rule, and this rule is behind this one.
We can say it is rule 101 and it is also HTTP, and this time the source is going to be my IP address, which is this one. And instead of allowing it, I want to deny it, and I click Save. So let's review this: rule 100 is going to allow HTTP connections from anywhere, and rule 101 denies HTTP connections only from my IP address. So let's take a guess at what will happen if we go to the test page which is open here. We can even refresh, and we can see it is loading normally. So please remember, a network ACL evaluates rules in increasing rule-number order. AWS recommends that we add rules with increasing numbers: the larger the rule number is, the later it is evaluated. In this case, since we opened up port 80 publicly in rule 100, the ACL reads this rule first and allows the traffic, so there is no chance to reach the deny rule. What if we change the inbound rules like this: we change the rule number to 99, so rule 99 is going to deny my IP address. Okay, saved. This rule now becomes the very first one, and if we go to the test page and refresh it, we can see that I am no longer able to access this web page, because I got denied by this very first rule, number 99. Now let's review some important concepts. In these slides, we're going to demonstrate the difference between a security group and a network ACL. First of all, a security group can only have allow rules, while in a network ACL we can define both allow and deny rules, just as we have demonstrated. The second difference is that a security group is stateful, which means any change applied to an incoming rule is automatically applied to the outgoing rule; for example, if you allow incoming port 80, outgoing port 80 is automatically opened. A network ACL is stateless, which means a change applied to an incoming rule will not be applied to the outgoing rule; for example, if you allow incoming port 80, you will also need to apply this rule for outgoing traffic. The third difference is that a security group operates at the instance level - it is just like a firewall for EC2 instances - while a network ACL operates at the subnet level, just like a firewall for the subnet. And finally, for a security group all the rules are applied as long as you define them, whereas network ACL rules are applied in their numbered order. Let's take a look at what VPC peering is. A VPC is an isolated and private networking environment with its own gateways to the Internet. If you are managing multiple VPCs in one region or in multiple regions, in some use cases you may want to enable communication between them; VPC peering is the solution for this. In a nutshell, it is a network connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. In this picture, we can see VPC A and VPC B are peered, and they can communicate with each other.

10. Databases in the Cloud RDS and DynamoDB: AWS RDS stands for Relational Database Service. It provides a variety of database engines for you to choose from, including the most popular relational databases like MySQL, PostgreSQL, Oracle, MariaDB, and Microsoft SQL Server. AWS also offers a hosted relational database, Aurora, which was released in October 2014. It is a MySQL- and PostgreSQL-compatible database, which simply means that if your applications are running on MySQL or Postgres, you can seamlessly migrate to Aurora in a pain-free step.
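Stepping back briefly to the network ACL demo in the VPC lesson above: the rule-number ordering it relies on can be reproduced with boto3. This is a sketch with a placeholder ACL ID and client IP:

```python
import boto3

ec2 = boto3.client("ec2")
acl_id = "acl-0123456789abcdef0"  # placeholder network ACL ID
my_ip = "203.0.113.10/32"         # placeholder client IP

# Rule 100: allow HTTP (port 80) from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id,
    RuleNumber=100,
    Protocol="6",                 # TCP
    RuleAction="allow",
    Egress=False,                 # inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 80, "To": 80},
)

# Rule 99: deny HTTP from my IP. Because 99 < 100, it is evaluated first,
# so requests from my_ip are blocked even though rule 100 allows everyone.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id,
    RuleNumber=99,
    Protocol="6",
    RuleAction="deny",
    Egress=False,
    CidrBlock=my_ip,
    PortRange={"From": 80, "To": 80},
)
```
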
You gain a highly available, scalable, and reliable database in the cloud, and the cost could be just 10 percent of your original one. We will have a hands-on demo on Aurora very soon. The Aurora database provides great availability: when you insert data into an Aurora DB, it automatically maintains six copies of your data across three availability zones. So if a failure occurs in one AZ, Aurora will automatically attempt to recover your database in a healthy AZ with no data loss. Scalability for Aurora is very powerful and easy. You can pick an appropriate Aurora instance size, from small instances up to 16xlarge, and you can define an auto-scaling policy for your Aurora cluster. The scale-up and scale-down metric can be either CPU utilization or average connection count - it is that simple. Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora, MySQL-compatible edition. It can automatically start up, shut down, and scale up or down based on your application's needs. You can run your database in the cloud without managing any database instances, and it is a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads. Okay, so now we are on the AWS RDS dashboard, and in this hands-on demo let's go ahead and create a database. In the engine selection, we're going to choose Amazon Aurora, and we're going to choose the MySQL 5.6-compatible edition, because this is the one compatible with Aurora Serverless, which we are going to try. In the DB details, we will choose Serverless, and we give our DB cluster identifier as my-aurora. We can specify our master username as, say, wayne, and we'll give it a master password. Okay, we go ahead and click Next. We leave this as default, residing in the default VPC, and yes, we're going to create a new VPC security group. Let's uncheck this and create the Aurora Serverless database. All right, now we have our serverless database created and we can see its details here. The endpoint is this; let's copy it for future use. And a security group was created by the launch wizard. In order to connect to this Aurora Serverless cluster, we should do two more things. The very first is to open this security group and enable access on the MySQL port. So now we can see this security group; let's go to Inbound and edit it. We're going to add a rule, say MySQL/Aurora, and in the custom source section we're going to fill in the security group of our EC2 - we're going to pick the one we had already created. Okay, the second thing we should do is have a bastion host. The bastion host must reside in exactly the same VPC as the Aurora Serverless cluster. I have already created a bastion host there; here we have its public DNS, so let's copy it. Now I'm on the terminal, and let me SSH onto the bastion: ec2-user at the bastion's IP address. Okay, now I'm logged onto this bastion host, so I can try to connect to the Aurora Serverless cluster. I can simply run mysql with the username, the host, which should be the endpoint we copied, and our password. Okay, so now we are connected to this Aurora database, which is great. Okay, let's get to DynamoDB. NoSQL databases have become a very popular mainstream choice. They are ideal solutions for storing large amounts of non-relational data where performance and scalability are imperative. Amazon DynamoDB is a fully managed NoSQL database service. It supports key-value and document data structures.
It provides industry-leading performance and scalability: DynamoDB can handle more than 10 trillion requests per day and supports peaks of more than 20 million requests per second. Famous users include Lyft, Airbnb, Redfin, Nike, and Capital One, among others. There are several core components in DynamoDB. The first one is called a table, which is a collection of data similar to a table in a relational DB. For example, we can store a list of people in a People table, and we store a collection of movies in a Movies table. The next one is called an item. Each table contains zero or more items, very similar to rows in a relational DB. An item is a group of attributes that is uniquely identifiable among all the other items. In the Movies table, for example, an item would be a single movie; in the People table, each item represents a person. The last core component is the attribute. Each item is composed of one or more attributes. An attribute is a fundamental data element, something that does not need to be broken down any further. For example, an item in our People table contains attributes like person ID, last name, first name, and so on. There are several key types in a DynamoDB table. The very first one is the partition key. It is very similar to a primary key in a relational database and is composed of one attribute, known as the partition key. DynamoDB uses the partition key's value as an input to an internal hash function, and the output from the hash function determines the physical location in which the item will be stored. In a table with only a partition key, no two items can have the same partition key value. For example, the person ID could be the partition key for a People table, or the vehicle identification number could be the partition key for a Cars table. The second key type is called partition key and sort key. It is very similar to a composite primary key in a relational database; in DynamoDB it is composed of two attributes, a partition key and a sort key. DynamoDB uses the partition key value as an input to an internal hash function, and the output from the hash function determines the physical storage in which the item will be stored. All items with the same partition key value are stored together, in sorted order by sort key value. In a table that has both a partition key and a sort key, it is possible for two items to have the same partition key value; however, those two items must have different sort key values. The last one is called a secondary index. Within a DynamoDB table, there can be one or more secondary indexes that let you query the data in the table using an alternative key. DynamoDB supports two kinds of secondary indexes: a global secondary index is an index with a partition key and sort key that can be different from those on the table, while a local secondary index is an index that has the same partition key as the table but a different sort key. Let's have a hands-on demo on DynamoDB. I'm currently on the DynamoDB dashboard, so we can go ahead and create a table. I'm going to name this table Movies as an example, and for the partition key I would like to give it year, and the year's type will be number - for example 1990, 1995, et cetera. We're going to pick year as the partition key so that we can query the movies based on the year they were produced, so it definitely can be duplicated. Yes, so we need a sort key as well: the sort key will be the title of the movie, and its data type will be string.
Let's go ahead and create this table. Okay, so now we have this Movies table created and we can see its overview here. If we go to the Items tab, obviously there is no item in the table yet, so let's go ahead and create an item. For the year we enter 2003, and the title is, say, some great movie. Okay, so we can do a scan on this entire table, and we can see the item was created here. If you are a developer or DBA, it is great that you can also play with DynamoDB locally, and here is a great way to do that: if you Google for the DynamoDB Docker image, you will be redirected to the official Docker Hub page. What you can do is docker pull this dynamodb-local image and simply run your Docker container with this command. Here you can see I'm now on a terminal and I have a local DynamoDB up and running, listening on port 8000. Okay, here I have a Python script prepared, and I'm going to run this Python script to create a table exactly like what we had done just now. We're going to name this table Movies, and it has a hash key, the partition key, called year, and title as the sort key. It uses the official AWS boto3 library, and we can see the endpoint URL is my localhost. If we go ahead and run the script, we can see this table was created and its status is active. In this moviedata.json, we have prepared some sample data that we're going to insert into the table we just created. For example, this record has a year, a title, and some attributes we call info; this is simple JSON, and the info contains many attributes for this particular record. Okay, so this script is going to connect to our localhost, get this table, open this particular JSON file, and insert the data into the Movies table one by one. Let's run this script, and we can see those movies being added into the Movies table. All right, we have all the movie data inserted into our table. Okay, in this query-data Python file, we're going to query against our Movies table, and we want all the movies from the year 1985. So we call table.query, and the key condition expression will be year equals 1985. If we run this script, we can see all the movies produced in the year 1985 printed out. With this, we complete this video.

11. AWS Lambda Fundamentals and Benefits: Hello, and welcome to the section on mastering serverless. In the first video of this section, we will learn the AWS Lambda fundamentals and the benefits of using it. AWS Lambda is a revolutionary service which was introduced in November 2014 at the AWS re:Invent conference. The point of releasing Lambda was that, compared to EC2, Lambda lets you run code without provisioning or managing servers. Developers just upload their code, and Lambda takes care of everything required to run and scale the code with high availability. This dramatically reduces the effort and time cost of building and managing cloud services. The benefit of serverless is that if there are no invocations against your Lambda function, no computation resource is consumed and you won't get charged. You don't need to provision or administer web servers, and you won't need to worry about installing libraries, patching operating systems, et cetera. All you need to do is focus on your business requirements, write the code, upload it to Lambda, and have it serve your business. Lambda is highly available.
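As a reference for the local-DynamoDB scripts described in the demo above, here is a condensed boto3 sketch. It assumes the local Docker container from the lesson is listening on port 8000 and that some credentials are configured; the item data is purely illustrative:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Point boto3 at the local DynamoDB container from the lesson.
dynamodb = boto3.resource(
    "dynamodb", endpoint_url="http://localhost:8000", region_name="us-east-1"
)

# Create the Movies table: year is the partition (hash) key, title the sort key.
table = dynamodb.create_table(
    TableName="Movies",
    KeySchema=[
        {"AttributeName": "year", "KeyType": "HASH"},
        {"AttributeName": "title", "KeyType": "RANGE"},
    ],
    AttributeDefinitions=[
        {"AttributeName": "year", "AttributeType": "N"},
        {"AttributeName": "title", "AttributeType": "S"},
    ],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
table.wait_until_exists()

# Insert one placeholder item, then query all movies from 1985 as in the demo.
table.put_item(Item={"year": 1985, "title": "Some 1985 Movie", "info": {"rating": 8}})
response = table.query(KeyConditionExpression=Key("year").eq(1985))
for movie in response["Items"]:
    print(movie["year"], movie["title"])
```
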
You can deploy your Lambda functions into multiple availability zones, so even if one or more AZs go down, your Lambda function will still be up and serving. Lambda comes with a built-in scale-up and scale-down mechanism: as traffic rises or falls, Lambda scales precisely with the size of the workload.

Lambda is low cost and comes with a free tier. AWS charges based on the number of requests served and the compute time needed to run your code, metered in increments of 100 milliseconds. If your Lambda function is not invoked, you will not get charged. Even sweeter, the first one million requests each month are completely free; AWS charges for each additional million requests. Lambda is also pretty versatile: it supports a broad set of languages including Node.js, Python, Ruby, Go, Java, and C#. Lambda is great for building simple web applications, real-time file processing, mobile backends, et cetera. Lambda is also the vital server-side support behind the popular Alexa virtual assistant devices. With this, we completed this video.

12. Build Your Own Serverless Service: Okay, now we're on the AWS Lambda console. Let's go ahead and create a Lambda function. We choose the "Author from scratch" option and create our own hello-world Lambda. We name the Lambda my-lambda-beginner and choose Node.js 8.10 as the runtime. You can also pick other familiar runtimes, for example .NET Core, Go, Java, Python 2 or 3, or Ruby. In the role section, we are going to create a new role; let's name it my-lambda-beginner-role. In the policy templates, let's add an S3 permission to it for now, so we choose the "Amazon S3 object read-only permissions" template. Let's go ahead and create this function.

Now we have our Lambda function created. The very first thing we do is scroll down to the code section, where we can see the sample code generated for the runtime. Let's modify it a little bit: we only want to do one thing, which is to log to the console whatever event is passed to our Lambda handler (a Python sketch of such a handler follows below).
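As a quick aside, that hello-world handler really is just "log the event". The console demo uses Node.js, but a Python equivalent would look roughly like the sketch below; the S3-specific fields only matter once we add the S3 trigger described next, and the return value shown is simply a conventional placeholder.

import json

def handler(event, context):
    # Log the raw event exactly as Lambda passes it in, like the hello-world demo does.
    print(json.dumps(event))

    # If the event came from an S3 trigger (configured later in this lesson),
    # each record carries the bucket and object key that fired it.
    for record in event.get("Records", []):
        if "s3" in record:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            print("S3 event {} on s3://{}/{}".format(record.get("eventName"), bucket, key))

    return {"statusCode": 200, "body": "ok"}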
Before we actually run this Lambda, let's spend a little time going through the entire Lambda page. Let's begin with the Designer section. Here we can add triggers to the Lambda. Essentially, Lambda triggers are specific events that cause the Lambda function to be invoked. For example, the trigger can be an API Gateway: when it receives a network request, it passes the request information to the Lambda so that the Lambda can process the request and return an appropriate response; we will have a hands-on demo of that shortly. We can also configure S3 as a Lambda trigger. The supported events include a new object being uploaded, a new version of an object being overwritten, or an object being deleted, and your Lambda function will receive these S3 events. Another example trigger is DynamoDB: when enabled, DynamoDB streams capture information about every modification to data items in a table, and your Lambda function is invoked with the data of the concrete modification event. This can be pretty useful for further data processing or dispatching, and it is also great for backing up expired DynamoDB items into other storage services such as S3 Glacier.

In the Function code area, we can edit the code inline, just as we did, or choose to upload a zip file or a file from Amazon S3. We can also change the runtime for this Lambda here. In the Handler field, we can specify the Lambda's entry point; in this example it is index.handler, where the file name is index and the exported function name is handler. The environment variables work just like operating-system-level environment variables: you can specify keys and values for your Lambda so that, during execution, your function can read them.

In the Execution role section, we select an appropriate role for our Lambda, which decides what permissions the Lambda has, for example whether it can access S3, DynamoDB, API Gateway, RDS, et cetera. In Basic settings, we specify the memory size for our Lambda. Generally speaking, the more memory you give a Lambda, the more CPU power you get; for example, if you allocate 256 megabytes of memory, your Lambda function receives twice the CPU share it would get with only 128 megabytes. We should choose an appropriate memory size for our Lambda function to achieve decent performance as well as reasonable cost. In the Timeout section, we specify the maximum execution time for our Lambda; in certain scenarios you may want to increase this value to one or two minutes. The maximum timeout Lambda supports is five minutes. In the Network section, you can specify whether your Lambda should run outside a VPC or inside a given VPC. Unless you really need your Lambda to access VPC-internal resources, AWS suggests running your Lambda outside of a VPC. In the Debugging and error handling section, we can specify a dead-letter queue resource for our Lambda, which can be an Amazon SNS topic or an SQS queue, so that whenever our Lambda encounters a failure, the Lambda runtime logs the specific error message and information into that resource and you can inspect those failures later and analyze the runtime log. In the Concurrency section, we can configure how many concurrent instances of this Lambda can execute at any time. By default the Lambda runtime does not impose a per-function limit, but you can set the reserved concurrency to, for example, 10, which means your Lambda can have at most 10 concurrent instances; if more invocation requests come in, AWS will throttle the requests and refuse to invoke your Lambda further.

All right, let's go ahead and save our function. What we want to do now is go to the S3 console and upload an object. Now on the Amazon S3 console, let's get into this bucket and upload an object. All right, the object was uploaded. If we go back to our Lambda console, we switch to the Monitoring tab. CloudWatch captures all the runtime logs for our Lambda, so we can click "View logs in CloudWatch". We can see the log group, and if we get into it, we can see the record printed: the event was logged with its timestamp, and the event name is S3 ObjectCreated:Put for the object we uploaded. That's it.

Okay, now we're going back to the Lambda console to create another kind of Lambda function. This time we want to choose a blueprint, so let's search for the keyword "microservice". We choose the "microservice-http-endpoint-python3" blueprint and click Configure.
This time, let's name our function my-lambda-api-func. For the role, we can simply reuse the role we created just now: choose "Use an existing role" and pick my-lambda-beginner-role. In the API Gateway trigger, we create a new API. The security configuration determines whether our API is open to the world, protected by AWS IAM, or open but protected with an API key; we pick the last one. We are comfortable with the generated API name, and the default deployment stage is fine.

Here is the code generated by the blueprint (a simplified sketch of it appears at the end of this video for reference). It simply imports the standard boto3 library for working with the various AWS services. The respond method defines the response JSON message, and the lambda_handler main entry point performs the appropriate operation according to the HTTP method: if it receives a GET, it does a DynamoDB table scan and returns the data, and if it receives a POST, it puts an item into DynamoDB using the POSTed body. We want one small modification in the scan: instead of scanning the entire table, we want to add a limit. Let's go ahead and create this function.

All right, we have the Lambda API function created. The first thing we do is go to the code section and make that modification: when we receive a GET request, instead of doing a full table scan we add a Limit of 10, which means it returns at most ten records. Let's save it. The second, very important thing concerns the execution role. Since we assigned my-lambda-beginner-role to this Lambda function and the function will access DynamoDB, we should go to the IAM console and grant DynamoDB access to that role. On the IAM Roles page, click the role; we can see its current permissions are only the basic execution and S3 read policies. Let's attach a policy: in the attach-permissions page we search for DynamoDB and attach DynamoDB full access, because we want to query data from DynamoDB as well as insert and update data. With that policy attached, our Lambda function now has full access to DynamoDB.

Right now we're on the Amazon API Gateway page, and we can see the my-lambda-api-func API Gateway that was created. If we click on the ANY method, we can conduct a test. We try an HTTP GET operation first, and in the query strings we specify TableName=movies; if you remember, we created this DynamoDB table in the previous section. Let's run the test. Great, we can see the items in the movies table returned by the scan request.

Next, we're going to demonstrate how to invoke this API Gateway through the curl command. Click the Stages section and select the default stage, and we can see the invoke URL; let's copy it. If we go back to the Lambda dashboard and select the API Gateway trigger, the API key is displayed there, and we copy it as well. What we want to do is use curl to insert a sample JSON file, which contains a single sample movie item, specifying the table name movies. Our final curl command looks like this: we fire an HTTP POST request, the content type is application/json, we set the x-api-key header to the key we just copied from the Lambda console, we upload the movie JSON as the body, and finally we point it at our API endpoint. If we call it, we get back a response message indicating a successful 200 request. With this, we completed this video.
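For reference, what that blueprint handler boils down to is roughly the sketch below. This is a simplified reconstruction, not the exact generated code: the real blueprint supports more HTTP methods and reads its parameters from the request, while here the Limit of 10 and the GET/POST handling simply mirror what we configured in the demo.

import json
import boto3

# Low-level DynamoDB client, as in the blueprint; it returns JSON-serializable values.
dynamo = boto3.client("dynamodb")

def respond(err, res=None):
    # Shape the API Gateway proxy response: 400 on error, 200 otherwise.
    return {
        "statusCode": "400" if err else "200",
        "body": str(err) if err else json.dumps(res),
        "headers": {"Content-Type": "application/json"},
    }

def lambda_handler(event, context):
    operation = event["httpMethod"]
    if operation == "GET":
        # TableName comes from the query string (e.g. ?TableName=movies);
        # Limit=10 is the modification we made so we never scan the whole table.
        params = event.get("queryStringParameters") or {}
        return respond(None, dynamo.scan(Limit=10, **params))
    if operation == "POST":
        # The POST body carries the put_item parameters, including the item itself.
        return respond(None, dynamo.put_item(**json.loads(event["body"])))
    return respond(ValueError("Unsupported method '{}'".format(operation)))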
13. Managing Your Serverless Service Using SAM Serverless Framework: Before our demo, we should have the following prerequisites for the Serverless Framework. First, we should have the AWS CLI installed and configured, which we already did in a previous section. We need Docker installed in order to build the Lambda layer (we will describe that shortly), we need Node.js and npm installed, and finally we need the Serverless Framework itself installed.

Now I'm on the serverless.com website, following the "install open source" instructions. It is fairly simple: we can run a single command to install the Serverless Framework, so I go to my terminal and install it. All right, now I have the Serverless Framework installed, version 1.38.0, and the serverless command is also available as sls; if I run sls -v, I get that exact version number.

Next we navigate to a GitHub repo containing a serverless application example, which uses a GeoIP library to display your physical location in the browser. I clone the repo and go into it. We can see there is a build shell script here, and we're going to run it. What the build script does is pull a Docker image, build the layer locally, and upload it to AWS Lambda as a layer. A Lambda layer is a zip archive that contains libraries and dependencies; with layers, we can use libraries in our function without needing to include them in the deployment package, which is pretty handy because it keeps the deployment package small.

All right, the build command has finished and we have our local Lambda layer ready. We deploy it into our Lambda runtime by running sls deploy. As we can see, it packages the service and creates an AWS CloudFormation stack, and the CloudFormation stack deploys the Lambda layer. This layer contains the GeoIP library database and the mandatory runtime dependencies. Once it is deployed, the output includes the ARN of the GeoIP Lambda layer, which we copy for future use. Now on the Lambda dashboard, if I switch to Layers, we can see the GeoIP layer we just deployed, along with its ARN.

The next step is to go into the example directory, which contains a handler Python file and a serverless.yml. We change the serverless.yml a little bit: as per the instructions, we place our actual Lambda layer ARN under the layers section and remove the placeholder one. Then we trigger the serverless deployment once again to deploy the actual Lambda function by running sls deploy. All right, now we have this Lambda function created by the Serverless Framework.
It also gave us an API Gateway endpoint, so we copy it and navigate to it in the browser. Pretty cool: it displays my physical location, state, and country. One of the great features provided by the Serverless Framework is that we have a single command to roll everything back. We can simply run sls remove, and it performs a complete rollback: it removes the uploaded deployment objects in S3, removes the CloudFormation stack, and also removes the API Gateway, the actual Lambda function, and the Lambda layer we created earlier. All right, the removal finished; if we go back to the browser and reload, we can see the URL is no longer working. With this, we completed this video.

14. Best Practices for Lambda and Serverless Service: For a production-ready microservice, in order to achieve high availability you should always deploy your Lambda into multiple availability zones, to ensure that even if one AZ goes down, your Lambda function is not impacted. By analyzing the Lambda runtime logs in CloudWatch, you can check the "max memory used" field to determine whether your function needs more memory or whether you have over-provisioned your function's memory size; choose an appropriate memory size for your Lambda to achieve decent performance at a reasonable cost. Manually managing a large set of Lambda functions is not practical; as we learned in the previous video, using AWS SAM or the open-source Serverless Framework can help with development, packaging, and deployment, and utilizing this infrastructure-as-code technique gives us continuous deployment ability.

While managing Lambda functions, you can monitor your concurrency usage with CloudWatch and configure appropriate maximum concurrent execution limits for your Lambda functions. For business-critical services, give them a higher limit, such as 200 or 500; for non-critical functions, consider giving them a relatively low concurrency limit. Please note that the default maximum concurrent execution limit is one thousand per region, and you can request an increase in the AWS Support Center. Also consider how your serverless application should behave in the event that your functions cannot be executed or encounter failures. For use cases where API Gateway is used as the event source, this can be as simple as gracefully handling error messages and providing a reliable, if degraded, user experience until your functions can be successfully executed again. Another example is planning how to respond when errors or failures occur with other event sources.

15. Container and ECS Overview: Hello and welcome to the video on mastering Elastic Container Service, or ECS. In this video we will discuss container basics and their benefits, and then I will introduce the fundamental ECS components. Container technology has become very popular nowadays; hundreds of thousands of production services run inside containers. If you are not familiar with containers, please allow me to explain. In a nutshell, a container is an isolated user space in which programs run directly on the host operating system kernel but have access only to a restricted subset of its resources. This provides outstanding portability and isolation. The first major benefit of containers is build once, run anywhere: you can build your container image on almost any operating system, whether a Linux distribution such as Ubuntu or CentOS, macOS, or Windows.
And you can deploy your containers onto any cloud provider, such as AWS ECS. The next benefit is efficiency and density: containers are smaller, since they do not require a separate operating system. A typical virtual machine can easily be several gigabytes, while a container can be only a few megabytes, so it is pretty common to run more containers than VMs on a single server. Containers also achieve a higher utilization level of the underlying hardware, which results in a reduction of bare-metal costs as well as data center costs. Containers are lightweight: they are essentially OS-level processes, so they typically start in less than a second, and creating, replicating, or destroying containers is also just a matter of seconds, thus dramatically speeding up the development process as well as CI/CD pipelines. Although containers run as processes on the same server and share the same underlying resources, they do not interact with each other: if any one of them crashes, the other containers keep running flawlessly. Containers can also be easily scaled horizontally. When there is more traffic to your application or service, you can scale out by adding more containers; you can also reduce resource costs dramatically and accelerate your return on investment. Container technology and horizontal scaling have been used by major vendors like Google, Twitter, and Netflix for years.

Now, let's take a look at AWS ECS. In a nutshell, ECS is Amazon's solution for Docker containers as a service. It makes it easy to deploy, manage, and scale Docker containers running applications or services. You might ask: can't I just provision a Docker registry and images, then run Docker on AWS EC2 instances? Of course you could, but with that comes management overhead: patching, distributing workloads, scheduling, scaling, recovery, and more. ECS takes the pain of managing infrastructure away from you, so that you can focus on building your own images without worrying about the underlying infrastructure.

The very first ECS component is clusters. A cluster is a logical grouping of tasks or services. When you first use ECS, a default cluster is created for you, but you can create multiple clusters in an account to keep your resources separate. The Elastic Container Registry, shortened as ECR, is a fully managed container registry that provides easy-to-use storing, managing, and deploying of container images. Fargate is a compute engine for ECS; it is a deeper level of abstraction for container management by Amazon. Fargate allows us to run containers without having to manage servers or clusters; by using it, we no longer have to provision, configure, and scale clusters of virtual machines to run containers. And finally, Elastic Container Service for Kubernetes: Kubernetes is an open-source container orchestration system for automating application deployment, scaling, and management; it was originally designed by Google and has been widely adopted. EKS is an AWS managed service that makes it easy for you to use Kubernetes on AWS without needing to install and operate your own Kubernetes control plane.

16. Orchestrating Your ECS Repositories and Clusters: Okay, so now I'm on the ECS dashboard. Let's go to Clusters and create our very first cluster. We will choose the networking-only cluster template, powered by AWS Fargate, which is the recommended template.
For the cluster name, we can give it, for example, fargate-cluster, and we will just reuse our existing VPC. Go ahead and create the cluster. All right, we have the cluster ready, and we will find that everything inside it is currently empty.

Our next step is to go to Repositories, also known as Amazon ECR. We will create a new repository to store all our images: give it a name and create the repository. Now we have the repository created; if we look inside it, there are of course no images yet. On the right we can see the "View push commands" button. Following that tutorial, we will run four consecutive CLI commands; let's run them, and I will explain each one in detail in a moment.

Let's take a look at the Dockerfile first. It is fairly simple: it is based on the standard Python 3.7 base image, creates a working directory, installs the Flask library, and exposes port 5000; it then simply runs the Flask application listening on that port. And if we take a look at app.py, it is just a barebones Hello World Flask application (a sketch of it appears at the end of this video for reference).

All right, now I run the push commands one by one, and the Docker image is being pushed to the ECR repository we just created. While that runs, let's look at each command. The very first one invokes the standard AWS CLI to perform a login so that we get a successfully authenticated session. The second command is essentially docker build with a tag matching the repository name we just created. The third command tags our Docker image with that name and the latest version. And finally, the fourth command pushes the tagged Docker image to the ECR repository. All right, the push completed, and if we refresh our repository we can see the uploaded image. With this, we completed this video.
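For reference, the app.py we containerized is essentially a bare-bones Flask app along these lines. This is a sketch rather than the exact file from the demo: the route and message text are assumptions, but the host and port match the Dockerfile we just reviewed.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # The JSON greeting we see later when we hit the running task's public IP.
    return jsonify(message="Hello AWS ECS")

if __name__ == "__main__":
    # Listen on all interfaces so the container port mapping (5000) works.
    app.run(host="0.0.0.0", port=5000)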
17. Fargate Worry Free Solution for Your Container Based Services: So now I'm on the ECS Task Definitions tab, and we can go ahead and create a new task definition. We choose the Fargate launch type for this task. For the task definition name, I'll name it ecs-task-def, and we give it an IAM role. The network mode, which is awsvpc, cannot be changed for Fargate-type task definitions. We can predefine the task size with dedicated memory and CPU; please note that these values can be overridden when we create the actual running tasks. Now we create our container definitions. We give the container a name, and in the image field we point to our repository, reusing the image we uploaded in the previous video: in my ECR repository I simply copy my image URI and paste it into the image field. We leave the soft memory limit at the default of 128, and in the port mappings we expose port 5000. We leave everything else as default, click Add, and we are ready to create this task definition. All right, we have our first task definition created. Here I can simply click Actions and then Run Task. The launch type will be Fargate, and we're going to run this task inside our cluster. We will run one single task, and we will have it run inside our VPC in a public subnet.

In the security group section, I have already pre-created a security group, which is fairly simple and straightforward, with just one port opening: port 5000, open to the world. We'll reuse this group, and we want to auto-assign a public IP address. Then let's go ahead and run this task. Okay, wonderful: the task has been launched successfully, and we can see its status is PROVISIONING; within only a few seconds it shows as RUNNING. Let's get into the task, and here we can see the actual public IP address of the running task; let's copy it. Now if we navigate to this IP address on port 5000, we see the fairly simple "hello AWS ECS" JSON response that we defined in our app.py. Stopping a running task is fairly straightforward: we simply click Stop and confirm, and the console shows that the task has stopped successfully. Now we don't have any running tasks. With this, we completed this video.

18. Best Practices for ECS: The first item here is load balancing. AWS recommends using Application Load Balancers, or ALBs, for our Amazon ECS services. An ALB not only provides the traditional ability to distribute traffic evenly across the tasks, it also has other benefits: for example, it allows containers to use dynamic host port mapping, so multiple tasks from the same service are allowed per container instance, and an ALB supports path-based routing and priority rules, so multiple services can use the same listener port on a single Application Load Balancer. Next is auto scaling, similar to EC2 auto scaling: within an ECS service we define our desired count of running tasks, and the ECS service scheduler respects the desired count at all times. Additionally, an ECS service can be configured to adjust its desired count up or down in response to CloudWatch alarms. Logging and monitoring: since containers are designed to be ephemeral, we should always enable the CloudWatch Logs integration and leverage it for analysis, and we should make extensive use of the CloudWatch monitoring service, which can be configured to notify us of events like scaling up or down, task restarts, or anything else that is abnormal. Version control: if you remember, before we pushed our container image to ECR in the previous video, we explicitly tagged it; the container image is tagged with name:version. Version control helps us easily compare differences, so we can easily roll back to a certain version, and tying container tags to our CI/CD pipeline is also a good idea.

19. SQS The Ultimate Message Queuing Service: Hello and welcome to the section on other popular AWS services. In this video, we will talk about SQS, the Simple Queue Service provided by AWS. Let's first have an overview of what a message queue is and its benefits. Basically, message queues are used for communication and coordination between distributed applications or services. They can significantly simplify the coding of decoupled applications while improving performance, reliability, and scalability. Message queues allow different parts of a system to communicate and process operations asynchronously. There are many benefits to using message queues in our cloud applications. The first is decoupling: message queues are an elegant, simple way to decouple distributed systems, whether you are using monolithic applications, microservices, or serverless architectures.
Redundancy is one of the most obvious advantages of message queues. Queues help with redundancy by making the process that reads the message confirm that it completed the transaction and that it is safe to remove the message. If anything fails, in the worst-case scenario the message is persisted to storage somewhere and won't be lost; it can be reprocessed later. Queues are also great in scenarios where your application needs something done, but doesn't need it done right now, or doesn't even care about the result; some people call this the fire-and-forget pattern. Instead of calling a web service and waiting for it to complete, you can write a message to the queue and let the same business logic happen later, so queues are an excellent way to implement asynchronous programming patterns. The next benefit is granular scalability: message queues make it possible to scale precisely and independently when workloads peak. Multiple instances of your application can all add requests to the queue without risk of collision, and as the queue grows with incoming requests, you can distribute the workload across a fleet of consumers; producers, consumers, and the queue itself can all grow and shrink on demand. Message queues are also very reliable: they make your data persistent and reduce the errors that happen when different parts of your system go offline. By separating different components with message queues, you create more fault tolerance; if one part of the system is ever unreachable, the other can still continue to interact with the queue, and the queue itself can also be mirrored for even more availability.

Now let's take a look at AWS SQS. Basically, SQS provides two kinds of queues: the first is the Standard queue, and the second is the FIFO, or first-in-first-out, queue. The standard queue has nearly unlimited throughput, supporting a nearly unlimited number of transactions per second per API action. The standard queue provides at-least-once delivery: a message is delivered at least once, but occasionally, due to the nature of cloud computing, it can be delivered more than once. The standard queue also provides only best-effort ordering: occasionally messages might be delivered in an order different from the one in which they were sent. Now let's look at the FIFO queue. First of all, FIFO queues provide pretty high throughput, supporting up to 300 messages per second; when you batch 10 messages per operation, FIFO queues can support up to 3,000 messages per second, and to request an increase you can file a support request. Obviously, the order in which messages are sent and received in a FIFO queue is strictly preserved. In contrast to standard queues, FIFO queues provide exactly-once processing: a message is delivered once and remains available until a consumer processes and deletes it, and duplicates are not introduced into the queue. As a conclusion, the standard queue is very good for sending data between applications when throughput is important, for example distributing tasks to multiple worker nodes to process a high volume of credit card validation requests, et cetera. A FIFO queue is very good for sending data between applications where the order of events is important, for example ensuring user-entered commands are executed in the right order.

I'm now at the Simple Queue Service landing page; let's go ahead and get started. For the queue name, we can give it, say, my-test-queue, and we will choose the Standard queue type for demonstration purposes.
We would like to configure the queue so that we can look into the detailed configuration. Okay, there are a few configuration items we should pay attention to. The first one is the default visibility timeout. To explain: this is a mechanism provided by SQS to prevent other consumers from processing the same message again. The visibility timeout defaults to 30 seconds and can be set anywhere from 0 seconds at the minimum to 12 hours at the maximum. During this period of time, SQS prevents other consuming components from receiving and processing the message. If the message isn't deleted, or its visibility isn't extended, before the visibility timeout expires, it counts as a failed receive, and depending on the configuration of the queue, the message might be sent to a dead-letter queue. Please note that the visibility timeout can be overridden at the level of an individual message. The next setting is the message retention period, which is the amount of time SQS will retain a message if it doesn't get deleted. Let's go ahead and create this queue.

All right, now we have my-test-queue created, and we can see the actions we can perform on this queue: we can send a message, view or delete messages, reconfigure the queue parameters, and purge or even delete the queue. We can also subscribe the queue to an SNS topic, which means that whenever we receive a message in the queue, we can be notified by email or SMS through SNS. And finally, we can configure a trigger for Lambda functions: imagine that upon receiving a message, we initiate a workflow or perform some task in Lambda. So it is really powerful yet convenient.

Now let's try it out by sending a message to this queue. In the Send a Message dialog, we just type some random message. We can also choose a delay; for example, I'll set a delay of two seconds. In the message attributes we could specify typed attributes for this SQS message; let's skip that for now and go ahead and send the message. All right, the message has been sent; note that it can take a few seconds for the "messages delayed" count to update, and we set the delay to two seconds.

On the other hand, we can receive messages from this particular queue programmatically. As we can see, the queue has a unique URL printed here; let's copy it. Here is a simple snippet of Python code that listens to this particular queue (reproduced below for reference). At the beginning, we paste in the URL of our SQS queue. We initiate a boto3 SQS client and invoke the standard receive_message API with the appropriate parameters; then we iterate over the messages and print each one we receive. As soon as we receive a message, we invoke the delete_message API, passing the receipt handle we extracted above, and print "message deleted". Okay, let's give it a try. As we can see, we receive the message, and it is exactly the one we sent just now in the AWS console; as soon as we receive it, we delete it. Now if we go back to the SQS console, click "View/Delete Messages" again, and start polling, we won't see the message anymore, because it has been deleted programmatically. All right, with this, we completed this video.
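For reference, the receive-and-delete snippet used above looks roughly like the following; a send_message call is included as well so the whole round trip is visible. This is a sketch: the queue URL is a placeholder you would copy from the console, and the long-polling wait time is an assumption rather than something specified in the demo.

import boto3

# Placeholder: paste the queue URL copied from the SQS console here.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-test-queue"

sqs = boto3.client("sqs")

# Optional: send a test message, like we did from the console (two-second delay).
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="hello from the demo", DelaySeconds=2)

# Poll the queue, print each message, then delete it using its receipt handle.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=10,  # long polling so we don't busy-loop
)
for message in response.get("Messages", []):
    print("Received:", message["Body"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
    print("Message deleted")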
Cognito Managed User Sign Up, Sign In, and Access Control: Cognito is eight or less fully managed. It uses sign-up, sign-in solution. It provides authentication, authorization, and user management for your web and mobile applications. It last, developers simply quickly and the security developing sign-up, sign-in functionalities. Cognito provides two kinds of pools. They are user pools and identity pours. User pools are user directories that stores user information and also provide sign-up sign-in options for your app users. Identity pools provides AWS credentials to grandeur users access to other AWS services. Cognito provides many useful features for us. First of all, is standard authentication. Cognito uses colon identity management standards including OpenID Connect, OAuth 2 and assembled to 0. Secondly, cognitive support, social and enterprise identity federation. Your users can sign up through social identity providers such as Google, Facebook or Amazon, and ASU Enterprise, and then the providers such as Microsoft Active Directory using SAML. Cognito also provides solutions to access control to other AWS resources from within your app. You can define rows and map users to different rows. So your application can access only the resources that are authorized for each user. Adaptive authentication is using advanced security features for cognito to add adaptive authentication to your application's health, protecting your applications, users accounts and user experience. When Cognito detects and you're assigning activity, such as sign-in attempts from new locations or devices. It assigns a risk score to the activity. And unless you choose to either prompt the users for additional verification or blog designing request user to verify their identity is using SMS or a Time-based One-time Password generator such as Google Authenticator. Cognito, has advanced security features which helps you protect applications using from an authorized access to their accounts, using compromised credentials. When Cognito detects users have entered credentials, there have been compromised elsewhere and prompts them to change their password. The last but not least, cognitive supports multiple compliance program. For example, HIPAA, PCI, DSS, SOC, and ISO 9000 0, 1, etc. Okay, now let's have a hands-on demo on AWS Cognito. I'm now at the lending page of Cognito. And what are we going to do is we're going to connect these Manager user pools. So let's go ahead create a new user pool. So here we can see Tutorial based wizards step-by-step. We can configure our user pool. First of all, let's give our poor name. We can say that cognito demo. And we have two options. Either we can just click this review defaults. Cognito will pre-fill everything for you. Or as we prefer, we will go through step by step. So here in the attributes, we can choose username or email address as our unique identifier in the system. So we will peak email address, and for the required attributes, we will only choose e-mail. And we go to next step in this policy's page, we can configure our password policy. So the minimum lens, we said 28 is default. We are okay with it and we're going to keep this everything default, making sure it has a strong enough password. And here we're going to say, we are open lab users to sign up themselves. And days to expire. We will keep the default and go to next. Here it asks us about enabling multi-factor authentication. Mfa. We will keep it off for now. 
Regarding account verification, we leave the default, which is email verification. On the message customization page, we will not use SES for now; by default, Cognito will send its own customizable email verification messages. Cognito provides two verification types, a code and a link, and we will use the link. Here is a preview of the sample email message users receive when they sign up: the subject is "Your verification link", and the email contains a link to activate your account. We leave the SMS part unchanged, since we will not be using it, and go to the next step. We skip the tags step and leave the devices settings as default.

Now, on the App clients page, we must add an app client in order to interact with this Cognito pool. Let's go ahead and add a client; we can name it cognito-client, and we uncheck "Generate client secret" for now, for simplicity and demo purposes. Let's create this app client and click Next. The next step asks whether we want to customize the sign-up flow with Lambda triggers; we will skip that. Finally, we review everything we have configured and click "Create pool". All right, wonderful: we have our pool created successfully. Make a note of the Pool ID; we're going to use it soon.

The next thing we must do, right after creating the cognito-demo pool, is one more step before we can actually use it: we go to the App integration section and then to Domain name. We must set a domain name here, so I'll just give it a name, for example wayne-temp-client, and check availability; the domain is available, so let's save it.

Now we have everything ready. In order to really play with this cognito-demo pool, we need to copy the Pool ID. Locally, I have prepared a pretty simple, bare-bones HTML and JavaScript application to demonstrate how to use it. It has a few JavaScript files: it uses the official AWS Cognito JavaScript library plus a simple app.js, and we have two pages, login and register. This web page has essentially no other dependencies. There is also a package.json; we only need one npm library, called http-server, which you can install with npm i http-server. Once installed, this popular little library serves the current directory as a local HTTP server.

Here is a quick source code review of this simple project. Our only dependency is the Amazon Cognito library, which I have already copied into the project, so you don't have to install it. app.js provides the high-level configuration for the Cognito pool: here we fill in our Pool ID and Client ID. Let's paste the Pool ID we just copied, then go back to the AWS console to copy the app client ID: under App integration and App client settings, we can see our client and its client ID. Copy it, paste the app client ID into app.js, and save. Now if I start the HTTP server, we can see the URL it is served from; let's navigate to it. All right, I'm now at my local server and we can see the directory structure; let's open register.html. In this registration demo, we enter our email, a password, and the password confirmation to register. In order to do this, we need an email address to sign up with, so I'm currently at temp-mail.org.
So wish this website provides you a temporary e-mail address so that we can play with it. So now it now generated a random email address. We can go ahead and copy this one. And if we go back to the page, we paste the temporary email here and giving them password. So can I say sorry. And we click Register. All right, so we should be receiving an e-mail momentarily. So now if I'm going back to these 10 mail.org and scroll down a little bit. And we can see that we received this e-mail from AWS Cognito. So we saw the subject and e-mail content as we just configured in cognito wizard with calligraphy is verified email. So our registration has been confirmed. So now if I'm going to login dot HTML, I'm now at the login page. And again, I paste my e-mail address and I type my password. I click loading. Wonderful. So I signed in as these temporary email I just registered. So now if we go back to the Cognito pool edibles console, we go to these users and groups and we refresh this. We can see that this user has been created here. The e-mail is whatever generated in that attempt. E-mail org and the emails accountant status is confirmed. Okay. There is one more thing I would like to show in this login page. After we succeeded to assign in AWS, Cognito actually uses JWT token as their mechanism. So we can see these JWT token has been stored in the console. So if we copy these JWT token and we're going to JWT token, we can paste the token. We just logged here. And we can see some interesting information. It shows that the issuer is AWS Cognito. And this U0 is actually our Cognito pause UIL. And we chose to you as a client ID and username. With this, we completed this video. 21. Content Delivery by CloudFront: Content delivery network, or CDN, is not a new technology. It was born and it becoming popular in the early age of Internet back in 1990s. In a nutshell, CDN is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and high performance by distributing the service spatially related to end-users. Nowadays, CDNs are serving a large portion of England and the content and playing a critical role. Cdns deliver webpages, downloadable objects, applications, live streaming media, and social media sites. In other words, they have been used in almost our day to day life. Let's have an overview of AWS CloudFront. First of all, for CloudFront is faster and a global. It has been massively scaled and globally distributed. The network has totally 166 points of presence so far. And the leverages the high resiliency backbone network for superior performance and availability for end-users. Cloudfront is a highly secure city and then provides both network and application level production. Your traffic and applications benefits through a variety of built-in protections, such as AWS Shield, Standard and no additional cost. You can also use configurable features such as AWS Certificate Manager to create and manage customer SSL certificates at no extra cost. Conference is highly programmable. And in my humble opinion, this is one of the killing feature provided by CloudFront. You can customize for your specific application requirements by leveraging lambda at edge functions triggered by CloudFront events. Extending your customer code across AWS edge locations worldwide. Which allows you to move even more complex application logic closer to your end user to improve responsiveness and user experience. 
21. Content Delivery by CloudFront: A content delivery network, or CDN, is not a new technology; it was born and became popular in the early days of the Internet, back in the 1990s. In a nutshell, a CDN is a geographically distributed network of proxy servers and their data centers. The goal is to provide high availability and high performance by distributing the service spatially relative to end users. Nowadays, CDNs serve a large portion of Internet content and play a critical role: they deliver web pages, downloadable objects, applications, live streaming media, and social media sites. In other words, they are part of almost all of our day-to-day life.

Let's have an overview of AWS CloudFront. First of all, CloudFront is fast and global: it is massively scaled and globally distributed, with a network of 166 points of presence so far, and it leverages the highly resilient AWS backbone network for superior performance and availability for end users. CloudFront is a highly secure CDN that provides both network- and application-level protection; your traffic and applications benefit from a variety of built-in protections, such as AWS Shield Standard at no additional cost, and you can also use configurable features such as AWS Certificate Manager to create and manage custom SSL certificates at no extra cost. CloudFront is highly programmable, and in my humble opinion this is one of its killer features: you can customize it for your specific application requirements by leveraging Lambda@Edge functions triggered by CloudFront events, extending your custom code across AWS edge locations worldwide, which allows you to move even complex application logic closer to your end users to improve responsiveness and user experience.

CloudFront also supports integrations with other tools and automation interfaces for today's DevOps and CI/CD environments, using native APIs or AWS tools. CloudFront has deep integration with other AWS services: services such as S3, EC2, Elastic Load Balancing, and Route 53 are all accessible via the AWS console, and CloudFront can be configured programmatically using the standard AWS APIs. Even sweeter, if you use AWS origins such as S3, EC2, or ELB, AWS does not charge you for data transfer between those services and CloudFront.

I'm currently at the CloudFront home page, so let's go ahead and create our first distribution. CloudFront provides two methods of delivery: the first type is Web, which is more commonly used, and the second is RTMP, which stands for Real-Time Messaging Protocol. We will choose Web. First of all, in the Origin Settings panel, we should pick an origin domain name; we can pick one from our AWS S3 buckets, MediaPackage origins, or MediaStore containers. I have already created a public bucket in S3, so let's use that bucket as our origin domain name. The origin path is used if you want CloudFront to access content only from a specific directory of the S3 bucket; we leave it empty in order to access the whole bucket. The next one is the origin ID: we enter a unique value to distinguish multiple origins within the same distribution. Restrict Bucket Access determines whether we want to disable direct S3 URLs; enabling it ensures users always access our content using CloudFront URLs instead of S3 URLs, which can be very useful when we use signed URLs or signed cookies to restrict access to our own content. Next, we can add certain custom HTTP headers; we will leave both of these settings empty for now.

In the Default Cache Behavior Settings, we can configure the viewer protocol policy: either allow both HTTP and HTTPS, redirect HTTP to HTTPS, or disable HTTP and allow HTTPS only. Let's pick redirecting HTTP traffic to HTTPS. For the allowed HTTP methods, by default only GET and HEAD are enabled; additionally, if we want to enable cross-origin resource sharing, we pick the second option to get the extra OPTIONS method, and furthermore, if we want to support resource updating and deletion, we can enable more HTTP methods here, like PUT, PATCH, and DELETE. The field-level encryption config is an additional layer of security on top of HTTPS that lets you protect specific data so that only certain applications can access it; we will skip it for now. For the caching details configuration, we will leave the defaults. Restrict Viewer Access enforces whether CloudFront should only allow users to access our content using a signed URL or a signed cookie; we will choose No for now. Next, Compress Objects Automatically is straightforward: it specifies whether CloudFront should automatically compress our content; typically we enable this, so we choose Yes. Lambda function associations allow us to specify Lambda functions that run business logic while users request content from this CloudFront distribution; AWS calls this Lambda@Edge. It is pretty cool, since it achieves fast delivery at edge locations while we still have full control over exactly what content to deliver. This is beyond the scope of our course, so I will leave it empty for now.
The final section is Distribution Settings. First is the price class: we can configure how widely we would like our distribution to be deployed. There are three levels: only US, Canada, and Europe; or US, Canada, Europe, Asia, Middle East, and Africa; and finally, for the best performance, we can choose all edge locations, which results in a slightly higher cost. We will pick that last one for now. Alternate domain names, or CNAMEs, are pretty commonly used: we can set our own CNAME, for example static.mywebsite.com, and we can provide our own SSL certificate uploaded to AWS Certificate Manager and configure it here. For right now we can just leave this empty. For all the rest of the configuration we leave the defaults, and we definitely want to enable our distribution, so let's go ahead and create it. We can see our distribution is In Progress; please note this can take up to 15 minutes.

Okay, now we can see our distribution has been deployed successfully and its state is Enabled. Here is its domain name; let's copy it and navigate to it. Since my bucket is publicly accessible, we can see the bucket listing, and it has only one key, the JPEG file I uploaded a moment ago. Let's copy the image name and append it to the URL. Now we can see it serves the image I uploaded, and remember, since this is a CloudFront-hosted domain, it has been distributed worldwide to all the edge locations, which ensures people around the world can load this image as fast as possible.

Just in case you encounter an error while trying to create a CloudFront distribution, even if you configured the S3 bucket correctly, it is typically this one: "Your account must be verified before you can add new CloudFront resources. To verify your account, please contact AWS Support." To fix this, simply go to the AWS Support link provided, contact AWS Support, and submit a ticket; they will respond and help you get your account verified, after which you should be able to create CloudFront distributions. With this, we completed this video.

22. Using Kinesis for Data Streaming: Kinesis is AWS's fully managed streaming service, which makes real-time data collection, processing, and analysis easily scalable. With Kinesis, we can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning and other processing. Kinesis provides mainly four kinds of capabilities. Video Streams captures, processes, and stores video streams; it can securely stream video from connected devices for analytics, machine learning, and other processing. Data Streams provides a scalable and durable real-time data streaming service that can continuously capture gigabytes of data per second from hundreds of thousands of sources. Data Firehose provides an easy-to-use way to capture, transform, and load data streams into AWS data stores. And Data Analytics helps us very easily process data streams in real time with SQL or Java. Let's take a look at each of them one by one. The first is Kinesis Video Streams, and this picture is a concrete example of what you can do with it.
Nowadays, a lot of cities have installed large numbers of cameras at traffic lights, parking lots, shopping malls, and just about every public venue, capturing video 24/7. We can use Kinesis Video Streams to securely and cost-effectively ingest, store, and analyze this massive volume of video data to help solve traffic problems, help prevent crime, and dispatch emergency responders. We can stream the video data into AWS managed services such as Rekognition Video or SageMaker, or send the data to open-source machine learning libraries such as MXNet or TensorFlow.

Kinesis Data Streams can be used to stream almost any kind of data at large volume. This picture shows an example of how it can be used to collect log and event data from sources such as web servers, desktop machines, or mobile phones. We can build Kinesis applications to continuously process the data, generate metrics, power live dashboards, and emit aggregated data into storage services like S3 or DynamoDB.

Kinesis Data Firehose is the easiest way to reliably load streaming data into data stores and analytics tools. It can capture data from connected devices such as consumer appliances and set-top boxes; Firehose then loads the data into S3, Redshift, or the Elasticsearch Service, providing nearly real-time access for metrics, insights, and dashboards.

Kinesis Data Analytics enables us to easily and quickly build queries and sophisticated streaming applications in three simple steps: we set up the streaming data sources, write our queries or streaming applications, and then set up the destination for the processed data. Kinesis Data Analytics takes care of running our queries and applications continuously on data while it is in transit, and sends the results to our destinations.

So here I am at the AWS Kinesis dashboard. If we click Get Started, we can choose which Kinesis resource to create. For this demo, we will go ahead and create a Firehose delivery stream, which is here. We are prompted to give the stream a name, so let's do that, and optionally we can configure the source type: either Direct PUT or an existing Kinesis stream. We will choose Direct PUT as the example, and later on we're going to invoke the API to push data into this stream. These are all informational messages, so let's click Next. Here Kinesis can provide some record transformation at runtime; we can enable that at any time later, but for now we keep both record transformation and format conversion disabled and click Next. Here we select a destination. As a simple starting point, we can always choose S3, so that we can see how Kinesis stores our data in the underlying storage; we could also pick Redshift, the Elasticsearch Service, or Splunk, as we saw in the earlier slides, and this diagram shows how it works at a high level. We're going to choose an S3 bucket; I already have a bucket, so I'll just reuse it, and we click Next. Here we configure the buffer conditions: the buffer interval should be between 60 and 900 seconds, so let's set it to 60 seconds. We do not want to compress or encrypt the data, but we do enable error logging so that we can troubleshoot in case an error happens. For the IAM role, our stream must be granted access to the S3 bucket, so let's go ahead and create a new IAM role.
On the new page, AWS has already pre-filled a policy template for us with the required permissions for Kinesis, for example s3:ListBucket and s3:PutObject, so we click Allow to create the new IAM role. The next step is the review; we are comfortable with everything, so let's create the delivery stream. The creation of this Firehose stream can take two to five minutes, so let's give it a moment. All right, now we can see the Firehose stream has been created successfully. We can click into it and see all the details: here we see its ARN, its status, which is Active, and the IAM role.

We can also test it with demo data. Let's expand that section and use this handy feature to start sending demo data; it will continuously send chunks of sample data into the Firehose stream, and we can check the progress at the destination we configured, the S3 bucket. Please note that it might take a while before a file actually shows up there: since we configured the buffer interval to 60 seconds, it should ideally arrive in about one minute. Okay, now we can see a folder has been created by Firehose; if we go into it, there are year, month, day, and hour folders, and inside we can see a file created by Kinesis Firehose. We can download that file, open it with any text editor, and see the sample data that has been streamed into this S3 bucket.

We can also do this programmatically. This file shows sample code to send data into our Firehose stream (reproduced below for reference). We initiate a boto3 Firehose client, then build a Python list of one hundred items, where each list item is a dictionary whose "Data" key holds the byte array of a random UUID. We then invoke the client's put_record_batch method, specifying our delivery stream name, which is the stream name we just created, and passing the records, that list of dictionaries; finally we simply print out the response. If we run this program, we can see that Kinesis returns a JSON payload listing all the successfully created records, each with a record ID; it returns HTTP 200 with no retry attempts, so the invocation succeeded. All right, with this, we complete this section.
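For reference, the put_record_batch script described above looks roughly like this. It is a sketch: the delivery stream name is whatever you chose in the console, and the newline appended to each record is simply an assumption that keeps the objects written to S3 readable.

import uuid
import boto3

DELIVERY_STREAM = "my-firehose-stream"  # the name chosen when creating the stream

firehose = boto3.client("firehose")

# Build 100 records; each record's Data is the bytes of a random UUID.
records = [
    {"Data": (str(uuid.uuid4()) + "\n").encode("utf-8")}
    for _ in range(100)
]

# Push the whole batch into the delivery stream in one call.
response = firehose.put_record_batch(
    DeliveryStreamName=DELIVERY_STREAM,
    Records=records,
)

# Each entry gets a RecordId; FailedPutCount tells us whether any need retrying.
print("FailedPutCount:", response["FailedPutCount"])
for entry in response["RequestResponses"]:
    print(entry.get("RecordId"))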
All right, with this, we complete this section.

23. Cloud Service Design Principles:

Hello and welcome to section six, AWS best practices. In this last section of our video course, we will discuss a suite of best practices for working with AWS. Topics include cloud service design principles, managing infrastructure by code, cost control, and common resilience patterns. Let's get started.

Let's first take a look at microservice architecture. It is an architectural design principle under which services are highly maintainable and testable, loosely coupled, independently deployable, and organized around business capabilities. For example, imagine an e-commerce website: in order to serve its customers, we might have an authentication microservice for verifying users' identities, an account service to store and retrieve users' profile information, a payment service dedicated to processing transactions and payments, and also a shipping service for dealing with shipping requirements. Each service is dedicated to one specific business purpose and communicates with the other services through API calls. This conforms to the KISS (keep it simple) principle. Another big benefit is agility and flexibility. Imagine there is a feature improvement or bug fix for the authentication service, as an example: we work on it and ship a release to the authentication service only, and all the other microservices keep running without any impact. In the cloud computing era, microservice architecture makes our services easier to scale and faster to develop, enabling innovation and accelerating time to market for new features and improvements. AWS provides world-class support for microservice architecture. In this course, we have learned services like the Elastic Container Service, Lambda, Elastic Load Balancing, and so on. These AWS services give us a great foundation for building a scalable, flexible, and resilient microservice architecture.

Next, let's talk about loose coupling. It essentially means the same thing as the loose-coupling principle for writing any computer program. As a cloud service design principle, loose coupling covers the following aspects. Well-designed interfaces: services should interact with each other only through specific, technology-agnostic interfaces, such as RESTful APIs. AWS API Gateway provides an easy-to-use yet powerful platform for implementing such interfaces. Asynchronous integration: whenever an immediate response is not necessary, we should consider using asynchronous integration. For example, one microservice can send messages into an AWS SQS queue and another microservice consumes them; this breaks the direct coupling between the two services (a short code sketch of this pattern appears a little further below). Service discovery: a service should be consumable by others without prior knowledge of its network topology details. Service discovery not only hides internal complexity, it also allows infrastructure details to be changed at any time in the future. A typical example on AWS is a fleet of EC2 instances behind an Elastic Load Balancer. The next aspect is to fail gracefully. According to Murphy's Law, anything that can go wrong will go wrong. So when we design our cloud services, we should handle failures in a graceful manner, expecting failures to happen; when one really does, we can provide an alternative or cached content instead of failing completely. In conclusion, loose coupling is a crucial element if we want to take advantage of the elasticity of cloud computing.

Now let's take a look at another principle, which we call "services, not servers". Before the cloud computing era, enterprises traditionally managed large fleets of (virtual) machines and ran applications on them. Nowadays the preferred way is to avoid managing and provisioning virtual servers and to consume services instead. We can leverage AWS managed services: as we have already learned, we should always consider using things like RDS for relational databases, SQS for messaging, et cetera. We should also consider serverless architectures, which can significantly reduce the operational complexity of running applications; developers can concentrate on the business implementation without managing any underlying server infrastructure. In conclusion, sticking with the traditional way might not be making the most of cloud computing, and we might be missing an opportunity to increase developer productivity and operational efficiency.
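To make the asynchronous-integration idea above concrete, here is a minimal boto3 sketch of two loosely coupled services talking through SQS. The queue name and message contents are placeholders, and in practice the producer and consumer would run in separate services.

```python
import json
import boto3

sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]  # placeholder queue

# Producer side: the ordering service publishes an event and moves on,
# without waiting for the shipping service to respond.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"order_id": "12345", "action": "ship"}),
)

# Consumer side: the shipping service polls the queue on its own schedule.
messages = sqs.receive_message(
    QueueUrl=queue_url,
    MaxNumberOfMessages=10,
    WaitTimeSeconds=10,   # long polling
).get("Messages", [])

for msg in messages:
    print("processing", json.loads(msg["Body"]))
    # Delete only after successful processing so that failures are retried.
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```

Because the queue sits between the two services, either side can be deployed, scaled, or temporarily unavailable without directly breaking the other.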
Finally, let's take a look at another vital principle: avoiding a single point of failure. To achieve this, we should follow the principles below. The first is design with redundancy. Our system can fail because of CPU usage being too high, out-of-memory conditions, hardware errors, connectivity issues, and so on. We should therefore prepare multiple resources for the same working task up front, so that the failure of one or more resources does not stop the others from working, and we avoid a failure of the entire workflow. A typical example is a fleet of EC2 servers behind a load balancer: if one or more EC2 instances are unhealthy, the ELB will route traffic to the other healthy EC2 instances, and an Auto Scaling group can help us bring up new instances to replace the unhealthy ones. Next is fault isolation and tolerance: in case a failure happens on one or more components, our system should isolate the failure and continue operating properly. A good example is a set of running ECS tasks registered with one target group. If one or more ECS tasks become unhealthy, the target group detects that, the other tasks are not impacted, and ECS automatically brings up new tasks to replace the unhealthy ones, so the entire service keeps functioning normally. Then there are multiple regions and Availability Zones: our services should be deployed and run across multiple regions and Availability Zones for better availability and accessibility. For example, for a production RDS instance we should always enable the Multi-AZ option. Another concrete example is that our Lambda functions should be distributed across multiple AZs, so that even if one or more AZs go down, our Lambda functions are still up in the other healthy AZs. With this, we have completed this video.

24. Manage Infrastructures by Code:

In a nutshell, Infrastructure as Code (IaC) is the process of managing and provisioning computer data centers via machine-readable definition files. It is infrastructure-level automation. The main benefits of IaC are, first, faster provisioning. In my humble opinion this benefit is a no-brainer: running code is always much faster than manual operation, and the larger the infrastructure is, the more provisioning speed we gain by adopting IaC. Next is avoiding the risk of human errors and security violations. Since we are all human beings, as we all know and will admit, we make mistakes; IaC can help us remove risks like manual misconfiguration, accidental component deletion, security violations, and so on. The next benefit of IaC is cost reduction. IaC brings cost reduction not only financially, but also in terms of working effort, because it removes the complexity and inefficiency of manual configuration. In conclusion, nowadays our cloud infrastructure can be extremely large and complex, and it is almost impossible to manually create and manage all the components. The ideal process is to use a version control system, for example Git, to manage our infrastructure codebase and provision from it: whenever new infrastructure components are needed, or certain components need to be removed, we update and review our codebase and kick off the provisioning or removal.

AWS's official Infrastructure as Code tool is CloudFormation. It provides a common language for us to describe and provision all our infrastructure resources. The typical CloudFormation workflow goes like this: we write our template code in either JSON or YAML, we create a CloudFormation stack, and finally CloudFormation provisions and configures the underlying resources. One of the biggest advantages of CloudFormation is that there is no additional charge for using it; we only pay for the actual AWS resources we provision. This is the official AWS CloudFormation Designer, which can be accessed by the URL on this slide. As we can see, it takes a what-you-see-is-what-you-get approach: you can pick the resource type from the left panel (there are hundreds of AWS services available), you can see the actual code in the bottom panel, where we can choose the template language, either JSON or YAML, and the upper right panel shows a vivid infrastructure-level diagram.
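As a quick illustration of that template-to-stack workflow, here is a minimal sketch, assuming boto3 and an inline YAML template that declares nothing more than a hypothetical S3 bucket; the stack name is a placeholder.

```python
import boto3

# A tiny CloudFormation template, inline as YAML, declaring a single S3 bucket.
TEMPLATE = """
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")

# Create the stack; CloudFormation itself is free, we only pay for the bucket.
cfn.create_stack(StackName="iac-demo-stack", TemplateBody=TEMPLATE)

# Block until the stack (and therefore the bucket) has been provisioned.
cfn.get_waiter("stack_create_complete").wait(StackName="iac-demo-stack")
print("stack created")
```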
Another very popular Infrastructure as Code tool is Terraform, created by HashiCorp. It is open source and enables us to safely and predictably create, change, and improve our infrastructure. Its way of doing IaC is pretty straightforward: write, plan, and create. This is a code example from the Terraform official website. It creates an AWS ECS service named mongodb; the service belongs to an ECS cluster which is managed by another Terraform file or snippet. The code defines a desired count of three, and it also specifies a target group as the load-balancing strategy, as well as placement constraints. As we can see, the Terraform syntax is quite simple, straightforward, and intuitive. Alright, with this, we have completed this video.

25. Cost Control:

Apart from the main Billing dashboard, we can also access the Bills page, which allows us to view very detailed monthly invoices, split by individual AWS service in each region. We can also print the invoices or download CSV files from the Bills page. We can also check out Cost Explorer. It gives us an overall, visualized report for easily analyzing our spend. For example, this screenshot tells us the month-to-date costs are $1.84, a dramatic 76 percent drop compared with last month, and it provides the day-to-day cost in a bar chart, which is quite clear and useful. In Cost Explorer we can also get different types of graphical reports by specifying date ranges, services, or tags, and we can choose the preferred report type: bar, line, or stacked. Next is AWS Budgets. It lets us quickly create custom budgets that will automatically alert us when our AWS cost or usage exceeds, or is forecasted to exceed, the threshold we set. It can be accessed by clicking the Budgets menu in the Billing dashboard. For example, in the screenshot above I set an overall budget of $500 and asked AWS to email me as soon as the actual cost is greater than 90 percent of it.
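For programmatic cost visibility, the Cost Explorer data can also be queried with the SDK. Here is a minimal sketch, assuming boto3 and a hypothetical date range, that pulls daily unblended cost grouped by service, roughly what the day-to-day bar chart shows:

```python
import boto3

# The Cost Explorer API is served from us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2021-01-01", "End": "2021-02-01"},  # hypothetical range
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],        # split cost per service
)

# Print one line per day per service, similar to the day-to-day bar chart.
for day in response["ResultsByTime"]:
    for group in day["Groups"]:
        amount = group["Metrics"]["UnblendedCost"]["Amount"]
        print(day["TimePeriod"]["Start"], group["Keys"][0], amount)
```

Note that, unlike the console, calls to the Cost Explorer API are themselves billed a small per-request fee, so it is best to query sparingly or cache the results.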
To control cost, we also must not miss the AWS Free Tier. AWS offers more than 60 products for getting started on AWS using the free tier, and we can learn the details from this page. There are several free tier variations listed on the AWS Free Tier page: some services are free for 12 months, others are always free. Twelve months free means that for new customers, after signing up, the service is free under certain constraints for the first year. Always free means that as long as your usage satisfies the free tier constraints there is no time limit, which is really sweet. For example, AWS provides 750 hours per month of a t2.micro instance during the first year, AWS allows us to use DynamoDB completely free as long as we use less than 25 gigabytes of storage per month, and for Lambda, the first one million requests per month are at no charge.

The next tip is very straightforward; however, we might sometimes overlook it. That is, let the actual utilization rate determine the resource type. We should always pay attention to our resource utilization rate and make an appropriate decision on which resource type gives us the best price/performance ratio. Take this screenshot of an RDS usage dashboard as an example: we can see that all the metrics, like CPU, DB connection count, freeable memory, and read/write IOPS, are pretty low. This is a clear sign that we have probably been using a too-powerful RDS instance type and spending more money than we actually need to; we should consider downgrading our RDS instance type.

26. Resilience Design Patterns:

The first resilience pattern we would like to discuss is the circuit breaker. Similar to the circuit breaker at home, which protects an electrical circuit from damage caused by excess current from an overload, the circuit breaker resilience pattern handles failures that might take a significant amount of time to fix when connecting to a remote service or resource. Instead of repeatedly trying to get a response from a remote side that appears to have failed, and waiting indefinitely, we should let our application use an alternative working service, or we can choose to fail fast.

A bulkhead is an upright wall within the hull of a ship. It was invented in China during the Song Dynasty; the inventors realized that a ship built this way could let water enter the bottom without sinking. The idea of a bulkhead is that we don't lose the whole ship if something goes wrong, because parts of the ship are separated. As a resilience design pattern, it suggests isolating elements of an application into pools so that if one fails, the others continue to function. A consumer that calls multiple services might be assigned a connection pool for each service; if one service begins to fail, it only affects the connection pool assigned to that service, allowing the consumer to continue using the other services. For a concrete example, we can apply the bulkhead principle by running multiple AWS ECS tasks, so that if any task fails it never impacts the other running tasks, and the entire system continues to function.

Now let's take a look at the retry principle. Services running in the cloud have to be sensitive to transient failures, because failures will happen occasionally, and our system should be designed to handle them elegantly and transparently. The retry principle enables a service to handle anticipated temporary failures when it tries to connect to a service or network resource, by transparently retrying an operation that previously failed. Depending on the actual failure, our retry mechanism should be intelligent enough to determine whether or not to retry, with an appropriate delay, and whether or not to give up after a certain number of retries. A good example of the retry principle is that AWS Lambda has a built-in retry mechanism: whenever our Lambda function fails, the AWS Lambda runtime will retry it automatically, so an asynchronous invocation can be attempted up to three times in total.
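As a language-level illustration of the retry idea (not tied to any AWS service), here is a minimal sketch of a retry helper with exponential backoff; the call_remote_service function is hypothetical and only simulates a flaky dependency.

```python
import random
import time


def with_retries(operation, max_attempts=3, base_delay=0.5):
    """Run `operation`, retrying transient failures with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise                      # give up after the final attempt
            # Exponential backoff with a little jitter before the next try.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)


def call_remote_service():
    """Hypothetical flaky dependency used only for illustration."""
    if random.random() < 0.5:
        raise ConnectionError("transient network failure")
    return "ok"


print(with_retries(call_remote_service))
```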
The last pattern is health endpoint monitoring, which says that a web service or application should expose functional health checks that external tools can access at regular intervals. On the AWS platform there are many concrete implementations of health endpoint monitoring. For example, when we register our EC2 instances with an Elastic Load Balancer or an Auto Scaling group, the ELB or ASG relies on the health checks of the EC2 instances to determine each instance's actual health status.

All right, with this, we have completed this video course. Thank you very much for watching. I truly hope you have enjoyed this video course and have learned some good skills and experience using AWS. Thank you again, and good luck.