Designing Architectures in AWS | Qasim Shah | Skillshare
12 Lessons (1h 22m)
    • 1. Architecture Promo (2:20)
    • 2. Web Application Architecture (5:30)
    • 3. Media and Content Architecture (5:12)
    • 4. Batch Processing Architecture (4:57)
    • 5. High Availability and Fault Tolerant Architecture (6:20)
    • 6. Disaster Recovery Architecture (5:06)
    • 7. File Optimization Architecture (5:11)
    • 8. Media Sharing Architecture (7:10)
    • 9. Online Gaming Architecture (7:29)
    • 10. Hosting WordPress Site Architecture (8:07)
    • 11. AWS Migration Basics (11:48)
    • 12. Using the AWS Well-Architected Tool (13:15)

About This Class

With more organizations moving to the cloud, designing architectures for the migration has become a very important task. If you are familiar with the AWS services and are interested in learning more about sample architectures to host different environments, this is the perfect course for you! 

So, if you want to keep pace with how the world is developing, now is your chance to get started with 'Designing Architectures in AWS' - a one-of-its-kind course!

The flipped classroom model with hands-on learning will help you dive directly into the course as you begin your learning journey. Be sure to watch the preview lectures that set course expectations!

In this course, you'll learn and practice:

  1. When organizations should migrate to the cloud

  2. Migration tools provided by AWS

  3. How different environments are architected in AWS

  4. Best practices, and much more...

Transcripts

1. Architecture Promo: Moving to the cloud is becoming very popular. With that, more and more organizations are deciding to take the leap and move their infrastructure from on-premises to the cloud. In doing that, designing architectures has become very important. Hi, everybody, and welcome to Designing Architectures in AWS. My name is Qasim Shah, and I've been an enterprise architect, helping organizations take that leap from on-premises to the cloud, for over 14 years now. This course is designed for students who have a working knowledge of the AWS environment and who are looking to find real-world situations on how they can develop architectures for their organizations. In this course, what I have done is give multiple examples of developing real-world solutions in AWS that you can take and apply to your organization or your business. So we'll look at developing several different architectures, such as for applications like gaming, or if you have an e-commerce website, how we can develop architectures for them. Not only that, I've also shown how and when organizations should migrate to the cloud. There are many companies out there that do not know when and how they can migrate their systems to the cloud. So I've included lessons on when and how organizations can decide to migrate to AWS and, after doing that, how they can design solutions and architectures in AWS that mimic their on-premises systems. This course is designed, like I said, for intermediate-level students, so you would want to have, or you should have, a working knowledge of the different services that AWS offers. Now, I welcome your feedback. I have put in a lot of effort in this course, and I've made it short and designed it in a way that you guys can apply what you learn in this course to your organizations right away. If you have any questions while you guys are going through the course, please post them in the Q&A section.
I welcome any feedback, and I will be more than happy to answer any questions or clarify any issues you might have in any of the lessons. So what are you waiting for? Click that enroll now button and start learning.

2. Web Application Architecture: Hi, everybody, and welcome to this lesson, looking at how we can build web architectures. This one is focused on how we can build an architecture that's going to be used to host a web application. So building highly available and scalable web hosting can be a very complex and expensive operation. Sometimes you have dense peak periods and wild swings in traffic patterns, which can result in low utilization of expensive hardware. AWS provides the reliable, scalable, secure and high-performance infrastructure required for web applications, while also enabling an elastic scale-out and scale-down infrastructure to match IT costs in real time as customer traffic fluctuates throughout the day, throughout the week, or throughout the month. Now here is a basic diagram of how we can develop an architecture of an AWS infrastructure which can host a reliable and scalable web application for us. Let me walk you through this step by step. Now first and foremost, we're obviously going to need a DNS service, which is what AWS does for us through Route 53. So the users' DNS requests will be served by Route 53, which is a highly available Domain Name System specifically developed by AWS. Network traffic is going to be routed to the infrastructure running in Amazon Web Services. Next, we have something called CloudFront. All of the static, streaming and dynamic content will be delivered by the Amazon CloudFront infrastructure, which is a global network of edge locations. So requests are going to be automatically routed to the nearest edge location, and the content is delivered with the best possible performance.
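As an illustration of the Route 53 plus CloudFront front door described above, here is a rough sketch of the kind of distribution configuration you would assemble for CloudFront with an S3 bucket as the origin. The shape loosely follows what boto3's `create_distribution` call expects, but treat the field names, the bucket name, and the values here as illustrative assumptions rather than a definitive reference:

```python
def cloudfront_s3_distribution(bucket_name: str, comment: str) -> dict:
    """Rough shape of a CloudFront distribution config fronting an S3 bucket.

    Static and streaming content is pulled from the bucket and cached at the
    edge locations; viewers are redirected to HTTPS for secure delivery."""
    origin_id = f"S3-{bucket_name}"
    return {
        "CallerReference": f"web-app-{bucket_name}",  # must be unique per request
        "Comment": comment,
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": origin_id,
                # CloudFront fetches content from the designated S3 bucket
                "DomainName": f"{bucket_name}.s3.amazonaws.com",
                "S3OriginConfig": {"OriginAccessIdentity": ""},
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,
            "ViewerProtocolPolicy": "redirect-to-https",
        },
    }

config = cloudfront_s3_distribution("my-static-assets", "web app static content")
```

A real deployment would pass a structure like this to the CloudFront API and point a Route 53 alias record at the resulting distribution domain.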
Regardless of where you are in the globe, you'll get the content cached locally in the edge location, of which AWS has around 160 locations throughout the globe. So next, the resources and static content used by the web application are going to be stored in an S3 bucket, which, if you guys remember, is a highly durable storage infrastructure designed for mission-critical and primary data storage. This will be our best option, as compared to EBS or EFS, which will not really work for a web application that will be used through CloudFront, because with CloudFront we can designate an S3 bucket as its primary source. So, in the fourth step, HTTP requests are first handled by the Elastic Load Balancing, which automatically distributes incoming application traffic among the fleet of EC2 instances that are going to be running in your infrastructure. Now, as you guys can see, the EC2 instances are deployed and hosted in a multiple Availability Zone infrastructure. What this is going to do is enable greater fault tolerance: if one of the AZs fails or is down, the other one can pick up the traffic while the first one is brought up to speed by AWS. The ELB is also going to provide the seamless load balancing capacity needed in response to incoming application traffic. So next, in the fifth step, we have web servers, again in both of the availability zones, hosted on EC2 instances. Now, with EC2 instances, what's recommended is that the organization develop AMIs, or Amazon Machine Images. So, for example, say they are in an Auto Scaling group. If one of the web servers, or EC2 instances, should fail, the Auto Scaling group is going to automatically provision a new one. So it's highly recommended that we have AMIs for the web servers with the required applications, patches and software already pre-loaded in the AMIs, so when the Auto Scaling group provisions a new instance,
it can just grab that AMI, pop it into the EC2 instance, and it'll be good to go. And then in the last step, we have the core of the application service, which is the database service. To provide the high availability, RDS, or the Relational Database Service, is going to be used in a Multi-AZ deployment, where you have a primary master RDS and then you have a standby RDS in a different availability zone. So you guys can see this architecture provides an overall infrastructure for you to operate a web application in a highly available and reliable environment. You have CloudFront, which provides quick access for the people that are accessing it globally. You have the Auto Scaling group, which distributes the load to multiple EC2 instances, so if you have peak traffic it'll be balanced accordingly, and then you have the Elastic Load Balancing. Also, for the application service, we need the ELB for both the web servers, for the traffic, and the application server, so the application can actually handle the load also. And, most importantly, this is all deployed in a multi-AZ environment, so you have the fault tolerance: if one AZ should fail for some reason, the other can pick up the load while the first one is brought up to speed by AWS. So this is a basic setup if you want to host a web application on AWS, and just as a reminder, the services in the architecture that's required are Amazon Route 53, Amazon CloudFront, the S3 buckets, the load balancing, EC2 instances, the Auto Scaling groups, and then RDS for the database for the application server.

3. Media and Content Architecture: Hi, everybody, and welcome to this lesson, looking at the architecture for building a content and media serving infrastructure. Now most of us would assume that serving visual content is probably one of the most basic and straightforward tasks.
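To make the AMI-plus-Auto-Scaling idea concrete, here is a small, hedged sketch of the parameters you might assemble for an Auto Scaling group that spans two availability zones and launches web servers from a pre-baked AMI. Names like `web-asg`, `ami-12345` and the sizing rules are made-up illustrations; the real request would go through something like the Auto Scaling API's CreateAutoScalingGroup call:

```python
def autoscaling_group_params(name: str, ami_id: str, azs: list, elb_name: str) -> dict:
    """Build the rough shape of an Auto Scaling group request: web servers
    launched from a pre-baked AMI, spread across multiple AZs, behind one ELB."""
    return {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": f"{name}-lc-{ami_id}",  # launch config baked from the AMI
        "AvailabilityZones": azs,          # multi-AZ for fault tolerance
        "MinSize": len(azs),               # keep at least one instance per AZ
        "MaxSize": 4 * len(azs),           # headroom to absorb peak traffic
        "DesiredCapacity": 2 * len(azs),
        "LoadBalancerNames": [elb_name],   # the ELB spreads traffic across the group
        "HealthCheckType": "ELB",          # replace instances the ELB marks unhealthy
    }

params = autoscaling_group_params(
    "web-asg", "ami-12345", ["us-east-1a", "us-east-1b"], "web-elb")
```

Because the AMI already contains the application, patches and software, any instance the group provisions from this configuration is ready to serve traffic as soon as it boots.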
Now that gets complicated when you have serious requirements for low latency or high availability, durability, access control, and if you have millions of views and, obviously the most important one, it has to be under budget. In addition, because of spiky usage patterns, operations teams often need to provision static hardware, network and management resources to support the maximum expected need, which guarantees waste outside of peak hours. Now the good thing about AWS is that it provides a suite of services specifically tailored to deliver a high-performance media serving environment. So let's look at how we can develop an architecture in AWS to overcome some of these shortcomings that we would have if we were to do it on-prem. So the first step in anything that's going to be available over the net is utilizing the Amazon Route 53 DNS service, which is going to be used to direct the user traffic into the AWS ecosystem. Now the next step will be the storage, and you guys can see that for this type of infrastructure the best storage will be Amazon S3, to host the static content on the web. The reason for that is because S3 is inherently highly available and durable, and it's by default designed for scaling out on the web. It also will provide a great way to offload the work of serving static content from your web servers and, most importantly, can also provide secure access to your content over HTTPS. Now, obviously, if we have global users, we'll want them to access the content with low latency, and for that, in the second step, we're going to utilize the CloudFront service of Amazon, which is going to utilize the edge locations that Amazon has around the globe, of which there are 160 and counting to this day.
Now, while using S3 as the origin server for the CloudFront distribution, you gain the advantage of having fast in-network data transfer rates, a simple publishing and caching workflow and, obviously, a unified security framework. Both S3 and CloudFront can be configured as a web service through the AWS Management Console, or, if you prefer, it can also be done through a third-party management tool, because some organizations have their own customized tools for their web applications. So the good thing about AWS is you can also utilize your own management tools if you want. Now, alternatively, as you guys can see in step three, you can also utilize EC2 instances as the origin server instead of S3 for hosting the static content. If, for example, you want a greater degree of control for logging and feature richness in serving the content, then you'd want to utilize the EC2 instances. Otherwise, if it's purely static content, you can get by with just the S3 buckets. So it depends on the type of content and the type of information that you require: if you require the additional control and the logging, then you need the EC2 instances. But just keep in mind that if you do provision those EC2 instances, then that will also raise your costs. Now, the fourth step is live streaming. Featuring the power of Adobe Flash Media Server hosted on EC2, combined with CloudFront for stream distribution and caching, live streaming works really seamlessly on the AWS platform. This configuration uses a web server to host a manifest.xml file, Amazon DevPay EC2 instances to host Flash Media Server with hourly license pricing, and then CloudFront to serve the stream. So with this setup, you basically have an optimal infrastructure to not only host static content, but also provide live streaming.
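The S3-versus-EC2 origin trade-off above can be captured in a tiny helper. This is purely a sketch: it returns an origin description in roughly the shape CloudFront's API distinguishes (an `S3OriginConfig` for a plain static bucket versus a `CustomOriginConfig` for an EC2-hosted origin), with the exact field names treated as assumptions:

```python
def origin_config(domain_name: str, needs_custom_control: bool) -> dict:
    """Pick an origin type: a plain S3 origin for purely static content,
    or a custom (EC2-hosted) origin when you need extra logging/control."""
    origin = {"Id": f"origin-{domain_name}", "DomainName": domain_name}
    if needs_custom_control:
        # EC2 origin: more control and richer logging, but higher cost
        origin["CustomOriginConfig"] = {"OriginProtocolPolicy": "https-only"}
    else:
        # S3 origin: the cheapest option for purely static content
        origin["S3OriginConfig"] = {"OriginAccessIdentity": ""}
    return origin
```

The cost note from the lesson shows up as the branch condition: choosing the custom origin is what brings the extra EC2 instances, and their cost, into the picture.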
Now, if you specifically only have static content, then you can just get by with S3 and CloudFront. But if you have static content and you also want to provide live streaming, then you want to go ahead and use the EC2 instances, for both the static content and also for the live streaming through the Adobe Flash server instance provided by AWS. Again, just as a recap, in terms of the resources required and available in AWS for you to develop a content and media serving environment: you have the EC2 instances; you have Route 53, which is the DNS server that's going to route the traffic to your environment; then the CloudFront for the caching and the low latency; and then you have the S3 buckets for the durable and secure storage of your content.

4. Batch Processing Architecture: Hi, everybody, and welcome to this lesson, looking at how we can use the Amazon AWS infrastructure to do batch processing work. Now, there are lots of different batch-oriented applications in place today that can leverage this type of infrastructure, for example claims processing, large-scale transformation, media transcoding and multipart data processing work. Now, batch processing on AWS allows for the on-demand provisioning of a multipart job processing architecture that can be used for instantaneous or delayed deployment of a heterogeneous and scalable grid of worker nodes that can quickly crunch through large amounts of batch processing tasks. And the best part about it is they can do that in parallel. Now, lots of batch processing architectures are often synonymous with highly variable usage patterns that have significant usage peaks; for example, in finance you usually have month-end processing, which is followed by a significant period of underutilization. The best part about AWS is it can help you overcome that variability. So let's look at how we can develop an architecture to overcome some of these issues.
So here we have the basic architecture of how we can get a batch processing infrastructure set up in AWS. The first step, obviously, is that the users are going to interact with a job manager application, which is going to be deployed in an EC2 instance. This is the main component that's going to control the process of accepting, scheduling, starting, managing and completing batch jobs. Additionally, it's also going to provide the final results after all the crunching is done. So after the user interacts with that first EC2 instance, what's going to happen is the raw job data is going to be uploaded into S3. So we have that big S3 bucket that's going to store all of the raw data for the job. Now, instead of just letting that entire large batch flow to the infrastructure, which is going to cause bottlenecks, what we're going to do is break it up by using the Simple Queue Service, or SQS. Individual job tasks are going to be inserted by the job manager into an SQS input queue on the user's behalf. Then what's going to happen is the worker nodes, basically a fleet of EC2 instances, are deployed in an Auto Scaling group, and that auto scaling is going to accommodate the peak and off-peak periods. Additionally, you can utilize Spot Instances if that batch work is going to be done during off-peak hours; say, if it's big data processing and it can be done during off-peak hours, you can utilize Spot EC2 instances to save even more costs. So, getting back, those EC2 instances are going to be in an Auto Scaling group, and the group is basically a container that ensures the health and scalability of the worker nodes. So the worker nodes are going to pick up the job parts from the SQS queue automatically and perform single tasks that are part of the list of batch processing steps.
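The fan-out step described above — the job manager breaking one big batch into small SQS tasks — can be sketched as a plain Python helper. The message fields (`job_id`, `task_no`, `items`) are invented for illustration; in a real system each body would be sent to the input queue and picked up by a worker node (e.g. via SQS send/receive calls):

```python
import json

def split_batch_job(job_id: str, items: list, chunk_size: int = 10) -> list:
    """Break one large batch job into small task messages, one per SQS send.

    Each message body carries the job id, a task number, and its slice of
    the work items, so worker nodes can crunch the chunks in parallel."""
    messages = []
    for start in range(0, len(items), chunk_size):
        messages.append(json.dumps({
            "job_id": job_id,
            "task_no": start // chunk_size,
            "items": items[start:start + chunk_size],
        }))
    return messages

# 25 work items with a chunk size of 10 become three queue messages
tasks = split_batch_job("claims-2024-01", list(range(25)), chunk_size=10)
```

Keeping each message small is what prevents the bottleneck the lesson mentions: no single worker ever has to swallow the whole batch.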
So after they've processed those tasks, interim results from the worker nodes are stored back into the Amazon S3 bucket. Then, in the sixth step, progress information and statistics are stored in an analytics store, and depending on what type of data you have, that can be either an Amazon SimpleDB or DynamoDB domain, or a relational database. If you require complex relationships, then you would use the RDS service; if it's simple data, you can use DynamoDB. And then lastly, you can also have a chaining process: in the seventh step, you guys can see completed tasks can be inserted into an SQS queue for chaining to a second processing stage. So it all depends on what type of batch processing work you will be doing; it can additionally be chained to a second stage if required. So this infrastructure optimizes the flow of batch processing jobs by breaking up that huge batch job into smaller tasks, which are handled by SQS queues, and additionally the worker nodes are in an Auto Scaling group, which can accommodate the usage peaks. So again, in summary, the services that are required to build an optimal batch processing architecture are the EC2 instances (one would be the main one that the user interacts with, and then you have the worker nodes); then you have Amazon RDS or DynamoDB, the simple database; you have the Amazon S3 buckets; we're going to have the Auto Scaling group for the worker nodes; and then finally the SQS queues to break up that large batch processing job into smaller tasks.

5. High Availability and Fault Tolerant Architecture: Hi, everybody, and welcome to this lesson on how we can build a fault tolerant environment within AWS. Now AWS provides services and infrastructure that are inherently fault tolerant and highly available.
But there are some aspects of the AWS environment that are not inherently fault tolerant and need extra configuration in order for them to be fault tolerant and highly available. For example, EC2 instances within AWS provide infrastructure building blocks that by themselves may not be fault tolerant: hard drives may fail, power supplies may fail, and racks may fail. So it's important to use combinations of features that AWS offers, which we're going to look at, in order for you to achieve fault tolerance and high availability. So before I get into describing the model that you guys see: most of the higher-level services in AWS, such as S3, DynamoDB, SQS and the load balancing, have been built with fault tolerance and high availability in mind. The services that provide the basic infrastructure, such as EC2 or the physical hard drives, the EBS, provide specific features, such as Availability Zones, Elastic IP addresses and snapshots, that a fault tolerant and highly available system must take advantage of and use correctly. So just moving a system into the cloud does not inherently make it fault tolerant or highly available. When we're moving our on-prem system to the cloud, or thinking about moving it to the cloud, we have to develop an architecture that makes use of the services that enable us to make the system fault tolerant and highly available. So, looking at the diagram on the left, load balancing is an effective way to increase the availability of a system. For example, instances that fail can be replaced seamlessly behind the load balancer while other instances continue to operate. Load balancing can be used to balance across instances in multiple availability zones of a region. As you guys can see, there are two availability zones, A and B, and we have web and application servers in both zones, along with a database server, which is replicated in both zones. So here's what the Elastic Load Balancing basically does.
It directs the traffic to either Zone A or Zone B, and if, for example, one of the instances were to fail, it will automatically direct traffic to the other instance, whether in the same availability zone or in a different availability zone. This not only caters for failures of specific EC2 instances or hard drives; if an entire availability zone is down, it can also cater to that by directing traffic to another availability zone. So it's important to run independent application stacks in more than one availability zone, either in the same region or in another region. That way, like I mentioned, if one zone fails, the application in the other zone can continue to run; if you do not have those independent application stacks running in each availability zone, then that's not going to happen. Another way to accomplish that is by using Elastic IPs, which we can see on the right side of the diagram. Now, Elastic IPs are basically public IP addresses that can be programmatically mapped between instances within a region, so they're associated with an AWS account rather than with a specific instance. Those Elastic IP addresses can be used to work around host or availability zone failures by quickly remapping the address to another running instance, or even a replacement instance that was just started by using an AMI. Reserved Instances can help guarantee that such capacity is available in another zone. And lastly, another important thing to keep in mind is that valuable data should never be stored only on the instance storage, because instance storage is linked to the EC2 instance. So if the EC2 instance is terminated, which sometimes can be very easily done, all of the data in that storage will be gone. So, first of all, it is recommended that we use EBS, or the Elastic Block Store, which offers persistent, off-instance storage volumes that are durable, as compared to the on-instance storage.
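The Elastic IP remapping trick described above boils down to a small decision rule: keep a healthy instance's address, otherwise move the address to a healthy standby. Here is a hedged, self-contained sketch of just that decision logic; the actual remap would be an EC2 `associate_address`-style API call, which is deliberately left out, and the instance ids are made up:

```python
def plan_eip_remaps(associations: dict, healthy: list) -> dict:
    """Decide where each Elastic IP should point after a health check.

    associations: {elastic_ip: current_instance_id}
    healthy:      instance ids that passed the health check
    Returns the new {elastic_ip: instance_id} mapping (None if no standby)."""
    taken = set(associations.values())
    plan = {}
    for eip, instance in associations.items():
        if instance in healthy:
            plan[eip] = instance  # current holder is fine; no remap needed
        else:
            # remap to the first healthy instance not already holding an EIP
            standby = next((i for i in healthy if i not in taken), None)
            if standby is not None:
                taken.add(standby)
            plan[eip] = standby
    return plan
```

Because the Elastic IP belongs to the account rather than the instance, applying this plan is just a matter of re-associating addresses; clients keep using the same public IP throughout the failover.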
Additionally, those EBS volumes are automatically replicated within a single availability zone. So what happens if the availability zone fails? Well, you're going to lose those EBS volumes. To increase the durability further, what we need to do is take snapshots, point-in-time snapshots, which can be created and stored in S3 buckets, which are then replicated to multiple availability zones or can even be stored in a different region. So this accommodates not only failures of the instance, but also failures of the availability zones and also failures of the physical EBS hard drives, because S3 buckets inherently are highly available and durable. So by taking point-in-time snapshots of our EBS volumes, we can ensure that if those EBS volumes or those instances fail, or even if the availability zone goes down, we have those exact replicas available in S3 buckets in a different zone or even in a different region. So, in closing, when we're moving our infrastructure from on-prem to the cloud, we need to make sure that we take advantage of the services that are available in order to make our environment highly available and fault tolerant, because by default not all services are fault tolerant and highly available. So some of the services that we do need to keep in mind and configure are Amazon EC2 instances, EBS volumes, Elastic Load Balancing and Amazon S3. We need to make sure that we get all of these services working together coherently in order to have an environment that is fault tolerant and highly available.

6. Disaster Recovery Architecture: Hi, everybody, and welcome to this lesson, looking at how we can optimize disaster recovery through architecting in AWS. So disaster recovery is all about preparing for and recovering from an event that has a negative impact on your IT systems.
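The snapshot strategy just described — point-in-time EBS snapshots backed by S3, optionally copied to another region — can be sketched as the rough request shapes involved. Field names only approximate EC2's snapshot and snapshot-copy calls and should be treated as assumptions, and the volume, snapshot and region identifiers are made up for illustration:

```python
def snapshot_request(volume_id: str, description: str) -> dict:
    """Rough shape of a point-in-time snapshot request for one EBS volume.
    The resulting snapshot is stored durably in S3 behind the scenes."""
    return {"VolumeId": volume_id, "Description": description}

def cross_region_copy(snapshot_id: str, source_region: str, description: str) -> dict:
    """Rough shape of a snapshot copy request, for keeping a replica in a
    second region as extra disaster-recovery insurance."""
    return {
        "SourceSnapshotId": snapshot_id,
        "SourceRegion": source_region,
        "Description": description,
    }

req = snapshot_request("vol-0abc", "nightly backup of web data volume")
copy = cross_region_copy("snap-0def", "us-east-1", "DR replica of web data volume")
```

Run on a schedule, the first call covers instance and EBS failures; adding the second covers the loss of an entire availability zone or region.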
So a typical approach usually involves duplicating the infrastructure to ensure the availability of spare capacity in the event of a disaster. Now, Amazon Web Services allows you to scale up your infrastructure on an as-needed basis, so for a disaster recovery solution this results in a great amount of cost savings. So let's look at how we can architect to do that. Basically, on the bottom right you see the corporate data center, which hosts an application consisting of a database server and an application server with local storage for the content management system. Right now there is an Oracle database server on-premises, along with an application server, and then they have the storage volume. So basically it's an entire on-prem system that they're currently operating. Now, what we can do in order to have disaster recovery on the cloud, in AWS, is set up an AWS Storage Gateway, which is basically a service connecting an on-premises software appliance with cloud-based storage, and the gateway securely uploads data to the AWS cloud, making it a very cost-effective solution for backups and rapid disaster recovery. Now, the database server backups, the application server volume snapshots and the Amazon Machine Images of the recovery servers are all stored in the S3 buckets, which again is a highly durable, reliable and fault tolerant data storage system in AWS. Now, the AMIs, or the Amazon Machine Images, are going to be pre-configured with the operating system and the application software that is currently being used on-premises. So the application servers are going to be duplicated in EC2 instances using AMIs, and those AMIs are going to be stored in the S3 bucket that you guys can see on the top left. Now, a good thing about Oracle and AWS is that Oracle databases can directly back up to an Amazon S3 bucket using the Oracle Secure Backup Cloud Module.
So basically all you need to do is set up the module on the database server on-premises, and then it can automatically back up through a secure connection onto the S3 buckets. So at this point in time, all of your files, all of your machine images, all of your snapshots and your database server are all being backed up into S3 buckets. In case of a disaster in the corporate data center, you can basically recreate the entire infrastructure from backups on the Amazon Virtual Private Cloud. Now, the Amazon VPC lets you provision a private, isolated section of the AWS cloud where you can recreate your entire application and on-prem infrastructure, and you guys can see that on the top right. The application and database servers are going to be recreated using Amazon EC2 instances, and for the volume snapshots you can use the Elastic Block Storage, or EBS volumes, which are then attached to the recovered application server. Then, to remotely access the recovered application, we can use the VPN connection created by the VPC gateway. So basically, since everything is stored in the S3 buckets, we can use all of that data to recreate the on-prem environment in the AWS cloud using EC2 instances. And since Oracle is also supported by RDS, or the Relational Database Service, in AWS, that can also be duplicated in the VPC. So, in conclusion, by using AWS for disaster recovery, we can ensure that we don't need to duplicate everything in terms of the infrastructure for disaster recovery. We can ensure that we have the automated backups already automatically going to the S3 buckets, and in case of a disaster we can automatically switch over to the VPC, which can sit on the cloud in AWS, ready to deploy in case of any disaster in the on-prem systems. So the architecture that's involved in doing disaster recovery, or duplicating the environment in this scenario, is going to be your EC2 instances,
the VPC, the EBS, which is going to store all of your application data, the S3 buckets, which act as a central repository for all of your AMIs, your databases and your files, and then the Storage Gateway, which is automating the backup from the on-prem onto the S3 buckets.

7. File Optimization Architecture: Hi, everybody, and welcome to this lesson on how we can architect for an optimal file synchronization architecture in AWS. Given the straightforward stateless client-server architecture, in which web services are usually viewed as resources and can be identified by their URLs, dev teams are usually free to create file sharing and syncing applications for their departments, for enterprises, or for consumers directly. So let's look at how AWS can help your dev team accomplish those tasks in a secure manner. Basically, the file synchronization service endpoint will consist of an Elastic Load Balancer distributing incoming requests to a group of application servers, which are going to be hosted on EC2 instances. Additionally, an Auto Scaling group automatically adjusts the number of EC2 instances depending on the application's needs. So this not only makes it highly available, it makes it redundant and it makes it durable. On top of that, it's going to save you costs, because the Auto Scaling group will automatically increase and decrease the number of EC2 instances that will be required based on the need. Now, say you want to upload a file. What will need to happen is a client will need to request permission from the service and get a security token. Now, this is what is going to make this a secure operation. After checking the user's identity, the application servers get temporary credentials from the AWS STS, or the Security Token Service. These credentials allow the users to upload the files.
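One way to picture the temporary-credential step above: the application server asks STS for short-lived credentials scoped by a policy that only allows uploads under that user's own prefix in the bucket. The sketch below follows the general IAM policy document shape, but the bucket name, the per-user prefix layout and the action list are assumptions made for illustration, not the course's exact configuration:

```python
import json

def scoped_upload_policy(bucket: str, user_id: str) -> str:
    """IAM-style policy document restricting temporary credentials
    (e.g. from an STS federation-token-style call) so the holder can
    only upload under their own prefix in the shared bucket."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            # confine the upload to this user's own folder in the bucket
            "Resource": f"arn:aws:s3:::{bucket}/{user_id}/*",
        }],
    })

policy = scoped_upload_policy("file-sync-uploads", "user-42")
```

Handing the client credentials bound to a policy like this means even a leaked token can only write to that one user's prefix, which is what makes the upload flow a secure operation.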
Next, users upload the files into the S3 buckets, which again is a highly durable and available storage infrastructure used for mission-critical and primary data storage. Now, S3 is going to make it very easy to store and retrieve any amount of data at any time, and the best part about it is that large files can be uploaded by the same client using multiple concurrent threads to maximize bandwidth usage. Now, to increase the performance, the file metadata, version information and unique identifiers are going to be stored by the application servers in an Amazon DynamoDB table. As the number of files to maintain in the application grows, the DynamoDB tables can store and retrieve any amount of data and serve any level of traffic. Now, file change notifications can be sent via email to users following the resource, using something such as the Amazon Simple Email Service, or SES, which is an extremely easy to use and cost-effective email solution. So if any changes to files occur, the SES is going to shoot over an email to the file owner to let him or her know that this file has changed. Other clients sharing the same file will query the service endpoint to check if newer versions are available. Now, this query is going to compare the list of local file checksums with the checksums listed in the DynamoDB table; instead of going to the application servers and bogging them down, it's going to go directly to the DynamoDB table. If the query finds newer files, they can then be retrieved from the S3 bucket and sent to the client application. If it does not find any newer file, then it does not have to increase the network traffic and access S3; the DynamoDB table will let the client know that there is no newer file available. So this is how we can synchronize and optimize a file service in the Amazon AWS environment. This not only makes the entire infrastructure highly available.
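The checksum comparison the clients perform against the metadata table can be sketched in pure Python. The choice of MD5 and the dictionary shapes are assumptions for illustration; the point is that only files whose remote checksum differs from (or is missing from) the local view need to be fetched from S3, so S3 and the application servers stay out of the hot path:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Content checksum, as a client might compute for each local file."""
    return hashlib.md5(data).hexdigest()

def files_to_fetch(local: dict, table: dict) -> list:
    """Compare local {name: checksum} against the metadata table's view;
    return the names that are new or changed on the server side."""
    return sorted(name for name, remote in table.items()
                  if local.get(name) != remote)

# one file unchanged, one changed remotely, one the client has never seen
local = {"a.txt": checksum(b"hello"), "b.txt": checksum(b"old contents")}
table = {"a.txt": checksum(b"hello"),
         "b.txt": checksum(b"new contents"),
         "c.txt": checksum(b"brand new file")}
```

A cheap table read like this is what lets clients poll frequently without bogging down the application servers or generating S3 traffic for files that haven't changed.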
it also makes it durable, and it also decreases your costs by using Auto Scaling and by using the DynamoDB table to decrease the traffic that's going to your application servers. So, in conclusion, the services that we would need to develop an architecture for a file synchronization service in AWS: we're going to need the EC2 instances as our application servers. We'll need Auto Scaling and Elastic Load Balancing in order to automatically scale those application servers up and down and balance the incoming load through the ELB. We'll have the DynamoDB table for storing the metadata and checking whether newer versions are available, and the S3 bucket as the main storage repository; STS, or the Security Token Service, to make sure that all the requests coming in are from authenticated users; and SES, which is going to be used as our main notification service to send email to users letting them know that newer files are available or specific files have changed, depending on how we want to have this set up. Additionally, we can also have Route 53, which is Amazon's DNS service, if these files are going to be accessed through the internet outside of the organization. 8. Media Sharing Architecture: Hi, everybody, and welcome to this lesson on looking at how we can develop a framework if we want to do media sharing on our infrastructure. Media sharing is probably one of the hottest markets on the internet right now. Customers and consumers have a staggering appetite for placing photos and videos on social networking sites and for sharing their media in custom online photo albums. The growing popularity of media sharing means scaling problems for the site owners, who are facing ever increasing storage and bandwidth requirements and increased go-to-market pressure to deliver faster than the competition.
Since most businesses today have limited manpower, budgets and data center space, AWS offers a unique set of opportunities to compete and scale without having to invest in hardware, staff or additional data center space. Utilizing AWS is not an all or nothing proposition; depending on the project, different services can be used independently. So let's look at how we can architect such an infrastructure. This infrastructure is basically broken up into two parts: we have an uploading part, and then we have a content delivery part. So let's take a look at how users can upload data onto the AWS environment. Sharing content first involves, obviously, uploading the media files to an online service. So what we're going to do is have an Elastic Load Balancer distribute incoming traffic to upload servers, which are going to be a dynamic fleet of EC2 instances. Amazon CloudWatch monitors these servers, and an Auto Scaling group manages them, automatically scaling them up or down based on the load. After that, the original uploaded files are going to be stored in an S3 bucket, which is a highly available and durable storage service offered by AWS. Now, to submit a new file to be processed, after uploading, the upload web servers push a message to SQS, which is the Simple Queue Service. The queue is going to act as a communication pipeline between the file reception and the file processing components. By breaking up the file reception and processing components, we are reducing the load on the EC2 instances, thereby increasing the performance and the upload and processing speeds that the customers are going to encounter. Now, the processing pipeline is basically also a dedicated group of EC2 instances used to execute any kind of post-processing task on the uploaded media files.
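The decoupling that SQS provides between the upload tier and the processing tier can be illustrated with a toy Python model. The real service would be SQS via an SDK; here the queue is just Python's in-process `queue.Queue`, and all the names are invented for illustration:

```python
import queue

# Stand-in for the SQS queue between upload servers and the
# processing fleet: workers only ever see messages, so neither
# tier needs to know how fast the other is running.
jobs = queue.Queue()

def upload_server(filename: str) -> None:
    """After storing the original in S3, push a processing job."""
    jobs.put({"bucket": "uploads", "key": filename})

def processing_worker() -> list:
    """Drain the queue, 'transcoding' each file independently of
    how quickly the upload tier accepted it."""
    done = []
    while not jobs.empty():
        msg = jobs.get()
        done.append(msg["key"] + ".processed")
        jobs.task_done()
    return done

for name in ["cat.mp4", "dog.mp4"]:
    upload_server(name)
print(processing_worker())  # ['cat.mp4.processed', 'dog.mp4.processed']
```

The point is the shape of the design: the upload tier returns to the user immediately, and the processing fleet pulls work at its own pace.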
For example, video transcoding, image resizing and many other things that users usually do to uploaded photos or uploaded media. To automatically adjust the needed capacity, again an Auto Scaling group manages it. Additionally, you can use Spot Instances to dynamically extend the capacity of the group and to significantly reduce the file processing costs. So by having those additional Spot Instances, we can reduce our costs by doing the post-processing tasks during off-peak hours, and we can reduce the number of dedicated EC2 instances in that Auto Scaling group. Now, once the processing or post-processing is complete, S3 again is going to store the new output files. As a choice, the original files can be stored in a regular S3 bucket, while the processed files can be kept in an Infrequent Access or Reduced Redundancy bucket to further decrease the costs. Media-related data can then be put in RDS, the Amazon Relational Database Service, or in Amazon DynamoDB, depending on the type of information that is going to be stored for that media. After that, a third fleet of EC2 instances is going to be dedicated to hosting the website front end of the media sharing service. So this is the second half of the infrastructure: media files are distributed from S3 to the end user via CloudFront, which is a content delivery network, to reduce latency by using edge locations. And then again, an Elastic Load Balancer and Auto Scaling are used on the web servers to not only balance the load but decrease the cost by increasing or decreasing the number of EC2 instances in the Auto Scaling group. So in this infrastructure we're basically breaking up the upload and delivery into two separate streams. By breaking up this infrastructure into three separate EC2 instance clusters, we're not only increasing performance, but we're essentially decreasing the costs by using the Auto Scaling group.
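The choice between a Standard bucket for originals and an Infrequent Access bucket for processed output boils down to a small decision rule. Here is a hedged sketch of that rule; the access-frequency threshold is made up for illustration and is not an AWS recommendation:

```python
def storage_class(is_original: bool, expected_gets_per_month: int) -> str:
    """Pick an S3 storage class along the lines the lesson describes:
    originals stay in Standard, while derived files that are read
    rarely can go to Standard-IA to cut storage cost. The threshold
    of 30 reads/month is illustrative only."""
    if is_original:
        return "STANDARD"
    return "STANDARD" if expected_gets_per_month > 30 else "STANDARD_IA"

print(storage_class(True, 1000))   # originals -> STANDARD
print(storage_class(False, 2))     # cold derived file -> STANDARD_IA
```

In practice the storage class is set per object at upload time (or via a lifecycle rule), but the trade-off is the one this function encodes: cheaper storage in exchange for higher per-retrieval cost.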
So if the demand is there, the EC2 instances automatically scale up; if it's not there, they automatically scale down. This keeps your costs in check but also increases customer satisfaction, because users will have low latency and faster processing. Additionally, by using CloudFront, they will have the low latency of accessing those files, either the original files or the processed files, through that CloudFront content delivery network. So again, to sum it up, the services that are optimally required to have a media sharing network: we have the EC2 instances in the upload section, in the processing section and then in the web server section. Optionally, we have the Spot Instances for the processing pipeline; if, for example, there are large amounts of media that need post-processing tasks like transcoding, this can be done during off-peak hours, further decreasing costs through the use of Spot Instances. Then we have Auto Scaling and load balancing in order to decrease the cost, increase the performance and make it highly available. We have Amazon Route 53, which is the DNS service through which the users can access the web servers and the upload servers. We have CloudFront to decrease the latency of delivering the content back to the end users; the S3 buckets to store all the media files, both the original and the processed files; the data store, whether it's an RDS service in terms of a relational database or a DynamoDB; and then finally SQS to break up the uploading and to increase the performance of the processing, by keeping all the jobs in an SQS queue and delivering them to the EC2 instances when they can process them. 9. Online Gaming Architecture: Hi, everybody, and welcome to this lesson on developing an architecture on AWS.
If you want to host games online, this lesson is for you. Now, for hosting games online, most of the time there are unexpected traffic patterns and highly demanding request rates. The good thing about AWS is that you have the ability and the flexibility to start small and power up your architecture in response to your players. So as they grow, your architecture can grow with them; you can scale up or scale down your architecture to make sure you are only paying for the resources that are driving the best experience for your game. You can use the managed services AWS provides for popular caching and database technologies and leverage an architecture that captures the best practices of some of the largest games running on AWS today. So let's look at the architecture that some of those games are utilizing. I know this looks quite overwhelming, but don't worry, let me walk you through it step by step. Now, the first thing that we need to do is utilize Amazon Route 53. What that will do is make sure that your players are always able to discover your service endpoints. You can use the built-in routing policies to route users based on latency or geography, because most of the time your players are going to be geographically diverse. You want to make sure that they're logging into your endpoints from wherever they are in the globe, and Route 53 inherently enables you to route their traffic based on where they are in the globe. After we figure out where they're located, the second step is that we can route users to our back end using Elastic Load Balancing, which again scales automatically for incoming traffic. Additionally, we can keep the players' data secure in transit via HTTPS by leveraging the SSL termination capabilities of the ELB. Next come our web servers, which again are going to be running on EC2 instances in an Auto Scaling group that will span multiple availability zones.
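Conceptually, latency-based routing in Route 53 answers one question per player: which regional endpoint is closest in measured latency? A toy Python sketch of that decision, with made-up endpoint names and latency numbers:

```python
def best_endpoint(latencies_ms: dict) -> str:
    """Return the regional endpoint with the lowest measured latency,
    mimicking what a latency-based routing policy decides for each
    player. The endpoints and numbers below are purely illustrative."""
    return min(latencies_ms, key=latencies_ms.get)

# Hypothetical measurements for one player in Europe.
player_latencies = {
    "us-east-1.game.example.com": 120,
    "eu-west-1.game.example.com": 35,
    "ap-south-1.game.example.com": 210,
}
print(best_endpoint(player_latencies))  # the EU endpoint wins
```

In the real service the measurements come from AWS's own network data rather than the client, but the routing outcome is this minimum-latency selection.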
What that will do is not only accommodate the growing and shrinking of your player base, it will also provide fault tolerance. So if one of the availability zones goes down, the other one can pick up the slack. Now, just a tip: AWS recommends using the M4 instance types with enhanced networking and EBS-optimized enabled, which will provide the best performance for gaming. After the traffic hits the EC2 instances, the next step is to separate the app tier from the web tier and leverage an internal ELB. This load balancer provides the additional benefit of added security by residing in a private subnet and making sure that no external traffic ever reaches your app tier. Moving on down, we have Amazon ElastiCache for Redis, which is going to provide a fully managed solution that enhances robustness and reduces the cost of installing, operating and maintaining a highly available and scalable Redis cluster. Additionally, you can also leverage Multi-AZ ElastiCache in the game to provide automated disaster recovery, and a scalable tier with read replicas if required, depending on how large your game is going to be. Then we come towards the end and utilize Amazon Aurora, a MySQL-compatible database, which provides high read and write throughput, up to 64 terabytes of six-way replicated storage, and up to 15 low latency read replicas in a multi-AZ environment. Now, when compared to other instances in RDS, this by far has the best performance; if you compare it to MySQL or Microsoft SQL Server, this would provide the best performance. Now, just another tip, or food for thought: Amazon did a survey, and gaming customers have seen a two to three times reduction in cost after migrating to Amazon's Aurora database service from another database service.
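The ElastiCache tier in front of Aurora follows the classic cache-aside pattern: check the cache first, and only hit the database on a miss. Here is a minimal in-memory model; both stores are plain dicts and all the names are invented for illustration:

```python
# Cache-aside: check ElastiCache first, fall back to the database
# only on a miss. Both stores are modeled as dicts here.
cache = {}
database = {"player:1": {"name": "Ava", "level": 12}}
db_reads = 0

def get_player(key: str) -> dict:
    global db_reads
    if key in cache:            # cache hit: no database round trip
        return cache[key]
    db_reads += 1               # cache miss: read through to the DB
    value = database[key]
    cache[key] = value          # populate the cache for next time
    return value

get_player("player:1")
get_player("player:1")
print(db_reads)  # 1 -- the second read was served from the cache
```

In production the `cache` dict would be a Redis client talking to the ElastiCache endpoint, but the read path logic is the same, and it's what takes the pressure off the database tier.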
Additionally, the game can also benefit from the high speed, low latency managed NoSQL database, which is Amazon DynamoDB, which provides predictable performance and scalability. It depends on which one you want to utilize, Aurora or DynamoDB; just keep in mind DynamoDB is a NoSQL database, so the type of data that will be stored will determine whether you use Aurora or DynamoDB. Both have the best performance. Now, for storage, the best option is going to be using the Simple Storage Service, or S3, to store the game assets, the DLC and the log files generated by the servers. As the user base grows geographically, we can also utilize Amazon CloudFront as a distributed cache for content, which is going to use the edge locations, of which Amazon has around 170 throughout the globe. And lastly, we can use push notifications through SNS, the Simple Notification Service, with out-of-the-box support for Apple, Google, Amazon and Windows platforms. So with this setup, we can provide the best gaming experience for the users. It will grow and shrink with the user base. If you have maybe a few hundred users in the beginning, the Auto Scaling group will keep the EC2 instances to a minimum. As the user base grows, the Auto Scaling group will increase the EC2 instances, and that will accommodate the increased users without having an impact on the latency or the performance. In conclusion, the services required to build an optimal gaming architecture in AWS are: Amazon Route 53 to route the traffic to the best geographic point; then we have the load balancing to make sure that the performance is not impacted; then we have the EC2 instances, which are going to act as our web servers and our app servers on different subnets, to ensure that public traffic stays in the public subnet and does not go into the private subnet.
Then we have Amazon ElastiCache to store the cached content; as our primary database, we can utilize either Amazon Aurora or DynamoDB, depending on the content; and for the main storage, we have the Amazon S3 buckets. Then finally, as the users and the traffic grow, as the popularity of the game grows, you can utilize Amazon CloudFront to reduce latency by utilizing the edge locations distributed throughout the globe. 10. Hosting WordPress Site Architecture: Hi, everybody, and welcome to this lesson on looking at how we can architect a WordPress hosting infrastructure in AWS. Now, WordPress is probably one of the world's most popular web publishing platforms, and statistics say that almost 27% of all websites that are online are using WordPress; from personal blogs to some of the biggest news sites out there, they are on the WordPress platform. Since WordPress is used so widely, there is a typical AWS architecture that we can develop to start hosting WordPress on AWS. So let's look at how we can go ahead and develop the architecture in AWS. Don't worry, this might look pretty overwhelming, but let me walk you through it step by step. Let's start from the left hand side, where we see the users coming into Amazon Route 53, which is AWS's DNS service. The DNS service is going to route the traffic to our Amazon CloudFront, and CloudFront is going to store the static and dynamic content. The reason we're going to store it in CloudFront is so we can reduce the latency, because CloudFront utilizes edge locations, which Amazon has spread all across the globe. This way, it doesn't matter where your users are; their first point of contact will be the edge location where the CloudFront static and dynamic content is hosted, therefore reducing the latency by quite a bit once the traffic hits CloudFront.
So let's say the content is not cached locally at the edge location. CloudFront is then going to submit the request to the network, and the first point of contact is going to be the internet gateway. The internet gateway is basically going to allow communication between the instances in the VPC and the internet. After it hits the internet gateway, we're going to take the traffic to a NAT gateway, that is, a network address translation gateway. In each subnet there's going to be one NAT gateway; you guys see one up on top, and then there's one on the bottom. That's going to enable the Amazon EC2 instances in the private subnets, both application and data, to access the internet, and it's always good practice to have a NAT gateway to segregate your internal and your external networks. The reason you guys see two NAT gateways in two different availability zones is for high availability. So if one availability zone was to go down, for maintenance or for some other issue, the other availability zone will be able to pick up the traffic, and users are not going to notice any downtime. After the NAT gateway, we're going to utilize the Application Load Balancer, which is going to distribute the web traffic across an Auto Scaling group of Amazon EC2 instances in multiple availability zones, like I just mentioned. The load balancer is not only going to help us distribute the traffic of our users, it's also going to reduce the costs, since we are using an Auto Scaling group. As the traffic increases, our EC2 instances will also increase, but consequently, as the traffic decreases, so will our EC2 instances. This way you don't have to have a whole bunch of Reserved or On-Demand EC2 instances always running; the Auto Scaling group will automatically scale up and scale down based on the need and the demand.
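The scale-up and scale-down decision the Auto Scaling group keeps making for the web tier can be sketched as a small function. The CPU thresholds and the min/max bounds here are illustrative assumptions, not AWS defaults:

```python
def desired_capacity(current: int, avg_cpu: float,
                     minimum: int = 2, maximum: int = 10) -> int:
    """Toy version of an Auto Scaling decision: add an instance when
    average CPU is high, shed one when it is low, and always stay
    within the group's configured min/max bounds."""
    if avg_cpu > 70:
        current += 1           # demand rising: add an instance
    elif avg_cpu < 30:
        current -= 1           # demand falling: shed an instance
    return max(minimum, min(maximum, current))

print(desired_capacity(4, avg_cpu=85))  # 5
print(desired_capacity(4, avg_cpu=10))  # 3
print(desired_capacity(2, avg_cpu=10))  # 2 -- never below the minimum
```

In AWS this policy lives in the Auto Scaling group's configuration and reacts to CloudWatch alarms, but the bounded step-up/step-down behavior is the idea the lesson keeps coming back to.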
So from the Application Load Balancer, we move on to step number five, where we're going to run the WordPress site using EC2 instances. With Amazon EC2 instances, we can install the latest versions of WordPress, Apache web server, PHP 7 and OPcache, and build an Amazon Machine Image that will be used by the Auto Scaling group launch configuration to launch new instances in the group. So, for example, as the traffic increases, the Auto Scaling group is going to recognize that more EC2 instances are needed, and it will use that AMI, or Amazon Machine Image, to trigger new instances to be launched and set up. Now, if database access patterns are read-heavy, we might want to consider using a WordPress plugin that takes advantage of cache tiering, like Amazon ElastiCache, which you guys see as Memcached in front of the database layer, to cache frequently accessed data. Again, the whole point of this is to make sure that we reduce the latency, so the end users don't notice any lag regardless of how many users are accessing the WordPress website. By using ElastiCache or Memcached in front of the database, we are going to greatly reduce the stress that is put on the database. Next comes the database. It's highly recommended to simplify the database administration by running Amazon RDS, the Relational Database Service, using either Aurora or MySQL. Aurora, if you guys are not familiar, is Amazon's own database service, or you can use an industry-standard MySQL database. There's also Microsoft SQL Server, and, depending on the type of data, DynamoDB could also be used; but again, whether the data is relational or not will determine whether you use Aurora or MySQL, or DynamoDB. And the Amazon EC2 instances access shared WordPress data in an Amazon EFS file system using mount targets in each availability zone in your VPC, which you guys see as the last step, step number eight. By using Amazon EFS, which is by nature very simple, highly available and scalable, the WordPress instances have access to the shared unstructured WordPress data like PHP files, config, themes, plugins, et cetera. So this is a basic setup of how you would want to have your environment provisioned on AWS if you are going to be hosting a WordPress website. Just as a recap, the services that we would want to provision in hosting a WordPress website are, first of all, the CloudFront distribution to reduce the latency, and then we'd want to get the VPC set up in a multi-AZ environment. You can have either one VPC, like you see on the screen, or, if you prefer, you can also have multiple virtual private cloud networks if you want to host them in separate regions. But for the sake of simplicity, we've kept one VPC but put it in two availability zones to make sure that our environment is highly available. Then we have the Application Load Balancer and the Auto Scaling group; the load balancer is just going to distribute the load to the different EC2 instances, whereas the Auto Scaling group is going to scale our environment up and down based on the demand. Then we have, obviously, the EC2 instances, and optionally ElastiCache or Memcached, depending on whether our data can be cached, and then we have our database instances. Here you guys see Aurora, but MySQL or even DynamoDB could also be substituted, depending on the type of data and the type of actions that the database will be performing. And then finally, all of our files are going to be stored in Amazon EFS, which is the optimal storage system for a WordPress hosting website as compared to an S3 bucket or EBS volumes.
So if you are going to be setting up a WordPress website, this is the optimal architecture that you want to make sure is set up, at a minimal level, so that your WordPress website is hosted in a highly available environment and is also reducing the latency for your end users; regardless of where they are in the globe, they will get the best performance thanks to the CloudFront edge locations. 11. AWS Migration Basics: Hi, everybody, and welcome to this lesson on looking at why and when an organization would want to migrate to AWS. In the following lessons, we're going to look at the different architectures we can create in Amazon AWS and how we can implement them, so in this lesson I wanted to give you a good overview of what organizations should do before they decide to migrate to the cloud, and specifically to AWS. Now, there are lots of reasons why an organization would want to migrate to the cloud. Some are migrating to the cloud to increase the productivity of their workforce. I've seen a lot of companies with data center consolidation or rationalization projects migrating to the cloud, especially those that are preparing for an acquisition or divestiture, or have otherwise experienced some kind of infrastructure sprawl over the years. There are also companies that are looking to completely reimagine their business using modern technology as part of a larger digital transformation program, and I've been involved with quite a few of those in the past few years. With the advent of cloud computing and cloud applications, lots of companies are looking to transform their infrastructure and move to the cloud. Before we decide to move to the cloud, there are a few things that we need to keep in mind. Every organization is going to have their own unique reasons and constraints, but I've seen a lot of common drivers, which I wanted to share with you guys, that customers consistently apply when migrating to the cloud. The first one is operational costs.
Key components of operational costs are the unit price of the infrastructure, the ability to match supply and demand, finding a pathway to optionality, employing an elastic cost base, and transparency. We have to make sure that we not only know each one of those components, but also keep in mind how AWS, or any cloud platform, can help us achieve these operational cost goals. Then we have workforce productivity. Now, typically productivity is increased by two key factors: first, not having to wait for infrastructure, and second, having access to the breadth and depth of AWS, with over 90 services at your disposal that you would otherwise have to build and maintain yourselves. In fact, it's common for cloud platforms such as AWS to see workforce productivity improvements of close to 30 to 50% following a large migration. The third one is cost avoidance. Eliminating the need for hardware refresh programs and constant maintenance programs are the key contributors to cost avoidance. Then there's business agility. Migrating to the AWS cloud helps increase overall operational agility; it lets you react to market conditions more quickly through activities such as expanding into new markets, selling lines of your business, and acquiring available assets that offer competitive advantage. By using AWS Organizations, you can bring multiple AWS accounts together, which enables you to operationally manage them as a single unit while also keeping them separate. Additionally, through the use of AWS Lambda functions and such, you're able to build applications on a serverless platform, which enables you to increase and decrease your capacity as and when required. And then, lastly, we also have operational resilience.
Now, this may seem obvious, but reducing an organization's risk profile will also reduce the cost of risk mitigation. With over 16 regions comprising over 42 availability zones, Amazon Web Services has a global footprint and improved uptime, which also reduces your risk-related costs. So these are the key business drivers that organizations usually use in deciding when and how to migrate to the cloud, and specifically in choosing AWS. Now, the path to cloud adoption is quite unique for every organization. The stages of adoption that you guys see described here can be used as a way to understand some of the steps involved. First, we begin with the project phase, which is when you are running projects to get familiar with and experience benefits from the cloud. Then there's the foundation stage: after experiencing the benefits of the cloud and deciding this is right for you, you then build a foundation to scale your cloud adoption. This includes creating a landing zone, which is a pre-configured, secure multi-account environment in AWS. You can also set up the Cloud Center of Excellence and operations model, as well as ensuring security and compliance readiness. After you have the foundation down, that's when we get to the migration stage, in which you migrate existing applications, including mission critical applications, or the entire data center to the cloud, as you scale your adoption across a growing portion of your portfolio. And lastly, we have reinvention. Now that the operations are in the cloud, you can focus on reinvention by taking advantage of the flexibility and capabilities of AWS to transform your business, by speeding time to market and increasing attention on innovation. So this is the basic adoption strategy and stages that a lot of organizations use and find useful. Now again, like I mentioned, every organization will have their own stages,
but these are the basic, bare bones stages that most organizations will follow in one shape, form or another. Now, there may be some cases where you're, let's say, contemplating large legacy migrations in isolation, but most of the time migrations are going to be part of a larger enterprise transformation project, and most of them involve a five-phase approach. In the first phase, we have migration preparation and business planning. Here, you determine the right objectives and begin to get an idea of the types of benefits that you're going to realize. It starts with some foundational experience and developing a preliminary business case for a migration. This requires taking your objectives into account, along with the age and architecture of existing applications and their constraints. In the second phase, you have portfolio discovery and planning. Here, you need to understand your IT portfolio and the dependencies between applications, and begin to consider what types of migration strategies you will need to employ to meet your business case objectives. With your portfolio discovered and a migration approach chosen, you're in a good position to build a full business case. Next, you have the third and fourth phases, which are designing, migrating and validating the applications. Here, the focus moves from the portfolio level to the individual application level, and you design, migrate and validate each specific application. Each application is designed, migrated and validated according to one of six common application strategies, which AWS calls the six R's. But just keep in mind that there is a whole defined process for migrating your applications onto AWS.
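The six R's the lesson walks through in detail next can be thought of as a decision helper that maps an application's attributes to a strategy. This is a toy sketch only; the attribute names and the order of the checks are my own simplification, not an official AWS rule:

```python
def migration_strategy(app: dict) -> str:
    """Map an application's attributes to one of the six R's.
    The checks run from 'do nothing' decisions down to the default
    lift-and-shift; every attribute name here is invented."""
    if app.get("no_longer_useful"):
        return "retire"
    if app.get("keep_on_prem"):
        return "retain"
    if app.get("switch_to_saas"):
        return "repurchase"
    if app.get("needs_rearchitecture"):
        return "refactor"
    if app.get("minor_cloud_tweaks"):
        return "replatform"
    return "rehost"            # default: lift and shift

print(migration_strategy({"no_longer_useful": True}))    # retire
print(migration_strategy({"minor_cloud_tweaks": True}))  # replatform
print(migration_strategy({}))                            # rehost
```

A real portfolio assessment weighs cost, risk and dependencies rather than boolean flags, but running every application through a consistent decision like this is the essence of the portfolio planning phase.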
Once you have some foundation experience from migrating a few APS and plan in place that the organization can get behind, then it becomes time to accelerate the migration and achieve scale in terms of migrating your entire infrastructure and applications onto eight of us, and in the final phase is the operate. So as up applications are migrated, you iterated on your new foundation, turn off old systems and then constantly iterating toward a modern operating model. Now you're operating Model becomes an evergreen set off people's processes and technologies that constantly improves as you Margaret, more applications onto the AWS cloud. So this is basically a simple process off how you can migrate your infrastructure and your applications from your arm from system to the AWS cloud. It's also important to consider that while one of the six strategies that may be best for migrating certain applications in the given portfolio, another Streisand might work better for moving different applications in the same portfolio . So here you guys see, the six common strategies are most used strategies that organizations employ in terms of moving from the on prem into the cloud. So first want you guys. He on top is re holster also refer to as lift and shift in large legacy migration scenario , where an organization is looking to quickly implement its migration and scale. To me, the business case majority applications are we hosted, which can, which most of the obligations utilize services such as AWS SMS to be automate the relisting process. Then there's also Leap re platform, which is also lift, tinker and shift. So this entails making a few clouds optimization in order to achieve some tangible benefit without changing the core architecture off the application. So whereas in Re Horse, you simply pick and drop into the cloud where tinker you pick it up, you make a few modifications to it before dropping it into the cloud. Then you have repurchase, which is dropping shop. 
This is a decision to move to a different product, and it likely means your organization is willing to change the existing licensing model you've been using. For example, for workloads that can be easily upgraded to newer versions, this strategy might allow a feature set upgrade and a smoother implementation. A good example of this is, let's say, if you're using a legacy database, you can decide to do the repurchase, or drop and shop, and migrate to the Amazon Aurora or Amazon DynamoDB databases. The fourth one we have is refactor, or re-architect. Typically, this is driven by a strong business need for features, scale or performance that would otherwise be difficult to achieve in the application's existing environment. So if your organization is looking to boost agility or improve business continuity by moving to a service-oriented architecture, this strategy may be worth pursuing. Then we also have the option to retire, which is identifying IT assets that are no longer useful and can be turned off. And then retain: you may want to retain portions of your portfolio because there are some applications that you're not ready to migrate and feel more comfortable keeping on-prem, or you're not ready to prioritize an application that was recently upgraded and then make changes to it again. In those situations, you can decide to either retire an application altogether or retain it on-prem, and just migrate the rest of the applications or your infrastructure to the cloud. So now we have a good idea of when an organization should and could migrate to the AWS cloud. Now that you've decided to migrate to the cloud, have a good idea of how you're going to do it, and have your business goals in place, let's look at different architectures that we can design and architect in AWS to host different types of environments. Let's dive into the rest of this course and look at how we can architect different platforms on AWS. 12.
Using the AWS Well architected tool: Hi, everybody, and welcome to this lesson on the AWS Well-Architected Tool. This is a tool that is basically going to help you review the state of your workloads and compare them to the latest AWS architectural best practices. It's developed by AWS to help cloud architects build secure, high-performing, resilient, and efficient application infrastructure. It's going to provide you with a consistent approach to evaluate architectures, and believe me, it's been used by tens of thousands of organizations across the globe and been given great reviews by all of them. The best part about it is that it's a free tool available in the AWS Management Console, which we'll look at in a few minutes. All we do is basically define our workload and answer a set of questions, and it's going to produce results for us. So in the diagram, you guys see what we're basically going to do: we're going to identify the workload to review, we're going to answer a set of questions that AWS poses to us, and the tool is going to review the answers against the five pillars established by the Well-Architected Framework: operational excellence, security, reliability, performance efficiency, and cost optimization. If you remember from the previous lesson, we briefly looked at all five of these. And then after that, here's what's going to pop out: you'll get videos and documentation related to the AWS best practices, it's going to generate a report that summarizes the workload review, and then you can also review the results of the workload across the organization in a single dashboard. When we are deciding to migrate to AWS and developing architecture in AWS based on our business, it's always best practice to use this tool to first define our workload and what kind of changes we need to make.
Because if you are migrating to the cloud, or if you are developing a new architecture in AWS, you always want to make sure that you're using the latest technologies and following the best practices, and this is a free tool to help you do that. So let's log into our Management Console and see how we can use this tool to help us figure out if our architecture is following best practices. So here we are in the dashboard of the AWS Management Console. I've already gone into the Well-Architected Tool, and this is the diagram that we basically looked at. So what I'm going to do is go ahead and click on Define workload to start answering a few questions and see what kind of results we're going to get. And here we can select which industry our company falls under. There is a host of industries that we can select from, so I will just pick a sample one, let's say digital advertising. And then there's also an option to further dive into the specific industry. So since I have selected digital advertising, it's going to break that down for me; if I, for example, selected financial services, it would break that one down differently. Let's stick with digital advertising, and let's say that I am a publisher of ads. And here we can select where the workload is going to run for our organization. These are all of the AWS regions across the globe, so depending on where our organization or our offices are located, we can select that specific region. And if we have multiple regions, we can also select multiple regions, and it will produce the workload for us there. Let's say I have my main operations in the U.S., and I also have a team operating in India; I would just select these two regions within the AWS environment. Then we select whether the workload is going to run in production or preproduction; I'm going to select production.
And if you have multiple AWS accounts, you can also have this span across those multiple accounts. I'm going to go ahead and define our workload. And here is where we can start answering those questions based on the five pillars, which we've already discussed previously. So what I'm going to do is click on Start review, and here it's going to ask us nine questions about operational excellence. It's got eleven questions on security, nine questions on reliability, eight questions on performance efficiency, and then it also has nine questions on cost optimization. So we would have to go through and answer each one of these, depending on whether they relate to our business case or not. If, for example, we look at the first one in operational excellence, it's asking: how do we determine our priorities? And then it has a set of answers for us. Additionally, we can also put notes, so if there's additional information we want AWS to know about how we determine our priorities, we can put it in the notes. We also have multiple other options: if this question does not apply to us based on our type of business, we can say the question does not apply, or if none of the answers are applicable to our business, there is also a "none of these" option for all questions. Additionally, if you want to know what they mean by "evaluate external customer needs," there's an info option here. If we click on that, it gives us details for all of these answers on the right-hand side. So, for example, for evaluating external customer needs, it lets us know what AWS means by that: it means involving key stakeholders, including business, development, and operations teams, to determine where to focus operations efforts on external customer needs. This will ensure that you have a thorough understanding of the operations support that is required to achieve business outcomes.
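As a side note, the same workload definition we just did in the console can also be scripted through the Well-Architected Tool's API. Here's a minimal sketch in Python of how you might assemble the request for boto3's `wellarchitected` client; the workload name, description, and industry values below are made-up examples, and depending on your account setup the API may require additional fields (such as a review owner):

```python
# Illustrative sketch: building the parameters for the Well-Architected
# Tool's CreateWorkload call, mirroring the console steps above.
# All names and industry values here are example placeholders.

def build_workload_params(name, description, environment, regions,
                          industry_type=None, industry=None):
    """Assemble request parameters for wellarchitected:CreateWorkload."""
    if environment not in ("PRODUCTION", "PREPRODUCTION"):
        raise ValueError("environment must be PRODUCTION or PREPRODUCTION")
    params = {
        "WorkloadName": name,
        "Description": description,
        "Environment": environment,
        "AwsRegions": regions,
        # Review against the standard Well-Architected lens (five pillars).
        "Lenses": ["wellarchitected"],
    }
    if industry_type:
        params["IndustryType"] = industry_type
    if industry:
        params["Industry"] = industry
    return params


params = build_workload_params(
    name="digital-ads-workload",
    description="Digital advertising publisher workload review",
    environment="PRODUCTION",
    regions=["us-east-1", "ap-south-1"],  # U.S. main office plus India team
    industry_type="Digital_Advertising",  # illustrative value
    industry="Publishing",                # illustrative value
)
# You would then pass these to boto3, for example:
#   boto3.client("wellarchitected").create_workload(**params)
```

This just builds the parameter dictionary; the commented-out boto3 call is where the console's Define workload step would actually happen.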
So if you want further clarification on which answers are right, just click on the info and it will give you details for all of the answers on the right-hand side. And this is the same for all of the five pillars. Click on Security and, like I mentioned, all of it is the same, and we can get additional answers here. For example, the first question in security is: how do you manage credentials and authentication? Here we can select answers, and again most of them are multi-select, so it's not one or the other; we can select as many options as are applicable to us. I'm going to go ahead and quickly answer all of these questions, and then you can come back and see how AWS has evaluated my answers. Now that I have answered all of the questions that we saw previously, you guys can see that the status down here has changed to all of them being answered. So let's say we want to go back and change any of the answers that we've specified in any of these five pillars: we can go ahead and continue our review, and it will take us back to all these questions, and we can choose which ones we want to change if, for example, we have a requirement to do so. So let's say that we're all finished with our answers and we've decided that we've fully explained our workload and what we're trying to do, and we want to generate the report. We can either generate the report to download a PDF of it, or we can click on the improvement plan, and it gives us a status of the current setup that we have and how we operate based on these five pillars. So you guys can see that it has identified 21 areas that are considered high risk by AWS based on their best practices, and it's identified 24 areas considered medium risk based on their framework.
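To give a feel for what that improvement plan is summarizing, here is a small illustrative sketch (the dictionaries below are made-up examples, not the actual shape of the tool's API response) that tallies reviewed questions by risk level, the way the dashboard rolls answers up into high- and medium-risk counts:

```python
from collections import Counter

# Illustrative sketch: summarizing reviewed answers by assessed risk
# level, similar to the console's "21 high risk / 24 medium risk" rollup.
# The answer records below are invented examples.

def summarize_risks(answers):
    """Count reviewed answers by risk level, ignoring no-risk items."""
    return Counter(a["Risk"] for a in answers if a["Risk"] != "NONE")


answers = [
    {"QuestionTitle": "How do you detect and investigate security events?",
     "Risk": "HIGH"},
    {"QuestionTitle": "How do you protect your data at rest?",
     "Risk": "HIGH"},
    {"QuestionTitle": "How do you determine your priorities?",
     "Risk": "MEDIUM"},
    {"QuestionTitle": "How do you manage credentials and authentication?",
     "Risk": "NONE"},
]
summary = summarize_risks(answers)
# summary now maps each risk level to its count, e.g. HIGH -> 2, MEDIUM -> 1
```

The real tool does this aggregation for you across every answered question in all five pillars; the point is just that each answer carries a risk rating and the improvement plan is a rollup of those ratings.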
So if we click on the high risk, it gives us the questions, or the areas, which they feel are high risk when you're migrating to the cloud. Even if you're not migrating to the cloud and are staying on your own on-prem system, these areas should have improvements made to reduce their risk level based on the five pillars. So you guys can see that one of the questions that I answered is: how do you detect and investigate security events? If we click on the down arrow, it lets us know recommended improvement items for how we can improve the way we detect and investigate our security events. And the same goes for all of the other questions, such as how do you protect your data at rest: based on the answers that you provided, and again, those answers should reflect how you are actually operating your current environment, it gives us a recommended action plan to improve how we protect our data at rest. And that goes hand in hand for all of these 21 items identified as high risk, and the same goes for medium risk. So if we click on any of these, it takes us to the documentation and an explanation of how we can go ahead and improve on managing our credentials and authentication. Here we can get plenty of information on how we can set up multi-factor authentication or password policies, and so on. Additionally, there's also an option for resources and partners. So, for example, if you would like a consultancy to come in and help you, this will give you a list of available organizations that are AWS-certified that can come in, do an overall review of your system, and help recommend how you go about improving all of the areas identified as high risk and medium risk. Additionally, there's also an improvement status where you can choose the state of your workload improvements: whether they've started, they're in progress, they're completed, or the risk is acknowledged, meaning you know that there is a risk associated with that area that's been identified by AWS.
But if this is how you operate and you're not worried about it, we can flag them as risk acknowledged. So this is a very good tool and a really good dashboard for you to not only see what areas you should improve based on best practices, but it also gives you a good project management platform where you can manage identifying and correcting the areas identified as medium and high risk, or just state that yes, the risk is acknowledged but there's no way to mitigate it based on how you operate your organization. And here you can also define the pillar priority. So let's say that operational excellence for you takes priority over security or reliability: we can go ahead and change these priorities by clicking on the edit button here. So again, depending on what your business processes and goals are, you can define which priority takes precedence over the others, and it will automatically change the high and medium risks it identifies based on this priority. As you guys can see at the bottom, most of these high-risk items are security-related (security events, protecting your data) because we've specified security as the first-priority pillar, which most of the time, by default, it should be. But again, every organization operates differently, so if it's operational excellence or cost optimization, you can put that on top, and it will identify the risks based on that priority. And just one final thought on generating the report: if you generate the PDF, it will basically produce a PDF that can be shared within your organization and gives an overall overview of all of the questions that you've answered, along with a good overview of the industry and the details you've selected. Based on this review, it gives the high- and medium-risk areas that were identified in terms of these five pillars; in my case it identified four areas that are high risk and five at medium risk in operational excellence.
And the same goes for security, reliability, performance efficiency, and cost optimization. And then it goes on to provide the answers that you've given for all of the questions across all five pillars. It gives the question, the choices that you've selected, and the choices that you have not selected. So if you are sharing this report within your organization, they can see all of the options that are available and the options that you've selected. If there is one that you have not selected that should be selected, they can flag it and identify it, and you are able to go back and edit your answers, and it will again give you a different improvement plan based on those edited answers. So this is a very good PDF and a go-to tool to share within your organization to help you identify areas that can be improved when you are moving to AWS, and areas which should be improved based on best practices. Because if you are going to the cloud, if you are changing your infrastructure, if you are architecting a new architecture in the cloud, whether it's in AWS or on any other platform, it's always good practice to identify what you're currently doing and benchmark it against best practices, and this tool is very robust and helps you do that in a very easy and simple fashion.
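To illustrate the pillar-priority behavior described above, here's a small sketch (with made-up improvement items, not the tool's actual data model) showing how re-ordering the pillar priorities re-ranks the same set of improvement items:

```python
# Illustrative sketch: re-ranking improvement items when the pillar
# priority changes, as the Well-Architected Tool does in its dashboard.
# The items below are invented examples.

RISK_SEVERITY = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}


def rank_improvements(items, pillar_priority):
    """Order items by pillar priority first, then by risk severity."""
    order = {pillar: i for i, pillar in enumerate(pillar_priority)}
    return sorted(items, key=lambda it: (order[it["Pillar"]],
                                         RISK_SEVERITY[it["Risk"]]))


items = [
    {"Pillar": "Cost Optimization", "Risk": "HIGH",
     "Title": "Right-size instances"},
    {"Pillar": "Security", "Risk": "MEDIUM",
     "Title": "Set a password policy"},
    {"Pillar": "Security", "Risk": "HIGH",
     "Title": "Enable MFA"},
]

# With security first (the usual default), both security items rank on top.
security_first = rank_improvements(items, ["Security", "Cost Optimization"])
# Putting cost optimization first moves the cost item to the top of the plan.
cost_first = rank_improvements(items, ["Cost Optimization", "Security"])
```

The real tool applies the same idea across all five pillars and every flagged question; this sketch only shows why flipping the priority order changes which risks surface at the top of the improvement plan.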