AWS Certified Cloud Practitioner 2020

Anand Rao

69 Lessons (7h 9m)
    • 1. Course Introduction

      2:05
    • 2. 2 Course Agenda

      2:46
    • 3. 2 Need for Cloud Computing

      12:31
    • 4. 4 What is Cloud Computing I

      5:24
    • 5. 5 What is Cloud Computing II

      5:56
    • 6. 6 The blood and bones of Cloud

      12:01
    • 7. 7 benefits of cloud computing

      9:11
    • 8. 8 Key Concepts and Terminology

      5:38
    • 9. 9 Economies of Scale

      1:20
    • 10. 10 capex vs opex

      2:54
    • 11. 11 What is a Public cloud

      1:46
    • 12. 12 characteristics of Public Cloud

      1:21
    • 13. 13 What is Private Cloud

      1:22
    • 14. 14 Characteristics of Private Cloud

      1:34
    • 15. 15 What is Hybrid cloud

      1:01
    • 16. 16 Characteristics of Hybrid Cloud

      1:11
    • 17. 17 review and what next

      0:29
    • 18. 18 What is IAAS

      3:54
    • 19. 19 Use Cases of IAAS

      1:45
    • 20. 20 What is PAAS

      2:06
    • 21. 21 Use Cases of PAAS

      3:35
    • 22. 22 What is SAAS

      2:28
    • 23. 23 What is Shared Responsibility Model

      9:21
    • 24. 24 Foot Prints of Amazon Web Services Datacenters

      13:58
    • 25. 25 AWS Console Tour

      9:32
    • 26. 26 Free access to AWS

      2:58
    • 27. 26.1 Creating a Free AWS Account

      2:52
    • 28. 27 IAM Part 1

      8:50
    • 29. 28 IAM Part 2

      3:19
    • 30. 29 IAM Part 3

      8:23
    • 31. 30 IAM Part 4

      4:19
    • 32. 31 IAM Summary

      2:25
    • 33. 32 Networking Fundamentals Part 1

      3:58
    • 34. 33 Networking Fundamentals Part II

      8:06
    • 35. 34 Conceptual Overview of VPC

      4:45
    • 36. 35 AWS VPC Walkthrough

      16:51
    • 37. NACLS and Security Groups

      7:06
    • 38. What is Compute

      4:42
    • 39. 38 AWS Compute Services

      13:25
    • 40. 39 EC2 Instance Lab Activity

      23:05
    • 41. 40 EC2 Connecting to Windows Machine

      6:29
    • 42. 41 EC2 Instance Linux Instance

      7:50
    • 43. 42 Storage Fundamentals

      8:21
    • 44. 43 AWS S3 Simple Storage Services

      11:32
    • 45. 44 AWS S3 Simple Storage Services II

      6:25
    • 46. 45 AWS S3 Storage Classes and Data Lifecycle

      13:10
    • 47. 46 AWS Storage Gateway

      5:57
    • 48. 49 Route 53

      12:16
    • 49. 50 Cloud Front

      11:24
    • 50. 51 Cloud Watch

      15:47
    • 51. 52 Cloud Trail

      5:53
    • 52. 53 Simple Notification Services

      7:46
    • 53. 54 AWS Config

      3:32
    • 54. 55 AWS Config LAB

      9:50
    • 55. 56 AWS CloudTrail vs

      2:16
    • 56. 57 SQL RDS

      9:34
    • 57. 58 NO SQL Dynamo DB

      4:20
    • 58. 59 ElastiCache and Redis

      4:28
    • 59. 60 AWS Lambda

      5:34
    • 60. 61 Shared Responsibility Model

      5:06
    • 61. 62 Security and Compliance Services

      7:39
    • 62. 63 AWS KMS

      2:19
    • 63. 64 AWS organizations

      3:00
    • 64. 65 AWS Organizations Lab Demonstration

      7:43
    • 65. 66 AWS Pricing

      5:16
    • 66. 67 AWS Billing and Cost Tools

      4:24
    • 67. 68 AWS Support Plans and Trusted Advisor

      5:51
    • 68. 69 AWS Whitepapers

      2:48
    • 69. 70 AWS Cloud Practitioner Conclusion

      2:31

About This Class

This course introduces you to AWS products, services, and common solutions. It gives technical end users the cloud fundamentals they need to identify AWS services and make informed decisions about IT solutions based on their business requirements.

Whether you are just starting out, building on existing IT skills, or advancing your Cloud knowledge, this course is a great way to expand your journey in the Cloud.

Cloud computing provides a simple way to access servers, storage, databases and a broad set of application services over the Internet. A cloud services platform such as Amazon Web Services (AWS) owns and maintains the network-connected hardware required for these application services, while you provision and use what you need via a web application.

AWS began offering its technology infrastructure platform in 2006. At this point, AWS has over a million active customers using AWS in every imaginable way.

This course is approximately 8 hours long in total, and will be delivered through a mix of:

  1. Instructor lectures

  2. Video demonstrations through Hands on Labs

The course curriculum is designed as follows:

  1. Introduction to Cloud computing

  2. First Steps into Amazon Web Services

  3. Identity and Access Management

  4. Virtual Private Cloud

  5. All you need to know about EC2

  6. Simple Storage Services

  7. Auto Scaling, Elasticity and ELB

  8. CloudFront

  9. Route 53

  10. Monitoring with Cloud Watch

  11. Notifications with SNS

  12. Auditing with Cloud Trail

  13. AWS Config

  14. RDS

  15. DynamoDB

  16. ElastiCache

  17. Redshift

  18. Serverless computing with Lambda

  19. AWS Shared Responsibility Model

  20. Security and Compliance on AWS

  21. AWS Key Management Service

  22. AWS Organizations and Pricing Model

  23. AWS Billing and Cost tools

  24. AWS Support Plans and Trusted Advisor

  25. Reference Documentation with AWS Whitepapers

Transcripts

1. Course Introduction: Welcome to the AWS Technical Essentials course. My name is Anand Rao and I will be your instructor for this course. First, a little bit about myself. I've been an instructor for about a decade, and I've been in IT for over 16 years. My background is pretty varied: I've worked with data centers, servers, networking, storage, and virtualization in the Hyper-V and VMware world, and a lot of my career has been dedicated to Active Directory. Scripting has been a part of my career as well, and I work with PowerShell scripts a lot. So now that you know about me, let's talk about the course. The first thing we're going to do is talk about the cloud. I will start by saying that this course is really designed for the absolute beginner. There are two paths you can take. If your background is that of a professional, maybe a sales professional or a finance person, or someone who works in a non-technical role but would like to have knowledge of AWS so that you can move into technical roles, then this course is for you. This course also gives you the extra technical background you're looking for, which you will get with the help of the hands-on labs and demonstrations we're going to do as we progress through this class. So if it sounds like this course is for you, I would just like to say welcome. We're going to have lots of fun as we go through this course, and we're going to give you everything you need to get your feet wet in the world of Amazon Web Services. Lastly, I would like to give you some information about how you can get in touch with me should you need additional assistance. I have a LinkedIn account, so be sure to connect with me; the links you see in this video will lead directly to my LinkedIn profile. Feel free to reach out to me at any time; I'll be more than happy to help you however I can. With all of that, I'm going to go ahead and wrap up this video. Thank you for watching, and I'll see you in the class real soon. 2. 2 Course Agenda: Welcome to this introduction lecture. In this lecture we'll understand what we're going to learn in this course. I intend to start this course from the very basics, from the ground up, so that you can understand the primary benefits of cloud and the nitty-gritty of the terminology used in this world, and then we slowly introduce AWS services to you. For example, we'll talk about IAM, that is Identity and Access Management, Virtual Private Cloud, and Elastic Compute Cloud, which falls under the compute section. Then we'll talk about the different kinds of storage available in Amazon today. One of the key characteristics of cloud computing is elasticity and scaling, so we'll talk about that as well. How you balance the load between web servers with the help of an Elastic Load Balancer is something we're going to understand too. We will learn about critical services like DNS, CloudFront and Route 53 in the second half of the course. Well, provisioning infrastructure is not sufficient; you also have to monitor it, log it, and set up notifications. So we'll learn how to use CloudWatch for monitoring, CloudTrail for auditing and logging, and notifications using Simple Notification Service. Amazon, of course, has database services in the form of SQL and NoSQL.
We call them RDS and DynamoDB. We'll then jump into the caching service in the database section, called ElastiCache, and the data warehouse component, which is Redshift. Today we're moving to a serverless world, where organizations want to build their modern applications using serverless services like AWS Lambda. We cannot leave out security, because there are hackers out there eyeing your critical intellectual property, so we need to understand what services AWS offers as far as security and compliance are concerned. It's important to understand the AWS shared responsibility model and how AWS provides security and compliance; we'll also talk about the KMS service and other services that are important to note. Finally, to end this discussion, we'll talk about pricing and billing, understand how you get charged for the services you're using, and cover the best practices there. Then I'll also walk you through certain whitepapers and documentation that will be helpful for your reference and your technical growth. So let's get started. 3. 2 Need for Cloud Computing: Welcome back, and thanks for joining me again. It's time for us to get started with our first section here. Let's understand where it all started from. What's the need to even talk about cloud these days? Before there was cloud, we had the Commodore 64, right? I'm just kidding; we really do not have to go that far back, but that's where many people started their computing. These were personal computers where people installed their games and different kinds of word processing applications and software. Maybe you had an accounting application where you would keep track of your budget as well. Remember that all of these applications and their data were saved directly on the computer. Well, as things evolved, something like the client-server architecture was created. Let's think about why we would need a client-server architecture when everything was running fine on that Commodore 64 machine. Think about a scenario where you walk into a grocery store, a retail operation, or some kind of outlet where the person at the cash register has to communicate back to a central server to query information: maybe to get the inventory of items in the store, maybe to query the cost of each item or the ID number for each item. All of that information is stored centrally, and this is done so that multiple computers or registers can access that information in parallel. That means the data gets stored centrally on a server. Now, why do we need to do that? Think about a situation where you have a server sitting out there, you're sitting at your computer, and you start to update data on that server, but there is no connectivity. If that server needs to get information from a different network or a different server, there would be no way it could do that. You see where the problem is, right? Because the update you made on the client is not reachable by that server, you have to connect everything to the network so that your server gets the updated information. So by creating a client-server architecture, we're able to have multiple computers that can use that application and update it.
So that's the importance of, and the reason we created, the client-server architecture. Client-server architectures then evolved over a period of time, and what we're going to do now is replace the server with the cloud. Instead of running applications on the server, we now run them in the cloud. Applications like Microsoft Office, your Word and PowerPoint, in many cases are all running in the cloud, and we call them Office 365 these days. Even before that we had things like Gmail, Yahoo Mail, or something like Dropbox where you store your data, or Google Drive, and how about that Microsoft solution called Microsoft OneDrive. What all of these applications have in common is that they are stored in a remote location. You have nothing running on your local PC at all; you connect to those applications using the Internet, and those applications are maintained and run by a third party, which could be Microsoft or Google in this case. So this is an evolution of the client-server architecture; it's just that we're not maintaining the server, and the server is not on our premises these days. It's all in the cloud. As things evolve, we're seeing more and more applications being migrated from the client-server design to a cloud-type architecture. So when we think about these applications, think about what makes up an application. What you need first is the compute. Think of compute as the brain of the operation: the CPU itself. Then you also need memory, which we call RAM, or random access memory. That's the basic component that holds the activities done by your computer before they're committed to a disk or to storage. These two components, the CPU and the memory, are the primary components of your computer. Think about the days when you used to complain about an application, saying that it's running too slow or too fast; that is because it's being processed slowly or quickly by the CPU and the memory together. Then there is the next component, which is the storage itself. Storage is where the actual data is saved, and where the application is saved. The storage could be the hard drive sitting in your local computer; it could be an external hard drive attached to the computer to add the extra storage you always wanted; or, in today's modern world, we've got cloud-based applications like Dropbox, Google Drive or OneDrive. These are all large storage devices where you can save your data. Your storage location is where the data is saved. Let's circle back to the same situation where the cashier was trying to retrieve information about the inventory of items in the store. What the application needs in this case is a database, because in the store you've got many items, and every item has a unique ID and its own cost. Those IDs, costs and product names need to be maintained in some kind of database. The database is responsible for storing that data in a structured way so that your application can easily search for the data and retrieve it. And then you've got the network. Networking is a little bit more of a discussion here, so let's think about your home network first. In your home network, you've got a device that allows your house to connect to the Internet.
That device is typically something like a cable modem or a router, and it gives you connectivity to the Internet. Inside your house you've got individual devices: computers, tablets, even your phone. These devices may want to talk to each other, and they may want to talk to the Internet as well, for various reasons. For your devices to talk to each other, we typically need something called a switch. People do not have a full-blown switch in their house; they may have something called a SOHO (small office/home office) router that performs the function of a switch. What really happens, in a simplified way, is that the devices all get connected to the switch, and now they're able to talk to each other because they're all connected to that device. The switch then allows the communication to flow out to the Internet. Now, any time you want to talk to something out there on the Internet, let's say a computer hosting something like Facebook, your computer communicates through the switch, through the router, to the Internet, to something called a DNS server. Why do I need a DNS server? Think of DNS like the contact list on your phone. When you hit the button to invoke Siri and say, "Hey, call Mark," it's going to search for Mark in your contact list and then call him. Siri knows the name of the person and also the phone number because it's in your database. DNS is no different. DNS is a mapping of names, that is, websites, to their IP addresses. So when you go to facebook.com, it's going to look up Facebook in the list of contacts, so to speak, on the DNS server and fetch the appropriate IP address for Facebook. That's really an oversimplification of how it works; remember that at a very basic level, we need a network to be able to communicate. This might be your home network, this might be your office, or this might be Facebook itself. Inside each of these is a separate network, and all of these networks are able to communicate with each other over the Internet. So inside every network you've got these basic components: compute, storage, database and networking. All right, now let's get into another topic, called server sprawl. What is that? In a corporate enterprise environment, applications grow really fast, and with client-server applications they grow at an exponential pace. Lots of developers and organizations need to deploy applications, and every time they want to do so, they go ahead and deploy a new server. That means that with 100 applications, we've got to have 100 servers. So even before the servers start getting utilized to their full capacity, we have started to deploy new servers. That also means there's a lot of unused CPU, unused memory and unused storage, and the people who manage the servers keep adding additional servers instead of consolidating them. Across individual servers, lots of organizations have tens and hundreds of servers that are not being utilized to their full potential. What started to happen from that is that many organizations began to face a lot of different challenges.
One of those challenges is server sprawl: the data centers started running out of space, and setting up a data center is not something cheap or easy to do. Running out of space became a serious problem for a lot of data centers. They also had problems supporting the power requirements, because even if you're not utilizing servers to their full potential, they still require a certain amount of power and cooling to run; these are the heating, ventilation and air conditioning (HVAC) costs. So a lot of data centers started to be seriously challenged by not having enough physical resources to handle the applications being deployed. Additionally, think about the administrative workload of managing all those servers. From a business perspective, you started to see a lot of challenges: you have to pay capital expenditure to purchase servers, and you could easily spend hundreds of thousands of dollars buying more equipment. These are capital expenditures that organizations have to pay for, and the worst part is that the equipment depreciates. What is the average life cycle of hardware? The average life cycle of any hardware equipment is about 3 to 5 years, so you can imagine the ongoing cost many businesses have to pay to maintain those applications. Then you started to have a distributed workforce: people working from home, people traveling, people sitting in multiple or remote offices, and they have their own unique challenges in being able to access their applications. You have to think about security concerns as well: how are they going to remotely connect to the data centers? Do you have a secure connection into the data center? Do you have a VPN? You have to procure the hardware, test it, and finally provision it, and all of these processes take an extended period of time. This is just demonstrating how complex things can be with remote workers, the need for more and more applications, and the need for space in the data centers. So how are we going to do this efficiently? How are we going to do this cost-effectively and still keep it secure? That's where cloud comes into the picture. In the upcoming lessons we're going to talk about the building blocks that make up a cloud and how they're utilized. I'll see you in the next lesson. 4. 4 What is Cloud Computing I: Let's start this chapter with an understanding of what cloud computing is. Today, cloud computing has become a buzzword. Most enterprises and small and medium business companies use this term, probably without an understanding of it. In this chapter, we learn what cloud computing is and what benefits it has to offer. Traditionally, if we ever had to build applications, we had to procure a lot of hardware. For example, you would need servers, operating systems and the respective licenses, firewalls, routers, switches, load balancers, databases, and the list goes on. That means you would need a lot of capital expenditure. Capital expenditure is the amount of money you invest initially to procure that hardware. On the other side, operational expenditure is what you need to maintain that hardware: for example, heating and ventilation costs, electricity costs, human manpower and payroll costs. All of that is operational expenditure.
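To make the CapEx versus OpEx contrast concrete, here is a minimal back-of-the-envelope comparison in Python. All of the figures (server price, lifespan, hourly cloud rate, hours used) are made-up assumptions for illustration, not real AWS or vendor pricing.

    # Hypothetical numbers only: not real AWS or vendor pricing.
    server_price = 12_000          # CapEx: the full amount is paid up front
    lifespan_years = 4             # hardware typically depreciates over 3 to 5 years
    capex_per_year = server_price / lifespan_years

    hourly_rate = 0.10             # OpEx: assumed pay-as-you-go rate for a comparable instance
    hours_used_per_year = 8 * 260  # assume it only runs during business hours on weekdays
    opex_per_year = hourly_rate * hours_used_per_year

    print(f"CapEx, averaged per year: ${capex_per_year:,.2f}")
    print(f"OpEx, pay-as-you-go per year: ${opex_per_year:,.2f}")

The point of the sketch is not the exact numbers but the shape of the spend: CapEx is a lump sum you commit to regardless of usage, while OpEx scales with how much you actually run.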
With traditional computing, both CapEx and OpEx, capital expenditure and operational expenditure, were high. Today, with the advent of small and medium business companies and startups, we really do not have the scope to invest so much in hardware or operating system licenses. So what do these startups do to provision their infrastructure and let the business run? They resort to something called cloud computing. This is where the whole set of resources will run: the startups simply put their resources, for example servers, storage, databases, networks, software, artificial intelligence and analytics, into the cloud. The cloud is nothing but a massive, humongous data center maintained by a cloud provider, for example Microsoft, Amazon or Google Cloud. The cloud providers make sure that the services offered to you have a faster way of delivery, they innovate faster, the resources are flexible, and you can scale the resources as and when you want. The best part is that CapEx and OpEx fall dramatically. You typically pay for cloud services only for what you use, so you have very low operating costs, your infrastructure runs more efficiently, and you can scale your business when required. Let's look at the definition of cloud computing. The definition of cloud computing is given by a body called NIST, the National Institute of Standards and Technology, in their publication 800-145. Here is the definition: cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction. What does it mean? It means that I, as a cloud user, should be able to create a virtual machine, a server, storage (maybe an SSD or an HDD), or any kind of application whenever I want. If I get an idea in the middle of the night and want to create a server right then, I should be able to do it. You should not be calling up the call center of Microsoft or Amazon Web Services, standing in a queue, and when the customer care executive picks up the phone, saying, "Hey, can you help me create a VM? Make sure it has 8 GB of memory and 40 terabytes of hard disk, and make sure the ports I need are open." We're not going to do that, because today we have our own self-service provisioning portal where you type in your user ID and password and then enter the world of cloud services provided by the cloud provider, which could be Amazon, Microsoft Azure or Google Cloud. You can create resources as long as you have the Internet and a device that supports a good browser; that's all you need to maintain the most sophisticated infrastructure in the world. No more procuring devices, no more maintaining them. You really do not have to pay the electricity costs or air conditioning costs; it's all done by your cloud provider, and they maintain it. You just have to pay as you use it.
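The self-service idea above also extends to APIs: instead of clicking through a portal, you can provision a server with a few lines of code. Here is a minimal sketch using the boto3 library for AWS; the region, AMI ID and key pair name are placeholders you would replace with your own, and note that running this against a real account creates (and bills for) a real instance.

    import boto3

    # Placeholders: substitute your own region, AMI ID and key pair name.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # hypothetical AMI ID
        InstanceType="t2.micro",
        KeyName="my-key-pair",             # hypothetical key pair
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print("Launched instance:", instance_id)

This is exactly the "rapidly provisioned with minimal management effort" part of the NIST definition: the request goes to the provider's API and a server exists a minute later, with no procurement involved.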
It's comparable to utility computing; for example, the electricity bill you get at home. You do not pay the whole bill at once, and you do not pay for the transformer costs or the supply lines; you just pay based on your usage. This is exactly what it is; it's a simple analogy that can be used. I'll also give you a simple definition that was given by the US federal CIO. Here is the definition; I would like you to read it, give it a minute, and see how simply the definition of cloud computing has been presented to you. 5. 5 What is Cloud Computing II: Cloud computing is like renting resources, resources like storage or CPU cycles. You only pay for what you use. The company providing these services is referred to as the cloud provider, and the user is called the cloud user. Examples of cloud providers are Microsoft, Amazon and Google, and cloud users are people like you and me, and organizations. The cloud provider is responsible for all the physical infrastructure, that is the physical hardware required to execute your work, and for keeping it up to date with patches and antivirus. The computing services offered tend to vary by cloud provider, but mostly they include compute power, such as Linux servers or applications; storage, such as files and databases; networking, so that secure connections can be established between the cloud provider and your company; and analytics of compute, storage and network, so that you get visual telemetry and performance data. The goal of cloud computing is to make running a business easier and more efficient. Whether it's a startup or a large enterprise, every business is unique and has different needs, and to meet those needs cloud computing provides a wide range of services. We need a basic understanding of some of the services it provides, so let's discuss compute power and storage. What is compute power? When you send an email or do certain activities on the Internet, like booking a reservation or paying a bill online, you are interacting with cloud-based servers, or servers located somewhere else, and you are communicating with those servers using just a browser and your computer. As consumers, we're all dependent on computing services provided by the various cloud providers that make up the Internet. When you build solutions using cloud computing, you can choose how you want the work to be done based on your resources and needs. For example, you may want more control and responsibility over maintenance, so you create a virtual machine. A virtual machine is an emulation of a computer. That means everything is virtual and in the form of files: the hard disk, CPU, memory and all the other components that comprise the machine are in the form of files, and hence the term virtual. Just like your desktop and laptop, virtual machines are able to provide you the necessary services with optimum performance. Each virtual machine includes an operating system, and hence hardware that appears to the user like a physical computer running the Windows or Linux operating system. You can install whatever you want, like any software necessary to run the tasks you want to run in the cloud. The difference is that you don't have to buy any hardware or install the operating system.
The cloud provider runs that virtual server for you on a physical server in one of their data centers, and it will often be shared with other servers, properly isolated and secured. With cloud, you can have a VM ready to go in minutes, at less cost than a physical computer. Virtual machines are not the only computing choice; there are other popular options as well. That means if you want to host a web application, a virtual machine is not the only choice; there are other choices, like containers and serverless computing. Let's look at this picture and understand the differences between container and serverless designs of applications. What is a container? A container provides a consistent, isolated execution environment for applications. Containers are similar to virtual machines, except that they do not require a guest operating system. Instead, the application and all its dependencies are packaged into a container, and then a standard runtime environment is used to execute the application. This allows the container to start up in just a few seconds, because there is no overhead of an operating system to boot and initialize; you only need the application to launch. One of the first pioneers of such projects was Docker, an open-source project; Docker is one of the leading platforms for managing containers. Docker containers provide an efficient, lightweight approach to application deployment because they allow different components of the application to be deployed independently in different containers. So what is serverless computing? Serverless computing lets you run application code without creating, maintaining or configuring a server. The core idea is that your application is broken into separate functions that run when triggered by some action. This is ideal for automated tasks; for example, you can build a serverless process that automatically sends an email confirmation after a customer makes an online purchase (a minimal handler sketch appears after this paragraph). The serverless model differs from virtual machines and containers in that you only pay for the processing time used by each function while it executes; virtual machines and containers are charged while they are running, even if the applications are idle. This architecture does not work for every application, but when the app logic can be separated into independent units, you can test them separately, update them separately, and launch them in milliseconds, making this approach the fastest option for deployment. 6. 6 The Blood and Bones of Cloud: Welcome back. In this lesson we're going to talk about what clouds are made of. To start our discussion, we're going to talk about something you're probably already familiar with: software as a service. You might not know it, but you are probably using a software-as-a-service application every day of your life. So let's think about what a common software-as-a-service offering might be. Software as a service is where the entire infrastructure, operating system and software is provided by a third party and given to you in the form of a web application. A pretty common example is email services: think about Google providing an email service to you, or another provider, say Yahoo with Yahoo Mail, and there are many other types of software-as-a-service email providers. Another type of software as a service is something you may be familiar with: Dropbox.
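Going back to the serverless idea in the previous paragraph, a function on AWS Lambda is just a handler that runs when an event triggers it; you never provision the server underneath. A minimal sketch in Python might look like the following. The event fields and the order-confirmation scenario are assumptions for illustration, and in a real deployment the email would be sent through a service such as SES or SNS rather than printed.

    # Minimal AWS Lambda handler sketch (Python runtime).
    # The event fields below are hypothetical; a real trigger (for example an
    # order landing in a queue or bucket) defines its own event structure.
    def lambda_handler(event, context):
        order_id = event.get("order_id", "unknown")
        customer_email = event.get("customer_email", "unknown")

        # Placeholder for the real work, e.g. sending a confirmation via SES/SNS.
        message = f"Order {order_id} confirmed; notifying {customer_email}"
        print(message)

        return {"statusCode": 200, "body": message}

You pay only for the milliseconds this function runs, which is the billing difference from a VM or container that is charged even while idle.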
Dropbox provides storage as a service that allows you to save files from your computer to that Dropbox storage. So there are many different types of software as a service; you might even be using something like Office 365 in your corporate environment. Microsoft provides web-based Microsoft Office services through the Office 365 interface, and that is software as a service as well. You see where this is getting to, right? Software as a service is pretty common, and you use it on almost a daily basis without even knowing it. So now we have an understanding of software as a service, where you get the entire application delivered to you through a web interface. Now let's talk about infrastructure as a service. Infrastructure as a service is the underlying hardware that's required for you to run your applications. If there were no hardware, no network and no storage in the back end, Google would not be able to provide you with email services; they could not provide software as a service. But what if there is a requirement in your organization to work on virtual machines or physical servers? You need the underlying infrastructure itself so that you can start building your own custom applications. That means you want the compute, the network, the storage and all those components so that you can create your own app or software. Infrastructure as a service is where all of those infrastructure-related entities, the compute (CPU and memory), the network and the storage, are handled and provided by the third party. If you look at the hardware stack, the stack is basically a set of components grouped together to form an infrastructure. In this stack you've got the network (and we spoke about networks a little bit in a previous lesson), then you've got storage, which again is where you keep all your data, and then you've got something called compute, which is the brain of your operation; it gives you the ability to do the processing. When you put the data on compute, the compute is able to process the data to complete the task, based on what kind of data it is. So when you use infrastructure as a service, you are actually leasing or renting that infrastructure stack, the CPU, memory, storage and network, and you're paying the third party to provide you with that hardware. For example, think about a virtual machine that you are hosting on AWS. Infrastructure as a service gives you that virtual machine with your cloud provider, and all the interaction happens through something called a hypervisor. The hypervisor allows us to create virtual hardware, so all of these components, compute, storage and network, are logical in nature; they are in the form of software. It is the hypervisor that lets the virtual compute talk to the real physical hardware. On top of the hypervisor, we're now able to run virtual machines, and these virtual machines make up your infrastructure as a service. With that, let's move on to platform as a service. In certain situations you may not want to host your own hardware, but at the same time you don't want to go for software as a service; you want something in the middle, which is platform as a service. Let's say you want to host a database and you are a database expert: you're great at running all the queries, you know what SQL stands for, and you can make your own customizations.
So you're a great app and database developer, but of course nobody has knowledge of every area. The app developer and the database person may not know about the infrastructure components: for example, backups, encryption, security, and the various other parameters that go into the details of the infrastructure. That is where you would go ahead and provision a platform as a service. In platform as a service, going back to the same example of the database, what you're essentially getting is just the database; let's call it SQL. So in platform as a service, if you're looking for SQL, you just get SQL; you do not have any control over the underlying hardware or the underlying operating system. The difference between infrastructure as a service and platform as a service is that in infrastructure as a service you get much more control over the hardware, while in platform as a service you can just host your applications, connect to the database and host your data, but you do not have control over the underlying hardware. In infrastructure as a service you can install your own operating system and then install applications on top of it; in platform as a service you don't get that option, and you do not get to choose your own operating system. So that's platform as a service. When providers offer cloud computing services, they are basically providing one of these three types of services: infrastructure as a service, platform as a service, or software as a service. Let's go into another section where we talk about private clouds, hybrid clouds and public clouds. A private cloud is where you control everything. That means you have your own data center, and inside this data center is your compute, network, storage and everything else required to make a data center. The advantage of owning your data center is that you can customize it however you want, with whatever kind of hardware, and you can manipulate the hardware whenever you're ready: you can upgrade it, get rid of some CPUs and memory, and bring in your own when you want to. So you can do whatever you like with that environment. But of course there's a cost to doing that, and it is really high. It is expensive to build and maintain data centers, and the life cycle of equipment is typically not more than 3 to 5 years, so you might spend hundreds of thousands, potentially millions, of dollars purchasing equipment, and the sad news is that in 3 to 5 years that equipment has to be replaced. So you have to invest high capital costs only to get 3 to 5 years out of the equipment, and then you have to pay for the employees who support the equipment, pay for the electricity, power, cooling and everything that goes along with it. And once you have purchased the equipment, it's yours; if you don't utilize it, that's bad for you, because you've already paid for it. That's where hybrid cloud comes into the picture, and a lot of organizations are doing hybrid cloud, where you have part of your infrastructure on premises, inside your data center, and some of the parts, some of the services, with the cloud provider. So let's think about a service called backup and disaster recovery.
If something like a disaster happens in your environment, say a tornado or a hurricane comes through and wipes out your data center, you can then just fail over to the cloud. In this case you've got some of your infrastructure on premises, but for disaster recovery purposes, and to keep the lights on, you've got that infrastructure replicated to the cloud as well. Of course, there has to be some kind of connectivity between your on-premises corporate data center and the cloud. This is a hybrid environment, and with it you get great flexibility. You'll also use hybrid environments when you would like to migrate a piece of your data center to the public cloud, and then over time you migrate fully, as the public cloud gives you a lot of flexibility: there are no upfront costs, and it's pay as you go. If for some reason you decide that you don't want to use the cloud anymore and don't want to pay for it, you might have to come up with a migration plan to get your data out of it so that you don't have to continue paying. The overhead is really low, because all of the underlying hardware, cooling, power and everything else required to establish the data center is handled by the cloud provider, so you don't have to worry about those costs anymore. The public cloud is definitely scalable and elastic; we're going to talk about that in the next lesson when we cover the key characteristics of cloud computing, but for now just be aware that an elastic environment can grow and shrink and can be used dynamically when utilizing public cloud environments, and this is something you really do not have with a private cloud. There are also challenges with governance: when government rules or compliance regulations change, you have to follow them, and in the public cloud it's the public cloud provider who makes those changes and takes care of those challenges. You have to be careful about the type of cloud provider you want to utilize: use a cloud provider that will support your requirements, will be helpful in provisioning resources faster, and at the same time keeps your deployments simpler, so you don't have to worry about the underlying infrastructure. Just a question for you: what kind of a cloud provider is AWS, or Azure, or Google Cloud, for that matter? Think about it. AWS, Azure and other cloud providers provide infrastructure as a service and platform as a service as well. That means you get the underlying hardware provided by AWS, and you can also choose to host a database as a platform service, or set up your own virtual machines to install your OS and your applications. On the other side, you also get something like Simple Storage Service, S3, in AWS. S3 is a kind of bulk storage where you can upload virtually any type of file. Think about it like a Dropbox or Google Drive service, where you upload the data into the cloud and it stays there; S3 is something similar to that. So AWS provides infrastructure as a service, but they also provide additional services that act as application front ends to those infrastructure services, just to simplify your ability to use AWS. So that's about it in this lesson.
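Since S3 came up above as AWS's bulk "upload any file" storage, here is a minimal sketch of putting an object into a bucket with boto3. The region, bucket name and file path are placeholders; the bucket must already exist in your account, and bucket names are globally unique.

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")  # region is an example

    # Placeholders: use your own existing bucket name and local file path.
    s3.upload_file(
        Filename="backup/photos.zip",
        Bucket="my-example-bucket",
        Key="backups/photos.zip",
    )
    print("Upload complete")

The Dropbox comparison in the lecture holds up here: you hand the provider a file and a name for it, and the durability, replication and hardware behind it are entirely the provider's problem.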
Just to summarize: in this lesson we spoke about what clouds are, what infrastructure as a service, software as a service and platform as a service are, and we also learned about public clouds, private clouds and hybrid clouds. Thanks for watching. 7. 7 Benefits of Cloud Computing: Now that we know the definition of cloud computing, let's also understand its benefits. There are two kinds of organizations that may want to choose cloud computing as the preferred place for their infrastructure. One is new or startup companies who have no on-premises infrastructure of their own. The others are existing businesses who might want a gradual movement from their on-premises environment to the cloud, to save money and get all the benefits and good features of cloud computing. In this video we'll learn about the benefits of cloud computing and what it can offer to the business. Cloud computing is a cost-effective solution, primarily because it provides a way to pay as you use the services, or what we can also call a consumption-based pricing model. You really do not have to pay any upfront, predefined amount for the computing resources or hardware; it's just like renting the hardware: you use it, and once you're done you give it back to the cloud provider, and you pay for just the amount of time you have used it. This consumption model brings several benefits: because there are no upfront costs, there is no need to purchase or manage costly infrastructure; you can pay for additional resources only when they're needed; and you can stop paying for resources when they're no longer needed. This also allows for better cost prediction: prices for individual resources and services are published, so you can predict how much you will spend in a given billing period, you will know your expected usage and roughly what next month's bill will be, and you can also perform analysis of future growth using historical usage data tracked by the cloud provider. Another advantage is scalability: you can increase or decrease the resources or services based on demand. When the infrastructure was on your premises, this was difficult and hard; scalability and elasticity are among the key attractive features of cloud computing. On premises, if you want to keep your infrastructure scalable, you need to procure quite a lot of infrastructure; in the cloud you don't have to do that, because scaling is done on demand. Consider an e-commerce application with a sale going on, and that sale leads to a spike in traffic overnight. Because the cloud is elastic in nature, the cloud computing provider will automatically allocate more resources to handle the increased traffic, and when the traffic begins to normalize, the cloud computing provider will automatically deallocate the additional resources to minimize the cost. So when there's a lot of traffic that raises the CPU usage, the memory usage, the disk IOPS and the network utilization, the cloud provider automatically brings up new instances in your infrastructure in the cloud; and when there is no traffic, the CPU usage comes down, the network traffic comes down, and the cloud provider automatically gets rid of the additional resources, which automatically manages your costs and billing.
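One common way AWS expresses the elasticity just described is a target-tracking policy on an Auto Scaling group: you state a target (say, average CPU around 50%) and the service adds or removes instances to stay near it. The sketch below, using boto3, assumes an Auto Scaling group named "web-asg" already exists; the group name, policy name, region and target value are all placeholders.

    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # Assumes an existing Auto Scaling group called "web-asg" (placeholder name).
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",
        PolicyName="keep-cpu-around-50",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,  # add instances above ~50% average CPU, remove below
        },
    )
    print("Target-tracking scaling policy created")

This is the horizontal-scaling pattern discussed next: new servers are added behind the load balancer when demand rises and removed when it falls, with no manual intervention.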
There are two concepts we should talk about in scalability: one is vertical scaling and the other is horizontal scaling. What we discussed so far is horizontal scaling, which is the process of adding more and more servers that function together as one single business unit. Consider the same example of an e-commerce application: usually you will have a web server, or maybe a cluster of web servers, behind a load balancer. These web servers handle the load from the production environment, that is, the current traffic. When the traffic goes high, I'll be creating more and more servers, and this is an example of horizontal scaling. The new servers that are automatically provisioned can also be automatically connected to the load balancer. Vertical scaling, on the other side, is about adding resources to the existing set of servers. So if your server has 4 GB of memory, you just bump the memory up from 4 to 6; giving more power to the existing server is an example of vertical scaling. Vertical scaling is also called scaling up, whereas horizontal scaling is called scaling out. Cloud is current: it is up to date. When you use cloud, you're able to focus more on things that matter, for example building and deploying applications and focusing more on your business. Cloud eliminates all the burdens of patching and maintaining software, hardware upgrades, and other IT management tasks; all of this is done for you automatically to ensure you are using the latest and greatest tools to run your business. Similarly, the cloud hardware is maintained and upgraded by the cloud provider. For example, if a hardware component in a rack fails, it's the cloud provider who fixes it, not you. If a new hardware or firmware update becomes available, you don't have to go through the hassle of upgrading the firmware; the cloud provider does that. The cloud provider also ensures that software updates and hardware upgrades are made available to you automatically. Cloud is reliable. When you're running a business, you want to be confident that your data is always going to be there; availability is handled by the cloud provider. The cloud computing provider provides data backup, disaster recovery and data replication services to make sure your data is always safe. In addition, redundancy is often built into cloud services architecture, so if one component fails, a backup component takes its place. This is also called fault tolerance, and it ensures that your customers are not impacted when a disaster occurs. Disaster recovery, backups, fault tolerance and availability are critical features of cloud computing. Cloud is global. Cloud providers have fully redundant data centers spread across different parts of the world, each called a region. This gives the cloud a geographical footprint, so customers feel as if their application is running locally and get the best response time possible no matter where they are in the world. You can replicate your services into multiple regions for redundancy and locality, and to make sure there is the least latency; you can also specify a particular region to ensure you meet data residency and compliance laws for your customers. Cloud is secure.
Think about how you would secure your own data center: there are a lot of things to take care of, right from physical security to the technical controls that must be in place. That's a lot of overhead on the customer. You need to keep track of who's accessing your building, who's operating the server racks, and so on. How about digital security and multi-factor authentication? How are you protecting the data, applications and infrastructure from the threats that are out there today? When it comes to physical security, the cloud provider takes care of the infrastructure with heavily guarded walls, cameras, gates, security personnel and so on to protect the physical assets. They also have strict procedures in place to ensure employees have access only to those resources they have been authorized to manage. Then there is something called digital security. One thing that makes cloud unique is that you rent compute and storage resources from a shared pool, and data can travel in different ways within a data center, between data centers, and across the Internet. We want to make sure that only authorized users access it, and that then becomes a shared responsibility: cloud providers offer you tools to mitigate security threats, but you must use those tools to protect the resources you use. Cloud computing is cost-effective, scalable, elastic, current, reliable and secure, and it's all done to make running your business easier. That also means you are able to spend more time on the things that really matter, and less time managing the underlying infrastructure. 8. 8 Key Concepts and Terminology: Cloud services are a big shift from the traditional way businesses think about IT resources. Cloud services have particular characteristics and considerations, and in this video we'll be talking about the key concepts and terminology used in the cloud computing world. The first one is high availability: the ability to keep services up and running for long periods of time. Businesses must make sure there is very little downtime; depending on the service in question, web applications and databases really cannot afford downtime for long periods of time, because that has a negative impact on the business. We spoke about scalability and elasticity in the previous video, but just to reiterate: scalability is the ability to increase or decrease the resources for any given workload. We spoke about the example of a web application being bombarded with traffic, where the e-commerce application eats up the resources on the virtual machine: you can add additional resources to service a workload (also called scaling out), or you can add additional capability to an existing resource to manage an increase in demand (also called scaling up). Scalability does not have to happen automatically. Elasticity, on the other side, is the ability to automatically or dynamically increase or decrease resources as needed. Elastic resources match the current needs, and resources are added or removed automatically to meet future demand when needed; this is one of the most advantageous parts of cloud computing. A distinction between scalability and elasticity is that elasticity happens automatically. Agility is the ability to react quickly: cloud services can allocate and deallocate resources quickly, and they are provided on demand through the self-service portal.
Vast amounts of computing resources can be provisioned in minutes, and there is no manual intervention in provisioning or deprovisioning services. Fault tolerance is the ability to remain up and running even in the event of a component or service failure. Typically, redundancy is built into cloud services architecture, which means that if one component fails, a backup component takes its place; this type of service is said to be tolerant of faults. What is disaster recovery? Disaster recovery is the ability to recover from an event which has taken down a cloud service. With cloud services, disaster recovery can happen very quickly, with automation and services being readily available to you. Global reach means the cloud service's ability to reach audiences around the globe: cloud services can have a presence in various regions across the globe which you can access, giving you a presence in those regions even though you may not have infrastructure there. What is customer latency capability? If customers are experiencing slowness with a particular cloud service, they are said to be experiencing latency. Even though modern fiber optics are fast, it can still take time for services to react to customers' actions if the services are not local to the customer. Cloud services have the ability to deploy resources in data centers around the globe, thereby addressing customers' latency issues. What are predictive cost considerations? We also spoke about this in a previous chapter, but it is the ability of users to predict what costs they will incur for a particular cloud service. Costs for individual services are made available, and tools are provided to allow you to predict what costs a service will incur; you can also perform analysis based on future growth. Technical skill requirements and considerations: cloud services can provide and manage the hardware and software for workloads, so getting a workload up and running with cloud services demands fewer technical resources than having IT teams build and maintain physical infrastructure for the same workload. A user can be an expert in the application they want to run without needing the skills to build and maintain the underlying hardware or software infrastructure. Increased productivity is a key feature here, and that's the whole attraction for most consumers using the cloud: on-site data centers typically require a lot of hardware setup (called racking and stacking), software patching, and several other time-consuming IT management chores. Cloud computing eliminates the need for many of these tasks, so IT teams can spend their time achieving more important business goals. Finally, security, and that's not least important: cloud providers offer a broad set of policies, technologies, controls and expert technology skills that can provide better security than most organizations can otherwise achieve. The result is strengthened security, which helps to protect data, applications and infrastructure from potential threats. 9. 9 Economies of Scale: The concept of economies of scale is the ability to do things more cheaply and more efficiently when operating at a larger scale, in comparison to operating at a smaller scale. Cloud providers such as Microsoft, Google and Amazon Web Services are very large businesses, and they are able to leverage the benefits of economies of scale and then pass those benefits on to the customers.
This is apparent to end users in a number of ways, one of which is the ability to acquire hardware at a lower cost than if a single user or small business were purchasing it. Let's take a look at storage costs, just as an example. Storage costs have decreased significantly over the last decade due to cloud providers' ability to purchase large amounts of storage at significant discounts. They are then able to use that storage more efficiently and pass on those benefits to end users in the form of lower prices. There are limits to the benefits large organizations can realize through economies of scale: a product will inevitably have an underlying core cost as it becomes more of a commodity, based on what it costs to produce, and competition is another factor which has an effect on the cost of cloud services.
10. 10 CapEx vs OpEx: For someone to implement their idea, write a good program, create a new gaming solution, or make things better for this world with the help of an application, they will need a sophisticated infrastructure. In the previous decade, startup companies needed to acquire physical premises and infrastructure to start their business and begin their work. Large amounts of money were needed to get a new business up and running or to grow an existing company. They would have to buy new data centers or new servers to allow them to build out new services, which they could then deliver to their customers. This is no longer the case. Today, organizations can sign up for a service from a cloud provider and get up and running within a few minutes. This enables them to begin selling or providing services to their customers as quickly as possible, without the need for a significant upfront cost. Now, there are two approaches to an investment: one is called CapEx, or capital expenditure, and the second is called OpEx, or operational expenditure. CapEx, or capital expenditure, is spending money on physical infrastructure up front and then deducting that expense from your tax bill over time. Capital expenditure is an upfront cost whose value reduces, or depreciates, over time. Operational expenditure is spending money on services or products and being billed for them; you can deduct this expense from your tax bill in the same year. There is no upfront cost, and you pay for the service or product as you use it. For example, when you build a data center you will need routers, switches, firewalls, and a lot of servers as well, and that initial amount of money that you pay to HP, Dell, IBM, Cisco, etcetera is capital expenditure. Moving forward, what you pay for the maintenance of all that, for example licenses, subscriptions, heating, ventilation, air conditioning, electricity, telephone bills, as well as staff and their payroll, is called operational expenditure. Companies wanting to start a new business or grow their business do not have to incur upfront costs to try out a new product or service for customers. Instead, they can get into the market immediately and pay as much or as little for the infrastructure as the business requires, and they can also terminate that cost if and when they need to do so. If your service is busy and consumes a lot of resources in a given month, you receive a bigger bill. If those services are minimal and do not use a lot of resources, you receive a smaller bill. A business can still use a capital expenditure strategy if it wishes, but it is no longer a requirement that it do so.
11. 11 What is a Public Cloud: Types of cloud models.
In this video, we will learn about public clouds. A public cloud is owned by the cloud services provider, who is also called the hosting provider; in this case it could be Amazon Web Services, Microsoft Azure, or Google Cloud Platform. The cloud provider is a service provider, and they provide resources and services to multiple organizations and users. The organizations and users, like you and me, connect to the cloud service through a secure network connection, typically over the Internet. With the public cloud there is no need to provision local hardware, so you do not have to manage and update your systems; everything runs on the cloud provider's hardware. In some cases, cloud users can save additional costs by sharing computing resources with other cloud users, and this is also called multi-tenancy. A common use case scenario is deploying a web application or a blog site on hardware resources that are owned by the cloud provider. Using a public cloud in this scenario allows cloud users to get their website or blog up quickly and then focus on maintaining the site without having to worry about purchasing, managing, and maintaining the hardware. Businesses can also use multiple public cloud service providers, depending on the scale they want to run at. Microsoft Azure is one example of a public cloud provider. The companies today that are using Microsoft Azure, Amazon Web Services, and Google Cloud Platform range from companies like Uber and large companies like Novartis to small and medium businesses that use the cloud to save costs and get the benefits that the cloud provider offers.
12. 12 Characteristics of Public Cloud: Let's talk about the characteristics of the public cloud. Ownership: the resources that the organization or end user consumes, for example storage, processing power, CPU, and memory, do not belong to the organization that is utilizing them; rather, they are owned and operated by a third party such as the cloud service provider. Multiple end users: public cloud models may make their resources available to multiple organizations, not just yours. Public access: the public cloud provides access to the public, which means the resources in the cloud provider's location can be accessed over the Internet. Availability: this is the most common cloud deployment model, and the cloud provider ensures the availability of your data and machines. Connectivity: users and organizations are typically connected to the public cloud over the Internet using a web browser. Skills: public clouds do not require deep technical knowledge to set up, and your organization does not need a lot of resources either; all you need is a laptop, desktop, or any small device with a browser and basic Internet connectivity, and that's all. The skills are all managed by the public cloud provider.
13. 13 What is Private Cloud: Let's talk about private clouds. When we think about the term cloud, it does not always mean that it is with a third-party provider; it could be within your organization as well. Private clouds are owned and operated by the organization that uses the resources from that cloud. The organizations or enterprises create their own cloud environment within their own data center. That means the resources, hardware, storage, and everything else that is required to make a cloud sits in the data center of the enterprise.
And that means that employees of the organization can use a self-service provisioning portal to log in to the private cloud and create computing resources. The organization, though, always remains the owner and is entirely responsible for the operation of the services they provide. A good use case for the private cloud scenario would be when an organization has data that they really do not want to put in the public cloud. Perhaps there are legal reasons; for example, medical data that has to be compliant with HIPAA standards cannot be exposed publicly. Another scenario is where government policies require a specific kind of data to be kept within the country, privately. In the next video, let's take a look at the characteristics of a private cloud and what actually makes it private.
14. 14 Characteristics of Private Cloud: In the previous video you understood that the private cloud is owned and operated by an individual enterprise. Let's understand the characteristics of the private cloud. Ownership: the owner and the user of the cloud services are the same. If you compare it with the public cloud, the cloud owner and the cloud user are two different things: the cloud user is us, and the cloud owner is the public cloud provider. But in a private cloud the owner is the company, and the users are also within the company, because everything sits on premises, inside your LAN, on your own hardware. The owner is entirely responsible for the purchase, maintenance, and management of the cloud hardware. Users: a private cloud operates only within one organization, and cloud computing resources are used exclusively by a single organization or single business. Connectivity: a connection to a private cloud is typically made over a private network; like we said, it is within your LAN environment, and that makes it highly secure. The connection to your private cloud is not available to the public, which means you cannot access those resources in the private cloud from the public Internet; you can only access them from your intranet. Skills: to build your private cloud you will need deep technical knowledge, because you then have to set up, maintain, and manage your private cloud infrastructure yourself.
15. 15 What is Hybrid Cloud: The last one in this section is the hybrid cloud. A hybrid cloud combines both public and private cloud environments, and hybrid clouds allow you to run your applications in the most appropriate location. An example of a hybrid cloud usage scenario would be hosting a website in the public cloud and linking it to a highly secure database hosted in your private cloud. Such hybrid cloud scenarios are very useful when organizations have some things that they do not want to put in the public cloud, possibly for legal reasons; for example, you may have medical data that you do not want to expose publicly. Another example is one or more applications that run on old hardware that really cannot be updated. In this case, you want to keep the old system running locally in your private environment and connect it to the public cloud for authorization or for storage. In the next section, we will learn about the characteristics of hybrid cloud models.
16. 16 Characteristics of Hybrid Cloud: When you combine the power of the public cloud and the advantages you have in the private cloud, what you get is a hybrid cloud. In this video, let's learn about the advantages and characteristics of a hybrid cloud environment. Resource location:
it simply means that some specific resources will run in the public cloud, and other, dependent resources will run in your private cloud infrastructure. Cost and efficiency: hybrid cloud models allow an organization to leverage some of the benefits of cost, efficiency, and scale that are available in public cloud models, so you can take advantage of both private and public clouds. Control: your organization retains most of the management control because you also have infrastructure running in the private cloud. Skills: keep in mind that your organization will need a considerable amount of technical skill to create, build, manage, and maintain the hybrid cloud environment, because now you have infrastructure to maintain in the private cloud, and the integration components with the public cloud must also be maintained by your technical team members.
17. 17 Review and What Next: So far we have learned about what a public cloud, a private cloud, and a hybrid cloud are. Next, let's talk about the types of cloud services: infrastructure as a service, platform as a service, and software as a service. We will also learn about their advantages, the characteristics of each of these cloud services, and the usage scenarios of infrastructure as a service, platform as a service, and software as a service.
18. 18 What is IAAS: Do you know what it takes to build the infrastructure for an organization? When we think about infrastructure, think about the underlying components of any application: servers, hardware like routers, switches, and firewalls, the underlying cabling, and everything else that goes with the physical infrastructure. You need a lot of security monitoring, log access, load balancing, clustering, storage, resiliency, backups, and the list can go on endlessly. These are the services that any application needs in order to function properly. The infrastructure-as-a-service provider will supply this range of services to you as a service, so you do not have to go ahead and create that infrastructure on your own premises. In infrastructure as a service, the cloud provider hosts the infrastructure components that I just listed, and they are hosted in the provider's data center. All the servers, storage, and networking hardware, as well as the hypervisor layer, are provided by the cloud provider, like Microsoft Azure, Amazon, IBM Cloud, etcetera. So you do not have to worry about the underlying cabling or about procuring the hardware for your virtual machine. As an infrastructure-as-a-service customer, all we have to do is have access to the Internet and a subscription with the cloud provider, and then you can provision a virtual machine of whatever configuration you want. You can go ahead and move all your workloads to virtual machines at the cloud provider. In the cloud, customers can use the cloud provider's racks, switches, and firewalls, and all of that can be provisioned in a matter of minutes. Any cloud computing model requires the participation of a provider, and that includes infrastructure as a service as well. The provider is a third-party organization selling the IaaS services; for example, it could be Amazon Web Services, Google Cloud Platform, or Microsoft Azure.
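To make the IaaS idea a little more concrete, here is a minimal sketch, not part of the original lecture, of provisioning a small virtual machine programmatically with Python and the boto3 SDK for AWS. The AMI ID, key pair name, and region shown are placeholders you would replace with your own values.

```python
# Minimal IaaS sketch: launch one small EC2 virtual machine with boto3.
# Assumes AWS credentials are already configured (e.g. via `aws configure`).
# The AMI ID and key pair name below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID for your region
    InstanceType="t2.micro",           # free-tier eligible size
    KeyName="my-key-pair",             # placeholder key pair for SSH/RDP access
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched instance:", instance_id)

# When you no longer need it, you can decommission it just as quickly:
# ec2.terminate_instances(InstanceIds=[instance_id])
```

In this model the provider runs the data center, the hardware, and the hypervisor; everything from the operating system upwards on that instance remains your responsibility.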
A business might even deploy a cloud environment and become its own infrastructure-as-a-service provider. Organizations choose infrastructure as a service because it is easy, fast, and cost-efficient to operate a workload without having to buy and manage the underlying infrastructure. IaaS is an effective model for workloads that are temporary, but some organizations have gone with it for permanent workloads as well. Infrastructure as a service is the most basic category of cloud computing services. With infrastructure as a service, you rent IT infrastructure (servers, virtual machines, storage, networks, and operating systems) from a cloud provider on a pay-as-you-go basis. So it is instant computing infrastructure: the resources are provisioned instantly and managed over the Internet, and when you no longer want a virtual machine, you can simply decommission it. Remember that in infrastructure as a service there are no upfront costs; users just pay for what they consume. The user is responsible for the purchase, installation, configuration, and management of the software inside that operating system, but remember that the cloud provider is responsible for ensuring that the underlying cloud infrastructure, such as virtual machines, storage, and networking, is always available to the user. So when you use infrastructure as a service, ensuring that the service is up and running is a shared responsibility. In an upcoming set of videos we will also learn what the shared responsibility model is, and there is a lot to understand about it in detail. In the next video, we will learn about the common usage scenarios of infrastructure as a service.
19. 19 Use Cases of IAAS: In the local data centers in your office you may have workloads running on virtual machines, and if your organization is looking to adopt the cloud as a platform, you may want to do a lift and shift of those virtual machines to the cloud, and you may want to do that on infrastructure as a service. Typically, IaaS facilities are managed in a similar way to on-premises infrastructure, and that provides an easy migration path for moving existing applications to the cloud. Teams can also quickly set up and dismantle test and development environments, so if you would like to bring new applications to market faster, IaaS is the way to go: infrastructure as a service makes scaling development and testing environments up and down fast and economical. If you would like to run websites using infrastructure as a service, what you do is build a new virtual machine in the cloud, typically on AWS, Azure, or Google Cloud Platform, then install Apache Tomcat and host your web application on it; so infrastructure as a service is also used for web hosting. Organizations may want to avoid the capital outlay and complexity of storage management, which typically requires skilled staff to manage data and meet legal and compliance requirements. Infrastructure as a service is useful for managing unpredictable demand and steadily growing storage needs, and it can simplify the planning and management of backup and recovery systems, so infrastructure as a service is also used to store data backups and perform recovery when required.
20. 20 What is PAAS: Developers are responsible for building the great applications that we have today. Developers are very good at writing great code and making sure that the application works and performs at its best.
But one of the major roadblocks developers have is that they do not have an understanding of the underlying infrastructure where the application will be hosted; for example, the developer may not know what encryption is or what backups are. Platform as a service provides an environment for building such applications so that the developer does not have to worry about those concepts; they just have to focus on writing the best code. Platform as a service provides an environment for building, testing, and deploying software applications. The goal of PaaS is to help create an application as quickly as possible without having the developer worry about managing the underlying infrastructure. For example, when deploying a web application using PaaS, you do not have to install an operating system; you do not even have to take care of Windows updates or install the antivirus on the machine. PaaS is a complete development and deployment environment in the cloud. Resources are purchased from the cloud service provider on a pay-as-you-go basis and accessed over a secure Internet connection. There are no upfront costs, and users pay only for what they consume. The cloud user, or the developer, is responsible for the development of their own applications; however, they are not responsible for managing the server or the underlying infrastructure. This allows the cloud user or developer to focus on the applications or workloads they want to run. The cloud provider is then responsible for operating system management and for network and service configuration. Cloud providers are typically responsible for everything apart from the application that the user wants to run. The cloud provider provides a complete, managed platform on which the developers can build and run their applications.
21. 21 Use Cases of PAAS: Platform as a service, or PaaS, is a hosting model in which, instead of being given a bare-metal machine or a basic operating system, you are provided with a platform on which you build something for the end users. Sometimes PaaS is also referred to as serverless design or serverless architecture. Let's talk about some of the popular types of platform as a service and their use cases. A DBMS is a typical example here: the vendor hosts the database engine for you, and you build your own databases, which are then consumed by the end users or by other applications. The benefit is that you really do not have to manage the database management services. There are numerous benefits of using a DBMS on platform as a service, including built-in monitoring, automatic backups, and minimal intervention required for optimizing your queries. Some examples here are Azure SQL and Amazon's relational database service; Google Cloud's Cloud Spanner also falls under database management services in PaaS. Web application hosting services are another popular use case. Here you get the runtime for your web application, like PHP, Java, .NET, Node.js, etcetera, and you also get storage where you can host the application files via FTP. There is built-in application monitoring and logging and no runtime management, because the patching is also done automatically. Some examples of web application hosting services are Azure Web Apps, Elastic Beanstalk, and Google App Engine. Container orchestration is another one, which is very hot these days.
Whether it is Docker Swarm or Kubernetes, the cloud provider will manage the cluster for you. You simply have to work on creating the containers and add nodes to the cluster when required. You usually do not have to manage the cluster nodes, because all the complex tasks of managing the services are taken care of by the vendor. Some examples of container services are Azure Container Service, Amazon Elastic Container Service, and Google Kubernetes Engine. Big data services are also picking up steam. Under this category you get a cluster built for you; it is more like a warehouse. You simply specify parameters like size and type, and you get the resources up and running very quickly. You get an interface to interact with the platform and certain tools to monitor its health. All the tedious tasks of setting up the cluster are not on you at all. This is very cost effective, as you can start and stop the cluster on demand; there are no upfront investments needed for setting up this workload, and monitoring and patching are also the vendor's responsibility. Some examples of big data services are Azure Data Lake and HDInsight, Amazon Redshift and Elastic MapReduce, and Google's Bigtable and BigQuery. To summarize: platform as a service is a platform for your application, for example databases; anything that is required for your application to be hosted can be hosted on platform as a service, where you do not have to worry about the underlying hardware, operating system, monitoring, patching, antivirus, or backup solutions.
22. 22 What is SAAS: Software as a service is a method of software delivery that allows data to be accessed from any device with an Internet connection and a web browser. In this web-based model, software vendors host and maintain the servers, databases, and code that constitute the application. This is a significant departure from the on-premises software delivery model to a cloud-based software delivery model. First, companies do not have to invest in extensive hardware to host the software, and this in turn allows buyers to outsource most of their IT responsibilities; the SaaS provider takes care of everything, including the software itself. In addition to allowing remote access to the software applications and data through the web, software as a service also differs from on-premises software in its pricing model. On-premises software is typically purchased through a perpetual license, which means the buyers own the license to the software; they also pay about 15 to 20 percent per year in maintenance and support fees. Software as a service, on the other hand, allows buyers to pay an annual or monthly subscription fee, which typically includes the software license, support, and most of the other fees. A major benefit of SaaS is being able to spread out costs over time. One of the examples we can talk about is Office 365, which is a cloud-based email delivery model from Microsoft. Traditionally, if you ever had to provide email services, you had to install Microsoft Exchange on premises, and that needs quite a lot of hardware. Now, with Office 365, you do not need any hardware on your premises; all you need is an Office 365 license, Internet connectivity, and a supported browser. That's all, and now you have email services onboarded for your employees.
The other examples we can talk about are CRM solutions like Salesforce; Facebook and Gmail are also examples of software as a service, because I do not have to install anything on my premises or on my computer. All I need is a browser; I log in with the user ID and password given by the cloud provider (the SaaS provider in this case), and then I enter the world of software as a service delivered by the software provider.
23. 23 What is Shared Responsibility Model: Okay, now that you know what a private cloud, a public cloud, and a hybrid cloud are, and you also know what infrastructure as a service, platform as a service, and software as a service are, you may have several questions by now. Questions like: if I create a server in the cloud, who is responsible for it, the cloud provider or me? If I am using Gmail and I lose my emails, who is responsible? I uploaded quite a lot of data into my database on platform as a service; who is responsible for it? What if my data gets compromised? What if my data gets hacked? Who is responsible for it? In this lecture you will understand the responsibility and ownership of the cloud provider and what your responsibility is, and we will use the analogy of consuming a pizza. There are several ways in which you can get a pizza when you are hungry. The first box you see is where you make your own pizza, the homemade pizza, and you have a lot of responsibilities there. If you want to make a pizza it is a heavy-duty task: you are responsible for the toppings, the pizza dough, the oven, making sure the electricity is there, the gas is there, and the kitchen utensils are there. You have to have all the ingredients in place, so there is quite a lot of responsibility in cooking something at home. As opposed to that, you could go out to the market and pick up the things that are required: you pick up the pizza dough, you pick up the toppings, and so on, and once you reach home you assemble them together and put them in the oven, and your pizza is ready in some time. You have less responsibility there, because you have handed part of the responsibility to the ingredient provider, which could be Walmart or Target or wherever you want to pick up the things from; you do not have to prepare a whole lot of things in this second case, which we refer to as kitchen as a service. In the third box you say, hey, I do not want to cook, I am not in the mood, so you just order your pizza: you call your favorite pizza provider and they bring the pizza to you. You are not responsible for the parts in green, which are the kitchen, gas, oven, and pizza dough; that is not your responsibility at all, because the pizza delivery folks are the ones creating the pizza for you. Once it arrives at home, you are responsible for a few things: you have to have a dining table and a place to sit and eat, and all of that. In the last section, which is pizza as a service, you do nothing: you drive to the place, order your pizza, get it delivered to your table, consume it, and pay the bill. You are not responsible for cooking the pizza, the toppings, the pizza dough, or the oven; when you are dining out, you are not responsible for anything. Those are the different ways, and we will use this analogy to understand what your responsibility is in kitchen as a service, as opposed to walk in and bake, as opposed to pizza as a service.
Let's see how we can compare each of these with on-premises, infrastructure as a service, platform as a service, and software as a service. On-premises is as good as making the pizza at your home: it is a lot of responsibility. Similarly, we have a lot of responsibility when we think about provisioning something in our own office. If you want to host your application on your premises, you are responsible for each of the layers, right from the networking layer all the way to the top application layer. What kind of network connectivity do you want to have: straight cable, fiber optics, or crossover cable? What kind of storage: is it SAN, NAS, or disk? Do you want to contact IBM, Dell, or HP for the servers? What kind of virtualization model do you want to go for, type 1 or type 2? Should you go for Hyper-V or VMware? What kind of operating systems do your applications support, Linux or Windows? If it is Windows, what flavor? And if it is Linux, CentOS, Ubuntu, or Red Hat, which one? That is a lot of thinking that goes into provisioning the infrastructure on your premises, and it is directly proportional to the amount of time you will invest. So remember the three key reasons I gave for why we go to the cloud: cutting down on the CapEx, cutting down on the operational expenditure, and time to market. On-premises infrastructure just takes up a lot of time and a lot of investment, hence the need to move to the cloud. So on premises, you are 100 percent responsible for every layer and every entity. Let's talk about infrastructure as a service: what is your responsibility? You know that in infrastructure as a service you create servers, let's say, and your responsibility starts from the moment you have created a server. What is the hardware on which my virtual machine is hosted? That is something we do not care about; it is not my responsibility. What is the underlying virtualization layer: are they using KVM or paravirtualization, are they using Hyper-V or VMware? That is something I am not at all concerned about as a cloud user. When I create a server in the cloud, it also creates a hard disk for me, but what kind of storage is the backend using: is it a SAN, is it iSCSI, or DAS, or NAS? Well, we do not worry about it. Networking: how about the crossover cables, straight cables, or fiber optics? It is not our concern at all. But your responsibility starts from the operating system layer, the items shown in blue. Once the operating system is installed, you install the patches, you do the server hardening, and as the cloud user you will be installing the antivirus and updating the virus definitions. You will be opening the right set of firewall rules so that the right sort of people can connect to it. What kind of applications will be downloaded and installed on the machine, the server hardening procedures, the security protocols: those are your responsibilities. So in infrastructure as a service, the items in blue are your responsibility; the ones in green are the cloud provider's responsibility. In the screenshot you see "managed by Microsoft", but it could be any provider; it could be Google, and this holds true for any cloud provider, like Amazon, as well. Now for platform as a service.
A platform as a service, as you understand, is more like databases or anything else that runs in the backend, so developers do not have to focus on the backend activities. Now, when we think about such entities as databases, putting data into the database is the PaaS administrator's or the application developer's responsibility. What is in the backend (networking, storage, servers, virtualization, the operating system) is not my responsibility at all. And when we think about data, hosting the right kind of data, securing the data, encrypting the data, and also ensuring the data is backed up at regular intervals is your responsibility as a cloud user. What kind of applications are connecting to my database is also your responsibility as a cloud user. In software as a service, we have hardly any responsibility: the entire stack is in green, networking, storage, servers, virtualization, all the way up to the top application layer. We have no responsibility there. So think about Google's Gmail services, Microsoft Office 365 services, or ServiceNow services: we are responsible for our Internet connectivity and for ensuring that we do not share credentials or passwords with anybody. You do not give away your Gmail credentials to anybody; you keep them confidential, and that is all. You do not have to worry about their electricity bill, what kind of hardware they have in the backend, what kind of operating systems they are running in the backend, or how much manpower they have; that is their responsibility. To give you an overview of this: we are responsible for our data. Keep in mind that ultimately it is the customer's responsibility to protect the data, to ensure that the data is confidential, that the data has integrity, and that the data has availability. When we design systems in the cloud, we always think about design for failure and design for security: always have highly available systems, and always have highly confidential information protected by multiple layers of defense.
24. 24 Foot Prints of Amazon Web Services Datacenters: Welcome back, and thanks for joining me again as we get started on this section, AWS global infrastructure. This is where we get introduced to various topics, including what regions are, what availability zones are, and what data centers are. But before we start talking about all those wires and how things get connected, let's talk about who AWS is. Amazon Web Services was launched in 2006, so it has been about 14 years now, and that just tells us that the cloud is already more than a decade old. Today, in 2020, Amazon has over 200 services, providing a range of services from networking, compute, storage, and databases to developer tools; I could go on and on. This is just to tell you that Amazon has everything you need under the sun as far as technology is concerned: you can think about blockchain, artificial intelligence, Internet of Things, serverless computing, Kubernetes. Amazon has got everything, and we will get to know those services as we go along through the course. So instead of starting this lesson by telling you what services Amazon Web Services offers, let's fly 30,000 feet above the ground and look at how Amazon looks from the sky. The AWS global infrastructure has 22 regions around the world as of today, with 70 availability zones. We are going to talk about availability zones here in just a couple of minutes.
But first, let's understand and look at what the regions look like. I have pulled up a website from Amazon Web Services, which is infrastructure.aws, so you can go there as well and take a look at the latest footprint of Amazon Web Services across the planet. All of the orange boxes that you see on this AWS global infrastructure map represent regions, including the physical locations where they are located. At the highest level, the AWS physical infrastructure is made up of numerous regions. As you see, we have some of them in Northern California on the West Coast, and on the East Coast there is Northern Virginia. There are a lot of these data centers in the European region as well, a few in the Middle East, one in India, one coming up in South Africa, and a lot of them in the Southeast Asian region and Australia as well. The one thing to keep in mind is that every region has multiple availability zones. Let's understand that by clicking on a particular region; what I am going to do is click on São Paulo, let's say. It zooms in right there, and it tells me that there are three availability zones. These three little blue dots are nothing but availability zones, and you are going to find such availability zones in every region. Let's do that for Northern Virginia. Actually, I will go back up, close that, and click on Frankfurt, for that matter, and what we have here is three availability zones in Frankfurt. As you play around with this world map, you can scroll to the right and to the left with your mouse and get that first-hand experience with this wonderful interactive display of Amazon's data centers. Now I am going to click on Northern Virginia, and what I have here is six availability zones, the highest so far, isn't it? But what is an availability zone? Availability zones are where your actual AWS resources are located. So let's take a look at this hierarchy one more time. You have all of these multiple regions; all of the orange boxes that you see are regions. Then you have availability zones inside the regions: every region will have a minimum of two, but they can also have three or more availability zones within them. So the orange box is a region, and the data centers within that region are what we call availability zones. What I am going to do is focus on one particular region here, let's say Northern Virginia. I usually pick Northern Virginia because it is considered to be the guinea pig for Amazon: they do lots of testing there, and new features are introduced in this region first. This region has six availability zones. So what is inside each availability zone? Each availability zone is a data center, or it could even have more than one data center within that availability zone. When you provision resources in Northern Virginia, they go into one of these blue boxes, into one of those availability zones. So remember that inside the region you have availability zones, and inside the availability zones you have one or more data centers where your AWS resources will reside. I will scroll in a little bit here, and what you see is the blue box; I cannot go any further than this, so just imagine this as a huge data center where you will have your virtual machines or databases sitting inside it.
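If you prefer to explore the same hierarchy from code rather than the interactive map, here is a small sketch of my own, not from the lecture, that lists the regions and then the availability zones of one region using Python and boto3; the region name is just an example.

```python
# List AWS regions, then the availability zones inside one region (boto3).
# Assumes AWS credentials are already configured for your account.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Every region currently available to this account.
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
print("Regions:", regions)

# Availability zones inside the region the client points at (us-east-1 here).
for zone in ec2.describe_availability_zones()["AvailabilityZones"]:
    print(zone["ZoneName"], "-", zone["State"])
```

The counts you get back will change over time as AWS keeps adding regions and zones, which is exactly the point of checking programmatically rather than memorizing the map.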
Okay, so this map is a pretty cool map that puts all of the AWS components together, and you can see that it shows the resources that exist within AWS. These orange boxes represent where AWS exists around the world as of today. This is a pretty interactive map, so you can go ahead and click on these individual components, and it will tell you what the definition of an availability zone is, what a data center is and what it comprises, and what a region is. You can go ahead and click around this interactive map and play with it to get that first-hand experience. You may already be wondering: what is the need for an availability zone? Previously we learned about the concepts called high availability and fault tolerance, remember? Those were the definitions for terms we learned in the earlier lessons. Availability zones fulfill the needs of high availability and fault tolerance. Availability zones have low-latency connections between each other. Each availability zone is isolated from the others, and they could be a hundred or more miles apart, but remember that these data centers, these availability zones within a region, are connected to each other with low-latency links. So those blue dots may be a hundred miles apart, but they are connected with high-speed connections. Low latency just means a fast connection between the sites, so there is not a lot of delay or lag when traffic is sent between the availability zones. So that was about availability zones. Now, what is this pink color doing here? What are these pink dots? These pink circles represent points of presence. Points of presence are also called edge locations, and they are geographically located all around the world. I will just close this pop-up and let you see the world map, where all these pink dots are located. You will see that there are more pink dots, or point-of-presence locations, than regions or availability zones. These PoPs, or points of presence, are essentially what we call edge locations, where AWS uses these locations to replicate certain data and certain services. Think about an application that does content delivery, a content delivery network or CDN; in AWS we call it CloudFront. We will talk about CloudFront much later in the course, but think about how you are going to access a web page: no matter where you are in the world, you want that web page to load quickly. These points of presence, or edge locations, are responsible for caching that web page in the location that is nearest to you, so that those images pop up fast on your desktop and on the desktops of all the users accessing that web application. So AWS uses their global network and these points of presence to replicate that data and those applications all around the world, and that way everybody is able to get access to the data quickly. Again, that is an oversimplification of what CloudFront can do, but it is just to give you an idea of what CloudFront is, what a CDN is, and how these points of presence are utilized. I am going to move on to the next point now, which is networks. All of these white lines that you see connecting the regions are the networks. These dotted lines represent the AWS networks. These networks connect the regions, and they also connect the availability zones together.
They are fast networks, and they also run through the oceans, across the ground and under the sea, to link the regions that are located in different parts of the world. Think about the fact that somebody went underwater for you and laid cable so that the networks could communicate with each other at various places around the world. We are not going to cover how they laid the cables in the water and how things get connected to the data centers, but these facts are just good to know. So far we have talked about regions, and then about availability zones and how the availability zones are able to replicate data between them for high availability. Remember that this high availability is not always automatic: there are certain services that do replicate data between different availability zones automatically, but there are others where you are responsible, because you provisioned them. You are the architect, and you must design the environment correctly to create that high availability across availability zones and across regions. Let's go ahead and take a look at the next piece here, which is the data center. Inside an availability zone we have one or more data centers, and this is where the resources are hosted. These are combinations of physical servers, storage, compute, and everything that falls under the network category, like firewalls, switches, and security components as well. All of that equipment, the physical hardware, is located inside the AWS data centers. So when we create our virtual machines or virtual resources, they reside in one of these data centers. In reality there is a physical server, but in actuality we are creating virtual resources, like a virtual machine or even virtual storage. The interesting concept is that if the hardware behind the virtual machine starts to fail, we can move that virtual server to a new piece of hardware, and that is one of the advantages of virtualization. But even though you are moving it from one piece of hardware to another, it is still residing inside the same AWS data centers, and it can be replicated as well across geographies. This is possible because the different data centers communicate with each other. You can go ahead and provision the resources closest to the users who are using them. Let's say you have customers and users located in Australia: what you do is create that virtual machine in one of the Australian data centers, somewhere in Sydney. If you have users in America, you can do the same thing. So you create resources in the regions that are closest to the users, and you have to make sure that you deploy your resources in multiple availability zones so that your infrastructure is fault tolerant and highly available. But that does not mean that nothing can ever go wrong. At times you may want to keep your high availability across geographies as well, and not rely on the fault tolerance and availability of a single geographical region. Think about whether you need to separate data for high availability in different parts of the world. Say you have an office sitting in Frankfurt and another office somewhere in Northern California: you can actually replicate that data between the AWS regions across the AWS network, so look at the dotted lines there that represent the network itself.
Your data is going to go all the way over the wire under the ocean, back and forth between Frankfurt and Northern California. So, based on your application and what your applications can do, you have the ability to keep your applications highly available within a region as well as between regions. As you see, Amazon has lots of benefits to offer. So what I am going to give you is some small homework: go ahead and navigate to the website infrastructure.aws, and before you start the next lesson of this course, play around with these buttons a little bit. Understand how the AWS infrastructure is nicely spread out across geographies and what these individual points have to say. There is no point in me reading every little paragraph and every little dot here, because you can read them yourself, and you may find some interesting points. That's all for now, folks. In this lesson we learned about how Amazon Web Services data centers are spread across geographies, how Amazon ensures high availability and fault tolerance with the help of availability zones, caching data at the points of presence, and how quickly they can transmit data back and forth between the availability zones and between the regions with the help of the superfast networks they have laid down for us. Thanks for watching, and I'll see you in the next lesson.
25. 25 AWS Console Tour: Welcome back, guys, and thanks for joining me again. In this lesson we are going to take a tour of the AWS console: we will talk about what the AWS console is, the length and breadth of the options we have inside the AWS Management Console, and understand the graphical user interface better, so you see the different things that you can do within your AWS account. As you see, I am already logged in to the AWS Management Console using the URL right here, and here is the URL to access it: console.aws.amazon.com. You can see that I am connected to the us-east-1 region; we are going to talk about that in just a minute. Looking at the icons that appear at the top, the first link is a kind of home button. If you click on it, you come back to this same page. As an example, let's say you are working on S3 storage, managing some files and updating a few things, and you want to go back to the home page: all you have to do is click on this AWS icon, and it takes you back to the home page. The next icon here is the Services icon. If you click on the Services link, it gives you a drop-down menu of all the AWS services we have as of today, and you can click on these links to jump to the various AWS services. Once you click on Services, it just pops up the list of AWS services; for now we will not be going through all of these services AWS provides, this is just an overview so that you understand and learn how to navigate around the AWS Management Console. For example, if you would like to work with compute instances and create virtual machines in Amazon, you would go to EC2, and it then gives you the EC2 dashboard in the Northern Virginia region. I will show you one more thing here. Let's go back and click on Services. At the top right you have two options, and these two options determine how you want to view the options in the pop-up; for example, by default these options are grouped by compute, storage, databases, etcetera.
But you can also group them in alphabetical order. On the top left, you see the history of all the services you have accessed so far, and you also have a search box where you can search for a particular service. Let's say there is a service you want to work with on a daily basis: you can just type it in the search box and quickly navigate to that particular service, instead of going and hunting for it under the Services menu. For example, RDS sits further down in that menu, so I would have to scroll down to find the relational database service there, and that may take some time; instead, I can type it here in the search box and go directly to that particular service. So by default these services are grouped by subject or category, but you can also order them alphabetically from A to Z. I will click anywhere outside this pop-up and it just goes away. The next thing in this section is Resource Groups. A resource group allows me to take resources within AWS that I want to group together and manage them as a unit. This is very important for chargeback purposes and for organizing your resources. As an example, think about having different departments within your company that have AWS resources: there may be an HR department using a set of EC2 instances and a marketing department using a set of databases, and you would like to categorize them based on business unit. This is done with the help of resource groups, so you go ahead and create groups called, say, Sales, Marketing, or HR, and keep their respective resources inside each group. Since we are talking about this, let's talk about tagging. Every time you create a resource group, you tag the resources. Tagging is important to identify who a resource belongs to, and those tags allow me to group resources together so that I can manage them as a single group (a small code sketch of this idea follows at the end of this segment). Next to Resource Groups is a small pin. This pin lets you keep frequently accessed services at the top as shortcuts; let me show you how that is done. I will click on this pin, and let's say I am working with AWS Organizations every day, so I will just click on it and drag and drop it onto the top bar. I can drag and drop multiple things: let's say I am also working with the auditing function, which is CloudTrail, and I would like to look at the logs every day, so I will put that in my shortcuts as well. What we did to keep it there is grab it and drag and drop it onto the banner; these are just shortcuts you create to quickly access different services. I will go back to the Management Console by clicking on the home page button and move on to the next option we have, which is the bell. This bell is also called the alert bell, and it shows us the list of AWS-related alerts. If I click on "view all alerts", you can see the list of alerts AWS has posted over time. You can identify the most recent alert here, and you can also see its status when you click on it, to see whether it has been resolved or not. These alerts originate from Amazon's data centers, so if there is something wrong in one of those Amazon data centers or regions, it gets published here. You can also click on the affected resources tab to see if a particular alert has affected any of your AWS resources. So that was about alerts.
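Picking up the tagging idea mentioned above, here is a small sketch of my own, not part of the lecture, showing how an EC2 instance might be tagged with a department tag using Python and boto3, and how you could later find everything carrying that tag; the instance ID and tag values are placeholders.

```python
# Tag an EC2 instance so it can be grouped (e.g. into a "Marketing" resource group)
# and identified for chargeback. The instance ID below is a hypothetical placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_tags(
    Resources=["i-0123456789abcdef0"],        # placeholder instance ID
    Tags=[
        {"Key": "Department", "Value": "Marketing"},
        {"Key": "CostCenter", "Value": "CC-1234"},
    ],
)

# Later, find every instance that carries that department tag.
result = ec2.describe_instances(
    Filters=[{"Name": "tag:Department", "Values": ["Marketing"]}]
)
for reservation in result["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"])
```

Resource groups in the console are built on exactly these kinds of tags, which is why a consistent tagging scheme matters for chargeback reporting.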
Now let's go back to the home page by clicking on the AWS logo; the next option is the account information. When you create an AWS account, you supply certain information to Amazon, for example your first name, last name, address, etcetera; that is all stored under My Account. There are several other things you can see there: information about your organization, your service quotas, billing information, orders and invoices, and your security credentials. I am not going to review every hyperlink in this particular lesson, because we will see them in other lessons, but this is the link where you get access to those resources, and this is also where you can click on Sign Out. Next to the account link is the region. If I click on this hyperlink, I get to see all the regions that Amazon has as of today. If you would like to provision your resources in, let's say, Europe, you click on Europe, and now you are connected to a European region; you will notice how the hyperlink at the top changed to eu-central-1, so everything you do now is directly connected to, and will be provisioned in, the Frankfurt region. I will switch back to Northern Virginia, and you will notice how the hyperlink changed back to us-east-1. See how quickly you can toggle between the regions that you saw on the world map in the previous lesson. The next option is the Support option. If you would like to contact support for any reason, or you have certain issues, you can go to the Support Center and manage your cases or tickets; that way you can communicate with an AWS engineer and get your issue sorted out. There are forums, documentation, training, and other resources where you can learn a lot about Amazon and how the technology works. There are like-minded people connected on these forums, where you can see questions posted by engineers around the world and how AWS engineers and others are assisting them in solving their problems. Let's move further down, and at this point you will notice that there is an AWS Console mobile app as well, with a lot of things you can manage from the mobile app. All you have to do is install the mobile app on your device, and you get screens similar to these and you are connected, so you can provision resources on the go. Scrolling further, you will see various links that will help you with your learning, and there is a feedback option so you can let AWS know about your Management Console experience. On the left side, scrolling down further, there are some wizards and workflows that will help you complete specific tasks. It is just the way AWS tries to take common things people do and provide shortcuts for doing them, again trying to simplify the experience. Scrolling on further, these are lists of somewhat complex tasks, but AWS gives you step-by-step guides on how to complete specific solutions within the AWS environment. That's all for now, folks. In this lesson we learned about what the AWS Management Console is, how you can navigate around it, and the various options within it. Thanks for watching, and I'll see you in the next lesson.
26. 26 Free Access to AWS: Welcome back. In this lesson, let's go ahead and learn how you can get started with AWS.
AWS gives you a one-year free trial. Well, that does not mean you can create large virtual machines and pump in terabytes of data and it will still be free for you; it does not work like that. So it is important to understand what is free for us, how much is free, and what you will be charged for. You can get those numbers on this page: just search for "AWS free tier" to land on the page aws.amazon.com/free, and you will get all the numbers that are free for you and when you will be charged. As an example here, if you go ahead and create an EC2 instance, which is a virtual machine, you will be allowed to run it for 750 hours in a month. The size of the virtual machine should not go beyond t2.micro or t3.micro; that means you cannot create a virtual machine with more than one vCPU and one gig of memory, which is what those instance types mean. Similarly, for S3 storage you cannot go beyond five gigs of storage, and for RDS it is 750 hours. As you scroll down, you will see the upper limits of every service for the trial duration. You can also search for them; as you see on the left side, you have a lot of search filters. Let's say you want to work with a NoSQL database: I will check that box, and then you will be able to find out what is free as far as SQL RDS and NoSQL DynamoDB are concerned. Similarly, you can find the stats for other tools as you check and uncheck these boxes. One thing to keep in mind is that you get a free trial for 12 months; there are certain services that are always free and some that are provided to you on a trial basis. So it is worth going to aws.amazon.com and searching and filtering these options to get familiar with these numbers. Another thing to keep in mind is that if you cross 750 hours, or if you provision any resource beyond t2.micro, let's say an m3.large, then you will be charged for that resource. Organizations usually set up billing alarms so that they do not go beyond a certain budgeted number. In our case, it is important to set up a free tier usage billing alarm, so that in case you start to get charged for your resources, you get notified about it. We will take a look at how you can set up a billing alarm in an upcoming lesson, but in this lesson I just wanted to look at the kind of resources you can provision with the free tier and point out that you can set up a billing alarm in case you go beyond the budgeted number.
27. 26.1 Creating a Free AWS Account: One of the best ways to learn any technology or any product is to do things yourself: get your hands dirty to know it better. For you to go ahead and practice and understand AWS better, you need an AWS account. Once you have an AWS account, you can log in and practice creating a virtual machine, creating a network, and a lot of other things that you can do with AWS. In this lesson, let's go ahead and learn how you can create an AWS account for free. All you have to do is navigate to aws.amazon.com/free, and it will bring you to this location; all you have to do is create a free account. Trust me, it is not going to take more than 10 minutes to get ready and start provisioning your resources. You go ahead and type your email address and your password, confirm the password, and then type in an AWS account name. Usually this AWS account name is the name of your company or organization.
And if you're doing this for yourself, you can go ahead and type in a fictions name. That's fine as well, but I think most of us have an Amazon account, you might have used it for purchasing on amazon dot com website. Then, in that case, you can go ahead and sign in using that account as well. Once that is done, you have to submit your credit card as a means toe. Authorize and tell Amazon that it's a really person who's creating an Amazon account. Well, why would you need a credit card? The only reason to use a credit card is to authorise yourself, and that tells is that it's not a robot. Otherwise malicious act can write a script to create hundreds of accounts in Amazon and then start building. The resource is with the help of a script, and that's something Amazon doesn't want or even any other cloud provider would not like that to happen. So they need some kind of an authorization and credit card is an acceptable method off authorization in AWS for 12 months. If you stay within the free tier subscription, you will not be charged a single dime. So I will go ahead and type in some pictures, email address here and type in rocket science at hartman dot com and give some random but secure password and type in some random itiveness account name here. Okay, that's going to go ahead and ask you for this information, right? It's regular farm. We gotta fill in some of your personal information, like first name, last name, company name, phone number and all of these attributes, and then go ahead and check this box, click on, Create account and continue. And there, after you will be asked to you punching the credit card details and then that should be done. Okay, so in your case, you can go ahead and select the personal subscription instead of typing in the professional one. Unless you're creating it for your company. 28. 27 IAM Part 1: welcome to this module on identity and access management. Dissection is really important because this is where we grant access to other people, create user accounts for let's say, developers or even finance people so that they can see the building reports in AWS. It's important to assign right set of permissions to your developers that they get access to AWS just as much as their job demands. So let's go ahead and dive in tow. I am, and understand this better I am is a service that will let you create user accounts. Groups rolls, manage user credentials, set up password policies as well. Here, you can secure your account with multi factor authentication and manage the A P I keys for programmatic actions. We'll talk about these individual components one by one. But let's get into the I am section and see what we can do here to go to the I am section of this click on services and then type. I am here, and it just pops that up or you can type. I am in the search box. It will take it the same section because we are now here in the I am dashboard So on the left hand side you got the options to create users, groups, rules and policies, and we'll go through these options one after another. But this time I'm just showing you around the dashboard. So, as you see, the user count is just one. That means I have created just one user. In my I am in the subscription. I've got one group. I've got 11 managed policies, 58 rules and zero identity providers. Right below that, I see the security status off identity and access management. Have I deleted the root access keys? Yes or no? Looks like I've deleted access keys and that's why I have this green check box. 
But what is an access key? An access key is used to programmatically access the services in AWS. At this point, we're logged into AWS using AWS start amazon dot com. This is a graphical user interface, but there will be situations where you may have to launch power shell or command line interface or programmatically read and write data into, Let's say, the S three bucket, right. So at that point you will supply the access keys to the application like the command prompt or the power shell so that you can then upload and download the data to estimate bucket or create or manage delete the virtual machines in the easy to section. Okay, so remember that access keys are usedto programmatically manage the resources in AWS. But if you're like human is the resource is using the graphical user interface, you will need ah user I d and a password. Okay. As an example, I will go ahead and create a user account in AWS. So to do that, I'll click on users. Well, I could have clicked that or click on users the same thing. All right, so I have a user account called as Mary, and you got other status takes like Mary is not part of any group. Mary has an access key and the age of the access keys to 73 days, which is approximately a near and the password ages to 93 days, which is not good. So Mary has not changed the password since 2 93 days. When did Mary last log on? It's been 77 days since Mary logged on, and Mary has not set up any multi factor authentication right So here in the user's dashboard, I will be able to see all the users that I have in my AWS subscription and their respective statistics. As for a security is concerned. OK, now let's go ahead and click on Add User, and I'll fill up this form. Let's sell column as Rob. And as you see, I can create multiple user accounts in one go. All right. But im gonna just keep it simple and create one user account for now. The next option here is how would you allow rob toe access? The resources in AWS. Is it programmatically using access Key I D and Secret Access Key and or using the AWS management console will. You can give both the options here. That's not a problem, but this is where you can control how a user can connect to your AWS services. So if you're granting access to a developer or somebody who is writing that a P I calls using 1/3 party application, Davidow need an access key and a secret access key, for sure. But if it's a person who will log into the AWS console, maybe to create resources using the user interface or maybe generate some billing reports for the finance team. Then they will just need ah, password. Okay, so there are two options of generating a password. One is automatically generated by Amazon. 2nd 1 is where you can type in your own password for the user, Rob. And then you will share this past orbit, Rob. Okay, I've let it automatically generate, and the next option is require password recent. So once Rob receives an automatically generated password from you, do you want Rob to keep his own password? So when Rob logs in for the first time, he will be prompted to change his password, which is a good security, Portia. And that's why we would like to leave this check. Okay, I'll click on next. And this is where you can add users to a group. Permissions are always granted to the group. Okay, we'll speak more about it when we start creating groups and then assigned permissions to the group. 
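Everything the Add User wizard collects here, such as the user name, whether access is programmatic or through the console, and the initial password behaviour, can also be done through the IAM API. Here is a minimal sketch with boto3, assuming the hypothetical user Rob and a placeholder password; it only illustrates the wizard's steps, it is not exactly what the console does behind the scenes.

```python
import boto3

iam = boto3.client("iam")

# 1. Create the user itself.
iam.create_user(UserName="Rob")

# 2. Console access: give Rob a password and force a reset at first sign-in.
iam.create_login_profile(
    UserName="Rob",
    Password="S0me-Temp-Passw0rd!",   # placeholder; share it over a secure channel
    PasswordResetRequired=True,
)

# 3. Programmatic access: generate an access key ID and secret access key.
key = iam.create_access_key(UserName="Rob")["AccessKey"]
print(key["AccessKeyId"])
print(key["SecretAccessKey"])   # shown only once, so store it safely
```

Note that this sketch stops short of granting any permissions; that part comes next.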
But remember that it's always a good practice to have users added to a group and then assigned permissions to the group instead, off adding permissions to the user directly. You got other options. You can copy permissions from an existing user. So if I would like to grant rob the same permissions as Mary, then I will check this. And when I do that, it means that all the permissions, like, easy to full access. And there are four more permissions. It seems right here. So all of these permissions that Mary has will be replicated and assigned to rob as well. The last option is where I can attach the policies directly. Policies are set off permissions, right? So, for example, I want to make Rob a storage admin. So I want to make him and had been to the S three bucket. So I'll hit s tree here and then click on this one. Test Amazon s three full access. So what am I doing here? I'm making Rob. Ah, full administrator. Forestry. So if I go ahead and uncheck this and check this, can you answer what's gonna happen next? Well, Rob will get read only permissions to s three bucket. OK, so I'm going to do just that. So I'll click on next. And with this, Rob will have read only permissions to the S three buckets. Let's click next and that brings us to the tagging section. Tagging is very important. I think we did mention about this when one of the previous lessons when we're talking about the resource groups tagging is away toe, identify the resource is identify the objects so you can maybe charged them back or identify who they belong to. For example, I'll say Business unit So Rob belongs to Business unit Call as marketing. Okay, who's the manager? So I'll put that mark at company dot com Is the manager off? Rob right? Similarly, you can have up to 50 tags per resource, so I'll click next. Finally review this and click on Create User. It doesn't take much time to create a user for us and the ego it is created. Rob now has ah password, which is automatically generated and also has an access key i d. And a secret access key as well. I need to share this information to rob so that with access key I D and Secret Access key Rob can programmatically access resources in the S three bucket. And if you would like to manage certain activities using the user interface than Rob will use his user name and the password to log in. All right, that's all for now. In the next lesson, we'll see how Rob can log in and show you the user interface that Rob can see when he logs into the air of Louis control. 29. 28 IAM Part 2: All right. So we have the user callers Rob created. So, as you can see in the user's dashboard, you've got to users, Rob. That was created recently. So let's see how Rob can log in. So let's understand the procedure all will use to log in and get access to the resources. Okay, so I'll take a step back and show you something on the dashboards. I'll go back here and then you might have noticed already that there is I am users sign in link. So all the I am users will use the following link to log into AWS portal. Okay, so what I'm gonna do is copy this and then launch incognito mode and then paste there you are really here. So this is what Rob will use always to log into the AWS porter. So this is my account. I d specific to my subscription that I'll type rob here and then type rob's password. I don't know Rob's password yet, so I will minimize this quickly navigate to use your section and see if I can reset Rob's password so I can go to security credentials. 
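As a quick aside, the remaining wizard steps we clicked through for Rob, attaching the read-only S3 policy and adding tags, map onto two IAM calls. A minimal sketch, assuming the user Rob from before and example tag values:

```python
import boto3

iam = boto3.client("iam")

# AWS-managed policy granting read-only access to S3.
iam.attach_user_policy(
    UserName="Rob",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

# Tags help identify who a resource belongs to and who to charge back.
iam.tag_user(
    UserName="Rob",
    Tags=[
        {"Key": "business-unit", "Value": "marketing"},
        {"Key": "manager", "Value": "mark@company.com"},
    ],
)
```

With that aside out of the way, back to resetting Rob's console password.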
And right there, there's a console password option click on Manage and then select this radio button custom password and then type in the password here. Okay, so I'll go ahead and type some random password and then click on apply. I also have the option to go ahead and let Rob reset the password. And this is what we do in the production as well. You will go and check this. Share this password with him over email or some other communication channel. But since we're working in a demo environment, I leave this unchecked to speed up the labs. Okay, so if we can apply right, so that stops password. Go back to the Rob's signing portal now. All right. So here in this password section, or Philip the credentials and then click on signing Now, once rob science in, he should be able to make modifications to s three bucket only, although he can see that we can create easy to instance and probably elastic beanstalk as well. And then he's able to view everything that AWS provides. But does he have permissions to go ahead and create a C two instance? Okay, let's see that. So Rob is very curious. Now he clicks on the easy to dashboard and tries to maybe create an instance. Now he's in the EEC to dashboard. Let's say he wants to know how many running instances are there in the subscription clicks on that. And then he gets a message that is not authorized to perform this operation. Absolutely. This is because Rob has been granted permissions just to the s tree that to read only permissions and then he can change his password, That's all. So Rob is kind of a new user, and then he's allowed to access s the only based on the permission set that has been granted. So go back to s straight to see if I can actually read and view all the buckets. All right, they go. Rob is able to see all the bucket skinny upload something. Of course not. He will not be able to upload Biggers. Rob has just read only permissions based on what we granted to him in the next section. Let's learn and continue to know about the security features like multi factor authentication and then all about access keys and secret keys 30. 29 IAM Part 3: switching back to the dashboard now, we were looking at few options right here. Delete your access keys. Until this point, we know that we can create user accounts and give them role based access controls or granting them permissions just as much as they need. This is also called as privilege off least access or privilege off least control. Moving forward from that to the next topic. Which is these five options that we have in the dashboard to improvise the security pusher off I am. The first option is deleting your root access keys. We now know what access keys are, but just to reiterate access keys are usedto programmatically access. The resource is in AWS, so if you want to create a s three bucket or let's say easy to instance or a virtual machine in AWS with help off accord or using power shell, then you'll need access keys and the secret i d. One of the best practice here is to remove or delete the access keys for the route. Now who is a root account? The root account is us myself. So when you create your account for the first time in AWS, your account becomes the root account. The root account has full, unlimited permissions on AWS. With the help of root account, anything can be done. You can go ahead and provision and number of resources in all services. All that is good, but it also becomes dangerous. 
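Before we dig further into root-account risk, here is a small sketch of the behaviour we just saw with Rob: S3 reads succeed while an EC2 call is refused. It assumes you plug in Rob's (hypothetical) access key pair, and the exact error code you see can vary by service.

```python
import boto3
from botocore.exceptions import ClientError

# A session that authenticates as Rob (placeholder credentials).
rob = boto3.Session(
    aws_access_key_id="AKIA...ROB",
    aws_secret_access_key="rob-secret-key",
    region_name="us-east-1",
)

# Allowed: Rob has AmazonS3ReadOnlyAccess, so listing buckets works.
print([b["Name"] for b in rob.client("s3").list_buckets()["Buckets"]])

# Denied: Rob has no EC2 permissions, so this raises an authorization error.
try:
    rob.client("ec2").describe_instances()
except ClientError as err:
    print("EC2 call refused:", err.response["Error"]["Code"])
```

Now, back to why a leaked root access key is so dangerous.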
So if some unethical person has hands on on your access keys and the secret keys will they will be able to do anything on your subscription right? And who's gonna pay for it? Unfortunately, it's you who will be paying for that. So for that purpose, it's important that you go ahead and delete your root access keys. The way you do that is by clicking on the manage security credentials, options navigating to the access key section and then make sure that any access key that's here is removed or deleted. OK, so that's one are going back to the dashboard. Looking at the second option, which is multi factor authentication. You need to make sure that you got MF A on your root account. Now what is M. F. A. The acronym itself defines it so multi factor authentication is multiple factors off authentication used for accessing the resource is at times. It's also call as to F A. I'll take a step back and explain the authentication method that Rob used when logging into AWS portal. Right? So Rob used the user ready and password. So that's one factor. There was no second factor. The second factor is probably for use Ah, OTP on your phone as well. Or maybe you get OTP a one time password on your email. Okay, so that is a second factor. So the flow would go something like this. Rob types in his user name and then the password it's signing and then, ah, messages triggered on your phone or the authenticator app or your email where you get a one time password. You take that one time password and paste it in the portal. So you are getting on syndicated to prove that it's really you who's trying to access. The resource is and nobody else. So the how is if there is in a malicious actor who's trying to pose as you because he stole your credentials, will not be able to log in because they will not be able to satisfy the second factor or the one time password. Okay, so that is multi factor authentication. You see where this is going, right? You are securing your account. You're creating multiple layers of security around the authentication module itself. So that's what multi factor authentication is proving something who you really are with the help of something you know, which is that password and 2nd 1 which is something you have, which is the one time password, your mobile phone, the authenticator, app, email or anything It could be. So these are the two factors, although there is 1/3 factor, which is someone you yourself are, and that falls into the category of biometrics. So apart from typing in your user radian password, you also put your fingerprint or maybe retina scan or facial recognition. These are part off something that you are. But at this point, AWS supports just two factors. One is ah, password, and the 2nd 1 is using an external m afraid device or ah or a utf security key or a key for which are produced with organizations like Cuba Key. Or you can use a game alto token as well. So if you got any authenticator app installed on your phone, it could be any authenticator could be Gula syndicator Facebook authenticator. Microsoft are syndicator. You can go ahead and integrate that with that device with that, a syndicator app. And let those two factors kick in to your authentication. Okay, so that's about the multi factor authentication. We spoke about access keys and secret keys earlier, so I'll go back to the dashboard and look at the third option, which is creating an individual. I am account. So it is not recommended that you use your root account to log into the AWS subscription. 
When organizations create their first log in with AWS, they will not use it to log into AWS to do their regular activities. Instead, they will either create accounts here like the way we created Rob's account or integrate that with on premise active directory, in case they do that as well. Always use groups to grant permissions. What we did in the first lesson was we create an account for Rob and granted him permissions directly. Now, this was just done to explain things to you in a better way. But organizations will go ahead and assigned permissions using groups. Right? So this is how it goes. You will go ahead and create a group right here in this section and assign policies to the group and then add users to the group so that users will get the same set off permission. Same set off policies that are assigned to the group. Okay, Now, this fourth section is all about using groups to assign permissions. So if I go back to the group section here, looks like I have a group called Les Cloud hyphen Storage admits. Right, that's there. Now, are there any permissions assigned to it? Yes. And these permissions are s tree full access, elastic file system read axes and elastic file system. Full access. Okay, So any user who is a member off this group will get those set off permissions. That's what it means. And this is the best practice that security professionals and cloud professionals recommend . Right? And the last option is applying, and I am password policy. Now, what is an I am password policy? Password policy defines how complex a users password can be so that they do not type never simple password like their cat's name or in a pets name as their passwords. Right? So we define how complex the password should be. And how long should we remember the passwords? Let's take a look at that quickly. So I'm going to click on Manage password policy. And then what you will see is a box here, which is set password policy. All right, this is where you can enforce the minimum password length. Do you want upper cases in the passwords? Lower cases. Do you want to enforce at least one numeric? Do you want that to be alphanumeric? And all those options that you see here There's no point reading this, but a permutation combination of these makes your password complex, and I will not say impossible, but rather say difficult for an attacker to crack your password. Okay, so I click on cancel. So to improvise your security, Portia Amazon recommends these five options and all of them Must be stick Didn't green. Okay. All right, folks, that's all for now in this session, Let's go ahead and connect again in the next lesson. Thank you. 31. 30 IAM Part 4: in this lesson will talk about how you can accomplish the principal off least privilege. In the previous lessons, I did mention that you gotta attach the policies to the groups and then add the users to the group's Let's go in and see how that works in action. Now I'm locked into the AWS portal. I'll go and click on policies, and then what you will see is lot off inbuilt policies that Amazon has for us on the top right inside. What you see is that regard about 651 inbuilt policies. Now what I'm gonna do is click on one of these policies, and they understand that in detail. OK, so let's say I'll go ahead and search for E. C to hear easy, too, by the way, stands for Elastic Compute Cloud, which will let us create virtual machines in AWS. We'll talk more about that in detail in the upcoming sections, but for now let's stick to policies. 
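One more note on the password policy box we configured a moment ago: the same settings can be applied with a single IAM call. This is a minimal sketch with example values; the actual numbers you enforce are up to your organization.

```python
import boto3

iam = boto3.client("iam")

iam.update_account_password_policy(
    MinimumPasswordLength=12,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    RequireNumbers=True,
    RequireSymbols=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,          # days before a password must be rotated
    PasswordReusePrevention=5,  # remember the last 5 passwords
)
```

Back to the managed policies we were browsing.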
So I've got something called Amazon easy to full access. And what does that do while it is giving us full access to all of these services, which are interlinked with easy to night so anybody who has, uh, this policy linked will be managed will be able to manage everything as far as easy to is concerned. Okay, So what we usually do is go ahead and create a group and then link that group with the policy. Let's do that. So I'll go ahead and create a group and then call it as easy to admin or something and click on next. Now, what we have is a list of policies that we saw in the earlier section. Okay, Now I'm gonna search for my policy and then select the policy that you want to attach to the group. Okay, now we're done with this. Let's click on next step and finally review and click on Create Group. That's done. That's how simple it is. Now it's time to go ahead and add users to the group. Okay, Now we already have a user called as Rob. What I'm gonna do is go to use our section, click on Rob and then so let the groups tab and add users to the group. Right? I've got all the groups in my subscription so far the group that I want drop Toby in is easy to admin said like that. And at two groups, that's it. So now Rob will be able to log in and manage everything as far as easy to virtual machines , load balancers or concerned. OK, let's see that. Now I'm gonna go and launch an incognito mode and you remember what I'm doing right? And bring the same set of factions that I did in the previous lessons. I'm going to copy the I am link for the user. Navigate to incognito mode. Go and log in as him. Okay, type Rob and type the password as well. There you go. I'm gonna go full screen, OK, so not this time. I'm gonna go to E. C two and then you will see that running instances is zero. In the previous lessons, we got running instances as hyphen and when we clicked on this, we got a message that we do not have permissions to view this page. But now clearly say is that I do not have any running instances in this region. And that means that I'm able to read the data from the sea to dashboard. Well, things will go ahead and work out for Rob if he goes and clicks on launching stance and follows through the wizard and creates a virtual machine for himself. But then we're gonna do that in the next section when we will do easy tools. Okay, but one thing to keep in mind that privilege off least control is very important for security purposes. So when we talk about the best practices in the next lesson, I'll mention that one more time about Remember that for now. Okay, Now, one thing to keep in mind is that when Rob logs in, he will be able to see the following on the top right hand side, which is his name at the count I d. This number is your account I d related to your subscription. Okay, for now, I'll go ahead and sign out of this and close this guy. All right, Justice summarize. In this lesson, we understood what policies are. We created a group, attached the policy to the group and added the user rob to the group within, signed in with Rob and saw how the permissions are effective for that user 32. 31 IAM Summary: Hello, guys. Welcome back. Thanks for joining me again as we get started to talk about some of the best practices off I e. M. So the first thing to remember is that the root user account is created when you create your AWS account, your email address becomes the root account and it's always recommended not to use your root account for everything within your AWS account. Right? 
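Before we go further into the summary, here is the group workflow from the last lesson condensed into code: create the group, attach the EC2 full-access managed policy to the group, and add Rob to it. The group name is simply the one we used in the demo.

```python
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="ec2-admin")

# Attach the AWS-managed EC2 full-access policy to the group, not the user.
iam.attach_group_policy(
    GroupName="ec2-admin",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2FullAccess",
)

# Any user added to the group inherits the group's permissions.
iam.add_user_to_group(GroupName="ec2-admin", UserName="Rob")
```

Now, back to the best practices around the root account.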
So did not use it For the daily admin tasks, you should create an additional account which is an I am user for daily administration and then grant them permissions based on what they need to do. I am users. When you create them, do not have any permissions by default, so they can just log in. But by default they would not be able to access any resources. So you will then have to grant those I am users access to the resources based on the principle off least privilege. And what does that mean? It means that you will give them access to just as much. They want to do the job. You spoke about granting permissions. Why are groups and then assigning policies to the groups and then there is roles is not always thes e users who log into the applications, but also applications need access to other applications as an example. If we want to allow, let's say, an easy two instance to Access S three bucket, then we create an I am role here, right? And then you will attach this role to the policy. This role can then be used by different services to access other services. And then we spoke about multi factor authentication, which is creating an additional layer off security additional layer of protection for your AWS services. So, apart from using the user I D and password, you'll have to provide that random six digit cord which could be generated by ah Kee farm. Or it could be in our estate token or an authenticator app, and then you have a P I access keys. These access keys are used at the command line level, are used programmatically to log into your PC tune stance or maybe access any other resource in AWS. So that's all for this section, folks. Thanks for watching, and I'll see you in the next lesson. 33. 32 Networking Fundamentals Part 1: Oh, guys on Welcome back. Thanks for joining me again as we get started with another section on virtual private clouds. But he wouldn't before we get started with those technical jargon. Let's understand the networking fundamentals that working is the fundamental for any organization that is dependent on I. T. For computers and devices. To communicate with each other, you need a network. A network comprises off devices like routers, switches and firewalls, which then connect to your devices or computers using wires. Or maybe wire. Leslie. When understanding networks think about your home network, how is that connected? So at your home, you got a router and at least one device like a laptop or a phone, which may wirelessly communicate with your router. To understand this better and just to make it fun to learn. Let's use a school analogy. Okay, so if you got a classroom full of students and if Rob needs to talk to Mary to get some help on an assignment than a rock and just call Mary and ask for help, that's our simple it ease. In a classroom scenario, that means Rob can just be seated at his desk and still get the work done. But imagine if Mary is in a different class room. In this case, Rob needs to walk up to the door and then identify where Mary is. Then walk upto that door, identify Mary in that class and then talk to her. Okay, so that means if Mary is in a different room than Rob has to walk through two doors, door one and door to now. This is an enology toe. Understand networks? OK, now let's imagine that these classrooms are your networks. It's called them and one and N two. And these doors are your routers, R one and r two. We just naming them now. Robin, Mary are like your devices like workstations, maybe a desktop laptop, maybe a printer on. 
We'll call them as Device one and device to now. If Device one needs Teoh, send some information to device to. This can be done within a network directly without the help of the router. However, if device to isn't a different network, then the wise one would not be able to find it in Network one. So it's gonna ask for help. Their outer run then helps device one to find the route to device to the traffic of the message, then goes through the router to to finally reach device to the routers. Help the devices find the shortest path to the destinations. Routers are an exit point to that network. All of these devices are known in the network with help off I P addresses. Communication between the devices happens with help off I P addresses and not names, although when you access a website or a printer, you may be accessing them with help off a name. But in the back end, their translated to an I P address with the mechanism call as DNS or domain naming servers domain naming systems. All the networks related concepts remain the same. Regardless, your network is at your home, in your office or in the cloud. It just works the same way. I know this is really an oversimplification off what networks are. It can go far more complex and this, but I want you to take time for those who are pretty new to all of this to have some kind of a visualization like help you relate to the basic concepts off a network. Thanks for watching, and I'll see you in the next lesson. 34. 33 Networking Fundamentals Part II: Welcome Back Children Network School. In the previous lesson, we learned a little bit about networks with the help of the school in ology, where Rob wants to share certain information with Mary. Let's take this understanding to the next level where we will introduce new network terminologies, for example, what networks are what a sub net is, what a cider block is, etcetera. So the outre blue line that you see here is there for a reason. The ultra blue line is the perimeter off your school, OK, so we're gonna call it as a network. You can also imagine this as a boundary off your network at your home where you would have a wireless WiFi connection. So that's your network, then What are those green boxes? Well, those are also networks, but because they decide inside the large box and because they're smaller than the larger box will call them as sub networks or sub nets. So we have sub net one and submit to you can think of it as two different classrooms or two different rooms in your house. No, we said that the whole school is a network just like the way your house is the network, and the little classrooms are sub networks, just like your rooms in your house. So we created two small networks inside that bigger network from the previous lesson. We also understood that devices within the sub net can talk to each other directly. They do not need the router. But if the need to talk to devices in a different submit, then they need some kind of a routing mechanism toe. Identify the location of the destination and find the shortest path off it. Think of it like Google map. When you put the source and destination in the map, what happens? It shows the shortest path to you. This is how it works and networks as well. The source in this case would be your device, which is, let's say, device one, and the destination is device to the path to the destination, and the route to the destination is calculated by the router. We did mention that we need I P addresses and not names like Device one or device to. 
It's paramount that we have IP addresses for every device, so all the devices in the networks or the subnets have IP addresses. Remember that. Also remember that networks have CIDR blocks; CIDR stands for Classless Inter-Domain Routing. The private IP address ranges we use here are defined in RFC 1918. What is RFC 1918? It's a document published at ietf.org, and here's the link, which you can check later on. For now, just know that this RFC defines the usage of private IP addresses and blocks of IP addresses. That discussion is beyond the scope of this lesson, as it would take us into a totally different networking world and would be too confusing at this point; if I start explaining subnetting and how we convert decimals to binary, it will just get confusing, so let's park that discussion for now. Just map it in your mind that a CIDR block is nothing but a notation, a representation of your network. Every time you see a slash after an IP address, it's a CIDR block. A CIDR block can represent the whole network or the smaller subnets as well; because a subnet is also a network, it will also have a CIDR block. The CIDR block defines how many devices can be hosted inside a network. For example, a /24 gives you 256 addresses, around 254 usable devices in that subnet. A /30 gives you just two usable devices, and a /16 gives you roughly 65,000 devices in that network, which is huge. So the number of devices you can have in a network is inversely proportional to the number after the slash: the smaller the number, the bigger your network; the bigger the number after the slash, the fewer devices the network can host. You do not have to memorize these numbers, because they can easily be calculated using an online subnet calculator. All you have to do is put your numbers in and you'll get the right statistics. As you see here, with /30 it tells me the number of hosts I can have in that subnet is two. If I change it to /25, I can host 126 devices in that subnet. If I change it to /24, it goes to 254, so I can host 254 devices if I have a CIDR block of 192.168.0.0/24. Go ahead and give it a try: click on Class A, B and C and play around with how these options change the number of hosts that can fit in that network. Moving on from that discussion, let's start branding our networks. Let's say this network belongs to a fictitious company called Third Global, and Third Global has two sub-networks, or two subnets. Now there is another company that Third Global wants to transact with and also share certain data with; let's call it Raining Cloud, which is also a fictitious company. We need to send some data from device one in subnet one of the Third Global company to device two in subnet one inside Raining Cloud, so the data needs to go from here to here. How will this happen? Remember that the outermost boundary, which we call the network, does not have doors yet. There must be some kind of a door that opens when data wants to go out and come in, isn't there? So let's install that door now; it makes things much easier. Technically, this door is also called a gateway. We use the gateway to intercept the traffic and run several intelligent algorithms on it to know whether the data is all right.
If it is militias or dissent by the right party, etcetera. And then you also find the route to the destination. Okay, so it's the gateway that sends the traffic out off your network. This traffic can be going to another company like reigning cloud or to the Internet. So if you're standing emails chatting with somebody, it's still going out through the gateway at your home. Your gateway is a router itself. So let's go out and summarize this Now. Every device has an I P address, isn't it? Every network has a cider block. The sadder block is a notation that defines the number of devices in that network. And when there's a device, it can be your laptop desktop printer could be camera. It could be an I. O T. Device. It could be just anything that can communicate on the network. Gateway is an intelligent device that sends the traffic to a different network or even to the Internet. The gateway is manufactured by organizations like Cisco Checkpoint, Baracoa, etcetera. Well, these are the well known manufacturers, but there lots of other says, Well, well, that's all for now. In this lesson, folks, hopefully this has been informative. Teoh. Let's connect again in the next lesson. 35. 34 Conceptial Overview of VPC: Hello, guys. Welcome back. And thanks for joining me again. As we get started on our next lesson to conceptual overview off the PC. This is where we'll start to introduce. The terminology is used in the AWS world. As far as networks are concerned into this world, we do not build networks physically in the cloud. When we start to build any resource in the cloud, we make virtual resources. It's like, ah, virtual machine, a virtual database hosted on a virtual instance. And then the network is also virtual in Amazon Cloud. This virtual network is called as ah vpc. VPC is an acronym that stands for virtual private cloud. All the concepts that you learned in the previous lessons will now be applied to help us build Ah, virtual private cloud or a VPC in eight of Louis. So in AWS, virtual private cloud is your network sub net remains a subject, so there is no special name for a sub net in aws it it's going to remain the same. And then inside the submit, you start hosting your resources. For example, it could be a database or an easy two instance that would recite inside no VPC Inside your sub net, you will have full control over who has access to the resource that you place inside your re PC. You can define what should be. The I. P address ranges off your network off your sub nets, you create the sub nets. You can figure how the data should go in and out off the network gateways. Now, you should be aware that when you create an AWS account, a default VPC is created for you. So we're gonna take a look at the default vpc in the upcoming lesson before now. This lesson is just still make you familiar with the terminology is used in the Amazon world. So the VPC is your network and sub net is shorthand for a sub network which is also a subsection off the network were generally includes all the devices or computers or databases in that specific location. So you can compare it to our school analogy. OK, where you got all of those devices sitting in the same class and they can talk to each other. And all of these devices have their own set of I P addresses. These I P addresses are derived from the cider block that is assigned to the sub net. So since you are creating the sub net, you're creating the virtual network. You decide what should be the cider block of the sub net. 
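Since we keep reasoning about how many addresses fit behind a given slash value, here is a tiny stand-alone sketch using Python's built-in ipaddress module instead of an online subnet calculator. The usable-host count here simply subtracts the network and broadcast addresses; inside an actual AWS subnet a few more addresses (five in total per subnet) are reserved by AWS.

```python
import ipaddress

for cidr in ("192.168.0.0/16", "192.168.0.0/24", "192.168.0.0/25", "192.168.0.0/30"):
    net = ipaddress.ip_network(cidr)
    usable = max(net.num_addresses - 2, 0)   # minus network + broadcast addresses
    print(f"{cidr}: {net.num_addresses} addresses, ~{usable} usable hosts")
```

A /16 gives 65,536 addresses, a /24 gives 256, a /25 gives 128, and a /30 gives 4, which matches the proportions we discussed with the subnet calculator.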
What decided block of the VPC must be is something that you decide. And based on that the i p addresses off Those devices are automatically assigned. So in this case, if you take a look at the cider blood that we chose, which is 19 to 1 sex ed zero zero's last 16 and for the subject regard 19 to 168 30 dark zeros last 24. So that just means that all the devices in this sub net will have the same network. I d. Well, it is. The first part of the cider block is called as the network I D. And the last octet or the last part will then be called us the whole study. OK, so this part here will remain the same for all the devices in the network. But the last October will be unique will be constantly changing for every device. So inside your aws cloud, you can have several V pieces together, and those VP sees can then talk to each other as well. Remember the scenario in the school where he said, there's a certain global network and there's a reigning cloud network and they would like to talk to each other? Well, that's the same scenario, which I'm referring to here. Now. If you got multiple VP sees in AWS, you can have some kind of routing mechanism done so that these devices in different networks talk to each other just to summarize. You can have multiple VP sees In AWS. You get a default vpc in every region by default. The resource is like the easy to instance, and the data basis will be hosted inside the sub net. The I P address off those devices will be governed by the cider block that you assigned to the sub net. All right, so that's for the summary part. In the next lesson, let's go ahead and dive into the Amazon console and see how the VPC is look like and where you can create sub nets using the user interface. Thanks for watching, and I'll see you in the next lesson 36. 35 AWS VPC Walkthrough: a network in AWS is called as a VPC. We learned from the previous lessons that a sub net recites inside of a PC, and then you will be hosting resources inside that submit. Let's navigate to the aid of lose control and see how you can implement this pictorial representation. This console is not new to you anymore. This is the AWS console, and I'm connected to the not Virginia region. Let's navigate through the V P C section. So I'm gonna type in VPC here in the search box, click on it and take me right to the BBC in US East One. As you see that in the U. R L here. Okay, Now in the BBC dashboard, you can see the number off VPC. So have the number of sub net the number off route tables in every other entity that's related with the BBC's To start with, Let's go toe the PC section. And in one of the lessons, I did mention that by default in every region you do get a VPC. So if I move to the extreme right and look at the default three PC column, it says yes, that means this is a deformed PPC now. It also has a cider block off 1 70 to 31 00 slash 16 Right now are there in a subject's inside it. Let's go ahead and check that now Click on sub nets on the left inside. And then I noticed that there are multiple sub Net are like six subjects in each available T zone. Not Virginia has six availability zone if you'll recall from the AWS their central lesson. There were six availability zones and one sub net Each has been created in each available T zone here. Okay, so it looks like this default rib UC has six sub nets decider block off. The VPC is it's last 16 network range, whereas a sub nets are individually chosen and the cider block off the sub nets are here as follows. 
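If you would rather pull the same details with code than read them off the console, here is a minimal boto3 sketch that lists each VPC in the region, flags the default one, and prints its subnets. It assumes your credentials point at the N. Virginia region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for vpc in ec2.describe_vpcs()["Vpcs"]:
    kind = "default" if vpc.get("IsDefault") else "custom"
    print(vpc["VpcId"], vpc["CidrBlock"], kind)
    subnets = ec2.describe_subnets(
        Filters=[{"Name": "vpc-id", "Values": [vpc["VpcId"]]}]
    )["Subnets"]
    for subnet in subnets:
        print("  ", subnet["SubnetId"], subnet["CidrBlock"], subnet["AvailabilityZone"])
```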
What you also have your is the Internet gateway that is attached to the VPC, right? One thing that I want you to remember is that every resource has a unique identify where now this default Papacy has a unique identify that looks like this where a sub nets have their own unique identification. Okay, now, how do I know whether these sub knitter actually linked to the VPC here. Well, I'll go to the sub nets and then look at the vpc column. All of them represent the same BBC. This is what tells me that all these sub nets are in the same vpc. Okay, now, if I go back to the Internet Gateway and then look at this one this Internet gateway is linked up to my VPC and there's a route table as well, which is also linked up to my VPC. So these are the four entities that are acquired. When you provisions a custom VPC when you want to deploy a vpc all by yourself and you do not want to use the existing BBC or the default VPC, then you have to provision these four things. One is a vpc itself. Then you provisioned the sub nets inside the VPC. You then link the Internet gateway to the VPC. Remember from the school analogy that we had doors to the classroom and that's what allows the data in and out off that network. And that's what the Internet gateway is doing. And finally you're right, the loud in the loud table so that you can allow the traffic in and out off your VPC. Okay, so these are the four sections that are very important for you to provision of EPC. That being said, I will go ahead and create a VPC. Now click on your VPC and then create VPC. I'll type the name off my VPC as test hyphen bpc, which is OK. In my case, this is just a demo environment. But if you're running these in your production environments, make sure that you know what you're naming. Standards are what you're naming conventions are. Okay, I'll move on to the next section, which is the I P V four cider block. If you recall from the previous lessons from the school analogy, we had decider block as 1 92.1 68.0 dot zero slash 16. I'm gonna follow just that. If you go back and look at the subject calculator, you will know that's last 16 will let you create about 65,000 devices in that network. This is a pretty break network to handle, and organizations usually do not need such big networks in the cloud. The next option is about whether you want i p v six assignment to your BBC or not. Okay, by default, you're not getting an I P V six, which is okay, because let's say your provisioning and A I OT devices and lot of sensors on your network, which need at millions of I p addresses. And that's when you may think about going Teoh I. P. V six. And that's a good use case. But if your provisioning virtual machines database doing the regular two tier design or three tier design, this last 16 network should absolutely do the job for us. Okay, so I will stick to the first radio button and click on Create. So that is my VPC now. OK, so this is my custom vpc that God created. This is a unique identifier with the falling centre block. So what do we do next? Remember, we've got to create sub nets who go to create route tables and Internet gateways. There's a long way to go. Let's do that step by step. I cook on sub nets now and click on create sub net. Okay, and then I'm gonna call this as sub net one. And in the drop down menu, I will select my custom VPC that I just created. The 1st 1 is the default three PC Right now I want to keep sub net one in my custom VPC and not in the default one. All right, that's where it goes. 
The next option is about availability zone, not Virginia has about six available to zones, and you're going to see all of them here. Where do you want to provisioned this sub net in? OK, so I'm going to select a here and then also choose a cider block for this, which is, let's say, 19 to 168 $30.0 slash 24. If you do any mistake here, it's not gonna accept, and it will better out right there on the screen. But I didn't get a natter. Looks like it's happy, so I'll click on Create Awesome. So my first sub net in my custom BBC is created now, so the next step is to go and create an Internet gateway and attach it to my VPC. This procedure is simple to say, create Internet gateway. Give it a name. I will type in test hyphen, VPC hyphen, Internet gateway. All right, and click on create. That's done as well. The next step here is to attach it to my V P. C C, By default is detached, right? So I need to just right click and then attached to the VPC, which will then give me the option. A drop down menu with a list of VP sees that do not have a gateway yet, right? So I do not have a Defour DPC here. Why? Because it already has an Internet gateway. So I'm gonna select that and click on attach Done one last step before we conclude this is creating the route table. But what you'll notice is that a route table is already created for our vpc. Look at that. It's already lived up to our custom Vpc right, so we do not have to create a round table here, but instead you gotta go to the routes. After selecting the route table, go to the route and then what you will notice is that there is only one route available here on what this route tells me that everything within the network within this particular range is route herbal and reachable. So if you got devices within the network, they would be able to talk to each other. Of course, a firewall rules must allow that, but here the clouds are established. But if your device needs to be reachable and if it's needs to talk to the Internet, then you need to write that rule specifically. And this rule here just tells me that local traffic, the land traffic, will be able to reach each other. OK, that's what it means. Now I'm gonna edit routes and then add it out to the Internet. Right now, Internet is known as 0.0 dot 0.0 slash zero. That's what Internet means. And where is that traffic allowed from? The traffic is allowed from Internet Gateway. Which is this? OK, I'm going to click on Save Rocks. That's done. Now I've got two routes. The route number one tells me that everything is locally route herbal within the land environment. Within your re PC, the 2nd 1 lets the traffic in and out from your embassy to the Internet. Right? This representation or this side of representation means Internet. Okay, so one side provision in my virtual machine inside miss submit, I will be able to connect to it from the Internet. Let's go and see how that works. Okay, so that's my VPC. The next step here is to go ahead and create a virtual machine inside this sub net, right? So if you recall in this lesson, we started by drawing a diagram and the diagram had couple of boxes. We had the VPC we had the sub net, and we had that easy to instance as well. Now the next step is to create an easy two instance. Let's do that. I'll click on services, navigate to the EEC to dashboard and then click on launch Instance. Okay, well, keep in mind that easy to is yet another section we still have to discuss about what easy to is and various nitty gritty is involved in creating virtual machines. 
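Before we get into EC2, here is the provisioning sequence we just completed, condensed into a boto3 sketch: the VPC, the subnet, the internet gateway, and the route to the internet, using the same CIDR ranges from the demo. It is a bare-bones illustration under those assumptions; in production you would also tag everything and pick availability zones deliberately.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. The VPC itself (the whole network).
vpc_id = ec2.create_vpc(CidrBlock="192.168.0.0/16")["Vpc"]["VpcId"]

# 2. A subnet inside the VPC, in one availability zone.
subnet_id = ec2.create_subnet(
    VpcId=vpc_id,
    CidrBlock="192.168.30.0/24",
    AvailabilityZone="us-east-1a",
)["Subnet"]["SubnetId"]

# 3. An internet gateway, attached to the VPC (the "door").
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# 4. A default route to the internet in the VPC's main route table.
main_rt = ec2.describe_route_tables(
    Filters=[{"Name": "vpc-id", "Values": [vpc_id]}]
)["RouteTables"][0]
ec2.create_route(
    RouteTableId=main_rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)

print("VPC:", vpc_id, "Subnet:", subnet_id, "IGW:", igw_id)
```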
And then we've got to go a lot more than detail as far as easy to is concern. But for now, I'll quickly do next, next, next, and make sure that I provisional virtual machine. But I'll not be explaining much about Izzy tools in this section because the core focus is to understand networks well, understand, easy to in the next module. Okay, so for now I will scroll down and say, Hey, I want to create a Windows machine that is 2019 and select that own. Make sure that it's got one CP. When one gigs off memory, that's fine. And in this drop down menu will select my VPC and select my submit. Also, make sure that I get a public i p so I can connect to it. Finally, African review and launch. They get a pop appear. I'll explain this to you when time comes when we talk about easy to in the next module. But for now, think about this key pair as a password to log into the machine. Okay, You need this key pair so that you can log in and retrieve your password. He always got to keep this keeper carefully with you. Do not share it with unauthorized people. Otherwise they'll be able to log into your windows machine and gather that data. Now, as you see, I've initiated the launch off on easy to instance, I'll take on this random value, which is nothing but ah, unique identifier for my easy to instance. Okay. So I could done that, all right. And retrieved all that Meritor for my C two instance. But what I would like you'll notice here while things are getting in place for us, this guy's pending. It's still initializing That's OK. Well, you can use this time to look at the metadata now The VPC i d is this which is our vpc. So if you recall the name of the VPC was test hyphen B p c. And the name of the sub net was sub net one. Right, So we're falling the protocols here we're creating ah virtual machine in the sub minute in the vpc that we created. Okay, well, we also have is a public i p address associated with this. Easy to instance, do not get confused Every time I say See two instance because it's nothing but a virtual machine, right in traditional environments you had VM wears and hyper V's and we used to create virtual machines and top of that here in the AWS world, you call those virtual machines as easy to instances. That's all all right, I'm just waiting for this initialization to complete and the instant state is running anyway. So that is healthy. That's good. And once it is ready, I got a copy of this public I p and launch my MST SC, which is Microsoft Terminal Services console, pays that I p address here and this. Go click on connect. It's gonna fail, of course, for now, because things are getting ready for us. All right, so it looks like I got a administrator and password screen here, but then I will not be able to retrieve the credentials till the time this initialization is complete. Okay, so I'm gonna pause this section. Positive the video for now and started again once the initialization is complete as well. Okay, so I connected back, and I'm gonna pick up this public i p one more time launched the Microsoft Terminal Services console, pays the I p address, click on connect, and then take care of this credential section. Right earlier did mention that I gotta have this pain key, right? This is a key player, which I must have before I log in. Right, Because I will retrieve the credentials with this pen file. Let me show you how so I'm going to right click on my easy to instance that just got provisioned. And then there's an option to get the Windows password right now. 
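While the instance initializes, here is a sketch of roughly what the launch wizard did, plus the password retrieval we are about to do with the .pem file, written with boto3. The AMI ID and subnet ID are hypothetical placeholders, the key pair is assumed to exist already, and decrypting the Windows password needs the third-party cryptography package; treat it as an illustration rather than the exact console behaviour.

```python
import base64
import boto3
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one t2.micro Windows instance into our subnet with a public IP.
instance = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder Windows Server 2019 AMI
    InstanceType="t2.micro",
    KeyName="windows-keypair",            # assumes this key pair already exists
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": "subnet-0123456789abcdef0",   # placeholder subnet ID
        "AssociatePublicIpAddress": True,
    }],
)["Instances"][0]
print("Launched", instance["InstanceId"])

# Later, once initialization finishes, retrieve and decrypt the admin password.
encrypted = ec2.get_password_data(InstanceId=instance["InstanceId"])["PasswordData"]
if encrypted:   # empty until the password is available
    with open("windows-keypair.pem", "rb") as pem:
        private_key = serialization.load_pem_private_key(pem.read(), password=None)
    password = private_key.decrypt(base64.b64decode(encrypted), padding.PKCS1v15())
    print("Administrator password:", password.decode())
```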
Of course, it's gonna go into my downloads file. So I'm gonna choose file and get that pim file from there. Okay. As you see, have chosen the Windows Key Paired Art PEM file and the data inside that, which is a certificate information. It's all retreat from there. Now. I took on decrypt password and you will see random characters here. And that is your password, right? So I'll copy that. All right, copy Just that and then go back to this console. What? I will type my user i d which is mentioned right here. OK, on the password, I'll paste it from the clipboard and see. OK, that's all that so quick. That is okay. And there you go, right. It is getting connected to a Windows based machine, and that's how quick that is. It will take some time initially because number one it has got such low memory. CPU and memory is just one in one. So it's gonna take time to get to my desktop and settle down, right? Moreover, if I try to do some actions on it. It's gonna be very slow because of the very same reason. Right? But the objective here is to just show you the power off cloud as to how quickly you can provision resources. Right? It's how fast Daddy's. But let me try something here. So I launched the command prompt and then type in the I p conflict of retrieve its I P address. If you know it is the I. P. Address off, this machine is 19 to 168 30.1 83. This is the same range as that off the subject. Right? So let's go back and look at that. Now I'll click on the submit, right. This is the subject that we provisioned for minutes ago to create a VPC. Now, the subject that we have the range that is mentioned here is 19 to 168 30. This is the network I t. And the last octet is the host. I t in this case right now. What I did mention in the previous lesson is that the network I t will remain the same for all devices. But this last octet may change, And that's exactly what happened in this case as well. We got the same natural gaiety. But the whole study will change every time you create new resources and provisioned new devices in this submit. Hopefully you understood all about networks, sub nets, Internet gateways and loud tables and how it can be provisions in AWS quickly in the next section, let's go in and learn about the security groups. 37. NACLS and Security Groups: in the previous lesson, we created an easy two instance. It was hosted inside a sub knit, and that sub net was linked upto are vpc. The BBC had an Internet gateway through which the data or let's say the traffic can go in and out. Okay, so that was the whole purpose off the previous lesson in this lesson. Let's go ahead and understand the knack ALS, which stands for network access control lists and security groups. Let's understand this Where the use case. Okay, so I'm right now connected to the easy to instance. Right? So the I P addresses 100 27 35 to 47 as you see on the top, left inside here. Okay, now, that's the same I p address as you see right here. Now, what I'm gonna do here is install. Ah, I I s on this machine, so I'm gonna make it a web server. So let's say we are hosting some kind off web application on this. So because it's a Windows server, you need to have my eyes installed on it. So I will goto this ad roles and features. Click on it, go to the Wizard and then install I Yes. Okay, so that's the wizard. I'm gonna hit next. Next again. So I've got the role section here and then under that, just select I I s except us add features. And then again followed through the wizard to complete the procedures. 
I'm just going to make it a plain, simple Web server. Click on install and let the installation happen. It looks like the installation is complete. I'm gonna trick on clothes and then close the server manager Israel. The school had an launched the I E and the navigate to its to tp Colin, local host. Okay, so hit. Enter. What it should give me is the default II s page. Right. So this is the default. I s page That means, I ask is installed on this machine. What I can also try is navigating to diss I P address, which is 127. That 35 to 47 right. I can try accessing it from external world. So I would launch my browser here and then navigate to this I p or rather just say go to this I p address. Okay, Look at that. It's still in the connecting status. It will not work because there are no ports allowed to access the web application. Right. So that's why it's not gonna work. This is an eight city P website. So look at the message. Check your connection, check the proxy in the firewall and run network diagnostics. So what do we do in this case now? Since this website is an http traffic, we gotta allow its to tp inbound. For our easy to instance, the traffic is two ways inbound and outbound. The out born traffic is allowed anyways, but the inbound traffic is always controlled with the help off security groups. Now, on the right hand side, you see something called security groups. A security group is kind of for firewall that is attached to your easy to instance. I'm gonna click on this security group and show you what I mean. Now this security group has two kinds of rules. One is inbound and the 2nd 1 is outbound. As you see the out bone traffic is allowed for all kinds of traffic to the Internet. That means any traffic there is going out from this virtual machine will be allowed on all ports and all protocols. That is because the out bone rules has the following rule that all kinds of protocols are allowed on all ports to the Internet. So all kind of traffic is allowed to go out from that virtual machine from that easy to instance. But what happens toe in bone traffic? When we navigated to the i p address off the web application or the Web server, it was an inbound traffic to that easy to instance. So the only traffic that is allowed is port number 3389 What port is 33894 3389 is for remote desktop protocol traffic. And that is why. And that is how I could connect to this watching machine from my laptop using Internet right, because the traffic is allowed inbound on port number 33892 that I p address from the Internet. Okay. Now, in order to have the eight city P rule as well, I need to allow an ad an inbound rule. So I'm gonna add a rule for the http traffic have to find the http in this list. It is 80 by default. And where do you want to allow it from? I want to allow it from the Internet, Okay, and then just save it. Now. Two kinds of ports are allowed inbound from the Internet to the C two instance, and they are Port number 80 which is eight city P or Web traffic Port number 3389 Rdp or remote desktop protocol traffic. Okay, now let's go ahead and refresh this page and see what happens. They go immediately. I'm able to access the new application. So this is more like, ah, firewall, which is connected to that issue to instance. Now let's go back to the picture and understand where it is actually located. Now, if you look at this, easy to instance is our resource. Every resource has a security group attached to it, so you can then go ahead and modify the inbound rules and the out bone rules. 
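The inbound HTTP rule we just added through the console corresponds to a single API call. Here is a minimal sketch using a placeholder security group ID; in practice you would look the ID up from the instance, and you might restrict the source range instead of opening the port to the whole internet.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # placeholder: the instance's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTP from anywhere"}],
    }],
)
```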
There's one more thing, called NACLs. NACLs are attached at the subnet level, so let me show that to you. Let's go to Services and navigate to the VPC section. Under this section you have something called a NACL; NACL stands for network access control list. This NACL is attached to our subnet. So there are two places where you're going to have firewalls: one is at the security group level and the other is at the subnet level. Because traffic from the internet is allowed by default at the subnet level, we were able to access this page, so you really do not have to modify anything on the NACL side. A security group is more restrictive compared to NACLs. If you modify anything on the NACL, it will apply to all the resources inside that subnet; but if you modify something like an inbound or outbound rule on the security group, it will apply only to the resources that are linked to it. In this case we have only one EC2 instance, so it will affect and impact just that EC2 instance. Things to remember: security groups are at the resource level, NACLs are at the subnet level.

38. What is Compute: Welcome back, and thanks for joining me again. In this section on compute services we're going to cover Elastic Compute Cloud, also called EC2. We'll first cover the very basics of compute. So what are the basic components of a computer? The first thing is something we are already very familiar with at this point, and that is the operating system. Typically you're going to see Windows or Linux operating systems when you talk about Amazon Web Services, but out in the world there are different kinds of operating systems used for personal reasons as well as for businesses: Windows, Linux, and Mac operating systems, different flavors of Windows like Windows 7, Windows 10, and Windows 8, and different flavors of Linux, including Red Hat, Fedora, and CentOS, plus lots of others. Depending on the kind of application you're running, you may choose a particular operating system. The operating system is what communicates with the hardware and allows all the applications to perform their tasks. In the hardware section you've got the CPU, which is the most important component, like the brain: the CPU does all the processing for the tasks, so all the thinking of your computer is done by the CPU. Then we have hard drives; the hard drive is where the data is stored. Whenever data is processed, it is done by the CPU, which then passes it off to a hard drive to keep it safe for a longer duration, so the hard drive is your storage. There are different kinds of storage as well: local storage, which is connected to your computer directly, and remote storage, which is far from your computer and which you retrieve when you need it. In organizations, people use a central place where all of that data is kept remotely and safely. So that was the CPU and the hard disk, and what you'll also need is something that can help you connect to the network.
It could be an intranet, or it could be the internet, so we need a network adapter or network card to provide that access. The network adapter could be Ethernet-based, where we plug a cable in, or it might be WiFi-based, but the goal or the objective is just to have communication established to other computers in the network or in remote networks. Once you get access to the internet, you're going to need a firewall. The firewall is responsible for helping to block unauthorized or undesirable access to your computer, maybe a virus or malware that you want to keep away. Several operating systems have an operating-system-level firewall installed: as an example, in Windows you've got the Windows Firewall, and in Linux operating systems you've got something like iptables. We use these firewalls to help block undesirable or malicious activity from reaching your computer. So that's three things: the CPU, the hard disk, and the network adapter. What you also need is RAM. RAM is random access memory, and that's where data sits while it is being processed by your computer. Before the data is processed, it goes into memory; the memory then hands it to the CPU for the actual processing. Think of RAM as short-term memory, and the hard drive or storage as long-term memory. All kinds of data go into memory first before being processed by the CPU, and the CPU then pushes the results off to storage for the long term. These are the basic components that make up a computer, and we'll take these components and use them when we're learning about AWS. An Elastic Compute Cloud instance comprises all of these different entities we've been talking about: the EC2 instance has CPU, memory, storage, and a network adapter, and that's what you pay for when you provision an EC2 instance in AWS. In the upcoming lessons we're going to talk more about EC2 and start provisioning instances. You might have seen a bit of this in the previous chapter, but here we'll take a deeper dive and understand what EC2 is. Thanks for watching, and I'll see you in the next lesson.

39. 38 AWS Compute Services: Hey there, welcome back. Thanks for joining me again as we get started with our second topic, which is an overview of compute services. In this lesson we're going to talk about what EC2 is, what components it has, the various purchasing options Amazon offers, and, at the end of the lesson, the benefits and use cases of an EC2 instance. Let's go ahead and get started. So what is an EC2 instance? It is a basic virtual computer. Everything connected to it is virtual in nature: the memory, CPU, hard disk, and network adapter are all virtual. The virtual machine is also scalable; remember that scalable means we can add additional resources to adapt to the overall capacity requirements that we have. Using EC2 eliminates your need to invest in hardware up front, so you can deploy and develop applications faster. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
So think about something like having an e-commerce website where you're selling your products during the holiday season. During Christmas or Black Friday you might have a larger requirement to handle that traffic, and Amazon EC2 allows you to add those additional resources for as long as you need them. Once the Christmas season is over, you can get rid of some of those servers. That means you pay more only during the peak hours, and during non-peak hours you pay a minimum amount based on the number of servers you're using. That's really one of the beauties of the cloud: you do not have to purchase all that hardware, with all that upfront cost, just to have it sit idle during your off-peak seasons.

Let's dissect the components of EC2 and understand what each component means. The operating system is the equivalent of an AMI. We know that AMI stands for Amazon Machine Image. Relating this to an operating system, it could be Windows or Linux: an AMI is nothing but a package, a pre-configured package, and you deploy that operating system to your EC2 instance. Once you deploy your instance, it has that operating system on it, either Linux or Windows. Then the CPU: in AWS we call this the instance type. The instance type is the CPU, the processing power, the brain that thinks and processes the data, together with the memory that your applications use. Then you have EBS; EBS stands for Elastic Block Store. Think of it like the C and D drives on your Windows machine: an EBS volume is the equivalent of a C or D drive on your EC2 instance. You have an EBS volume where your operating system is installed, and you can have additional EBS volumes where your actual data is kept. Your network adapter is what we relate to an IP address; it's what gives you internet access and lets you communicate with other EC2 instances, as well as with other applications or databases within your VPC. Every EC2 instance has a network adapter. Then we have security, the firewall: at the EC2 instance level it's called a security group. When we think about operating systems, Windows has the Windows Firewall and Linux has iptables; the firewall guards or protects your computer from things like malware, viruses, or anything malicious. A firewall is an additional layer that helps protect against unauthorized access to your computer. And on your EC2 instance you've got RAM, the random access memory. Remember that data goes into RAM first, is handed to the CPU for processing, and once it is processed, the final result is stored on the EBS storage.

Now you might be thinking: how much do we pay for one EC2 instance in AWS? The answer depends on various scenarios, so let's understand the various purchasing options for an EC2 instance. The answer is a long one; there are three basic purchasing options that you need to be familiar with. The first is on-demand. The on-demand purchasing option allows you to purchase an instance type and provision it whenever you want, and you can terminate it at any time.
So if your business manager comes up and says, hey, can you create a virtual machine for me and install this application, you would go ahead and spin up that new EC2 instance. A week later the business changes its mind and asks you to delete that EC2 instance; because the business does not need it anymore, you can go ahead and shut it down and terminate it. How much do you pay? You pay only for the duration the virtual machine was up and running, and once the virtual machine, the EC2 instance, is terminated, you no longer pay for it. This is the most expensive option, but it gives you flexibility in terms of when you provision it, how you use it, and when you terminate it. You are charged for the instance while it is running, and it is billed by the hour. You can provision and terminate an on-demand instance at any time; that's what on-demand means.

Now let's take a look at the second option, which is a reserved instance. A reserved instance allows you to purchase an instance for a longer period of time: you can select between one year and three years, and you get a significant discount. But you are responsible for the cost of that instance for that entire period. It doesn't matter whether your instance is on or off; you're going to pay for that instance for the period of one or three years. When you choose reserved instances, you can choose to pay the entire amount up front, pay partially up front, or not pay up front at all, but the bottom line is that you're responsible for the entire price of that instance for the term, regardless of how often you use it.

And then, finally, there are spot instances. Spot instances are a way for you to bid on an instance type, and you only pay for and use that instance when the market price is equal to or below your bidding price. Let's understand this with an example. You know how the stock market works: in the stock market you bid, and typically you bid more than the current value, so if the stock is trading at 2 you might bid 2.1 or 2.5; the higher you bid, the better your chances of winning. Spot instances work the opposite way: you set a price and say, I don't want to pay a dime more than this, and as long as the market price is equal to or below that price, you'll have access to the instance. It gives you a substantial discount, and you determine what you want to pay for that EC2 instance when you bid for it. If Amazon has spare capacity available in that region and in that Availability Zone, you get the EC2 instance; basically it's based on supply and demand, and unused capacity is sold off through bidding. When your bid is accepted, meaning the market price is equal to or below your bidding price, the instance is provisioned for you, and it stays online until the price rises above your bid price. If demand pushes the price above what you offered, your instance is automatically terminated. When we think about these three purchasing options, the best way to remember them is that the spot instance is for bidding; think about an auction.
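Just to make the idea concrete, here is a minimal boto3 sketch of launching a spot instance with a maximum price, assuming a placeholder AMI ID and region. In the current spot model the MaxPrice is optional and defaults to the on-demand rate, but setting it mirrors the bidding idea described above.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # The AMI ID below is a placeholder; substitute a real one from your region.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {
                "MaxPrice": "0.025",             # the most you are willing to pay per hour
                "SpotInstanceType": "one-time",  # AWS may reclaim it when capacity or price changes
            },
        },
    )
    print(response["Instances"][0]["InstanceId"])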
So if you're looking for an instance for short periods of time, and it doesn't matter to the application whether the machine gets terminated out from under it, then a spot instance is what you're looking at. If you know you're going to need an instance for a long period, like one year or three years, then you'll think of reserved instances. Of course, you'll be saving money with reserved instances, because you are committing to Amazon for a long period of time; you've got two options there, one year or three years, and that's when reserved instances make the most sense. With on-demand instances you want the flexibility to spin up when you want, keep the instance in your subscription as long as you want, and terminate it when you want; that's what on-demand instances are for.

That being said, let's talk about how you will be charged. We spoke about the free tier in an earlier lesson, so we know there are certain free allowances in EC2, and all of that depends on what type of instance you're using and how long you're using it. Keep in mind that not all instance types are available in the free tier: you cannot just provision a large instance type with, say, eight gigs of memory and four vCPUs and expect it to be free. Not everything is free in the free tier. So when you start provisioning resources, you need to think carefully in advance about how the billing is affected and how you will be charged for that EC2 instance. Make sure you right-size the compute: do not choose high compute for applications that mostly demand lots of storage and little compute. Storage also comes in different flavors in AWS: you can choose high IOPS for applications that need higher disk performance, but if you choose a disk with high IOPS, your bill will also increase. IOPS stands for input/output operations per second, and it is basically how fast the storage performs, how fast it processes that data. The pricing can also change because of the kind of operating system you're using. Remember that you can choose between Linux and Windows: Linux being open source, you don't pay anything to anybody for the operating system itself, whereas Microsoft Windows is proprietary, so a cut of what you pay goes to Microsoft. So keep in mind that you choose the right operating system when provisioning an EC2 instance. Apart from the CPU, the disk, and the operating system, you are also charged for data transfer: how much data you move in and out of that EC2 instance. The pricing is also affected by the region where the EC2 instance is hosted: an EC2 instance hosted in North Virginia will not cost the same as an EC2 instance hosted in Australia, just as an example.

Now let's take a look at the benefits and use cases of an EC2 instance. There are typically several use cases where you'd want an EC2 instance; an EC2 instance holds true in any scenario where you want to use a virtual machine. It could be for hosting a web application or hosting your website. Just to summarize, EC2 instances are used to handle different kinds of application workloads.
You can create the kind of EC2 instance you want, with your operating system, and pack it with the power you want to give it. EC2 instances are extremely reliable and built on secure infrastructure, they are inexpensive, they're pretty easy to use, and you can typically deploy an EC2 instance within a couple of minutes. And that's been an overview of EC2 compute services. Thanks for watching, and I'll see you in the next lesson.

40. 39 EC2 Instance Lab Activity: Let us now take a look at the EC2 console, or the EC2 dashboard, on the AWS portal. To kick things off, let's first create an EC2 instance. To do that, I'll go ahead and click on Launch Instance. You will recall from the previous VPC and networking lessons that we did navigate to this point, but we clicked through to the finish quickly, because at that point the details were not relevant. Now we'll go into the individual specifics and understand the creation of an EC2 instance better. When you initiate the provisioning of an EC2 instance, the first thing you land on is the AMI section. An AMI is, loosely, the operating system that you select in this list; more precisely, an AMI is an image that includes the operating system, the application server, and applications as well. What does that mean? Let's take a look at one of these AMIs, the Amazon Linux 2 AMI with an SSD volume. What does this include? It includes a customized version of Linux, which is Amazon Linux, plus a set of binaries pre-installed, so you do not have to install those packages yourself. If you look at the next AMI, the Amazon Linux AMI 2018 version, it has of course the Amazon Linux operating system plus the following packages installed in it: Docker, PHP, MySQL, and the PostgreSQL database, and several other packages. So this Amazon Machine Image is a package, a zipped-up version of your operating system and applications, so that you do not have to invest your time installing that operating system and the applications you need for your business. Similarly, you will find a list of different Linux flavors in this list, plus Windows as well. One thing to notice is that every AMI has a unique identifier: it is "ami-" followed by a string of characters that acts as a unique ID. Right now I'm creating an instance in the North Virginia region, so if you look at the Amazon Linux AMI in North Virginia, its unique identifier will be different from the one in, say, Singapore; both of them will have different identifiers. The second point I want to stress is that not every operating system is free. Anything that is free is tagged as free tier eligible, and as you scroll down there are paid versions as well. So if you're working in a free tier account you just created and you're playing around with things, I suggest you select the free tier eligible versions so that you do not get charged for playing in the playground. That being said, on the left-hand side you can also filter them: check the box that says show only free tier, so there is no way you can accidentally click on something that's chargeable. Now, apart from this list, there are lots of other AMIs in the Marketplace. What is the Marketplace? The Marketplace is like a library; and if you ever need to look up an AMI programmatically instead of scrolling through the console, a small sketch follows.
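This is a hedged boto3 sketch of looking up an AMI ID from code; the name pattern used in the filter is an assumption about how Amazon names its Windows Server 2019 base images, so adjust it to whatever AMI family you actually need.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Find Amazon-owned Windows Server 2019 base AMIs in this region.
    images = ec2.describe_images(
        Owners=["amazon"],
        Filters=[
            {"Name": "name", "Values": ["Windows_Server-2019-English-Full-Base-*"]},
            {"Name": "state", "Values": ["available"]},
        ],
    )

    # Pick the most recently published image.
    latest = sorted(images["Images"], key=lambda img: img["CreationDate"])[-1]
    print(latest["ImageId"], latest["Name"])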
And this is where there are thousands of images from different vendors. For example, take a look at this one: this image has the CentOS operating system plus the LAMP stack installed. What is LAMP? LAMP stands for Linux, Apache, MySQL, and PHP, so this image, or this AMI, has the entire LAMP stack so that you do not have to install it yourself. As you scroll down you will see lots of products categorized, as you see here, into DevOps, infrastructure software, machine learning, and so on. So if you're looking for, say, a CRM application for your company, a customer relationship management product, you do not have to install that application explicitly; all you've got to do is find the CRM product in this list, select the appropriate one that suits your business needs, and go with that. Are you charged for using AMIs from the Marketplace? Well, some of them are free, some are pay-as-you-go, and others go on a bring-your-own-license basis; that's something you look at on a case-by-case basis. Of course, there's a checkbox here where you can say show me only the free trial ones, but there are others that may charge on an hourly or annual basis. Just as an example, I'm going to pick this AMI, Movable Type 6 on Apache, and once you click on Select it's going to tell you what you will be charged. You've got to pay for the software, and then for the EC2 as well: you pay for the compute part, which is CPU and memory, something like a fraction of a dollar per hour, and you also pay for the software, which appears to be a social publishing platform with a lot of functions. If you really need that application, you pay for the application plus the compute. The software component goes to the vendor, and the bill for the EC2 goes to Amazon Web Services. It all depends on the instance type as well: the larger the instance type you select, the more money you pay to Amazon Web Services. So that was the Marketplace; I'll click Cancel and go back to the initial screen with the Amazon built-in images. At this point you may ask a question: can I make my own image? What if I do not want Python, what if I do not want Ruby, and I'm only working with Perl? What if I'm not working with MySQL and my application needs just PostgreSQL? You want to customize the AMI so that, before deployments in your infrastructure, you can use that image as a standardized image for all servers. Can you do that? Absolutely. Amazon has made things simple for us, and you will be able to see such custom images under the My AMIs section. How do we make those images? I'm going to show that to you later on, once our virtual machine is created; I'll make it a point to cover that as well. For now I'll go ahead and select a Windows-based machine: this is Microsoft Windows Server 2019, and it is free tier eligible, so I'm happy with that and I'll click on Select. It looks like it's going to create a 64-bit machine for me. So what do we have now? This is the place where you select the instance type. The instance type is nothing but the CPU and memory that you choose for your EC2 instance: the larger the machine you choose, the more money you pay.
So as you scroll down, you've got instances in different permutations and combinations: this one is four vCPUs and 16 gigs of memory, this one is eight vCPUs and 32 gigs of memory with a 10-gigabit network adapter, which just means it can handle massive amounts of network traffic. As you scroll you can identify the different permutations and combinations. Let me scroll up and show you one more point: you can filter them by instance family. Is your application demanding a lot of graphics? Then you select a GPU instance. Is your application demanding a lot of memory, are you building an application for machine learning purposes, or do you want an application that needs a lot of storage or demands very high IOPS? You can filter based on that, and it will show you the instance types that match your requirement. For now I'll leave it on all instance families and make sure that I select free tier, because that's enough for our labs; you don't want to get charged unnecessarily and pay bills to Amazon. What does t2.micro give you? The t2.micro type gives you one vCPU and one gig of memory; that's what it means. You've also got t2.small, t2.medium, t2.large, and t2.xlarge. I know that some organizations also call these T-shirt sizes, just for fun, so if somebody says, hey, can you create an EC2 instance of T-shirt size t2.2xlarge, you know what they mean: you come here and select it. For now I will happily click on t2.micro, which gives me one vCPU and one gig of memory, and click on Next.

Now, this is the third page, and it gives you a lot of options and a lot of details, and you need to make sure you select them correctly, starting with the first one: how many instances do you want to create? If I type six, it's going to create six instances with the same parameters and specifications; that's what it means. But I'm not hosting a production environment here, so I'll just select one and talk about the next option: request spot instances. This term, spot instance, is not new to you. If you recall from the previous lesson, there are three ways you can purchase an instance, and this choice is purely for billing purposes: you can go for an on-demand instance, a spot instance, or a reserved instance. So what do you recall from the spot instance lecture? A spot instance means that you are bidding on the price. Let's say you do not want to pay 50 cents an hour and you'd rather stick to just a few cents an hour; you go ahead and type that number in here, for example 0.025. If Amazon has capacity in that region, which is North Virginia in this case, and the capacity is also available in an Availability Zone, say 1a, 1b, or 1c, they will provision that virtual machine for you. So that was spot instances, and now we've also looked at how we bid for them in practice. For now I will go for an on-demand instance, so I will uncheck this box and let the wizard create an on-demand instance for me. The next thing I've got to do is choose a network in which I want to provision my EC2 instance; in this case, my VPC would be the one that we created in the previous lesson.
So I will select that, and then I can choose the only subnet I have in that VPC. The next option is whether you would like to have a public IP on this EC2 instance. I will enable it because I want a public IP; in other cases you can set it to disable, and that's fine. Placement groups influence how instances are placed on the underlying hardware, for example clustering them close together for low-latency networking, so I'll leave this whole section alone because we don't need it here. The next section, domain join directory, is about joining this EC2 instance to your Active Directory, so if you have a directory services environment in AWS, you can choose to join this EC2 instance to that domain while provisioning the virtual machine. When you're doing that, you need some kind of permission to join the EC2 instance to the domain, and that is provided by the IAM role. Then the shutdown behavior: this controls what happens in the background when you shut the instance down from inside it once it is provisioned — should it just stop the EC2 instance, or terminate it? Terminating means you lose the virtual machine forever and you will not have any data at all, so I always recommend that you choose stop; shutting down should mean stopping the machine, not terminating it. So I will leave it at stop. Then there's an option related to hibernation, which enables hibernation as an additional stop behavior: when you stop that virtual machine, that EC2 instance, you'd get an option asking whether you want to hibernate it instead of shutting it down. The next option is enable termination protection. Termination protection protects us from accidentally terminating the EC2 instance; usually we check this box in production environments so that nobody, even someone with a higher level of privileges, can accidentally delete our EC2 instance. Then monitoring: basic monitoring is enabled by default anyway, so you get monitoring, but you can enable additional, detailed monitoring by checking that box. What does that mean? When you look at monitoring graphs, say CPU versus time, memory versus time, or disk utilization versus time, the time delta between data points is about five minutes. If you do not check this box, the graph will plot those dots every five minutes; if you check it, the interval drops to one minute, so you get more detailed monitoring. Of course, if you check it, additional charges apply. Both of these settings can also be flipped from code after launch; a small sketch follows.
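As a side note, here is a minimal, hedged boto3 sketch of toggling termination protection and detailed monitoring on an existing instance; the instance ID is a placeholder.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"   # placeholder

    # Equivalent of the "Enable termination protection" checkbox.
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        DisableApiTermination={"Value": True},
    )

    # Equivalent of the detailed-monitoring checkbox (1-minute CloudWatch datapoints).
    ec2.monitor_instances(InstanceIds=[instance_id])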
The next option is tenancy. The first option is running on shared hardware, so the instance will be multi-tenant: a plain virtual machine. Or you can ask for a dedicated instance, which runs on hardware dedicated to you. And the last option, a dedicated host, is where you get a physical box of your own and only you provision virtual machines on it, so nobody else will be using that physical box and it will not be multi-tenant. I will choose the first option and make my virtual machine multi-tenant, so other customers or consumers of AWS will share the underlying hardware. The next option, elastic graphics, is about adding graphics acceleration to the EC2 instance, so if your application demands a lot of graphics, you would check that box. T2/T3 Unlimited is about bursting the load. What does that mean? Right now I've selected t2.micro, and if the application demands more CPU than the baseline of a t2.micro, the instance can burst beyond that baseline and give you that on-demand performance. The next option is network interfaces. By default I need at least one network card, but you can add multiple network cards, let AWS assign IP addresses automatically, or add IP addresses manually here as well. If you want the card in a different subnet, you have the option to select that in the drop-down menu, and if you'd like to add an additional network card, all you've got to do is click on Add Device and select from the options it gives you. This time I'll just leave it as eth0, which stands for Ethernet card zero, and look at the advanced options.

One of the important sections here is user data. In the user data section you can copy and paste some kind of script: while the virtual machine is being provisioned, this script gets executed and performs certain operations. It could be just about any script, a PowerShell script or a Python script. Let's say you want to copy a lot of files from GitHub or somewhere; you would write a script that copies those files or installs some application during the VM provisioning. That's what user data means; you can paste the script here or even import it as a text file. That being said, I'll go ahead and click on Next to look at storage. Under the storage section, what's most important to understand is the volume type. In the previous lesson I mentioned that you're charged based on the IOPS of the hard disk. This volume is where your operating system runs, and you can add multiple volumes to the same EC2 instance; this is called EBS, Elastic Block Store, so they're going to be EBS volumes. By default you've got one drive, but you can click on the Add New Volume option to add as many volumes as you like to that EC2 instance. There are other options that control the performance of the drive. By default it's going to attach a general purpose SSD (solid state drive), and the maximum you're going to get is up to 3,000 IOPS. But what if you want more than that? If your application demands more than 3,000 IOPS, you select provisioned IOPS, and there you can go up to 16,000 IOPS per volume. Provisioned IOPS SSD gives you much better performance compared to general purpose, whereas the other one, magnetic, is not an SSD but a regular HDD. So by selecting general purpose or provisioned IOPS you get SSD disks, but the magnetic one gives you just a standard HDD drive.
So magnetic can be used for sandbox or test environments where performance is not a key criterion; with the other two, performance is guaranteed, and how much depends on whether you chose general purpose or provisioned IOPS; of course, provisioned IOPS will give you more IOPS compared to general purpose. Having said that, for this lab I'll select general purpose, click on Next, and then type in some tags. What are tags? Tags help you build a CMDB, a configuration management database, for your environment. You can tag this EC2 instance with, say, Name = WebServer, then say who the owner is, let's say Rob, and then who the application owner is; it's the same guy, so I'll type in his email address, something like rob@company.com. You can add multiple tags like that, say backup needed, yes or no. At times we want to charge business units back, so it's important to have a cost center tag as well: you put in your cost center ID so that, over a period of time, when virtual machines or EC2 instances are created with the same cost center value, you can pull a report and charge that business unit back. That's what tags are: nothing but key-value pairs.

Moving on to the next important section, security groups. This is very important: the security group is where you configure the rules that allow inbound connections. The wizard was intelligent enough to identify that this is a Windows-based machine, so it knows it needs port 3389 for the inbound connection, and it has automatically added port 3389 as a rule. But let's say you want to make it a web server. You click on Add Rule, and because it's a web server you need port 80 inbound. What you can also do is open the drop-down menu and choose HTTP; HTTP is right here, and it automatically populates a lot of these fields: port 80, TCP, inbound, allowed from the following CIDR range, 0.0.0.0/0, which is nothing but the internet, and there's the IPv6 version of it as well. There are other options you can select in the source: you can type a custom IP, or select My IP, which would ensure the traffic is allowed only from my IP address and not from anywhere else, and you can type in a brief description. For now I'll go ahead and select custom and just type in 0.0.0.0/0, and that allows traffic specifically on ports 80 and 3389 from the internet; the rest of the ports are blocked.

That being said, let's click on Review and Launch, which gives you a summary of the various options you've selected; on the right-hand side you've got the Edit option to change a specific item in the wizard. If you're happy with it, click on Launch, and now is the place where you select the key pair. A key pair is a combination of a public key and a private key: the public key is stored with Amazon, and you need to keep your private key safely in some kind of repository. The combination of the public key and the private key gives you the password and lets you authenticate into your EC2 instance. So you can either choose to use an existing key pair or create a new one. Incidentally, everything we just clicked through in this wizard maps quite closely to a single API call; a sketch follows before we continue.
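For reference, here is a hedged boto3 sketch of launching an instance with the same kinds of settings the wizard collects. Every ID, the key pair name, and the user data script are placeholders, and the user data shown assumes a Linux AMI; Windows user data would normally be a PowerShell block instead.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder bootstrap script, executed at first boot on a Linux AMI.
    user_data = """#!/bin/bash
    yum update -y
    """

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",           # the AMI chosen in step 1
        InstanceType="t2.micro",                   # the instance type from step 2
        MinCount=1,
        MaxCount=1,
        KeyName="new-key-pair",                    # must already exist in this region
        SubnetId="subnet-0123456789abcdef0",       # the custom subnet
        SecurityGroupIds=["sg-0123456789abcdef0"], # the group with ports 80 and 3389 open
        UserData=user_data,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Name", "Value": "WebServer"},
                {"Key": "Owner", "Value": "Rob"},
            ],
        }],
        BlockDeviceMappings=[{
            "DeviceName": "/dev/sda1",             # root device name varies by AMI
            "Ebs": {"VolumeSize": 30, "VolumeType": "gp2"},
        }],
    )
    print(response["Instances"][0]["InstanceId"])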
I already have a key pair in this region, but let's say I've lost that key pair. What I can do is go ahead and create a new key pair, type my own name for it, call it new-key-pair or something, and make sure that I download it. Now, that key pair we just downloaded needs to be kept somewhere safe, in some kind of vault, because if I lose it there is no way I can log into this machine again. There are other methods, there are workarounds we'll talk about later, but for now just remember that for you to log in to an EC2 instance you need the key pair, that is, the private key, which has the extension .pem. That being said, let's click on Launch Instances. It takes about 4 to 7 minutes for an instance to be created completely, so I'm going to pause this video now and connect again once the virtual machine is up and running. See you in the next lesson.

41. 40 EC2 Connecting to Windows Machine: All right, the EC2 instance is created. Now it's time to go ahead and look into the various nitty-gritty details and the metadata of this EC2 instance. I'll click on it, and as you see I've got two EC2 instances, but one of them is stopped and the other one is running. I can of course right-click and start that stopped instance as well, and while it is initializing, let's click on the first one and look at the various options here. In the previous lab we created just one instance, and it looks like this instance was created in the test-vpc, in the following subnet, and it has a public IP address of 3.231.160.56. It also has a private IP address; in fact it must, because it is hosted in my subnet 1. The AMI ID is a Windows Server 2019 image, so that's my Amazon Machine Image hosting the 2019 operating system. It has inbound rules like this: just port 3389 open from the following source, which is the internet, so I will be able to RDP to this EC2 instance from the internet. Let's see how that works. I'm going to copy this public IP, launch mstsc, which is the Microsoft Terminal Services client, paste the public IP address there, and click on Connect, and it pops up the user ID and password credentials prompt right here. Now it's time to go and get the password for this EC2 instance, which we can do by right-clicking on the EC2 instance and clicking on Get Windows Password. Once that is done, you click on Choose File. This is about the key pair that was generated when we started creating that EC2 instance: if you recall from the previous lesson, we were talking about a public key and a private key, and the handshake between the two results in a password. Amazon has the public key and we have the private key, so we've got to do that handshake. I will choose the file and browse to the directory, and as you can see I've got the new-key-pair.pem file here, and this is the certificate value. Now I'll click on Decrypt Password, and here is the password that I'm going to use to log in. Of course, you can log into the virtual machine and change this password, because there's no way you can memorize this random string. I'll copy it.
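The same retrieve-and-decrypt handshake can also be scripted. This is only a hedged sketch: the instance ID and key file name are placeholders, it assumes the third-party cryptography package is installed, and get_password_data returns an empty string until Windows has finished generating the password.

    import base64
    import boto3
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Fetch the encrypted administrator password for the instance (placeholder ID).
    resp = ec2.get_password_data(InstanceId="i-0123456789abcdef0")
    encrypted = base64.b64decode(resp["PasswordData"])

    # Load the private key half of the key pair (the downloaded .pem file).
    with open("new-key-pair.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)

    # Decrypt: the password was encrypted with the key pair's public key.
    print(key.decrypt(encrypted, padding.PKCS1v15()).decode("utf-8"))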
Back at the credentials prompt, I type in my user ID, paste the password, and click OK, and it takes me straight into the EC2 instance. In one of the previous videos I showed you how to connect to it and how to make it a web server, and that's where the need for opening new ports comes into the picture. If you're making this virtual machine a web server, you will need port 80. What if you make it a database server? Let's say you're installing Microsoft SQL Server on it; that needs port 1433 inbound. So you've got to go back to the security groups, which are right here under the metadata, click on that, and then under the inbound rules you can modify the inbound traffic, the ports you want to allow. For example, at this point I've got port 3389, and that's why I could RDP to this EC2 instance, because RDP works on port 3389. But if you would like the HTTP port because it's a web server, or the SQL port because it's a SQL server, then you have to add those relevant ports to this list. You just go ahead and click on Add Rule, select the respective protocol, and type in the respective port number. There are a lot of well-known protocols in this list, for example HTTP, POP3, IMAP, and LDAP, but in case your protocol is not in the list, you can go ahead and type in a custom TCP port or a custom UDP port as well; say your application is listening on TCP 1433, you can type that in manually here. I'll click on Cancel and show you some more information from the metadata of the EC2 instance.

One important aspect is the network interface: that's how we're getting connected to this virtual machine, isn't it? No network interface, no communication. The network interface for this EC2 instance is called eth0. Let's click on it and find out what we've got. What we have is a public IP address, which can be seen right here, but we also have a private IP address. When we started this class I mentioned that every EC2 instance must have a private IP address, and it does: 192.168.30.33. The first three octets here represent the network ID, which it picked up from the subnet's CIDR block. What was the subnet's CIDR block? It was 192.168.30.0/24; let's go and verify that. I'll click on this subnet, which takes us to a totally new screen in a different tab, and what do we have? 192.168.30.0/24. So one IP address from this range was allocated to eth0, and we got .33 as the host ID. All right, that was about Windows machines. Let's go ahead and create a Linux instance in the next lesson and see how that works. Although there is no real difference between creating a Windows machine versus a Linux machine, it's the same process, RDP doesn't work for Linux machines; RDP, or the Microsoft Terminal Services client, is only for Windows boxes, not for Linux. For Linux you will use a terminal or a third-party tool called PuTTY, so we'll download the PuTTY tool and go ahead and connect to the Linux machine. Let's get started with this topic in the next lesson.
42. 41 Ec2 Instance Linux Instance: Let's go ahead and launch a Linux EC2 instance in this lesson. I'll click on the Launch Instance button, and it takes me to the exact same set of steps that we used for creating a Windows machine. In this case I'm going to select an Amazon Linux AMI, and what you'll notice is the same set of screens that we went through when creating the Windows box. Next, I'll stick to t2.micro, because Linux is fine with one CPU and one gig of memory. I'll go ahead and host it in my custom VPC, in my custom subnet, and make sure that I have a public IP assigned to it, so I set it to enable. If I set it to disabled, it won't have a public IP; it will just be internally routable within the VPC with only a private IP. I want to access it from the internet, so I set it to enable, click on Next, and look at the storage options. What's important to notice here is that for a Windows box the minimum capacity is 30 gigs, but for Linux you've got 8 gigs; that's the differentiation between a Windows and a Linux box here, that's the minimum size. Of course you can scale it up to a higher number, but it starts at 8 gigs. All right, add the tags, and then the security group. Here what you'll notice is that by default it has picked port 22. What was it for the Windows box? Port 3389, because Windows works over the RDP protocol. This is a Linux box, which works on port 22, so when we SSH or PuTTY to the Linux box we are actually sending SSH traffic on port 22, and that's why we're allowing it from the internet. Finally, click on Review and Launch and then hit the Launch button. This screen is not new to you, it's familiar, and I'm going to use the same key pair to connect to the Linux instance. So what does this tell us? It tells us that you can have multiple Windows or Linux machines, any combination of them, no problem, and they can still connect with the same key pair; I don't have to create one key pair per machine or per EC2 instance. To keep it simple, I'll just stick to one key pair and use it for multiple labs. I'll hit Launch Instances and give it some time while it creates that EC2 instance for me. I'll click on View Instances, and that's my EC2 instance being provisioned; this will take about 4 to 7 minutes, so I'll pause the video for now and resume again once it is ready.

There are several methods of connecting to a Linux instance: you can use SSH tools, you can use OpenSSH, or you can use the most popular tool, called PuTTY, to connect to that Linux instance. I've already downloaded the PuTTY tools on my computer, so I've got putty.exe and puttygen.exe; make sure that you have both of these tools downloaded from the internet. I'm going to launch the PuTTY tool, then copy the public IP address of my Linux instance and paste it into the host name field. Now, you also need to supply the key: you can't just click on Open right now, because you need to supply a file called a PPK file. To supply the PPK file, you need to navigate to SSH and then Auth, and right there you've got the Browse button. You need to browse to your PPK file, which is the private key file. Now, the file that we downloaded earlier was a .pem file, it's not a PPK file.
Let me show that to you and explain what I mean. Here is the file that I've downloaded, located in the folder I saved it to; the file extension is .pem. This .pem file needs to be converted to another extension, which is called a .ppk file, and the conversion is done with the help of another tool called PuTTYgen. So I'm going to launch the PuTTYgen tool and use it to convert this .pem file to a .ppk file. I'll load the file by clicking on the Load button; in fact, I'll copy this path just to make things faster, paste it here, select All Files right here, and you will be able to see the .pem file. Click on Open and ignore that pop-up, and now we can click on Save Private Key. You will get a pop-up asking whether you would like it passphrase protected; in the background you would fill in a key passphrase and a confirm passphrase, and you can put a secret passphrase there to protect your PPK file. For now, let's click on Yes and save it in the same path; I'll call it linux-ppk or something, that's fine. What you will notice is that the file got saved, but this time with an extension of .ppk. So .pem is the file that we downloaded from Amazon when we were creating the EC2 instance, and .ppk is the file we created by converting the .pem file using the PuTTYgen utility. Now I'll go back to the PuTTY tool and browse to my .ppk file; I'll go ahead and browse to it, and my .ppk file is right here. This time I can click on Open, because my IP address is in place and the PPK file is set under the Auth section. Let's click on Open, you'll get this pop-up, just click on Yes, and now you're ready to log in. What is the user ID to log in? In Windows it's Administrator, but in Linux, for Amazon images, it is ec2-user. Hit Enter, and there we go, we got connected. You can go ahead and run your favorite commands, like ls, pwd for print working directory, or ps -ef, which is just like your Task Manager. So I'm currently operating on my Linux instance, which is hosted on AWS. If you're using a Mac operating system, the steps will be different, and they are clearly documented right here: if you right-click and then click on Connect, you will see the steps you can use to connect to that Linux instance. You've got to run chmod 400 (chmod is used to modify the permissions of that file), and then you connect to the EC2 instance using the ssh command shown. This is for Mac operating systems, or any operating system that has SSH installed. There are other methods as well. You can use a browser-based connection: as you see, there's EC2 Instance Connect, and the connection will just happen over the browser. So if you're not allowed to install PuTTYgen or the PuTTY tool, you can just use a browser-based session to connect; ec2-user is the username, and all you've got to do is click on Connect. There you go, you get the same thing, and here is the private IP address, 192.168.30.79, which is the same IP address of this EC2 instance. So in this lesson we learned how to create a Linux instance, how to connect to it, and the various methods we can use to connect to that EC2 instance.
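If you prefer to connect from code rather than from PuTTY, the same SSH session can be opened with the third-party paramiko library. This is a hedged sketch with a placeholder IP and key file; it assumes the key pair is an RSA key and that port 22 is open in the security group, as configured above.

    import paramiko

    host = "3.231.160.56"   # placeholder public IP of the Linux instance
    key = paramiko.RSAKey.from_private_key_file("new-key-pair.pem")

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username="ec2-user", pkey=key)

    # Run a command on the instance, just like typing it at the PuTTY prompt.
    stdin, stdout, stderr = client.exec_command("uname -a")
    print(stdout.read().decode())
    client.close()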
43. 42 Storage Fundamentals: Welcome back to this lesson on storage fundamentals. Let's think about data. We have lots of data spread across different gadgets and devices: it could be rewritable CDs, it could be hard drives or SSDs; I don't think we have floppies lying around anymore, but these are the most common types of media that we as individuals have, and we always use them to store our data. Now think about an organization: how much data do they have, and how much data are they going to manage? You can imagine how unmanageable it would be to have all of these different mediums lying around to store your data, so many organizations use something called a storage array in their datacenters. A storage array has many hard drives installed inside it, and organizations use the storage array to store data for long-term and short-term purposes. Organizations have different kinds of data, and depending on the kind of data they use a particular kind of storage: I'm referring to object storage, file storage, and block storage. Let's take a step back and understand these, as it is very important to know them. It is no secret that data is changing and has been growing explosively in every industry; data is stored with higher fidelity than ever before, and we have moved from traditional architectures to new means of categorizing, classifying, and storing the data. In the late 1990s, object storage was developed. It is the general term that refers to the way we manipulate and organize units of data that we call objects, as opposed to the most common storage model, which is block storage. Block storage stores data in fixed blocks, file storage stores data as a file hierarchy or a flat organization, and object storage stores data in flexible-size containers, which we call buckets or containers. Object storage consists of three things. One is the data itself: object storage deals with storing unstructured data, data in the cloud, so it could be photographs, videos, or a product manual or documentation. The second piece is the metadata of the object, which includes additional information about the data and can be used for indexing and file management. The third piece is the globally unique identifier that is assigned to every object; this is used to retrieve the data without knowing its actual address, and this is what makes object storage so much faster. Object-based storage is perfect for the massive growth of data in the world: with this approach you can scale up to petabytes of data and beyond, you can keep adding data indefinitely, and the sky's the limit. Now let me take you back to the previous scenario, where we were working with the storage array to store data, because this is how organizations and enterprises do it. From the storage array we create volumes, and these volumes can then be used to separate and upload data based on different use cases: let's say I want a different volume for every department, or a different volume based on the type of data I'm going to store. So you're going to carve out some space from that bulk storage to upload your files.
So when you think about how this storage space is being used and how we might carve it up, it is essentially a bulk storage, so you can think off a bulk storage as a giant hard drive, where you can upload almost any type of data. That data gets access using some type of an application service with bulk storage, so there's no access to underlying operating system. So you might be using something like a Dropbox Google drive or Amazon s tree as a cloud bulk story IT service. You have no access to the unlike operating system that runs the service, but you do get a piece off storage that's carved out for you, where you can upload your individual files. Once you create an account and you have access to those files, you can then see those files, and no one else can see your access them unless you share that space out with them. Now we think about other types of storage the cloud provider really providing different kinds of storage. But keep in mind that operating systems will work great with block storage is when it comes to Web applications, where the Web applications needs to retrieve some kind of a file like a picture photograph video document. It is great to have object storage connected with such services. Now let's map all of this discussion with AWS storage. Now what? We create an easy two instance. You also create a volume for that Wes, just like C Drive in D Drive. That volume is nothing but, ah, car out piece from some gigantic storage that Amazon has with an install the operating system. And sometimes we add data to that as well. So when we think about this from a server perspective, things like Web Server, a database server mail server well, these are all types of applications that you might potentially see install, and it's over. So lets navigate to the AWS console here, go to easy to, and I've got the easy to dashboard open, and I do not have any running instances. So I will launch an instance and click on this Amazon Lennox to am I and select that, and I'm gonna leave this as Teacher Micro leave these as default as well, and then paused near the story. It's section. So what do we have here now we'll be selecting. Essentially is the sea drives a route is a C drive for the limit. So if you are choosing a Windows machine, it's gonna be a C drive. And for the linens machines, it will be a route volume this size off eight gigs that, you see, this is being attached to your Lennox instance. So this eight gigs is reciting on that remote storage Ari, and we're going to attach this volume to our easy to instance. Now, this is a space that you plan to allocate to our route volume. From a Windows perspective, this would be your C drive. This is where your operating system will be installed. And for Lennox, it will be a different operating system. We can just look at it as a route drive, and this gonna be your eight gigs in size. But you can notice that they're different volume types here, So this is going to be directly proportional to your performance off your easy to instance . So, in one of the lessons we did discuss about these volume types, we didn't mention that if you select magnetic, it's going to give you very less I ops and ah, good use case will be to create this magnetic volume type for sandbox environments or test environments, as opposed to provisioned Diop's, which will let you change the eye ops for that operating system. You can go up to 16,000 eye ops per volume for this block storage. So what you're essentially storing here is the data inside a block storage. 
Block storage is great for operating systems, but if you would like to store files, pictures, or videos, you know what to select: you'll be selecting object storage, which we will be talking about in the upcoming lessons. That's all for this storage essentials lesson. Thanks for watching, and I'll see you in the next lesson.

44. 43 AWS S3 Simple Storage Services: Thanks for joining me again, and welcome back to this lesson on Simple Storage Service. S3, or Simple Storage Service, is an online bulk storage service. It is an object storage service from Amazon Web Services. S3 has a simple web interface that you can use to store and retrieve any amount of data at any time from anywhere. It gives the user access to the same infrastructure that AWS uses to run its own global network of websites. It is really fast, highly scalable, reliable, and inexpensive — and those are the terms most businesses are looking for. This service, S3, aims to maximize the benefits of scale and pass those benefits on to you. Essentially, AWS has large amounts of storage space available, and they're making that storage available to others so everyone can benefit from the scalability of the storage space at a significantly low cost. In the previous lesson we mentioned two main kinds of storage: bulk storage — S3, or object storage — and block-based storage, or block storage. S3 is the bulk-based storage that we're referring to. Essentially, at the root level you will create something called buckets. Buckets are where your data will go and reside. You can have multiple buckets at the root level and then thousands of files inside each bucket, so what you're creating here is a hierarchy of files inside the bucket that you store on S3.

Now, instead of just talking about it, let's go ahead and look at S3 inside AWS. I'm logged into the AWS console with my root account; I'll click on Services and then click on S3, Simple Storage Service, object storage. As you see, I've got a lot of these buckets created already; they are bound to a particular region, and a lot of these buckets are accessible to the public. One of the buckets that I've got is not public, and you can see the creation timestamp as well. Let's go ahead and create our own bucket and understand the whole process first-hand. Let's click on this orange button on the top right-hand side, Create bucket, and then we just have to walk through this wizard and select the options required to create an S3 bucket. Let's give it a name; I'm going to name it test. One thing to keep in mind is that the bucket name must be unique, and there must not be any uppercase characters or spaces in the bucket name. That means I cannot use the name test, even though it's in lowercase, because somebody in the world, across Amazon's namespaces, might have already used the name test. It's just like a domain name: if you want a domain name, you have to buy one that is unique and that nobody else in the world has used. Bucket names are no different. So make sure that you give it a name which is unique and resembles your enterprise. I'm going to call it, let's say, prodbucket, and put some random characters right next to it, and then let the wizard validate it at the end.
It's going to evaluate and check if this name already exists, and if it exists, it's going to flag an error. The next option is the region; you've got an array of regions to select from. You can choose any location on the world map, but it's always ideal and recommended to keep the bucket as close to the users as possible. So if you've got your users on the East Coast or the West Coast, you might want to select N. Virginia or N. California, respectively. Just for demonstration purposes I will select N. Virginia and keep moving. The next option is granting permissions to people. If you're planning to host a static website where people access the content across geographies, then you would want to grant public access to your bucket. So if you're uploading certain pictures and videos and you want people on the internet to access them, that's when you would uncheck this box and keep going; just make sure that you then check the box which says "I acknowledge that the current settings might result in this bucket and the objects within the bucket becoming public." If you would like to restrict permissions to specific people through access control lists, role-based access controls, and the permissions model, then you have to check and uncheck these options and work out which one is best for your use case. But as you notice, the default option is to block all public access; that means when you create a bucket, by default nobody will be able to access it. That being said, let's go ahead and click on the Create bucket button.

All right, our bucket is created and the bucket name prodbucket101 was accepted. Let's go and find it in this list; it's easy to find our bucket, but if you've got lots of buckets, say hundreds of buckets in your organization, then you've got a search box where you can simply type the bucket name. This is our bucket; let's click on it and look at the various options that we have inside. At this point my bucket is empty, but there's an Upload button that you can click on to add files; you can either drag and drop or click on Add files. What I'm going to do is drag and drop a picture here — it is actually a map of Azure data centers — and then click Next. This is where you can grant permissions on that particular file. The file size is 28.7 KB, as you see on the top right here. At this point the owner, the root user, has full permissions: it can read the object and has the permissions to read and write the object permissions as well. But as you scroll down, you will see something interesting: do you want to grant public access? By default, public read access is not granted. You can change that by selecting this drop-down menu and choosing "grant public read access" instead of "do not grant". But keep in mind that when we created the bucket we selected block all public access, and that's where the permissions are being inherited from, so the top-level permissions always rule and the lower-level permissions will be denied. That's how it works: the top-level permissions flow down to lower-level objects by default. We'll go ahead and finish this wizard, and moving forward we'll see how you can modify this to grant permissions to this particular file. Let's go next. The next set of properties is all about what kind of storage class you want to keep for that object.
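Before we continue with the upload wizard, here is a minimal sketch in Python with boto3 of the bucket-creation step itself — unique name, region, and the default block-public-access setting. The name prefix and region are assumptions for illustration.

import uuid
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Bucket names are global and lowercase, so append random characters
# just like we did in the wizard to keep the name unique.
bucket_name = "prodbucket-" + uuid.uuid4().hex[:6]

# In us-east-1 no LocationConstraint is needed; other regions require one.
s3.create_bucket(Bucket=bucket_name)

# Keep the default behaviour of blocking all public access.
s3.put_public_access_block(
    Bucket=bucket_name,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
print("created", bucket_name)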
For the storage class: do you want to keep the object in Standard or Standard-Infrequent Access, or move it to Glacier? We'll talk about these options later, in subsequent lessons. The next option is about encryption: do you want to encrypt the content of your S3 bucket? The content is not encrypted by default, but you have the option to encrypt it with either the Amazon master key, where Amazon keeps the encryption keys with themselves, or your own master key, which is kept in your key management system. So the keys to encrypt the S3 bucket would be kept in your KMS, provided you have created a key. Right now it looks like I have a KMS key, so I can encrypt the S3 bucket with my AWS KMS master key. KMS is the Key Management Service, a managed service from AWS which lets you manage the keys in terms of creation of a key, deletion, and rotation of the key as well. But if that is too much of a headache for you and you don't want to manage the keys yourself, you can choose the Amazon S3 master key, or leave it as None if you don't want to encrypt it. I'll click on Next here, and this is a summary where we can review. If you're happy with these settings, just click on Upload. Depending on the file size it may take some time, but it looks like it was pretty quick; we've got 28.7 KB uploaded right here.

Now I'll click on this file to look at its metadata. If you scroll down there is a URL here, which I can click on to access it. So now I will right-click on it and open it in a new tab to see what happens. What I have here is Access Denied. Perfect — because we did not grant public access for somebody to access our files. Let's go back to the S3 console and see how you can grant permissions. To grant permissions, I've got to go back to the main bucket and then click on Permissions; this is where I'll be able to grant and deny access. By default, as we discussed, block all public access is on. What we have to do is edit this, uncheck this box, scroll to the bottom, and then save it. There's a pop-up here where I've got to type "confirm" and just click on that. Now the public access settings are updated successfully; block all public access is set to off. I'm going to go back, look at the file and its properties, look at the URL, and try to access my file now. I'll right-click on this URL and open a new tab, and there you go: I'm able to see the content, which is being served through my URL from the S3 bucket. That's how you make your webpage publicly available.

Let's take a step back and look at other important options we have inside the bucket. What's important from an auditing perspective is that the auditor must know who's uploading files, who's modifying files, who's changing things, or rather who's deleting data from your S3 bucket. This auditing is provided by a service in Amazon called CloudTrail. If I click on Services and then look under Management and Governance, there's this little service called CloudTrail, which is looking at every service and every action that you're doing inside your AWS account. So if you go back to S3, you will notice that I uploaded a file, and CloudTrail knows about it. I really do not have to configure anything inside the S3 bucket to activate or enable CloudTrail.
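For reference, here is a minimal sketch in Python with boto3 of roughly the same flow we just clicked through: upload an object encrypted with a KMS key, then open the bucket up so the object URL becomes publicly readable. The bucket name, key alias, and file name are illustrative assumptions, not values from my account.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "prodbucket101-example"   # hypothetical

# Upload with SSE-KMS; omit SSEKMSKeyId to use the AWS-managed key,
# or drop both arguments to leave the object unencrypted (the default).
with open("azure-map.png", "rb") as f:
    s3.put_object(
        Bucket=bucket,
        Key="azure-map.png",
        Body=f,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/my-s3-key",   # hypothetical key alias in your KMS
    )

# With "block all public access" on, the object URL returns Access Denied.
# Turning the block off and granting a public-read ACL is the programmatic
# equivalent of the console steps above.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)
s3.put_object_acl(Bucket=bucket, Key="azure-map.png", ACL="public-read")
print(f"https://{bucket}.s3.amazonaws.com/azure-map.png")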
Coming back to the console: I'm getting that default auditing from CloudTrail without any extra setup. Now, if I navigate to Properties, there are a lot of options you can take a look at, for example versioning, server access logging, static website hosting, object-level logging, and default encryption. What we're going to do is take a look at these options, and the other options inside, in the next lesson. What we know so far is that S3 is the object-level storage where you can upload pictures, videos, files, etc. You can also grant permissions to the public so that people can view your data from the internet; this is controlled using permissions. We also know that auditing of your S3 bucket is done by CloudTrail. Let's go and take a look at the Properties options in the next lesson. Thanks for watching so far; we'll see you in the next lesson.

45. 44 AWS S3 Simple Storage Services II: Let us now take a look at some of the advanced options that are really helpful for developers, for people who would like to do auditing, and for security folks. Basically, the Properties section is a one-stop shop for administrators, developers, and security enthusiasts. You'll see lots of boxes here, so let's start with the first box, which is versioning; it will help you keep track of multiple versions of the same file. Let's say you're uploading a file called index.html, and after a couple of days you upload that file again because you modified it and you would like the latest changes to be reflected on your web application. How many files are there inside the bucket? If you enable versioning, you will have multiple versions of it; you will have the previous version as well, so that you can roll back to the previous change just in case there are any problems with the new content. This really helps us preserve any existing object version so you can roll back to the previous version. Now, in this case I've got this Azure map PNG file. If I click on it, the latest version is just that file; if I upload multiple versions, I will have the option to choose between those multiple entries, and thereby I can switch between multiple versions. So that's what versioning is about — very important for developers who constantly change information in the bucket.

Next is server access logging. There are times when you need to know who's accessing your web application; it's important to have some kind of logging to trace back who's connecting to your bucket and who's making changes. So I'm going to enable logging, select my bucket, and then in the target prefix I can say: hey, only track the HTML files — something like .html — so it captures a log for all the HTML files. Thereby I will be able to track down any changes on those particular files that I've mentioned here. The next option is again very important for developers: static website hosting. If you would like to host some static content inside the bucket, then you would enable the radio button that says "Use this bucket to host a website", and then my main website document is entered here. Anybody who tries to navigate to this particular URL will be taken to the index.html file inside my bucket; thereby you're just making your bucket a hosting place. You can also redirect all the requests to a different location.
Let's say anybody who tries to go to this URL will be redirected to, say, www.yourcompanydomain.com. So that was static website hosting. Object-level logging is detailed logging, as opposed to server access logs. Object-level logging will give you details about all the read requests and all the write requests that are occurring inside your S3 bucket, and all of these logs will then go into a trail in CloudTrail. Like I mentioned in the previous lesson, CloudTrail is a separate auditing service: if you click on Services and scroll down under Management and Governance, you will find CloudTrail there. All those read and write events right here will be going to CloudTrail for audits, so the auditors, the finance folks, or external people may have access to CloudTrail to identify what's going on in your production bucket or your dev bucket. So object-level logging is for auditing.

It is very important that you go ahead and apply encryption. How do you ensure that your bucket is encrypted? By going to Default Encryption and selecting one of these options. By default, as you see, your bucket is not encrypted, but you have the option to choose AES-256, which means that the keys to encrypt the bucket will be with Amazon — hence the name Amazon S3 managed keys: the keys to encrypt the content of the bucket will be managed by Amazon. And if you're not happy with that, or if your compliance team does not agree with it, you can choose to keep the keys with yourself. You can say: all right, I will use the default encryption option, but I want to manage my own keys; I don't want to give Amazon any kind of permission to manage my keys, so I will leave it to me and my team to manage those keys. When I say manage, it is creation of keys, deletion of keys, rotation of keys, exactly that — so you will have more responsibility as far as key management is concerned. Amazon has a separate service called KMS, the Key Management Service, where you can keep your keys safely and do all the management activities: creation, deletion, rotation, etcetera. For the other options we have here, I'll click on None at this time and click on Cancel, and then let me summarize these options: you have versioning, so you can roll back to a previous version; server access logging, so you can see details about who's accessing the content; static website hosting, to host your web application inside the S3 bucket; object-level logging, so that your audit team can look at the read and write requests into the S3 bucket; and then default encryption, to improve the confidentiality of your data using either the Amazon managed KMS key or the keys in your own KMS. That's all on this, folks. Let's go ahead and connect in the next lesson. Thanks for watching.

46. 45 AWS S3 Storage Classes and Data Lifecycle: Welcome back to this lesson, and thanks for joining me again. In this lesson we'll learn about two important aspects of data: one is the classification of data, and the second one is the lifecycle. Let's start with classification of data. In AWS S3 we have something called the S3 storage class. The storage class represents the classification assigned to each object in S3. I'll show you what I mean.
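Before we dive into storage classes, here is a minimal sketch in Python with boto3 of switching on two of the bucket properties we just summarized — versioning and default encryption. The bucket name is an illustrative assumption.

import boto3

s3 = boto3.client("s3")
bucket = "prodbucket101-example"   # hypothetical

# Versioning: keep every version of a file such as index.html so we can roll back.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Default encryption with Amazon-managed keys (AES-256). To keep the keys in
# your own KMS instead, the rule would use "aws:kms" plus a KMSMasterKeyID.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)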
So I'll go ahead and start uploading a file, and you get a pop-up like this; you can either drag and drop or click on Add files. I will drag and drop a file into this — it's an MP4 file, a video. I'm going to upload it now. If I click Next there are certain permissions, which I'm going to ignore; I'll leave the public permissions as they are as well. And now this is where the storage class comes into the picture. In one of the lessons I mentioned that I'd explain this later, and now is the time. There are several pieces of information about storage classes. You have the name of the storage class in the first column, and then you've got what it is designed for — what you can store, or the whole purpose of the storage class. Then you've got the minimum number of availability zones, followed by the duration, the minimum amount of time for which you will be billed for that particular storage class, and then the size, which is the minimum billable size for an object in that storage class. You've got two more columns here, which have been added lately: the monitoring and automation fees — the only storage class where this fee applies, per object, is Intelligent-Tiering — and then, finally, the retrieval fees. There is a cost of retrieval as well, and the highest retrieval fees are in Glacier and Glacier Deep Archive. We'll talk about those, but let's start with the first one.

The default storage class is the Standard storage class. This storage class is the most frequently used: when you upload an object, by default it goes into Standard, and you can access your objects right away, very quickly, within milliseconds. If you want to be able to get that file and download it immediately, then Standard is the storage class you would like to use in this use case. Notice that you've got a minimum of three availability zones, and there is no minimum storage duration or minimum size for billing. But the thing to keep in mind is that this storage class is also the most expensive, because it's got a lot of flexibility: you're not paying any retrieval fees, but you will be paying more to keep the data stored there.

The next storage class is Intelligent-Tiering. This kind of tiering is for data where you're not quite sure what the access pattern is going to look like. When you're uploading the data into your bucket, you really don't know whether the data will be frequently accessed or infrequently accessed, or whether you're uploading it for the purpose of archiving. If you don't know that, then you can select Intelligent-Tiering. Over a period of time, Intelligent-Tiering will look at the patterns of how the file is being accessed and, based on those patterns, it's going to move that data between the various storage tiers. Again, as you see, we've got a minimum of three availability zones, and the minimum billing duration is 30 days; if you put something in this kind of tiering, it's going to bill you for a minimum of 30 days.

The next storage tier is Standard-Infrequent Access, and it closely resembles One Zone-Infrequent Access. The only difference is that in Standard-Infrequent Access you've got your data in a minimum of three availability zones, but in One Zone-Infrequent Access it's going to be in a minimum of one availability zone. Well, that's the biggest difference there.
But you choose either of these storage classes only if you plan to access the data infrequently. You'll still have millisecond latency there; in both of these cases you have that millisecond latency, but they are meant for infrequent access. The main difference is the resiliency between the two. Just in case there's a natural calamity — let's say a hurricane, a tornado, an earthquake, or anything that you think will cause a data center failure — the entire data will be unavailable, and it's going to be a big problem if you have chosen One Zone-Infrequent Access, because in that case Amazon will store your data in a minimum of one availability zone only; the guarantee is one AZ. But in Standard-Infrequent Access it's going to be replicated in at least three availability zones, so you've got more resiliency in Standard-Infrequent Access. Again, looking at the minimum billing duration, it's 30 days for Standard-IA and also for the single-availability-zone storage tier.

Now let's move down a little further, and we've got Glacier and Glacier Deep Archive; let's quickly talk about those as well. Glacier is for long-term archival. If you look at the description here, it says that it is used for archival, so this is data that you're not going to need very frequently. Maybe you want to see this data sometime in the future, but there is no reason to access it frequently. Think about data that is required for auditing, compliance, or log details, or anything that you want to keep for long-term retention purposes; in that case Glacier is a good choice. Now, with Glacier you do not get that millisecond access time; it may take several minutes or several hours to actually retrieve the file, and there is a retrieval fee as well. If you look at the last column, there is a retrieval fee in all of these tiers; the Glacier retrieval fee is per GB of data that you want to retrieve from the Glacier archive. There is a resiliency aspect here, because the data is replicated to at least three availability zones, and that holds true for Glacier Deep Archive as well. The major difference between Glacier and Glacier Deep Archive is that the default retrieval time for Deep Archive is twelve hours. The second difference is the minimum billing duration: it is 180 days for Deep Archive, whereas for Glacier it is 90 days. Keep in mind that because these are minimum durations, if you move the data to a different storage class, you're still going to pay the minimum for the storage class it was originally in.

And then there is one more storage class, the last one, Reduced Redundancy, which of course is not recommended by Amazon. So when would you want to use Reduced Redundancy, and why do we have it in this list? Well, Reduced Redundancy is for data that is frequently accessed but is non-critical, as shown here in the description section. Overall you get a very low cost for this, but Amazon has plans to get rid of Reduced Redundancy, so if you do not see it when you're doing your labs, don't get worried, because Amazon anyway had plans to remove it. Overall, there are several attributes that dictate the choice of a storage class: object availability, durability, and frequency of access. If you scroll up to the top, you've got the Standard storage class; this is the default that is assigned to every object when you upload that object.
Of course, when you're uploading it you can change the storage class to anything you want based on your requirement, but the default one is Standard. Just keep in mind that once you set the storage class to something, you will be billed for that minimum duration. At this point I'll leave it at Standard, click on Next, and then just let this big file upload to our bucket in AWS. It will take time, so I'm going to pause the video and get back to this once it is ready. All right, it looks like the file is uploaded here, and I'm going to click on it to find out its properties. On the right-hand side you'll see a pop-up, and under the Properties section you've got the storage class: Standard. What if you want to change it? Well, it's pretty simple. I'm going to click on that, and the same section opens up; you can toggle between these. Let's say you want to move it to Glacier because you no longer need it, and you may need it maybe after, let's say, a year, because that's when auditing will start. So what you've got to do is select that and then click on Save, and this file will be tagged with a new storage class. I'll go ahead and click on Cancel and then talk about a new topic called lifecycles.

When we started this lesson we did speak about the agenda: we said we're going to talk about storage classes, and you know about those now. The second point is the lifecycle. The lifecycle is how we move an object from one storage class to another based on a particular time interval. A few seconds ago I mentioned that you can move an object from one storage class to another manually, but what if there are thousands of files in your bucket? How will you get that automated? How will you automatically let the bucket choose the storage class for the files inside it? Well, that's where lifecycle comes into the picture. Let's take a look at what it looks like. I'm going to click on Management here, and what you see is the Lifecycle option. You can click on Add lifecycle rule and just enter a rule name, say lifecycle-policy-01, and then you've got options to limit the scope to specific prefixes, such as extensions or any file with a particular tag, or to apply it to all the objects in the bucket. I'll take the second option here and then click on Next. Now, if you have versioning enabled on the object, then you can choose between the current version and previous versions; you've got the option to check and uncheck either of them, or keep both of them together. This is where you will select the transition. I'll just select the current version and show you what's inside the transition. At this point my files are in the Standard storage class, and I would like to move them to, let's say, One Zone-Infrequent Access after 30 days. You can go ahead and edit this number, that's not a problem, but it just means that after 30 days, or after the number that you define here, the object will be moved to One Zone-Infrequent Access. That's how I'm defining the storage class of the object after a set duration of time. I'll click Next and Next, and then go ahead and finish this. Keep in mind that this rule applies to all objects in the bucket, so we can just check this and say I acknowledge, I know, I'm aware of what I'm doing, and click on Save, and that's it. Now you've got the lifecycle policy in place.
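For reference, here is a minimal sketch in Python with boto3 of both ideas from this lesson: changing a single object's storage class by hand, and a lifecycle rule that automates the transition for the whole bucket. The bucket name, key, and day count are illustrative assumptions.

import boto3

s3 = boto3.client("s3")
bucket = "prodbucket101-example"   # hypothetical

# Manual change: objects are immutable, so a copy onto itself with a new
# StorageClass is roughly what the console toggle does for us.
s3.copy_object(
    Bucket=bucket,
    Key="videos/lab-recording.mp4",
    CopySource={"Bucket": bucket, "Key": "videos/lab-recording.mp4"},
    StorageClass="GLACIER",
)

# Automated change: move every current object to One Zone-IA after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "lifecycle-policy-01",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # empty prefix = all objects
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
            }
        ]
    },
)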
It means that my objects that are in the Standard storage class will be moved to One Zone-Infrequent Access after the number of days we set in the rule. So overall, what we have is an S3 bucket that is durable, reliable, and scalable. It is secure, because you can encrypt the content with the options under Properties and then Default Encryption, so that makes it secure as well. S3 is one of the services that integrates with almost all of the other AWS services; this means that we can use a service, let's say an EC2 instance, and the output of the application in that EC2 instance can be redirected to your S3 bucket. S3 makes it very easy for us to get data in and out, because we can upload objects directly into the bucket — you saw under the Overview section how easy it was to upload content into the S3 bucket. We've got pretty detailed administration capabilities for the S3 bucket: you can grant permissions to control who has access to the data, and there are a few different ways of protecting the data and granting access to the right set of people. At this point, that's all for the S3 storage classes. Thanks for watching, and I'll see you in the next lesson.

47. 46 AWS Storage Gateway: Hello and welcome back. Thanks for joining me again as we continue with a new topic called Storage Gateway. In this lesson we're going to talk about what Storage Gateway is, the different kinds of storage gateways, their purposes, and their benefits. So what exactly is Storage Gateway? A simple definition is that it's a way you can integrate your existing applications, let's say on premises, with the AWS storage services that you have in AWS. If you recall from the storage lesson, what are the different kinds of storage services we have? There is block storage, which we call EBS, and then there is object storage, S3, isn't it? So if you would like to fully migrate your on-premises infrastructure storage to AWS storage, you might want to use a Storage Gateway. Data can be backed up to the AWS cloud, meaning that you're able to use the data locally and just move backups and data protection over to the AWS cloud, or you can cache the data locally at your data center; in that scenario the data actually resides in the AWS Cloud, and based on when people access the files, the files are cached locally within the customer's on-premises data centers. No matter which way Storage Gateway is used, all data transferred using the Storage Gateway is optimized for fast and efficient transfers. So what is Storage Gateway? AWS Storage Gateway is a hybrid storage device; with this you can have on-premises applications connect to cloud-based storage seamlessly. What are the purposes of a Storage Gateway? We can use it for backup and archival, disaster recovery, data processing in the cloud, storage tiering, and migration. We can also use it in scenarios where you have multiple data centers across geographies, and let's say some of these are remote sites; you can then connect your applications and other services through a virtual machine or a hardware gateway appliance, using the well-known standard protocols like NFS, SMB, and iSCSI, and the gateway then connects to your AWS storage services like S3, Glacier, Glacier Deep Archive, the block storage EBS, and AWS Backup.
These services provide storage for files, volumes, snapshots, and virtual tapes in AWS. The service is a highly optimized data transfer mechanism with bandwidth management, network resilience, and efficient data transfer. Let's look a little bit more at the different deployment models that we have in Storage Gateway. There are three kinds of storage gateways we're going to talk about: a file gateway, a volume gateway, and a tape gateway. Starting with the first one, the file gateway: with a file gateway, data is uploaded to S3 for use with object-based workloads. Remember that when we upload data to S3, we are uploading objects. S3 can also be used for storage tiering, to allow data to be stored in the most cost-efficient storage class. If you recall from the previous lesson, we spoke about the various storage classes and the different costs associated with them, so you can move your data from the Standard storage class to Glacier or to One Zone-IA, or maybe even to the Intelligent-Tiering storage tier. Then we've got the volume gateway. There are two types of volume gateways, so let's take a look at the first type, which is stored volumes. Stored volumes give you the ability to keep the customer data on premises, and that data can periodically be backed up to AWS based on snapshots. This is great for hybrid environments where the customer would like to keep their data locally and just wants to use AWS for backups. Within the volume gateway you've got cached volumes as well. With cached volumes the data is stored in AWS, and the data that's most frequently accessed by the customer is cached in the customer's data center for the fastest access. This gives the customer the best of both worlds: they get fast access to the data because it's cached locally, but they get all the benefits of having the data stored in the cloud. And then we have the tape gateway. A tape gateway is designed for long-term offsite data archiving in the cloud. Typically, when we think about data archiving, we think of tape backups. Many customers previously had existing tape backups on their premises and had an array of tape libraries. The disadvantage of using these local tape libraries is that tape as a medium is unreliable: can you imagine trying to restore some critical data from a tape, and then you pop in the tape and there's nothing on it? Tape as a medium in itself is very unreliable. With the tape gateway we archive the data in the AWS cloud, and this means that we don't have to worry about the reliability of that data on tape. That's all for now in this lesson on Storage Gateway. Thanks for watching, and I'll see you in the next lesson.

48. 49 Route 53: Hello there, and welcome back. Thanks for joining again as we move on to the next topic, which is Route 53. We're going to talk about a review of DNS and Route 53, and what the features of Route 53 are. Let's go ahead and jump into the AWS console and take a look at that quickly. Now I'm in the AWS console and I will type Route 53 in the search box; this gives me a pop-up, so I'll click on that to navigate to the Route 53 section. Here, as we see, there are four main functions of Route 53: DNS management, traffic management, availability monitoring, and domain registration. Most of these are self-explanatory; if you understand DNS and how the flow works, this will become very easy for you.
But if DNS is a new topic for you, I suggest that you take a step back and learn about DNS and see how the routing of traffic works on the internet. When you go to a URL, let's say www.google.com, how does it actually get delivered to your browser? That's the whole purpose of DNS, and this is something you must know; consider it a prerequisite for this class. I'll start with domain registration, as that is a low-hanging fruit and we can wrap it up quickly. As the name itself says, domain registration will help you register a domain. Let's say you want to start a company; the first thing that you would like to have is a domain name, for branding purposes. Let's say the new company is at www.myproducts.com. What I'm going to do is click on Check to see if it is available or if somebody else has picked it up. AWS Route 53 domain registration tells me that myproducts.com is unavailable because somebody else has taken it, but there are lots of other suggestions it has for us. In the right-hand side column, Pricing, you'll see that the price is also displayed; the price is not the same for all kinds of domains, and keep in mind that this price is for one year. This process is just like buying a domain from your favorite registrar: it could be GoDaddy, it could be BigRock or HostGator, whoever you have been buying domains from. So here Route 53 tells me that you can purchase domains if you plan to set up a new business and would like to have an online presence. Not just that, you can transfer a domain as well. This is possible if your existing registrar is letting you do that. Let's say you have a domain called thecloudmentor.com; I'll go ahead and check to see if I can transfer it. Just to inform you, this particular website is registered in my name with a registrar called BigRock, but it looks like BigRock is blocking me from transferring my domain to Route 53 because it has been locked. So if I need to transfer a domain from my existing registrar, let's say GoDaddy, to Route 53, I need to contact the registrar and ask them to unlock it and tell them that I would like to transfer it to a new registrar, which is AWS Route 53 in this case. So the domain registration part of Route 53 lets you register a domain and transfer a domain as well.

That being said, let's go back to the dashboard and take a look at the next important option, which is DNS management. We first completed domain registration, and now we're looking at DNS management. Once your domain is registered — say you have a domain called myproducts.com that you purchased — you would like to have certain records like an A record or a CNAME record, such as www.myproducts.com or mail.myproducts.com, and similar records like that. That is possible under DNS management. If I click on Get Started, it gives you an option to create a hosted zone. My zone name will be my domain name itself; let's say hypothetically I have a domain registered like that, and I've got to select whether it's public or private. In this case I'm going to select public and click on Create, and there you go: my public DNS zone called myproducts.com is created.
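Here is a minimal sketch in Python with boto3 of the two things we just did: checking whether a domain name is free, and creating a public hosted zone for it. The domain is the hypothetical myproducts.com from the example.

import uuid
import boto3

# Domain registration and transfer operations live in the route53domains API,
# which is only served out of us-east-1.
domains = boto3.client("route53domains", region_name="us-east-1")
check = domains.check_domain_availability(DomainName="myproducts.com")
print(check["Availability"])   # e.g. AVAILABLE or UNAVAILABLE

# Hosted zones and record sets live in the route53 API.
route53 = boto3.client("route53")
zone = route53.create_hosted_zone(
    Name="myproducts.com",
    CallerReference=str(uuid.uuid4()),  # any unique string; makes the call idempotent
)
print(zone["HostedZone"]["Id"])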
But if I would like to have an A record, I click on Create Record Set and then type www.myproducts.com here; the domain name is automatically added as a suffix. What I've got to do then is just type in the value, which will be the IP address of my web server. So I type in an IP address like 23.65.32.52, or whatever the IP address of your website is, and click on Create. Similarly, you can create the record for your mail; that will be, let's say, 58.65.96.34 — I'm just typing in some random public IP address here — and click on Create. So, as you see, we can create multiple record sets: not just A records, but you've got options to create a CNAME, an MX record, an AAAA record for IPv6, TXT, and so on, as you see in this drop-down menu. The DNS management option lets you create the DNS records. So we looked at two options: one is registered domains, and the second one was DNS management. Let's take a look at traffic policies.

I'm going to click on Traffic Policies now and then click on Create Traffic Policy. I'll just give it some name, say traffic-policy-1 or something, and click on Next. As you see, you've got a nice blank slate here, and you might be wondering what we can do with this. Well, what you can do here is manage the traffic as and when it arrives into your data center in Amazon. If I click on Connect, it gives me various options like weighted rule, failover rule, geolocation rule, latency rule, multivalue answer, and geoproximity rule. With these rules you can define how the traffic must be routed into your AWS data centers or even outside AWS. For example, if I pick the weighted rule, I'll type in a number, let's say 70, and then here 30. It just means that 70% of the traffic goes to the endpoint that you define here and 30% of the traffic goes to the endpoint that you define there. These could be public IP addresses which can be routed to other cloud providers' locations as well; it could be on premises, or within AWS. In the first drop-down menu you can select whether the traffic needs to go to one of these endpoints. If you have your application hosted on, let's say, an ALB, an application load balancer, you can pick that; of course this option is grayed out for me because I do not have any application load balancer, but if you had a load balancer in any of the regions, you would be able to see it in the drop-down menu. Not just load balancers, you can also select an S3 bucket, so if you have hosted a static website in S3 you should be able to see that as well. Basically, what I'm saying is that with a weighted rule, or a weighted traffic policy, you can distribute the traffic based on the numbers you define here. If I close it and take a closer look at the other options, there's a failover rule. With this, what I can say is that all of the traffic, 100% of the traffic, goes to the endpoint I define here, but just in case that goes down — because the endpoint in that particular region is disturbed, or that region is facing a calamity — the traffic can be redirected to a secondary region. That's what failover means. Now I'll close this and take a look at the other option, which is the geolocation rule. Here you can distribute the traffic based on a particular region.
The traffic originating from Europe goes here, and the traffic originating from, let's say, a particular country — say I want people from Bahrain to be routed to this particular endpoint that I define here. That's what geolocation is. Now, at this level, for the essentials or the Cloud Practitioner level, you really do not need to know all of this, but I just wanted to show you the power of traffic policies: you will have multiple traffic policies to work with, depending on the kind of routing that you would like to see for your web application. So that was traffic policy. So far we understood registered domains, where you can register and transfer domains; then we learned about DNS management, where you can host zones and create your own records; and then we got to traffic policies, where you can create custom routing configurations and distribute the traffic accordingly.

The last and very important point is health checks. How will you get to know if a particular endpoint is not working? Let's say you've got multiple load balancers and web applications at a global scale. How would you ensure that the load balancer hosted in N. Virginia is healthy, and if it goes unhealthy, that you get alerted? That's what a health check does. So I'm going to click on Health Check here and show you the options. Just for an example, I'll type "health check" as the name and then define what I want to monitor. Do you want to monitor an endpoint, or the status of other health checks, or the state of a CloudWatch alarm? Usually the health check in Route 53 is used to monitor endpoints. If you choose to do so, you need to tell the health check whether you want to monitor an IP address or a domain name; you specify the IP address, followed by the protocol on which it should monitor. If it's a web application, you would like to run the health checks on HTTPS or HTTP, but if it is listening on a different port, let's say 8443 for a Tomcat web server, then you would put that here along with the IP address. What I'm going to do is put some random IP address here just to keep things moving, and then I'll click on Next. Look at the URL that it has constructed: it's going to query the following IP address on the following port using the following protocol — HTTPS on port 8443 on this IP address. Click Next. When things go wrong, would you like to generate an alarm, yes or no? That alert is then pushed through SNS. We'll talk about SNS sometime later, but SNS is nothing but the way you can send alerts and get them on your email, or maybe get them as an SMS notification on your phone. You have to define these topics in SNS; we'll talk about this later, but for now I will select a particular SNS topic and create the health check. So that's how health checks are created. To summarize, folks, in this lesson we learned about registered domains, where you can register and transfer domains; we then learned about DNS management, where you can create A records, TXT records, CNAME records, and different kinds of DNS records by hosting a zone; we then learned about traffic management, where you can create policies to route traffic to your geographically hosted load balancers or a static website hosted in S3.
Finally, we learned about monitoring your web application with the help of availability monitoring and health checks. Thanks for watching this lesson on Route 53; I'll see you in the next lesson.

49. 50 CloudFront: Thanks for joining me again in this lesson, where we're going to talk about AWS CloudFront and the features and benefits it offers. Let's get started. So what is CloudFront? CloudFront is the content delivery network service from AWS, which just means that this service will be caching content in what's called an edge location. Let's talk about it in more detail. Let's say you've got your website located somewhere on the East Coast of the United States, and now your business grows so much that you've got an audience everywhere around the world: you've got people in the Middle East accessing your website, people in Asia and, of course, in the United States, and maybe a few sets of people in the European region. When an audience at that scale accesses your web application, there are chances that the audience will complain about the latency they get while accessing the web application. When you use a CDN along with your web application, you're just saying: let me cache my content all across the world, so the content that the customers are trying to access will be cached in the nearest edge location. What is the benefit, then? Well, the benefit is that the customers can then access the content more quickly. Every time you access a website which has a global presence, there is a chance that they are using some sort of content delivery network. Traditionally there have been several content delivery networks, and Akamai as a company is one of the most famous. With AWS, we just say: hey, you do not have to go to a third party, we are a one-stop shop; if you can host a website here, you can cache content too, so we can leverage the edge locations of Amazon to host the cache. The CloudFront edge locations provide fast access to the content, and that content can be in any form: it could be data, videos, applications, or APIs. The advantage is that it replicates the data to various points in different places around the world, and this helps ensure that customers do not experience latency when they're trying to access the data. Your customers will not complain about being tired of waiting for the website to load — and if that ever occurs, customers will just go to a different website and you will lose your business. You don't want to lose your website visitors; that's where CloudFront comes to the rescue, in the form of a content delivery network that caches the content. What happens is that the user goes to access the data; CloudFront pulls that data from the origin — in this case, let's say the East Coast of the United States, which is where the source server is — and then loads it into the CloudFront edge location that's closest to the user. So if the user is somewhere in Australia, the content will be cached at that location, and the next time someone from that region tries to access the data, they reach that CloudFront edge location and the website loads up from there. Of course it's going to be quick this time, because it's not doing multiple round trips to the source location. Amazon also calls these edge locations points of presence.
So if you recall one of the lessons in the first module, when we were talking about availability zones and regions, we did mention edge locations that are used for caching content. Security professionals will be worried about data spanning across geographies and also about the probability of DDoS attacks. If you're wondering what a DDoS attack is: DDoS is distributed denial of service. In a DDoS attack there are typically thousands of compromised servers, and they could be located anywhere; those computers attempt to overload a particular server. There's so much traffic sent to the server that genuine traffic is not able to reach the genuine content, so it becomes extremely slow to access and you'll be losing all your genuine customers. DDoS attacks a lot of times target DNS servers, web servers, and application servers as well; it really doesn't matter — the end goal is just to crash that system and thereby prevent people from accessing the web application or the data they're trying to reach. So why are we talking about DDoS and security when we're talking about CloudFront? How does CloudFront help there? Well, that is because the data is being replicated to all of those points of presence. As a compromised computer attempts to access the source server, it is actually redirected to one of the CloudFront edge locations, because these locations are distributed worldwide. CloudFront is actually minimizing the potential of a DDoS attack, because instead of being sent to the source server, the malicious traffic is sent to a CloudFront point of presence. The source server is never impacted; by using CloudFront, not only are the files distributed, the attack surface is also distributed worldwide. If you do not use a CDN, you can imagine what's going to happen: the DDoS attack will be targeted at that individual web server. So what is CloudFront? CloudFront is a global content delivery network that helps you deliver your data, videos, applications, and APIs to your viewers, ensuring the lowest latency and high transfer speed, just to keep your customers happy. CloudFront is very well integrated with AWS, and you saw all of those points of presence everywhere across the geographies; we can see that in the AWS global infrastructure. CloudFront integrates very well with a service called AWS Shield. AWS Shield is about DDoS mitigation; just remember that it's one of the security services. AWS Shield is there to help you mitigate the DDoS attacks we were just talking about. All AWS customers, regardless of paid, trial, or enterprise, are protected by the basic version of AWS Shield for free, and of course there's an advanced version of AWS Shield where you get lots of other features, with an additional cost involved. Now, I know we digressed while talking about CloudFront; we're going to take a step back and talk about CloudFront now. But the point was that CloudFront integrates very well with AWS Shield and other services like S3, where you're going to host your static websites, or maybe an elastic load balancer that is connected to your EC2 instances, and these services can be used as origins for your application.
We're going to talk about AWS Lambda a little bit later, but I thought I'd bring this point into the discussion as well, because AWS CloudFront does integrate with AWS Lambda, which is the serverless part of AWS. Just to give you a gist of what AWS Lambda is: Lambda gives you the ability to run your custom code without requiring a physical server. Traditionally, if you ever had to run your custom code, what you had to do was spin up a server — virtual or physical, it doesn't matter — then install the operating system, make sure you've got all kinds of server hardening procedures on it, and then finally deploy your code. With AWS Lambda you start top-down: you start with your code and not with the server. What happens then? A lot of times you will have no physical server or virtual server to manage for your code, so the developers can just start pushing their code instead of asking the system administrators to spin up a server. We're going to talk about AWS Lambda a little bit later, but again, I want to mention that CloudFront integrates with AWS Lambda as well.

At this point, let's take a step back and, just to refresh our concepts, talk about how all of this works. Let's break it down. When a user tries to access the data, that means the user is trying to load the website, let's say thecloudmentor.com or something. When the user searches for thecloudmentor.com, they are directed to Route 53 or some kind of DNS name resolution service; in this case you would have your domain registered in Route 53 with this name. Within Route 53 we have an alias record that is set to one of the CloudFront edge locations. So the first thing CloudFront does is check if the data is actually cached at that location. If it is, then the data is immediately returned to the user. If the data is not cached at the edge location, then it has to go to the CloudFront origin to access the data, and that origin can be your elastic load balancer, your EC2 instance, your S3 bucket, or your Lambda; all of those can serve as a CloudFront origin. Let's say it's an elastic load balancer: the elastic load balancer can still distribute the workload amongst the EC2 instances, and then that data is returned back to the edge location. The data stays on the edge location for a certain duration of time; it cannot sit there forever, can it? Because the developers are working on the code, the developers are updating the website, and that updated code is going to your EC2 instance or your S3 bucket. What's going to happen if the content sits on your edge location for a long time? The users will not be getting the updated content, and that's why there must be a time to live, a TTL, an expiration date for the cached content on that edge location. When that expires, it is automatically removed from the edge location, and the next time someone goes to access the data, it has to go back to the origin and return that data to the CloudFront edge location. When the data is sent to the edge location, as soon as the first byte of data hits the edge location, it is sent back to our customer's computer. So that is CloudFront. CloudFront is there to help you minimize round trips back and forth to the origin server; that's point one.
It is caching the content, that's point two; with all of the above points it is reducing the latency for the end user, and, point four, of course it is making the customer happy. So that's all for now, folks. In this lesson we learned about CloudFront, how it integrates with other services in AWS, and of course all the benefits that CloudFront has to offer. Thanks for watching, and I'll see you in the next lesson.

50. 51 CloudWatch: Welcome back, and thanks for joining. In this section we'll talk about monitoring of AWS services with CloudWatch. CloudWatch helps you with monitoring, metrics, and logging as well. There are numerous CloudWatch benefits, which we're going to talk about in this lesson. CloudWatch is a service that allows you to monitor various elements of your AWS environment: it could be your EC2 instance, a NoSQL database like DynamoDB, or an elastic load balancer. It integrates with every other service that we have in AWS. So let's go into the AWS management console and see how CloudWatch is helpful for monitoring your EC2 instances, in this example. As you see here in the EC2 dashboard, I've got an EC2 instance. If you click on your EC2 instance and then click on the Monitoring section, you will see several parameters related to your EC2 instance. For example, you might want to check the CPU utilization, the disk read and disk write operations, and several other parameters related to the EC2 instance. We do not see numbers in the graph here because the EC2 instance was created just a minute ago and there hasn't been much activity, so CloudWatch hasn't been able to capture much for its dashboard. That being said, on the top right-hand side we've got a filter which lets you filter and look at those statistics based on what you want to see. For example, if you want to see the data of the last 24 hours, just click on it and it will filter automatically. If you want to see a holistic status of all the services in CloudWatch, then you've got to go to the CloudWatch-specific dashboard. To do so, go to Services, and under Management and Governance you'll see CloudWatch; let's click on it and navigate to the CloudWatch dashboard. There you go. What you see here is an overview of my services and whether there are any alarms related to a service, whether they are in OK status, or whether the data is insufficient for AWS to capture. So if AWS CloudWatch is not getting enough values from Route 53, or from EC2, that service is going to fall under Insufficient. If there is some problem — let's say the CPU is high, or you're pumping too much data into S3, or usage is high, or your billing budget has gone beyond a certain number — those will fall under the Alarm section. And if everything is okay, it goes into the OK section. Well, that is the dashboard, but you can definitely go and click into these individual sections on the left-hand side, say Alarm, Insufficient, or OK, and find out why something is insufficient, why it is okay, or why it is in alarm status. Let's say I click on Insufficient, and at this point it says that my Route 53 status is not good. Well, of course, because I deleted the Route 53 zones, and that deleted the CloudWatch alarms from there as well, and that is why I do not see any alerts and hence insufficient data. There might be another reason as well.
Okay, coming back: there might be another reason why insufficient data shows up here as well. When I was creating a health check in Route 53, I typed in a dummy, random IP address for CloudWatch to go and check as part of the health monitoring. And because that public IP address was random, it was a dummy, it is saying that it does not have sufficient data to pull that information. Okay, and you will see a similar kind of information under Alarm and also under OK. Well, you'll see information under OK if everything is all right, just to tell you which services are running properly. Okay, now this particular dashboard that you see is a new dashboard, right? There was an older dashboard as well, so I'm going to switch to the original dashboard by clicking on that. And that's how it used to look earlier. So if you see screenshots in blogs when you're, let's say, researching and going to third-party websites, or maybe some YouTube channel to learn more about it, you may see screenshots like this. So don't panic. Amazon is constantly changing, just like other clouds. And as we say, cloud is a moving target, right? So things keep on changing from the cloud vendor side. You don't have to worry about that. Just keep your fundamentals strong and the rest will fall into place, because things in the backend keep on changing. Amazon improves the look and feel of every service constantly, so do not focus on the aesthetics or the beauty of the console, but rather focus on fundamentals and build your foundations, right? Okay, I'll go back and click on this one to look at the new design. So that's how it looks today, more modern, more appealing. One thing that I want to show is the Dashboards section. You can create your own dashboard for different teams. Let's say you've got a team for storage and there's another team for server management, and every team wants to look at their own services, right? Why would a server team want to look at the storage aspect? And why would a storage admin want to look at the database aspect? Right. So for that purpose, you can go ahead and create a new dashboard, and let's say I'm creating a dashboard for the storage team. I'm going to call it "storage team dashboard". All right, click on Create, and that is ready for me. It gives me something like a pop-up and asks me, how do you want to see information on the dashboard? Do you want to see a graph? Do you want to see numbers? Do you want to see some kind of text, or do you want to customize your own query? Right. So let's say you want to see how many objects you have in an S3 bucket. In that case, line or stacked area will not fit this particular use case, but you may want to select Number, right? In another use case, if you want to see the CPU utilization of your EC2 instance, what would you do? Well, you would then look at line graphs or stacked area. Comparing the CPU utilization between multiple virtual machines or multiple EC2 instances, which one do you think you would select? Well, I would select stacked area, because thereby I can have multiple graphs plotting over the X and Y axes, right? So, depending on your use case, you may want to pick and choose a particular widget type on your dashboard. I will select Number at this time and click on Configure. Now you've got a blank slate. There's nothing inside the CloudWatch graph, because CloudWatch still doesn't know what you want to see on that widget. Right now, because we have created this dashboard for the storage team, I will choose, let's say, S3.
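A small side note that wasn't part of the on-screen demo: if you are ever unsure which S3 metrics CloudWatch has actually recorded for your buckets before you build a widget, you can list them programmatically. A rough sketch with boto3 follows; the region here is an assumption.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # List every metric CloudWatch has recorded in the S3 namespace,
    # typically BucketSizeBytes and NumberOfObjects per bucket.
    paginator = cloudwatch.get_paginator("list_metrics")
    for page in paginator.paginate(Namespace="AWS/S3"):
        for metric in page["Metrics"]:
            dims = {d["Name"]: d["Value"] for d in metric["Dimensions"]}
            print(metric["MetricName"], dims)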
How much bucket storage do I have? So let's select S3 here and click on Storage Metrics. Okay. Now, as I scroll down, in the last column you will be able to see all the metrics for your buckets. Right now, let's say I want to see how many objects I have in my bucket, or what the bucket size is. So let's click on this particular bucket. The bucket name includes my account ID, so you can identify it from the bucket name. Okay, so it should tell me the bucket size. All right? I don't see a number yet, because it's going to take some time to pull and get that information for me. All right. That being said, you can also customize it, increase the size and decrease it as well. And then you can add more widgets to the same dashboard, right? So let's say I now want to select a line widget, click on Configure, and then go back to the same stats, let's say S3 storage metrics. And now you want to plot a graph and see how many objects have been created over a period of time, right? Developers are constantly working on your bucket, they keep on uploading content to it. So you want to get stats of how many objects have been uploaded over that period of time, right? So let's look at this particular bucket, a bucket called DeepRacer, and I want to know the number of objects, right? So at this point, I do not have any data on the graph, so I will see it as empty when I click on Create Widget. But you know what? Let's change the timeline to three days or maybe one week. There you go. Now I have started to see some numbers. There has not been any activity on this particular bucket for weeks now, and hence I do not see any recent data. So that means nothing has happened from six AM to nine AM, but then, if you customize it, let's click on those three little dots and then Edit, I can change the duration to weeks, months, et cetera, and then say Update Widget. All right, so that is how you can create different kinds of widgets on your dashboard. And finally, when you're happy with the dashboard, you can click on Save Dashboard and then share it with other team members. Okay, there are other items to look at here in the dashboard. For example, if you go to Actions, you can create a new dashboard, save the dashboard as, rename it, and several other self-explanatory options. You can also view and edit the source, and it shows you the whole thing in JSON format, which could be copied and then pumped into your own custom monitoring tool as well. Okay, let's go ahead and add one more dashboard for our EC2 instance. So name it "infra dashboard", click on Create, and this time I want to add a line widget. Click on Configure, and I want to get some stats for my EC2 instance. Okay, so how many EC2 instances do I have? I want to find out the total disk reads, disk read bytes, disk write bytes, network in and network out as well. So you might have already noticed, as I keep checking these boxes, the graph is already populated with a few statistics, as you see on the top. I am more interested in the CPU stats, so I'm going to select the CPU utilization of this particular EC2 instance. Okay. And now I click on Create Widget. So there you go. I see some stats, and this EC2 instance was created just a few minutes ago, and that's why I'm now able to see the stats on the dashboard.
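One more note that wasn't part of the demo: because the dashboard source is just JSON, as we saw under the view/edit source option, you could also create or update a dashboard entirely in code rather than clicking through the console. Below is a minimal sketch using boto3; the dashboard name, bucket name and region are made-up placeholders, not the ones from this walkthrough.

    import json
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # One "number" widget showing the object count of a bucket; the
    # dashboard name and bucket name are placeholders.
    dashboard_body = {
        "widgets": [
            {
                "type": "metric",
                "x": 0, "y": 0, "width": 6, "height": 6,
                "properties": {
                    "view": "singleValue",
                    "region": "us-east-1",
                    "period": 86400,          # S3 storage metrics are reported daily
                    "stat": "Average",
                    "metrics": [
                        ["AWS/S3", "NumberOfObjects",
                         "BucketName", "my-example-bucket",
                         "StorageType", "AllStorageTypes"]
                    ],
                },
            }
        ]
    }

    cloudwatch.put_dashboard(
        DashboardName="storage-team-dashboard",
        DashboardBody=json.dumps(dashboard_body),
    )

Running put_dashboard again with the same name simply replaces the dashboard, which is handy if you keep the JSON in version control.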
So this is how you can configure your dashboard to look at specifics of the EC2 instance or the S3 bucket, so you can look at the CPU utilization or the number of objects in your S3 bucket, and get alerted as well. So if your CPU utilization goes over 80%, you can configure it in such a way that you start getting alerts in your email; that is done through Simple Notification Service. You can also get an alert if the number of objects in your S3 bucket, let's say, goes above 100. Let's go and see that in action. Since we have the two dashboards here, I'm going to go to Alarms, and we already noticed this alarm earlier. But what if I have to create a new alarm, right? So I can go to the graph and select a particular metric for my EC2 instance. I just want to say that if my CPU utilization goes above 80%, I want to be alerted, I want to take an action. So I go here and then select CPU utilization. Because I have selected a metric, now what do you want to do with it? If the CPU goes above 80% for a period of five minutes, then what do you want to do with that? So I'm going to put my metric value here, and by doing so I'm saying that if the average CPU utilization goes over 80% for a period of five minutes, I would like to get an alert. Now, as we discussed earlier, alerts are done through SNS. SNS stands for Simple Notification Service. Notification can be done through several means: it could be done through email, or SMS, or webhooks; there are several ways, and we'll talk about that in an upcoming lesson. But at this time, I'm just saying that if the CPU is in alarm status, I would like to have an alert sent. And I've already pre-configured two SNS topics, behind one of which my email address is embedded. Okay, so "NotifyMe" is linked to the following email address, whereas if I quickly take it off and then look at the other one, it is linked up to, well, nothing. Well, I can go to the SNS console and then attach an email or attach my phone number to it. But you get the point here, right, that SNS is used for notification purposes. And at this point, I will get rid of this and then select the SNS topic that has the email address set. Right. Now, apart from sending emails, what else can we do? Well, we can proactively do certain things. Because the CPU utilization is over 80%, you don't want to be taking action reactively, but rather be proactive and fix that problem, right? So what are we talking about here? We're talking about scaling the service. Because the CPU is at 80%, we would like to build new virtual machines in that auto scaling group. So you have two options to take: you can either take an auto scaling action or add an EC2 action, under which you can stop the EC2 instance, terminate the EC2 instance, or reboot the instance. But if you do not want to operate at the EC2 level, you can go ahead and take an auto scaling action instead, which will automatically start building EC2 instances so that the average CPU utilization comes down and your services are not impacted, right? So if you just have the email option selected, your action will be reactive in nature. But if you do one of these, you are taking a proactive approach. Okay, I hit Next here and then type in an alarm name, let's say "CPU is high", and this is what I will see in my email as the subject, and the description can be typed here. I will just ignore that and hit Next, and there you go.
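If you prefer code over clicking, the same alarm we just walked through can also be defined programmatically. This is a rough sketch with boto3 and is not shown in the video; the instance ID, account number and SNS topic ARN are placeholders I made up, not the actual "NotifyMe" topic from the demo.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alarm when the average CPU of one instance stays above 80% for 5 minutes,
    # then notify an SNS topic (which in turn emails the subscribers).
    cloudwatch.put_metric_alarm(
        AlarmName="CPU is high",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,                # evaluate in 5-minute windows
        EvaluationPeriods=1,
        Threshold=80.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[
            "arn:aws:sns:us-east-1:123456789012:NotifyMe"   # placeholder topic ARN
        ],
        AlarmDescription="Average CPU above 80% for 5 minutes",
    )

An auto scaling policy ARN could go in AlarmActions as well, which is the proactive option described above, as opposed to only sending an email.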
So in the dashboard, I've configured that if the CPU goes above 80%, show that to me in the graph, and whenever that situation happens, I get an email. And we did notice that CloudWatch has the option to proactively configure auto scaling and also take EC2-related actions. And this is not just for EC2 or S3, but every other service that we have in AWS, right? It can be done for databases, for NoSQL, S3, Elastic Beanstalk, Lambda, and every other service that CloudWatch integrates with. So just to summarize what we learned so far, CloudWatch is a monitoring service from Amazon. It will let you monitor every other service that we have in AWS. I just want to mention that there are CloudWatch agents available as well. So if your services are non-Amazon, for example on premises or on Azure, you can have CloudWatch agents sitting on them. You go ahead and download those agents and install them on your on-premises servers or in the Azure cloud, and CloudWatch still has the capability to monitor them. What a powerful tool, isn't it? Well, that's all for this lesson, folks. Let's go ahead and talk about an auditing tool in AWS called CloudTrail. Thanks for watching, and I'll see you in the next lesson.

51. 52 Cloud Trail: In this lesson, let's talk about an important aspect called CloudTrail. In your organization, if you have an AWS account, you will be creating virtual machines, maybe your team members are creating some S3 buckets, and someone else is deleting EC2 instances because he's done with his job. But at the end of the month, you would like to know who is doing what. Basically, you would like to audit your AWS subscription. That's how you will enable compliance, governance and any kind of risk auditing for your AWS account. So CloudTrail lets you monitor the actions that are done by user accounts. And who are the user accounts in AWS? Well, they are the accounts created in your IAM, aren't they? So CloudTrail lets you see the actions that are taken by different IAM users. So if somebody is deleting an S3 bucket, stopping an instance, restarting an instance, or taking any other action in your AWS subscription, it is captured by CloudTrail for you. Let's go ahead and jump into the CloudTrail console and take a closer look at it now in my subscription. During this lab demonstration, I've done several things. In the previous lab, I created an EC2 instance and a couple of dashboards