Cloud Computing for Beginners - Infrastructure as a Service | Idan Gabrieli | Skillshare

Cloud Computing for Beginners - Infrastructure as a Service

Idan Gabrieli, Pre-sales Manager | Cloud and AI Expert

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more

Lessons in This Class

29 Lessons (2h 22m)
    • 1. Promo IaaS

    • 2. Welcome!

    • 3. Our Learning Objectives

    • 4. Welcome to the World of Cloud Computing

    • 5. Private vs Public

    • 6. Hybrid-cloud and Multi-cloud

    • 7. Hyperscale Cloud Service Providers

    • 8. Cloud Service Models

    • 9. SaaS - Software as a Service

    • 10. IaaS – Infrastructure as a Service

    • 11. PaaS – Platform as a Service

    • 12. FaaS – Function as a Service

    • 13. Demo - AWS, Azure, GCP

    • 14. Virtualization Technologies - Introduction

    • 15. Underutilized Physical Servers

    • 16. Virtualization with Virtual Machines

    • 17. Vertical and Horizontal Scaling

    • 18. Microservices and Cloud-native Apps

    • 19. Virtualization with Containers

    • 20. The Benefits of Containers

    • 21. Introduction to Infrastructure as a Service (IaaS)

    • 22. IaaS - Transform IT to Utility

    • 23. Compute, Storage and Networking

    • 24. Demo - IaaS Solution with Microsoft Azure

    • 25. Pricing Models

    • 26. Main Advantages

    • 27. And also Disadvantages

    • 28. Typical Market Use Cases

    • 29. Let’s Recap and Thank You!






About This Class


Cloud Computing is HERE!

Over the last couple of years, many businesses have decided to use more and more cloud services as part of their digital transformation, trying to be more innovative and flexible in a dynamic business landscape by leveraging the power of the cloud. Cloud computing includes a variety of cloud service models, like SaaS, IaaS, PaaS, and FaaS. Each one of them is a complete category of cloud services used to solve a variety of business challenges.

Infrastructure as a Service (IaaS)

This training is about the IaaS model. The IaaS model provided by large cloud providers helps companies transform their private IT infrastructure into a utility service: reduce the footprint of enterprise applications sitting in private data centers and leverage the capabilities of a public cloud environment.

As a first step, we are going to review the key terms in cloud computing to establish a clear understanding of the big picture, and then we will zoom in on the IaaS model: what the building blocks are, the typical use cases, advantages as well as disadvantages, pricing models, and more!

Would you like to join the Cloud Computing revolution?    

Meet Your Teacher


Idan Gabrieli

Pre-sales Manager | Cloud and AI Expert





1. Promo IaaS: We all know that cloud computing is everywhere. Almost any software application we're using today, for personal usage or at work for performing something productive, is based on cloud computing. Over the last couple of years, many businesses have decided to use more and more cloud services as part of their digital transformation. They are trying to be more innovative and flexible in the dynamic business landscape by leveraging the power of cloud services. Cloud computing includes a variety of cloud service models, like software as a service, infrastructure as a service, platform as a service, and function as a service. Each one of them is a complete category of multiple cloud services used to solve a variety of business challenges. My name is Idan and I will be your teacher in this training. I would like to talk about a single service model of cloud computing, which is still a very popular option, called infrastructure as a service. The infrastructure as a service model provided by large cloud providers helps companies transform their private IT infrastructure into a utility service: reduce the footprint of enterprise applications sitting in private data centers and leverage the state-of-the-art capabilities in a public cloud environment. As a first step, we will review the key terms in cloud computing to establish a clear understanding of the big picture. And then we'll zoom in on the infrastructure as a service model. What are the building blocks? The typical use cases, advantages and disadvantages, pricing models, and more. This is our training roadmap. I think it's going to be a great starting point for anyone who wants to understand the concept of cloud computing without getting into low-level details. Thanks for watching, and I hope to see you inside. 2. Welcome!: Hi, and welcome to this training program about cloud computing. My name is Idan and I will be your teacher. As you may already know, cloud computing is everywhere.
As consumers or end users, we use cloud computing while accessing many types of applications as a service, like our Gmail account for emails, Office 365 for handling documents, and much more. Our data is stored and managed by a service provider somewhere around the globe. And in most cases, we don't care how the service is running as long as it is provided to us 24 hours a day, seven days a week. It is one type of cloud service model called SaaS, Software as a Service. And in the last couple of years, many software vendors have started providing their software as a cloud service. It is true for the consumer market as well as the business market. Another interesting trend that is still happening in the business market is related to the IT transformation into the cloud. Almost any business company has some internal IT infrastructure, like servers, network devices such as switches, routers, firewalls, load balancers, storage appliances, and much more. It is usually located in one or more private data centers that are handled by that company. This IT infrastructure, I mean private infrastructure, is used to run a variety of applications. Every business has its own set of applications that are needed to run the business. It is hard to imagine a successful business today that doesn't rely heavily on data, software applications, and IT infrastructure. Such private IT infrastructure costs money, meaning it requires substantial ongoing capital investments for upgrading the infrastructure once in a while, like buying new, more powerful servers, more storage capacity, or network bandwidth, and so on. A dedicated team, okay, an IT team, should keep it up and running 24 hours a day, seven days a week. It has limited capacity, so resources should be carefully provisioned. I mean, not every new project will get the needed IT resources on time, because acquiring new hardware is usually a long process in many companies.
What if a business or any organization worldwide could rent the needed IT resources or IT infrastructure from someone else, instead of using its own data center? As you may guess, it's called Infrastructure as a Service, and it is the topic of this training. We're going to talk about this specific service model in the context of cloud computing. In the next lecture, I would like to review the list of learning objectives in this training. 3. Our Learning Objectives: Let's quickly review our main learning objectives and the course structure to set the right expectations right from the start. As a first step, we'll start by reviewing the fundamental terminology of cloud computing in the context of infrastructure as a service: things like the high-level definition of cloud computing, the different cloud service models and deployment options, private cloud versus public cloud, and so on. The next thing will be to talk about virtualization: virtualization technologies such as virtual machines, and then a more complex set of topics, which is about containers and more. It will be useful background before moving on. We will also review some of the main market options while looking at the leading players in the public cloud market, meaning Amazon AWS, Microsoft Azure, and Google Cloud. Then we'll start to talk about the infrastructure as a service model itself. What are its main building blocks? Why should we consider using this kind of cloud service? We'll try to understand the advantages and disadvantages of infrastructure as a service, how this service is provided and used, and what the pricing model is. Also, I will show you a high-level demonstration of creating an infrastructure as a service solution using one of the leading public service providers. And the next step will be to connect infrastructure as a service as a solution to business problems and challenges. We'll talk about typical market use cases, like lifting and shifting existing applications into the cloud.
Using the cloud to create testing and development environments quickly, and more. It is the best way to understand the business case of this cloud service model. Keep in mind that this training is more about the big picture and not about creating an infrastructure as a service solution like a step-by-step technical guide. If this is something you are looking for, I have a different training program for such a different learning path. During this training, you will find small quizzes to help you test your knowledge and understanding. In any case, feel free to ask me questions and I will be happy to assist. This is our high-level roadmap. Thanks again for joining. I wish you exciting and useful learning. See you in the next section. 4. Welcome to the World of Cloud Computing: Hi, and welcome to this section. We are planning to perform a quick overview of the key terms in cloud computing that will be used along this specific training. Maybe you are already familiar with some of them, which is great. But anyway, I recommend you use this section as a refresher, just to make sure we are fully aligned and ready to move on. Okay, let's start. What is cloud computing? I'm sure we encounter the term cloud computing everywhere. We can hear all the technical mumbo-jumbo discussions about cloud services and cloud service providers like Amazon AWS, Google Cloud, Microsoft Azure, and more. As a starting point, I think it would be useful to define the meaning of cloud computing at a high level. Cloud computing is a model for enabling on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort. Let's break down this definition. Access to a shared pool of computing resources is about allocating some capacity from physical servers sitting in a data center. Imagine a large group of servers that are stacked on top of each other. It is the pool of available resources.
Some of the capacity in those servers is being used, meaning allocated, and some of it is free, available to be allocated. Those servers can be located in the same data center or maybe in multiple data centers. Now, imagine that you can sit at home, log in to some web-based management system, and allocate virtual resources inside those remote servers by yourself. On demand means that you, me, or anyone else can access and allocate such virtual resources and pay for them with a credit card. Today I want a virtual server with this capacity; I'll use it for one week and then I don't need it anymore. I can deallocate it on demand and stop paying for that specific virtual server. That is the meaning of on-demand. I'm saying virtual resources because it's an important concept in cloud computing. Let's say we have ten physical servers in a data center. Now let's assume that each server can be allocated to one single application, like application number one to server number one, application number two to server number two, and so on. In that case, our pool of physical resources can only handle up to ten applications. Think about a situation where a specific application is only using 5% of a particular server's computing power and memory capacity. So this server is mostly idle; we're using a fraction of that server's capabilities. It's such an inefficient use of computing resources. This application does not need a full physical server to run. It can use something smaller, a slice of that server, like a slice of pizza. So it would be handy to take each physical server and slice it into many small mini servers, like dividing a server into 100 virtual mini servers. And now in total we'll have 1,000 virtual mini servers in our data center. It is a much more granular option to allocate IT resources. It is the core concept of virtualization.
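The slicing idea above can be illustrated with a tiny toy model in Python. Everything here (class names, core counts) is invented for illustration; it is not a real cloud API, just a sketch of how carving one physical server into many virtual slices improves utilization.

```python
# Toy model of virtualization: slicing a physical server into virtual mini servers.
# All names and numbers are illustrative, not from any real cloud provider.

class PhysicalServer:
    def __init__(self, cpu_cores):
        self.cpu_cores = cpu_cores
        self.allocated = 0  # cores currently handed out as virtual slices

    def allocate_vm(self, cores):
        """Carve a virtual slice out of the remaining capacity, if possible."""
        if self.allocated + cores > self.cpu_cores:
            return False  # not enough free capacity on this server
        self.allocated += cores
        return True

    def utilization(self):
        return self.allocated / self.cpu_cores


# One app per physical server: a 64-core box running a tiny app is ~5% used.
dedicated = PhysicalServer(cpu_cores=64)
dedicated.allocate_vm(cores=3)
print(f"dedicated server utilization: {dedicated.utilization():.0%}")  # 5%

# Virtualized: the same box hosts 30 small VMs and is far better utilized.
shared = PhysicalServer(cpu_cores=64)
vms_placed = sum(shared.allocate_vm(cores=2) for _ in range(30))
print(f"VMs placed: {vms_placed}, utilization: {shared.utilization():.0%}")  # 30, 94%
```

The same capacity check is what lets a data center operator pack many independent workloads onto one machine instead of leaving it mostly idle.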
It is a technology that allows us to break down physical resources into a pool of smaller virtual resources. And cloud computing relies on virtualization technologies for sharing resources between many users, many applications, many customers. We'll talk about virtualization in more detail later. Another important term is about private cloud and public cloud. Let's talk about it in the next lecture. 5. Private vs Public: I assume you may wonder, what's the big deal here with virtualization? Almost every big company has some private data center with multiple servers, and probably they are using some virtualization technology. They can use it to allocate virtual resources for their end users and their internal departments. That's true, and it is called a private cloud. It is an IT environment inside the company. The infrastructure is managed, controlled, and maintained by the IT department, and it is not shared with any other organization. It is the traditional IT approach, which is still useful for many cases. For example, some organizations are obligated by regulation to keep data and applications in their private cloud. Now, when we talk about cloud computing, it is usually not about private clouds; it's about public clouds. A public cloud is a group of connected data centers located in many places around the world, managed and operated by a third-party company called a public cloud provider. Customers like companies and individuals can use a variety of cloud services in a public cloud. Such public cloud providers provide the option to rent IT resources on demand, in the same way that an electricity provider provides electricity for many customers. That's why cloud computing is similar, to some level, to a utility company's business model: pay based on your consumption. Instead of buying and managing physical servers, you can rent virtual resources and services as needed.
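The tenant separation described in the next paragraph (customer X cannot see customer Y's data) can be sketched as a toy model. All names here are invented; real clouds enforce isolation with identity systems, network separation, and hypervisor boundaries, not a Python dictionary, but the scoping idea is the same.

```python
# Toy sketch of multi-tenant isolation in a public cloud.
# Class and method names are invented for illustration only.

class PublicCloud:
    def __init__(self):
        self._tenants = {}  # tenant_id -> that tenant's private key/value store

    def create_tenant(self, tenant_id):
        self._tenants[tenant_id] = {}

    def put(self, tenant_id, key, value):
        # Every operation is scoped to exactly one tenant's private store.
        self._tenants[tenant_id][key] = value

    def get(self, tenant_id, key):
        # A tenant can only ever see its own data; other tenants'
        # stores are simply not reachable through its scope.
        return self._tenants[tenant_id].get(key)


cloud = PublicCloud()
cloud.create_tenant("customer-x")
cloud.create_tenant("customer-y")
cloud.put("customer-x", "secret", "x-data")

print(cloud.get("customer-x", "secret"))  # x-data
print(cloud.get("customer-y", "secret"))  # None: Y cannot see X's data
```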
Now, even though a public cloud is used by many organizations for many types of use cases, each organization will have its own secure and private tenant inside the public cloud. A public cloud is a multi-tenant solution with separation between the tenants. Customer X cannot see or access the data or applications of customer Y that is using the same public cloud. Today, several players are providing global public cloud services, and we'll talk about them later on. Each one of them has a public cloud. In addition to the private cloud and public cloud, there are more combinations. Let's talk about them in the next lecture. 6. Hybrid-cloud and Multi-cloud: Many organizations have been using private clouds, also called private data centers, for many years. If you check a typical medium to large size enterprise company, you will find hundreds of different applications being used in that company for many tasks. Some of those applications are entirely based on the SaaS model, meaning there is no footprint on the private cloud. They are not using any backend IT resources inside the private cloud, except the internet connection that is required by the end users to access those SaaS applications. Let's call those applications group A. Group B are applications that are running as on-premise deployments. They are using a variety of IT resources inside the private cloud. In many cases, it's not possible, it doesn't make sense, or maybe it's not cost effective to migrate those applications into a public cloud environment. This is group B, sitting and running inside the private cloud. And the last group, group C: applications that are running in a public cloud environment, meaning the organization decided, for all kinds of reasons, that it makes sense to run them in a public cloud. This is the reality: most organizations have a mix of applications related to groups A, B, and C.
Therefore, the most popular and practical cloud deployment strategy is actually called hybrid cloud. In a hybrid cloud model, a company has a private cloud, which is the on-premise data center, as well as resources in one or more public clouds. This combined model is used to expand the private cloud capabilities. Usually those two clouds will be connected and integrated, so end users and applications will be able to utilize the two options easily. So this is a hybrid cloud. The next type of deployment strategy is called multi-cloud, and the idea is simple. There are multiple public cloud providers, like AWS, Azure, and Google Cloud. Each one can be more attractive in specific use cases, on a technical level or with a better pricing model. Maybe a specific cloud service is not available in Google Cloud in a specific region, but it is available in AWS, or perhaps the other way around. So for some organizations, it makes sense to use more than one public cloud provider. That is the meaning of multi-cloud: using several public clouds together. Let's talk about those companies that are running public clouds in the next lecture. 7. Hyperscale Cloud Service Providers: We learned that cloud computing is mainly about public clouds that can provide a variety of cloud services. Those cloud services are available to anyone, anyone that would like to pay for such services. Such a public cloud is managed and operated by a company called a public cloud service provider. It is similar to how utility providers are working, like power, water, or gas. We engage with a specific utility company and pay only for what we use or consume. The provider is responsible for managing the complex infrastructure and making sure the services are working as needed. Looking at the market today, we will find multiple public cloud providers, but only a few of them are considered to be hyperscale cloud service providers.
Hyperscale means running and operating data centers in many locations worldwide, with millions of servers and millions of applications. And those data centers are connected with high-speed network infrastructure, while using the latest computing and networking technologies. Just think about the global network that Google or Amazon is handling for its global operation. It is unbelievably huge. It is also a question of scalability. Those hyperscale service providers can provide almost unlimited capacity. If a customer would like to get hundreds of servers for some overnight computing task, he or she can get them from a hyperscale provider, which just allocates and deallocates resources on demand. As you may guess, it is a costly global infrastructure, and therefore it is setting a high barrier for small players that are trying to penetrate this growing cloud computing market. It's not so easy to become a hyperscale cloud provider. Today, those hyperscale cloud providers are mainly Amazon AWS, Microsoft Azure, and Google Cloud. If you check the roadmap of each one of those three players, you will be able to see that they are continually investing in infrastructure upgrades using the latest technologies and adding more data centers in more locations worldwide. It's a global fight between those giants, because there are many business opportunities in cloud computing. As a quick history reminder, Amazon was the pioneer in cloud computing for several years, until Microsoft and Google understood the potential of cloud computing and decided to create their own public clouds. As a result, they were lagging behind Amazon AWS for several years, and therefore Amazon is still the leading public cloud provider in market share. But as you know, in the high-tech industry, things are changing very fast. Microsoft Azure and Google Cloud are now considered to be almost one-to-one alternatives to AWS. They successfully gained a significant market share.
And it is safe to say that their cloud services are competing successfully with AWS. The bottom line, which is good news for everyone, is that we have multiple alternatives to select from when looking for public cloud services. Each one of those players is adding more and more cloud services every year. By the way, it's not just about providing the option to rent IT resources on demand. In the next lecture, let's talk about the leading cloud service models to better understand the types of public cloud services. 8. Cloud Service Models: I guess you remember that we talked about cloud service models in the Getting Started with Cloud Computing training course. Or maybe it's the first time you encounter this term. In any case, I would like to perform a quick refresh and also provide an important update, as the service models changed a little bit. A cloud service is a service provided by a cloud service provider. Those cloud services are also called X as a service. The term as a service means a cloud computing service that is managed for you, so you can focus on other things. If we open one of the leading public service providers' portfolios, we'll find hundreds of cloud services. New types of services are being introduced every year, and to be honest, it is also becoming more confusing for customers. Service providers are trying to organize those services into different categories, like solutions, technologies, market use cases, pricing models, industries, and so on. Another thing that is adding to the overall confusion is the variety of marketing product names per service provider, like Amazon EC2, Azure Cosmos DB, Google Cloud BigQuery, and so on. They're very creative while picking all those different names. One primary way to divide those services is by using something that is called cloud service models. It is a high-level definition of a cloud service.
Service providers are not directly using these high-level definitions when presenting their portfolio, because it's not enough; it's too high level. Still, for people who are just learning the concept of cloud computing, it is vital as a first step to understand those service models. So let's review them quickly, and then we'll zoom in on each one of them. At the bottom, we have infrastructure as a service, which is the main topic in this training. Here the X is basically I, which represents infrastructure. Next is platform as a service. Now the X is a platform; that is the thing managed by or provided for us. A new layer called function as a service was a recent update to the cloud service models, and of course we will talk about it in detail later. And the last layer at the top is software as a service, SaaS. The difference between those service layers is about one thing: the level of responsibility from a customer point of view while using a software solution. Let's review each one of them in the following lectures. 9. SaaS - Software as a Service: I would like to start with SaaS, software as a service, because it is the simplest form of cloud computing and it is the well-known and popular option. Every organization uses hundreds of software-as-a-service applications, like CRM, ERP, project management, handling a variety of documents, creating videos, monitoring systems, and much more. There are thousands of software products that are provided today as SaaS. In a SaaS model, users or customers get access to software features over the internet using a web browser. The software provider manages all the back-end cloud infrastructure, and it is entirely transparent to the end users or customers that are using this software. The customer, who can be a home consumer or a business customer, is not buying a one-time license. It is a service that is available on a subscription basis or pay-per-use basis, typically a monthly or yearly price per user.
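The per-user subscription model just described can be illustrated with a tiny pricing calculation. The tier names and prices below are entirely made up for illustration; they do not come from any real SaaS product.

```python
# Hypothetical SaaS subscription pricing: a monthly price per user,
# scaling up with more users or a higher feature tier.
# Tier names and prices are invented for illustration only.

MONTHLY_PRICE_PER_USER = {
    "basic": 8.0,      # core features only
    "business": 15.0,  # adds more features
}

def monthly_cost(tier, users):
    return MONTHLY_PRICE_PER_USER[tier] * users

def yearly_cost(tier, users, discount=0.10):
    """Yearly billing, assuming a discount for the annual commitment."""
    return monthly_cost(tier, users) * 12 * (1 - discount)

# A 20-person team on the hypothetical business tier:
print(f"{monthly_cost('business', 20):.2f}")  # 300.00 per month
print(f"{yearly_cost('business', 20):.2f}")   # 3240.00 per year, 10% off
```

Notice how the bill scales linearly with users and jumps with the tier; this is what "the price will scale up if we need more users or more features" means in practice.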
The price will scale up if we need more users or more features. This approach eliminates the need to install and run the application on the local user's computer, and by doing that, it simplifies maintenance and support. The software provider can instantly release updates that can be used by all customers the next time they log in to the service. Now, there are also drawbacks to using software as a service. For example, security is an issue, as the application data is stored outside the organization's data center, in a public cloud. The service availability and end-user experience are directly relying on available and reliable internet connectivity, I mean bandwidth and latency. Nevertheless, SaaS is a very popular option, and all of us are using such cloud services at home and also at work. But, and this is a very important thing to understand, when we talk about cloud service providers like Microsoft Azure and Amazon AWS, SaaS, Software as a Service, is not a relevant cloud service model. Microsoft Azure, for example, is not offering a SaaS solution. The focus is on the layers below, which we'll talk about in a few minutes. Microsoft, as a software company, is of course offering SaaS for many types of applications, like Office 365 and OneDrive for file sharing, but it is not related to the Microsoft Azure products. You will not find Office 365 as a service option inside Azure. Let's move to the other layers. 10. IaaS – Infrastructure as a Service: Now that we have finished talking about SaaS, Software as a Service, we can start from below and discover the relevant layers for cloud service providers like Azure, AWS, and Google Cloud. All the discussion from this point will be about one simple question: how to develop and deploy an application in a cloud environment. The customers are now small to large size enterprise companies. Those companies are using many types of applications, like off-the-shelf applications that they purchased from software vendors.
Or in other cases, those companies would like to develop their own applications and utilize a public cloud environment. The first popular option to deploy and run applications in a public cloud is infrastructure as a service. Using this layer, companies can rent virtual resources from a public provider to build and deploy their applications and then pay based on actual consumption. When we use this model, infrastructure as a service, we build and maintain the virtual infrastructure for our application inside the cloud. On the other hand, we don't manage or control the underlying physical infrastructure, but we are responsible for the virtual infrastructure. Infrastructure as a service is like the first generation of cloud computing adoption in the industry: taking our existing applications and migrating some of them into the cloud. This can help reduce the cost of running a private data center, or maybe help to quickly deploy new infrastructure for a new application. Cloud providers typically bill infrastructure as a service on a utility basis, which is a cost based on the time each resource was allocated and used. One thing to remember is that the end users creating an infrastructure as a service solution are IT administrators, okay, IT cloud experts, people that are familiar with IT infrastructure. They know how to allocate and configure the cloud IT building blocks used as the infrastructure to run a specific application or maybe a group of applications. Anyway, this training is focused on infrastructure as a service, to help you fully understand this service model. 11. PaaS – Platform as a Service: The next service model is called Platform as a Service, PaaS. As the name suggests, the main idea is to use platforms as building blocks for building applications. The end users are now developers, okay, not IT administrators as we saw in the previous model, meaning infrastructure as a service.
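The utility-style billing described above for infrastructure as a service, paying for the time each resource is allocated, can be sketched in a few lines. The resource names and hourly rates below are hypothetical, invented purely to show the shape of the calculation.

```python
# Hypothetical IaaS utility billing: each virtual resource is charged
# for the hours it was allocated, at a per-hour rate. Rates are made up.

HOURLY_RATE = {
    "small-vm": 0.05,             # e.g. a small virtual machine
    "large-vm": 0.40,             # e.g. a large virtual machine
    "block-storage-100gb": 0.01,  # e.g. an attached storage volume
}

def bill(usage):
    """usage: list of (resource_type, hours_allocated) tuples."""
    return sum(HOURLY_RATE[resource] * hours for resource, hours in usage)

# A test environment for one week: a large VM allocated only during a
# 40-hour work week, plus a small VM and a storage volume left allocated
# around the clock (168 hours).
usage = [("large-vm", 40), ("small-vm", 168), ("block-storage-100gb", 168)]
print(f"weekly bill: ${bill(usage):.2f}")  # weekly bill: $26.08

# Deallocating on demand stops the meter: the same large VM kept running
# all 168 hours would cost 0.40 * 168 instead of 0.40 * 40.
```

This is exactly the on-demand idea from earlier: when a resource is deallocated, it simply stops appearing in the usage list, and the bill shrinks accordingly.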
Let's say you are a software developer, part of a team that is supposed to develop an end-to-end application. Okay, you need development tools. You also need temporary environments to build and test the application you develop. You need a database to store the application data, and more. One option would be to use the infrastructure as a service option and create all that infrastructure by yourself. However, from a developer's point of view, all those IT requirements are considered to be overhead. Spending a few hours to allocate virtual machines, install a Linux operating system, and then set up a database server for a deployment environment is not the main job of a developer. A developer would like to focus on the code and not on IT tasks. He or she can ask for the help and support of the IT team, but this is just adding more complexity to the equation. This is the selling point of the platform as a service model: developers can use out-of-the-box development tools in the cloud and use a variety of platforms, like building blocks, that are managed automatically by the cloud provider. For example, you can quickly allocate a managed SQL database without performing any virtual machine allocation, any operating system installation, or any database installation. In case the operating system should be updated, this is done automatically by the cloud provider. In case the traffic load on a database is growing, the cloud provider will scale the allocated resources in order to accommodate the traffic growth. By the way, it is not just for developers. It is also the same story when running the software in a production environment: many middleware components need to be managed, and this can be done by the cloud provider. You control the application and services you develop.
And the cloud service provider is typically doing everything else, okay, managing all of those building blocks, all those platforms, for us. Platform as a service is considered to be the second generation of cloud computing, if you're looking at the adoption trends. Instead of spending time allocating, monitoring, and managing virtual resources, let's focus on the application itself. Let someone else maintain the servers, patch security updates on the OS level, scale resources if needed, monitor traffic patterns, design a redundant architecture, and more. It is boiling down to how much you would like to outsource control and responsibility to someone else and focus on the core elements of the application. Today, cloud providers are offering many types of services using this service model, meaning platform as a service; it is becoming a very popular option. I'm planning to create a dedicated training just on this service model. But this is not the last step in the cloud computing evolution. Another important and innovative approach evolved recently, and let's talk about it in the next lecture. 12. FaaS – Function as a Service: For a long period, the three layers, meaning software as a service, platform as a service, and infrastructure as a service, were the only service models of cloud computing. Today, there is a fourth model called function as a service, which sometimes is used interchangeably with the term serverless computing. It is the third generation of cloud computing, where organizations are building more cloud-native applications. Later on we'll talk about microservices, which are part of this third generation. Serverless computing is a new approach to developing and running applications in a cloud environment. The name serverless can be a little bit misleading, because the application is still using servers, of course. But those servers are entirely under the cloud provider's responsibility. The cloud provider is saying: forget about the IT infrastructure.
We will handle the IT overhead, and you should focus on your code, your application. You write your application code and provide it as small building blocks called functions. You don't need to provision or manage any cloud infrastructure. A function is a piece of code waiting to be triggered by an event. When the relevant event arrives, a resource from the cloud environment is allocated dynamically to run that specific function; when it finishes running, the cloud resource used by that function is deallocated. The resource allocation happens at runtime, in milliseconds. So in an idle situation, okay, those functions are not using any IT resources. This approach is not relevant for all use cases, of course, but it is gaining momentum. Overall, I think it's an interesting new approach, and I plan to have a dedicated training course just on serverless computing and the concept of Function as a Service. 13. Demo - AWS, Azure, GCP: We talked about the concept of cloud computing, cloud deployment options, cloud service models, and hyperscale cloud providers. I think it's an excellent time to review, at a high level, the leading players in the market that provide such capabilities. And I'm talking, of course, about Amazon AWS, Microsoft Azure, and Google Cloud. So let's open the website of each cloud provider and review some of the core cloud services and solutions that are offered. Also, keep in mind that when you open these websites today, they may look a little bit different from what you see in the training, recorded earlier, but most probably 95% will be the same. We are starting with Amazon AWS. This is the website. Under the Products option, we will see an extensive list of cloud products and services, which can be a little bit overwhelming for beginners.
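Stepping back to the Function-as-a-Service idea for a moment: a function is just a small entry point that the platform invokes per event. Here is a minimal sketch, loosely following the AWS Lambda Python handler convention; the event fields and the greeting logic are invented for illustration, not a real workload.

```python
# A minimal event-triggered cloud function, loosely following the
# AWS Lambda Python handler convention. The event shape is invented.

def handler(event, context=None):
    """Runs only when an event arrives; resources are released afterwards."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

Locally you can exercise it like a plain function, e.g. `handler({"name": "cloud"})`; in a FaaS environment the provider calls it for you on each trigger.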
Those products are organized into categories such as analytics, application integration, compute, containers, database, developer tools, Internet of Things, machine learning, and more. The variety of cloud services being offered today is amazing. If I select Compute, we see a new page dedicated to this category. Scrolling down, we'll see many marketing messages that AWS uses to differentiate its services, but I would like to get into this list of AWS compute services. Assuming I would like to create virtual machines by myself, which is related to Infrastructure as a Service, I can consider using an AWS service called Amazon Elastic Compute Cloud, in short EC2, which is used for Infrastructure as a Service solutions. Or maybe I would like to create containers; in that case, I can consider using a few AWS services like Amazon ECS, ECR, and EKS. In case I would like to develop a more cloud-native application with serverless computing, then the related cloud service is called AWS Lambda. When selecting a specific cloud service, such as this one, Amazon Elastic Compute Cloud, we will get more details. The upper menu bar is organized into additional topics like features, pricing, instance types, and various training resources. We could also start our investigation on that website from a different starting point, based on solutions: a solution by use case, by industry, or by organization type. For example, I will use the use-case category and select cloud migration inside it, and I will get a dedicated page about this use case, with an overview, best practices, how to migrate, case studies of customers, and resources. One thing we can say about the Amazon AWS website is that it's very organized and helpful. By the way, the website for creating cloud-based solutions is not what you see right now; okay, we need to open an account with a username and password and then access a different AWS website. I will show you that website later on. We are moving to the next cloud service provider, Microsoft Azure. We have a Solutions option to start navigating between the cloud services. Those solutions are divided into categories like application development, AI, and cloud migration, and in each category there is a list of available cloud services. Let's go back to the main menu and select Products. As you can see, the categories are really similar to what we saw in AWS. Okay, we have analytics, blockchain, compute, containers, et cetera. One thing to remember is that those services are not divided here by the service models that we just talked about, like Platform as a Service, Function as a Service, and Infrastructure as a Service. As you can see below, there are more than 200 different products, so it would not be so useful to divide them by using those three service models; we need more granular navigation. Let's say that I'm interested in building an Internet of Things solution. So I have such a dedicated category with all the relevant cloud services to build such a solution. Let's click on Compute. The first option, for creating virtual machines, is related to Infrastructure as a Service. However, the menu option called App Service is related to the Platform as a Service model. Let's click on it. As you can see here, it is described as a fully managed platform; okay, the keyword here is platform, therefore it's related to Platform as a Service. Let's again select Compute and click on Azure Functions. It is a cloud service for building serverless solutions, which is related to the Function as a Service model. As I mentioned before, we'll have dedicated trainings on Platform as a Service and also on Function as a Service. One thing that I like on this website is the Documentation menu option. Microsoft is doing a great job of providing product documentation, which is important when searching for something online and looking for an official answer. Let's move to Google Cloud.
The upper menu is a little bit more straightforward. It starts with Why Google, and here I would like to show you something relevant for any hyperscale cloud provider: the global infrastructure. I will click to see Google Cloud's locations. It is an updated summary of Google Cloud's global infrastructure, which changes every few months. Looking below, we'll see how the data centers are divided into regions in different locations. And if I click on Network, we will see the global network that Google is operating on a global scale, which is amazing. While scrolling down, we'll get a useful table that indicates whether a specific product is available in a particular location. I can click on Europe and then look at the second line, called App Engine. Okay, this is one example of a product that I would like to use, and we'll see if that product is available in some location. For example, today it's not available in the Netherlands or in Finland. On the other hand, the Compute Engine product is available in all those locations in Europe. We should check this with each cloud provider before trying to create a complex cloud solution. Another thing that I would like to show you quickly is about pricing. Let's select Compute and then Compute Engine. On the left side, I will choose Pricing, and then View pricing details. If we scroll down, we'll see a list of virtual machine types and then the price for each machine: the price, say, for one hour of usage. I can also switch it to a monthly calculation. We'll talk about pricing models later on. Okay, that was a high-level overview of the three players. Let's check your understanding so far using a quick quiz. See you again in the next lecture, where we're going to talk about virtualization. 14. Virtualization Technologies - Introduction: Hi and welcome back. In the previous section, we talked about the definition of cloud computing, the cloud deployment options, and the cloud service models.
I mentioned that cloud computing is based on virtualization technologies, but I didn't drill into that topic, and some of the feedback that I collected from the Getting Started with Cloud Computing training course is that many students wanted a more in-depth technical understanding of the concept of virtualization. Therefore, I'm dedicating this section to that concept. We will review the first generation of virtualization, meaning virtual servers, and then the second generation, based on containers. We will also discuss some important topics like microservices and cloud-native applications and how they are related to virtualization. 15. Underutilized Physical Servers: As a first step, let's take a look at a physical bare-metal rack server. If you ever get to see such a computing device in a data center, it looks like a large pizza box mounted inside a rack that can hold multiple servers stacked on each other. When looking at a specific server, it is a computing system with a particular set of features like CPU power, memory capacity, storage capacity, available interfaces, and so on. It is usually a much more powerful computing device than a home computer like a personal laptop. To run applications on this server, we need an operating system like Linux or Windows; the most popular operating system for servers is Linux. The operating system provides an abstraction layer for applications running inside it to access the different components in that server. Then we can install one or more applications inside that operating system, for example an email server. Now, assuming this email application server uses around 15% of the machine's CPU and 10% of its memory, we are not fully utilizing the server's computing power. We selected the best bare-metal server with impressive capabilities, but it is mostly idle; okay, the email server application running inside the physical server is using a small fraction of the server's capabilities.
We have an underutilized server. One option to consider would be to install another application on that physical server, like adding a file-sharing application on the same physical server. It's doable to run several applications on the same physical server, but less recommended. It is less recommended because those applications are server-side applications. Server-side applications are much more complicated creatures, okay? They are composed of multiple connected components like databases, API gateways, and so on. A server-side application is optimized to work with a specific operating system. Also, a server-side application will require specific computing power and memory capacity to function smoothly. Think about our simple example of an email server-side application and a file-sharing server-side application. Maybe the email server requires running on a Windows operating system, and on the other hand, the file-sharing one requires running on a Linux operating system. In that case, we can't put them on the same operating system. Maybe the email service is much more important than the file-sharing service, and you would like to prioritize who gets more resources. Another issue is related to ongoing maintenance. Assume we want to upgrade the email application to a new version. If the two applications are running on the same server, they will both be down during the upgrade process. In that scenario, the file-sharing service must go down just because we would like to upgrade the email service. If that's the case, we can't run those two server-side applications on the same physical server. We need one physical server for the email service and one physical server for the file-sharing service, and we get back to the same problem: underutilized and inefficient use of computing resources, because those constraints are leading us to use a dedicated bare-metal server for each server-side application.
Imagine that we have three physical servers, each with an individual dedicated purpose: okay, one is an email server, one is the file-sharing server, and the last one is a web server. The percentage on each server represents the actual utilization of that server. For example, the file sharing is at 5%, which is a fraction of the server's full potential. If you think about that, this is a massive waste of IT resources, and the magnitude grows when we have a data center with a large number of physical servers. Okay, so we understand the problem, meaning underutilized servers in a large data center. Let's move to the solution. 16. Virtualization with Virtual Machines: As you may guess, virtualization is the solution to the problem that we just mentioned in the previous lecture. In this lecture, I plan to go a little bit deeper to understand the concept better. Virtualization is a technology that allows us to create multiple virtual resources from a single, or maybe a group of, physical hardware systems. It helps decouple, or break the connection between, the physical hardware and the resources that can use this hardware. As you may guess, we need some new component that will create and manage this virtualization layer. This component is called a hypervisor. A hypervisor is an application running on top of the physical hardware servers, like a special operating system, and can be used to split a server into separate and secure computing environments called virtual machines, okay, in short, VMs. Looking at this simple drawing, we have the physical infrastructure layer below, okay? Imagine a group of ten physical servers. On each physical server, we are running this hypervisor application. Now we have a consolidated hypervisor layer, and all the servers below are just a pool of physical resources that the hypervisor can use. The physical hardware equipped with the hypervisor application is called the host server.
Now we can go to the hypervisor management console and start the magic of virtualization. We can dynamically allocate or deallocate multiple virtual machines of different sizes. This will help us improve the utilization of the physical servers. When we create a new virtual machine, we need to define the virtual machine's profile, or properties, like CPU, memory, storage, networking interfaces, and other resources. Each virtual machine acts as a separate mini server. The hypervisor's job will be to distribute the computing power according to the predefined size of each virtual machine that is running on a particular server, and there can be many virtual machines running on the same physical server. This is done by the hypervisor layer. The virtual machines that are using this host server below are called guest machines; okay, you have hosts and guests. As I explained, each virtual machine can be allocated a different computing profile, and then we can install a dedicated operating system on each virtual machine, which is called a guest operating system. As you see in this drawing, each virtual machine, coupled with its guest operating system, provides an environment that is highly isolated from the rest of the system, from the rest of the virtual machines allocated here. Now we can install a variety of applications on each virtual machine. It can be one single application or multiple applications on the same virtual machine. The applications and data stored in one virtual machine are not visible to other virtual machines, even though they share the same physical machine. Let's look back at our simple example. Now we can create one virtual machine for the email server-side application, with a specific computing profile that is optimized to the requirements of that application, okay, without wasting computing resources, and then install the particular operating system that this email server requires, for example, a Windows operating system.
The next step will be to create another, separate virtual machine for the file-sharing server-side application, with a different computing profile and maybe a different operating system, like Linux, and another virtual machine for the web server, and so on. Another small topic I would like to cover in the context of virtual machines is the virtual machine image. What is the meaning of a virtual machine image? Let's understand the concept. When an IT administrator is allocating a virtual machine for some application, it is just a bare server. You need to install the operating system and then configure the settings in that operating system so it will be optimized for the application you would like to run inside. Then you need to install another layer, which is a list of libraries that supply services to the upper-layer application. And finally, install the actual application with all the required settings. The bottom line is that it is a slow, long process, and it's a manual process with many steps to follow, so it makes sense that some level of automation is needed. There is an option to create automation scripts as part of the cloud environment that will perform those steps automatically, but again, it will be a long, slow process. As a simple analogy, think about the process of installing a Windows operating system on a desktop computer and then several applications inside it. It can easily take a couple of working hours, and be assured that for a server-side application it will be longer and more complicated. The next approach is to create a virtual machine, install the operating system, configure the operating system settings, and install the application and all the required components and settings. Okay, it will be like a template. Then take a snapshot of all this configuration directly from the hard drive of that virtual machine and store it as one big binary file, as a template to create similar virtual machines.
Every time we need to create a new instance of an email server, we just create a virtual machine and load the image inside. That's it: in a couple of minutes, we'll have it up and running. It is also useful when we have a failure in a server: for whatever reason, we can spin up a new virtual machine instance quickly. Another common scenario is when the load on some server is growing and we would like to create a new parallel instance, or node, of the same server. Again, it's all about using virtual machine images as templates. By the way, in a public cloud environment, some third-party software vendors provide their software applications as images, okay, complete images coupled with the operating system, all the required settings, and the full application installation. So all you need to do is select the required image from a catalog list and load it into a new virtual machine. That's it. Now, this option to spin up new virtual machine instances from images helps achieve a critical capability in a cloud environment. This capability is scalability: scaling the capacity of the allocated virtual resources on demand. In the next lecture, I would like to explain the concept of scalability. 17. Vertical and Horizontal Scaling: In the previous lecture, we talked about virtualization technology for creating virtual machines. One of the key capabilities of such technology is the flexibility to change, on demand, the computing power of a specific software component in a bigger application. It is called scaling, and I think it is important to understand the term, at least at a high level. Scaling is the process of managing our cloud resources' capacity to help our application meet a set of performance requirements. When there are not enough available resources to handle demand, the application will be impacted, of course; and when there are too many resources that are not being used, we will waste money on nothing. Okay, it's a careful balance.
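That careful balance is usually automated by an autoscaling rule. Here is a toy sketch of such a rule for horizontal scaling, which picks an instance count that keeps average CPU utilization near a target; the target, limits, and function name are all invented for illustration, not a real cloud API.

```python
import math

def desired_instances(current, cpu_pct, target_pct=60, min_n=1, max_n=10):
    """Pick an instance count so average CPU utilization lands near target_pct.

    current:  how many instances are running now
    cpu_pct:  their current average CPU utilization, as a percentage
    """
    # Total load spread over enough instances to hit the target utilization,
    # clamped to the allowed range.
    desired = math.ceil(current * cpu_pct / target_pct)
    return max(min_n, min(max_n, desired))
```

For example, 4 instances running at 90% CPU would scale out to 6, while 4 instances at 15% would scale in to 1, always staying between the configured minimum and maximum.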
The goal is to meet our defined performance requirements while optimizing the utilization of the cloud resources, and in most cases this is an ongoing process. Assuming our application's demand is flat and relatively constant, it would be easy to calculate the right amount of needed resources, meaning select the best size of virtual machine for each component in our application. The traditional IT strategy for designing a system and choosing the size of servers is to use the maximum peak load. Meaning, let's design an IT system that can handle the peak load of that application. As you may guess, it's not a cost-effective strategy, especially in a cloud environment, and this is the sweet spot of using the scaling options of a cloud environment. We can configure the right amount of resources to meet the current demand and then automatically adjust the IT environment as demand changes. Okay, so scaling resources is essential, that's clear, but what kind of options do we have? Well, there are two main options when performing scaling: vertical scaling and horizontal scaling. Let's start with vertical scaling. Vertical scaling is also called scaling up or down. Scaling up is the act of adding more resources to a single instance; okay, the most relevant resource type that we are talking about is the virtual machine. When allocating a virtual machine, we specifically select a computing profile, meaning the number of CPUs and the memory capacity. Scaling up is all about adding more resources, such as more CPU or memory capacity, to a single instance; and the other way around, scaling down is reducing the amount of CPU and memory. In many cases such vertical scaling is a useful option to solve all kinds of performance issues: okay, just increase the memory capacity or add more CPU power in a specific virtual machine, and you can quickly solve some performance bottlenecks.
On the other hand, vertical scaling has some disadvantages. We can't always increase the resources of a virtual machine; we will encounter some limitation of the underlying hosting environment. The cost is not linear when we switch to a bigger virtual machine type, okay? Allocating bigger virtual machines will cost much more money; as I said, it's not a linear cost. Another disadvantage is that when we choose to scale a virtual machine up or down by selecting a different instance size, we need to reboot that virtual machine so the operating system running inside will be updated with the new virtual machine size. This is not always an acceptable situation, especially if we need to change capacity at high-frequency intervals: okay, it will take several minutes until the new virtual machine comes up, and the same the other way around, which is not an acceptable approach for some applications. The next scaling option is horizontal scaling, which is also called scaling out or scaling in. Scaling out is about adding additional instances to support the load on our solution. Instead of adding capacity by making a specific instance more powerful, we add capacity by increasing the overall number of instances. Instead of allocating a much more powerful virtual machine, as we do in vertical scaling, here we add more smaller virtual machines running in parallel, like a cluster. The application can better distribute the traffic load over multiple instances that perform the same function in that cluster. The cost can be more linear, which translates into a more cost-effective solution. In today's modern cloud-based applications, horizontal scaling is becoming a much more popular and cost-effective option compared to vertical scaling. Still, both options are useful and used. Now, there is another type of virtualization technology, a more recent evolution in cloud computing, and it is gaining tremendous market momentum.
It is called containers. But before we can talk about containers, we need to cover another essential topic called microservices. 18. Microservices and Cloud-native Apps: Microservices. If you're a software developer or an IT expert, then you've probably heard about them. As always, I would like to start with the problem before running to the technical details of the solution. We saw that virtual machines are used to divide a physical server into smaller computing units called virtual machines. Each virtual machine has its own operating system, and then we can decide whether we would like to run one or more applications in a specific virtual machine. Many legacy applications that used to run on physical servers were easily migrated to run in virtual machines, whether in a private cloud or a public cloud. The adoption of virtualization technology by the market was quite fast because, in most cases, you don't need to change the software. That approach was great and useful for utilizing hardware resources much better, as we discussed in the previous lecture. However, the software industry has changed dramatically in the last couple of years. There is a strong demand to redesign applications to be more cloud native; okay, it's another buzzword, cloud-native applications. Cloud-native applications are applications that work more smoothly in a cloud environment. For those applications to better harness the power of the cloud, they should scale up and out very easily and very fast. From a developer perspective, such cloud-native applications should be updated more frequently. As you may guess, a new software architecture is required to create cloud-native applications. In the software industry, it is called microservices. Instead of building a big and complex application as a single building block, in something that is called a monolithic application, let's break that application into little pieces called microservices.
Each microservice will be like a mini application: a small unit of code handling a specific, narrow task, a single function. And the mindset will be to separate those microservices as much as possible so they can be developed and deployed independently. Okay, it's a keyword, so remember: independently. It's a powerful approach, because each microservice can be developed by a dedicated team and then scaled independently in a cloud environment. This kind of software architecture was born at the hyperscale cloud-based software players like Amazon, Netflix, Facebook, Google, and others; they were facing huge challenges while keeping their infrastructure optimized to their software requirements. I strongly suggest you review my dedicated course about microservices to get a deeper understanding of this important topic; it is called A Beginner's Guide to Microservice Architecture. It's a mind shift in the software industry, and it has a dramatic impact on the IT industry. Anyway, back to our subject, which is virtualization and containers. When we have an application that is divided into many small microservices, we have a huge overhead problem while using virtual machines. Virtual machines are not an optimized virtualization layer for microservices, and I think it would be better to understand the problem with a simple example. Let's say we have 50 microservices that are the building blocks of one application. Okay, should we allocate one virtual machine per microservice, which means we need 50 virtual machines just for one application? We need to take into account that each virtual machine requires a dedicated operating system, which by itself consumes resources. Each virtual machine includes a full copy of the operating system, coupled with different modules called libraries, to run some application, taking up tens of gigabytes, okay, of storage.
Just check out how much disk space your Windows, macOS, or Linux operating system is using; it's a couple of gigabytes at least. We would have 50 virtual machines with 50 operating systems. Those 50 operating systems require substantial computing power, memory capacity, and disk space just to run the processes related to the operating system itself. It's a massive waste of computing resources. When the application is a mini application, like a microservice, it's overkill. So we can think about a quick solution, or workaround: we can put several microservices in the same virtual machine and reduce the number of virtual machines, reducing the number of guest operating systems. It's a simple IT solution, but not really aligned with the microservices approach. In that case, the microservices located in the same virtual machine share the same resources, which contradicts the basic requirement in microservices design to separate those microservices as much as possible. Something is not working here while using virtual machines. We need a different virtualization approach, one more optimized for the microservices architecture being used in cloud-native applications. And as you may guess, it's about containers; let's talk about them in the next lecture. 19. Virtualization with Containers: In the previous lecture, we talked about the new development concept in the software industry called microservices, used to create more cloud-native applications. A single microservice is like a building block of an end-to-end application. When using the traditional virtualization options, if we put a microservice in a virtual machine, it will be a waste of resources because of the operating system overhead per virtual machine. Each virtual machine runs a dedicated virtual operating system called a guest operating system.
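That per-VM guest operating system is where the overhead argument becomes simple arithmetic. A back-of-envelope sketch, where the 10 GB figure is an invented round number for a guest OS footprint, not a measured value:

```python
# Back-of-envelope guest-OS overhead for the one-VM-per-microservice
# design. The 10 GB default is an illustrative round number.

def guest_os_overhead_gb(microservices, os_gb=10):
    """Disk consumed by guest operating systems alone, one VM per microservice."""
    return microservices * os_gb
```

With the 50-microservice application from above, `guest_os_overhead_gb(50)` gives 500 GB of disk spent on operating system copies before a single line of application code runs, and the same multiplication applies to the CPU and memory each guest OS burns.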
Other issues are related to deployment time: the process of creating a new virtual machine and waiting until it is up and running can take up to several minutes, which is fast, but in some cases not fast enough. The virtual machine entity is too heavy a vehicle for small mini applications like microservices; we need a more lightweight vehicle. In that perspective, think of the shipping industry, where physical containers are used to isolate different cargo and protect it during the long sea journey. The cargo payload is divided into standardized containers that can be moved and loaded onto ships like building blocks. Each cargo is located inside a specific container. So we can create a container with specifications for moving, for example, coffee beans while keeping the right temperature inside the container, and another container can be used for moving a medicine payload, and so on. Each container isolates the cargo inside it, even though all the containers are located on the same large cargo ship. Let's go back to our story about running microservices in a virtual environment. What if we could virtualize the underlying operating system and run virtual entities directly at the operating system level, still keeping them as separated environments, but without creating virtual machines? As a first step, let's remove the guest operating system of each virtual machine, and then remove the hypervisor layer, because we are not going to create virtual machines. Then place a single operating system per physical server, as we had done before, before the age of virtualization. Now, for dividing the host operating system into virtual environments, we need a new layer, which is called a container engine. This engine will help us create a new type of virtualization entity called a container.
A container is an isolated virtual environment, but without the dedicated guest operating system that we used to have with virtual machines. This environment is used to run a small unit of code, a single microservice. Now, when a group of microservices related to some application runs in containers, this application is called a containerized application. A containerized application perceives each container as a separate computing unit with specific properties like CPU power, memory size, file storage, network interfaces, and so on. Instead of providing virtual hardware to a virtual machine, we offer a virtual operating system to our application. This is the concept of containers. Now we are looking at the two computing stacks. On the left side, we have the physical infrastructure layer. The next layer is the hypervisor, which provides the virtualization capabilities to create multiple virtual machines. Each virtual machine has its guest operating system and one or more applications running inside that specific virtual machine. On the right side, which is related to containers, we have the physical infrastructure. The next layer is the host operating system, and then something that is called a container runtime engine. The container runtime engine provides operating-system-level virtualization to the containers in the upper layer. This engine is used to run containerized applications, okay, like we see here: microservices 1, 2, and 3 that are related to the same application. As you can see, there is no dedicated guest operating system per container, like we saw with virtual machines, which means the image size of a container will be smaller compared to a virtual machine image. Just as a quick reminder, you remember that we talked about the virtual machine image, which is like a template to quickly create new, identical virtual machines. The same goes for containers.
It is possible to create a container image that is used to create new containers of the same type very quickly. A container image is just a file encapsulating some unit of software as a package. Okay, it's important to understand that a container image becomes a container at runtime: when we take the container image file as a template, allocate resources, and then run it, the result is called a container. Now let's review the main benefits of using containers in the next lecture. 20. The Benefits of Containers: Think about a microservice that is handling the function of a shopping cart on a website. Okay, you are adding all kinds of products on a website and the shopping cart is updated. Suddenly the number of users increases to a new spike because of some marketing promotion activity related to that website. Assuming this microservice that is handling the shopping cart was packaged as a container image, then the cloud system can automatically spin up new container instances of the same microservice for handling the traffic load. Okay, it is called horizontal scaling, or scaling out. Now, because spinning up new containers does not require loading an operating system like in a virtual machine, the deployment time of applications running in containers can be down to seconds. Okay, you are creating a new container and it can be up and running in seconds. This lightweight nature of containers means they can be started and stopped very quickly, which is a perfect match for rapid horizontal scaling in a cloud environment. Now, when a developer packages an application inside a container image, and of course I'm talking about some microservice, something very useful and very powerful is also achieved. The container image is designed in a way that makes it transparent to the underlying operating system and infrastructure. Therefore, a container can be deployed and run in multiple environments, different environments.
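The scale-out behavior described above can be sketched in a few lines of Python. This is only an illustration of the idea; the per-container capacity and the traffic numbers are invented for the example, not values from any real cloud platform.

```python
# Illustrative sketch of horizontal scaling ("scaling out") for a
# containerized microservice. All numbers are hypothetical.

def containers_needed(requests_per_second, capacity_per_container=100):
    """Return how many container instances are needed to serve the load,
    assuming each container handles a fixed number of requests per second."""
    if requests_per_second <= 0:
        return 1  # always keep at least one instance running
    # Ceiling division: 250 req/s with capacity 100 needs 3 containers.
    return -(-requests_per_second // capacity_per_container)

# Normal traffic: a single container is enough.
print(containers_needed(80))    # -> 1

# A marketing promotion triggers a spike; the platform scales out.
print(containers_needed(950))   # -> 10

# Traffic drops back; extra instances are removed ("scaling in").
print(containers_needed(120))   # -> 2
```

Because containers start in seconds, a cloud platform can follow this kind of calculation almost in real time, adding and removing instances as the spike comes and goes.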
It is like loading a container from one ship to another ship. Okay, each ship represents a different environment, a different operating system. Just think about the typical life cycle of developing applications. Developers are developing new features in some application using their development environments. Then they can package the updated code as container images and ship them over to the next environment without worrying about whether the containers will be compatible with it. Okay, meaning going into production, the containers are a standardized way to run an application in multiple environments. The operations group can take those container images created by the development team and deploy them in the production environment. It dramatically streamlines the pipeline of releasing new software updates from development to production, shipping new features in applications much faster. As a quick summary about containers: this is yet another virtualization approach. It is a great fit for the cloud-native applications that are based on a microservice architecture. The low overhead of containers reduces the footprint of a specific microservice in the IT environment. It provides a more granular and faster scaling option in a cloud environment. The standardized packaging of software inside containers provides an easy way to move containers between different environments. The downside of containers is that they are much more complex to manage compared to virtual machines. There are all kinds of management systems just for creating and deploying containers. Maybe you heard about Kubernetes, which is an open source software solution to automate scaling, managing, updating, and removing containers. It's a popular container orchestration platform. In that context, when talking about cloud computing, it comes down to the level of responsibility and control.
The infrastructure to create, manage, and run containers in a public cloud environment can be based on Infrastructure as a Service, which means most of the hard work is done by us, the customers. Another option would be to use some Platform as a Service options for running containers, which reduces the overhead and complexity of using containers. It's up to us to decide which option is more relevant. Another thing to keep in mind is that it's going to be a mix. Some parts of the application will be based on virtual machines, and some parts will be based on containers. In other cases, we'll even see containers running on virtual machines. In such a hybrid approach, those technologies are complementing each other rather than competing with each other. Okay, we finalized our cloud computing introduction with the key themes to understand. I highly recommend you check your understanding using the upcoming quiz. See you again in the next lecture, where we're going to zoom in on the Infrastructure as a Service model. 21. Introduction to Infrastructure as a Service (IaaS): Hi and welcome back. Thanks for watching so far; that was a comprehensive refresh and update on some of the key terms used in cloud computing. In case something is not clear, you are more than welcome to reach out and ask questions. I will do my best to help you and guide you. From this point moving forward, I would like to zoom in on one service model called Infrastructure as a Service. I will also have dedicated training courses on Platform as a Service as well as on Function as a Service. It is essential to know that I am not going to train you on creating and deploying resources using the Infrastructure as a Service model, because that is a different learning path that is more relevant for cloud IT administrators. If this is what you are looking for, then consider taking my Becoming a Cloud Expert training program. It's a deep-dive, very technical training on how to configure virtual resources.
In this training, we'll focus on the overall picture. We'll talk about the basic definition of Infrastructure as a Service and how it is helping to transform IT into a utility service; the advantages and disadvantages, as we know that nothing is perfect; the pricing model of such a service, meaning how we pay for using Infrastructure as a Service; typical use cases in the business world, so we can understand where this service model is being used; and the building blocks of such a service in a cloud environment. The last part will be a review of the main market options when looking at the leading cloud service providers. That is our high-level plan. I hope it will be interesting and useful. Let's start. 22. IaaS - Transform IT to Utility: Let me ask you a question. Why do you think a company that can own and operate a private data center would consider using a public cloud, and specifically the Infrastructure as a Service model? If they can use their private data center without spending money on renting IT resources, then what's the business case, and why even consider this option? Well, the answer is related to the challenges of operating private data centers. The first challenge is about the costs associated with building and owning data centers. Data centers are very expensive. Most of the IT spending inside companies is related to data centers. It starts from the cost of the real estate occupied by the servers sitting there, okay, like a room dedicated to the data center; the 24-hour, seven-days-a-week electricity consumption for running the servers, the network devices, cooling systems, and security devices; and the cost related to the actual hardware and software technologies, like buying powerful high-end servers, routers, switches, storage devices, a hypervisor solution for virtualization, firewalls, monitoring tools, and licenses for operating systems, databases, and more. Okay, you can imagine that the costs are building up.
And what about the people, the team needed to maintain that complex infrastructure, configure new instances, order new hardware as needed, and troubleshoot issues? The next challenge with private data centers is about flexibility. In a private data center, you are limited by the free capacity in your servers. If the capacity is not enough, you need to follow the complex process of buying new hardware. Secondly, we all know that technologies are getting old very fast. In that perspective, it is fair to assume that our data center will not offer the latest, most updated technologies unless we keep updating the technologies in the data center all the time. All those challenges prevent companies from fully utilizing IT technologies, and this is the unique selling proposition of cloud-based Infrastructure as a Service solutions. Cloud providers are helping to transform IT into a utility service. It's a mind shift in the IT industry. They're saying: forget about running a private data center with all the associated costs, the massive capital expenses and the ongoing monthly operational costs. You can rent computing resources on demand as a service and start to migrate your applications to the cloud. Pay for the resources you use, and when you no longer need those resources, you are not going to pay for them. You can also keep the private cloud and use the features and capabilities in a public cloud as a way to complement and extend your on-premises applications, and replace some of them on a case-by-case basis. For example, if your private data center's computing capacity is running out for a specific application, you can use the public cloud as an extension to the private cloud, transforming state-of-the-art IT technologies into a utility service.
Now, small to medium-sized companies can use public clouds to leverage the latest technologies without investing in buying expensive hardware, taking advantage of the public cloud's massive global deployment of servers worldwide. Just think about some startup that would struggle to invest in building a data center. For them, it is almost a perfect solution to use the technologies and the capacity in a public cloud and immediately start using them. 23. Compute, Storage and Networking: One thing that is important to understand when using the Infrastructure as a Service model is that we build our cloud-based infrastructure by renting a variety of cloud resources. It is like a Lego game: the cloud provider is providing the Lego building blocks. I assume you played with Lego in your childhood; I'm still playing with Lego with my little kids. Every year they ask for more complex sets with hundreds of small pieces that are connected to each other one by one to form a bigger and more complex structure. Creating a virtual cloud solution using Infrastructure as a Service is similar in many ways to a Lego game. A cloud environment provides many types of virtual resources, such as virtual machines, containers, network interfaces, IP addresses, virtual networks, storage capacity, and much more. Those different types of pieces can be created dynamically and then connected to create the end-to-end IT environment that is required for a specific application. In Infrastructure as a Service, those types of virtual resources can be divided into the following main groups: compute, storage, networking and security, and also management. Let's review each one of them. Compute. The first, very straightforward category is called compute. For running a server-side application, we'll need a couple of virtual machines. Each virtual machine will be used to run a different module or layer.
In our end-to-end application, each module running in a specific virtual machine will have different computing requirements. It is usually related to two main parameters: the amount of virtual CPU and the amount of memory. For example, a relational database module will need a more memory-optimized virtual machine, meaning it has a high memory-to-CPU ratio, more memory relative to the amount of CPUs. On the other end, a web server will require a more compute-optimized virtual machine, meaning it has a high CPU-to-memory ratio. Therefore, a public cloud provider will have multiple types of virtual machines that are divided into families or categories, and inside each family you will have a more granular selection. We are moving to the next category of virtual resources, which is about storage. Under the compute category, we talked about allocating virtual machines. Each virtual machine must use one or more virtual disks, used to store the operating system files, the application runtime, and data. We can dynamically create virtual disks and then attach them to the virtual machines that we created. A virtual disk has the following main parameters: of course, the disk size in gigabytes; IOPS, meaning input and output operations per second, which is a performance measurement, a benchmark used to identify how good the storage device is; and throughput per disk in megabytes per second. Those virtual disks are divided into two main technology categories. The first one, which is becoming very popular, is the solid-state drive (SSD), designed to support intensive input and output workloads. The second one, which is a little bit more legacy, is the hard disk drive (HDD), which is the older technology with lower input and output capabilities. But again, the price will reflect the technology that you are going to use.
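To make the disk parameters above a bit more concrete, here is a small Python sketch of the standard back-of-the-envelope relation between IOPS and throughput (throughput = IOPS x I/O operation size). The IOPS figures and I/O sizes below are invented round numbers for illustration, not the specs of any real disk tier.

```python
# Back-of-the-envelope relation between IOPS and throughput:
#   throughput (MB/s) = IOPS * I/O size (MB)
# All numbers below are hypothetical examples, not real disk specs.

def throughput_mb_s(iops, io_size_kb):
    """Estimate throughput in MB/s for a given IOPS rating and
    I/O operation size in kilobytes."""
    return iops * io_size_kb / 1024  # 1024 KB per MB

# A hypothetical SSD-class disk: 5000 IOPS at 64 KB per operation.
print(throughput_mb_s(5000, 64))   # -> 312.5

# A hypothetical HDD-class disk: 500 IOPS at 64 KB per operation.
print(throughput_mb_s(500, 64))    # -> 31.25
```

The sketch shows why a database doing many small random reads cares about the IOPS number, while a backup job streaming large sequential blocks cares mostly about the throughput number.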
Now, if we open one of the public cloud providers' portals, then under the storage category we will find a long list of storage services and virtual disks. However, most of them are Platform as a Service rather than Infrastructure as a Service, and we'll talk about them in the training dedicated to Platform as a Service. But when we're talking about allocating virtual disks, that is related to Infrastructure as a Service. For example, a cloud storage service that enables us to upload and download files is Platform as a Service, because we don't handle any storage aspect; we don't allocate virtual disks or anything like that. The cloud provider automatically manages it, and we just get an API to upload data. So that is related to Platform as a Service. The next core building block in our Infrastructure as a Service Lego game is about networking and security. A complex server-side application is commonly divided into many modules or layers. Each module can be located in a dedicated virtual machine. In most cases, those modules must communicate with each other in order to form an end-to-end application. For example, a web server sitting in one virtual machine would like to query a database server running in a different virtual machine. This kind of communication between virtual machines is achieved by using a variety of virtual network resources. For example, I can create my virtual network with my private IP addresses, create a virtual interface and attach it to a virtual machine, then create another virtual interface and attach it to a second virtual machine. From this point, those virtual machines can communicate with each other using this virtual networking infrastructure. Assuming I would like to connect my virtual network to the outside public Internet, I can allocate a public IP address and attach it to the relevant virtual machine. Okay, this is like an additional resource that I can use.
From this point, anyone with an internet connection will be able to access this virtual machine from outside. Okay, let's call this virtual machine a gateway virtual machine. In most cases, I will limit the type of services that can be accessed from the external Internet through that gateway, okay? It is part of the security configuration that I need to apply. For example, I can configure a variety of firewall rules to pass or block a specific traffic type. It can be traffic from outside, it can be traffic between virtual machines in my private network, and so on. Another very famous network component is the load balancer. In most cases, we will have a cluster of virtual machines performing the same function, okay, like a web server function with many virtual machine instances working together. This is related to horizontal scaling, and we would like to distribute the incoming traffic between the virtual machines in that cluster. This is the job of a load balancer. All those options are available as part of the Infrastructure as a Service model for creating an optimized networking layer for our environment. After creating the required cloud solution using the Infrastructure as a Service building blocks, we'll need to manage and monitor those virtual resources. Let's start with the first task, which is monitoring. Each virtual resource, like a virtual machine or a virtual interface, is generating metrics and logs, okay? For example, it can be a performance metric like CPU utilization or memory utilization, recorded every five seconds. This information will be used to understand if the virtual machine has some capacity limitation creating bottlenecks in our application. Okay, so this is the kind of thing that we need to monitor. Another type of telemetry data is logs.
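The load balancer idea described above can be sketched as a minimal round-robin distributor in Python. Real cloud load balancers also add health checks, session affinity, and other distribution policies; the backend VM names here are invented for the example.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Minimal round-robin load balancer sketch. Backend names are
    hypothetical; this only illustrates the traffic-spreading idea."""

    def __init__(self, backends):
        self._backends = cycle(backends)  # endless rotation over the cluster

    def pick(self):
        """Return the next virtual machine to receive a request."""
        return next(self._backends)

lb = RoundRobinBalancer(["web-vm-1", "web-vm-2", "web-vm-3"])

# Six incoming requests are spread evenly across the three web VMs.
print([lb.pick() for _ in range(6)])
# -> ['web-vm-1', 'web-vm-2', 'web-vm-3', 'web-vm-1', 'web-vm-2', 'web-vm-3']
```

When the cluster scales out, new instances simply join the rotation, which is why load balancers and horizontal scaling go hand in hand.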
For example, if for some reason the virtual machine performed a system reboot, it will be recorded as an event. We will be able to open that event to see why this virtual machine performed a system reboot. Every user action is recorded in a log, so we can analyze it and see that maybe someone manually rebooted that virtual machine. So this is the kind of useful information that the system is gathering, and it is used for monitoring what's going on. When using Infrastructure as a Service, all the IT maintenance tasks are our responsibility. For example, we'll need to update the operating system with ongoing security updates, upgrade the middleware software components that we are using, like database software, and much more. The good news is that public cloud providers have a variety of management and monitoring tools optimized for cloud resources and cloud services. 24. Demo - IaaS Solution with Microsoft Azure: In the previous lectures, we talked about the main building blocks of an Infrastructure as a Service solution, meaning compute, storage, networking, and management. It's almost like a Lego game. We can create a virtual network as a communication layer for our cloud environment, and then create multiple virtual machines inside that virtual network. Each virtual machine can have one or more virtual interfaces and virtual disks for storage. I would like to show you a simple demonstration of the process I just described. It's not a low-level practical training session on how to do it step by step; that's not my objective. It's more of a quick, high-level demonstration to make it more tangible and see how the things come together in a public cloud environment. I will use the Microsoft Azure public cloud for the demo, but it is important to understand that this process is almost identical with the other public cloud providers. We are looking right now at the Microsoft Azure homepage. Using my account, I can open the navigation pane by clicking here on the left side.
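The metric-based monitoring just described boils down to threshold checks over collected samples. Here is a small Python sketch of that idea; the sample values and the 90% alert threshold are invented for illustration, not defaults of any real monitoring tool.

```python
# Sketch of a metric-based monitoring check: flag a VM as a possible
# bottleneck when its average CPU utilization crosses a threshold.
# Sample values and the 90% threshold are hypothetical.

def is_bottleneck(cpu_samples, threshold=90.0):
    """Return True when the average CPU utilization (in percent) of the
    collected samples exceeds the alert threshold."""
    return sum(cpu_samples) / len(cpu_samples) > threshold

# CPU utilization recorded every 5 seconds for a healthy VM.
print(is_bottleneck([35.0, 42.5, 38.0, 40.0]))    # -> False

# A VM pinned near 100% CPU would trigger an alert.
print(is_bottleneck([95.0, 98.5, 97.0, 99.0]))    # -> True
```

A real cloud monitoring service runs checks like this continuously and can wire the alert to notifications or even to an automatic scale-out action.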
I will get more options. As a first step, I would like to create a virtual network, so I will click here on virtual networks. The list is empty right now; I select Add, and I will get the creation form. First of all, I need to select how I'm paying for the resource under the subscription options. I have only one option in my account, called pay-as-you-go. Okay, this is the pricing model I'm using. Let's call that network myVN, and then select the virtual network's region location, like East US. I can play around with more settings, but let's keep it simple and create this resource. It will perform some validation, and then I can click Create. You can see the message from the public cloud saying that this request is under deployment, and after a while I will get a message that my deployment is completed. Click on Home and then Virtual networks. Here we have our first virtual network resource, called myVN, ready to be used. Next, let's create a virtual machine. This process is a little bit more complex. I click Home and then select Virtual machines. Again, the list is empty right now. When clicking on Add, I'm getting two options: create a virtual machine, or start with a preset configuration. Let's click on this second option. It is like a wizard trying to help me select the right virtual machine type, optimized for the application I'm going to install inside that VM. For example, the workload environment type: is it going to be a development and testing environment, or is it going to be a production environment? I will select dev/test. Next, below the workload environment, is the workload type. Okay, three main options: general purpose, memory optimized, or compute optimized. Each one is a complete family of virtual machine sizes. I will select general purpose. Now I'm getting a wizard to create a virtual machine. Again, the subscription in my account is pay-as-you-go. Let's call my virtual machine mySQLVM, as I would like to install a SQL database inside. Then change the region to East US.
Okay? As you remember, in a public cloud environment, we have many options. Okay, now let's talk about images. A virtual machine image is like a template to create a virtual machine instance. This image can be just the operating system, like, as you can see, Ubuntu Server, Red Hat, CentOS, Windows Server, and more. I can select the relevant operating system I would like to use and move forward with the settings. When the virtual machine is up and running, I can manually install the application inside. Another option would be to select an image that is a package of the operating system and also some specific application. I will click on browse all public and private images. If I created my personal images, then I can select them under my images. Another option is to select an image from the Azure Marketplace. Okay, just search for the type of image you are looking for, for your virtual machine. I will search for MySQL; it's a type of SQL database. And we get many options, okay, many options coming from the marketplace. Each of them is an image created by a third-party company that will charge me based on consumption, meaning the time I'm using this image in a specific virtual machine. In this demo, I would like to use a free image, so I will go to the pricing option and filter out items that cost money. Okay, just keep the free options and then select this one: MySQL certified by Bitnami. This is my selected image, and let's move forward. Now I need to select the virtual machine size. Click on select size. As you can see, we are getting a table with all the virtual machine families on the left side, like the DC series, the B series, and more. I will select the cheapest option under the B series. You can see it is organized into columns: the number of virtual CPUs and RAM memory per each line, the maximum number of virtual disks that can be attached to a particular virtual machine, and more.
The last column will be the cost for a full month of usage of that particular virtual machine. I will select the B1s type, with one virtual CPU and one gigabyte of memory. I will also set up a username and password for accessing the virtual machine. Okay, the username will be myuser, and I will also type some password and retype it. Okay, let's move forward to the disks. I can select the type of virtual disk for the operating system. Let's select a standard SSD and then create one additional data disk. Okay, we have one disk for the operating system and one or more data disks for the application and the actual data of the application. The disk name will be generated automatically. I will change the disk size and type: on the upper menu, I will select again a standard SSD, and then the first option with a 32 GB size. Okay, this is just for the purpose of the demo. Next, I will select networking. The system automatically selects the myVN resource, the virtual network, for this virtual machine. There are more settings of course, but for our demo it's more than enough. Let's click on review and create, and it will validate all the settings for a few seconds before we can click on create. As you can see, we are getting a useful cost summary saying that this MySQL image I selected will not cost anything, it's free, and the virtual machine itself will cost this amount per hour. I will click Create and then wait a few minutes until the process finishes. When the system is creating a virtual machine, several additional virtual resources are created automatically, like a virtual interface, firewall settings, and a public IP address. Okay, we got the confirmation message, and now we can go back to the homepage, click on Virtual machines, and we'll see a new line in that list. It is our new virtual machine, up and running, as you can see here. I can click on it and get much more information and setting options.
For example, I can restart, stop, or delete the virtual machine. I can see the list of actions performed on that virtual machine under the activity log. I can limit and control the access to that virtual machine under access control, or see the list of virtual disks attached to this virtual machine. We have two disks: one is the operating system disk, and one is the data disk. I will select networking to present the default security firewall settings. Everything can be changed and adjusted; every line here represents a specific allowed or blocked traffic flow. Let's go back to the homepage and click on resources. This is a useful summary of all the virtual resources that I created in this demo. I will organize it based on type. Now we have here the virtual machine that we created, the networking interface, the two virtual disks, and the virtual network. Okay, in a real deployment, I would of course create multiple virtual machines sitting in the same virtual network. Each virtual machine would hold a specific module in an end-to-end application, like a MySQL server in one virtual machine and another virtual machine for a web server, and much more. 25. Pricing Models: In this lecture, I would like to quickly review, at a high level, the pricing models used by the big players to charge for cloud resources related to Infrastructure as a Service. We have several options. The first one is called pay-as-you-go. It is the most popular pricing option for cloud services. Every allocated cloud resource is measured in seconds or hours, depending on the cloud resource type, and at the end of the month we will pay for the actual consumption. We will not have any long-term commitment or any upfront payments, and we can increase or decrease capacity on demand, which allows us to adapt to changing business needs without over-committing budgets. Pay-as-you-go is similar to how we pay for utilities, like electricity, water, and other types of utility services.
This model is useful when we can't predict how much capacity is needed and for how long. Prepaid capacity reservation packages are a more advanced option for large organizations, enabling them to buy a package for one or several years with an upfront payment. In that case, they can negotiate a deal with a nice discount: the larger the upfront payment, the greater the discount they can get. It is useful when we can predict how much capacity will be needed and therefore make a long-term commitment that will enable us to reduce costs in the long run. And the last one is spare capacity, a new, innovative pricing model that allows us to request spare computing capacity with a large, deep discount. In case the spare computing capacity is available, we will get it for a while, and in case the cloud provider must use it, then it will be released back to the cloud. So there is no commitment that you will get it all the time. Think about the situation where a company would like to use high-performance computing in a public cloud to run some heavy-duty simulation task. It's not a mission-critical application. When using spare capacity, computing resources will be allocated and deallocated dynamically to the application based on availability, which is not a problem because it's not a mission-critical application. The price for that spare capacity will be much more attractive compared to any other pricing plan that we just talked about. Again, this approach can be useful in specific cases. 26. Main Advantages: In this lecture and the next one, I would like to talk about the advantages and disadvantages of using the Infrastructure as a Service model in a public cloud environment. Infrastructure as a Service in a public cloud can be compared, first of all, to a private cloud, or it can be compared to other service models in a public cloud environment, like Platform as a Service and Function as a Service.
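A quick Python sketch can show how the three pricing models compare on cost for the same workload. All rates and discount percentages below are invented round numbers for illustration only; real prices vary per provider, region, and VM type.

```python
# Hypothetical cost comparison of the three pricing models for one VM
# running 24/7 for a month (about 730 hours). All rates are made up.

HOURS_PER_MONTH = 730
PAYG_RATE = 0.10          # pay-as-you-go: $0.10 per hour (hypothetical)
RESERVED_DISCOUNT = 0.40  # e.g. 40% off for a 1-year upfront commitment
SPOT_DISCOUNT = 0.70      # e.g. 70% off, but capacity can be reclaimed

payg = HOURS_PER_MONTH * PAYG_RATE
reserved = payg * (1 - RESERVED_DISCOUNT)
spot = payg * (1 - SPOT_DISCOUNT)

print(f"pay-as-you-go: ${payg:.2f}/month")      # -> $73.00/month
print(f"reserved:      ${reserved:.2f}/month")  # -> $43.80/month
print(f"spot/spare:    ${spot:.2f}/month")      # -> $21.90/month
```

The numbers make the trade-off visible: the deeper the commitment (or the weaker the availability guarantee), the lower the monthly bill.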
Let's start from the positive side of the equation, meaning the advantages. What are the main advantages when using Infrastructure as a Service? Rent instead of buy: the first advantage is the flexibility to rent virtual IT resources on demand instead of buying quite expensive IT infrastructure. Think about the use case of a medical company that would like to run a complex simulation of a new medicine. The simulation requires a substantial amount of computing resources, but for a limited time, like one week. For them, it makes sense to rent those resources in the public cloud for a specific time interval and just pay for the renting period. It is also part of the mindset to replace capital investments with operational investments in a dynamic business landscape. In the business world today, it is challenging to forecast the IT requirements of a new company in such a dynamic landscape, or of a new business line inside an existing company, or maybe new products. Okay, it's easier and faster to rent those resources as needed from a public cloud provider. The next one is easy migration. Suppose we check the average IT profile of some typical companies. In that case, we will see that most of the server-side applications running in their private data centers are using virtual machines over some private virtualization layer. In that case, from a technical perspective, it's going to be quite a straightforward process to migrate those existing applications into a public cloud while using Infrastructure as a Service. No application code modification is needed. There are, of course, some challenges related to data migration and interfaces with other systems, but rest assured that there are many tools provided by the public cloud providers to quickly overcome those challenges related to migration. Another challenge is related to the needed skill set for migrating applications to the cloud.
But it is not a big deal. The IT department should select the most relevant public cloud provider, get some training on how to allocate resources in a public cloud environment, and that's it. In a couple of days, they can ramp up their cloud skills and start to create cloud environments. By the way, it's very similar to how those IT teams are creating virtual resources in their private data center. The next one is related to control. While looking at the spectrum of control when using cloud services, the Infrastructure as a Service model is the first in that line. When we use the Infrastructure as a Service model, we are fully responsible for building and maintaining our virtual infrastructure. It means that we are still keeping to ourselves the majority of control over our infrastructure. We decide which virtual machines or containers are needed, the specific operating system per virtual machine that will be used, where those virtual machines will be deployed, how much redundancy we would like to use, and how the system will scale up and down, out and in. We are also responsible for monitoring what's going on with our virtual IT infrastructure in real time. Another essential advantage of using the Infrastructure as a Service model in a public cloud environment, compared to a private cloud, is the concept of global scale. A public cloud provider has multiple data centers that are located in many locations worldwide. Think about the situation of disaster recovery. If you store all your data and applications in a single private cloud, then there is a high risk of data loss and extended service downtime in case of a major disaster that impacts the whole data center. On the other hand, when using multiple data centers provided by a public cloud, you can reduce the chance of service downtime or data loss. Another issue is about proximity to the end users.
Many applications today are connected to the global Internet, with end users located in different countries and different locations. If we force all those end users to connect to one private data center, it can compromise their experience due to performance issues. Think about the last time you opened a website: if it took more than four or five seconds to load the page, it was not a good user experience. We are becoming more and more sensitive to performance issues. When using a public cloud, you can distribute the system resources across multiple places and try to be as close as possible to the end users.

27. And also Disadvantages: Let's talk also about the disadvantages; for every positive side, there is also some negative side. The first issue that comes to mind is security. Security is a huge challenge for all organizations worldwide, especially when some applications are required to be connected to the outside Internet. In a private data center, you have full control of when and how sensitive data is transmitted and stored, and specifically, who can access the physical servers sitting in the data center. On the other hand, when using public cloud computing, you need to let go of some security responsibilities and trust the public cloud provider. Based on my experience, those public cloud providers take the challenges associated with security very seriously. They provide many useful out-of-the-box features and capabilities to secure data and applications inside the cloud. As a simple example, any data stored in a cloud environment is automatically encrypted by default, so even if someone manages to access the data on some server, it will be useless without the keys to open the encryption. As part of the advantages, I mentioned that Infrastructure as a Service provides much more granular control over the cloud infrastructure. That's great and useful; on the other hand, you will have much more responsibility.
You need to invest the time and energy to design the system architecture, allocate the required resources, and perform all the configuration. And after the application is up and running, you need to monitor the system health and performance, perform updates, and more. It is a substantial overhead that must be managed by a trained workforce; the IT team should be trained in creating and managing cloud resources.

Service downtime. Nothing is entirely immune to failures. Think about a leading public cloud provider with multiple data centers connected by a complex global communication network, each data center with thousands of servers, storage devices, and security devices. Failures in that complex infrastructure will happen one way or another, preventing customers from reaching their data and applications in some cases. The cloud service providers try to minimize any cloud service downtime as much as possible, and they will recommend that you design your solution to be more resilient, with all kinds of options. But the bottom line is that those cloud services are not in your hands; you need to trust a third-party company to maintain high service availability. In the business world, this is covered by SLAs, service level agreements: a public cloud provider must publish specific SLA parameters to be evaluated by customers planning to use specific services.

Let's say that you evaluated several cloud providers, selected provider X, and then deployed your application in that public cloud. At some point in time, you would like to consider moving your cloud deployment to a different public cloud provider, like provider Y. Well, it is doable, of course, but it can be a little bit challenging to move from one Infrastructure as a Service provider to another. This is called vendor lock-in. The good news is that vendor lock-in with Infrastructure as a Service is less intense and less complex than with other cloud service models.
Higher-level service models, like Platform as a Service and Function as a Service, tend to create stronger lock-in.

The last thing I would like to mention is about cost, or better to say, unexpected costs. I'm sure you have encountered the situation of getting a huge bill at the end of the month and trying to figure out what happened: who consumed all that electricity in your home? And then, after a short recovery, you try to explain to your kids that electricity costs money; unfortunately, it is not a free service, and they should be a little bit more responsible. The same situation may happen when using cloud services, since the core pricing model of infrastructure services is based on consumption. The good news is that the cloud providers offer a variety of tools to monitor consumption and costs, optimize the deployed cloud system, and reduce the cost. But anyway, it is wise to educate and train the people who are planning to manage the cloud resources about best-practice methods to monitor and reduce costs.

28. Typical Market Use Cases: I want to talk about the typical use cases of the Infrastructure as a Service model, meaning how customers are using this specific cloud model to solve their problems. The first typical use case is called lift and shift: lift your existing legacy enterprise applications from your private data center and shift them to the public cloud. In a typical private data center, many applications run as on-premise installations using virtual machines. All we need to do is replicate a similar virtual environment in a public cloud. This virtual environment will be created by us using the Infrastructure as a Service model. It is the fastest option to move applications to the cloud without touching the application code. On the other hand, this approach can be very costly and is not always a better option than a private cloud.
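As a rough illustration of the consumption-based pricing mentioned above, here is a minimal sketch in Python. All the rate values are invented placeholder numbers for illustration only, not real prices from any provider:

```python
# Minimal sketch of consumption-based IaaS billing.
# All hourly/GB rates below are invented placeholders, NOT real provider prices.

VM_RATES_PER_HOUR = {"small": 0.05, "medium": 0.20, "large": 0.80}
STORAGE_RATE_PER_GB_MONTH = 0.02
EGRESS_RATE_PER_GB = 0.09

def monthly_cost(vm_size: str, hours_running: float,
                 storage_gb: float, egress_gb: float) -> float:
    """Estimate one month's bill for a single VM deployment."""
    compute = VM_RATES_PER_HOUR[vm_size] * hours_running  # pay per running hour
    storage = STORAGE_RATE_PER_GB_MONTH * storage_gb      # pay per GB stored
    network = EGRESS_RATE_PER_GB * egress_gb              # pay per GB leaving the cloud
    return round(compute + storage + network, 2)

# Running 24x7 (~730 hours/month) vs. only during working hours (~160 hours):
always_on = monthly_cost("medium", 730, 100, 50)     # 146.0 + 2.0 + 4.5 = 152.5
office_hours = monthly_cost("medium", 160, 100, 50)  # 32.0 + 2.0 + 4.5 = 38.5
print(always_on, office_hours)
```

Notice how shutting the VM down outside working hours cuts the compute portion of the bill roughly in proportion to the hours saved, while the storage charge stays constant; this is exactly the saving behind the dev/test use case discussed next.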
Dev/test environments. Software developers must use a dedicated development environment to develop the software code, and another environment for performing testing; let's call them dev/test environments. When the software is ready, it is deployed in yet another environment, the production environment. Using the Infrastructure as a Service model, development teams can quickly set up development and testing environments in a public cloud and then scale them up and out as needed for performing different testing scenarios. Unlike a production environment that must be up and running 24 hours a day, seven days a week, they can shut down those environments' computing resources after working hours to reduce consumption costs. And when they finish the development or testing, they can quickly remove those resources.

Imagine that you are working in a company that is developing a new energy storage device, and every month you must perform a complex simulation of the energy distribution in that device. It takes seven days to complete in your private data center, because the servers over there are not optimized for such a computing task. The good news is that you can buy dedicated computing servers that are optimized for high-performance computing, and such servers can perform the simulation, in this instance, in one hour. It would help, of course, to streamline the development and testing process in your company: instead of waiting seven days for a test result, it would be one hour. Amazing. The bad news is that such high-performance servers cost around $1 million, and it doesn't make sense for your company to use one just for one hour and then leave it completely idle for the rest of the month. It is a great use case for public cloud computing: public cloud providers offer high-performance computing resources using superfast computers.
Those computers can be connected as clusters or as a grid network to create powerful virtual computing entities, and customers like you and me can rent those powerful computers on demand. One of the famous use cases today of such a service is training machine learning models. This training requires feeding machine learning algorithms a large portion of data and then tuning some of the algorithm parameters while searching for the optimized solution. Many data scientists working in many companies use the public cloud to train their machine learning models.

Website hosting. Every website on the Internet must be hosted by a hosting service, and there are many companies today that provide such end-to-end web hosting services. Those companies provide a way to upload the website files, and from this point on, all the underlying management is on them. Of course, it comes with a price tag associated with the service package. Another option is to host websites as a do-it-yourself project: create the environment in a public cloud using the Infrastructure as a Service model, and perform all the website IT infrastructure monitoring and ongoing maintenance by yourself. The cloud environment can be easily scaled up and down according to demand. For example, overnight the number of users accessing the website is very low, and therefore we can use minimal resources; during working hours, the traffic demand grows, and the system can scale up and out to accommodate the traffic growth.

29. Let's Recap and Thank You!: Hi, and welcome back to our last section in this training. I want to recap the things we have covered so far. We started by learning the fundamentals of cloud computing. What is cloud computing? It is a new way to run applications by consuming virtual IT resources located in a remote data center, the same way we consume electricity from the utility company. This is perfectly aligned with the Infrastructure as a Service model.
We then talked about the available deployment options: private cloud, public cloud, hybrid cloud, and multi-cloud. The most popular choice today is the hybrid cloud, meaning enterprise companies are still running private data centers but at the same time extend them to a public cloud, and sometimes even more than one public cloud. The next important topic was the updated list of cloud service models: Software as a Service, Platform as a Service, Infrastructure as a Service, and the new one, which is Function as a Service. When talking about cloud platforms like Google Cloud, Amazon AWS, and Microsoft Azure, the Software as a Service model is not relevant, because these platforms are designed for companies that would like to run cloud applications. Infrastructure as a Service is the first generation, Platform as a Service is the second generation, and Function as a Service is the third generation in the cloud computing evolution. In reality, applications use a mix of those cloud services, not just a single option. We also talked about the concept of virtualization technologies, like virtual machines and containers, and why they are so important in cloud computing. We started with virtual machines and then moved to microservices, which are used to develop more cloud-native applications; those microservices are designed to better leverage containers as a virtualization layer. Next, we zoomed in on the Infrastructure as a Service model, which drove a huge revolution in the cloud market, helping to transform IT into a utility service. We talked about the main building blocks of Infrastructure as a Service, meaning compute, storage, and networking; each one of them is a category of various cloud resources used to build an end-to-end Infrastructure as a Service solution. We also saw the pricing model of this cloud service model, meaning how we pay while using Infrastructure as a Service. It was important to understand the advantages as well as the disadvantages of Infrastructure as a Service.
Some customers would like to have more control and more visibility into what is going on, and Infrastructure as a Service will be a better choice for them in some specific use cases. Other customers will look for services managed by the cloud provider; of course, they can consider using other options like Platform as a Service and Function as a Service, assuming their applications are designed to take advantage of those more advanced service models. The last topic was the typical market use cases of Infrastructure as a Service: the lift-and-shift migration of existing applications from a private cloud to a public cloud, creating temporary dev/test environments for developers, using the cloud for high-performance computing for special tasks, and using it for website hosting, data storage, and more. Infrastructure as a Service is still a very popular cloud computing option with a variety of use cases.

That's it; it was a quick recap of the things we covered. I want to thank you for watching this training, and I hope it was interesting and useful for you. It would be great and awesome if you could share your experience in the review system; every single review is important to me. If you would like to continue learning cloud computing, consider the next course in this training program, called Cloud Computing for Beginners - Platform as a Service. It will focus on the next layer, on the things we can do while using cloud services based on Platform as a Service. That's it for this training. Hope to see you again in my other training courses. Bye-bye and good luck.