Cloud Computing Solution Design | Ahmed Fawzy | Skillshare


Cloud Computing Solution Design

Ahmed Fawzy, IT Transformation Advisor

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

13 Lessons (1h)
    • 1. Learning outcomes (Solve using Cloud Computing)

    • 2. What Is Cloud Computing?

    • 3. Why Cloud Computing?

    • 4. Moving From Traditional To Cloud

    • 5. Develop A Cloud Solution

    • 6. Develop A Cloud Solution Practice

    • 7. Data Tiering

    • 8. Develop A Cloud Solution Application Group Practice

    • 9. Cloud Requirements

    • 10. Cloud Management

    • 11. Core Ready

    • 12. Cloud Security

    • 13. Develop A Cloud Solution Build Server Requirements Practice






About This Class

In this course, you will learn about:

  • What Is Cloud Computing?
  • Why Cloud Computing?
  • Traditional Application Migration And The Best Practices
  • Develop A Cloud Solution
  • Data Classification And Tiering
  • Define Cloud Requirements
  • Define Cloud Management Requirements
  • Cloud Security Best Practices
  • Cloud Risks

Meet Your Teacher


Ahmed Fawzy

IT Transformation Advisor


Ahmed Fawzy is an advisor, author, and online trainer. He has 18 years of experience in the field of IT transformation, using a unique approach to achieve better alignment with the business through solutions and processes, and to transform IT organizations successfully from "traditional to digital."

Ahmed holds the ITIL Expert certification and ITIL 4 MP. He is also a certified Project Management Professional (PMP), TOGAF 9 Certified, and holds a Master of Business Administration (MBA). He has implemented improvement programs for a wide variety of organizations. His approach is unique because it doesn't require new additional software or hardware: "It's a simple few adjustments that yield a high return." Ahmed's goal is to help leaders transform their IT internal o... See full profile




1. Learning Outcomes (Solve Using Cloud Computing): Now to the third part of the triangle: the technology. In this section you will learn about cloud computing: what cloud computing is and why cloud computing. Moreover, you will learn how to move from traditional to cloud. To move from traditional to cloud, you need to understand your applications and which cloud will meet the requirements of the application; this is called developing a cloud solution. And lastly, once you are on the cloud, how to perform cloud management, and what the difference is between your traditional solution and the cloud one. 2. What Is Cloud Computing?: What is cloud computing? The National Institute of Standards and Technology defined the cloud as a set of characteristics (on-demand self-service, network access, resource pooling, rapid elasticity, and measured, pay-per-use service) with three delivery models: SaaS, software as a service; PaaS, platform as a service; and IaaS, infrastructure as a service. Nowadays we are also seeing "X as a service," where X can be anything: DR as a service, identity as a service, business process as a service, etcetera. And with three deployment models: private cloud, where you own the hardware and it is for your organization's use only; public cloud, run by the cloud provider with pay-as-you-go billing, where you do not own the hardware, and sometimes not the software either; and hybrid cloud, a mix between the private and the public. Let's start with cloud deployment models. First, we have public cloud. Public cloud is shared between multiple organizations. This pools resources, which reduces the total cost paid by each organization. The real value of public cloud is that the organization pays for actual use. There is no need to buy hardware and software in advance; instead, in the cloud, you pay only for what you consume. It is always a good practice to move your test and dev environments to the cloud first. This will allow your developers to get a taste of
what is required and how to develop the application on the cloud. Next, we have private cloud. A single entity owns the private cloud. There are two kinds of private cloud. Local: you have the hardware and the software in your data center. And external: you do not own the hardware or software, and it is managed at a remote location; this is the virtual private cloud. The second one is a trend we have seen a little more in the past few years, with compliance and governance becoming more demanding for specific application workloads. In this case, the customer uses the external private cloud. The cost of the private cloud must be cheaper than the traditional system, but it may not be cheaper than public cloud, due to scale. If you have a virtualized environment, it will take only a minimal investment to convert it to private cloud, and this will bring many additional benefits to your organization. To convert a virtualized system to private cloud, get an offering from a vendor for their private cloud product. Even open source works: you have OpenStack as one example, and for a small fee you can even get support for it from some vendors. Though the private cloud might look appealing, to reach the same level as a public cloud it will require lots of investment. The two reasons to go to private cloud are law enforcement and regulations, and existing hardware and software that is not yet written off. And the last trend in the cloud is multi-cloud. What if multiple vendors each have their own specialty in the cloud (in the public cloud, to be specific)? How do you get the best of each one of those? This is the multi-cloud: leveraging multiple public clouds. It is more expensive, but there is no vendor lock-in. This will give you best-of-breed and the most cost-effective cloud service. The top risks in multi-cloud are cost overruns, because you have multiple vendors and you have multiple bills, and the second is incorrect
requirement gathering. A cloud service broker or cloud management platform is needed to unify the APIs and the management of such a scenario. The only issue is that this is a relatively new technology and not yet 100% mature. Next we move to cloud delivery models. First, we have IaaS, infrastructure as a service: a third-party provider provides hardware, software, storage, and network, and you are charged by one of two methods: pay-as-you-go, based on your consumption, or a fixed amount, which will be contract-based. Of the two, the second one, if you have well-defined requirements, will be more cost effective. Next, we have SaaS, software as a service: a third-party provider provides hardware, software, storage, network, software provisioning, and software maintenance and updates; the end user provides the working data only. The third one is PaaS, platform as a service: a third-party provider provides hardware, software, storage, network, and software maintenance and updates, and you provision your software on top of the platform. A common example of that is containers. Always consider IaaS, infrastructure as a service, as entry level only. Develop your application to run as SaaS or PaaS; running on software as a service or platform as a service will be cheaper, and it will have fewer issues in operations. In this lecture, you learned about the different kinds of cloud. This will help you decide later on when defining your requirements to move to the cloud. Thank you for watching and see you in the next lecture. 3. Why Cloud Computing?: Why cloud computing? CapEx versus OpEx. Since it is a very competitive market, no one wants to invest in assets; that is the CapEx model. This is where cloud computing comes in: with virtually no upfront cost, you will only pay for what you need at a specific point in time. This is the OpEx model. It is a very good candidate to gain an advantage in business to
have your servers, all your systems, and your IT requirements up and running, and, similar to a utility bill, you get an invoice at the end of the month. This is the OpEx model. In every cloud provider there are cloud products. This is not just hosting of the hardware; it is the added value of the provider. If the provider is only providing infrastructure as a service, this is not a cloud; this is traditional hosting. The real power of the cloud is the ability to utilize the cloud products. For example, in Microsoft Azure you have 100-plus cloud products focused on different areas. The same applies to AWS, Google, and IBM Bluemix. Cloud computing is also more cost effective: it does not require an initial investment in IT data centers, it is more flexible, and it has automation. Let's first talk about cloud flexibility. To be clear, cloud is only a facilitator for the business to become more agile and generate results fast. For example, auto-scaling is not something the cloud does for you; the cloud only facilitates the platform to do so. You have to write your scripts or build an application that can auto-grow. Next, cloud automation. Please understand: if you still have the same processes, it will not yield anything. Invest heavily in automation. Shifting to the cloud will require investment in automation and in the platform as well. The cost saving, either direct or indirect, will materialize in 3 to 5 years. If you are only doing the ROI study for a single year, I do not think it will generate much of the return you expect. So this leads us to cloud cost efficiency. A typical ROI in the cloud is 175% plus. If the value returned from the cloud is lower than 175%, this means something is wrong with your solution, and the solution needs adjustment. Examples of cloud cost saving: direct savings, like saving hardware costs, license costs, electricity, A/C and data center facilities, operation savings, etcetera.
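The multi-year ROI reasoning above can be sketched in a few lines of Python. This is a minimal illustration, not the class's own model; the dollar figures, the five-year horizon, and the split between savings and cloud spend are all hypothetical.

```python
# Hypothetical multi-year cloud ROI sketch; all figures are illustrative.

def cloud_roi(annual_savings: float, migration_cost: float,
              annual_cloud_spend: float, years: int = 5) -> float:
    """Return ROI as a percentage over the given horizon.

    annual_savings     -- direct + indirect savings per year
    migration_cost     -- one-time cost to move and re-platform
    annual_cloud_spend -- the recurring cloud bill
    """
    gain = annual_savings * years
    cost = migration_cost + annual_cloud_spend * years
    return (gain - cost) / cost * 100

# Example: $400k/yr savings, $250k one-time migration, $120k/yr cloud bill.
roi = cloud_roi(400_000, 250_000, 120_000, years=5)
print(f"5-year ROI: {roi:.0f}%")  # prints "5-year ROI: 135%"
```

Note how a one-year view of the same numbers would look far worse, which matches the lecture's advice to evaluate over 3 to 5 years.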
Indirect savings: the cost of agility, where you move the dot from one place on the triangle to another, moving faster than other competitors in the market. This is called time to market, and it produces real value. And the last one is data center costs. Direct costs, for example, are server hardware, network devices, fire alarms, cooling units, space, licenses for software, and UPSes; the list goes on and on, and it depends on your data center size, etcetera. Then indirect costs, for example, are bandwidth for connectivity, services, maintenance and spare parts, support contracts, data center insurance, operation savings, the risk budget, hardware disposal, and power. All of this will help you create a better ROI case for the cloud. So let's recap this lecture. You learned the benefits and the savings cloud can provide for the organization, and the items that you could include in your business case to get management approval to move to the cloud. Thank you for watching and see you in the next lecture. 4. Moving From Traditional To Cloud: I have to start with these three. First: it has been more than 10 years since cloud computing was introduced to the IT world. I have never seen anyone lose their job to cloud, but I have seen people adjust and learn new things, never out of the door. You can say that cloud will require some IT transformation. Second: don't get attached to your systems. I have seen this a lot; people treat data centers and servers as pets, as expensive pets. Don't do that; instead, treat them as what they are: only there to serve a specific function. Third: cloud security. It is not just an appliance, and you cannot simply depend on the cloud vendors. You are still responsible for your data. You have to protect it, encrypt it, and meet all the other security requirements you would need in a normal data center. You have to think defense in depth.
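As a small illustration of the "your data is your responsibility" point, one cheap defense-in-depth layer is taking a checksum before uploading data and comparing it after download, so tampering or corruption in the cloud is detectable. This is a sketch using only the Python standard library; the payload bytes are made up.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of in-memory data."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical export you are about to push to cloud storage.
payload = b"customer-record-export-2024"
digest_before_upload = sha256_of(payload)

# ... upload to the cloud, later download it again ...
downloaded = payload  # stands in for the downloaded copy
digest_after_download = sha256_of(downloaded)

# If the stored object was altered, the digests will no longer match.
assert digest_before_upload == digest_after_download, "data was altered at rest or in transit"
```

A checksum protects integrity only; confidentiality still requires encrypting the data before it leaves your premises, as the lecture says.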
Now that these three are out of the way, to move from traditional to the cloud you need the following. One: create the business case for why we should move our application to the cloud. Two: things to highlight in the business case: agility value, cost of retired services, upcoming changes, cost to maintain the current service levels, etcetera. Three: analysis of each application, whether it can be moved to cloud or not; not every application is a good fit for the cloud. Four: if you can rearchitect the application before migration to the cloud, this would be much better. Five: how you will manage your data and data storage; this includes data security and compliance. Six: find the integration points between systems. 5. Develop A Cloud Solution: Now you understand the core concepts of the cloud. Next, let's move to developing a cloud solution. The first step is collecting requirements. This step is typically a business analyst role, but these are the highlights of requirement collection. One: functional and nonfunctional requirements. Two: the business requirements; understand the business problem and the current-state assessment, what needs to improve, and look for restrictions on resources. Three: what needs to happen to improve. Four: what technology you will use to reach this future state. Now you understand the basic requirements. Next, plan your application migration. In this context, "application" means the application plus the application-associated data. To start planning your application migration: one, list all your services; two, map services to applications; three, map applications to data; four, map dependencies between applications (which applications depend on each other); five, map dependencies between services. It will be a great help if you have already modeled your processes and understood the data flow, physical and logical, as well as system inputs and outputs. A trap I see lots of people fall into is replicating the data center as-is to the cloud.
This will not provide much of the cost saving or the agility you require. Don't lift-and-shift the application and expect a return. That only provides you with the saving of the hardware refresh or upgrade, and you are not getting all the additional cloud benefits. To gain the cloud benefits, you have to re-platform the application: do data cleansing and adjust the requirements before migrating the application. You should establish a baseline to measure the success of the migration. The system baseline is a starting point for comparison. Over time, baseline drift will happen; these are the changes to the original baseline from day-to-day activities and the normal user load. As a best practice, always do data tiering and application grouping before moving to the cloud; this way you will do it in phases. This means you will group applications into batches that heavily depend on each other, like the customer services portal system, customer data, and the services processes, all at once, in one shot. If you try to move only part of it, you will be faced with many challenges. Never go with a big bang and move every application you have at once; always migrate applications in batches, or in phases. To move an application to your cloud, you have to do requirement gathering. For every application you need to consider: one, user requirements; two, compliance requirements; three, the costing model; four, security. One of the most common misleading concepts about cloud is that cloud will improve the performance of your application. Well, this is partially true, but only if you have a resource issue for your application, like low memory or CPU, or even network bandwidth. In that case, cloud will solve your problem. But cloud will not solve a slow database caused by wrong table design or incorrect code. Always understand that cloud application performance is dictated by the lowest-performing part of the cloud architecture.
You need to understand all of them to determine where a performance issue is coming from. This leads us to applications that are not suited to the cloud. It is not recommended to migrate any of these applications: one, old systems, ten-plus years old (it is not recommended to migrate them because those systems have a different architecture); two, applications with internal databases, with no separation between the database and the application; three, applications with compliance requirements or very high security requirements; four, black-box applications, applications where no one around knows how they are written and how they function. Once this exercise is finished, you will have application groups available. An application group is a group of applications that depend on each other to produce a specific outcome. I have seen lots and lots of people struggle with testing as well: how to test the application once it is migrated to the cloud. Usually we run these tests one time before the actual migration and another time after the migration, to measure the difference in performance. It should be relatively the same if it is a lift-and-shift migration, or a phase-one migration to infrastructure as a service. This is assuming you did not re-platform the application; if you did re-platform, you should see a significant increase in performance. So we have the following testing types. First, regression testing: test each subsystem to ensure the service function of the subsystem is correct. Second, smoke testing: test the application to see if it is live and functioning as expected. Third, black-box testing: test every aspect of the application and the service. Some of the most common application tests are page load time, response time, and user experience. In this lecture, you learned about the requirements needed to migrate an application or system to the cloud, as well as the selection criteria, and finally the tests you need to perform before and after the migration. Thank you.
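The before-and-after performance comparison described above can be sketched as a simple check on measured response times. The sample timings and the 10% tolerance are assumptions for illustration, not figures from the lecture.

```python
import statistics

def perf_regressed(before_ms: list[float], after_ms: list[float],
                   tolerance: float = 0.10) -> bool:
    """Flag a regression if the median response time grew more than `tolerance`.

    before_ms -- response times measured before the migration (milliseconds)
    after_ms  -- response times measured after the migration
    """
    return statistics.median(after_ms) > statistics.median(before_ms) * (1 + tolerance)

before = [220, 210, 230, 225]   # ms, measured pre-migration (hypothetical)
after  = [215, 205, 228, 220]   # ms, measured post-migration (hypothetical)
print(perf_regressed(before, after))  # prints False: a lift-and-shift stays roughly flat
```

The same comparison run on a re-platformed application should show a clear drop in the "after" numbers, matching the significant increase in performance the lecture mentions.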
6. Develop A Cloud Solution Practice: In our last practice we determined the problem area. In this practice we need to rephrase it in business and technical terms in order to try to solve it and release an RFP or take corrective action. So the first item we need to start building is the business problem: identifying the business problem and why it is a problem for us. The business problem in this case is that mail sent from the organization is arriving to the customer at a very slow rate, which causes the validation link to become invalid. We need to improve the mail delivery system our support agents use to send identity validation emails while the end user is on the phone. Now, when you put it in business-problem terms, you begin to understand where the pain point is. There is a support agent; a customer calls him, and he needs to check and confirm the customer's identity. To check his identity, he sends him an email with a validation link. This may be two-factor authentication: maybe he will send them an SMS, and he will send them an email as well. So now we understand the pain point. Why is the delay in mail delivery causing a problem to the business? Because in the end it results in customer dissatisfaction, or in the customer staying on hold for a longer time. Next, we determine what features are required. Features are divided along two dimensions. The first is the importance of the feature: is it a must feature, a should feature, or a nice-to-have feature? The second is the category of the feature, the type of the feature itself: is it a functional, nonfunctional, technical, or transition requirement? In our situation, the must feature is to send and receive emails in a time-sensitive manner, because users are staying on hold; this is very time sensitive. Our should feature: automatically generate the email without the agent interacting and copying the links into emails.
So we need a system where the agent can just press a button and it will generate the email automatically. Nice to have: the ability to receive the link as a mobile SMS as well, so the end user gets the link on his mobile and in his email inbox. Next, we need to categorize the features. We have a functional requirement: the ability to send the email faster than 15 seconds. Nonfunctional: the solution must have an uptime of 99.999%. Technical requirement: it should be based on Microsoft technology (you could use any kind of technology, but in our example we determined we will use Microsoft technology). Transition requirement: we need to find a migration path from our current existing environment to the new system, and the migration path needs to be without interruption from the current system to the new system. Next, we need to get a little bit into numbers. We need to find the standard deviation of our new requirement. This means a customer should get the message in 12 seconds; this is the new mean, with a range from 10 to 14, so the maximum is 15, as we mentioned earlier. This is determined by the business, and we need the process to be three sigma. This means the standard deviation should fit a lower limit of 10, and using basic math we reach that the standard deviation should be about 0.66. Our baseline, determined by the current process, has a mean of 22 seconds and a current standard deviation of 1.52, and the target mean is 12 seconds with a target standard deviation of 0.6. How will these numbers help us in determining what we need? They will put a hard line on the upper and lower limits and give us some guidance on where the problem lies and where we should enhance our processes down the road. These targets will create the project objectives that the project should meet to enhance the current mail delivery system.
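The three-sigma arithmetic in this practice can be reproduced directly. This sketch uses the lecture's own numbers (a target mean of 12 seconds and a lower limit of 10); the lecture rounds the result to 0.66, while full precision gives 0.67.

```python
def required_sigma(target_mean: float, lower_limit: float, sigmas: int = 3) -> float:
    """Standard deviation needed so `lower_limit` sits `sigmas` deviations below the mean."""
    return (target_mean - lower_limit) / sigmas

# Lecture numbers: new mean of 12 s, lower limit of 10 s, three-sigma process.
print(round(required_sigma(12, 10), 2))  # prints 0.67 (the lecture truncates to 0.66)
```

Comparing this target deviation against the current baseline (mean 22 s, standard deviation 1.52) shows how far the existing process is from the objective.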
Thank you for your time and see you in the next lecture. 7. Data Tiering: Always do grouping and data tiering. This will show you the line between what can be on the cloud and what cannot be moved to the cloud. Data comes in two varieties: structured data and unstructured data. Structured data is mainly databases, relational databases like MySQL and MS SQL; unstructured data is files, and unstructured databases like any variety of big data. Each of these categories should have four tiers. Tier 1 data: if it became public, it would have no impact on the organization. Tier 2 data: if it became public, it could change how a group of people look at the organization, but it would not hurt the organization. Tier 3 data: if released, it would let our competitors gain an advantage over us. Tier 4: governance and regulation data; this cannot be uploaded to any cloud, or only to a very restrictive cloud. Tier 1 and Tier 2 are typically safe to move to the public cloud, while Tier 3 will depend on the organization's tolerance for risk. Tier 4 cannot be moved to public cloud unless it is a dedicated public cloud for this specific government information. 8. Develop A Cloud Solution Application Group Practice: In this practice, we will start planning our application migration. You start planning your application migration by creating application groups. An application group is everything that your system and your service depend on. For example, you cannot migrate a front-end application without its actual back-end database. You cannot migrate a system without its own database. If you try to migrate the system without its database, you will have a lot of traffic going back and forth between the cloud and your on-premises database, and this will be an inefficient practice, because your connectivity will never be as fast as when everything is on your premises.
So to ensure that you migrate the entire application with all the dependencies required, you start by creating what is called a context diagram, and in this practice we learn how to create one. Let's take an example. First you create a circle with your system name inside it. This is your internal system component: anything related to your internal system, meaning your software package in this case. Then you start adding more and more connections to it. So what is connecting to our email system? The first thing connecting to our email system is our user; he is sending and receiving. Then we add additional components as well. There is a server, a physical system that actually provides the compute power for the email system; there is a network that provides sending and receiving; and there is storage that provides read and write. So we keep adding components; these are the primary components. Now we move on to more components, the dependencies. You have one dependency called authentication: this is the main repository that holds all the user information and their email access. In our example, the mail system sends the authentication repository an access request, which is either granted, in the case of correct credentials, or denied, in the case of incorrect credentials. And then we start adding the secondary systems, any systems that your email system depends on, like email archive, email security, backup, etcetera. Any one of those is considered a secondary system because, to a certain degree, you have some option of keeping it or leaving it. For the email archive, you have archive and retrieve; you can decommission it altogether and migrate the old archives of all the users to the email system on the cloud. But still, this is a decision point. You need to understand your system and where the connections come in and out of it.
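The context diagram built above is essentially a dependency graph, and the application group can be derived from it mechanically. This is a sketch: the system names mirror the email-system example from the practice, but the edge list itself is illustrative.

```python
# Hypothetical application group for the email-system practice;
# each entry lists the systems that the key system depends on.
dependencies = {
    "email_system": ["auth_repository", "storage", "network", "server"],
    "email_archive": ["email_system"],
    "email_security": ["email_system"],
    "backup": ["email_system", "storage"],
}

def application_group(start: str, deps: dict[str, list[str]]) -> set[str]:
    """Everything that must migrate together with `start` (transitive closure)."""
    group, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in group:
            group.add(node)
            stack.extend(deps.get(node, []))  # follow each outgoing arrow
    return group

print(sorted(application_group("backup", dependencies)))
# prints ['auth_repository', 'backup', 'email_system', 'network', 'server', 'storage']
```

Walking the arrows this way makes the lecture's point concrete: the arrows never disappear, so every edge must land somewhere in the cloud design.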
Next is the email security: you have incoming email, and you can decommission this part by relying completely on the vendor's security, spam filtering, and email security. And lastly, the backup: you can likewise decommission and remove this part by relying on the vendor, or on any of the methods we talk about in cloud management related to backup. Your primary target at this point is to find each one of these arrows and try to resolve where each arrow will go in your cloud design, because the arrows don't disappear: they either change from one specific solution to another solution, or they just take another form. For example, if you are going with IaaS, infrastructure as a service, your physical system will disappear. You keep doing that for every system and every application you have, and for each one of them you start digging deep until you get all the connections coming into the system and all the connections going out of the system. You do this for any application that you think might usefully be migrated. In our scenario you have a connection point, so your next step is to take each one of these blocks that you think can be migrated to the cloud and start creating its own context diagram. Thank you for your time and see you in the next lecture. 9. Cloud Requirements: Now you know your applications and you know your data. To migrate applications correctly, you need to decide on the requirements. These requirements will determine which template or cloud products will be used. Application requirements: user connectivity, end-user interface, IaaS, PaaS, or SaaS, and OS requirements. Next, network requirements: network design requirements, firewall port requirements, upload and download requirements. Then data requirements: transactions (sales orders, etcetera); can the provider handle the amount of transactions required? This is a very important question you should ask the provider: how many transactions do you get on your database?
Data for analysis is used for decision support systems like business intelligence, big data, data mining, and machine learning; data science requires specific providers, as not every provider can offer such a service. Compute requirements: RAM requirements, processing requirements. Storage requirements: logical storage (how to write to the storage: using an API, the OS, or a specific application) and physical storage (how many gigabytes or terabytes are required). Next, CPUs in cloud computing. You have two models of CPU use: CPU cycles, if you have serverless computing, for example, and per-core. In the per-core case, I always recommend using the minimum possible cores. Don't go for the core count you currently have on premises; instead, get the average from your monitoring software and use it as the mean for the cloud. Don't worry if your CPU is at 90% all the time in the cloud; this simply means you are getting what you paid for. If the VM runs out of CPU capacity and keeps hitting 100%, well, this is cloud: you can add more CPU whenever you need. One of the most common failures in application migration to the cloud comes from the fact that sizing in the cloud is different from sizing on premises. The typical server will have an average utilization of 20% on premises, but in the cloud it should be 80% plus to realize the mentioned return on investment. A cloud database is not only storing business data; it is also generating analytical data that helps in decision making. You can host databases in three ways. Directly on the cloud: some vendors call this platform as a service; in this case you are charged by the database size and the number of transactions, and it is very fast and very responsive. Inside a VM (infrastructure as a service): it is a normal server, but hosted on the cloud; you are charged per VM, not per database. And big data storage: this is unstructured data storage. This leads us to cloud storage. You have two types of storage available.
First, object storage: in most cases designed to store large volumes of unstructured data. The objects are digital entities: documents, files, images, videos, or other media. Objects are organized into containers, each with a unique ID that can be accessed globally, and you can have an unlimited number of containers. In some cases, object storage has a subsection called file storage that provides direct file read and write access from devices. Next, we have block storage, for transactional information like SQL Server and MySQL, used for high-volume transactions. The benefit of utilizing cloud storage is undeniable: global access and cheap storage with no upfront cost. Now let's address some considerations for cloud storage. The same benefit is a consideration as well: just as you can access your data from anywhere, anyone might be able to do the same. Securing your data is critical; transfers of the data to and from the cloud need to be encrypted. You must review the SLA of the cloud provider. And latency here is not just ping, so don't perform a ping and expect that to be the latency. It is the time for a request for an object to go to the storage, be processed, and be sent back. Sometimes the network is just fine, but the processing of the storage API is the problem. Also, sometimes, especially in databases, latency is tiered into different levels, each with a different price. So you will have something like bronze, silver, gold, and platinum; each of them has a different speed and latency, and different pricing. Next, we have scalability and duplication requirements. What is scalability? Scalability, going up or down, is a huge benefit for any customer. It allows the system to scale up while the load is present and scale down when the load is decreased. Though this is an application feature and controlled by the application, what you need here is to understand the capability of the provider for this feature. So you should answer this question:
Can you create an application that can also scale on this platform or not? Next, we have replication. Replication is the duplication of data in real time to another location. This will reduce the risk of data loss in case of a vendor failure. Unfortunately, this one is not a matter of if; it's a matter of when. All vendors fail at one point or another; they are a business like any other business. Always remember: your data and application are your responsibility. You should always consider this as a risk and plan for it. Also, beware that lots of vendors will try to lock you in, so that it will be very difficult for you to move from their platform or cloud to someone else's. Next, we have mapping resource requirements. Now that we have finished the application-layer requirements, we go deeper into the level of resources. We have compute requirements: platform requirements like CPU cores, memory, and OS; disaster recovery, active-active or active-passive; application interfaces; management tools; backup and system maintenance; and security, meaning identity and access management and encryption. Database requirements: capacity, meaning how big the data is; the type of data model, structured or unstructured; speed and latency; analytical capabilities; management of the databases; and security, meaning database control at the record level and what security you can enforce on the database. Storage requirements: capacity, the gigabytes or terabytes required; the storage model, object, block, or file; speed, latency, and input/output (IO); disaster recovery, redundant storage or a single instance; application interfaces, the services offered by the vendor; growth, measured as gigabyte growth over time; cost of the storage; how to manage the storage; updates to the database engines, meaning can you update the database engines yourself or will the vendor take care of that; and security, identity and access management and encryption. Thank you for watching, and see you in the next lecture.
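The resource-requirement areas just listed (compute, database, storage) can be tracked per application as a simple checklist structure. A minimal sketch, with hypothetical placeholder values standing in for your own measurements:

```python
# Hypothetical example values; real numbers come from your monitoring data.
app_requirements = {
    "compute":  {"cpu_cores": 16, "ram_gb": 64, "dr": "active-passive"},
    "database": {"capacity_gb": 500, "model": "structured",
                 "security": "record-level"},
    "storage":  {"capacity_gb": 2000, "model": "block",
                 "dr": "redundant", "growth_gb_per_month": 50},
}

REQUIRED_AREAS = {"compute", "database", "storage"}

def missing_areas(reqs):
    """Return the requirement areas not yet mapped for an application."""
    return REQUIRED_AREAS - set(reqs)
```

Keeping the map machine-readable makes it easy to verify that no area was skipped before taking the requirements to a provider.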
10. Cloud management: Cloud management is the operation revolving around the cloud platform. The challenges in any cloud system are cloud patching, cloud automation, cloud backup, disaster recovery, and cloud security. Let's start with cloud patching. The hypervisor: in the case of a private cloud, you have to maintain your hypervisor up to date to make sure you have the latest security updates and patches. You patch the hypervisor per node: you drain one node so it is not holding any load, patch it, test it, and move the load back to it. You keep doing that until all nodes are patched and tested. Software as a service and platform as a service are automatically patched and updated by the cloud vendor; your only concern is maintaining your code, if there is any incompatibility with the latest patch or version. With infrastructure as a service (IaaS), you have to maintain a patching process. Some providers supply you with an update process; some cloud providers force updates on the operating system by a specific deadline so that the entire platform will not be compromised. The idea is that you need to validate the patch on a test machine (preproduction) before applying it directly to production. Next, cloud automation. Cloud automation is based on the business requirements. It's critical to automate as much as possible in a cloud deployment. The types of automation are based on the workload and the cloud platform. One critical automation you need to consider is auto-scaling and self-healing. Automation is the completion of a task without human intervention. Runbook automation is executing a series of tasks. Orchestration is arranging and initiating automated tasks, runbooks, and single tasks alike, based on a specific trigger; a workflow is used to create an automated and orchestrated process. Now we move to cloud backup. Though the cloud is always available, you still need to protect your data. As I mentioned over and over, your data is your responsibility.
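The auto-scaling automation mentioned above reduces to a decision rule evaluated on a trigger. A minimal sketch; the thresholds and node limits are illustrative assumptions, not provider defaults:

```python
def autoscale(current_nodes, avg_cpu, scale_up_at=0.8, scale_down_at=0.3,
              min_nodes=1, max_nodes=10):
    """Return the new node count given average CPU utilization (0.0-1.0).

    Scale up one node when utilization is high, scale down one node when
    it is low, and otherwise leave the pool unchanged.
    """
    if avg_cpu >= scale_up_at and current_nodes < max_nodes:
        return current_nodes + 1
    if avg_cpu <= scale_down_at and current_nodes > min_nodes:
        return current_nodes - 1
    return current_nodes
```

An orchestrator would run a rule like this on a schedule or metric trigger and act on the returned node count; self-healing follows the same trigger-and-act pattern with health checks instead of utilization.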
With software as a service, maybe you can enable a dumpster or trash bin for deleted items, enable snapshots, and keep the configuration files. In software as a service you are fully dependent on the provider to protect your data, but some providers allow you to download a copy of your configuration and your data. Next, we have platform as a service: always keep your application on premises in your artifact repository, and from time to time take a copy of your data. With infrastructure as a service, you can install a backup agent inside the VM and take whatever backup you wish. The only restriction is taking the entire VM as a backup: you don't have access to the hypervisor layer. Types of backup: you have cloud backup as a service from service providers, which can back up SaaS, PaaS, and database services, and you have traditional backup of infrastructure as a service, like full, incremental, differential, and progressive. If you have a private cloud, you could treat the system as a sum of VMs and use traditional backup or backup as a service. The last part you need to consider is disaster recovery. Most vendors will provide some sort of geo-redundant option, which means they will clone your data and set up another data center in another location to host all of this data and these VMs. In a private cloud, however, you have to build your setup in another location. Business continuity is much easier with the cloud: if you already have access to your application from anywhere, it is as simple as renting a white space, getting some desks and internet, and you're good to go. 11. Core ready: In this practice, we will start building the core requirements, starting with the application groups. This is the same list that came up when we created the context diagram and saw the connections between each part outside of the email system itself. So this is the summary of the context diagram. Your next step is to start creating actions and a plan for each one of these systems.
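A per-system action plan like the one this practice builds can be kept in a simple structure and walked phase by phase. The entries below paraphrase this example's plan; treat them as placeholders for your own systems:

```python
# Action plan paraphrased from this worked example (IaaS = infrastructure
# as a service). Replace with your own systems and decisions.
action_plan = {
    "email system":     "migrate",
    "authentication":   "extend to provider, keep on-premises repository",
    "email archive":    "migrate",
    "storage":          "migrate",
    "physical systems": "rely on IaaS",
    "network":          "rely on IaaS",
    "email security":   "subscription service",
    "backup":           "subscription service",
    "connectivity":     "site-to-site VPN to the cloud provider",
}

phases = [
    "phase 1: migrate to IaaS to solve storage, security, backup, connectivity",
    "phase 2: consolidate the mail archive into the primary mail system",
    "phase 3: decommission the mail archive",
    "phase 4: migrate the mail system to SaaS",
]

def next_phase(completed):
    """Return the first phase not yet completed, or None when done."""
    for phase in phases:
        if phase not in completed:
            return phase
    return None
```

Tracking the plan this way keeps the migration order explicit and makes it obvious which blocker (here, the mail archive) gates each later phase.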
What will you do with each one of them? So your next step here is to plan actions. The first item is migration for the email system. For authentication, we will extend it only to the provider; we will not migrate it completely. We will still have the on-premises authentication repository for our desktops to log on, etcetera. For the email archive, we will do a migration as well. For the storage, we will do a migration. For the physical systems, we will rely on IaaS. For the network, we will rely on IaaS. For email security, we will get a subscription service that will provide mail protection for us. For the backup, we will get a subscription service as well. For the users, the connectivity will be over an internal VPN connecting from our office to the cloud provider. So now let's take our project plan at a high level. First, SaaS is not possible because of the mail archive; we have a huge mail archive in our example. So we have phase one: migration to IaaS to solve the critical issues (storage, email security, email backup, and connectivity) that we highlighted in the optimization and process phase. Next, we have phase two: migration of the mail archive into the primary mail system; we will start consolidating both systems into a single system. The third one is decommissioning the mail archive; this is the third step, where we remove the mail archive completely, which was the main blocker for us to go completely to SaaS. And the last step is to migrate the mail system completely. Process: how are you going to approach this phase? Your first action is to start adding user requirements. So in our example, based on the elimination of all the items that we will outsource, that will be dropped, or for which we will find a subscription service, we will start listing the user requirements. For the email system, we need to send and receive in under 15 seconds. For authentication, the user doesn't have an authentication requirement. For the email archive,
we have 100 gigabytes per user, and for the storage we have 10 gigabytes per user. Your next step is to start adding compliance requirements. For the email system, email archive, and storage there is no specific compliance requirement, but unfortunately the authentication is required to stay inside the country: you cannot put your authentication information outside of the country. This will be a legal requirement and a legal restriction on top of your project. So now we move to the next phase, which is adding the costing: which costing model you will be using for each system. Since we are migrating for a temporary period of time to IaaS, in this case we will use pay as you go for the email system, and the same for authentication and for the email archive; for the storage, we will pay per terabyte. So next we move to the next requirement, which will be our security. For the email system, we will need the mail filter; this has already been determined and cannot be avoided. For the authentication, email archive, and storage, we will need a firewall, IPS, and IDS. Now we move to the next requirement, which will be our data security. Now we need to determine what kind of encryption, what kind of protection, we will provide for our data. For the mail system, we need data-in-flight encryption. For the authentication, we have data-in-flight and data-at-rest encryption, and for the email archive and the storage we need data-at-rest encryption using a specific key that we generate from our own PKI. Next, you add the data tiers: which tiers of data each application holds. The email system, email archive, and storage will be Tier 2, and only the authentication will be Tier 3, and it's up to the management to keep it inside the company or to use the inside-the-country restriction. We also add the patching, the management of all of this: what kind of patching we expect. For the email system, the authentication, and the email archive, we will have weekly, monthly, and monthly. And for the storage:
We don't need patching for it, because it is consumed directly through an API from the provider. For the automation, we need user activation; we don't need manual work per platform, going back and forth between each one of them. We need a single system that actually creates and activates the user, so the user creation will be in the authentication repository, and the user activation will flow to the email system and the email archive. For the backup, we need to determine what kind of backup we will use. For the email system in our migrated services, it will be daily progressive or weekly progressive. For disaster recovery, for the email system and authentication we will use a replicated VM, or a geo-replicated VM; but in the case of our email archive, we can survive if the system goes down for a day or two. So you keep adding additional fields to your table until you cover the entire business requirements. This will be your feasibility checklist, the things you will start looking for in any provider that you are considering to host your system. Thank you for your time, and see you in the next lecture. 12. Cloud Security: Next, we have a big one: cloud security. Almost everyone, when trying to reject the cloud, simply says it's for security and stops. I'm not saying no security challenges exist, but what I am saying is that it's achievable; like everything else, with good policy and practices you can achieve it. Let's define what kinds of policies you need. We need two types of policies: a security policy, a set of procedures taken to protect the business and its information from intrusion, and an acceptable use policy, which a user must agree to before being allowed access to the organization's resources. This one is often ignored, though most of the leaks in large organizations are due to incorrect usage of organization data and resources, like uploading private information to a public website.
I'm not saying users cannot use public file hosting like Dropbox or OneDrive. But what I'm saying is that users need to understand the security options and not allow full public access to uploaded files. I can't count how many times someone uploaded a large file he could not send through email to another company or partner of the organization, and simply forgot to delete it after the other party had already downloaded it and finalized the transaction. So it's left on the cloud with full access to everyone, and for search engines to index, so that anyone can download it at any point in time in the future. This is very risky. Next, let's understand, at a high level, what encryption is: any process used on data to make it more secure and less likely to be intercepted by an unauthorized party. Data encryption is provided in two cases: data at rest on a storage medium, and data in flight during transit. This means three categories exist: client-side data encryption, server-side data encryption, and network traffic protection. In encryption, you have to choose a balance between the level of protection for your information and the level of resources that you will spend on encrypting and decrypting this information. There are two kinds of encryption. Symmetric uses a shared secret key; examples are DES and 3DES. It is lightweight, but it can be cracked by a determined hacker. And we have asymmetric; this is public key encryption, and an example is PKI. It is resource intensive, but it will take a long time to crack, ten-plus years. Now we move to cloud risks. A public cloud is more secure than most organizations. As one vendor once told me: I'm securing my data center not only for the sake of my customers, but also because I don't want to be in every headline for a week because someone misconfigured their systems. Most of the attacks we have seen were either misconfiguration of a protocol or weak passwords protecting systems. Cloud risks are a critical topic.
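To make the symmetric idea concrete (one shared secret key both encrypts and decrypts), here is a toy XOR cipher. This is strictly an illustration of the key model; a real deployment uses a vetted algorithm such as AES for symmetric or RSA for asymmetric encryption, never XOR:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: the SAME key encrypts and decrypts.
    Illustration only -- trivially breakable, never use for real data."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"shared-key"
ciphertext = xor_cipher(b"payroll data", secret)
plaintext = xor_cipher(ciphertext, secret)   # applying the same key reverses it
```

With asymmetric encryption, by contrast, the encrypting key (public) and the decrypting key (private) are different, which is what makes key distribution practical at the cost of heavier computation.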
I can't list all the possible risks, but these are the critical ones. First, we have ransomware; your mitigation should be backup systems and user security training against phishing. Next, encryption breaking, mitigated with strong encryption bit lengths and changing the keys often. Physical theft: perform local data encryption. Vulnerabilities: mitigated with patched systems and removing legacy systems. DDoS, distributed denial of service: the provider will usually protect you from this kind. The last one is insider data theft; use data leak prevention systems. For cloud risks, I would recommend reading the ENISA Cloud Computing Risk Assessment. It's from 2009, but most of it is still relevant today. Some of the top risks from the report that address the cloud: first, organizational risks, lock-in, loss of governance, and compliance challenges. Technical risks: a cloud provider malicious insider abusing high-privilege roles; intercepting data in transit; data leakage on upload and download, and intra-cloud between different parts of the cloud; insecure or ineffective deletion of data, where your data may be deleted but is actually still held by the provider; distributed denial of service; and economic denial of service. In an economic denial of service, the attack simply drains you of funds: if you are utilizing pay as you go, then whenever there is load, you are charged more and more out of your balance. So in an economic denial of service, the attacker's main purpose is to make you run out of balance, and then you will get denial of service. There can also be conflict between the customer's hardening procedures and the cloud environment: maybe you cannot harden the systems the same way you do on premises for your organization. You have legal risks: subpoena and e-discovery, if a court order asks the cloud provider to provide the data; data protection risks; and licensing risks. Next, we have risks not specific to the cloud.
We have network breaks, modifying network traffic, loss or compromise of security logs, manipulation of forensic investigation, and we have natural disasters, user provisioning vulnerabilities, remote access to management interfaces, poor key management procedures, lack of security awareness, and unclear roles and responsibilities. And lastly, we have misconfiguration. Next, we have data classification. We started this in the previous lectures in the context of requirement gathering and management of data; now we address it from a security perspective. Data classification is the process of categorizing data assets based on sensitivity. If you remember, the sensitivity levels are: Tier 1 data, which if made public would have no impact on the organization; Tier 2 data, which if made public would embarrass the organization in front of some groups of people but would not damage it; Tier 3 data, which if released would let the organization's competitors gain an advantage over the organization; and Tier 4, governance and regulation data that cannot be shared unless it's on a specific cloud dedicated for that. To categorize data into each of these levels, you should set criteria to determine controls: who should and should not see this data; identify the owner; the transfer method; the data life cycle; and the classification and declassification of data. It is not best practice to provide access to all of your data; you have to identify the permission level, per record level, table or object level, database level, storage level, and platform level. Next, we have cloud network security. You need to understand how to secure the communication layers and how to secure the network. This is in addition to encrypting data using any tunneling protocol, like IPsec, for example. First, we have traditional VLANs. We have multiple VLANs.
Each VLAN has a group of resources, and each VLAN is protected by a firewall, an intrusion detection system, and an intrusion prevention system. Next, we have micro-segmentation; it's defense in depth. If someone breaches the perimeter firewall, he still will not be able to get to the servers. North-south traffic is the traffic going from the front-end servers, like the DMZ, to the application level and then to the databases. This is considered only 20% of enterprise traffic, but surprisingly it is the main focus of security. East-west traffic is when a front end talks to another front end, or an application talks to an authentication repository on the same level. It is traffic inside the data center itself, server to server. What micro-segmentation proposes is to separate each VM and server into its own security group, so you have to traverse a firewall and security policy to pass from one system to another. So the general cloud security best practices are: one, strong passwords, phrases not characters, so always try to make it a full sentence; multi-factor authentication for critical users, so always try to make it multi-factor for critical users and admins; privileged access management for all privileged users, meaning all administrators are considered privileged users; and privileged access workstations for administrators, dedicated workstations just for doing the administration work. In this section, you learned about cloud security, a high-level overview of the risks associated with migrating your workloads to the cloud. Thank you for watching, and see you in the next section. 13. Develop A Cloud Solution Build server Requirements Practice: Now we move on to the last step of identifying our application and developing the application requirements.
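Before moving into the server-requirements practice, the micro-segmentation model described in the security lecture (default-deny, with explicit allow rules between security groups) can be sketched as follows; the group names and ports are hypothetical:

```python
# East-west and north-south traffic is denied unless a rule explicitly
# allows it (default-deny, the core idea of micro-segmentation).
ALLOW_RULES = {
    ("dmz-web", "app", 443),    # north-south: web tier to application tier
    ("app", "db", 5432),        # application tier to the database
}

def is_allowed(src_group, dst_group, port):
    """Check whether traffic between two security groups is permitted."""
    return (src_group, dst_group, port) in ALLOW_RULES

# Server-to-server traffic on the same tier is blocked unless listed:
# is_allowed("app", "app", 22) is False by default.
```

Note that the rule set is directional: allowing the application tier to reach the database does not automatically allow the reverse path.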
Now, at this point, we start creating the requirements at the server level. Since we will be migrating to IaaS (infrastructure as a service), we need to determine some basic components, so we will start by breaking down each application into specific servers. In our example, we are using email as our primary application, and we have DMZ, web, and database servers. For user connectivity, we have 1,200 users. For the migration, we already determined it will be, in this phase, IaaS. The operating system will be Windows. So now we have some information; next, we start adding additional information and additional requirements. In this case, we add the network requirements. We have the network design: the DMZ server needs to be in a DMZ, and the web and database servers need to be behind a firewall. We specify the network ports that we actually require, and we specify the upload and download per second. We keep adding the data requirements: how many transactions per hour will each server provide? Is this data used for analysis? What type of data is this, structured or unstructured? In the case of our database server, it will be structured data. We add the compute requirements. What is the RAM requirement? We have 64 gigabytes, and 128 gigabytes in the case of the database. For CPU cores, we add 16, and 40 in the case of the database. This is only for demonstration; use your own numbers based on your performance metrics. We move on, and keep adding additional requirements. Now we reach the storage requirements. We have logical storage, which will be direct-attached inside the VM, and physical storage, which will be 500 gigabytes, or in the case of the database, 12 terabytes, and the type of storage required: it will be object storage, and for the database server we require block storage. For the encryption, as mentioned earlier, we will use our own keys from our PKI to ensure that there is security and we have lower risk in case of a breach.
You keep adding to these requirements whatever you see fit; you add or remove whatever will best fit the situation you have and the server types you have. But at the end, you need to have a single view that contains all the server information and all the data required for any provider and any system integrator to plan your migration properly.
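That single view can be assembled programmatically. The sketch below renders a few rows mirroring this practice's table as CSV; the values are the demonstration numbers from the example, not recommendations:

```python
import csv
import io

# Rows mirroring the worked example's server-requirements table.
servers = [
    {"server": "dmz",      "network": "DMZ",             "ram_gb": 16,
     "cpu_cores": 4,  "storage": "object, 500 GB"},
    {"server": "web",      "network": "behind firewall", "ram_gb": 64,
     "cpu_cores": 16, "storage": "object, 500 GB"},
    {"server": "database", "network": "behind firewall", "ram_gb": 128,
     "cpu_cores": 40, "storage": "block, 12 TB"},
]

def single_view_csv(rows):
    """Render the requirement rows as one CSV 'single view' for providers."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

A flat export like this is easy to hand to a provider or system integrator, and just as easy to diff as the requirements evolve.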