AWS Cloud Migration For IT Professionals

Qasim Shah, Digitization and marketing expert

20 Lessons (2h 18m)
    • 1. Course Promo

      3:07
    • 2. Course Pre Reqs

      2:30
    • 3. S1 Cloud migration benefits

      9:58
    • 4. S2 Cloud Migration Strategies

      5:41
    • 5. S2 Migration process

      10:37
    • 6. S3 Migration Hub

      7:44
    • 7. S3 Discovering network infrastructure

      11:01
    • 8. Migrating a Database with DMS

      14:31
    • 9. Online and offline data migration

      9:14
    • 10. S3 Migration Acceleration

      4:53
    • 11. S3 EC2 recommendation

      3:23
    • 12. S3 Managed Services

      6:57
    • 13. S3 Migrating network infrastructure

      6:38
    • 14. S4 Data Migration

      5:14
    • 15. S5 Database Migration Service Use Cases

      4:36
    • 16. S5 Database migration service

      7:28
    • 17. S5 Schema Conversion Tool

      3:57
    • 18. S6 DMS Best Practices

      9:12
    • 19. S6 Server migration service

      4:11
    • 20. S6 VMware on Cloud

      7:07

About This Class

Is your organization migrating to the AWS cloud?

Are you an experienced AWS platform user, familiar with virtualization technologies or already working with VMs, who wants to gain fundamental and intermediate level skills and enjoy a fascinating, high-paying career?

Welcome to the AWS Cloud Migration For IT Professionals course. Hands-on learning and quizzes: learn from industry professionals!

Cloud migration has become a hot topic in most IT departments and organizations. With more and more organizations deciding to shift to the cloud, this is the perfect course for gaining the hands-on knowledge to design and implement migration strategies that benefit your organization.

In this course you will learn:

  • Understand the benefits of migrating to the cloud

  • Describe the various cloud migration strategies

  • Navigate the migration process and dashboard

  • Migrate resources, workloads, databases, and users

  • Work with CloudEndure and the migration tools offered by AWS

  • Use migration tools in AWS to migrate to the cloud

  • Complete hands-on migration labs, quizzes, and much more...

See what our students say: “It is such a robust course that I don’t need to take any other course but this one to learn all the important concepts about migrating to the cloud with AWS, and I would without a doubt recommend it to everyone.” - Michael Norman

“This is such an awesome course. Clear instruction, and I loved every bit of it. A wonderful learning experience!” - Jill Lehman

Join thousands of other students and share valuable experience!

Why take this course?

Learn how to use the AWS migration tools to design and implement migration strategies, and gain a solid understanding along with hands-on, real-world learning experience in this course. As an AWS, Microsoft, and Cisco certified professional and a senior Enterprise Architect and Project Manager managing and deploying enterprise-level IT projects, my experience with AWS and DevOps has been phenomenal, and I am excited to share my knowledge and transfer these skills to my students.

Enroll now in the Migrating to the Cloud with AWS course and revolutionize your learning. Stay at the cutting edge of new skills and enjoy bigger, brighter opportunities.

See you in class!

Qasim Shah

Transcripts

1. Course Promo: Data is the cornerstone of successful application deployment and analytic workflows, including machine learning. When moving data to the cloud, you need to understand where you're moving it for different use cases, the types of data you're moving, and the network resources available, among other considerations. AWS has helped thousands of organizations, including enterprises such as GE, the Coca-Cola Company, British Petroleum, Samsung, News Corp and 21st Century Fox, migrate to the cloud and free up resources by lowering IT costs while improving productivity and business agility. Migrations to AWS include moving any workload, whether it's your application, your website, your databases, storage, physical or virtual servers, or an entire data center, from an on-premises environment into the AWS cloud ecosystem.

Welcome to this course on AWS Cloud Migration for IT Professionals. My name is Syed, and together with Qasim Shah and team ClayDesk, we're excited to bring you yet another real-world, practical course on migrating workloads to AWS. Let me walk you through the course agenda. We start by looking at cloud migration benefits and the main migration strategies. Then we dive into the actual process, from initial assessment, readiness and planning, and migration through to operations and optimization. Next, we take a look at the AWS migration dashboard and how to migrate resources to the AWS cloud by working with CloudEndure and various other migration services, including online and offline migrations. Then we take a deeper look at server migration, VMware migration on AWS and how it actually works, migrating users, and much more. Not to mention we have several quizzes, assignments and valuable resources that you can download and benefit from.

Who is this course for? We've designed it for IT professionals who are already working within the AWS ecosystem and would like to gain additional hands-on experience with AWS cloud migration strategies. If you're a beginner and up to the challenge, then jump right in; we welcome you. Why should you take this course? It provides hands-on, real-world migration scenarios and will increase your knowledge, skills and abilities by giving you a solid understanding of migrating workloads, resources, databases and users to the AWS cloud. We value your feedback, and with regular course updates you will find new lectures from time to time so you can keep up with the latest changes and announcements. Team ClayDesk teaches over one million students online, and we look forward to having you on this learning journey. So if you would like to increase your skill set and see the migration process through to the end, what are you waiting for? Click the enroll button now and we will see you in class.

2. Course Pre Reqs: Hi everyone. Before we get started with this course, I want to take you through some of the prerequisites required for you to do the labs and to follow the material, because this is an intermediate to advanced level course. First and foremost, you will need access to AWS. You can use a free tier account; most of the services I'll be showing are free tier, some are chargeable, and I've noted what is and isn't chargeable as we go.
So first and foremost, you'll need access to the AWS free tier account. Secondly, you need to be familiar with some of the services offered by AWS. For example, you need to know what EC2 instances are and how to create and configure them, because the Database Migration Service utilizes EC2 instances. You also need to be familiar with IAM, Identity and Access Management, in terms of users, accounts and policies and how they operate within AWS. Additionally, you will need to be familiar with your on-premises infrastructure: what servers and virtual machines are, and what VMware and vSphere are. You'll need a good grasp of infrastructure in general, along with resources such as EC2 and IAM within AWS, and of databases, because we will be going through database migrations. You'll need to understand how databases operate and how they're structured, and the differences between, say, an Oracle database, a SQL Server database and a PostgreSQL database. You don't need an in-depth understanding; you just need to know the differences and have a good grasp of what's required when migrating heterogeneous or homogeneous databases, for example if you're migrating from Oracle to SQL Server. Those are some of the prerequisites to keep in mind if you want a clear understanding of what I'll be covering throughout the rest of this course. If you don't have a good grasp of them, I'd highly suggest you take some time to understand these resources first, so you can follow along with the migration process and strategies; without these basic concepts, the migration process and strategies will be a little difficult to understand.

3. S1 Cloud migration benefits: Hi everybody, and welcome to this lesson on the different benefits of migrating to the cloud. When you're migrating applications, data and infrastructure, you want to use the cloud services that best meet your and your organization's needs, and you want the migration to go as smoothly as possible. To get the most out of the cloud once migrated, you need to know that you're getting as many of its advantages as possible. I've tried to list as many as I can; this is not an exhaustive list by any means, but these are some of the top benefits I believe organizations gain by migrating to the cloud.

First and foremost is faster deployment times. Migrating to the cloud means you should be able to deploy your apps, services and infrastructure more quickly. Many of the services let you provision servers and other resources within a few steps, even within a few seconds; as we'll see, in AWS it's essentially a couple of clicks and you have an entire server provisioned for you. That's obviously a much simpler process compared to buying the servers, installing the OS, racking them in your data center and so on.
Even for medium to large organizations, just procuring a server is a multi-week process, so the cloud significantly speeds that up.

Next is enhanced security features. Most cloud providers, and especially AWS, take care of some of the tougher security issues, such as keeping unwanted traffic outside a specific scope from accessing the machines on which your data and apps reside, and ensuring automatic security updates are applied to their systems so they aren't vulnerable to the latest known security threats. One thing you do need to keep in mind is that you will still need your own security policies in place when you're configuring the servers, the virtual machines and the applications. That does not mean you shouldn't have security in place; it just means that some of the inherent security features, such as physical security, or the managed services you opt in for with AWS, such as the managed database services, come with built-in security, so those things you do not need to worry about. For example, hardware and software firewalls are all there in AWS, but they still need to be configured and operated by you if you're running your own servers; if you're using the managed services, then that is handled by AWS.

Next is less infrastructure complexity. Cloud systems tend to peel away the complexity of the infrastructure underlying the architecture being used and make it very easy to provision new machines. Like I mentioned a few minutes ago, in AWS a couple of clicks essentially provisions a new server; if that's not less complex, I honestly don't know what is.

Then you have ease of monitoring, or built-in status monitoring. A lot of the services provided by AWS can provide monitoring, so you can be notified when an app or a machine has potential issues or is actually experiencing an outage. This saves a lot of time compared to physically sitting there and monitoring each and every one of your resources. AWS has many management tools that allow you to do this, such as CloudWatch, which sends automatic notifications when certain things happen, for example when the CPU on some of your servers, or your memory usage, goes above a certain threshold. You don't have to worry about servers going down without anyone being notified, and the best part is that if a server does go down, you can easily provision a new one within a few seconds.

Then you have automatic backups and logging of key metrics. Closely related to monitoring, backup and logging services are very important, especially in today's day and age when data matters so much. This makes performing disaster recovery very easy and simplifies the process; the backups allow you to get things up and running in no time.

Then you have centralized management, or what some call a single pane of glass. Good cloud service providers such as AWS make everything appear in a single dashboard where you're able to see anything and everything within your entire infrastructure, and you're able to do just that within the AWS ecosystem.
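To make the monitoring point above concrete, here is a minimal sketch in Python with boto3 (the course itself works in the console, so treat this purely as an illustration) of a CloudWatch alarm that notifies an SNS topic when average CPU on an instance stays above a threshold. The region, instance ID, topic ARN and threshold are placeholder values, not anything from the lessons.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical identifiers -- substitute your own instance ID and SNS topic ARN.
INSTANCE_ID = "i-0123456789abcdef0"
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:ops-alerts"

# Alarm when average CPU utilization exceeds 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[ALERT_TOPIC_ARN],  # SNS notification fires when the alarm triggers
)
```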
Second to last, but certainly not least, is reduced cost. When you're provisioning servers on AWS you pay as you go, and you can scale up and scale down based on your need and your demand. For example, say you have an e-commerce website and you know a big shopping event like Black Friday is coming up, or you're launching a new product, and you're expecting a significant spike in traffic and in the number of visitors. If you were not in the cloud, that would mean buying a couple of extra servers to make sure you meet that demand, and that money could essentially go to waste if the traffic isn't going to last. If the traffic isn't sustained, the extra servers and hardware you bought and provisioned sit idle until the next spike, so you've wasted a lot of money and resources getting them up and running. All of that is simplified in AWS and in the cloud: you can provision servers to scale up when you have a spike in traffic, and when the spike goes away the servers automatically get decommissioned, and you only pay for what you use. When that spike happens and you provision five new servers, you pay for those five servers as long as the traffic is there; when the traffic drops, those five servers drop, and so do your costs.
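As a rough illustration of the elasticity just described, the sketch below (Python with boto3; the group name and region are assumptions, and the Auto Scaling group is presumed to already exist) attaches a target-tracking policy so capacity is added during a traffic spike and removed again when it subsides.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Hypothetical Auto Scaling group name -- assumes the group already exists.
GROUP_NAME = "webshop-asg"

# Target tracking: AWS adds instances when average CPU rises above the target
# and removes them when traffic (and CPU) drops, so you only pay during the spike.
autoscaling.put_scaling_policy(
    AutoScalingGroupName=GROUP_NAME,
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```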
And last but certainly not least is saving the planet, as I like to call it, or reducing your footprint. If you are a large organization, you'll have to set up one or more data centers, which will definitely increase your carbon footprint. A lot of MNCs and large organizations spanning the globe are becoming more and more conscious of actions that have a negative impact on the environment, and they have CSR initiatives that push them toward actions that reduce their carbon footprint. By utilizing data centers and hardware that are already there, provided by AWS or any cloud provider, you're able to decrease the amount of hardware, and the environmental and carbon footprint, your organization puts out. So again, these are just a few of the benefits; it's not an exhaustive list by any means, but they are some of the top reasons I believe are core to migrating to the cloud.

Now, it's not all upside. There are a few drawbacks to migrating to the cloud, because it isn't for everybody, and these are some of the drawbacks I've come across while migrating certain organizations, or consulting with them on migrating, to the cloud. First is data sensitivity. A lot of organizations work with extremely sensitive data, and there might be certain laws, or they might be in certain jurisdictions, that bar them from uploading that data to the cloud. In some countries in the Middle East, you're not able to take data out of the country, so if AWS or Microsoft does not have a data center in that country, you essentially cannot host that data in the cloud and need to hold it in your own data centers. In that particular situation, moving to the cloud would not be viable. The second drawback: if you have a very small setup, for example everybody works remotely and you essentially do not have a server environment, then it does not really make sense to go to the cloud, because there's essentially nothing to migrate; everybody is working in a workgroup on their own laptops, remotely. In that case, migrating to the cloud would not reap you the benefits I described before. Next, there's latency. If you migrate to the cloud and put all your servers there, and you don't have good internet or cloud connectivity, your users are going to experience latency compared to when the servers were in-house or sitting on the same network. Depending on the type of connection you have and where you are geographically, that can be a drawback. The second-to-last one is loss of control: those physical servers are not under your physical control, so you can't sit in front of the server or pull the hard drives out yourself. You do lose that sense of control over your own hardware, which is a potential drawback for an organization. And lastly, switching, which is more of a change management or cultural shift, but it's a drawback because a lot of organizations are hesitant due to the amount of training and change management they would have to do. If that is the case, though, you're most likely a large organization, and the benefits far outweigh the drawbacks.

4. S2 Cloud Migration Strategies: Hi everybody, and welcome to this lesson on the different strategies available, or that we could employ, when migrating to the cloud. There are quite a few different strategies out there; what I'm going to do is take you through some of the more common ones employed by a lot of organizations. The first is something referred to as rehosting, which is also sometimes called lift-and-shift migration, because there's no code, or no advanced coding, that you need to do to migrate your infrastructure; you're simply lifting it up and putting it on the cloud. One of the main reasons organizations employ this is timing, because it's a very quick way to migrate everything to the cloud: each application or each VM is essentially picked up and put on the cloud virtually. There are lots of reasons and use cases where this would be employed. First and foremost, if you need to get it done quickly; if you're in a time crunch, this is the main strategy you'd want to employ when moving to the cloud. Or for applications and services that are already architected to use the AWS infrastructure: some applications require recoding in order to work on the cloud, but if your applications are already architected to utilize cloud services, this is a perfect choice. Or for businesses that require their applications but don't need to change their capabilities, which goes hand in hand with the previous case. Or for apps or databases with requirements that can only be met using AWS virtual machines: for example, if you're deploying a new database or a new app that requires a significant hardware upgrade, rehosting would be the best option, because you can get it done very quickly.
And then, lastly, moving apps without any code changes, which basically means your apps can operate efficiently on the cloud infrastructure and do not need any recoding.

Then we have something called refactor, which is also referred to as repackaging. This strategy involves some change to the application design, but no wholesale changes to the application code, so your app can take advantage of IaaS (infrastructure as a service) and PaaS (platform as a service) products within the AWS ecosystem, such as the managed database services. Why use this? When you have an existing code base and dev skills, and code portability is a concern, this is a good, quick way to modernize your app. If you've been running a legacy application, it's a way to not only move to the cloud but also modernize the app at the same time, or to drive continuous innovation by taking advantage of DevOps and containers, whether Docker containers or Kubernetes.

Moving further down the complexity chain, we have re-architect, which essentially means modifying or extending your application's code base to scale and optimize it for the cloud. Here you modernize your app into a resilient, highly scalable, independently deployable architecture and use AWS to accelerate that process; you can scale applications with confidence and manage your apps with ease. This is going further down the complexity chain in terms of changing the architecture of your applications. Why or when would you use it? To take advantage of existing application investments, to meet scalability requirements, to bring capabilities that are only available in the cloud, in AWS, into your organization or your app, and to improve agility by applying innovative DevOps practices; for example, CodeCommit and CodePipeline can be employed within your applications or your architecture.
And then lastly we have rebuild, which basically means building the application from scratch using cloud-native technologies. The AWS platform as a service provides a complete development and deployment environment in the cloud without the expense and complexity of software licenses, the need for underlying app infrastructure, or even middleware resources. With this cloud migration strategy, you manage the applications and services you develop, and AWS essentially manages everything else. This way you can put your resources into developing the app or the program and not really have to worry about what underlying infrastructure it will run on. Why and when would you use this? For rapid development when an existing app is slowing you down and you need to take your business to the next level, for building new apps using cloud-native technologies, or for building innovative apps that take advantage of IoT, AI or blockchain. Blockchain can be done off the cloud, but a lot of the cloud services, especially the AWS blockchain services on offer, make life a lot easier. It's also for expediting innovation and, as with the previous strategy, applying innovative DevOps practices. So there are four main strategies that a lot of organizations employ, rehost, refactor, re-architect or rebuild, going from the simplest to the most complex, which is rebuilding. It just depends on your organization, your use case, your business, and what you're trying to do. If you're simply trying to move to the cloud and everything you have can be migrated without any issues or modifications, then rehosting is obviously likely to be the right fit. But if you are an organization that's been running a lot of legacy applications, then you might want to look at one of the other three, or even rebuild the entire application or infrastructure from scratch.

5. S2 Migration process: Hi everybody, and welcome back. In this lesson we're going to take a look at the process involved in migrating infrastructure to the cloud. Essentially, there's a four-phase migration process designed to help your organization approach the migration of a few tens, hundreds or even thousands of applications to the cloud. While each phase is a common component of a successful migration, these aren't discrete phases but an iterative process: as you iterate and migrate more applications, you drive repeatability and predictability into your processes and procedures and refine them, and the migration accelerates. There's a comprehensive portfolio of tools provided by AWS, and AWS has partnered with a lot of third-party organizations to provide automation and intelligent recommendations based on machine learning to simplify and accelerate each step of the four-phase migration process. The four phases are assessment, readiness and planning, the actual migration, and then ongoing operations and optimization. Let's take a closer look at each one.

First we have the assessment. At the start of your migration journey, you need to identify your organization's current level of readiness for operating in the cloud and the potential business outcomes of the migration. An initial understanding of your existing environment and infrastructure is really necessary, and it allows you to develop a business case for the migration, because migration is not a cheap process; it could cost your organization quite a lot of money depending on its size, and you want a strong business case to take to your management. Using data on the actual utilization of your on-premises resources, you can create a more accurate forecast of your TCO, your total cost of ownership, for running these workloads. There are lots of tools provided by AWS and third parties that assess your on-premises resources and build a right-sized, cost-optimized projection for running those apps on AWS. One is TSO Logic, which is essentially another company owned by AWS and provides a total cost of ownership projection for AWS based on your actual resource utilization. The Migration Hub, which we'll look at later in this course, generates right-sized EC2 instance, or virtual machine, recommendations for running your on-premises workloads. AWS can also help you develop the business case using the Cloud Value Framework, a proven method that delivers a compelling business case to sell to your management. Then there's also CART, the Cloud Adoption Readiness Tool, which helps you develop plans for cloud adoption and enterprise migrations. It assesses your readiness across a few perspectives, such as business, people, process, platform, operations and security.
Once you complete the survey, it generates a customized assessment report on your readiness that you can take and make part of your business case. It also has a pretty neat feature in the form of a heat map, and a radar chart that helps you score your organization's readiness. Those are the three main things you can use to assess whether your organization is ready, or even whether the cloud is the right decision, because if you're a large organization and your total cost of ownership comes to tens of thousands or even millions of dollars, the business case might not be as strong as you initially thought.

After the assessment is done, let's say it has proven successful and we're ready to migrate, the next phase is readiness and planning. In this phase, you address the gaps in your organization's readiness that were uncovered during the assessment. You analyze the environment, create a map of the interdependencies between services, infrastructure and applications, and determine your migration strategies, which we looked at in an earlier lesson: are you going to rehost, or are you going to rebuild? That allows you to build a detailed migration plan with priorities for each application. For that, you set up a secure, well-architected AWS environment and account, which is also referred to as an AWS landing zone. As part of the readiness and planning phase, you create a migration plan, which includes building experience through initial migrations and refining your business case, and you also focus on building your baseline environment, driving operational readiness, and building your skills in the meantime. One thing to especially keep in mind, which is critical to migration, is collecting application portfolio data and rationalizing applications using the common migration strategies, that is, rehosting, re-platforming or rebuilding; you have to make sure your applications are ready to be hosted on the cloud. There are lots of different services provided by AWS in this phase. There's the Application Discovery Service, which automatically collects and presents detailed information about any and all application dependencies and utilization to help you make more informed decisions about which strategy is right for your organization. There are also third-party partners, such as RISC Networks, Cloudamize, ATADATA and Deloitte, to help you in this phase. The Migration Hub automates the planning and tracking of application migrations across multiple AWS and partner tools, and it acts as your main dashboard during the migration process. Then there's the AWS Schema Conversion Tool, which makes heterogeneous database migrations predictable by automatically converting the source database schema, and the majority of the database code objects, to a format compatible with the target database. Say you're migrating from an Oracle database to Amazon Aurora: you would use the Schema Conversion Tool to make sure your schema is converted to be compatible with the Aurora database. The landing zone, like I mentioned, helps you set up a secure, multi-account AWS environment based on best practices.
Before you start migrating your first application, the landing zone helps you set up your initial security baseline. And lastly, there's Control Tower, which helps set up an automated landing zone, a well-architected, multi-account environment. Not only is everything automated and easily done with a few clicks, it makes migration a lot easier.

After the assessment and the readiness and planning are done, it's time to actually migrate our resources. In this phase, the focus shifts from the portfolio level to the individual application level, and each application is designed, migrated and validated. You'll need the capability to migrate one or thousands of applications from different source environments, whether physical or virtual, to AWS. These applications typically involve widely used open-source databases, and you'll likely also require a one-time migration of a large volume of data to AWS; again, the Migration Hub helps you manage all of that. The best approach for many applications is to rapidly move them to the cloud and then re-architect them in AWS. CloudEndure Migration quickly rehosts large numbers of machines from multiple source platforms, again either physical or virtual, to AWS without worrying about compatibility, performance disruption or long cutover windows. For situations where you cannot install an agent-based migration service on your servers, the Server Migration Service provides an agentless service that makes it easier and faster to migrate thousands of on-premises workloads to the AWS environment, essentially via snapshots. And if you have VMware Cloud Foundation based environments, VMware Cloud on AWS quickly relocates hundreds of apps virtualized on vSphere to the AWS cloud in just a matter of days, and, the best part, it maintains consistent operations with your on-premises environment. So it's a very robust set of software and tools that AWS has developed for the migration process.

After the migration is done, the job is obviously not finished. Then you have ongoing operations and optimization, making sure the apps and services that were migrated to AWS are operational and optimized. At this stage you efficiently operate, manage and optimize workloads in the cloud. Ideally, you build on the foundational expertise you've already developed, and you can use the AWS management and governance services for end-to-end IT lifecycle management of both your AWS and non-AWS resources. AWS Managed Services can essentially help you accelerate your migration by providing ongoing management, cost optimization and operations for your infrastructure. So if you don't have the expertise to migrate your infrastructure or applications to the cloud yourself, you can also use AWS, through their managed services and their management and governance offerings, to help you migrate and optimize your applications. Those are essentially the four main steps in the process of migrating to the cloud: the assessment, the readiness and planning, the actual migration, and then the operations and optimization.
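For the agentless Server Migration Service mentioned in the migrate phase above, here is a minimal boto3 sketch of what scheduling a replication job can look like. It assumes the SMS connector appliance has already been deployed and the server catalog imported; the role name, frequency and region are illustrative assumptions, not values from the course.

```python
from datetime import datetime, timedelta

import boto3

# Server Migration Service client; the region is a placeholder.
sms = boto3.client("sms", region_name="us-east-1")

# Pull the server catalog gathered by the SMS connector appliance.
servers = sms.get_servers()["serverList"]
for server in servers:
    print(server["serverId"], server.get("serverType"))

# Schedule an agentless, snapshot-based replication job for the first catalogued server.
# Parameter values are illustrative; "sms" is a hypothetical IAM service role name.
if servers:
    sms.create_replication_job(
        serverId=servers[0]["serverId"],
        seedReplicationTime=datetime.utcnow() + timedelta(minutes=30),
        frequency=12,  # take a fresh snapshot every 12 hours
        roleName="sms",
        description="Demo agentless replication job",
    )
```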
6. S3 Migration Hub: Hi everybody, and welcome to this lesson on the Migration Hub. So now we have a good understanding of cloud infrastructure, of migration strategies, and of why we should or should not migrate to AWS. The Migration Hub is basically the main tool provided by AWS to help you migrate your resources to the cloud, to AWS, in a very streamlined fashion. It's the main dashboard AWS provides that essentially lets you manage your entire migration process. One thing to note: you will see CloudEndure, which is a free migration service we'll look at in the next lab and which helps you migrate your infrastructure to AWS. As I mentioned in a previous lesson, it's a company owned and operated by AWS, which is why it's very seamless to use CloudEndure to migrate within AWS. In the main dashboard you can see your most-used tools, and the migration status in terms of the servers and applications that have been started, are in progress, and are complete over time, so it gives you a very good, detailed view of your entire migration ecosystem. On the right-hand side you have quick access to a lot of the different tools provided by AWS, which we briefly discussed: the connectors, the tools, the programs, assistance from AWS if you want it, and so on. On the left-hand side are all the different options for using the dashboard.

The main steps for migrating to the AWS ecosystem, or to the cloud in general, are these: the first step is always discovery, where you discover all the different infrastructure and applications operating within your environment. After the discovery is done, that's when you go ahead and migrate all of that infrastructure to AWS. There's also an option for assessing, which is a good tool we'll take a quick look at in an upcoming lesson: it gives you recommendations on what type of virtual machines or hardware you should use in AWS based on your current hardware. It's a very good tool, especially if you're looking at a rebuild strategy in your migration.

In the Discover section we have multiple options: servers, applications, data collectors, and tools. The servers page gives you a good overview of how many servers are in the migration process, which servers are up, which you're including and which you're not. As you can see, there's an error here, because I'm in the process of deleting this and going back to the original state. Each server is grouped into what's called an application. Applications are a good way to group various servers, because most of the time, say you're working in a web environment, you'll have multiple servers acting as web servers, others acting as database servers, others as file servers, and so on. It's good to group those servers into applications so you can logically move your infrastructure to the cloud. Obviously, that's not the case for every organization.
You might only have one server and essentially one application, because you have one server that does everything for you, or you have one server that's a web server and another that's a file server, and so on. It's not a requirement to have multiple servers within each application, but servers are grouped into applications. Then you have the data collectors, which are basically the agents running on the servers in your network to collect data about those servers; we'll look at how to use the data collectors to gather information about the hardware and applications within our infrastructure. And here are the three main tools provided by AWS.

For discovery, we can either import our infrastructure into AWS manually, so for example if we don't want to run any kind of discovery agent on our infrastructure and simply want to specify what should be imported into AWS and what should be provisioned, we can import a CSV file. If I click on the import template, here's basically what it looks like; we can populate it with as many servers as we want, and it gets very detailed in terms of the BIOS ID, IP address, MAC address and so on. I will include this template in the download section for you to view, and you're also able to download it directly from your AWS dashboard, so if you want to do it manually, you can. Or you can use a Discovery Connector or a Discovery Agent. The main difference is that if you're running a VMware environment, you'd want to use the Discovery Connector, because it's an agentless connector that works with vSphere and your vCenter. If you're not working in a VMware environment, you'd want to use the Discovery Agent, which gets installed on your physical servers or virtual machines and collects all the data; we'll look at how to use the Discovery Agent in the upcoming lab. You can also click here for more information on how each option differs: it gives a good breakdown of the costs (all three of them are free) and the supported operating systems for the import, the Discovery Connector and the agent. It's a very good table for understanding which one is better suited to what you're trying to do. You also have partner solutions, the various partners AWS works with that can help you migrate your infrastructure to AWS. So these are all the different tools AWS provides for discovery.
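For the CSV import route described above, a minimal boto3 sketch might look like the following. The bucket, key and file name are hypothetical, and the exact format expected for the import URL should be checked against the Application Discovery Service documentation.

```python
import boto3

# The discovery APIs are called in your Migration Hub home region; this one is a placeholder.
discovery = boto3.client("discovery", region_name="us-west-2")
s3 = boto3.client("s3", region_name="us-west-2")

# Hypothetical bucket and key; the filled-in import template is uploaded to S3 first.
bucket, key = "my-migration-imports", "on-prem-servers.csv"
s3.upload_file("on-prem-servers.csv", bucket, key)

# Start the import; Migration Hub parses the CSV rows into server records.
# The s3:// URL form is an assumption; verify the expected format in the API reference.
response = discovery.start_import_task(
    name="manual-server-import",
    importUrl=f"s3://{bucket}/{key}",
)
print(response["task"]["importTaskId"], response["task"]["status"])
```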
Then you have the Migrate section, which takes you through your migration process: what's currently pending, what's completed, and whether anything has errored. Here you'd see all your applications: the file servers, the database servers, and so on. The three tools from before, the import, the Discovery Connector and the agent, are for discovering your infrastructure. After the infrastructure is discovered, there are various tools to actually migrate it to AWS. We have the Server Migration Service and we have CloudEndure; the Server Migration Service is built into the AWS ecosystem, while CloudEndure is a separate organization but also an AWS company. You also have third parties, such as RiverMeadow, that you can utilize to help you migrate your infrastructure to the cloud. AWS also has the Database Migration Service, which is specifically for helping you migrate your databases to the cloud. So after your discovery, you can use any one of these services or tools provided by AWS or third parties to actually perform the migration of your infrastructure, essentially copying everything that's on your servers and moving it to the cloud. That's essentially the Migration Hub: a very good and powerful tool to help you manage your entire migration process.

7. S3 Discovering network infrastructure: Hi everybody, and welcome to this lesson on discovering your network infrastructure through the Migration Hub. There are a few things we need to do before we get started. The first is that on the Windows Server we need to modify the firewall to allow a certain port, port 443, through; that is what the agent uses to communicate, so we need to ensure our firewall has that port open. For the purposes of this tutorial, I'm working on a Windows Server 2019 (or 2016) machine to demonstrate how we can discover the network infrastructure and have it migrated to the AWS environment. As you can see, I have a virtual machine open running Windows Server 2019 Standard Edition, an evaluation version, but that doesn't make a difference; the only real difference is that Hyper-V isn't available in the evaluation version, which doesn't matter here, especially when working inside a virtual machine.

The first thing I'm going to do is create a new firewall rule that allows the TCP port I just mentioned, so the agent can work properly. I'll create a custom rule and click Next, apply it to all programs, and for the protocol type specifically choose TCP. I could allow all ports, but that would be a security no-no, so I'm going to specify port 443 and click Next. For which local IP addresses the rule applies to, I'm just going to allow all for the purposes of this demonstration; in a production environment, for security purposes, you'd want to restrict it to certain IP addresses. I'll allow the connection, give the rule a name, and click Finish. There we go, our firewall rule is created and we can move on to the next step.

The next part is installing the Visual C++ redistributable so the discovery agent can run. I've navigated to the Microsoft Download Center, and what I want to download is the vcredist x86 package. Make sure it's the x86 version and not x64, regardless of the architecture you're working on; even if you're on a 64-bit OS, download the x86 version, because the Discovery Agent is built specifically against the x86 architecture. I'll download this file. Next, I'll navigate to my AWS console and go into IAM.
In IAM I need to create a user that has credentials, so I'll go to Users and add a new user, since I don't have any created at this point. I'll call it agent-discovery for the purposes of this demonstration. I want to make sure it has programmatic access, and just for the sake of it I'm also going to allow management console access, but you can get by with only programmatic access if you want. Let me quickly change the display settings of this virtual machine so you can see the entire screen; I'll set the resolution to 1600 by 1200. There we go, back in my console, and now you should be able to see the whole screen and all the options. As you can see, I've selected programmatic access and, for the sake of it, console access; I'll give it a password for logging in and uncheck the box requiring a password reset, since we don't need that for this user. As for permissions, we want to attach existing policies to this user so the Discovery Agent has the proper permissions to interact with the AWS environment. Click on attach existing policies and, instead of scrolling through them, search for "AWS application discovery". There we go: we have two options, and we're going to select both and make sure the user is added to these policies. Click Next; if you want to add tags you're more than welcome to, but I'm just going to create the user. Now, a couple of things: make sure you note down the access key ID and the secret access key, because they will be used by the Discovery Agent for logging in and interacting with your AWS environment. I'm going to quickly copy both into Notepad so I have them for reference: the access key ID and the secret access key. Make sure you do copy the secret, because once you leave this screen you will not be able to see it again and you'd have to create a new one. So now we have those prerequisites done: the Visual C++ package downloaded and the user account created in IAM.
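The same IAM setup can be scripted. Below is a minimal boto3 sketch that mirrors the console steps above; the user name matches the walkthrough, while the managed policy names are taken from the AWS managed policy list and should be verified in your own account.

```python
import boto3

iam = boto3.client("iam")

# User name matching the console walkthrough above.
USER_NAME = "agent-discovery"
iam.create_user(UserName=USER_NAME)

# Attach the two AWS managed policies that the "application discovery" search surfaces.
# (Policy names assumed from the AWS managed policy list; verify them in your account.)
for policy_arn in (
    "arn:aws:iam::aws:policy/AWSApplicationDiscoveryServiceFullAccess",
    "arn:aws:iam::aws:policy/AWSApplicationDiscoveryAgentAccess",
):
    iam.attach_user_policy(UserName=USER_NAME, PolicyArn=policy_arn)

# Programmatic credentials for the agent installer; record the secret now,
# because it cannot be retrieved again later.
key = iam.create_access_key(UserName=USER_NAME)["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])
```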
Next, I'll head over to the Migration Hub and click on Discover, because I want to discover my network infrastructure. You can use either the main button or the discovery tools provided by AWS; again, there are three options, and I'm going to pick the agent for Windows, which downloads the agent to my Downloads folder or desktop, depending on where you save it. So that's the third step: the first was downloading the Visual C++ package, the second was creating the IAM user for agent discovery, and the last is downloading the Discovery Agent to get it up and running. Now I have both items on my desktop.

Next, I'll open the command prompt and install the AWS Discovery Agent with the install command, passing the region, the key ID and the key secret; what you see here is copied and pasted from the Notepad file I saved. One thing I did forget: we have to install the Visual C++ redistributable before we install the agent, so I'll quickly double-click on that; it's a pretty quick install. Once that's done, we can run the command to install the Discovery Agent, which is again a very simple and quick process, and click Finish when it completes.

Now that that's done, we'll go into Data Collectors, because we want to collect data about our infrastructure and servers, and click on the Agents tab. Here we can see the agent has automatically been populated in the dashboard: the agent ID, the hostname, the collection status, the health and so on. What we want to do is start the data collection. There's also an option to enable data exploration in Amazon Athena; if you have a large footprint of servers and applications and want to use the powerful querying of Amazon Athena, you can enable that. We'll get the agent started so the data collection can happen properly. If I go back into Server Manager, you can see two services have automatically started: the AWS Discovery Update service and the Discovery Agent. My server is populated in the Migration Hub, and you can see the status is healthy. Once collection starts, servers begin appearing in the Servers section; at this point I only have one server in my virtual machine, which is why only one shows up, but if you had 10, 50 or 100, they would all slowly start appearing as the agent discovers the various infrastructure within your environment. It populates all the detailed information about your server. To verify, you can see that the IP address in my AWS dashboard and the actual IP address of my server match, 10.0.2.15, so we can confirm it is indeed the same server the Discovery Agent found, and we can see the cores, the CPU, the disks, really detailed information. For even more detail, we can go to Amazon Athena, where a database has already been created for us for our server, and run different queries against the data about our servers and infrastructure to learn more about what we have and how we can migrate it to AWS. That's essentially what the Discovery Agent entails: discovering what you have on your network and getting it populated in AWS, and the next step is migrating this information, or the server itself, into AWS.
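For reference, starting data collection on registered agents can also be done programmatically. This is a minimal boto3 sketch, not part of the lab itself; the home region is a placeholder.

```python
import boto3

discovery = boto3.client("discovery", region_name="us-west-2")  # home region placeholder

# List registered agents and check their health before collecting.
agents = discovery.describe_agents()["agentsInfo"]
for agent in agents:
    print(agent["agentId"], agent.get("health"))

# Start data collection on every registered agent (same effect as the console button).
agent_ids = [a["agentId"] for a in agents]
if agent_ids:
    discovery.start_data_collection_by_agent_ids(agentIds=agent_ids)
```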
8. Migrating a Database with DMS: Hi everybody, and welcome to this lesson on the Database Migration Service. As you can see, I've already logged into my AWS console and navigated to DMS, the Database Migration Service; this is its main dashboard. Before we get started, I obviously need a couple of databases created in order to do the migration, so I'm going to go over to RDS, the Relational Database Service of AWS, and create a database as my source. If you're doing this in a production environment, you will already have a source database, which will be your on-premises database or another database your organization runs on AWS, and then you'll have the target database, the one you're moving or migrating to. Since I don't have either, and this is just for demonstration purposes, I'm going to quickly create a database in RDS. I'll select some basic options: I'm going to use Easy create as the creation method, since I want to keep everything default; for the engine type, let's do MySQL; and make sure you stay in the free tier. If you're practicing this on your own, just make sure you select the free tier so the instance stays on a db.t2.micro, which is free tier eligible. I'll keep everything else standard and set my admin password. There we go. If you look at the default settings Easy create chooses for you, such as encryption, the VPC, the option group and the subnets, it puts everything on the defaults; if you pick the other creation option instead of Easy create, you can modify any and all of those options. I'm going to go ahead and create the database. Depending on the time of day and which region you're working in, it takes about 5 to 10 minutes for the database to become up and running. Right now it's creating; after creating it goes into backing up, and after the backup it becomes available. You can see I've created a database called database-1, and you can see the VPC it's currently in.
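Roughly, the "Easy create" choices above map to an API call like the following boto3 sketch. The identifier, credentials and storage size are illustrative values, not the course's exact settings.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")  # region is a placeholder

# Approximate API equivalent of the Easy create, MySQL, free-tier choices.
rds.create_db_instance(
    DBInstanceIdentifier="database-1",
    Engine="mysql",
    DBInstanceClass="db.t2.micro",   # free-tier eligible class
    AllocatedStorage=20,             # GiB, illustrative
    MasterUsername="admin",
    MasterUserPassword="ChangeMe-NotARealPassword1",
)

# Creation takes several minutes; wait until the instance reports "available".
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="database-1")
```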
Once I go back into the Database Migration Service, this is its main dashboard. It gives you some basic information, there's a quick video that shows how DMS actually works, some use cases, which we went through in earlier lessons, the benefits, and so on. What we want to do is create the replication instance. I'll give the replication instance a name; most of the time you'll want to keep these separate if you're migrating multiple databases. The description is for your own purposes. For the instance class, it defaults to a t2.medium, but you're able to select a t2.micro; just keep in mind that the t2.micro class does sometimes fail. You can check the pricing for the on-demand instances if you don't know it already; it shows how much you'll be charged for using them. The t2.micro is within the free tier, but the t2.medium is not, so keep that in mind. Sometimes, if you use the t2.micro, it does fail depending on the size of the database, even for test purposes; if you see it repeatedly throwing errors, that's usually why. Since I only have the default VPC, I'll use that. Multi-AZ: if you want redundancy, you can deploy across multiple availability zones. Publicly accessible: you want to make sure you enable this, especially if you're migrating from on-premises to AWS, so that you can connect to it from outside the AWS environment. For the availability zone, just keep it at no preference and AWS picks the zone that's best suited. For the VPC security group you can specify different security groups or keep the default; I have one I created plus the default, and by default it keeps the default one, but if you're doing a customized migration you can change that. Under maintenance you can specify when maintenance should occur, since the replication instance is where your database is replicated and staged in the interim before it gets copied over to the target instance you're migrating to. I'll keep everything default and create it.

Once I go back into RDS, I can see the source database is still creating; like I said, it does take 5 or 10 minutes, sometimes longer, to become available. There we go, and now we also have our replication instance created; you can see it's currently in the creating status, along with all the different information, and if you click refresh you can see the status change. This also takes about 10 to 15 minutes to complete. If I go into the replication instance, I can see the details: the status, the engine, the region it's using, and the CloudWatch metrics. You saw an error appear; it tries to connect right away and it takes a few seconds to connect, also because the instance is still creating, but the error goes away once it connects to CloudWatch. Here you can see all the settings we selected when we created this replication instance, and the last tab is the logs, which are empty since we haven't done anything yet. Back on the main dashboard we can see all the different replication tasks and their status: what's active, what has errored out, which ones are complete, and so on.
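The replication instance settings discussed above correspond roughly to this boto3 sketch; the identifier and sizes are illustrative, and PubliclyAccessible is enabled on the assumption that a real source would sit outside AWS.

```python
import boto3

dms = boto3.client("dms", region_name="us-east-1")  # region is a placeholder

# Free-tier-friendly replication instance; identifier and storage are illustrative.
instance = dms.create_replication_instance(
    ReplicationInstanceIdentifier="demo-replication-instance",
    ReplicationInstanceClass="dms.t2.micro",  # the t2.micro class stays in the free tier
    AllocatedStorage=50,                      # GiB of staging storage
    PubliclyAccessible=True,                  # needed when the source is outside AWS
    MultiAZ=False,
)
print(instance["ReplicationInstance"]["ReplicationInstanceStatus"])
```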
I had just jumped the gun a little bit in creating the endpoint. Right now the database is backing up - that's why the test failed, not because the replication instance had failed. But not to worry, we're still going to go ahead and create this endpoint, and as soon as the database is up and running, we will see that it does successfully create. Very well - like I said, the database and the endpoint are active now for database-1. You can see the status is active, even though the test failed earlier because it was backing up. As soon as it was able to connect and see that the database was actually active, and not just backing up, it was able to successfully connect and test the connection. So now we have our source endpoint created. Obviously, since we have a source, we also need a target. What I'm going to do now is go back into RDS and create another database for the target endpoint. You can see there's only one database listed here, since I only have one database created. In a production environment you could obviously connect to your on-prem database, but since there is no on-prem for this demo, I'm going to navigate back into RDS and create another database. This is going to be the database I'm migrating to. I'll go through the same steps as I did for database-1: I'll select Easy Create again just to keep everything default, and I'm going to go ahead and create the database. There we go. So now we have two databases in RDS: database-1, which will be the source, and database-2, which is going to be the target. We can see that database-1 is available and database-2 is backing up. Now that the first database is fully functional, we'd be able to connect to it through this endpoint from within the AWS environment - but not from outside, because it's not publicly accessible, since we used all the default settings. If we wanted it to be publicly accessible, we would have to do some customized settings when creating the RDS instance. Here are the availability zone, the VPC, the subnets, the subnet group, and the security settings - all of the different details for database-1 that we created in RDS, which is a MySQL database. All right, now I'll go back into my Database Migration Service and create another endpoint, the target endpoint. As you can see, database-2 is now showing up in the RDS instance list. I'll type in the admin password so it can connect, and I'll test the endpoint. The database is still backing up, so we'll see if the test actually runs successfully - sometimes it does, sometimes it doesn't, so let's run the test real quickly. If the status is "testing," it will most likely return a successful result. As you saw on the last one, it errored out right away because the database was backing up and it couldn't connect; but since this one is able to connect, it's most likely going to run a successful test - if it was going to error out, it would have errored out right at the beginning. Right now it's running the test... there we go, we see that the test is successful, it's able to connect. And now we have our source and our target - the source again being the database we want to migrate from, and the target being the database we want to migrate to. Now we're ready for the database migration task.
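Before moving on, here's a hedged boto3 sketch of the target-endpoint creation and the connection test we just ran through the console; the ARNs and server address are placeholders.

```python
import boto3

dms = boto3.client("dms", region_name="us-west-2")

# Target endpoint for database-2 (values are placeholders, as before).
target = dms.create_endpoint(
    EndpointIdentifier="database-2",
    EndpointType="target",
    EngineName="mysql",
    ServerName="database-2.abc123xyz.us-west-2.rds.amazonaws.com",  # hypothetical address
    Port=3306,
    Username="admin",
    Password="ChangeMe123!",
)

# Equivalent of the console's "Run test" button: the connection is tested
# from the replication instance, so both ARNs are required.
dms.test_connection(
    ReplicationInstanceArn="arn:aws:dms:us-west-2:123456789012:rep:EXAMPLE",  # placeholder
    EndpointArn=target["Endpoint"]["EndpointArn"],
)
```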
Now that we have our two endpoints, we want to go ahead and create a task. For the task identifier I'll just give it a quick name; the replication instance is the one we created, the source endpoint is database-1, and the target endpoint is database-2. Here is a checkbox, "start task on create" - we are able to uncheck this and start the task at a later time if we want. And if you click on the info link next to each task setting, it gives us some detailed information about what that setting is for: the target table preparation mode, whether to include LOB columns in the replication, and so on - again, if you want more information, click on the info link and it opens up a snippet on the right-hand side. We can also enable validation, and we can enable CloudWatch logs if we want detailed logging and to monitor everything through CloudWatch. For the table mappings, we can use either the guided UI or a JSON editor - it just depends on our preference. Under the advanced settings, we can create a control table on the target using a schema, we can set a history timeout in minutes, and we can adjust some control table settings - if you are going to migrate a large database, you want to make sure you have a control table so you can keep track of everything. Now, before we create it, we obviously need to add selection rules for this task. So we're going to add a new selection rule, and for the schema we're going to leave everything as the percentage sign - what this basically means is that it's going to include everything. We could specifically pick and choose which schemas we want to migrate and which tables we want to migrate, if we are not migrating the entire database at one time, but since we are migrating everything, we'll just leave everything to migrate over, and we're going to go ahead and create the task. There we go - the task is creating, and since we had the checkbox selected to start it on create, it will start right away. You can see the status has changed to running, and we can see the source, the target, the type - it's doing a full load of the entire database - the progress, the elapsed time, and the table count. So it keeps a detailed log of the status of your database migration. Once we go back into the Database Migration dashboard, we can see the percentages in terms of what's active, what errored out, what failed - the tasks, the instances, the endpoints. It gives very detailed information about your entire task, your entire migration process. There we go - we can see that the progress is 100%. We can see the tables loaded, the tables queued, and how many tables errored out, which we're able to see in the logs - if it wasn't able to match up the tables, or if there were any errors copying something over, it shows you all of that. All of that detailed information is viewed right within the migration, and in the table statistics of the migration task we can see a detailed breakdown of what was copied over, what was loaded, what was not loaded, and whether the tables were loaded or not. That way we can verify that the tables were modified and whether they completed or failed. Additionally, remember that we had set up the CloudWatch metrics.
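Before moving on, here is roughly what creating this same task looks like from code. The selection rule mirrors the console's percentage-sign wildcards (migrate every schema and table); the ARNs are placeholders you would take from the endpoints and replication instance created earlier.

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-west-2")

# Selection rule equivalent to the console's "%" wildcards: include every schema and table.
table_mappings = {
    "rules": [{
        "rule-type": "selection",
        "rule-id": "1",
        "rule-name": "include-everything",
        "object-locator": {"schema-name": "%", "table-name": "%"},
        "rule-action": "include",
    }]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="demo-migration-task",
    SourceEndpointArn="arn:aws:dms:us-west-2:123456789012:endpoint:SOURCE-EXAMPLE",  # placeholder
    TargetEndpointArn="arn:aws:dms:us-west-2:123456789012:endpoint:TARGET-EXAMPLE",  # placeholder
    ReplicationInstanceArn="arn:aws:dms:us-west-2:123456789012:rep:EXAMPLE",         # placeholder
    MigrationType="full-load",   # full load only, as in this demo
    TableMappings=json.dumps(table_mappings),
)
```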
If something had failed, it would also have triggered in CloudWatch, so it gives very detailed information about what was copied over and what was not. Now, since this was a very simple migration - the database was essentially empty, there was only one table in there - it went pretty quickly, and there were no errors and no spikes in data in terms of the tasks. But if you had a large database, then obviously you'd have more information populated for the tables being copied and the resources being used.

9. Online and offline data migration: Hi everybody, welcome back to our continuation of our data migration. In the previous lesson we basically looked at the two main options that we have for transferring our data from our on-premise systems to the cloud in our migration strategy, or in our migration process. So let's take a closer look at the options and tools that AWS has for both online and offline data migration. First, in terms of online, there's a main connection or a main option that a lot of large organizations use, which is AWS Direct Connect. Organizations select this option if they want a dedicated physical connection to accelerate network transfers between their data centers and the AWS data centers. So if you have an enterprise organization that has a good amount of data and wants direct connectivity between its on-prem servers and the AWS data centers, it chooses Direct Connect. It lets you establish dedicated network connections between the two networks, and it uses 802.1Q VLANs, which can be partitioned into multiple virtual interfaces. That allows you to use the same connection to access public resources, such as objects stored in S3 using public IPs, and private resources, such as an EC2 instance running in a VPC, which might be using a private IP in the AWS environment - and the best thing about it is that it maintains the network separation between the public and private environments. Essentially, when you're setting up the Direct Connect connection, what you're doing is joining your internal network with the AWS ecosystem. But obviously that's an expensive option for a lot of organizations. Another tool is AWS DataSync. It lets you transfer data very easily between your on-prem storage and an S3 bucket or an EFS file system, and it automatically handles a lot of the tasks related to data transfers that can slow down migrations or burden IT operations, including running your own instances, handling encryption, scripts, optimization, and so on. It can transfer data at about ten times the speed of open-source tools, and you can use DataSync to copy data over Direct Connect or internet links, depending on what you have set up. There's also an option for storage gateways, or partner gateways. A gateway basically sits on premises and links your environment to the cloud in AWS, so it's an ideal solution for hybrid scenarios where some storage might need to stay local for performance or security reasons, and some can be offloaded into an S3 bucket. So let's say you have massive amounts of archived data, right?
What a lot of organizations would do, or might want to do, is offload that archive data and utilize S3 buckets to store all of their archives and backups, while keeping their live data on-prem. Storage Gateway simplifies that on-prem or hybrid solution: your existing applications connect to a local gateway, and it uses industry-standard block and tape storage protocols to store data in S3 and even Amazon Glacier. A few things to keep in mind: data is compressed, it's secure, and it also has a virtual tape library configuration - a very robust system, and a lot of organizations use it to keep their tape backups or archives on the cloud. You also have partner products that do the same thing: AWS has partnered with a number of industry vendors on physical gateway appliances that help you bridge the gap between traditional backup and backing up to the cloud. Then you have S3 Transfer Acceleration. It basically makes public internet transfers to an S3 bucket a lot faster, so you maximize your available bandwidth regardless of distance or varying internet weather, and there are no special clients or proprietary network protocols. Simply put, you change the endpoint you use with your S3 bucket, and the acceleration is automatically applied - and when you create S3 buckets, you have the option of enabling transfer acceleration. Then there's also a tool called Amazon Kinesis Data Firehose. It's the easiest way to load streaming data into AWS: it can capture and automatically load streaming data into S3 and Redshift, so it enables real-time analytics with existing business intelligence tools and dashboards that you might already be using. The change is very minimal on your front end, but your back end is a lot more streamlined by using the AWS ecosystem. AWS also has a number of technology partnerships with vendors that make it very easy to bring your backups and archives into the cloud. The simplest way to move your data might be via an industry connector embedded in your existing backup software, but for a lot of large organizations that don't have the expertise in house, they can use one of the AWS technology partners to help them in their migration process.
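Going back to the Transfer Acceleration point for a second: enabling it and using the accelerated endpoint from code is a small change. A hedged boto3 sketch, with a placeholder bucket name and file, assuming the bucket already exists:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Turn on Transfer Acceleration for an existing bucket (bucket name is a placeholder).
s3.put_bucket_accelerate_configuration(
    Bucket="my-migration-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Use a client that targets the s3-accelerate endpoint, so uploads take the accelerated path.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("backup.tar.gz", "my-migration-bucket", "backups/backup.tar.gz")
```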
So those are the online tools available if you are choosing an online or hybrid option for your data migration. Now, like I mentioned in the previous lesson, sometimes adequate bandwidth or even a network connection is not available for transferring large amounts of data. Here AWS has three offline options. First of all, you have the AWS Snowball, which is a petabyte-scale data transport solution that uses secure appliances to transfer large amounts of data in and out of AWS. Basically, if you want to transfer data via Snowball, AWS sends a very rugged and heavy device to your physical location, you connect that device to your network, you transfer the data onto the device, and then you call up AWS; they come, pick up the device, bring it to their data centers, and transfer the data onto their servers. You also have the Snowball Edge. The main difference between the Snowball and the Snowball Edge is that the Snowball Edge has on-board computing capabilities - essentially it is a transportable server, so to speak, in layman's terms. So if you want those on-board computing capabilities, you would want to utilize the Snowball Edge; if you just want a simple hard drive, then that's what the Snowball is for. And then finally you have the Snowmobile, which is literally a semi-trailer used to transfer exabyte-scale data in a 40-foot shipping container. The Snowmobile addresses common challenges with large-scale data transfers, because if you're trying to transfer exabytes of data over the network, that's an extremely expensive and long task in terms of time. With the Snowmobile, it physically comes to your location, you transfer the data onto what is literally a semi-trailer, and it goes to the AWS data centers where the data is offloaded. So those are the main data transfer options for your migration process. Again, data is the cornerstone of any organization, and you should definitely plan how you're going to move your data from your on-prem servers to the cloud - and that can only be decided after you have developed a migration strategy, laid out your plan, and decided what is going to stay on-prem and what is going to go to the AWS cloud. Only after that can you actually decide what type of data, and which option, online or offline, you are going to be utilizing in your migration process.

10. S3 Migration Acceleration: Hi everybody, and welcome back. In this lesson we're going to take a look at a program that AWS has called the Migration Acceleration Program, and what it is. A lot of organizations are obviously migrating to the cloud, and for any number of reasons. While every organization is going to have its own unique drivers for why it wants to move to the cloud, every organization wants to have transparency and reduce its risk and its costs while migrating. That stays constant across all organizations, regardless of what's driving their move to the cloud. So this MAP, or Migration Acceleration Program, is basically designed to help organizations - mostly large organizations - that are committed to a migration journey achieve a range of benefits by migrating existing workloads to AWS. MAP has been specifically created to provide consulting support, training, and service credits to reduce the risk of migrating to the cloud, build a strong operational foundation, and help offset the initial cost of migration. They have a unique approach to the acceleration program: build the foundation to operate mission-critical workloads on AWS, and build capabilities that can be leveraged across a variety of projects, with AWS providing a number of resources to help support and sustain the migration efforts. Their methodology consists of basically a three-step approach. First, you have the MRA, which is the Migration Readiness Assessment phase. Here they determine the current state of your readiness to migrate and identify areas where you already have strong capabilities and where further development is needed to migrate at scale. In other words, what they're basically doing is evaluating your cloud readiness along a number of dimensions.
As we looked at earlier, they look at the landing zone, your operating model, security and compliance, migration process experience, your in-house skills, and so on, and they give you an assessment of where you are and where you want to be. After that comes the Migration Readiness and Planning phase, or MRP. Here they dedicate a consultant and team to help build a foundation for large-scale migration, and you gain the experience and expertise of AWS. They have a proprietary methodology and processes they use to get best practices implemented within your migration journey, and the whole point is to reduce the total cost of ownership and maximize your ROI. That's essentially the main reason a lot of enterprises use the Migration Acceleration Program: they're huge enterprises with global offices, huge amounts of data, and very complex infrastructure, and they want to migrate to the cloud but don't really know where to start. That's where the acceleration program comes in - AWS utilizes its expertise and global network to help you decide what strategy will be best for you, whether migrating is actually an option for you, and if so, what strategy to use for everything: your server migrations, your data migration, your database migration. So your A-to-Z of migration is laid out, and then finally, after the readiness and planning is signed off, they do the actual migration. Here is where the physical migration is done, and again, depending on the organization's size, that can take anywhere from one month to literally two years. I have seen organizations that have been migrating their on-prem resources slowly but at a regular pace, and it has taken them more than a year to transfer their resources from on-prem to the cloud. So keep this program in mind: if you are working for an enterprise, or consulting for an enterprise organization, and they don't have the expertise in house, they can bring in AWS to help them in their migration process.

11. S3 EC2 recommendation: Hi everybody, and welcome to this lesson on the EC2 instance recommendation service provided by AWS. Within our Migration Hub dashboard we have an option for EC2 instance recommendations. When we get started with this, there are a few options we can specify for how AWS recommends which EC2 instances we should be using. The first is the sizing preferences: we specify what type of utilization the server is going to have, then what region we're going to be launching the servers into, what type of tenancy we want in terms of shared or dedicated, the pricing model, and also whether we want to exclude any types of servers - so let's say you want to exclude the very expensive ones, such as accelerated or grid-computing instances, so it doesn't consider them; we can go ahead and check that box.
When we do this and export the recommendations, what it basically does is generate an Excel file that compares your existing servers - the ones discovered using the discovery tool we used a few lessons ago - to industry best practices, and it generates a recommendation for each server that's been discovered. As you can see, there's only one line, since only one server has been discovered - I only have one server running - but it will generate recommendations for each and every server you have discovered within your Migration Hub. It's a very detailed Excel file that gives you full recommendations for basically everything: your CPU, your memory, whether it should be reserved or dedicated, what type of hard disks you should have, and how large they should be. Again, these are only recommendations that AWS believes are appropriate, based on your input and on the servers that have been discovered - because remember, as the servers are discovered, the Discovery Agent also checks what your CPU utilization is, what your memory utilization is, and what your network bandwidth utilization is, and it compares all of those metrics. And here is a basic table that describes each one of those fields in a bit more detail, so you can see it's a fairly exhaustive list in terms of what AWS provides in its recommendations. I have found it very useful for a lot of organizations: if you're not only rehosting in your migration strategy but rebuilding, this is a very good tool to use, because you can use the industry best-practice knowledge that AWS has gained. With their extensive data centers and the many small to large companies that have used their hardware, they've developed some very good metrics to help you choose which hardware will be best suited for your usage. Again, this is not one-size-fits-all, and not every recommendation will be perfect, but if you are rebuilding your network infrastructure, it's a very good tool to give you an idea of what you should be utilizing and provisioning within AWS.

12. S3 Managed Services: Hi everybody, and welcome back. In this lesson we're going to take a look at the managed services provided by AWS in the migration process. For enterprise customers moving towards adopting the cloud at scale, some have the people in house and some do not. What AWS Managed Services does is operate AWS on your behalf, providing a secure and compliant AWS landing zone, a proven operating model, and ongoing day-to-day infrastructure management - it basically manages your cloud infrastructure on your behalf. By implementing their known best practices and their expertise to maintain your infrastructure, they help you reduce your operational overhead and risk, especially if you do not have the expertise in house. It automates a lot of the common activities, such as change requests, monitoring, patch management, and security tooling - basically everything your cloud services team would do if you had one, AWS does for you. So you're basically outsourcing not only your infrastructure by going to the cloud, but also the cloud management, or the management of your entire infrastructure.
There are lots of benefits - I've listed three of the main ones there for you. First and foremost, you have improved security and compliance, because it offers a step-by-step process for extending your security, identity, and compliance perimeter to the cloud, including some of the critical tasks such as Active Directory integration if you have a Windows environment, or, if you have an e-commerce website, PCI DSS compliance or GDPR compliance. So they can help manage all of that if you do not have the expertise. They also help you accelerate migration to the cloud, because it provides an enterprise-ready, proven operating environment, which enables you to migrate production workloads in a matter of days - something which might otherwise take months. They work with a lot of different partners and third parties to help make that migration process easier and smoother. And lastly, it also removes some innovation barriers. Enterprise DevOps is essentially a convergence of modern best practices and existing IT process frameworks such as ITIL, giving you speed and agility while maintaining governance - they maintain the security and the compliance controls. So Managed Services enables Enterprise DevOps by packaging everything into an IaaS, or infrastructure-as-a-service, model on a secure and compliant platform for organizations to get up and running on the cloud right away. How it works is essentially a fairly simple process: within the managed services you have the foundation, the migration, and the operation. Within the foundation, it works backwards from your desired operational outcomes to implement a virtual private cloud and build out the entire infrastructure - so everything you would do on your own in terms of planning and readiness, AWS does for you. Then there's the actual migration, which they can do on your behalf using some of their own managed services and tools and, if required, third-party tools - say if you have VMware vSphere, they would utilize those specific tools, or the Schema Conversion Tool if you have an Oracle database migrating to Aurora. So everything is done for you on your behalf. And then obviously the operation, which is the ongoing maintenance, operation, and optimization of your infrastructure and your applications. Now, some of the features in a bit more detail, in terms of what Managed Services actually does for you. The first one is provisioning: it enables you to quickly and easily deploy your infrastructure, and it simplifies a lot of the on-demand provisioning of cloud stacks, infrastructure, virtual machines, applications, and so on. It does monitoring and event management for you, configured for logging and alerts based on industry best practices, so it's not something you need to learn and configure. Your patching and your continuity management - it takes care of everything in terms of patching and backing up, utilizing best practices specific to your industry or your organization. Then you have availability: it's hosted in multiple regions worldwide, each region essentially a unique geographical area, and all components of the managed service are deployed, validated, and operationalized within a region.
Then there's security and access management, which protects information assets and helps keep your infrastructure secure - whether it's antivirus, anti-malware, intrusion detection, or IPS, everything is handled on your behalf. Compliance, which might be important for organizations that have e-commerce sites or that need to adhere to HIPAA, GDPR, or PCI DSS - all that compliance, all that headache, is taken off of you and managed by AWS. Then you have change management, which provides a secure and controlled means to make changes to your infrastructure, to ensure compliance and continuity for your applications and your business processes. You have incident management, and then, lastly, the cost: the personal Cloud Service Delivery Manager role is essentially your account manager, who provides a monthly summary of your metrics, your activities, and your costs - where you can benefit, where you can scale up, where you can scale down. So essentially everything is managed on your behalf; you have a dedicated account manager who does everything for you. Obviously it does come at a cost, because this is meant for enterprise organizations that want to offload their IT department onto AWS, but a lot of organizations might benefit from this, because if they don't have the expertise in house, bringing expertise in house is a very long and complicated process - and even if you do bring expertise in house, you're not really sure how much of an expert they actually are. If you offload your IT infrastructure to AWS, you can rest assured that you do have experts managing and migrating your infrastructure to the cloud.

13. S3 Migrating network infrastructure: Hi everybody. Now that we have our network infrastructure discovered, we want to take the next step and migrate it into AWS. You can see the server has been populated - the information has been filled in from the Discovery Agent in terms of the specs and what's installed and what's not. The next step, after that information has been populated, is to select that server and group it as an application. We'll give it a name, and the way you'd want to do that is to group different servers based on what they're used for - whether it's finance, whether it's web servers, whether it's database servers, and so on. So I'm going to go ahead and create an application and include the sole server in there. When I go to my Applications section, I can see the server is already there, and if I click on the application I can see there's one server that's been added to it. Once that's done, I click on the tools, and here are all the different migration tools, which we discussed earlier, that AWS provides for migrating: we can use the Database Migration Service, we can use ATADATA, RiverMeadow, CloudEndure, or even the Server Migration Service. The Server Migration Service by AWS is what we're going to take a look at a little bit later in this course; what we want to do now is look at CloudEndure. It's also an AWS company, and it's a free service that's provided - if you are testing it out in a non-production environment, you are able to use CloudEndure for free.
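As a side note, the "group as application" step we just did in the console can also be done through the Application Discovery Service API. This is a rough sketch under the assumption that the server has already been discovered; the application name and the configuration ID are placeholders, and the exact parameter shapes should be checked against the current boto3 documentation.

```python
import boto3

# Application Discovery Service; assumes the Migration Hub home region is already configured
# and the server below has already been discovered by the agent.
discovery = boto3.client("discovery", region_name="us-west-2")

# Mirror of the console's "group as application" step.
app = discovery.create_application(
    name="web-servers",                       # placeholder application name
    description="Servers grouped for the migration demo",
)

discovery.associate_configuration_items_to_application(
    applicationConfigurationId=app["configurationId"],
    configurationIds=["d-server-0123456789abcdef0"],  # hypothetical discovered-server ID
)
```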
You are able to use cloud indoor for freak. So I regard you guys since you have logged into my iCloud indoor console, and what I want to do is I want to add the eight of US Access Key I D and the Secret access key. I do. You know, if you have the one that you created earlier, you can add those in there. But since I apparently have seemed to lost text file, I'm going to quickly create another user that will have access to my AWS account that I can input into cloud and or you guys can see that the Asian discovery I had created earlier. But I have misplaced, and I think I believe I deleted the file that had the access key idea in the secret key. So I want to quickly create another user. Call it cloud and gorgeous so I can keep it differentiated. Give him both of those accesses. Give it, give it a certain password. There we go. So I'm going toe and put those access idea in the secret key and click on Save in the AWS credentials in the Cloud indoor dashboard. Because that's verified, it takes me on to the next setting, which is the replication settings. Now the migration source is obviously the other infrastructure. Additionally, if you are going to be using cloud indoor for production purposes, you can actually migrate from other regions to within AWS, using cloud indoor also. Well, what I want to do is the sources, other infrastructure, which is obviously my server, and I want to migrated to U S West in Oregon after that's done here are a few other settings that I can specify in terms of the replication servers, the types of discs, the security groups and so on. So these are all specific to cloud indoor, and again, it is a W s organization owned by AWS. So here in the replication service, I can specify what type of server it should provisioned and use to use as a replication server based on my internal servo. So these are application servers are kind of used as an in term where they replicate the data to before you can provision that server into production in a W s few other settings in terms of the security groups. If you want to use a direct connect VPN if you want staging area tags or if you want bandwidth throttling, so depends on what types of settings and what type of replication you want to do within your environment. I'm just going to keep everything as default. There we go. So not that I have my projects that up. I can just click on, Show me how, and it shows me how I can get this cloud endure to run on my server and get that operational. So I'm going to just download the agent for windows from the Cloud Indoor dashboard. Since I'm running a Windows machine, you can also do it for lyrics if you are running UNIX machines. So as soon as that is downloaded, I'm just going to copy this command, which they're very nice to listen for us also. And I'm going to just run this command in my command. Prompt. Obviously, I have to go to the same folder in which I downloaded that cloud and or after you that can see installer win that E X E and this folder. I'm going to run this command, and it'll take a few minutes what this command will basically do. It will download and install the agent from cloud and door on your server, and it will start the replication process on the dashboard so you guys can see that it's connected to the console. It's checking the disc space, which is the disc space off the server that you we are trying to replicate and move on to the AWS cloud. 
Now that I have my project set up, I can just click on "Show Me How", and it shows me how to get CloudEndure running on my server and operational. I'm going to download the agent for Windows from the CloudEndure dashboard, since I'm running a Windows machine - you can also do it for Linux if you are running Linux machines. As soon as that is downloaded, I'm going to copy this command, which they very nicely list for us, and run it in my command prompt. Obviously, I have to go to the same folder in which I downloaded the CloudEndure agent - you can see installer_win.exe in this folder. I'm going to run this command, and it'll take a few minutes. What this command basically does is download and install the CloudEndure agent on your server, and it starts the replication process on the dashboard. You can see that it's connected to the console and it's checking the disk space - the disk space of the server we are trying to replicate and move to the AWS cloud. You can see that it found and identified the 50-gig hard drive, which is what I specified when I created this virtual machine, and then that all the disks for replication were successfully identified. After it finishes the identification process, it downloads the CloudEndure agent, and after downloading, it installs the agent on the server. There you have it - it's downloaded and installed, and now it's adding the source machine to the CloudEndure console. You can see the instance ID, and it successfully finished. So when I navigate back into CloudEndure, there we go - we can see that one machine has been added into the CloudEndure console, and it's initiating the data replication. That's essentially how easy it is - or how straightforward, I should say, rather than easy - to start the migration process from your infrastructure to the AWS cloud, and the audit log gives you more detailed information on how that's done. So that's how, using CloudEndure after your network discovery, you can migrate to the AWS cloud - and again, like I mentioned, we will take a look at the AWS Server Migration Service a little bit later on in this course.

14. S4 Data Migration: Hi everybody, and welcome to this lesson on data migration. Data is obviously a cornerstone and a building block of almost every organization: for any application deployment you require data, and especially if you are migrating your on-prem infrastructure to the cloud, you obviously have to migrate your data along with it. So when moving data to the cloud, you have to understand what you are moving it for, the different use cases, the types of data you're moving, and the network resources available, among other considerations. There are a number of services that AWS offers for migrating your data from your on-prem data centers or servers to its cloud, and there are two different options for migrating our data: an online data transfer or hybrid cloud storage option, and an offline data migration to Amazon S3. For the online and/or hybrid data transfer, these methods make it simple to create a network link to your VPC and transfer the data to AWS, or you can use S3 for hybrid cloud storage with your existing on-prem applications. These services can help you both lift and shift large data sets all at once, as well as integrate existing data-processing flows like backups and recovery, or continuous data streams, directly with the cloud. But obviously, migrating directly over an online data transfer might not be an option for organizations that have lots of data - say petabytes of data, or hundreds of terabytes - and for a lot of even small and medium-sized organizations, that's not a far-fetched concept, because data nowadays is becoming more and more prominent. I mean, you have cell phones with one terabyte of storage; if cell phones have one terabyte of storage, just imagine what type of data, and how much of it, organizations have stored on their servers. So if you have lots of data, the offline data migration option might be for you, and there are multiple offline options that AWS offers for moving terabytes and terabytes, or even petabytes, of data.
Here's a good overview of the options and which AWS service might be right for you. We'll look at some of these services in a bit more detail in the next lesson, but this table basically shows you the options you have in terms of what you're trying to do. Let's say you're trying to privately connect your data center, with a network link directly to your VPC in AWS - then obviously you'd want a Direct Connect connection. AWS has a number of regions spanning the globe and lots of different availability zones, so a Direct Connect connection is not a far-fetched option for almost any location around the globe. Or let's say you're trying to copy or replicate your file system to S3 or EFS - then you'd want DataSync. Or you're connecting existing on-prem applications: lots of organizations decide not to move their apps to the cloud, they just want to move their data - they want to utilize the storage and infrastructure that AWS has, but keep the applications in house. Then you have multiple options, such as Storage Gateway, File Gateway, and so on. So this table gives a good overview of what you're trying to do and which AWS tool is available to do it with an online option. Then you have the offline options - there are three main ones that AWS offers: the Snowball, the Snowball Edge, and the Snowmobile. It depends on how much data you want to transfer - and believe me, there are organizations that have gotten Snowmobiles to transfer petabytes of data to the cloud; there's a reason AWS developed the Snowmobile, which is basically a semi-trailer that gets filled with data. These are the offline options for transferring petabytes of data - even with AWS Direct Connect it's technically possible, but it's not really realistic. So those are again the two main options available to you, and which one you choose will depend on your situation: what your organization is trying to do, how much data you have, and what type of infrastructure you're setting up. The migration process and migration strategy you want to implement will determine which of these options, online or offline - and even within those two, which tool or technology - you're going to be utilizing to migrate your data to the AWS cloud.

15. S5 Database Migration Service Use Cases: Hi everybody, and welcome to this lesson on the Database Migration Service use cases. Now that we have a fairly good understanding of what DMS is and how it operates, let's take a look at some use cases for database migration. The first one is a pretty straightforward, simple one, which is a homogeneous database migration, in which the source and target engines are the same: Oracle to Oracle or SQL to SQL, MySQL to the Relational Database Service, or MS SQL to the Relational Database Service. Since the schema structure, data types, and database code are compatible between the source and target, it's essentially a one-step process: you create a task with connections to the source and target, then you start the migration with essentially a click of a button, and DMS takes care of the rest.
The source databases can be located in your on-premise environment outside of AWS, running on an EC2 instance, or they can even be an RDS database - we can do a migration from RDS to RDS, say from region to region. So this homogeneous case is a very, very simple way to migrate your databases. The heterogeneous database migration is a bit different, in that the source and target engines are different - let's say Oracle to Aurora. In this case, the schema structure, data types, and code of the source and target can be quite different, and that requires a schema and code transformation before the data migration starts, which makes it a two-step process. The first step is to use the Schema Conversion Tool to convert the source schema and code to match that of the target database, and then you use DMS to migrate the data from the source DB to the target. All of the required data type conversions will automatically be done by DMS during the migration. The source database, again like before, can be located on-prem, on EC2 instances, or in an RDS database. The next use case is dev and test - development and testing. DMS can also be used to migrate data both into and out of the cloud for dev purposes. Why would organizations do this? One example is to deploy development, test, or staging systems that take advantage of the cloud's scalability and rapid provisioning - this way the developers and testers can use copies of real production data and can copy updates back to the on-prem production system. A second example would be if your dev systems are on-prem and you want to migrate a copy of an AWS cloud production database to those on-prem systems, either once or continuously, so it avoids disruption to existing dev processes. Or let's say you simply don't have the hardware on-prem to do some development and testing - you can utilize this service to do all of your development and testing on the AWS infrastructure. Then you can also consolidate multiple source databases into a single target database. This can be done for both homogeneous and heterogeneous migrations, and you can use this feature with all of the database engines that AWS supports. Here again the source databases can be located on-prem, or on AWS in an EC2 instance or in RDS, and they can be spread across different locations - for example, one source database can be on-prem, one can be on an EC2 instance, and one can be in RDS, and all of them can be combined into, let's say, Aurora or RDS or a single EC2 instance. So it's a very good way to consolidate multiple databases: if you're working through a migration strategy, before or while migrating some of your on-prem resources to the cloud, it's good to do a consolidation just to optimize and streamline your operations at the same time. And then you have continuous replication: DMS has a multitude of use cases here - disaster recovery, geographic database distribution, dev and test environments, you name it - and depending on your organization and what you're trying to do, it supports continuous replication for both homogeneous and heterogeneous environments.
So there are a number of use cases you can cover with DMS - a very robust service that allows you to do a lot of things, in terms of not only migrating your databases but also their continuous usage.

16. S5 Database migration service: Hi everybody, and welcome to this lesson looking at the Database Migration Service. The AWS migration service for databases helps you migrate - like the name says - databases to AWS quickly and easily, along with being secure; the source database remains fully operational during the migration, minimizing downtime. Essentially, that's the entire concept of AWS during your migration process: reduce downtime and make the shift seamless, which is why it's so enticing for organizations to move to AWS. The Database Migration Service has lots of benefits. It supports both homogeneous migrations, such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, say Oracle or SQL Server to Amazon Aurora. With the migration service you can continuously replicate your data with high availability, and consolidate databases, if you want, into petabyte-scale data warehouses by utilizing Amazon Redshift and even S3. One of the best parts is that you can use the Database Migration Service free for up to six months, allowing you a good time frame to migrate your databases. It's extremely simple to use - you can begin a database migration with essentially just a few clicks in the management console, and we'll take a look at how easy that is a little bit later. Once the migration has started, it takes care of all of the complexities of the migration process for you. And especially if you're migrating heterogeneous databases, there's the Schema Conversion Tool that AWS offers, which converts the schema into a compatible format. It minimizes downtime and supports widely used databases: Oracle to Oracle homogeneous migrations, or heterogeneous ones between Oracle, SQL Server, and PostgreSQL, and you can utilize Amazon RDS, which is the relational database service, or EC2 instances running database services, or vice versa. So there are lots of options in terms of what you can utilize and what it supports. It's extremely low cost - again, the service is free for six months, so the only thing you'd be paying for is the storage - extremely fast to set up, and reliable, because again that's the staple of the AWS cloud: being reliable and highly available. Here we can see some of the internal components of how DMS actually functions. Understanding the underlying components of DMS can really help you migrate data more efficiently and provide better insights when troubleshooting. An AWS DMS migration basically consists of three components: a replication instance, source and target endpoints, and a replication task. You create a DMS migration by creating the necessary replication instance, endpoints, and tasks in an AWS region. The replication instance, at a high level, is simply a managed EC2 instance that hosts one or more replication tasks.
The figure you see on the screen shows an example of a replication instance running several associated replication tasks. A single instance can host one or more tasks, depending again on the characteristics of your migration, the capacity of the replication server, and so on, and DMS provides a variety of replication instance classes so you can choose the optimal configuration. Then you have the endpoints. DMS uses endpoints to access your source or target data store. The specific connection information is different depending on your data store, but in general you supply some information to the endpoint: the endpoint type, meaning whether it's a source or a target, the type of database, the server IP - keep in mind that if you are migrating from on-prem to AWS, you will obviously need a public IP for this endpoint to work, or you would need to set up a VPN - the port number, SSL, and so on. When you create an endpoint using the console, it requires that you test the endpoint connection, and obviously the test has to be successful before DMS can start a task on it. A single endpoint can be used by more than one replication task. For example, you may have two logically distinct applications hosted on the same source DB that you want to migrate separately, so you would create two tasks, one for each application's tables - but essentially it's the same DB and the same endpoint. Then there are the replication tasks: again, you can have multiple replication tasks within an EC2 instance, and they move a set of data from the source endpoint to the target endpoint, essentially from your on-prem database to the AWS cloud. Creating a replication task is the last step you need to take before you actually start the migration, and here you specify the replication instance, the endpoints, and the migration type: do you want to migrate the full load, migrate a partial load, or replicate data changes only - there are multiple options for what you want the task to do. Like the example I gave before, if you have multiple apps using the same DB and you want to migrate them separately, with their tables migrating separately, you can also specify that in the individual tasks - so it's very robust. Conceptually, a replication task performs two basic functions, as in the diagram you see. The full load process is pretty straightforward: data is extracted from the source in a bulk-extract manner and loaded directly into the target, and you can specify the number of tables to extract and load in parallel in DMS. You can also use DMS to capture ongoing changes to the source data store while you are migrating your data to a target. The change-capture process that AWS DMS uses when replicating ongoing changes from a source endpoint collects changes from the database logs by using the database engine's native API. In the change data capture, or CDC, process, the replication task is designed to stream changes from the source to the target using in-memory buffers to hold the data in transit. If the in-memory buffers become exhausted for any number of reasons, the task will spill pending changes to the change cache on disk; it also uses storage for task logs, as discussed above.
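For completeness, here is a rough boto3 sketch of starting such a task and polling its status and per-table statistics from code instead of the console; the task ARN is a placeholder.

```python
import boto3

dms = boto3.client("dms", region_name="us-west-2")
task_arn = "arn:aws:dms:us-west-2:123456789012:task:EXAMPLE"  # placeholder task ARN

# Start the task (use "resume-processing" or "reload-target" for an existing task instead).
dms.start_replication_task(
    ReplicationTaskArn=task_arn,
    StartReplicationTaskType="start-replication",
)

# Overall task status and counters, the API view of the dashboard's progress column.
task = dms.describe_replication_tasks(
    Filters=[{"Name": "replication-task-arn", "Values": [task_arn]}]
)["ReplicationTasks"][0]
print(task["Status"], task.get("ReplicationTaskStats", {}))

# Per-table load state, the API view of the console's "table statistics" tab.
for table in dms.describe_table_statistics(ReplicationTaskArn=task_arn)["TableStatistics"]:
    print(table["SchemaName"], table["TableName"], table["TableState"])
```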
So it's essentially a very simple process, or a very complex one, depending on your migration strategy and what you're trying to migrate. If you want a simple replication task, it will simply copy everything over from the source into the target; if you want to do live change streaming, then obviously it gets a little bit more complex. But again, everything is managed from your DMS console, making it very easy and seamless to manage.

17. S5 Schema Conversion Tool: Hi everybody, welcome back. In this lesson we're going to take a look at the Schema Conversion Tool that AWS offers. This is a very, very good tool, because it makes heterogeneous database migrations predictable and easy by automatically converting the source database schema, and a majority of the database code objects - including views, stored procedures, and functions - to a format compatible with the target database. In your migration, let's say you want to move away from Oracle: if you're migrating your entire infrastructure to AWS, you might want to utilize the Aurora database, and if you're currently using Oracle, in that case you would need to use the Schema Conversion Tool. Any objects that cannot be automatically converted are clearly marked so they can be manually converted to complete the migration. Another very good thing about the tool is that it can scan your application source code for embedded SQL statements and convert them as part of the conversion project. In this whole process it performs cloud-native code optimization by converting, say, legacy Oracle and SQL Server functions to their equivalent AWS services, and it helps you modernize the applications at the same time as the database migration. Once the conversion is complete, it helps you migrate data from a range of data warehouses to AWS - say Redshift or Aurora - depending again on what your migration strategy is. Here is a good table of the conversions, in terms of the source database and the target database. It's not something you have to memorize - it's available on the AWS site - but it's a very good tool to use if, for example, you're thinking about a migration and designing your strategy, and your applications are fairly legacy or outdated and might be running on any number of databases, and you're looking at migrating to AWS. Obviously you would want to upgrade not only your infrastructure but also your applications and your underlying code at the same time, and the best way to do that is with the Schema Conversion Tool, because it updates everything for you so you can use cloud-native services - some of the new cloud capabilities, be it IoT or machine learning or AI. So it's a very good tool to use and consider, especially if you have databases that have been around for quite a long time: you might want to look into this tool and see if it will not only make your migration easier but also modernize and streamline your operations at the same time. In addition to this tool, there's also a Workload Qualification Framework, which helps you assess and plan your migrations to AWS. This framework uses the conversion tool to collect information to model existing Oracle or SQL Server workloads and provide instructions to convert them to an AWS database.
It does most of the job for you, in that it identifies the complexity of the migration by analyzing your schemas, the code objects in your application code, the dependencies, and so on - all the characteristics that are there - and it conducts a fleet-wide analysis of your entire database portfolio to help you categorize migrations. So it's a very, very good tool to use and test out. Let's say you're in the initial stages of your migration strategy: go ahead and download this tool, run it on your environment, and see whether migrating to AWS and converting your on-prem database to an AWS one would be the best way for you to go, and whether it will streamline your operations.

18. S6 DMS Best Practices: Hi everybody, welcome back. In this lesson we're going to end off the database migration section by looking at some best practices recommended by AWS whenever you're migrating your databases. There are lots of different things you should keep in mind, especially when you're migrating large databases - and even medium-sized databases - because it can get quite complex. The first thing to keep in mind is: how do we improve the performance of a migration? There are a number of factors that affect it - for example, the resource availability on the source, what type of speed you have on the network, the capacity of the replication server, the ability of the target to ingest changes, and so on. One example AWS gives is migrating about a terabyte worth of data in about 12 hours in a single task - so it is obviously possible to migrate large databases in a fairly short amount of time, provided you have all of the best practices in mind and an infrastructure in place to do that. Their ideal conditions include source databases running on EC2 or in RDS, and target databases in RDS, all in the same availability zone - that makes it a lot easier, since everything is on AWS and in the same availability zone. In the real world, you're obviously migrating your on-prem databases to AWS, so conditions are not going to be ideal for most organizations - but just keep in mind that it is possible to move large quantities of data in a very short amount of time. A few things to keep in mind for improving performance: load multiple tables in parallel rather than one at a time; work with indexes, triggers, and referential integrity constraints - you want to make sure you optimize those to increase performance; disable backups and transaction logging during the load, because those take up a good amount of resources; and use multiple tasks for a single migration. If you have sets of tables that don't participate in common transactions, you might be able to divide your migration into multiple tasks, since transactional consistency is maintained within a task - it's important that tables in separate tasks don't participate in common transactions, so you obviously need a good understanding of how the tables are used within your database to do that. Lastly, one more thing to keep in mind for improving performance is optimizing your change processing.
Lastly, one more thing to keep in mind in terms of improving the performance is optimizing your change processing. By default, DMS processes changes in a transactional mode, which is done to preserve transactional integrity. If you can afford some temporary lapses in transactional integrity, you can use the batch optimized apply option instead. What that will do is apply the changes in batches rather than in transactional mode, so it does save some time in terms of improving the performance; a minimal sketch of turning that option on follows below. So that's the first thing to keep in mind in terms of best practices to improve your performance.
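To make the batch optimized apply option concrete, here is a hedged sketch of enabling it on an existing task through the task settings; the task ARN is a placeholder, and this is only one way to set it, assuming you are changing an already-created task rather than supplying the settings at creation time.

    import json
    import boto3

    dms = boto3.client("dms")

    # Switch an existing task from transactional apply to batch optimized
    # apply. Trade-off: better throughput, but transactional integrity is
    # relaxed while each batch is being applied.
    task_settings = {"TargetMetadata": {"BatchApplyEnabled": True}}

    dms.modify_replication_task(
        ReplicationTaskArn="arn:aws:dms:...:task:EXAMPLE",  # placeholder ARN
        ReplicationTaskSettings=json.dumps(task_settings),
    )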
The next one is choosing the optimal size for your replication instance, and that depends on several factors in your use case. During a full load task, DMS loads tables individually, and by default eight tables are loaded at a time. It's not something you need to keep memorized, but just keep in mind that it loads tables individually, and it captures ongoing changes to the source during the full load task so that the changes can be applied later on the target endpoint. The changes are cached in memory; if available memory is exhausted, changes are cached to disk, and when a full load task completes for a table, DMS immediately applies the cached changes to the target table. After all of the outstanding cached changes for a table have been applied, the target table is in a transactionally consistent state; at this point, the target is in sync with the source in terms of the last changes that were cached. It then begins the ongoing replication between the source and the target, and to do that it takes change operations from the source transaction logs and applies them to the target. You have some control in terms of how the replication instance handles change processing and how memory is used, but for the purposes of this lesson, and for all intents and purposes, that control is fairly limited. So, some things to keep in mind when choosing a replication instance: make sure you know your table sizes, because large tables take longer to load and transactions on those tables must be cached until the table is fully loaded; know the transaction size, because long-running transactions can generate lots of changes; know the total size of the migration, the number of tasks, and how many large objects, or LOBs, you have, because they take longer to load. To give an example, if your load takes, let's say, 24 hours and you produce two gigs of transactions each hour, you want to ensure that you have at least 48 gigs of space for your cached transactions. These are small things to keep in mind to make sure your replication instance is powerful and large enough to handle all of those tasks for your database.

The next best practice is reducing the load on your source. DMS uses some resources on your source database: during a full load task, it performs a full table scan of the source table for each table that's processed in parallel. So if you find you're overburdening your source database, you can reduce the number of tasks, or the number of tables per task, in your migration. Next, you want to use the task log to troubleshoot migration issues. In some cases, DMS can encounter issues for which warnings or error messages appear only in the task log, so make sure you are monitoring it on a regular basis. Next is converting the schema. This should be a given if you are migrating heterogeneous databases: if you're migrating Oracle to Amazon Aurora, you want to make sure that the schema is converted before DMS even begins. So before even going into the DMS dashboard and starting work, make sure that you use the Schema Conversion Tool, or a third-party tool if you have one, to convert your schema to be compatible with the target database. Next is migrating large binary objects, or LOBs. In general, DMS migrates LOB data in two phases: it creates a new row in the target table and populates the row with all the data except the associated LOB value, and then it updates the row in the target table with the LOB data. Then there is ongoing replication. Like I mentioned before, DMS provides ongoing replication of data, keeping the source and target databases in sync. However, it replicates only a limited amount of data definition language, or DDL, and it doesn't propagate items such as indexes, users, or privileges. If you plan to use ongoing replication, you should definitely enable the Multi-AZ option when you create your replication instance; by doing that, you get high availability and failover support for the replication instance. Next is changing the user schema for an Oracle target. When using Oracle as a target, DMS migrates the data to the schema owned by the target endpoint's user. So, for example, suppose you're migrating a schema named CUSTOM to an Oracle target endpoint and the target endpoint's user name is MASTER: DMS will connect to the target as MASTER and populate the MASTER schema with objects from CUSTOM, the schema that you named. You are able to override this, but just keep in mind that this is the default behavior when you have an Oracle target. And lastly, improving performance when migrating large tables. If you want to improve the performance there, you can break the migration into more than one task, and to break the migration into multiple tasks you can use row filtering on a key or a partition key. For example, if you have an integer primary key id from one to, let's say, eight million, you can create eight tasks using row filtering to migrate one million records each. What that essentially does is break up a large table into smaller, more manageable tasks. A sketch of what one of those filtered tasks can look like follows below.
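Here is a minimal sketch of the table mapping for one of those eight row-filtered tasks, assuming a hypothetical SALES.ORDERS table with an integer id column; each of the other tasks would use a different start-value and end-value range. The schema, table, and column names are illustrative, not from this course.

    import json

    # Task 1 of 8: include only rows where id is between 1 and 1,000,000.
    # The other seven tasks would use different start-value/end-value ranges.
    table_mappings = {
        "rules": [
            {
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "orders-slice-1",
                "object-locator": {"schema-name": "SALES", "table-name": "ORDERS"},
                "rule-action": "include",
                "filters": [
                    {
                        "filter-type": "source",
                        "column-name": "id",
                        "filter-conditions": [
                            {
                                "filter-operator": "between",
                                "start-value": "1",
                                "end-value": "1000000",
                            }
                        ],
                    }
                ],
            }
        ]
    }

    # This JSON would be passed as the TableMappings argument of
    # create_replication_task, exactly as in the earlier sketch.
    print(json.dumps(table_mappings, indent=2))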
So these are just some best practices to keep in mind if you want to optimize your migration process and make sure it goes as smoothly and as efficiently as possible.

19. S6 Server migration service: Hi everybody, welcome back. In this lesson, we're going to cover the Server Migration Service. In the previous modules, we looked at migrating your servers onto the AWS cloud. There is also a service provided by AWS called the Server Migration Service, or SMS, which also helps you migrate your servers onto the cloud. It's basically an agentless service, which makes it easier for you to migrate even thousands of on-prem workloads to the AWS cloud. It allows you to automate, schedule, and track the replication of live server volumes, and makes it easier to coordinate large migrations to AWS. It's a very easy and very robust service to use, with lots of benefits, and it's very easy to get started. You have lots of control in terms of creating customized replication schedules designed for large-scale migrations. It makes the migration process very agile and very cost-effective, because you're not utilizing expensive hardware or expensive software to migrate your servers. And the thing that a lot of organizations, and especially the IT folks, will like: it minimizes downtime, because the incremental server replication allows you to reduce your server downtime significantly. So it's a very robust service offered by AWS, and it's an ideal solution to use when you're planning a scaled migration from, let's say, a VMware environment to AWS, where downtime, agentless tools, incremental replication, and testing the application before the cut-over are critical considerations. If you're planning to have a hybrid environment and keep using VMware, then you have the VMware Cloud option in AWS, which we look at in the next lesson. But if you are planning to migrate away from VMware, then SMS, the Server Migration Service, is the best option for you. The best part is that it's a free service to use during your server migration; essentially you're only paying for the storage resources you're using during the migration process, such as any EBS snapshots and the Amazon S3 storage.

So how does it work? SMS requires a connector that orchestrates the workflow of the migration process, and the connector is deployed in vCenter. Before you deploy the connector, you have to make sure that your environment meets the SMS requirements in terms of correct firewall configuration, IP addresses, and so on. The SMS connector is basically a virtual appliance that you deploy: the agentless connector runs as a virtual machine on your on-prem server, basically downloading an ISO image. In terms of how the service operates, it creates a replication server in the AWS environment, replicates your live server volume onto AWS, creates a snapshot, and then restores from the snapshot onto an EC2 instance in AWS. So essentially it's creating a live replication of your workload and launching an EC2 instance from that snapshot in the AWS environment, so there's essentially no downtime. You're working in a live environment, and your server is being replicated without any downtime or network outages on your end. It's a very easy tool and connector to use, and again it makes the operation extremely seamless. The best thing is that SMS uses incremental replication, so the cut-over time will be at a minimum, depending on the changes since the previous replication run. A minimal sketch of kicking off a replication job through the API follows below.
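To tie the workflow together, here is a hedged boto3 sketch of starting an SMS replication job, assuming the connector has already been deployed in vCenter and has discovered your servers; the server ID, IAM role name, and replication frequency are made-up values for illustration only.

    import datetime
    import boto3

    sms = boto3.client("sms")

    # The connector deployed in vCenter populates the server catalog;
    # import it, then list the discovered servers to find the VM to replicate.
    sms.import_server_catalog()
    servers = sms.get_servers()
    print(servers.get("serverList", []))

    # Start incremental replication for one discovered VM (placeholder ID).
    sms.create_replication_job(
        serverId="s-1234567890abcdef0",                  # hypothetical server ID
        seedReplicationTime=datetime.datetime.utcnow(),  # first full replication
        frequency=12,                                    # re-replicate every 12 hours
        roleName="sms-replication-role",                 # hypothetical IAM role
        numberOfRecentAmisToKeep=3,
    )

Each replication run produces an AMI, and the incremental runs only carry the changes since the previous one, which is what keeps the final cut-over window small.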
So it's an extremely robust service: if, for example, you are currently in a VMware environment and you're looking to move away from it, this is a perfect option for you to use in terms of migrating your live workload onto AWS.

20. S6 VMware on Cloud1b1b74: Hi everybody, welcome back. In this lesson we're going to talk about the option for VMware on the cloud. A lot of organizations nowadays are using VMware for the virtualization of their servers, and AWS has an option for you to utilize that when you are migrating your services and your infrastructure to the cloud. VMware Cloud on AWS is basically an integrated cloud offering developed jointly by AWS and VMware, and it delivers a pretty scalable, secure and, in my opinion, innovative service that allows companies using VMware to seamlessly migrate and extend their on-prem vSphere-based environments to the AWS Cloud running on EC2 instances. VMware Cloud on AWS is ideal for enterprise IT infrastructure and operations organizations looking to migrate their vSphere-based workloads to the public cloud.

Now, there are lots of options and lots of benefits that VMware Cloud brings to your organization. First and foremost, it's extremely innovative: workloads running on the AWS cloud have native access to a broad and rich set of services in AWS, such as databases, analytics, IoT, machine learning and AI, mobile and developer services, and lots more. Latency-sensitive applications hosted on VMware can now directly access databases such as DynamoDB or Aurora, or even Redshift for petabyte-scale data analysis, which can become pretty resource-intensive when done on premises. Additionally, it offers simplified operations. Organizations can simplify their hybrid IT operations by utilizing the VMware Cloud Foundation technologies, which include vSphere, vSAN, and NSX, and even vCenter, and they can use them across both their on-prem data center environments and the AWS Cloud if their strategy is a hybrid environment. You can use the same tools and management capabilities you're using today, so there's no change for your IT folks, no extra training required, and no learning curve, because the management tools stay exactly the same; the only difference is that now you're using VMware on the cloud, on the AWS ecosystem, as compared to using VMware on your on-prem systems. Additionally, there are definitely reduced costs, because it enables organizations to optimize the cost of operating a consistent and seamless hybrid IT environment. There's no custom hardware to deploy in your on-prem environment, and no need to rewrite or modify applications to shift to a hybrid cloud model, because you can use VMware's management and policy tools across both your on-prem environment and your environment on AWS, which is running on EC2 instances. And lastly, enhanced availability: VMware Cloud on AWS really helps you accelerate the migration of your vSphere-based workloads to a highly available and highly scalable AWS cloud. The service enables those workloads to run directly on next-generation bare-metal systems, for example the EC2 bare-metal infrastructure, provisioned in a single-tenant, isolated Amazon VPC. So literally you can utilize the bare-metal infrastructure of EC2 instances, you can utilize VPCs, and it makes a very robust environment for you to migrate to and overcome the challenges or any reservations that organizations might have about using VMware or migrating it to the cloud.
Now, VMware Cloud on AWS provides dedicated single-tenant infrastructure and support for up to 16-host vSphere clusters delivered on next-gen bare-metal EC2 instances, which are optimized for extremely high I/O and very low latency, and you can scale capacity by adding and removing hosts from clusters. You can have 3- to 16-host clusters, so it's a very robust tool.

Here are some of the features of VMware Cloud on AWS. First and foremost, it provides the SDDC software stack, the highly scalable cloud vSphere, vSAN, and NSX like I mentioned, and it consists of 3 to 16 hosts, each with 36 cores, up to 512 gigs of memory, and up to 15 TB of raw NVMe storage on that bare-metal infrastructure. You also have flexible storage options: each cluster utilizes an all-flash vSAN storage solution built on NVMe instance storage, and each ESXi host has NVMe storage, so the flexibility is very high. You also have dedicated high-performance networking: AWS provides a separate, dedicated high-performance network for management and application traffic, connected through the VMware NSX networking platform, with support for multicast networking. You also have security and compliance: you can benefit from AWS's security-first approach to their infrastructure, including IPsec VPN connectivity between your on-prem environment and your VMware Cloud on AWS hybrid environment. You can utilize NAT tables for connectivity and security, and you can utilize encryption on your S3 buckets or EBS volumes, so you get extremely high security, compliance, and configurability in both environments. There's also on-demand licensing, because it supports customized VMs running on any OS supported by VMware and makes use of single-tenant, bare-metal AWS infrastructure. Lastly, the last couple of features: you have third-party software integrations, for example for DevOps or cloud migration, and the single-host SDDC, which is basically their low-cost gateway into VMware Cloud as a hybrid solution. So it's an extremely rich feature set with very good benefits, especially for enterprise organizations that are already using VMware in their on-prem environment. It makes operating, migrating, and developing a hybrid migration strategy very easy and very seamless.