Azure Database services | V S Varma Rudra Raju | Skillshare
15 Lessons (1h 52m)
    • 1. Introduction to Azure database services building blocks. (7:09)
    • 2. Azure SQL Database overview (11:18)
    • 3. Lab demo: Creation of Azure SQL database using Azure portal (6:38)
    • 4. Azure SQL Database configuration overview (7:49)
    • 5. Lab demo: Configuration of firewall rules & Active geo-replication (5:55)
    • 6. Azure SQL managed instance overview (6:08)
    • 7. Azure SQL database security overview (10:22)
    • 8. Lab demo: Walkthrough of Azure SQL database security features using Azure portal (5:45)
    • 9. Azure SQL database monitoring overview (7:48)
    • 10. Lab demo: Walkthrough of Azure SQL database monitoring features using Azure portal (5:55)
    • 11. Azure Cosmos DB overview (12:55)
    • 12. Lab demo: Walkthrough of Azure Cosmos DB creation using Azure portal (7:33)
    • 13. Introduction to Azure Data Factory (5:36)
    • 14. Lab demo: Creation of a data factory using Azure portal (5:33)
    • 15. SQL Stretch Database & SQL Data Warehouse (5:34)

About This Class

The objective of this class is to introduce you to the database services in Azure that you can use to migrate your on-premises data into Azure. The class includes the following lectures and lab demonstrations.

  • Introduction to Azure database services building blocks
  • Azure SQL Database overview
  • Lab demo: Creation of Azure SQL database using the Azure portal
  • Azure SQL Database configuration overview
  • Lab demo: Configuration of firewall rules & active geo-replication
  • Azure SQL managed instance overview
  • Azure SQL database security overview
  • Lab demo: Walkthrough of Azure SQL database security features using the Azure portal
  • Azure SQL database monitoring overview
  • Lab demo: Walkthrough of Azure SQL database monitoring features using the Azure portal
  • Azure Cosmos DB overview
  • Lab demo: Walkthrough of Azure Cosmos DB creation using the Azure portal
  • Introduction to Azure Data Factory
  • Lab demo: Creation of a data factory using the Azure portal
  • SQL Stretch Database & SQL Data Warehouse

By the end of this class, you should be able to implement Azure database services and the security controls around them.

Meet Your Teacher

V S Varma Rudra Raju

TOGAF Certified Enterprise Architect


Transcripts

1. Introduction to Azure database services building blocks.: Hi. Welcome to this lecture. In this lecture, I'm going to take you through the different building blocks that are available in Azure database services. The fundamental building block available in Azure is SQL Database. Microsoft offers SQL Server and SQL Database on Azure in a number of ways. Firstly, you can deploy a single database, or you can deploy multiple databases as part of a shared elastic pool, and recently Microsoft came out with managed instance, which is targeted at on-premises customers. So if you have a number of SQL databases within your on-premises data center, and you want to migrate to Azure without any complex configuration or ambiguity, and also take advantage of the licensing you already own in your on-premises data center, then go for managed instance, because it is aimed particularly at on-premises customers who want to lift and shift their on-premises databases into Azure with the least effort while optimizing cost. All three of these offerings are PaaS (platform as a service) offerings, wherein Microsoft is responsible for maintenance, patching, and all that stuff. But if you want an IaaS option for SQL Server, you can deploy SQL Server on an Azure virtual machine. So if you have a dependency on the underlying platform and you need to log in to the SQL Server itself, in that case go for SQL Server on virtual machines. In addition to these four, you can deploy SQL Data Warehouse in the cloud; there is an offering for that as well. And Azure offers a number of other database services for different types of databases: the offerings above are all SQL Server, but Azure also offers database services for MySQL, MariaDB, and PostgreSQL, all provided by Microsoft within Azure. Once you deploy a database into Azure, you need to migrate your data into it or replicate your data into it, isn't it? So let's go through some of the services available in Azure that you can use to migrate data from your on-premises SQL Server into Azure. The first one is Azure Database Migration Service: using this, you can migrate the data from your existing SQL Server database running in your on-premises data center into Azure. And in case you want to replicate the data from your on-premises database into Azure, you can use Azure SQL Data Sync. The next one is SQL Stretch Database, using which you can migrate cold data to Azure. SQL Stretch Database is a bit different from the other database offerings: it works as a hybrid database. Basically, it divides the data into two parts, hot and cold; any hot data it keeps in the on-premises data center, and any cold data it moves into Azure. Anyway, I'm going to explain this in a bit more detail in upcoming lectures. And finally, if you want to do some kind of ETL work — extraction, transformation, loading, all those kinds of things — there is a tool available within Azure to do that, called Data Factory. Using Data Factory, you can extract the data from your on-premises data center, do some transformation, and load it into your Azure SQL database.
And that's not all: Azure Data Factory is basically an ETL tool offered in the cloud that you can use against almost anything. It has lots of connectors; using those connectors you can connect to different databases, extract the data, transform it, and load it into a destination. All of these databases that exist in Azure need to be secured, and you also need to stop accepting connections from unknown origins, isn't it? For that, all of these database services come with firewall rules, where you can configure from which particular IP address you want to allow connections and also from which virtual network you want to allow connections. You can define those firewall rules in order to limit the connections and also reduce the attack surface. The next one is Cosmos DB. Cosmos DB is a NoSQL data store available in Azure, and it is designed to be globally scalable and also very highly available with extremely low latency; Microsoft actually guarantees latency for reads and writes with Cosmos DB. So if you have applications such as IoT or gaming, where you get lots and lots of data from users spread across the globe, then go for Cosmos DB, because Cosmos DB is designed to be globally scalable and very highly available, and your users will experience very low latency. And because it is a NoSQL database, you don't need to define the data set and schema up front. I have provided a dedicated lecture and lab on Cosmos DB, where I explain Cosmos DB in a bit more detail. Finally, there are two more things. One: you need to secure all of these services; for that purpose, you can integrate all of them with Azure Active Directory and manage the users from Azure Active Directory. And in order to watch over all of these services, you can use Security Center. There are individual monitoring tools as well, but Azure Security Center will keep monitoring all the services, providing recommendations, and alert you if something goes wrong; there are a lot more things you can do using Azure Security Center. And finally, in terms of monitoring and diagnostics, you can use Azure Monitor to monitor all of these services, though it is used for basic monitoring. There are a number of tools available in Azure for each of the services; although they are basic, they are fairly comprehensive. But in case you want an advanced analytics tool, you can use Log Analytics to hold all the log information and also analyze those logs. And there is another tool called SQL Analytics: it's a management solution that can be deployed on Log Analytics, using which you can view, set up, and customize reports and dashboards, etc. Regarding SQL Analytics, we have a lecture on it where I'm going to explain what SQL Analytics is all about in a bit more detail. And finally, you can define your own alerts using the Azure portal and get alerted if the database consumption is reaching a threshold, and based on those alerts you can auto-scale as well — basically, you can also scale the resources that are allocated to each one of these services. Okay, so this is all about the Azure database building blocks. In the upcoming lectures I'm going to take you through each one of them in a bit more detail, and we have some lab demos where I'm going to show you how to create and manage these services.
Okay, so if you have some time, join me in the next lecture. 2. Azure SQL Database overview: Hi. Welcome to this lecture. In this lecture, I'm going to provide you an overview of Azure SQL Database. SQL Database, as you know, is a flagship product of Microsoft in the database arena. It's a general-purpose relational database that supports structures like relational data, JSON, spatial, and XML. Microsoft is essentially offering SQL database in the cloud, but the big difference is that the Azure platform fully manages every Azure SQL database and guarantees no data loss and a high percentage of uptime availability. So basically Azure automatically handles patching, backups, replication, failure detection, underlying potential hardware or network failures, deploying bug fixes, database upgrades, and other maintenance tasks — so you can imagine how much Azure will take care of for you if you deploy your SQL database on it. And when it comes to the platform-as-a-service offerings in the SQL Database arena, there are a few ways you can deploy your SQL database. The first one is managed instance. This is primarily targeted at on-premises customers: in case you already have a SQL Server instance in your on-premises data center and you want to migrate it into Azure with minimum changes to your applications and maximum compatibility, then you go for managed instance. Anyway, I'm going to explain managed instance further in its own lecture. The second thing is single database: you can deploy a single database on Azure with its own set of resources, managed via a logical server. And you have elastic pool, where you deploy a pool of databases with a shared set of resources, again managed via a logical server. What is this logical server, and what is this elastic pool? I'm going to explain in more detail in the latter part of this lecture. Okay, and finally, if you want to deploy SQL database as infrastructure-as-a-service — that means you want to deploy SQL Server on an Azure virtual machine — you can do that also, but in that case you are responsible for managing SQL Server on that particular Azure virtual machine. I have provided a link to a comparison of the different types of deployment models in the resource section of this lecture; I think it is very important for you to go through it and understand the differences in detail. Okay. The next thing is purchasing models. There are two ways you can purchase SQL database on Azure. The first one is the vCore purchasing model. This is relatively new compared to the DTU-based model. In the vCore-based purchasing model, you can independently scale compute and storage resources, match on-premises performance, and optimize the price. It also enables you to choose the generation of the hardware. And in case you already hold SQL Server licenses within your on-premises data center, then you can get the benefit of Azure Hybrid Benefit for SQL Server to gain cost savings. So it is best for customers who already have SQL Server on-premises and who also want flexibility, control, and transparency. The second model is the DTU-based model. Here you can't independently choose compute and storage: it's a bundled measure of compute, storage, and I/O resources. Compute sizes are measured in terms of database transaction units (DTUs), and if you go for elastic pools, they are measured as elastic database transaction units (eDTUs).
Each transaction unit represents a blended amount of compute, storage, and I/O resources, so you don't choose them independently. So if you're a startup company and you want a SQL database, then go for this, because you don't need to worry about configuration and all that stuff: you select, let's say, 10 DTUs and start working with it, and at any point in time you can increase the number of DTUs as well. So it is best for customers who want simple, preconfigured resource options. Again, I provided a link to a comparison of the types of purchasing models in the resource section of this lecture; please go through it. And the next thing is service tiers. Each purchasing model has service tiers under it. In the vCore purchasing model they are called the General Purpose, Business Critical, and Hyperscale service tiers; when it comes to the DTU model they are called the Standard and Premium service tiers, and there is also something called Basic in the DTU model. So let's go through some of them. The first one is General Purpose/Standard. This model is based on the separation of compute and storage. The underlying architectural model relies on the high availability and reliability of Azure premium storage, which transparently replicates database files and guarantees no data loss if an underlying infrastructure failure happens. Basically, the compute — the database engine — will be on one node and the storage will be on a separate node; that's how the General Purpose/Standard architecture looks if you really examine the underlying pieces. And when it comes to the Business Critical/Premium service tier model, it's based on a cluster of database engine processes. In this case, both the SQL database engine process and the underlying database files are placed on the same node, with locally attached SSD storage. The advantage of having both of them on the same node is very low latency. So if you have a heavy, I/O-intensive workload, then this is the model you should ideally go for. And in terms of high availability, it is implemented using technology similar to SQL Server Always On availability groups. So when you compare General Purpose/Standard with Business Critical/Premium, there is a real difference in the underlying architecture: in the case of General Purpose, the SQL database engine is on a separate node from the storage, but when it comes to Business Critical/Premium, both the database engine process and the database files are on the same node. And finally, we have the Hyperscale service tier. Basically, it's a newer service tier available in the vCore-based purchasing model — still in preview, I think, at the time I'm recording this lecture. This service tier is a highly scalable storage and compute performance tier that leverages the Azure architecture to scale out storage and compute resources for an Azure SQL database substantially beyond the limits of the General Purpose and Business Critical service tiers. So unless you are working for a Fortune 500 company, I believe it is unlikely you'll go with Hyperscale, because it's huge, basically. Most of the time you'll be working with the General Purpose/Standard or Business Critical/Premium tiers.
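One practical note on the DTU model described above: the compute size you pick at creation is not fixed. As a minimal, hedged sketch — the database name is this course's example, and the tiers shown are the Standard S0 (10 DTUs) and S1 (20 DTUs) objectives — a DTU-based database can be rescaled with a single T-SQL statement:

    -- Scale a DTU-based database up from S0 (10 DTUs) to S1 (20 DTUs).
    -- Run while connected to the logical server; the change is applied online.
    ALTER DATABASE rudradatabase
        MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S1');

    -- Confirm the current edition and service objective afterwards.
    SELECT DATABASEPROPERTYEX('rudradatabase', 'Edition')          AS edition,
           DATABASEPROPERTYEX('rudradatabase', 'ServiceObjective') AS service_objective;

The same statement can later move the database between tiers, which is what makes the "start small and grow" approach mentioned above workable.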
The next things I want to discuss are the two terms I mentioned: the logical server and the elastic pool. So let me start with the logical server. Within Azure, a logical server acts as a central administration point for multiple single or pooled databases: logins, firewall rules, auditing rules, threat detection policies, and failover groups. So the logical server is a container — like an administrative wrapper around your single databases and pools of databases. And obviously, before you create an Azure SQL database, a logical server must exist, and all databases on a server are created within the same region as the logical server. You might be thinking this is the equivalent of a SQL Server instance on-premises, but it's not, because the SQL Database service makes no guarantee regarding the location of the databases in relation to their logical server, and, more importantly, you don't have instance-level access or features — basically, you can't access the underlying instance on which the logical server sits. So keep that in mind. As I said, it's like a parent resource for all the databases, elastic pools, and even data warehouses: if you are going for SQL Data Warehouse, again you will create a logical server first and deploy the SQL Data Warehouse onto that logical server. And finally, elastic pools. SQL Database elastic pools are a simple, cost-effective solution for managing and scaling multiple databases that have varying and unpredictable usage demands. The databases in an elastic pool are on a single Azure SQL Database server and share a set number of resources at a set price. So, for example, let's say you have databases A, B, and C. A consumes 1000 DTUs in January — that's the maximum it ever consumes; B consumes 1000 DTUs in, let's say, July, during the summer; and C consumes its maximum in December, during the Christmas period. Instead of buying 1000 DTUs for A, 1000 DTUs for B, and 1000 DTUs for C, you can aggregate and buy a 1500-eDTU elastic pool and deploy all three databases into the elastic pool. Basically, at any one point in time only one database is consuming its maximum, and even that level you can control: within the elastic pool you can configure things such that a particular database — let's say database A — should not exceed 1000 DTUs, so that the remaining five hundred DTUs stay available for B and C; similarly, you can cap B at a set number of DTUs, and so on. So basically, the biggest saving is that instead of buying 3000 DTUs you are actually buying 1500 eDTUs, so you are essentially saving 50% of the cost with elastic pools. And you can configure resources for the pool based on either the DTU purchasing model or the vCore-based purchasing model, and the best size for the pool depends upon the aggregate resources needed for all the databases in the pool. Basically, this involves determining two parameters: the first parameter is the maximum resources utilized by all databases in the pool, and the second is the maximum storage bytes utilized by all databases in the pool. So you aggregate the resource needs and purchase for the aggregate, rather than individually purchasing resources for individual databases. In case this isn't entirely clear, don't worry about it: we have a lab on it where I'm going to create an elastic pool and deploy databases into it, and I'll explain further in that lab. So that's it for this lecture.
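To make the elastic pool example above concrete, here is the arithmetic as a worked calculation, using the same hypothetical numbers as the lecture:

    \[ \text{Individually provisioned: } 3 \times 1000\ \text{DTUs} = 3000\ \text{DTUs} \]
    \[ \text{Pooled: } 1500\ \text{eDTUs} \quad\Rightarrow\quad \text{saving} = \frac{3000 - 1500}{3000} = 50\% \]

The pool works because the three peaks fall in different months, so the aggregate demand at any one moment stays well under the pool's 1500 eDTUs.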
In this lecture, I have provided a brief overview of Azure SQL Database, its different deployment models, and its purchasing models; within the purchasing models I have taken you through the different service tiers; and finally, we talked about the logical server and elastic pools. The next lecture is a lab where I'm going to create a logical server, deploy a database onto it, and also go through some of the configuration items within the Azure portal. Okay, so if you have some time, join me in the next lecture. 3. Lab demo: Creation of Azure SQL database using Azure portal: Hi. Welcome to this lab. In this lab, I'm going to show you how to create a logical server in Azure and also deploy a database onto it; finally, I'm going to take you through some of the key features of the logical server and the database. So, first of all, let's create the logical server and database. In order to do that, click on 'Create a resource', type in 'SQL databases', then select 'SQL Database' and click on 'Create'. Firstly, you need to give a name; I'm going to call this 'rudradatabase'. For subscription, select yours; for the resource group I'm going to create a new one, 'databases-rg'. And there are three source types you can select: the first one is a blank database, which will create, basically, an empty database; or you can create a sample database; or you can restore from a backup. So, first of all, I'm going to create the sample, and then the server: you need to create a server, because this database should sit inside a logical server. I'm going to call this 'rudrasqlserver'. Give the login name and password, and for the location I'm going to select North Europe. 'Allow Azure services to access server' — I'm going to keep that as it is, because we are going to use the query editor in the Azure portal in order to log in to this database, and I'm going to show you the tables in the database, etcetera. So for that purpose, I'm going to leave this tick box checked, okay, and then click on 'Select'. An elastic pool — we are not going to create an elastic pool as of now, so I'm going to leave that as it is. And in terms of the pricing tier, as I said, in the DTU model there are three pricing tiers — Basic, Standard, and Premium — and you can select the vCore pricing model as well, but we are going to select that when I show you how to create a managed instance; in that case we're going to select vCore. Okay, so for this lab's purpose I'm going to select Basic with minimum storage, click on 'Apply', and finally click on 'Create'. This database creation and logical server creation is going to take some time, so I'm going to pause this video for a few minutes and come back. Now the database and SQL server have been successfully deployed. In order to view them, click on 'Resource groups'; this is the resource group we created; click on it and you can see the SQL server and the SQL database. So let's go to the SQL server first, in terms of features. Here, when you click 'SQL databases', you can see all the databases under the logical server — remember, a logical server can contain a number of databases and a number of elastic pools as well. Okay. And if you click on 'SQL elastic pools', you can see all the pools under this server; but we haven't created an elastic pool, so we don't have anything in it. The next thing is failover groups. Here you can create a failover group in order to automatically manage application connectivity, etcetera. I'm going to show you how to create these failover groups in the latter part of this course.
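As an aside, the database created through the portal at the start of this lab can also be created with T-SQL. A minimal sketch, assuming you are connected to the master database of the logical server created above; note that T-SQL creates a blank database — the AdventureWorksLT sample used in this lab is a portal option:

    -- Create a Basic-tier database on the current logical server.
    -- Edition, objective, and max size mirror the portal choices made above.
    CREATE DATABASE rudradatabase
        (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic', MAXSIZE = 2 GB);

    -- List the databases that now exist on this logical server.
    SELECT name, create_date FROM sys.databases;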
Here you can also manage backups and define retention policies — basically, whatever backup policy you want to implement. And here you can set an admin, which you can use in order to log in to the underlying databases. And if you come down here, you can enable Advanced Threat Protection; again, I'm going to explain this in the latter part of the course. Click on 'Auditing': here you can enable auditing and also specify the audit log destination; the destination can be a storage account, Log Analytics, or Event Hub. And, most importantly, you have 'Firewalls and virtual networks'. Here you can configure settings and rules in order to allow connections to your SQL databases from a single IP address, or from a particular virtual network and a subnet within it. Okay, in the next lecture I'm going to explain further about these firewalls and networks. The next thing is transparent data encryption — in short, it is called TDE. Here you can specify your own key and use that key to encrypt not only your databases but also backups and logs. Okay, and if you come down here, you have intelligent performance, automatic tuning, recommendations, and all that stuff, which we'll go through at an appropriate time. And coming to the database level now — we have gone through the logical server level — what kind of administrative activities and configuration can you do at this level? At this level you can configure geo-replication: basically, you can create a secondary database in a faraway region and replicate the data from primary to secondary; whenever the primary goes down, you can reroute the requests to the secondary database. Again, I'm going to explain further about this geo-replication and failover groups in the next lecture. And here you can see all the connection strings, based on the connection provider that you use. The next thing is syncing to other databases: you can create a sync group and sync the data from this database into another database. Also, here you can enable Advanced Threat Protection, you can enable auditing, you can do dynamic data masking (which I'm going to explain in the security lecture), and you can manage transparent data encryption — by default, it is enabled for all databases. If you want to disable it, you can, but why would you want to do that? And you can use the intelligent performance features and all that stuff, which I'm going to take you through in the monitoring lecture. Okay, so these are the configuration activities that you can do on a SQL server and also a SQL database. So that's it for this lab: in this lab I have shown you how to create a SQL server and also a SQL database within Azure, and very quickly we have been through the different configurations that we can do at the server level and the database level. In the coming labs I'm going to show you these key configuration activities in a bit more detail. Okay. I hope you find this lab useful. If you have some time, join me in the next lecture. 4. Azure SQL Database configuration overview: Hi. Welcome to this lecture. In this lecture, I'm going to take you through some of the key configuration features of Azure SQL server and SQL database. In terms of configuration, the first thing is firewall rules. At the server level — in other words, at the logical server within Azure — you can define firewall rules. They can be IP rules: basically, IP rules grant access to databases
based on the originating IP address of each request. And the second type of rule is virtual network rules: they are based on virtual network service endpoints. So these are the two types of rules that you can define at the server level. Okay. And when you define a rule at the server level, it will apply to all databases within the same logical server. These rules are stored in the master database, and they can be configured by using the portal or T-SQL statements. So when you define server-level rules, make them fairly general, because they will apply to all databases within that server. And in case you want to accept connections at a more granular level, then define database-level firewall rules. These rules enable clients to access certain databases within the same logical server; you can create these rules for each database, and they are stored in the individual databases. So basically, you define server-level firewall rules at a general level that applies to all databases, and at a more specific level you define the rules at the database level. In the next lab, I'm going to show you how to configure server-level firewall rules.
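As mentioned above, these rules can be managed with T-SQL as well as through the portal. A minimal sketch of both rule scopes — the rule names and the 203.0.113.x addresses below are placeholders, not real values:

    -- Server-level rule: run in the master database. It applies to every
    -- database on the logical server and is stored in master.
    EXECUTE sp_set_firewall_rule
        @name             = N'OfficeRange',
        @start_ip_address = '203.0.113.1',
        @end_ip_address   = '203.0.113.254';

    -- Database-level rule: run inside the individual database. It grants
    -- access to that database only and is stored in that database.
    EXECUTE sp_set_database_firewall_rule
        @name             = N'ReportingClient',
        @start_ip_address = '203.0.113.10',
        @end_ip_address   = '203.0.113.10';

    -- Inspect what is configured at each scope.
    SELECT * FROM sys.firewall_rules;           -- server level (query in master)
    SELECT * FROM sys.database_firewall_rules;  -- database level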
The next thing is geo-replication. Geo-replication is defined at the database level, not the server level, and it is designed as a business-continuity solution that allows applications to perform quick disaster recovery of individual databases in case of a regional disaster or a large-scale outage. When you're configuring geo-replication, you basically specify a secondary database at a location far away from the primary. And you can have a Traffic Manager that routes traffic by default to your primary load balancer, and that primary load balancer, based on the application request, can route read-write traffic to the primary logical server and read-only traffic to the secondary server. The advantage of this is that you can offload some of the read-only traffic from the primary onto the secondary. That way, the primary's performance will be good, because read-only queries consume a certain amount of CPU or DTUs, and since you have a secondary to which the data is continuously replicated anyway — why not use that secondary logical server for read-only queries? So that's the kind of architecture you can design. And whenever the primary data center is down, you can then reroute all the traffic, both read-write and read-only, to the secondary server. Okay, I hope you get it; in case you don't, I'm going to create a secondary database in a secondary region and show you how you can enable geo-replication in the next lab. And the next thing is failover groups. Auto-failover groups are a SQL Database feature that allows you to manage the replication and failover of a group of databases on a logical server — or of all the databases in a managed instance — to another region. Basically, failover groups work on the same principle as geo-replication, but at the server level: at the server level you can create a failover group, put different databases into that group, and initiate failover for a group of databases rather than an individual database. Generally, this is very useful when you have a complex network of databases. In other words, a particular solution will have a number of applications and a number of databases, and there is no point in failing over a single database to a secondary region, because the solution as a whole doesn't work that way. The solution as a whole will require, let's say, databases A, B, and C; in that case, you want to move the A, B, C databases together to the secondary data center in case of failover. In that case you can use failover groups, to be defined at the logical server level: add the A, B, C databases to the failover group, and create a configuration policy — in other words, a failover policy — which will automatically fail over all three databases in case of unplanned failures. And you can initiate failover manually, or you can delegate it to the SQL Database service based on a user-defined policy. When you are using auto-failover groups with an automatic failover policy, any outage that impacts even a single database in that failover group will result in an automatic failover of all the databases. And from the application point of view, you have the failover group read-write listener: basically, a DNS CNAME record that points to the current primary URL. It allows read-write SQL applications to transparently reconnect to the primary database when the primary changes after a failover. Basically, this listener will get updated as soon as the failover happens. So from the application's point of view, you are connected to a single DNS name the whole time, but under the hood, whenever a failover happens, that CNAME record gets updated with the current primary URL; so if the secondary becomes the primary, this URL will get updated. Okay, so this is how you can do failover groups at the server level. And finally, database backups, which are very key from a database perspective. SQL Database uses SQL Server technology to create full, differential, and transaction log backups for the purpose of point-in-time restore. Basically, transaction log backups generally occur every five to ten minutes, and differential backups occur with a frequency based on the compute size and the amount of database activity. So based on whether you've gone for Basic, Standard, or Premium, and also based on the amount of database activity, this frequency will change. And in terms of retention of these backups, each SQL database has a default backup retention period of between 7 and 35 days; that depends upon two things — the first one is the purchasing model, and the second is the service tier. However, in case you want to keep these backups for a longer period of time, then you need to define a backup retention policy, which they call an LTR (long-term retention) policy. So basically, if you go for long-term backup retention, those backups are copied to separate storage blobs if the LTR policy is configured, and you can configure an LTR policy for each SQL database and specify how frequently you need to copy the backups to the long-term storage blobs. Okay, so basically: backups will be taken at certain points in time based on the compute size and amount of database activity, and you can keep those backups for a certain period of time, between 7 and 35 days. But if you want to keep those backups for a longer period — more than 35 days — then you can use long-term backup retention, which basically uses Azure Storage in order to copy those backups and store them. So that's it for this lecture. In this lecture, I have explained firewall rules and the different types of firewall rules, geo-replication at the database level, failover groups at the server level, and finally, we talked about database backups.
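For reference, here is a sketch of the two failover-group listener endpoints described above, assuming a hypothetical failover group named rudra-fog; the CNAME records are managed by the service and repoint automatically after a failover:

    -- Read-write listener (always follows the current primary):
    --   rudra-fog.database.windows.net
    -- Read-only listener (follows the readable secondary):
    --   rudra-fog.secondary.database.windows.net
    --
    -- Example server values for a connection string (credentials omitted):
    --   Server=tcp:rudra-fog.database.windows.net,1433
    --   Server=tcp:rudra-fog.secondary.database.windows.net,1433

Because applications only ever hold these stable DNS names, a failover changes where the names point, not the application configuration.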
The next lecture is a lab where I'm going to show you how to configure firewall rules at the server level and also how to enable geo-replication at the database level. So if you have some time, join me in the next lab. 5. Lab demo: Configuration of firewall rules & Active geo replication: Hi. Welcome to this lab. In this lab, I'm going to show you a number of things. Firstly, I'm going to show you how to configure firewall rules. Secondly, I'm going to show you how to set the Azure Active Directory admin. Thirdly, I'm going to show you how to use the query editor in order to run queries from within the Azure portal itself. And finally, I'm going to show you how to enable geo-replication. Okay, so first of all, let's go to the firewall settings. For that, you need to go to the logical server, then come down here, and you can see 'Firewalls and virtual networks'. Firstly, I'm going to add my client IP, and you can also configure the virtual network from which you want to accept connections. In this case, I'm going to add an existing virtual network: select the subscription — I have only one virtual network, and it has been automatically selected — and click on OK. Basically, any virtual machine existing in this virtual network will be able to connect to this particular SQL server and the databases within it. Okay, the virtual network has been successfully added; now let's save this configuration. The second thing I want to show you is the Active Directory admin. It allows you to centrally manage identity and access to your SQL database. So let's set the admin — and I'm going to discuss this Azure AD admin in a bit more detail in the security lecture. I'm going to type in my ID and select it. The reason I'm doing this is that it will be useful in the query editor: when you use the query editor, you need to log in either with SQL Server credentials defined within the database, or you need to use Azure Active Directory, once you configure the admin here. Okay, let's save this and then close; go to the database and open the query editor. Now, you can see here there are two types of authentication, and you need to have at least one of them: one is SQL Server authentication, the other is Azure Active Directory authentication. Because I set myself as an admin, I've already got access to the database. Okay, click on OK. And this is the query editor that you can use in order to log in to a database under the particular server and start accessing tables and views — basically, you can view the tables, and also the stored procedures, views, and everything. However, if you want to do advanced stuff, then go to SQL Server Management Studio; don't do any advanced stuff here. So let's run one query. Let's see what tables we have: SalesLT.Customer. Okay, let's query it... and you have the result. So this is how you use the query editor. And finally, I'm going to show you geo-replication and how to enable it at the database level (at the server level, you need to use failover groups). Okay, if you go here, you can select the secondary region where you want to create a read-only replica. So if you come down here, you can select — I want to select West Europe; let's select West Europe. And within West Europe, you need to have a SQL server and a database to replicate the data into. Because I don't have any SQL server there, I'm going to create one, which I'll call 'rudrasqlserverwest'.
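What the portal is about to do here can also be expressed in T-SQL. A minimal sketch using this lab's names; note the secondary server must already exist before the statement is run:

    -- Run in the master database of the PRIMARY server (rudrasqlserver).
    -- Creates a readable geo-secondary of rudradatabase on rudrasqlserverwest.
    ALTER DATABASE rudradatabase
        ADD SECONDARY ON SERVER rudrasqlserverwest
        WITH (ALLOW_CONNECTIONS = ALL);

    -- Later, a planned failover is initiated from the SECONDARY server's master:
    -- ALTER DATABASE rudradatabase FAILOVER;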
You can see the pricing tier has been automatically selected based on the primary database, so I'm going to click OK. This is going to create a SQL server in West Europe plus a database, and replicate the data from the North Europe database into the West Europe database. Okay, so this is going to take some time, so I'm going to pause this video until it is done. Now, our deployment has been successfully completed; that means geo-replication has started. So if you go to the database here, you can see North Europe is being replicated to West Europe. Okay? And not only one: if you want multiple secondary replicas, you can keep on selecting regions and start replicating the data. However, the more readable replicas you create, the more cost you are incurring, so keep that in mind when you are creating these secondary databases. And if you go into the resource group, you should now see the west server and its database. Okay, so that's it for this lab. In this lab, I have shown you how to configure firewall rules and also configure a virtual network in order to accept connections to the SQL database; secondly, how to set the Azure Active Directory admin and use those credentials to log in to the SQL database using the query editor in the Azure portal; and finally, I have shown you how to enable geo-replication — basically, create a secondary replica and start replicating the data. One thing that I would suggest you do: please log in to the North Europe database and make some changes, then log in to the West Europe database and check that the changes have been successfully replicated. Okay, see you in the next lecture.
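Following the suggestion above, a quick way to check that replication works, assuming the AdventureWorksLT sample schema deployed earlier in the course (the names and values below are only test data):

    -- 1. On the North Europe (primary) database:
    INSERT INTO SalesLT.Customer
        (NameStyle, FirstName, LastName, PasswordHash, PasswordSalt)
    VALUES
        (0, 'Geo', 'ReplicaTest', 'not-a-real-hash', 'salt');

    -- 2. On the West Europe (secondary) database, a few moments later:
    SELECT FirstName, LastName
    FROM SalesLT.Customer
    WHERE LastName = 'ReplicaTest';  -- the row appears once replication catches up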
6. Azure SQL managed instance overview: Hi. Welcome to this lecture. In this lecture, I'm going to provide you an overview of Azure SQL Database managed instance. Managed instance is a new deployment model of Azure SQL Database based on the vCore-based purchasing model. Basically, what used to happen is this: previously, Microsoft came up with the DTU-based purchasing model; however, many on-premises customers, when trying to convert their existing configuration to the DTU-based purchasing model, used to face a lot of difficulty, because there is often a bit of ambiguity in how to convert an existing on-premises SQL database configuration into this DTU-based purchasing model. So what Microsoft did is come up with a new offering called SQL managed instance, with a vCore-based purchasing model; the vCore-based purchasing model enables you to easily map the existing configuration of your on-premises SQL database onto Azure. Okay, so this particular SQL managed instance is mainly targeted at on-premises customers: on-premises customers can easily lift and shift their on-premises SQL Server to Azure managed instance, which offers compatibility with SQL Server on-premises. So that's the first advantage. And the second advantage is that although it allows you to lift and shift, it doesn't mean that you are losing all the PaaS benefits — you still have all the PaaS benefits available with SQL managed instance. And one additional important feature associated with SQL managed instance is security, because a managed instance gets deployed into its own virtual network; your environment is actually isolated from all other environments. And finally, it's a new business model — a very competitive, transparent, and frictionless business model. Basically, if you have SQL Server licenses, then you can recycle them and use them in Azure: the overall cost of the managed instance can come down by around 30% if you are using your on-premises SQL Server licenses. So let's go through the security side in a bit more detail. Managed instance provides additional security isolation from other tenants in the cloud. The security isolation basically includes a native virtual network implementation and connectivity to your on-premises environment using either ExpressRoute or a VPN gateway; you can use either of them to connect your existing on-premises network with the virtual network that gets created with the managed instance, and connect to the database privately, over a private network. That's one big advantage. And the second advantage is that the SQL endpoint is exposed only through a private IP address, allowing safe connectivity from private or hybrid networks; there is no public endpoint for the clients to connect to — it's only through a private IP address. And finally, and most importantly, it's a single-tenant environment; it's not multi-tenant. It's a single tenant with dedicated underlying infrastructure, so both compute and storage are dedicated to you. In that way, you can satisfy some of the regulations you have based on the country where you are located. And finally, you need to understand what the structure of a managed instance looks like and how the communication happens, so let me take you through that. Firstly, when you create a managed instance, a virtual network will get created, which will have a front-end subnet, a gateway subnet, and the managed instance subnet, and the nodes that are deployed as part of managed instance creation will go into the managed instance subnet. Each node basically consists of the SQL engine and SQL management components. Within the same network you can deploy multiple nodes as well, and these multiple nodes will form a virtual cluster together with gateway servers, and this entire virtual cluster will have two endpoints. The first endpoint will be for client connections: whenever the clients — whether applications or users — want to connect to the database, they can use this endpoint, which has the form <mi_name>.<zone>.database.windows.net; the zone will be defined based on where it is deployed, and mi_name is basically your managed instance name. Okay, and the second endpoint is a management endpoint that will be used by Microsoft in order to manage this environment. Because Microsoft is responsible for managing this environment, they need to connect to it using automated scripts and the like, and there is an endpoint for that purpose. For this entire environment to work properly, it also needs to connect to Azure Storage and Service Bus. So when you are trying to restrict the traffic from the managed instance subnet to the outside, make sure you allow all the traffic required by Microsoft; otherwise, your environment might not work properly. And finally, in terms of client connections: applications can reside in the front-end subnet and connect to the database, or they can reside in a peered network. The moment you peer a network with the managed instance network, all the web apps and virtual machines in it can connect to the database, because both networks are peered.
You can also connect your on-premises applications to the database by creating either a VPN gateway or an ExpressRoute connection. Basically, all the connections — whether from web apps, virtual machines, or on-premises applications — communicate with the database over a private connection. So this is how the communication happens. This is very similar to the App Service Environment, if you are aware of it: they provide the same kind of concept in the database world as well, which they have labeled managed instance. Okay, so that's it for this lecture. In this lecture, I have taken you through managed instance, its features, its advantages from a security perspective, and also how everything is structured together to deliver the managed instance service. I hope you find this lecture useful. 7. Azure SQL database security overview: Hi. Welcome to this lecture. In this lecture, I'm going to take you through the different layers of security that you can implement in Azure SQL Database. Predominantly, you can divide Azure SQL Database security into five layers. The first one is management plane security. This is all about controlling what users can do using the Azure portal at the server level and the database level from a configuration perspective — in other words, who can log in to the Azure portal or run PowerShell scripts in order to change configuration at the database and server levels. The second thing is data plane security. This is all about providing access to the data within the database. Then you have encryption in transit and encryption at rest. And finally, auditing, which is basically auditing the changes to the database and providing that information to admins. Okay, so these are the five layers. I am not going to take you through management plane security now, because I'm going to show that to you in the next lab, when we go through these things using the Azure portal. So let's go through data plane security now. In terms of data plane security, SQL Database supports two types of authentication. One is SQL authentication, and the second one is Azure Active Directory authentication. With SQL authentication, when you're creating a database — if you remember our labs correctly, we provided a username and password — that username and password is basically the server-level principal account for your database server. Using that account, you can log in to any database under the particular server. And the second is Azure AD authentication, which uses identities managed by Azure Active Directory and is supported for managed and integrated domains. If you remember our previous lab, I created an Azure AD admin in order to log in to the database while using the query editor in the Azure portal. Basically, when you are setting the Azure AD admin in the configuration, you are essentially creating a second server-level principal account, which you can use to administer Azure AD users and groups; this admin can also perform all the operations a regular SQL administrator can. So, basically, two types of authentication: SQL authentication and Azure AD-based authentication. My recommendation is to go for Azure AD authentication, because that is the way forward.
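As a minimal sketch of the two authentication types just described, run inside the user database — the user names and password here are hypothetical, and the Azure AD user requires the Azure AD admin to be configured, as in the earlier lab:

    -- SQL authentication: a contained database user with its own password.
    CREATE USER app_reader WITH PASSWORD = 'Str0ng!Passw0rd';
    ALTER ROLE db_datareader ADD MEMBER app_reader;      -- fixed database role

    -- Azure AD authentication: a user mapped to an Azure AD identity.
    CREATE USER [jane.doe@contoso.com] FROM EXTERNAL PROVIDER;
    ALTER ROLE db_datawriter ADD MEMBER [jane.doe@contoso.com];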
There are also two types of roles that you can work with. One is server-level roles, where you can use the server principal account to manage server-level security; you also have options to assign logins to other SQL Database security roles — basically, you can assign logins to different roles, and we're going to do that in the labs, where I'll explain further. And the second thing is database-level roles. The built-in security roles at the database level are very similar to on-premises SQL Server security roles, so you can implement database-level security via fixed database roles or via custom roles; in case you want to design custom roles, you can create them. So overall, in terms of controlling access to the data: you can use SQL authentication or Azure AD authentication; when you are creating users, you can assign server-level roles or database-level roles; and in terms of role definitions, you can either use the fixed roles or create custom roles based on the needs of your application. Okay, and the next thing is encryption at rest. In terms of encryption at rest, I'm sure you are all aware of TDE (transparent data encryption): it has been in on-premises SQL Server since SQL Server 2008, and it applies exclusively to data at rest — that means the database and the backups are all encrypted. However, one problem with TDE is that if you have granted permissions on your database, with TDE every such user can still see all the data. So if you have a support user who has access to maintain the database, they are generally able to see all the data. Say the administrator supporting your database is located somewhere in Southeast Asia, while your database is located in Europe: as per GDPR regulation, you may not want even the administrator to be able to view some of the data in your database. For example, you don't want the administrator to view something like a National Insurance number, as they call it here — a Social Security number in America, or an Aadhaar number in India. In case you don't want a particular table containing sensitive information to be viewable by those users, then you can use Always Encrypted. It basically introduces a set of client libraries to allow operations on encrypted data transparently inside an application, and the keys are always under the control of the client application and never on the server. Basically, what happens is: you type in a National Insurance number or Social Security number in the application, and before it reaches the server, the application itself encrypts that particular value using an encryption key, and only the encrypted version of the data gets stored in SQL Server. That way, none of the administrators is able to view the actual number; the only people who can view it are users accessing the application with appropriate credentials and access levels. Okay, so this is one very good feature, and I strongly recommend using it, but it might require application changes, so you need to keep that in mind and balance the benefit against the cost. So that is encryption at rest.
The next thing is encryption in transit. SQL Database connections are encrypted using TLS/SSL for the tabular data stream (TDS) transfer of data, and for all new SQL databases Microsoft provides a valid certificate for the TLS connection. So any connection you make to the database is encrypted using TLS — the same thing you can do with on-premises SQL Server, you get with Azure SQL Database as well. And there are two more important data-protection features you need to keep in mind with SQL Database: one is row-level security, and the second one is dynamic data masking. With row-level security, you can restrict access to rows using a security predicate that is defined as an inline table-valued function, and you can create a security policy to enforce that function. So, for example, if you have a table containing project data — let's say there is a flag which defines whether a project is a secret project or not — and you want to restrict access to those rows, in other words, to all the sensitive projects within that table, in that case you can use row-level security: you basically define a table-valued function and a security policy in order to enforce that function for particular sets of users. And the second thing is dynamic data masking. Dynamic data masking is a feature that allows you to limit exposure of your sensitive data without making client or application changes, while also enabling visibility of a portion of the data. This is typically used for credit card data: in case you want to keep the credit card data but show only a portion — the last four digits of the card — then you can use dynamic data masking. And in terms of data storage, the underlying data in the database remains intact; the mask is applied based on the user's privileges. Okay, so basically there are three ways you can encrypt the data or restrict the data. The first one is Always Encrypted: if you want that SQL administrators, support persons, or anybody else should not see the data, and only users having access to applications at an appropriate level should see it, use Always Encrypted. The second thing is, if you want to place restrictions on access to certain rows in a table, then you can use row-level security. And finally, if you have credit-card kind of information where you want to partially black it out — basically put XXXX for most of it but show the tail end of the value, or the beginning, it's up to you — then you can use dynamic data masking. And finally, one more important thing I want to take you through, which is auditing. SQL Database auditing is available in all service tiers. By implementing the auditing feature in SQL Database, you can retain your audit trail over time, as well as analyze reports showing database activity for success or failure conditions. For any database, auditing is very important: you want to know who is accessing the data, who is changing it, and all that stuff. Particularly if you are working for government, this is extremely important, because the data of millions and millions of citizens will be in the database, and you want to make sure nobody is accessing that data for their own personal purposes. There have been incidents where people got fired because proper auditing was in place, and it was possible to track who read the data and question those individuals. And in terms of enabling auditing, there are two levels at which you can enable it: you can configure auditing at the server level — in that case, all the databases under that particular logical server will inherit the same audit settings — or, as an alternative, if you want different audit settings for different databases, you can configure an audit policy for each SQL database individually.
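Minimal sketches of the row-level security and dynamic data masking features described above, against hypothetical tables: a dbo.Project table with an IsSecret flag and an OwnerUser column, and a dbo.Customer table with a CreditCardNumber column.

    -- Row-level security: an inline table-valued predicate function plus a
    -- security policy that enforces it. Only owners see secret projects.
    CREATE FUNCTION dbo.fn_ProjectFilter(@IsSecret BIT, @OwnerUser SYSNAME)
    RETURNS TABLE
    WITH SCHEMABINDING
    AS
    RETURN SELECT 1 AS allowed
           WHERE @IsSecret = 0 OR @OwnerUser = USER_NAME();
    GO
    CREATE SECURITY POLICY dbo.ProjectPolicy
        ADD FILTER PREDICATE dbo.fn_ProjectFilter(IsSecret, OwnerUser)
        ON dbo.Project
        WITH (STATE = ON);
    GO

    -- Dynamic data masking: the stored value stays intact, but users without
    -- the UNMASK permission see only the last four digits.
    ALTER TABLE dbo.Customer
        ALTER COLUMN CreditCardNumber
        ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');
    GRANT UNMASK TO finance_auditor;  -- hypothetical user allowed raw values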
So that's it for this lecture. In this lecture, I have taken you through the five layers of Azure SQL Database security: we have gone through data plane security, encryption in transit, encryption at rest, and finally auditing. In the next lab, I'm going to take you through all five levels of security using the Azure portal in a lab demo. So if you have some time, join me in the next lab. 8. Lab demo: Walkthrough of Azure SQL database security features using Azure portal: Hi. Welcome to this lab. In this lab, I'm going to provide you a walkthrough of all the security features associated with Azure SQL server and SQL databases, using the Azure portal. So the first security feature is access control. This is related to the management plane security that we discussed in the lecture. Here you can control who can access your server and change the configuration settings of the server using the Azure portal or Azure PowerShell. By the way, by providing access here, you are not providing access to the underlying data in the database; you're only providing access to the server and databases from a configuration perspective, at the Azure level. Okay — basically, they can come here and change the configuration, or they can use Azure PowerShell scripts as well. Keep that in mind. And one more thing you need to keep in mind: this Azure access control is not available at the database level; it is only available at the server level. Okay. Now I'm going to grant the Contributor role: here I'm going to search for somebody and provide access. And see — now I've given the Contributor role to somebody who can do all the configuration changes. But one thing they can't do with the Contributor role is assign a role to some other user, so they can't add a user here; they can only change the configuration. If they need to add users as well, then you need to select the Owner role. Okay. And the second thing is the Azure Active Directory admin. As I said in the lecture, there are two principal accounts that you can create for a server and its databases. The first one is created when you are creating the SQL database server itself: you provide a name and password at that time. And here, if you want to manage users using Azure Active Directory, then you need to integrate an Azure Active Directory. In this case, when you set an admin and select somebody, what you are essentially doing is linking the Azure Active Directory associated with the subscription to your SQL server and databases. By doing that, you can essentially go into the database and start creating SQL users mapped to Azure Active Directory identities. So basically, you'll create two server principal accounts: one when you are creating the server, and one you can set here — the one set here, which is the Azure AD admin, you can use to integrate Azure Active Directory with the database, and you can create users that live within Azure Active Directory. Okay, for more information, please refer to the link that I provided in the resource section of this lecture. And the next thing is transparent data encryption. So if you come down here, you can click on it, and you can use your own key in order to encrypt the data. But if you want to control which databases need to be encrypted and which should not, then you can go to the database level, come down here, and click here: you can either enable encryption or disable it. But if you want to provide your own key, then go to the server level and set it there.
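The TDE setting walked through above can also be toggled and verified with T-SQL. A minimal sketch — note TDE is already on by default for new Azure SQL databases, so the ALTER is usually only needed after a deliberate disable:

    -- Enable TDE for one database.
    ALTER DATABASE rudradatabase SET ENCRYPTION ON;

    -- Verify the encryption state (3 = encrypted).
    SELECT DB_NAME(database_id) AS database_name, encryption_state
    FROM sys.dm_database_encryption_keys;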
But if you want to control which databases need to be encrypted and which should not, then you can go to the database level, come down here, click on it, and either enable encryption or disable it. If you want to provide the key yourself, though, go to the server level and set it there. Okay?

And I already showed you firewalls and virtual networks here in one of the previous labs: we added a client IP address, and we also added a virtual network to accept connections to this database from that virtual network.

And finally, you can do auditing here. So basically you can switch on auditing and configure where the logs should be stored. For a storage account, you can create one or use an existing one; you can specify the retention here, and you can specify which storage access key you want to use, then click OK and click Save. However, you also have a choice to configure these auditing policies at the database level. So you can go to here... not here, sorry, here: you can see server-level auditing is enabled, and you can switch it on here as well. But one thing you need to remember is that if blob auditing is enabled on the server, it will apply to the database regardless of the database settings. So basically, even if database-level auditing is switched off, server-level auditing will still take place; and if you switch on database-level auditing, both policies will be applied and run in parallel. Keep that in mind. But in case you want to store the audit logs of this particular database at a different location, then you can configure it here. And also, if you want the auditing information to flow into Log Analytics, you can specify that here, along with the Log Analytics workspace and all that; and if you want the information to be posted into an event hub, you can configure that here too. OK, but generally you do it at the server level, and if you have any specific requirements for the database, configure it here.

So that's it for this lab. In this lab I have shown you how to implement management plane security: basically, under access control you can configure who has access to change SQL server and SQL database configuration settings using the Azure portal or Azure PowerShell. The second thing is transparent data encryption, using which you can configure your own key, or enable or disable encryption at the database level. And finally I have shown you auditing, where you can configure an auditing policy and define the destination, be it a storage account, Log Analytics, or an event hub. OK, I hope you find this lab useful.

9. Azure SQL database monitoring overview: Hi, welcome to this lecture. In this lecture I'm going to take you through the different monitoring tools that are available in Azure in order to monitor your databases, elastic pools, and managed instances within Azure.

In terms of Azure SQL Database monitoring, the basic one is metrics monitoring. Azure provides metrics for every resource that exists in Azure, so basically you can go to the Azure portal and start viewing the metrics associated with the resource. You can define your own charts and pin those charts to a dashboard, so that is where you need to start your monitoring first. The next level of monitoring you can do using diagnostic telemetry logging: basically, Azure SQL databases,
elastic pools, managed instances, and databases within those managed instances can stream metrics and diagnostic logs for easier performance monitoring. So you can go into the Azure portal, enable diagnostic telemetry logging, and stream those logs into either a storage account, an event hub, or Log Analytics, and develop your own custom solutions on top of it, either to troubleshoot or to monitor.

And the next basic tool that is available is Query Performance Insight. I really like this tool, because in the past, managing and tuning the performance of a relational database used to take a lot of expertise and time. But using Query Performance Insight, you can identify the top few queries that are taking CPU, I/O, or memory. So you can identify those top few queries and fine-tune them, and Azure automatically identifies the next top few; again, you start fine-tuning them. In this way you will continuously improve the performance of your databases within Azure. So that is all about Query Performance Insight; I'm going to touch upon it in a bit more detail in the next slide.

If you want to go for advanced monitoring, then go for Azure SQL Analytics. It is basically a cloud-only monitoring solution for monitoring the performance of Azure SQL databases, elastic pools, and managed instances at scale, across multiple subscriptions, through a single pane of glass. Generally, in a real-world scenario, your company or your client will have a number of subscriptions, and they will have a number of databases, and it is very difficult for a centralized monitoring team to go to each individual database and monitor it. Instead of that, you can collect all the metrics and logs information into Log Analytics, deploy the Azure SQL Analytics solution on top of it, and start monitoring centrally. Okay, I'm going to provide a bit more detail on Azure SQL Analytics in the latter part of this lecture.

And finally, Intelligent Insights into performance. Azure SQL Database Intelligent Insights lets you know what is happening with your Azure SQL Database and managed instance database performance.

So these are all the tools that are available within Azure to monitor your SQL databases. The first one is metrics: that's where you start. Then go to diagnostic telemetry logging and enable it. Then use Query Performance Insight to identify those queries that are taking a long time or utilizing more resources, and fine-tune them. And if you have a complex network of databases and solutions, then go for Azure SQL Analytics in order to centrally monitor and manage all those databases. And finally there is Intelligent Insights into performance, which provides very good insights and recommendations to implement on your database in order to improve its performance.
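Since Query Performance Insight, which I detail next, is built on top of the Query Store running inside the database, you can pull similar "top consumers" data yourself with T-SQL. A hedged sketch follows; which statistics you aggregate is a design choice, and the connection placeholders are yours to fill in.

```python
# Sketch: top CPU-consuming queries straight from Query Store, roughly
# the data that Query Performance Insight visualizes in the portal.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-database>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)
cursor = conn.cursor()
cursor.execute("""
SELECT TOP 5
    q.query_id,
    t.query_sql_text,
    SUM(rs.count_executions)                   AS executions,
    SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time
FROM sys.query_store_query q
JOIN sys.query_store_query_text t  ON t.query_text_id = q.query_text_id
JOIN sys.query_store_plan p        ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats rs ON rs.plan_id = p.plan_id
GROUP BY q.query_id, t.query_sql_text
ORDER BY total_cpu_time DESC;
""")
for query_id, sql_text, executions, cpu in cursor.fetchall():
    print(query_id, executions, cpu, sql_text[:80])
```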
So let me take you through Query Performance Insight and Azure SQL Analytics in a bit more detail. Query Performance Insight helps you spend less time troubleshooting database performance by providing deeper insights into your database's DTU consumption. It shows your database resource (DTU) consumption; you can monitor the overall consumption and also at a query level. The details of the top database queries by CPU, duration, and execution count will be displayed; I'm going to show you in the next lab, and it is very, very good actually. You can even drill down into a query: you can click on a query and view the query text and the history of resource utilization, basically how many times it ran, how much CPU it utilized, and all that. And then you can view the query text and fine-tune the query in order to reduce the resource consumption. It is a very, very useful tool if you use it in the right way. And finally, it also shows performance recommendations that come from SQL Database Advisor. So this is all about Query Performance Insight; I'm going to show it to you in the next lab.

And the next thing is Azure SQL Analytics. As I said, it is a cloud-only monitoring solution supporting streaming of diagnostics telemetry for Azure SQL databases; those databases can be single, pooled, or managed instance databases. The Azure SQL Analytics solution basically sits on top of Log Analytics: you stream all the data into Log Analytics and deploy the solution on top of the workspace. Once you deploy the solution, it gathers all the data that is available in the workspace, analyzes it, and presents different views. Predominantly, there are two separate views that you need to keep in mind: one for monitoring Azure SQL databases and elastic pools, and the other view is for monitoring managed instances and the databases in managed instances. So there are two views: one is for Azure SQL databases and elastic pools, and the second one is for managed instances.

And there are a number of reports available as part of SQL Analytics. The first one is Azure SQL Database Intelligent Insights, which lets you know what is happening with the performance of your SQL databases. And secondly, elastic pools and SQL databases have their own dedicated reports that show all the data collected for the resources in a specific time window. So basically, when you enable diagnostics logging and feed that data into Log Analytics, then, once you deploy SQL Analytics, you can use the standard reports that are available in it to view all that data and analyze it. And from a query perspective, there is a query report, which you can use to view the performance of each query, from query duration and query waits perspectives.

Okay, so basically this is an advanced tool: it contains a lot of views and a lot of reports, but I have mentioned the key ones. When you deploy Azure SQL Analytics you will incur a certain amount of cost, because you need to have Log Analytics before you can go for Azure SQL Analytics, so you need to pay some price for Log Analytics. However, if your client or your company has a large number of databases that need to be monitored centrally, then this is the best tool, because the cost of Log Analytics and SQL Analytics will be peanuts compared to individually monitoring the databases by going into each one. So instead of that, this is a ready-made solution for you: for centralized monitoring, go for this.

So that's it for this lecture. In this lecture I have taken you through the different tools that are available in order to monitor your SQL databases, elastic pools, and managed instances within Azure, and I have also taken you in a bit more detail through Query Performance Insight and Azure SQL Analytics. The next lecture is a lab where I'm going to take you through metrics, diagnostic logs, and Query Performance Insight using the Azure portal. See you in the next lab.
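Before the lab, one more hedged sketch: once diagnostics are streaming into a Log Analytics workspace, you can also query them programmatically rather than only through the SQL Analytics views, using the azure-monitor-query package. The workspace ID is a placeholder, and the AzureDiagnostics table and category name reflect how SQL Database diagnostics typically land in a workspace; treat them as assumptions to verify against your own workspace.

```python
# Sketch: query SQL Database diagnostics streamed into Log Analytics.
# Workspace id is a placeholder; table/category names are the usual
# ones for SQL diagnostics but should be verified in your workspace.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

kusto = """
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.SQL"
| where Category == "QueryStoreRuntimeStatistics"
| take 10
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=kusto,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```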
10. Lab demo: Walkthrough of Azure SQL database monitoring features using Azure portal: Hi, welcome to this lab. In this lab I'm going to show you how to use metrics, diagnostic logs, and Query Performance Insight in order to monitor your SQL databases that exist in Azure.

First, monitoring starts with these two metrics. The first one is the DTU percentage utilization: you monitor this, and if the utilization reaches 90% then you might decide to scale up, and if the utilization is consistently low, then you might scale down and save some costs. Similarly with the data storage as well. But apart from these, if you want to view other metrics, you can come down here, click on Metrics, and then you can view different types of metrics, as you can see: CPU percentage, I/O percentage, DTU limit, DTU used, and so on. There are a lot of metrics you can use. Also, if you want to define a timeline, you can define the time range, and also show time as either UTC/GMT or your local time. Okay? So you can keep on adding charts, pin those charts to your dashboard, and start monitoring. But everything I have shown here is completely basic, at the level of a single database; you can design your own dashboard with databases in it, but if you want complex dashboards and monitoring, go for Azure SQL Analytics. Okay?

And the next thing is diagnostic settings. Basically, you need to turn on the diagnostic settings in order to collect the data. Give it a name, say "diagnostics". There are three places to which you can feed the data. One is a storage account: you can archive to a storage account and specify the storage account details; or you can stream the logs into an event hub; or send them to Log Analytics. So you have three options, but most of the time (90% of the time) people will archive to a storage account and also send to Log Analytics, because most of the time your client or company will have lots of databases and they generally tend to use Log Analytics. And if you come down here, once you have selected a storage account, you can select the retention: this is how many days you want to retain those particular logs in the storage account. There are lots of logs available, as you can see: SQL insights, automatic tuning, Query Store runtime statistics, Query Store wait statistics, database wait statistics, and so on. So there are a lot of logs that you can use to troubleshoot an issue or to monitor. And you can feed the metrics into the storage account too. I'm not going to enable it, but you can try it on your own: basically provide the storage account details, specify the retention days for all these logs and metrics, and then save. Okay, so let's close this.
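The same platform metrics shown on these charts can also be pulled programmatically, which is handy if you want your own alerting or dashboards outside the portal. A hedged sketch with the azure-monitor-query package; the resource ID is a placeholder, and cpu_percent / dtu_consumption_percent are the usual Azure SQL Database metric names, but verify them against the metric definitions for your tier.

```python
# Sketch: retrieve Azure SQL Database platform metrics programmatically.
# Resource ID is a placeholder; metric names should be checked against
# the metric definitions for your database tier.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/databasesrg"
    "/providers/Microsoft.Sql/servers/<server>/databases/<database>"
)

response = client.query_resource(
    resource_id,
    metric_names=["cpu_percent", "dtu_consumption_percent"],
    timespan=timedelta(hours=1),
)
for metric in response.metrics:
    for ts in metric.timeseries:
        for point in ts.data:
            print(metric.name, point.timestamp, point.average)
```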
And the next thing is the performance overview. Here you can view the performance, basically the DTU performance of your database, and if there are any recommendations for you, they will be displayed here. However, I would like to show you Query Performance Insight, which is a really interesting tool from my point of view. Here you can see the top five queries by CPU, by data I/O, and by log I/O. If you come down here, this is the overall DTU from the queries' perspective; and further down you can see the overall consumption, basically overall CPU consumption, data I/O consumption, and log I/O consumption, but here what is shown is the consumption by each query. So I selected the top five queries here, and if you come down here, you can view the individual queries: how many times each was executed, and how much CPU it is utilizing. Now, currently my queries are not consuming much, but they are getting executed a number of times.

And the best thing about Query Performance Insight is this: if you select one of them, you can view the query text, and you can view the details for the query in terms of how many times it was triggered, at what time it was triggered, how much consumption it took, et cetera. This is very, very useful, because you can identify those queries that are taking more DTU consumption and start fine-tuning them. So I hope you find Query Performance Insight really useful.

And one final thing I would like to show you is the activity log. Don't underestimate the importance of the activity log; it is really important, because it provides information about activities that have been carried out on this particular database at the Azure portal or Azure PowerShell level. If you carry out activities within the database itself, those don't come here; but if you are doing any administrative or configuration work within the Azure portal, PowerShell, or the Azure REST APIs, those activities will come here: basically they will be logged and displayed in the activity log. This is really important from a monitoring perspective, because you want to know who is changing what, and at what time. All of this is very, very critical from an auditing perspective. Hence you need to continuously monitor this activity log and make sure the right people are accessing your databases from the Azure portal, PowerShell, and the other Azure tools.

So that's it for this lab. In this lab I have shown you how to use metrics and how to enable diagnostic logs, and I have also shown you how to use Query Performance Insight in order to view the top five or six queries that are consuming more CPU or more resources of the database. And finally, I have shown you the activity log, which basically contains a log of all activities that have been performed on this particular database using the Azure portal, Azure PowerShell, or other Azure tools. OK, I hope you find this lab useful.

11. Azure COSMOS DB overview: Hi, welcome to this lecture. In this lecture I'm going to take you through the Azure Cosmos database and its capabilities. Azure Cosmos database is a NoSQL data store. In traditional relational databases you have a table, the table has a fixed number of columns, and each row in the table should adhere to the schema of the table. Unlike that, in a NoSQL database you don't define any schema at all for the table, and each item or row in the table can have different values and a different schema of its own. Okay? Once you've gone through the next lab, you will understand what I mean. So basically the database engine is fully schema-agnostic. Since no schema and index management is required, you don't need to worry about application downtime when migrating schemas, because you aren't defining a schema in the first place.

And secondly, it is a globally distributed database. Cosmos DB allows you to add and remove any of the Azure regions to your Cosmos account at any time, with a click of a button. This is particularly important when your users are geographically spread, in which case you need to deliver the data with very little latency to those users. If your database is in Australia, and European users or US users are trying to access the data, then there will be a lot of latency because of the distance.
So instead of that, you just add regions for global distribution and replication. That means any record an Australian user adds to the database will get replicated to the replica databases within the US or Europe, and Cosmos DB makes it very easy for you. You don't need to worry about the network of data centers, the replication process, and all that; Microsoft takes care of it. You need to worry about application logic and business processes only. Okay?

And the next thing is developing applications using NoSQL APIs. If you have a NoSQL data store such as Cassandra, MongoDB, or others, then when you are migrating to Azure you don't need to change the entire application to consume new APIs, because Cosmos DB implements NoSQL APIs for MongoDB, Cassandra, and the rest. You can easily migrate your existing database into Cosmos DB with minimal changes to your application.

And the next big one is, of course, Cosmos DB's industry-leading, comprehensive SLAs. Cosmos DB is the first and only service to offer industry-leading, comprehensive SLAs encompassing 99.999% high availability, read and write latency at the 99th percentile, guaranteed throughput, and consistency. So there are a lot of things that Microsoft guarantees with Cosmos DB: it guarantees high availability, it guarantees latency, it guarantees throughput, and it guarantees consistency as well. Because this database is globally distributed, consistency is very important, and I will discuss it in a bit more detail in the subsequent part of this lecture.

And finally, low total cost of ownership. Since Cosmos DB is a fully managed service, you no longer need to manage and operate multi-data-center deployments and upgrades to your database software, or pay for support, licensing, or operations; you don't need to worry about any of it, Microsoft takes care of that. I believe the Cosmos database is a very good offering from Microsoft because of all this: imagine you're a startup with users from across the world. In that case it is extremely difficult for you to manage all these replicated databases and data centers; with cloud computing and the Cosmos database in Azure, you can implement that with the touch of a button, actually. Okay?

Next, I would like to discuss the Cosmos database structure, because you want to know how it is structured and how it really works, so let me take you through that. In terms of the Cosmos database structure, first you will create a Cosmos account. An Azure Cosmos DB account is the fundamental unit of global distribution and high availability, so if you have two data stores with different global replication requirements, put them into different Cosmos accounts. At the Cosmos account level, you can simply add and remove Azure regions to your Cosmos DB account at any time. And under the Cosmos account you have databases: you can create one or more Azure Cosmos databases under your account. A database is basically a namespace; it is a unit of management for a set of Azure Cosmos containers. And within a database, you can have containers. An Azure Cosmos container is the unit of scalability for both provisioned throughput and storage of items. A container can be called a collection, a table, or a graph, based on the database type you're going with, whether that is MongoDB or another kind of database.
The container is the level where you can provision throughput. You can provision throughput at the database level as well, but when you provision throughput at the database level, all the containers within the database share the resources. In case you have containers with different throughput requirements, provision throughput at the container level. Okay? And in turn, a container can contain items. An item can be called a document, row, node, edge, or something else, based on the type of database you go with; when I say type of database, it's basically the type of API you choose, whether MongoDB, Cassandra, or something else, and based on that the container and item are named differently. Okay? A container can also contain stored procedures, user-defined functions, and triggers. So this is how the Cosmos database structure looks.

And the next thing I would like to take you through is global replication and distribution of the Cosmos database, so let's go through that. In terms of global distribution and partitioning, Cosmos DB works in a different way. Unlike a traditional relational database, where you have a table and all the rows in the table sit in one physical place, when it comes to Cosmos DB you basically create logical partitions within a container: you can have a certain number of items with one partition key, and a second set of items with another partition key. These are called logical partitions, and each logical partition resides in a physical partition. So basically, if you treat the container as a table, the rows of the table are stored in different physical places (physical partitions) based on the partition key, and one or more logical partitions can reside in a single physical partition. A physical partition can also be called a replica set, because you have a leader and followers for failover; those are too much detail at this moment, so I haven't put them here. So remember that a container can contain millions of items, and you can divide those millions of items using the partition key to make logical partitions, and each logical partition resides in a physical partition. That's how the load of the container gets distributed locally. And also, the data of these logical partitions within a physical partition gets replicated to different regions; those are called partition sets. So you have the data partitioned locally and replicated across regions to maintain high availability. Okay? I hope you understand this global distribution and partitioning.

It is extremely important for you to choose the right partition key; I can't emphasize this enough, it is really, really key. Your performance depends on what you choose for the partition key. You need to make sure you strike the right balance in choosing the partition key, so that you have an appropriate chunk of records in each logical partition. A minimal sketch of partitioning in practice follows.
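To make partitioning concrete: when you create a container through the SDK, you supply the partition key path, and every item then carries a value for it. A minimal sketch with the azure-cosmos Python package; the account endpoint and key, the database and container names, and the /country key are placeholders echoing the lab that follows.

```python
# Minimal sketch: create a container with /country as the partition key.
# Items sharing a country value form one logical partition; Cosmos DB
# spreads the logical partitions over physical partitions for you.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<account-key>",
)
database = client.create_database_if_not_exists(id="rudradb")
container = database.create_container_if_not_exists(
    id="collection1",
    partition_key=PartitionKey(path="/country"),
    offer_throughput=400,  # minimum throughput, provisioned per container
)

# Two logical partitions here: "UK" and "India".
container.upsert_item({"id": "1", "country": "UK", "project": "Alpha"})
container.upsert_item({"id": "2", "country": "India", "project": "Beta"})
```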
So now, because this particular container and its items get replicated globally, consistency is very important, isn't it? When an Australian user updates a record, you want to present that record to multiple people from multiple geographies, and you need to present the data consistently. So consistency is very important when it comes to global distribution and replication.

So let's go through what Cosmos DB offers in this consistency area. Azure Cosmos DB approaches data consistency as a spectrum of choices instead of two polar extremes. You have strong consistency and eventual consistency at either end, but there are many consistency choices along the spectrum: as you can see in the image, you can go for strong, bounded staleness, session, consistent prefix, or eventual. The more consistency you want, the higher the latency, because if the user should see the latest committed write, reads are going to wait until the write is committed across the board. I have provided a link in the resources section of this lecture which explains each consistency level; please go through them in detail. One thing you need to remember is that consistency levels are region-agnostic: the consistency level of your Azure Cosmos account is guaranteed for all read operations, regardless of the region from which the reads and writes are served, the number of regions associated with your Cosmos account, or whether your Cosmos account is configured with a single region or multiple write regions. So basically, Azure Cosmos DB guarantees the consistency level you choose, and Microsoft guarantees that consistency irrespective of whether you go for one region or multiple regions and all that.

And finally, I would like to take you through request units, because that's the cost of each operation you do on the Azure Cosmos database. So let's go through the cost of operations. With Azure Cosmos DB, you pay for the throughput you provision and the storage you consume, on an hourly basis, and the cost of all database operations is normalized by Azure Cosmos DB and expressed in terms of Request Units (RUs). You can think of a Request Unit like a currency: the cost of reading a 1 KB item is basically one Request Unit, and all other database operations are similarly assigned a cost in RUs. So if you do a read of a 1 KB item, you have one request unit consumed; if you do a write, it may consume, say, two request units; and if you run a complex read query on the items in that container, it may consume, say, four request units. So the cost varies based on the type of operation, the item size, the consistency level you have selected, and also the query patterns; it depends on a number of parameters. You need to estimate how many request units are required based on the size of the items, the kinds of operations you do on them, and the consistency you choose, and it is extremely important for you to choose the right throughput.

What I would suggest from my experience is: create a Cosmos DB container and load, let's say, 1,000 items, then do some operations on it and see what the request unit consumption is. The type of operations you do during this testing should be very similar to the operations users do in a real-world scenario. So if you test with one user for three or four hours and see how many request units are consumed, then based on that you identify the number of concurrent users that generally use your application and the types of operations they do, multiply them out, and estimate the throughput in request units. That's what I would suggest in order to estimate accurately.
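A hedged sketch of that measurement approach: the Python SDK surfaces the request charge of the last operation through the response headers (via the client_connection helper used in the SDK samples), so you can run representative operations and read the actual RU cost rather than guessing. Names and credentials are placeholders again.

```python
# Sketch: measure actual RU consumption per operation, following the
# suggestion of load-testing a container with representative operations.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<account-key>",
)
database = client.create_database_if_not_exists(id="rudradb")
container = database.create_container_if_not_exists(
    id="ru-test",
    partition_key=PartitionKey(path="/country"),
    offer_throughput=400,
)

container.create_item({"id": "100", "country": "UK", "amount": 42})
write_charge = container.client_connection.last_response_headers[
    "x-ms-request-charge"]

container.read_item(item="100", partition_key="UK")
read_charge = container.client_connection.last_response_headers[
    "x-ms-request-charge"]

print(f"write cost: {write_charge} RU, point read cost: {read_charge} RU")
```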
Okay, so that's it for this lecture. In this lecture I have taken you through the Azure Cosmos database: its structure, how the global replication and distribution happen, and all the consistency levels that you can choose; and finally I have taken you through request units. In the next lab I'm going to show you how to create a Cosmos DB account, create a database in it, create a collection within that, add an item into it, and go through some of the settings of the Cosmos DB account as well. If you have some time, join me in the next lab.

12. Lab demo: Walkthrough of Azure COSMOS DB creation using Azure portal: Hi, welcome to this lab. In this lab I'm going to show you how to create a Cosmos DB account, create a collection within it, add an item into it, and finally go through some of the settings associated with the Cosmos DB account.

So first of all, let's create the Cosmos DB account: search for "cosmos", then click on Azure Cosmos DB, and click Create. And then here I'm going to select the resource group, databasesrg, and I'm going to provide an account name, rudracosmos. For the API, you can select from a number of APIs; as you can see, you can select MongoDB, Cassandra, Azure Table, et cetera, but I'm going to select Core (SQL), and I'm going to deploy to North Europe. And you can enable geo-redundancy; if you enable geo-redundancy, a replica gets created in the paired region. Okay? And you can allow or disallow multi-region writes; I'm going to enable it, and then click Next. Here you can specify a virtual network from which you want to accept connections, but I'm not going to do that at this moment; I'm going to just create it. Now, this is going to take some time, because we enabled geo-redundancy and also writes in all regions, so I'm going to pause this video for a few minutes until the deployment is successful.

Now our Cosmos DB deployment has completed successfully, so let's go to the resource. The first thing we're going to do is add a database. In order to do that, go to Data Explorer and let's create a database; you need to create a database before you can add a collection, okay? Here, I'm going to call this rudradatabase. And one thing you need to remember is that you can provision throughput at two levels. One is at the database level: if you provision throughput at the database level, all the collections under the database share the resources. I'm not going to provision throughput here; I'm going to provision the throughput at the collection level. Generally it is very quick, so let's wait for it.

So now we have created a database; let's add a collection. I'm going to use the existing database, so let me type it in; maybe because it is Safari it's not getting displayed, it really should be displayed here. Then the collection id, let's say collection1, and what will be the partition key? Let's say country. Okay? And for throughput, I'm going to select the minimum; you can define the throughput here, and also at the database level if you want, but I'm doing it at the collection level. Click OK. Now we have created a collection, so let's click on it. As you can see, you can have documents, stored procedures, user-defined functions, triggers, et cetera. But at this moment, let's just add a document: click on New. For the id, I'm going to make it 1, and
I'm going to add only one additional field, which is country. Okay, and then save. That will add a document into this collection, and you can see a lot of system-generated values. In case you want more information about these, just refer to the Microsoft documentation, where there is a detailed explanation of what each of these system properties is all about; I'm going to add a link in the resources section of this lecture, so do go and read through that.

And the next thing I want to show you is Scale & Settings. In case you want to change the throughput that is allocated to this collection at a later point in time, you can change it here. There is also conflict resolution: in case there is a conflict when a record is being updated by users, what is the mode you want to follow when they both update the same record? Okay?

And the next thing is Time to Live. This is very important in case you want items to be deleted from this collection after a certain period of time. So, for example, you have added a document and it hasn't been modified for a year, and you want an auto-deletion, basically after one year from when it was last modified; then you can provide a Time to Live here and define the timescale. So if a record is not updated within whatever period you choose and you want it to be deleted automatically, you can define that here. Okay?

And you can define stored procedures, user-defined functions, and triggers: if you go to the top, you can start creating all of that here. But it is extremely unlikely that you will create a document using the Azure portal. In a real-world scenario, you will have IoT applications, gaming applications, retail and marketing applications, and those applications will programmatically add documents into this collection; so it is extremely unlikely that you will use the Azure portal, in my view.

And if you come down here, you can see a number of things. The first is Replicate data globally: here you can add multiple other regions as well; it is still updating, but what you get here is a plus sign, and you just click on it, which means you automatically add the regions where read and write replicas will get created. And if you come back here, there is Default consistency: there are five consistency levels, as explained in the theory lecture, and you can select any one of them. Because the data is globally distributed and replicated, it is very important that you identify the right consistency suited to your needs.

And finally, you can click on Firewalls and virtual networks, where you can configure some firewall rules: basically, you can specify the IP addresses from which you want to accept connections, and also the virtual networks from which you want to accept connections. Okay? And you can do a lot of other things, which I will take you through later, but these are the fundamental things you need to know about your Cosmos DB account.

So that's it for this lab. In this lab I have shown you how to create a Cosmos DB account; using Data Explorer we created a database, under the database we created a collection with a throughput of basically 400 RUs, and under that collection we added a document. And I have gone through Scale & Settings as well. But fundamentally, you will use APIs, whether REST APIs or the .NET and other client libraries; those are what you use in order to modify or insert documents into Cosmos DB. You will not use the Azure portal; a small sketch of the programmatic route follows.
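As a follow-on to that point, here is a minimal sketch of what an IoT- or retail-style application would do through the SDK, including the Time to Live setting just discussed. The default_ttl value is in seconds, and all names are placeholders.

```python
# Sketch: an application adding documents programmatically, with a
# container-level Time to Live so stale items age out automatically.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://<account>.documents.azure.com:443/",
    credential="<account-key>",
)
database = client.create_database_if_not_exists(id="rudradb")

# default_ttl is in seconds; 31536000 is roughly one year from the
# time an item was last written.
container = database.create_container_if_not_exists(
    id="collection1",
    partition_key=PartitionKey(path="/country"),
    offer_throughput=400,
    default_ttl=31536000,
)

container.upsert_item({"id": "1", "country": "UK"})

# An individual item can override the container default with its own
# "ttl" property (also in seconds).
container.upsert_item({"id": "2", "country": "UK", "ttl": 86400})
```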
I hope you find this lab useful.

13. Introduction to Azure data factory: Hi, welcome to this lecture. In this lecture I'm going to take you through Azure Data Factory and its capabilities. Data Factory is a cloud-based data integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. So if you are looking for an ETL tool in the cloud, Data Factory is the right choice for you, because Data Factory is designed to deliver the extraction, transformation, and loading process within the cloud. Okay?

Any kind of ETL process generally involves four steps. The first one is connect and collect: with Data Factory, you can use the copy activity in a data pipeline to move data from both on-premises and cloud source data stores. In terms of connect and collect, there are literally dozens of connectors available within Data Factory, using which you can connect to the source, get the data, and put it in a data store. Once you have collected the data in a centralized data store, you can transform the collected data using compute services such as HDInsight Hadoop, Spark, Azure Data Lake Analytics, and Azure Machine Learning; all of these are available within Azure, and you can take advantage of them in order to transform the data. And once this raw data has been transformed into a business-ready, consumable form, you can load the data into a data warehouse, SQL Database, Cosmos DB, and more; again, there are a number of connectors available using which you can publish the data into them. And finally, in terms of monitoring, Data Factory has built-in support for pipeline monitoring via Azure Monitor, APIs, PowerShell, Log Analytics, and health panels within the Azure portal, which you can use to monitor all the pipeline activities. Okay?

While I was explaining this, I mentioned pipelines, the copy activity, and so on multiple times; they are all parts of Data Factory. So let me take you through some of the concepts of Data Factory, so that you can familiarize yourself with the terms it uses. Predominantly, Azure Data Factory is composed of four key components, and these components work together to provide a platform on which you can compose data-driven workflows with steps to move and transform data.

Okay, so the first component is the pipeline. A data factory can have one or more pipelines. A pipeline is nothing but a logical grouping of activities that performs a unit of work; together, the activities in a pipeline perform a task. So, for example, a pipeline can contain a group of activities that gets data from an Azure blob and then runs a Hive query on an HDInsight cluster to partition the data. That is one task: getting the data is one activity and running the Hive query is another activity, and together they perform a task that is part of a pipeline. Okay? And the next thing is activities.
Linker, Sorry says, are much like connection strings, which you define connection information that is needed for data factory to kind of external resources. So it's more like a connection. Strings basically Onda linger Surveys can be a linked away data store our computer source. Also, this is very important for you to remember. A data factory can get the data and also trigger something to use a computer power in order to transform the data on something else, not within the data factory, but it will take the data from one source. Put the daytime to some of the source on trigger a computer source to perform some computing activity, our analysis on top of the data, and finally get the data out on publishing to destination. And the next thing is to those triggers represent the unit, a process that deter mines, find a pipeline execution used to be kicked off. Basically, you can skate do these activities also to before formal at some point of time on you can use triggers in order to kick off an activity, and finally you have control flow control flow is an orchestration of pipeline activities basically includes chaining off activities in a sequence branching them and also defined parameters of the pipeline level. Passing arguments while in walking pipeline on demand are from a trigger. So you need toe do have business processes and there are transformation process on the data so you can use control flow in order to sequence it in activities and also define what parameters needs to be passed for each of the activity. Okay, so all these things to get their form data factory. So I hope you got a basic understanding of data factory Now. Next, luxuries allow where I'm going to show you how to create a data factor. Instance how to create a pipeline and do some of the activities. Okay, so if you have some time, join me the next lab 14. Lab demo: Creation of data factory using Azure portal: Hi. Welcome to this lab in this lab. I'm going to show you how to create audio data Factory on also take you through some of the activities that you can do within our job data factory. So first of all, let me create audio Data Factory and I'm going to name this as withdraw Data Factory. And in terms of resource group, I'm going to select a database A Sergey. I'm going to leave that motion to on location. I'm going to select you. Gets out, actually can create. Generally, it is very quick, so I'm going to wait for it. There you go. The resources already created. That's go to resource on. Do you might have noticed when I'm creating the dreaded A factory? I haven't provided any pricing information. It is because the pricing will depend upon number off activities that you run rather than the instance of data factory. So you don't even occur any cost by creating a data factory itself. So go ahead and create a data factory. Inarguably explores some stuff. OK, but make sure when you are running activities keep in mind you are incurring some cost. And another thing is you might have noticed There's nothing much to sell it here because they're the factory has a different portal altogether. Okay, in out of the view that Porter, you need to click here and in this portal, there are lots of things that you can do. So let me show you some of the stuff. Click on order here. You can add pipelines. You can have data sets, you can add connections. And you can. Actually, Gus, generally, the way you should work, in my view, is first of all, you need to add collections. Go here on start adding a connection. 
14. Lab demo: Creation of data factory using Azure portal: Hi, welcome to this lab. In this lab I'm going to show you how to create an Azure Data Factory and also take you through some of the activities you can do within Azure Data Factory.

So first of all, let me create an Azure Data Factory. I'm going to name this rudradatafactory, and in terms of resource group I'm going to select databasesrg. I'm going to leave the version as V2, for the location I'm going to select UK South, and then click Create. Generally it is very quick, so I'm going to wait for it. There you go, the resource is already created, so let's go to the resource. You might have noticed that when I was creating the Data Factory, I didn't provide any pricing information. That is because the pricing depends on the number of activities that you run, rather than on the Data Factory instance itself, so you don't incur any cost just by creating a data factory. So go ahead, create a data factory, and explore some of this yourself; but keep in mind that when you are running activities, you are incurring some cost. And another thing you might have noticed is that there's not much to select here, because Data Factory has a different portal altogether. To view that portal, you need to click here, and in this portal there are lots of things you can do, so let me show you some of them.

Click on Author here. You can add pipelines, you can add datasets, and you can add connections. Generally, the way you should work, in my view, is: first of all, you need to add connections. Go here and start adding a connection. A connection is nothing but basically an instance of credentials to connect to a data store. Here you can select from a number of data stores; let's say Azure Blob Storage, and here you can provide the connection information and save it. In turn, you can use this linked service in a number of datasets. Okay? So if you go to datasets and try to create a dataset here, essentially what happens when you select Azure Blob Storage is that you provide connection information, and for the connection information you need to create a linked service. That's the reason I'm telling you: first create a number of linked services for your sources and destinations, basically for all the data stores you have, and then start creating datasets one by one. Okay?

And then go to Pipelines, add a pipeline, and start adding activities into that pipeline. You can add different types of activities. When you add a custom activity, you can provide Azure Batch related information; again, Azure Batch is a linked service, so unless you have already created that linked service, you end up creating a linked service from here. So first create linked services, then create datasets, and then start creating pipelines and adding activities. That's how you should go about it, in my view. Okay, let's close this.

There are a number of activities you can do within a pipeline. As you can see here, you can add a Batch activity; you can add Copy Data, and whenever you add Copy Data you need to specify a source and a sink, and again you need to specify datasets here: if you already have a dataset, you can select it here, otherwise you need to create one, and similarly for the sink. Similar to these activities, there are others as well: Append Variable; Azure Function, so you can trigger an Azure Function here; and you can execute another pipeline from here. When you drag and drop it here, you can provide the settings information: basically, you are specifying which pipeline you want to invoke from this pipeline, so you can have a hierarchy of pipelines, or branch pipelines, with a root pipeline that triggers different branches of pipelines; that way you can configure complex processes by having different pipelines and triggering them in a sequence. Okay? And if you come down here, you have Get Metadata, Lookup, and Stored Procedure, which you can invoke; you can wait for some time; under HDInsight you can add Hive queries and more; and under iterations and conditionals you can even add If conditions and ForEach loops. Basically, this pipeline authoring is very similar to the Logic Apps workflow: in Logic Apps you add a workflow, and within the workflow you start adding actions, and a similar principle is followed here. First you create a pipeline, and in the pipeline you add different activities to make up your data transformation process.

Okay, so that's it for this lab. In this lab I have shown you how to create a data factory and, using the Data Factory portal, what you can do in terms of creating connections, creating datasets, creating a pipeline, and adding different kinds of activities into that pipeline. OK, I hope you find this lab useful.
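If you prefer scripting over the portal, here is a hedged sketch of the same lab in code: creating the factory, then triggering and polling a pipeline run. It assumes a pipeline named CopyPipeline already exists (for example, from the outline in the previous lecture), and the subscription ID is a placeholder.

```python
# Sketch: create the factory, kick off a pipeline run, and poll its
# status; the scripted equivalent of this portal walkthrough.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
rg, factory = "databasesrg", "rudradatafactory"

# Same choices as in the portal: a V2 factory in UK South. There is no
# pricing to pick here either; you pay per activity run, not per factory.
adf.factories.create_or_update(rg, factory, Factory(location="uksouth"))

# Trigger the pipeline on demand and poll the run until it finishes.
run = adf.pipelines.create_run(rg, factory, "CopyPipeline", parameters={})
while True:
    status = adf.pipeline_runs.get(rg, factory, run.run_id)
    print("pipeline run status:", status.status)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(15)
```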
15. SQL Stretch database & SQL Data Warehouse: Hi, welcome to this lecture. In this lecture I want to provide you a very high-level walkthrough of SQL Stretch Database and SQL Data Warehouse.

In terms of SQL Stretch Database: it migrates your cold data transparently and securely to the Microsoft cloud. Basically, what Stretch Database does is divide the data into two parts: the first is hot data, which is the frequently accessed data, and the second is cold data, which is infrequently accessed. And you can define policies, or criteria, for hot data and cold data. So, for example, if you have a sales orders table, the open and in-progress sales orders can be hot data and all the closed sales orders can be cold data, and the cold data will be transparently migrated to the SQL Stretch database in Azure. However, that doesn't mean you need to change your application so that for open sales orders you go to the on-premises local database and for closed sales orders you go to the Azure SQL Stretch database. No: you don't need to change the application. You use the same queries in your application to fetch the data, and based on where the data exists, the query is automatically sent to the Stretch database if it involves cold data. That's the biggest advantage of this. Okay?

So let's go through some of the benefits of SQL Stretch Database. Firstly, it provides cost-effective availability for cold data, because you can benefit from the low cost of Azure rather than scaling up expensive on-premises storage; that's one thing. Secondly, as I said earlier, you don't need to change anything in your applications, because the location of the data is completely transparent to the application. Okay? Thirdly, because the data retained in the on-premises database gets reduced thanks to the migration of cold data to the Stretch database, when you take backups of the on-premises data they will run faster and finish within the maintenance window; and backups of the SQL Stretch database run automatically. And finally, you can take advantage of all the security features of SQL Server to make sure the data is encrypted at rest and in motion.

So these are the benefits of Stretch Database. However, one slight thing you need to keep in mind: when your query includes both hot and cold data, the query has to go to the Stretch database in Azure as well, so you might experience some latency. Keep that in mind when you are designing a SQL Stretch database solution for your customer.
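A hedged sketch of how the hot/cold policy in that sales orders example maps onto T-SQL: Stretch is enabled per table, with an inline table-valued filter predicate deciding which rows are cold. It assumes Stretch has already been enabled at the server and database level (for example through the wizard), and dbo.SalesOrders with its Status column is a hypothetical table.

```python
# Sketch: mark closed sales orders as cold so Stretch migrates them.
# dbo.SalesOrders and its Status column are hypothetical names; the
# database is assumed to be Stretch-enabled already.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=<on-prem-server>;Database=<database>;Trusted_Connection=yes;"
)
conn.autocommit = True
cursor = conn.cursor()

# The filter predicate: rows for which it returns a row are eligible
# for migration to Azure (i.e. they count as cold data).
cursor.execute("""
CREATE FUNCTION dbo.fn_stretchpredicate(@Status nvarchar(20))
RETURNS TABLE WITH SCHEMABINDING
AS RETURN SELECT 1 AS is_eligible WHERE @Status = N'Closed';
""")

# Stretch-enable the table; OUTBOUND starts migrating eligible rows.
cursor.execute("""
ALTER TABLE dbo.SalesOrders SET (REMOTE_DATA_ARCHIVE = ON (
    FILTER_PREDICATE = dbo.fn_stretchpredicate(Status),
    MIGRATION_STATE = OUTBOUND));
""")
```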
And the next thing is SQL Data Warehouse. I'm sure everyone knows about data warehouses, and Microsoft offers SQL Data Warehouse within Azure. Basically, it's a cloud-based, scale-out database capable of processing massive volumes of data, both relational and non-relational, and SQL Data Warehouse is based on a massively parallel processing (MPP) architecture. In this architecture, queries come to the control node, where they get optimized and are then passed on to the compute nodes to work in parallel. SQL Data Warehouse stores the data in premium locally redundant storage, which is linked to the compute nodes for query execution. Okay?

So let me take you through the pricing information as well, in terms of Data Warehouse Units. A Data Warehouse Unit (DWU) is a measure of the allocation of resources to your SQL Data Warehouse: it's a bundle of CPU, memory, and IOPS which are allocated to your SQL Data Warehouse. It is very similar to the Database Transaction Units (DTUs) that you purchase when you go for Azure SQL Database. So you have Data Warehouse Units here, and Data Warehouse Units provide a measure of three precise metrics that are highly correlated with data warehousing workload performance.

There are three kinds of things you generally do that take a lot of compute, memory, or network bandwidth, et cetera. The first is scan and aggregation: this takes a standard data warehousing query that scans a large number of rows and performs a complex aggregation; in this case you need more CPU and also more I/O. The second kind of metric is load: this metric measures the ability to ingest data into the service, and it is designed to stress the network and CPU aspects of the service. And finally you have CREATE TABLE AS SELECT (CTAS), which measures the ability to copy a table: this involves reading data from storage, distributing it across the nodes of the appliance, and writing it to storage again, so it is a CPU-, I/O-, and network-intensive operation. So the kinds of operations you do define how many Data Warehouse Units you need to purchase. It's not that easy, and it's not black and white, in my view. I think Microsoft might come up with a purchasing model similar to the vCore purchasing model in Azure SQL Database later on, because converting the configuration of an existing data warehouse into Data Warehouse Units in Azure is, I think, not that straightforward. Okay, so keep that in mind when you're going for SQL Data Warehouse in Azure.

So that's it for this lecture. In this lecture I have taken you through SQL Stretch Database and SQL Data Warehouse at a very high level. I hope you find this lecture useful. A final small sketch of the CTAS pattern mentioned above follows.
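To close, a small illustration of that third metric: in SQL Data Warehouse, CREATE TABLE AS SELECT both copies the data and decides how it is distributed across the compute nodes, which is exactly the storage-read, distribute, and write work the CTAS metric measures. Table and column names are hypothetical.

```python
# Sketch: a CREATE TABLE AS SELECT (CTAS) in SQL Data Warehouse.
# Hash-distributing on a join key spreads the rows across the compute
# nodes. dbo.FactSales and CustomerKey are hypothetical names.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=tcp:<server>.database.windows.net,1433;"
    "Database=<datawarehouse>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)
conn.autocommit = True  # CTAS cannot run inside a user transaction
cursor = conn.cursor()
cursor.execute("""
CREATE TABLE dbo.FactSales_v2
WITH (DISTRIBUTION = HASH(CustomerKey), CLUSTERED COLUMNSTORE INDEX)
AS
SELECT * FROM dbo.FactSales;
""")
```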