Deliver big data solution on Azure using Azure analytics services

V S Varma Rudra Raju, TOGAF Certified Enterprise Architect

16 Lessons (2h 7m)
    • 1. Introduction to Azure big data solution building blocks (6:51)
    • 2. Introduction to Data Lake Store (6:52)
    • 3. Lab demo - Create a data lake store 2 and access the same with Storage explorer (7:08)
    • 4. Lab demo - Ingest data into Data Lake store using Data factory (7:09)
    • 5. Introduction to Azure HDInsight (4:59)
    • 6. Lab demo - Process the data in blob storage using Azure HDInsight hive query (18:19)
    • 7. Introduction to Azure Data Bricks (5:57)
    • 8. Lab demo - Create Data bricks workspace and Spark cluster. Prep for extraction. (9:34)
    • 9. Lab demo - Extract and transform the data using Spark on Azure Data bricks (6:43)
    • 10. Lab demo - Load the data into SQL Data Warehouse (17:36)
    • 11. Introduction to Power BI (7:07)
    • 12. Lab demo - Visualize the data using Power BI (5:53)
    • 13. Introduction to Azure data explorer (6:30)
    • 14. Lab demo - Create data explorer and ingest & query data (6:22)
    • 15. Introduction to Azure data catalog (3:19)
    • 16. Lab demo - Create data catalog and register data assets (6:13)

54 Students • -- Projects

About This Class

The objective of this class is to introduce you to the different Big Data Analytics services available in Azure so that you can deliver an end-to-end Big Data Analytics solution on Azure. This class includes the following lectures and lab demonstrations.

  • Introduction to Azure big data solution building blocks
  • Introduction to Data Lake Store
  • Lab demo: Create a data lake store 2 and access the same with Storage explorer
  • Lab demo: Ingest data into Data Lake store using Data factory
  • Introduction to Azure HDInsight
  • Lab demo: Process the data in blob storage using Azure HDInsight hive query
  • Introduction to Azure Data Bricks
  • Lab demo: Create Data bricks workspace and Spark cluster. Prep for extraction.
  • Lab demo: Extract and transform the data using Spark on Azure Data bricks
  • Lab demo: Load the data into SQL Data Warehouse
  • Introduction to Power BI
  • Lab demo: Visualize the data using Power BI
  • Introduction to Azure data explorer
  • Lab demo: Create data explorer and ingest & query data
  • Introduction to Azure data catalog
  • Lab demo: Create data catalog and register data assets

By the end of this class, you should be able to deliver a simple Big Data Analytics solution on the Azure platform and start exploring each area in more depth.

Meet Your Teacher

V S Varma Rudra Raju

TOGAF Certified Enterprise Architect

Teacher



Transcripts

1. Introduction to Azure big data solution building blocks: Hi, welcome to this lecture. In this lecture I'm going to take you through the different building blocks that are available in Azure in order to deliver big data solutions on Azure. Any kind of big data solution generally involves five steps. The first step is ingesting the data. The sources of the data are varied: it can be SQL databases, it can be your mobile apps, it can be an IoT solution, it can be your web apps. There is a host of sources from which the data can come, and you need to ingest that data. Once you ingest the data, you need to store it somewhere, and you need to design the storage in such a way that it supports big data analytics. Once you store the data, you can give your business users an option to explore it. Of course, this is raw data that we are storing, so if you want to do ad hoc analysis the raw data is very useful, and you can use the explore step to analyze the data while it is still in raw form. The next step of the process is prep and train; basically, I would say it is all about converting this raw data into meaningful information, and that is what you do in prep and train most of the time. Once you convert the data into useful information, you put that data into a data warehouse, to model and serve it. And finally, you can develop different apps and insights on top of this converted data, the useful information, in order to present it to business users, and business users can use these apps and insights to make informed business decisions.

So let me take you through some of the Azure components that you can use in each of these steps. Please note that these are not the only components you can use in Azure; there are other components, but I picked the important and most widely used ones to present to you. In terms of ingestion, the first tool you can use in Azure is Data Factory. Data Factory is more like an ETL offering in the cloud: using Data Factory you can fetch the data from SQL databases or blob storage, so you can fetch the data from a variety of sources with Data Factory. Similarly, if you want to use an Apache offering on Azure, you can use Apache Hadoop on HDInsight in order to fetch the data from a source. In terms of IoT, there is IoT Hub, with which you can ingest the data from devices, and you can also use Event Hubs to ingest the data. Once you have ingested the data, you need to store it somewhere. In terms of Azure, there are two important offerings available. The first one is Data Lake Store, in which you can store large amounts of data and do some transformation on top of it using various other Azure components. The second important option for storing the data is Azure Storage. Recently, Microsoft released Data Lake Storage Generation 2: Generation 1 used to be a standalone storage service in its own right, but Data Lake Storage Generation 2 is built on top of Blob Storage. I will explain this in a bit more detail in the appropriate lectures. Once you have stored the data, you can explore it using Data Catalog and Data Explorer; both of these tools you can use to explore the raw data and gain some insights. For example, in this step you can run time series analysis in order to predict trends, including seasonal trends.
For example, in winter, if the weather is too bad, that creates a lot of demand for gas utilities, so gas utilities generally do this kind of prediction based on the weather and make sure enough pressure is maintained in the pipes in order to deliver the gas at the right pressure. While you are exploring the data, at the same time you can prep and train the data using different Azure components. The first one is Azure Databricks; it is an Apache Spark based analytics platform that is built on Azure, and you can use that. Spark is also offered under HDInsight; basically, HDInsight consists of a number of components related to Apache, so Hadoop, Storm, Spark and so on are all supported as cluster types within Azure HDInsight. Anyway, I'm going to show it to you in one of the lectures and labs. The next thing you can use is Data Lake Analytics. In terms of processing a stream of data, particularly when you have IoT devices, you will have a stream of data coming from the devices, and in order to process it you can use Apache Storm on Azure HDInsight if you want to go with an Apache component; but if you want to go with a native Azure component, then you can use Stream Analytics. One more thing you can use is Azure Analysis Services, also in order to prep and train the data. Once you have done the transformation, the preparation, the conversion of the raw data into useful information, you can push the data into SQL Data Warehouse, which is widely used for modelling and serving. There are other components as well, but most of the time, in my experience, we have used SQL Data Warehouse. Once you have migrated the useful information into the model-and-serve layer, you can present the data. One of the tools that is most widely used and very popular these days is Power BI: using Power BI you can connect to SQL Data Warehouse, fetch the data, and present it in very user-friendly reports. You can also use Log Analytics. Log Analytics by default doesn't fetch the data from SQL Data Warehouse, but if there is very useful information there, you can ingest that data into an OMS workspace and access and analyze it using Log Analytics. You can also install different management solutions on top of Log Analytics in order to analyze your logs and gain insights into what is happening within your IT estate. Once again, I want to stress that these are not the only tools; there are other tools available within Azure, but these are the most widely used tools for big data solutions. That's it for this lecture. In this lecture I have explained the key steps involved in delivering a big data solution and the Azure services that you can use in the corresponding steps. In the upcoming lectures and labs I'm going to take you through each of these steps in detail, and step by step we're going to create Azure services in order to deliver a big data solution within Azure. It's going to be very interesting, so if you have some time, join me in the next lecture.

2. Introduction to Data Lake Store: Hi, welcome to this lecture. In this lecture I'm going to take you through Azure Data Lake Store. Azure Data Lake Storage Generation 2 (preview) is a set of capabilities dedicated to big data analytics, and it is built on Azure Blob Storage.
Earlier, in Microsoft Azure, there were two storage solutions: one is Data Lake Storage Generation 1 and the second one is Blob Storage. Data Lake Storage Generation 1 is designed for big data analytics, and it supports a hierarchical namespace for storing data; but the problem with Data Lake Storage Gen1 is that it is costlier, and it is also only locally redundant, so it provides only local redundancy, as far as I can remember. When it comes to Blob Storage, it doesn't support a hierarchical namespace; however, it is much cheaper when compared to Data Lake Storage Generation 1. So what Microsoft then did was come up with Data Lake Storage Generation 2, which is basically the result of converging the capabilities of the two existing storage services: one is Azure Blob Storage and the second one is Data Lake Storage Generation 1.

So let's go through some of the key features of Data Lake Storage Generation 2. Data Lake Storage Gen2 allows you to manage and access data within Azure Blob Storage just as you would with the Hadoop Distributed File System, because the Hadoop Distributed File System is hierarchical in nature. Earlier, there used to be trouble when you were using Azure Blob Storage with Hadoop; now, with Data Lake Storage Generation 2, that problem is gone. The second thing is that the security model for Data Lake Storage Generation 2 supports access control lists and POSIX permissions, along with some extra granularity specific to Data Lake Storage Gen2. The third thing, which is more important in my view, is that Data Lake Storage Gen2 offers low-cost storage capacity and transactions, and in addition to low-cost storage it also supports geo-redundancy: because it is built on top of Azure Blob Storage, by default it supports geo-redundancy. And finally, the ABFS driver, which is a new driver, is optimized specifically for big data analytics. Earlier, when Hadoop needed to work with Azure Blob Storage, there was an old driver; now it is replaced by ABFS. Let me take you through this driver in a bit more detail.

Data Lake Storage Generation 2 gives users of Azure Blob Storage access through a new driver, the Azure Blob File System driver, or simply ABFS. ABFS is part of Apache Hadoop and is included in many of the commercial distributions of Hadoop, so with the new ABFS driver you are able to use Azure Blob Storage in solutions that are based on Hadoop. There used to be an old driver, the Windows Azure Storage Blob (WASB) driver, which provided the original support for Azure Blob Storage when working from Hadoop; however, because Blob Storage is a flat structure, this driver had to perform the complex task of mapping file system semantics onto it. Because Data Lake Storage Generation 2 comes with the new driver and introduces a hierarchical namespace to Azure Blob Storage, you no longer need the old driver, and the time spent on the complex task of mapping file systems is gone. In terms of referring to data within Data Lake Storage Generation 2, you can use a URI of the form abfs://filesystem@accountname.dfs.core.windows.net/folder/filename, where the file system is basically the container, the account name is the storage account name, the path is the folders, and the file name is the name of the file.
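The lecture spells out that URI shape in words; as a concrete illustration (not taken from the course), here is a minimal Spark sketch in Scala that reads a file through an abfs:// path. The container, account, folder and file names are hypothetical, and it assumes that authentication to the storage account (shared key or Azure AD OAuth) is already configured on the cluster.

```scala
import org.apache.spark.sql.SparkSession

object AbfsPathExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("abfs-uri-example")
      .getOrCreate()

    // abfs://<filesystem>@<accountname>.dfs.core.windows.net/<folder>/<file>
    // <filesystem> is the container, <accountname> is the storage account.
    val path = "abfs://images@rudradatalakestore.dfs.core.windows.net/raw/customeraddresses.txt"

    // Read the file as CSV with a header row; the driver resolves the
    // hierarchical path against the Gen2 (dfs) endpoint of the account.
    val df = spark.read
      .option("header", "true")
      .csv(path)

    df.show(10)
  }
}
```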
Finally, the ABFS driver supports two types of authentication: one is Shared Key and the second one is Azure Active Directory OAuth bearer tokens. And lastly, where does Data Lake Storage Generation 2 fit in delivering a big data solution? Let me take you through that. Data Lake Storage is a storage solution that is designed for big data analytics, so you can use Data Lake Storage to hold the data while you are processing it, visualizing it, and so on. In terms of putting the data into Data Lake Storage, you can use Azure Data Factory, you can use Storage Explorer, you can use the AzCopy tool, et cetera. I'm pretty sure these tools will keep on increasing, because Microsoft has only just released Data Lake Storage Generation 2; very soon Microsoft will start adding other tools that can support ingesting data into Data Lake Storage. Azure Data Factory, Storage Explorer and AzCopy are for bulk data; when it comes to event data ingestion you can use Apache Storm and Event Hubs, although currently Event Hubs is not supported for Data Lake Storage Generation 2 (it is supported for Generation 1), and I'm pretty sure that by the time you take this lecture Microsoft might have added that feature as well. In terms of processing the data within Data Lake Storage, you can use Azure Databricks and Azure HDInsight (basically Hadoop, HBase, Spark, Storm), and you can use Azure Data Lake Analytics, and so on. Again, some of these are not yet supported on Data Lake Storage Generation 2 and some of them are supported only for Generation 1, so keep that in mind when you are delivering any solution; but Microsoft strongly recommends you go with Data Lake Storage Generation 2 from now on. In terms of visualizing the data, you can use Azure SQL Data Warehouse and you can use Power BI as well. So basically, Data Lake Storage Gen2 is designed for storing petabytes of data for big data analytics. That's it for this lecture. In this lecture I have taken you through, at a very high level, what Data Lake Storage Generation 2 is all about, the new driver, and where Data Lake Storage fits in a big data solution. The next two lectures are labs: in the first lab I'm going to show you how to create a Data Lake Storage Gen2 account using the Azure portal and upload some data to it, and in the second lab I'm going to show you how to use Data Factory in order to ingest data into Data Lake Storage from a SQL database. So if you have some time, join me in the next lab.

3. Lab demo - Create a data lake store 2 and access the same with Storage explorer: Hi, welcome to this lab. In this lab we are going to carry out step one of creating a big data solution on Azure: basically, I'm going to create a Data Lake Store and upload some data into it, and I'm also going to create two more resources, one a Data Factory and the other a SQL database, in preparation for the next lab. First of all, let's create a Data Lake Store using the Azure portal. First I'm going to create a resource group, and I'm going to create it in North Europe; review and create, and click on Create. Our resource group got created, and within it I'm going to create the Data Lake Store. When I type Data Lake Store here, you might notice something odd: you can see only Data Lake Storage Generation 1. So where is Data Lake Storage Generation 2?
Because Data Lake Storage Generation 2 is built on top of Azure Blob Storage, you will not be able to see it here. Basically, we need to create a storage account, and while we are creating the storage account we can select the hierarchical namespace of Data Lake Storage Gen2. So let's click on Storage account and click on Create, then provide the name, rudradatalakestore, and come down here: I'm going to leave the location as North Europe, performance as Standard, the account kind as it is, and all the other settings as they are, and click on Advanced. Here you can specify whether you want to go for Data Lake Storage Generation 2 or not. I want to enable this, so let's click on Enabled; basically, you are enabling the hierarchical namespace on top of Blob Storage. Then click on Review and create, and click Create. This is going to take some time, so I'm going to pause this video for a few minutes and come back once it has been created successfully.

Now our resource has been created successfully, so let's go to the resource, and if you come down here you can see the Data Lake Gen2 file systems; click on it. Now, you can't do anything here: the reason is that uploading data and so on is not yet implemented within the Azure portal, so for uploading data into Data Lake Storage Generation 2 you need to use Storage Explorer. Let's go to Storage Explorer, sign in with my account, and select the subscription whose resources we want to view; I'm going to tick it and apply. Here you can see the Data Lake Storage account that I created and its blob containers; there will be nothing in them. So let's create a container, which I'm going to call images, say, and click on it. You can create a folder here, and within that you can upload a file. I don't have any images, so I'm going to upload a PowerPoint file and click OK. Now you can see the status of it: the transfer is in progress, and once it is completed you should be able to view the file here, so just refresh it and you can view the file. This is how you can use Storage Explorer to upload data into Data Lake Storage. Unless you are just exploring, it is very unlikely that you would use Storage Explorer; most of the time you would use Data Factory, the AzCopy tool, or some other tool to upload the data programmatically. In terms of properties and actions it is very similar to Blob Storage; the only thing that differs is the endpoint: you can see here that instead of blob you are getting dfs; notice the dfs in the endpoint.

So this is how you can create Data Lake Storage and upload data into it. The next thing I'm going to do is create two resources, a SQL database and a Data Factory, because we need both for the next lab. Click on Create; I'm going to name the Data Factory rudradatafactory, select the resource group, select version V2 and location North Europe, and click Create. I'm also going to create the SQL database. In the next lab I'm going to show you how to transfer data from the SQL database into Data Lake Storage using Data Factory, and I'm going to use the Sample option because we're going to use an existing table to transfer the data.
That's the reason I'm using the sample database. Click on Select, come down here, select the Basic pricing tier, apply, and finally click on Create. This is going to take some time for the SQL database to be created. So that's it for this lab: in this lab I have shown you how to create a Data Lake Store and upload data into Data Lake Storage using Storage Explorer, and we also created two resources for the next lab. The next lab is going to be interesting, because I'm going to show you how to transfer data from SQL Server into Data Lake Store using Data Factory. So if you have some time, join me in the next lab.

4. Lab demo - Ingest data into Data Lake store using Data factory: Hi, welcome to this lab. In this lab we're going to carry out step two of delivering a big data solution on Azure, which is ingesting the data into the Data Lake from SQL databases using Data Factory. Let me go into the Azure portal to carry out this step. Here you can see the Data Factory; click on it, then click on Author & Monitor, and let me zoom in a bit just to make sure it is more visible to you, then click on Author. The first thing you need to do is establish connections to the source and destination, so I'm going to click on Connections and create a linked service for the Azure SQL database first, because that is our source. Select the subscription, Azure trainings; the server name is rudrasqlserver and the database is rudradatabase; for authentication I'm going to use SQL authentication, so I type in the user name and password, test the connection (successful) and click on Finish. Now that is done, and the next connection I'm going to create is to the Data Lake Store. You should see Azure Data Lake Storage Gen2 (Preview) here; click on it and select the subscription and the storage account. Because we are using the account key and the Azure integration runtime, it will connect automatically, since we allowed access from Azure services anyway. Test the connection (connected successfully) and click on Finish. I'm not changing any names, because from the description itself you know that you need to provide a name, a description and so on. Let's dismiss this.

Now that we have created the connections, we need to create datasets for each of them. First, the Azure SQL database one: click Finish; I'm not changing the name; for the connection we need to use the linked service we created earlier, and the table we are going to use is the CustomerAddress table, because that is what we're going to fetch. You can import the schema if you want; let me import the schema to show you, and you can see what column names are there within that table. I'm going to leave this as it is and create another dataset, which is for the Data Lake Store. The connection I'm going to use is this one, and if I remember correctly, in the earlier lab we created a container called images, I think. Let me check; I don't think you can view it using the Azure portal, sorry about that, so let me launch Storage Explorer: yes, images. So let's go back to the Azure portal and specify images here as the directory. You can specify the schema as well; otherwise the column names will be displayed as Prop_0, Prop_1 and so on once the data has been migrated.
So here I'm going to add the columns, just as we have them in the source: CustomerID as Int32, AddressID as Int32, AddressType as a String, and finally ModifiedDate as DateTime. So we created two datasets: one for the Azure SQL table and one for the Data Lake Store into which the data will be copied. Now I'm going to publish these two datasets to the Data Factory; let's watch the progress: it is still publishing, and now publishing is complete. Next I'm going to add an activity to a pipeline. Let's create a pipeline and add a Copy Data activity. Here we can specify the source from which the data needs to be fetched, and we can specify the sink, that is, where the data should be copied; under Mapping you can configure the mapping between source and destination. If you want to change it you can, but I'm going to leave it as it is, leave all these settings as they are, and validate: everything is fine, so let's publish. Now I'm going to trigger this pipeline, and hopefully it will be successful. If you want to see its progress, you can click on it and see that it is currently in progress; generally it is very quick, so I'm going to wait for it. The copy activity succeeded, so let's click on it, and if you want to see in detail what the activity has done, you can see here the data copied to Data Lake Storage; it is a very nice representation of what is happening within the activity. Now, if you want to view the data, go to Storage Explorer, and here you can see the text file. So that's it for this lab. In this lab I have shown you how to transfer data from an Azure SQL database into Data Lake Store using Data Factory. I hope you find this lab useful.

5. Introduction to Azure HDInsight: Hi, welcome to this lecture. In this lecture I'm going to take you through Azure HDInsight and its capabilities. Before you try to learn about Azure HDInsight, you first need to know Apache Hadoop, because Azure HDInsight is basically an implementation of Apache Hadoop on Azure. So what is Apache Hadoop? It is an open-source framework for distributed processing and analysis of big data sets on clusters. Apache Hadoop is very popular for delivering big data solutions and it has lots of utilities and software included in it: it includes Apache Hive, Apache HBase, Spark, Kafka and many other tools. I'll pick some of the tools from that set and explain them in a bit more detail in the next slide. Azure HDInsight is a cloud distribution of Hadoop components from the Hortonworks Data Platform. With these frameworks you can enable a broad range of scenarios: for example, using Apache Hadoop you can do extraction, transformation and load (ETL); you can deliver a data warehousing solution using Apache Hadoop, Apache Hive queries and Spark; you can deliver machine learning; and you can deliver IoT using Apache Storm. So you can deliver a range of scenarios using the different components that are part of Apache Hadoop on Azure HDInsight. I have provided a link to the different big data processing scenarios in which you can use HDInsight in the resource section of this lecture; please click on it and go through some scenarios.
The link will redirect to a page where you can see, scenario by scenario, what type of components you can use within Azure HDInsight: for data warehousing, as I said, you can use Apache Hadoop, Apache Spark and interactive queries like Hive; for Internet of Things you can use Storm, Spark and Kafka; and for data science you can use Spark and others. So you can use different components from Apache Hadoop on Azure HDInsight to deliver different big data scenarios.

Now let's go through some of the components in a bit more detail. HDInsight in Azure includes specific cluster types and cluster customization capabilities, such as the capability to add components, utilities and languages, so you can select different cluster types based on the needs of the big data solution you are delivering, that is, based on the requirements. The first one is Apache Hadoop, a framework that uses HDFS, YARN resource management and the simple MapReduce programming model to process and analyze batch data in parallel. That is one of the components. Then there is Apache Spark, an open-source parallel processing framework that supports in-memory processing to boost the performance of big data analysis applications. The difference between Spark and Hadoop is that Hadoop uses disk-based data storage, and the I/O against that storage has a performance impact, whereas in Spark the data, or a partition of it, is put into memory, as a cache, and the parallel processing happens on the cache, which boosts the performance of big data analysis applications. The third one is Apache Storm, which is basically for event analysis. If you have a stream of events coming in, you can use Apache Storm, a distributed real-time computation system for processing large streams of data fast. Storm is offered as a managed cluster in HDInsight, so if you are delivering an IoT solution, or big data solutions related to IoT, Apache Storm is the right choice. All the other tools, such as HBase, Machine Learning Services and Kafka, are also supported by HDInsight. Again, I have provided a link to the different cluster types supported by Azure HDInsight in the resource section of this lecture; click on it and go through them.

So that's it for this lecture. In this lecture I have given you a brief overview of Apache Hadoop and Azure HDInsight, the different scenarios in which you can use them, and the different cluster types in HDInsight. The next lecture is a lab, which is going to be very interesting, because in that lab I'm going to show you how to create an HDInsight cluster on demand and also process a table. We will use a Hive query to query the data from a table and put it into a data store: the table will contain, say, ten columns, and we will fetch only three columns out of it using the Hive query and put them into the data store. So if you have some time, join me in the next lab.

6. Lab demo - Process the data in blob storage using Azure HDInsight hive query: Hi, welcome to this lab. In this lab I'm going to show you how to process data in blob storage using Data Factory, HDInsight and a Hive query. Basically, what I'm going to do is create a blob storage account and upload a Hive script into it; secondly, I'm going to create a Data Factory.
Thirdly, I'm going to create a compute linked service within Data Factory to create an on-demand HDInsight cluster; and finally, I'm going to create a Data Factory pipeline with a Hive activity. I know it might be confusing, so please follow closely what I'm doing in order to understand how you can use Azure HDInsight to process the data.

First of all, let me create the Azure blob storage. I'm going to create a temporary resource group, because I'm going to delete it after this lab; of course, you need to do the same thing. Then go into that resource group and create a blob storage account: click on Storage account, click on Create, select analyticstemprg, and for the storage account name I'm going to use rudrablobstorage. I'm going to leave everything else as it is (you might change the account kind to blob storage if you want, but I'm going to keep it as it is), then click on Review and create, and click Create. While this storage account gets created, I'm going to create a service principal in Azure Active Directory and give that service principal the Contributor role on the storage account we created; or maybe I'll do it at the resource group level itself. So, under App registrations, I'm going to call this one hdinsight, give it a sign-on URL, and create it. Then I'm going to create a key, also called hdinsight, with a duration of one year, and save it to reveal the value; let's copy this value, close this, and also copy the application ID. So we have created a service principal; now we need to give it the Contributor role on the resource group we created. Go to the resource group, Access control (IAM), Add role assignment, select the Contributor role, search for hdinsight, select it, and say OK; that's it.

Now we need to upload the Hive script into the storage account we created. Let's create a container, which I'm going to call hdinsight, click on it, click on Upload, and upload the file. So we have uploaded the Hive script. What this Hive script contains is a set of statements: it fetches the data from a sample table, takes only a few columns of the table, and inserts them into the output table. Both tables are backed by text files: the sample text file contains rows with a number of columns, but we are interested in only a few of them, so the script reads the values of those few columns and stores them in the output table. hivesampleout is the output table and the input table is hivesampletable. For the sample table we're not providing anything, because when an HDInsight cluster is created, a sample table automatically gets created in the storage account, and that is what will be read.

Now let me go back to the Azure portal: we need to create the Data Factory. Let's create one, give it a name, use the existing analyticstemprg resource group and location North Europe, and click Create. This generally takes a little while, so I'm going to wait for it. Then let's go to the resource, click on Author & Monitor, click on Author and click on Connections. The first linked service I'm going to create is to the blob storage that I created.
Select the subscription, Azure trainings, and the storage account we created, rudrablobstorage. Test the connection; I'm not changing anything else (you can change it if required), and click Finish. Now it has been created. The next linked service I'm going to create is for HDInsight. Here you need to go to the Compute tab, because it's a compute service, and click on it. I don't have any existing HDInsight cluster, so I'm going to create it on demand, and the storage linked service I'm going to use is the blob storage: basically, the blob storage we created is going to be used by HDInsight as its data store. The cluster type is Hadoop, the cluster size I'm going to set to two, and the time-to-live I'm going to keep at 30 minutes. Then come the service principal ID and service principal key; these are the credentials that will be used by the HDInsight cluster to put data into blob storage and to access that resource group, so let's copy them: this is the application ID and this is the service principal key. The tenant I'm going to leave as it is populated; for the subscription I'm going to select Azure trainings; for the resource group, use the existing one, analyticstemprg; select the region, North Europe; for additional storage linked services, just select the one we created; and for the OS profile, provide an SSH user name and password. That's it, we have provided everything; click on Finish and the linked service is saved.

Now we need to create the pipeline: click on Pipeline, and in this case we're going to use a Hive activity. For that we need to provide the HDInsight cluster (the rest of the settings you can provide if you want, but I'm not taking you through them because they are very straightforward). For the HDInsight cluster, select the linked service we created; for the script, select the Azure Blob Storage linked service as the script linked service, browse to the hdinsight container and select the script. Unfortunately something went wrong with the script file here, so let me quickly sort that out: delete the file from the blob container, upload the Hive script into the hdinsight container again, then go back to Data Factory and select the script again; now I have changed it. Then go to Advanced, and if you press Auto-fill script, you need to provide a value for one of the parameters in the script. If you look at the script (let me show you here as well, one minute), you can see there is an Output parameter for which we need to pass a value, which is where the output needs to be stored in the storage account. So let me provide that: wasb:// is just the driver name, then you provide the container name, hdinsight, then @rudrablobstorage.blob.core.windows.net, and then the folder in which you want to store this output table, outputfolder, followed by a slash. So all I'm providing is the Output parameter value, specifying a location where the output needs to be stored. Then publish this change. I hope you are following this correctly.
So basically, what we are doing is reading a sample table that will be created by the cluster itself, and we are reading only a few columns from that table into the destination table. Let's close this; hopefully it should work. Next, validate it: no errors. Now let's trigger it. This is going to take a lot of time, because it needs to create a cluster on demand and run the Hive query on it, so I'm going to pause this video for a few minutes and come back once the pipeline has either succeeded or failed.

Now, I have waited for over 30 minutes and our pipeline has failed, so let's go into the actions and view the details. It tells us that the job failed with exit code 1, a user error, and it points to a path for viewing the details. So let's go to the blob storage and, under the Hive query job's folders, look at the status and the other detail files. Here it says 'table hivesampleout not found', and it also says that the account being accessed does not support HTTP; not sure why that came up. Here is what I would do: I think the problem might be the output location. Let's look at my Hive script; it talks about the output table, hivesampleout, and the script itself looks fine to me. Let's close this and go down to the hdinsight container: only the Hive script is there, so maybe we need to create the output folder. Let me use Storage Explorer, go to hdinsight and create a new folder called outputfolder; maybe that is the reason it failed. Validate again (we haven't changed anything in the pipeline, have we?) and trigger it again; let's see what happens. It might take some time, so I'm going to pause this video and come back once this is done.

Now the second run of the pipeline has completed and, as you can see, it failed again. But this time I think I know the reason, because I've done some research. Go to the script and go to Advanced: if you are using HTTPS, that is, if 'Secure transfer required' is enabled on the storage account, then you need to use wasbs:// here instead, like this, and it should work in my view. But again, let's see what happens. The reason I'm not editing these videos is so that you can see what kinds of errors come up when you're doing this; that's how you learn, isn't it? That's why I'm keeping all of this as it is. Let's trigger it; I really hope it works now. And finally, our pipeline run has succeeded; you can't imagine how happy I am that it finally succeeded. You can click on it to view the details, and if you want to view the output, close this, go to the hdinsight container and you should see the output folder; click on it, download the file and open it, and you can see some information here: this is our output table. Let me show you the source table as well. You can see fewer columns in the output table, only three or four, but in the source you can see lots of columns. What the Hive query has done is strip out all the uninteresting columns and store only the selected columns in the output table. So that's it for this lab. In this lab I have shown you how to process data using a Hive query and Data Factory, and also how to create on-demand HDInsight clusters from Data Factory. I hope you find this lab useful.
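The HiveQL script itself is not reproduced in the transcript. As a rough illustration of the same idea (project a handful of columns from the built-in hivesampletable into an output location of the wasbs:// form used for the lab's Output parameter), here is a hedged Spark SQL sketch in Scala rather than the course's actual Hive script; the chosen columns and the storage account name are assumptions.

```scala
import org.apache.spark.sql.SparkSession

object HiveProjectionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-projection-sketch")
      .enableHiveSupport()   // use the cluster's Hive metastore, where hivesampletable lives
      .getOrCreate()

    // Keep only a few of the many columns in the sample table.
    val projected = spark.sql(
      "SELECT devicemake, devicemodel, market FROM hivesampletable")

    // Output location of the form passed as the lab's Output parameter;
    // wasbs:// (not wasb://) is required when "Secure transfer required"
    // is enabled on the storage account.
    val output = "wasbs://hdinsight@rudrablobstorage.blob.core.windows.net/outputfolder/"

    projected.write.mode("overwrite").csv(output)
  }
}
```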
7. Introduction to Azure Data Bricks: Hi, welcome to this lecture. In this lecture I'm going to take you through Azure Databricks. Azure Databricks is an Apache Spark based analytics platform optimized for the Microsoft Azure cloud services platform. Basically, Azure Databricks is an implementation of Apache Spark on Azure, and Databricks comes with several components. It contains a workspace, a collaborative workspace where people from different backgrounds, such as data science, data engineering and business analysis, can come together and work; within the workspace you can create notebooks, dashboards and so on. Also within Databricks you can create workflows, and Databricks has its own runtime, which is a set of core components that includes Spark. With the serverless option within Databricks, all the infrastructure complexity is taken away, so you don't need to worry about the underlying infrastructure; Azure will take care of that for you. One more important thing is that Databricks is completely integrated with Azure Active Directory, so you can take advantage of the role-based access control available in Azure Active Directory to design your roles and implement proper security on Databricks. In terms of integration, Databricks is integrated with Azure Storage, so you can fetch data from Azure Storage, and it is also integrated with Azure Data Lake, SQL Data Warehouse and many other tools. Having Apache Spark based Databricks in Azure, you can fetch data from a number of Azure services, process it, and publish the results into a number of services: for example, you can publish the data to Machine Learning, Stream Analytics and Power BI.

Let me take you through some of the key concepts of Databricks in a bit more detail. The first fundamental thing is the workspace: a workspace is the root folder for Azure Databricks, and it stores notebooks, libraries and dashboards. Within the workspace you can create notebooks, and a notebook is a web-based interface to documents that contain runnable commands, visualizations and narrative text. In the next lab I'm going to show you how to create a notebook and put some Scala commands into it. The third thing is the Databricks File System: a file system abstraction layer over blob storage. It contains directories, which can contain files (data files, libraries, images) and other directories. Databricks internally uses blob storage for its storage purposes, but it implements a file system abstraction layer because blob storage is a flat storage scheme. The next thing is the Databricks Runtime: a set of core components that runs on clusters managed by Azure Databricks. The Databricks Runtime includes Apache Spark, but it also includes a number of other components that substantially improve the usability, performance and security of big data analytics. In terms of data and computation management, you can create databases within Azure Databricks (a database is a collection of information organized so that it can be easily accessed, managed and updated), and you can also create tables: a table is a representation of structured data that you can query using Spark SQL or the DataFrame API.
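As a tiny illustration of these data and computation concepts, here is a hedged Scala sketch (not from the course) of the kind of commands a notebook cell might contain: it builds a small DataFrame, registers it as a table, and queries it with both Spark SQL and the DataFrame API. The table name and sample values are made up; in a Databricks notebook the spark session is predefined.

```scala
import spark.implicits._

// Build a small DataFrame and save it as a managed table backed by DBFS.
val sales = Seq(("bike", 250.0), ("helmet", 40.0), ("lock", 15.0))
  .toDF("product", "price")
sales.write.mode("overwrite").saveAsTable("demo_sales")

// Query the same data two ways: Spark SQL and the DataFrame API.
spark.sql("SELECT product, price FROM demo_sales WHERE price > 20").show()
spark.table("demo_sales").filter($"price" > 20).show()
```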
A table generally consists of multiple partitions. The next thing is the cluster: a cluster is a set of computation resources and configurations on which you run notebooks and jobs. What we're going to do in the next lab is create Databricks, create a Spark cluster within Databricks, and create a notebook; within the notebook we're going to use the Scala language, put in some commands, and run them. Once you complete that lab, you will have a full understanding of all these concepts. The next thing is the job: a way of running a notebook or library, either immediately or on a scheduled basis.

One final key thing I want to take you through is the network implementation. In terms of Azure Databricks network implementation, you have two options. By default, Azure Databricks is deployed in its own network, because it is a fully managed service on Azure, and that resource group gets locked; that's one way. However, you might want to deploy Azure Databricks within your own virtual network for a number of reasons. One reason can be connecting Azure Databricks to other Azure services existing on those virtual networks, or connecting an on-premises data source for use with Databricks, taking advantage of user-defined routes. Another reason can be that you want to inspect all the outgoing data from Databricks; in that case you can put in a network virtual appliance and inspect all outbound traffic. And finally, if you want to configure your own network security group rules to specify egress traffic restrictions, you can implement Azure Databricks within your own network. So there are two choices: either Azure Databricks is implemented in its own network as a fully managed service, or you implement it in your own virtual network within Azure. That's it for this lecture. In this lecture I have given you an overview of Databricks, its key concepts, and the network implementation options. The next lecture is a lab, where I'm going to show you how to create Databricks, how to create the cluster, and how to create a notebook with Scala commands in order to process a file within Data Lake Storage Gen2; I'm also going to show you how to trigger that notebook from Data Factory. So if you have some time, join me in the next lab.

8. Lab demo - Create Data bricks workspace and Spark cluster. Prep for extraction: Hi, welcome to this lab. In this lab I'm going to show you how to process data in the Data Lake Store using Databricks and Data Factory. This lab is a continuation of step two. In step two, what we did was migrate the data: we copied the data from the Azure SQL database into the Data Lake Store using Data Factory. So in step three, I'm going to show you how to process the data in the Data Lake Store using Databricks and Data Factory. I'm going to divide this lab into two parts. In the first part, I'm going to make a few fixes to what we did in step two, so that the text file is created with a meaningful name and includes the column names, because we haven't done that in step two. I'm also going to create a service principal and give it access to the Data Lake Store, and I'm going to create Databricks and a Spark cluster on top of it, et cetera.
In the second part of the lab, I'm going to show you how to write Spark code in Scala in order to process the data within the Data Lake Store. So first of all, let me change Data Factory so that the file created in the Data Lake Store gets a meaningful name and includes the column names. I'm going to go to the Data Lake Storage dataset and click on Connection: earlier we hadn't provided any file name, so now I'm going to provide one. Then, if you come down here, you can tick 'Column names in the first row', and publish this change; I'm not going to change anything else of what we did in step two. It says published successfully, so now go to the pipeline and trigger it. Let's check the status: still running; yes, it has succeeded. So let's go to Storage Explorer and see whether the right file has been created. Refresh, and you can see customeraddresses.txt. Let me download it properly, open the folder and open the text file: it now contains CustomerID, AddressID, AddressType, rowguid and ModifiedDate. So let's go back to the Azure portal; that change is done.

The next thing I want to do is create a service principal. Go to Azure Active Directory, go to App registrations; I'm going to call this one databricksaccess, give it a sign-on URL, and click Create. Once it is created, go to Settings, then Keys; I'm going to call the key spark, with a duration of one year, and save it. We need to copy this value, and I'm going to paste it into the Scala code that I have; it is the client secret. I also need to copy the client (application) ID, which is this one: copy it and paste it there as well. So we have copied those successfully. The next thing we need to do is give the service principal access to the Data Factory... sorry, to the Data Lake Store, sorry for that. Go to Access control (IAM) and add a role assignment. When you are selecting the role here, you need to be careful: you need to select the blob data contributor role. If you select the normal Contributor role, it does not provide access to the data within the storage account, because the normal Contributor role is associated with management-plane security, not data-plane security; keep that in mind when you're doing this. That's why I have selected Storage Blob Data Contributor. Now search for databricksaccess, select it and save. The role assignment has been saved successfully.

The next thing I'm going to do is create the Databricks resource: search for Databricks, click on it, click Create, and select the Azure trainings subscription. I'm going to call this rudradatabricks; for the resource group I'm going to use analyticstemprg, because that makes it easy for me to delete this later; location North Europe; pricing tier Standard. If you want to deploy Azure Databricks in your own virtual network you can do that, but in this case I'm going to deploy it in its own virtual network, and click Create. Generally it is very quick, so I'm going to wait for its creation. Now Databricks has been deployed successfully, so let's go into that resource, and you can launch the workspace.
Let me zoom in a bit, because otherwise it will be difficult to see. Now let me take you through some of the menu items here. Here is the Workspace: if you click on it, you can create notebooks and manage documents within your workspace. You can click on Data to add data and view databases and tables. Here is Clusters, where you can create Spark clusters and manage them, and if you want to run a notebook on a schedule, you can create a job, and so on; they are very straightforward. The first thing I'm going to do is create a cluster. I'm going to name it rudrasparkcluster; the cluster mode I'm going to leave as it is, the runtime version I'm going to leave as it is, and the Python version I'm going to leave as it is. For autoscaling, we don't need a lot, so I'm going to disable it and select two workers. If you want the cluster to terminate after a certain period of inactivity, you can specify that duration, and after that duration the cluster will be terminated automatically; in this case I'm going to select 60 minutes. The worker type I'm going to leave as it is; you can look at the advanced options and see various settings, but I'm going to leave those as they are and create the cluster. This is going to take some time. In the meantime, there is one final prerequisite we need, which is creating an authentication token. Go to User Settings and generate a token here, which I'm going to call datafactory. You need to copy it and paste it somewhere; in this case I'm going to paste it into a Word document, and click Done. Let's go back to Clusters: it's getting created. So that's it for this part of the lab; we have completed all the prerequisites required for the next part. If you have some time, join me in the next part of the lab.

9. Lab demo - Extract and transform the data using Spark on Azure Data bricks: Hi, welcome to this lab. Our Spark cluster has been created successfully. Now what I'm going to show you is how to create a notebook and copy some code into it. That code will fetch the data from the text file that we created in the earlier lab and process the data; basically, we will extract only selected columns of the table into a new DataFrame. First of all, let me create a notebook: go back to the workspace, click on it and create a notebook, which I'm going to call 'process data in data lake store'. For the language I'm going to select Scala, and for the cluster, we have only one cluster, so I'm going to bind it to that; then click Create. Now we need to copy some code into this notebook. I have already written the code. What this code does, basically, is use the service principal that we created earlier to connect to the Data Lake Store, access the data from the text file, and show that data to you. First, let me copy the code in and see whether it runs properly (looks like yes), and then I'll walk you through the details. If you're doing this lab, you need to change the tenant ID here (this is my tenant ID, so change it to yours), and also the client ID and client secret; I created this service principal in the previous part of the lab.
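The transcript describes this notebook cell but does not reproduce the code itself. Here is a minimal sketch of what such a cell might look like, assuming the container, account and file names from the earlier labs (images, rudradatalakestore, customeraddresses.txt) and placeholder credentials; treat it as an approximation rather than the author's exact notebook.

```scala
// Placeholders: replace with your own Azure AD tenant ID and the service
// principal's application ID and key (client secret).
val tenantId     = "<tenant-id>"
val clientId     = "<application-id>"
val clientSecret = "<client-secret>"

// OAuth configuration for the ABFS driver (Data Lake Storage Gen2).
spark.conf.set("fs.azure.account.auth.type", "OAuth")
spark.conf.set("fs.azure.account.oauth.provider.type",
  "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
spark.conf.set("fs.azure.account.oauth2.client.id", clientId)
spark.conf.set("fs.azure.account.oauth2.client.secret", clientSecret)
spark.conf.set("fs.azure.account.oauth2.client.endpoint",
  s"https://login.microsoftonline.com/$tenantId/oauth2/token")

// Hypothetical path matching the earlier labs.
val sourcePath =
  "abfss://images@rudradatalakestore.dfs.core.windows.net/customeraddresses.txt"

// The file was written with column names in the first row.
val df = spark.read.option("header", "true").csv(sourcePath)
df.show()

// "Processing": keep only the columns of interest, dropping rowguid.
val selected = df.select("CustomerID", "AddressID", "AddressType", "ModifiedDate")
selected.show()
```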
Once we have everything in place, it's time to read the data from the text file, put it into a DataFrame, and show that data. Hopefully it works properly; let's run it. Our job is running, so let's wait for it. As you can see, we are able to fetch the data from the customer addresses table successfully and show it here. However, let's say that as part of the processing we are not interested in the rowguid column; we are interested in the other columns. In that case we can change the code; I have already written that piece of code, so let me copy it in and run it. Basically, we're excluding that particular column and fetching the values of the other columns; that's a bit of transformation, or processing, of the data. The command is running, so let's wait for the result. The command has run successfully, and if you come down here, first we have the DataFrame with all the columns, and below it you can see the DataFrame with only the four columns that we are interested in. So that's a small example of processing the data.

And you can not only run the notebook from Databricks; you can actually run this notebook from Data Factory as well. Let me show you that. Go into Data Factory, go to the Databricks category and drag and drop the Notebook activity, then click on the notebook activity and go to the Azure Databricks tab. We haven't created a linked service for Databricks yet, so first we need to create one; it is under Compute. Select the subscription and the Databricks workspace, rudradatabricks, and choose an existing interactive cluster. Here is the access token that you need to provide: remember, in the previous part of the lab we created the authentication token within Databricks and copied the value; this is that value. Paste the token here and select from the existing clusters. To show you where it came from: if you go to User Settings, this is where we created the token, and that is the value I pasted here. Test the connection, hopefully it works, and click Finish. Now dismiss this and go to the pipeline; you should be able to see that linked service. Go to Settings and browse for the notebook path we created (sorry, one minute), select it, and that's it: publish the changes. So you can trigger the notebook as part of a workflow within Data Factory; that's what I wanted to show you. Now let's validate this and trigger it; hopefully it will run properly. To monitor the run, click on Monitor and you can see it in progress; let's wait a few minutes and keep refreshing until it completes. The pipeline has completed successfully; if you want to view the details, you can do so. So that's it for this lab. In this lab I have shown you how to create a notebook and copy some code into it so that it accesses the data from the Data Lake Store and does some simple processing, and I have also shown you how to trigger that notebook from Databricks and from Data Factory. I hope you find this lab useful.
In the earlier lab we wrote a Spark notebook that accesses the data in Data Lake Store, transforms it and displays it, but we haven't written the code to save the transformed data back into Data Lake Store. So in this lab I'm going to extend that Spark notebook to write the transformed data back into Data Lake Store, create a SQL Data Warehouse, create a destination table, and finally go to Data Factory, create the appropriate linked services to the SQL Data Warehouse, and change the pipeline so that it moves the data from Data Lake Store into the SQL Data Warehouse. If that isn't clear yet, follow what I'm doing carefully; by the end of this lab you'll have a full understanding.

First of all, let me go into Databricks and show you the piece of code I have extended. This is the code we wrote earlier: we provided the service principal credentials and the tenant, we accessed the customeraddress.txt file, fetched the data and displayed it, and we created another DataFrame that fetches only four columns of the table and displays them. Now we need to save that transformed DataFrame back into Data Lake Store, and for that purpose I have written this code. From here onwards you might notice a different way of connecting to Data Lake Store. The reason is that there are two ways to connect: you can connect to Data Lake Store directly, as before, or you can mount Data Lake Store onto the Databricks file system (DBFS), and I want to show you both. So what I'm doing here is taking all the configuration we used before — the service principal client ID, client secret and so on — and mounting the Data Lake Store path onto the Databricks file system as a mount point. From that point onwards I don't need the full path; I just use the mount path. Further down, I write the DataFrame out as transformed.csv. However, there is one problem when you do this in Spark: by default Spark creates a folder with that name rather than a single file, because it writes partitioned output. We don't want that folder, because we want to access the output file programmatically from Data Factory. So what I'm doing is copying the data file from inside that folder into transformed.txt and then deleting the transformed.csv folder. If you want to understand this better, run the notebook with that last piece of code excluded and you will see a folder called transformed.csv get created; then paste the code back in, run it again, and you will see only transformed.txt, with the folder deleted. I hope that gives you an understanding of what the code does. By the way, I'm not going to run this code now, because I'm going to run it from Data Factory.
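Again for reference, a rough sketch of what that additional notebook code might look like is below. The mount point, account names and file paths are placeholders I have assumed, and the single-file workaround shown is just one way of doing what the transcript describes, so treat it as an outline rather than the exact code from the video.

```scala
// Sketch only - placeholder names throughout.
// Mount the Data Lake Storage Gen2 container onto the Databricks file system (DBFS)
// using the same service principal credentials as before.
val configs = Map(
  "fs.azure.account.auth.type" -> "OAuth",
  "fs.azure.account.oauth.provider.type" ->
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
  "fs.azure.account.oauth2.client.id" -> "<client-id>",
  "fs.azure.account.oauth2.client.secret" -> "<client-secret>",
  "fs.azure.account.oauth2.client.endpoint" ->
    "https://login.microsoftonline.com/<tenant-id>/oauth2/token")

dbutils.fs.mount(
  source = "abfss://<container>@<storage-account>.dfs.core.windows.net/",
  mountPoint = "/mnt/datalake",
  extraConfigs = configs)

// Write the transformed DataFrame (from the previous sketch) as CSV. Spark writes
// a *folder* called transformed.csv containing a part file, so afterwards we copy
// that part file to transformed.txt and delete the folder.
transformed
  .coalesce(1)                       // force a single output part file
  .write
  .option("header", "true")
  .mode("overwrite")
  .csv("/mnt/datalake/transformed.csv")

val partFile = dbutils.fs.ls("/mnt/datalake/transformed.csv")
  .map(_.path)
  .filter(_.contains("part-"))
  .head

dbutils.fs.cp(partFile, "/mnt/datalake/transformed.txt")
dbutils.fs.rm("/mnt/datalake/transformed.csv", recurse = true)
```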
The next thing we need to do is create the SQL Data Warehouse. Let me go into resource groups — sorry, I'm not going to create a resource group, I'm going to create a resource, which is SQL Data Warehouse. The subscription is Azure Trainings, the resource group is Analytics, and for the template I'll select a blank database, because we're going to create the table ourselves anyway. For the server, I already have one — the rudra SQL server — so I'll select that. We don't need that many Data Warehouse Units, so I'm going to select the bare minimum, click Apply and then Create. This is going to take some time, so I'm going to pause the video until the SQL Data Warehouse has been created.

Now our SQL Data Warehouse has been created successfully. To view it, go to the resource, and let's copy the server name, because the next step is to create a table within the SQL Data Warehouse and we need to connect to it using SQL Server Management Studio. Paste the server name, select SQL Server authentication, enter the login and connect. We're connected, so let's expand the SQL Data Warehouse and its tables — we don't have any — so I'm going to create a new table. I already have a small script to create the table, so let me copy it in. It's a very straightforward script: it creates a table with four columns, which is what the transformed data will contain. To move the data into SQL Data Warehouse we need to create this table explicitly. If you were migrating data from SQL Database into SQL Data Warehouse you wouldn't need to, because Data Factory would create the table on the fly; but because we are copying the data from Data Lake Store into SQL Data Warehouse, we need to create the table explicitly. In future Microsoft might change this and enable automatic table creation — I don't know — but at the moment it isn't supported, so we create the table ourselves. Let's execute the script; the table was created successfully. Refresh, and you can see the table there.
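The script itself isn't shown in the transcript, so here is a minimal sketch of what a four-column destination table might look like, assuming the notebook kept the CustomerAddress columns other than rowguid. The column names, types and distribution options are my assumptions, not the exact script from the video.

```sql
-- Sketch only: hypothetical destination table for the transformed data.
-- Assumes the four columns kept by the Spark notebook (rowguid excluded).
CREATE TABLE dbo.CustomerAddress
(
    CustomerID   INT          NOT NULL,
    AddressID    INT          NOT NULL,
    AddressType  NVARCHAR(50) NOT NULL,
    ModifiedDate DATETIME     NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,  -- simple default for a small demo table
    HEAP                         -- skip the clustered columnstore for a tiny dataset
);
```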
Now let's go into Data Factory. We already have the datasets and the pipeline we built earlier, so now I'm going to create the connections. First, create a linked service to Azure Data Lake Storage. The Data Lake Storage linked service we created earlier was used to refer to the source file: when we migrated the data from SQL Database into Data Lake Store we created a text file, and that linked service referred to it. Our Spark code, however, transforms that text file and generates transformed.txt, so we need a linked service and dataset for transformed.txt as well. Click on it, continue, select Azure Data Lake Storage and the storage account, and I'm going to call this one "transformed". Test the connection, finish, and dismiss. The next thing is to create a linked service to the Data Warehouse: subscription Azure Trainings, server name the rudra SQL server, database the rudra SQL Data Warehouse, SQL authentication; test the connection, finish, and dismiss.

Now that the linked services are created, we need to create datasets for them. For the first one, go to Connection and this time use the Data Lake Storage "transformed" linked service; the file path is transformed.txt. Just to make sure the file name is exactly right, let me copy and paste it. We don't have that file yet, so we can't browse for it, and in terms of schema we can't import it either, because the file hasn't been generated. We also need to tick "column names in the first row", since the first row is a header. That's it — we don't need to do anything else. Now I'm going to create another dataset for the warehouse connection: the table is CustomerAddress, and under schema there is nothing to preview — sorry, there is no data yet. Let me give both datasets meaningful names — this one is "transformed" — and publish them all.

Now go to the pipeline and drag in another copy data activity to configure. Sorry, I've zoomed in a bit to make sure it's visible to you, and now I'm getting all kinds of layout problems, but anyway: for this copy activity we select the source, which is the Data Lake Storage "transformed" dataset, and the sink, which is the SQL Data Warehouse dataset. I'm not going to use PolyBase, because it isn't supported for Data Lake Storage Gen2 yet. That's it, so let's publish. The publish has completed, so let me give you a recap. Firstly, we have a copy data activity that copies the data from SQL Database into Data Lake Store. Secondly, we have a notebook activity that triggers a notebook — this is the Databricks activity, and under Settings you specify the notebook path, so that notebook is what this activity runs. What the notebook does is transform the data — in other words, a simple bit of processing that excludes one of the columns of the source table — and create a destination text file, transformed.txt. Finally, this new copy activity copies transformed.txt into the SQL Data Warehouse. I hope you now have a good understanding.

Now let's validate everything and then trigger the pipeline; hopefully it will work on the first run, and if not we'll identify the problem and fix it. To monitor the pipeline you can click here and see what is happening, and you can also go to Storage Explorer and watch the files. You can see customeraddress.txt has already been created, which means our first copy activity has completed. The Spark notebook has now been triggered and is running; let's go back and refresh. It's still running, but the cluster is already up, so hopefully it will be quick. There we go: the transformed.csv folder has been created, and the next piece of code will delete that folder after copying the text file inside it into transformed.txt. Let's refresh — transformed.txt has been created, which means the notebook activity has completed successfully. Now we're into the copy activity that copies the data from Data Lake Store into the SQL Data Warehouse, so let's see what happens. If you want to monitor it, you can go to Management Studio and select from the table — still no data, which means the activity is still running; it might take some time to connect to the Data Warehouse and populate the data, so I'm going to pause the video until this pipeline run completes. And now our pipeline run has failed. To view the issue, click on the error on the last activity — the one that transfers the data from Data Lake Store into the warehouse. What it is saying, essentially, is that a column defined in the dataset cannot be found in the physical database; because we haven't provided any mapping, it got confused, I think. So let's go back — not to the pipeline, but to the Data Lake Storage "transformed" dataset.
Go to Connections. Earlier we couldn't import the schema because transformed.txt didn't exist; now that it has been created, we can import the schema from the file, so let's do that and publish. Basically, earlier we hadn't provided any mapping, so the copy activity got confused because it wasn't able to map the columns appropriately. Now click on the copy activity, go to Mapping and click Import schemas, and you are given a mapping, generated automatically. Because we hadn't done this before, the copy activity couldn't match the two sides when it tried to map the columns — ideally it should have managed anyway, I don't know why it didn't — but let's publish these changes. One more thing we need to do is delete those two output files, because they will be created again, and delete transformed.txt as well. Now I'm going to run the pipeline again; this time it should succeed, as I can't see any other reason for it to fail. Let's trigger it, and I'm going to pause the video once again until this pipeline run completes successfully. Welcome back — the pipeline run has now completed successfully, and you can see all the activities here. The first copy activity copies the data from SQL Database into Data Lake Storage, the second activity runs a notebook that transforms the data and writes it back into Data Lake Storage, and the final activity copies it from Data Lake Storage into the SQL Data Warehouse. I hope you find this lab useful.

11. 11. Introduction to Power BI: Hi, welcome to this lecture. In this lecture I'm going to take you through Power BI and its capabilities, and I'll also tell you how you can use Power BI in conjunction with Azure services. Power BI is not actually part of Azure — it is part of Office 365 — but it is a very popular tool that is used along with Azure services, so let me take you through it. Microsoft Power BI is a business analytics service that provides interactive visualizations with self-service business intelligence capabilities. Power BI is a very powerful reporting tool: you can develop different kinds of reports and dashboards, share those dashboards and interactive reports with your colleagues and users, and scale Power BI across your organization with built-in governance and security. The key to Power BI's success is its simplicity — even an end user with very little IT knowledge can develop reports and dashboards, and that is its biggest strength. When you develop things with Power BI, you generally follow five steps. The first step is getting the data into Power BI. There are a large number of connectors you can use to connect to source systems and pull in the data — it can be an Excel file, an Azure SQL Database, or an on-premises database — and bring it all together in Power BI. Once you have the data, you start developing reports: you can create reports with interactive visuals such as charts and graphics, and it is a very good tool for developing them. Once you have developed reports, you can pin them to a dashboard. A dashboard brings the most important metrics together on one screen to tell a story. For example, say you have recently released an advert for your product.
Now you want to measure the success of that advert. For that, you can design a dashboard with different metrics, or different views of the same data, to get insight into how successful your recent ad campaign has been. Once you have created the dashboards, you can bundle different dashboards into an app and publish it to your users, so users can view those dashboards and reports using the mobile app or on the desktop through the browser. And finally, once business users start going through these reports, they can analyze them, monitor them, and take informed business decisions. Even in these days of a data feast, there is still a lack of data presented in a way that is meaningful enough for business users to take informed decisions. Personally, in my experience, there is always a struggle for business users to get a proper report that provides good insight so they can make an informed business decision, and Power BI can help you achieve that.

Now, how do we carry out these steps with Power BI? There are different tools available within Power BI for each of them, so let me take you through them. The first is Power BI Desktop. It is a Windows desktop application that you can use to develop reports and publish them. If you want to develop very complex reports, you generally do it in Power BI Desktop; you can build similar reports using Power BI online as well, but people tend to use Power BI Desktop for complex reports and, once a report is built, publish it to the Power BI service. The Microsoft Power BI service is also called Power BI online, or app.powerbi.com. In Power BI online you can view the reports published from Power BI Desktop, develop reports directly, and bring those reports together to build a dashboard. The next tool is the Power BI mobile apps: Power BI offers a set of mobile apps for iOS, Android and Windows mobile devices. You can publish dashboards from Power BI online into an app, and end users can then connect and interact with your cloud and on-premises data in the form of reports and dashboards — they can view the data and analyze it. Finally, for on-premises customers there is Power BI Report Server, which allows you to publish Power BI reports to an on-premises report server after creating them in Power BI Desktop. So there are four offerings: Power BI Desktop to build reports and publish them to the Power BI service; the Power BI service to create reports and dashboards and publish them into apps; the Power BI mobile apps, with which your users can view dashboards and reports and analyze the data; and, for on-premises customers, Power BI Report Server to host the published reports and serve the on-premises user community.

Now, how does Power BI work together with Azure? By combining Azure services and Power BI, you can turn your data processing efforts into analytics and reports that provide real-time insights into your business. Azure and Power BI have built-in connectivity and integration for developing complex business intelligence solutions. As I said, Power BI has a large number of connectors.
Some of these connectors are to Azure services, so you don't need to write any code to connect to them — for example Stream Analytics and Event Hub. In the Stream Analytics configuration, you can configure the job to stream its output straight into Power BI. Similarly there are Azure HDInsight, Machine Learning, Azure SQL Database and SQL Data Warehouse — a whole host of services within Azure can feed data into Power BI, and Power BI can pull data from a host of Azure services. So by combining these data sources and Power BI, you can build complex business intelligence solutions that provide insights into the business and enable business users to make informed decisions. That's it for this lecture. In this lecture I have given you a brief introduction to Power BI and the five key steps you follow when developing a solution with it, I have taken you through some of the tools available in Power BI, and finally I have explained how you can use Azure and Power BI together to deliver complex business intelligence solutions. The next lecture is a lab where I'm going to show you how to fetch the data from SQL Data Warehouse and present it in Power BI in the form of a report. So if you have some time, join me in the next lab.

12. 12. Lab demo - Visualize the data using Power BI: Hi, welcome to this lab. In this lab I'm going to show you how to visualize the data in SQL Data Warehouse using Power BI. This lab is a continuation of the earlier one: in the previous lab I showed you how to transfer the data from Data Lake Store into SQL Data Warehouse — that was step four. Now, in step five, we are going to present the data that is in SQL Data Warehouse in Power BI. By the way, the objective of this lab is not to teach Power BI, but to show the data from SQL Data Warehouse in Power BI. So let's go to Power BI Desktop. Click Get Data, click More, type in "SQL Data Warehouse" and select it. For the server, we need to go to the Azure portal, copy the server name and paste it here; I'm not going to specify the database. For the data connectivity mode, we are not going to import the data. Now we have two databases: one is the normal SQL Database and one is the SQL Data Warehouse. Expand it and select the CustomerAddress table that we created; you can view the data and then load it. Now we have the CustomerAddress table here and you can see its columns; you can drag and drop a column into a chart on the workspace. As an example, let's say our objective is to show how many customers are being added on a monthly basis. For that, I'm going to drag and drop ModifiedDate, and now you can see, month by month, how many customers were added. You can select the chart and apply different visuals on top of it — it is as simple as that to develop a report using Power BI, in case you haven't used it before. Now let's publish this report to Power BI online, into My workspace — yes, replace the existing one, that's fine. So let's go to Power BI online. We have now published from Power BI Desktop to Power BI online. In Power BI online, go to Datasets and you can see "customer insights" there, and you can also see a report called "customer insights". Click on it — and we are getting an error, so let's click on Show details.
Go to the dataset settings page, click on the "customer insights" dataset and set the data source credentials. I'm going to choose Basic here. It should show username and password fields, but I don't know why it's only showing one box — anyway, let's sign in. That's it, I have signed in, so now I should be able to see the report. Click on it, and there you go — you can see the report we published from Power BI Desktop. You can also design the report right here in the browser; you don't need to go to Power BI Desktop, although for most complex reports people generally use Power BI Desktop to develop them and then publish to Power BI online. Here you can pin this report to a dashboard, so you can develop multiple reports with multiple views and pin them all to one dashboard. So you can change the report here if you want, and pin it to a dashboard; I'm going to call the dashboard "customer insights" and pin it. Now go to the workspace and you can see the dashboard that was newly created; click on it. These are generally called tiles, and you can add multiple tiles to the dashboard if you have more reports, so you can design your dashboard in whatever way tells a story to your business users. Finally, once you have designed the dashboard, you can share it with other users. Unfortunately that feature used to be available for free, but it is now part of Power BI Pro — unless you have Power BI Pro you won't be able to share the dashboard. I don't have Power BI Pro, so I'm not going to share this one. So that's it for this lab. In this lab I have shown you how to use Power BI Desktop to load the data from SQL Data Warehouse and design a simple report, how to publish that report to Power BI online, and how to pin the report to a dashboard and, finally, share it — although we couldn't actually share it, because I don't have a Power BI Pro licence. I hope you find this lab useful.

13. 13. Introduction to Azure data explorer: Hi, welcome to this lecture. In this lecture I'm going to take you through Azure Data Explorer and its key capabilities. Azure Data Explorer is a fast and highly scalable data exploration service for log and telemetry data. If you want to explore your log and telemetry data in its raw form, or explore data that consists of billions of rows, you can use Azure Data Explorer, because it is highly scalable and very fast. You might be thinking: why do I need Azure Data Explorer when I have Log Analytics to examine log data, and Application Insights as well? Basically, Azure Data Explorer is the foundation for both of those tools — in other words, Log Analytics and Application Insights merely extend the capability of Azure Data Explorer. Its key features include several ingestion methods, including connectors to common services like Event Hub; you can ingest data programmatically using .NET and Python, and you have direct access to the engine for exploration purposes. So there are several methods you can use to ingest data, and different types of data are supported by Azure Data Explorer: you can have structured data, semi-structured data, and even unstructured data, that is, free-form text fields.
Data Explorer essentially bridges the gap between unstructured text logs and structured numbers and dimensions by extracting values at run time from free-form text fields. You generally have some text logs that follow a structure and some that don't; you can use Data Explorer to read both kinds and bring them together. I'm going to show you in the lab how you can ingest the data and analyze it.

When you are using Azure Data Explorer, there are three key steps you generally carry out, so let me take you through them. The first key step is creating a cluster: you need to create the cluster and then create one or more databases in it. Once you have created a cluster and a database, you can start loading data into the database and run queries against it. The third step is querying the database using Data Explorer: Data Explorer comes with its own web application, and using it you can run, review and share queries and results. In the next lab we are going to use the Data Explorer web application to run some queries on the ingested data. So the two critical steps in Data Explorer are ingesting the data and querying the database; let me take you through them in a bit more detail.

In terms of ingesting data into Data Explorer, you generally follow six steps. The first is pulling the data from external sources, whether that is an Azure queue or Event Hubs. The second is batching the data: to manage throughput, you can batch data flowing into the same database and table to optimize ingestion throughput, so that one data source ingesting a lot of data doesn't crowd out other ingestion processes. The third is validation: you can carry out preliminary validation and format conversion, if required, before populating the database. Then you can manipulate the data: you can specify where each source field should go in the destination, that is, provide a mapping between source columns and destination columns. The fifth is a persistence point in the ingestion flow: it manages the ingestion load on the engine and handles retries upon transient failures, so if something fails you can roll back and start again from that point. And finally, committing the data ingestion: once you have ingested the data, you make it available for queries. Those are the six key steps you generally follow when ingesting data into Data Explorer.

The next critical step is querying the database, so let me take you through that in a bit more detail. One of the key benefits of Azure Data Explorer is the ability to carry out time series analysis with ease: it contains native support for creating, manipulating and analyzing multiple time series. When you are doing time series analysis you generally follow two steps. The first step is to partition and transform the original telemetry table into a set of time series. Let's say you are comparing your year-on-year gas usage, so you need periods of one year — how much gas was used in 2019, in 2018, 2017, 2016 — in which case you partition the data yearly. Once you have created a set of time series, you can use the built-in functions within Azure Data Explorer to process and analyze them.
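As a rough illustration of that first step, a Kusto query along the lines of the sketch below turns a raw telemetry table into a set of time series. The table and column names (Telemetry, GasUsed, MeterId) are hypothetical placeholders, not objects from this course.

```kusto
// Sketch only - hypothetical table and columns.
// Partition the raw telemetry into one weekly usage series per meter,
// then chart the resulting set of time series.
Telemetry
| make-series WeeklyUsage = sum(GasUsed) default = 0
    on Timestamp from datetime(2018-01-01) to datetime(2019-01-01) step 7d
    by MeterId
| render timechart
// Built-in functions such as series_fit_line() or series_decompose_anomalies()
// can then be applied to the generated series for trend and seasonality analysis.
```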
There are a lot of built-in functions. You can do filtering, which is a common practice in signal processing, and use it for time series processing. You can use regression analysis to identify the trend — for example, based on the trend of the past twelve months, what are the next six months going to look like? And there is seasonality detection: the beauty of Data Explorer is that it has built-in functions for seasonality detection as well, so you can use the corresponding functions to detect it automatically. For example, if your customer is a utility company and they want to understand how climatic conditions affect energy usage, you can use the weather pattern and, based on that, detect how much gas is going to be used in summer, winter and spring. So that is how you can do time series analysis using Data Explorer. That's it for this lecture. In this lecture I have introduced Azure Data Explorer, explained the three key steps you generally carry out within it, and described the steps you follow when ingesting data into Data Explorer and when doing time series analysis. The next lecture is a lab where I'm going to show you how to create a Data Explorer cluster and a database within it, ingest data, and run some simple queries on that ingested data. So if you have some time, join me in the next lab.

14. 14. Lab demo - Create data explorer and ingest & query data: Hi, welcome to this lab. In this lab I'm going to show you how to create a Data Explorer cluster, create a database in it, and also how to create a table and load data from Azure Blob storage into the Data Explorer database. First of all, let me create the Data Explorer cluster: type in "Data Explorer", click on it and click Create. I'm going to give the cluster a name, the region will be North Europe, and for the compute specification I'm going to go for the cheapest option. Select it, then Review and Create. This is going to take a few minutes, so I'm going to pause the video and come back once it has deployed successfully. Now the Data Explorer deployment has completed, so let's go to the resource. Here you can scale up and scale out your cluster capacity and so on. The next thing we need to do is create a database. I'm going to give the database a name, set the retention period, and the cache period default is fine, so click Create. Click on the database, and here you can do a number of things. You can create a data ingestion connection: you specify where the data is coming from into the Data Explorer database — the source can be Event Hub or Blob storage. If it is Event Hub, you provide a name, select the Event Hub namespace and the Event Hub, and also the destination table within the database to which we are sending the data, what format the source messages will be in, and how the source message attributes map to the destination table columns; here you can specify the name of the mapping that maps JSON data attributes to table column names. Similarly, you can select a Blob storage source and provide the appropriate information. I had thought of creating a data connection and showing you how to ingest diagnostic logs via Event Hub into Data Explorer.
But unfortunately I hit a showstopper. If I resolve it, I will replace this lab with that more complex scenario, but for now I'm going to show you a very simple one, so go to Query. When you are creating tables or fetching data from tables in Data Explorer, we use a different language: it is not SQL, but it is similar to SQL. If you have worked with Log Analytics, you already know this syntax for querying the database and creating tables. For now, let me show you two simple commands, one to create a table and one to ingest data into it. Here is the command to create a table; let's execute it. This creates the table, and you can read through the command to see exactly which column names and data types are being specified. The table has been created successfully. Now I'm going to ingest data into this table using the second command. What this command does is access the StormEvents CSV file, fetch the data in it, and load it into the table we created a moment ago. You might notice there is a key in the URL — it is a valid SAS key, and you need to use the same SAS key. Let me copy the command, paste it here and run it. Now all the data has been ingested successfully. If you want to query the data, all you need to do is mention the table name — and what is the table name? StormEvents. So if we want to view all the data in StormEvents, we just type the table name. If we don't want to view all of the data in StormEvents, we can apply filters, just like in SQL; let me allow this query to complete and then I'll show you. Now you can see all the data in the StormEvents table. But let's say we want to show only the first ten records: in that case you add "take 10" to the query, and it returns only ten records. So this is what the query language in Data Explorer looks like. It is exactly like Log Analytics, because Log Analytics and Application Insights are both extensions of the Azure Data Explorer capability for a specific purpose. That is how you use the query language to search the data, and there are more complex things you can do using the out-of-the-box functions available within Data Explorer. I know I'm showing you something very simple here — I did try a more complex scenario, but I ran into a showstopper; if I resolve it, I'll replace this lab with the more complex scenario. So that's it for this lab. In this lab I have shown you how to create an Azure Data Explorer cluster, create a database in it, create a small table, ingest data into that table from Blob storage, and write a simple query statement to get the data back. I hope you find this lecture useful.
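For reference, the control commands and queries used in this lab look roughly like the sketch below. The StormEvents schema is abbreviated and the blob URL and SAS token are placeholders — use the sample file URL and SAS key that come with the lab (or your own blob), not these values, and run each statement separately.

```kusto
// Sketch only - abbreviated schema and placeholder URL/SAS token.

// 1. Create the destination table with explicit column names and types.
.create table StormEvents (
    StartTime: datetime,
    EndTime: datetime,
    State: string,
    EventType: string,
    DamageProperty: int
)

// 2. Ingest the CSV file from Blob storage; the URL must carry a valid SAS key.
.ingest into table StormEvents
    h'https://<storage-account>.blob.core.windows.net/<container>/StormEvents.csv?<sas-token>'
    with (ignoreFirstRecord=true)

// 3. Query the table: the table name alone returns everything...
StormEvents

// ...and operators can be piped on, much like filters in SQL.
StormEvents
| take 10
```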
15. 15. Introduction to Azure data catalog: Hi, welcome to this lecture. In this lecture I'm going to give you a brief introduction to Azure Data Catalog. Before I start explaining Azure Data Catalog, let me set up a scenario. Imagine a library with hundreds of thousands of books, but no information about what each book contains. If somebody walks into that library wanting to find information about a topic, how can they get to the right book and read the right section of it? A similar situation is faced by many companies: they have all the data they need, but they don't know where it is located or how it is stored. With a tool like Azure Data Catalog you can address that issue. Azure Data Catalog is a fully managed cloud service where users can register and discover the data sources they need and understand the data sources they find. It is a single, central place for all business users to contribute their knowledge and build a community and culture around data. There are two issues that Azure Data Catalog addresses. The first is discovering enterprise data sources: Data Catalog makes this very easy, whereas without it it is a very tedious task. And with Data Catalog you can also create and maintain documentation — doing that by other means is complex and time consuming, but with Data Catalog it is very easy; I'm going to show you that in the next lab. In terms of using Data Catalog, you generally follow five steps. The first is provisioning the data catalog: first of all you need to create a data catalog, and you can only have one data catalog per tenant. Once you have provisioned the data catalog, you can start registering and annotating data assets: you register an asset with its key structural information and add tags and a description to make searching easy. The next step is discovering assets: business users using the data catalog can easily search for the data they need, because the metadata is indexed. And it is not only about searching the catalog and discovering a data source — you can also easily connect to the data source and pull the data. I'm going to show you that in the next lab as well: how you can download some of the data into Excel, for example fetch the top 1,000 rows. Finally, you need to control who has access to the data catalog, which you do under managing assets. So those are the five steps: first, provision the data catalog; once it is provisioned, register and annotate data assets; your users can then discover those assets very easily and connect to the data; and you manage the assets — in other words, secure them to make sure the right people have the right access to the data. That's it for this lecture. In this lecture I have given you a brief introduction to Azure Data Catalog and the key steps you carry out within it. The next lecture is a lab where I'm going to show you how to provision the data catalog, register an asset, discover an asset, and connect to the data. So if you have some time, join me in the next lab.

16. 16. Lab demo - Create data catalog and register data assets: Hi, welcome to this lab. In this lab I'm going to show you how to provision a data catalog and, once it is provisioned, how to register data assets into it, discover those assets, and finally look at the data stored in a data set. So first of all, let's get started. Click on this link — we are going to provide it in the resource section of this lecture, so you can get it from there. Scrolling down, the first thing you need to do is provide a data catalog name and a subscription — Azure Trainings — and the catalog location, North Europe. By the way, there can be only one catalog per tenant.
However, you need to deploy the data catalog into one of your subscriptions for resource management and billing purposes. For pricing you have two options; I'm going to go for Free. You can add catalog users here if you want, and administrators as well, and once you have set everything up you can create the catalog. The catalog has been created successfully. To view it, go to the Azure portal and refresh: you can see a resource group that was created for the catalog, and you can click on the rudra data catalog — so that is another way to view the catalog. Now go back. There are two ways you can publish data into the catalog, that is, register data assets: one is using the desktop application, and the second is manual entry. My recommendation is to go for the application, because creating manual entries is often a tedious task. So let's launch the application, install it and run it. Once it is installed successfully we need to sign in, so let's sign in and verify. Now we need to select the data source from which we want to register data assets, so click on SQL Server, and then we need the server name — let's copy it. For authentication I'm going to select SQL Server authentication, to keep it simple, and the database is the rudra database, I think — yes, so let's copy that and connect. Now we can go into the database and look for the tables; you can store all of these data sets in the data catalog if you want, so I'm going to select everything and register it. The registration is complete. Now we need to search for the assets — that is, discover them. To do that, go to your data catalog and open the data catalog portal (by the way, you can also do it from here by clicking View Portal, but I'm doing it this way). Let's search for "address". You can see we have two data assets: Address and CustomerAddress. If you want to see the details, open one, and here you are able to annotate the asset: for example, you can give it a friendly name and a description, record who the expert for this data asset is, and tag it. All the connection information is here, and if you click here you would be able to see a preview, but it was not included when we registered. For each column you can also provide a description and tags. Sometimes business users want to know what a particular data attribute is all about and where it is located; these descriptions and tags allow them to search easily for a particular column or data attribute. Under Documentation you can add documentation related to the data asset. And if you want to open the asset, you can open it in Excel, in Power BI Desktop, and so on. I want to show you opening it in Power BI Desktop. To do that — sorry, let me switch windows here; I should have staged this — let me search for the asset again and choose Open in Power BI Desktop, and open it. Users can immediately see the data within the data set. It is as simple as that: they don't need to know the underlying database or go looking for it, they can search from here. Click on it, apply the changes, and there you go — you can see the address information stored in there. So that's it for this lab.
In this lab I have shown you how to provision a data catalog, register data assets into it, discover those data assets, and look at the data stored in a data set. I hope you find this lab useful.