ASP. NET Core and Azure | Trevoir Williams | Skillshare


Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

47 Lessons (5h 5m)
    • 1. Introduction

    • 2. What Is Microsoft Azure

    • 3. Azure Portal Tour

    • 4. Create a Virtual Machine with Azure Wizard

    • 5. Connect to Virtual Machine

    • 6. Create Virtual Machine using Azure CLI

    • 7. Section Review

    • 8. Section Overview

    • 9. Setup Azure Web App

    • 10. Explore Azure Web App Tools and Services

    • 11. Setup Azure SQL Database

    • 12. Azure Web App Monitoring and Logging

    • 13. External Azure Management Tools

    • 14. Section Review

    • 15. Section Overview

    • 16. Azure SQL - Hosting Options

    • 17. Azure Cosmos DB

    • 18. Creating a Document in Cosmos DB

    • 19. Reading Documents from Cosmos DB

    • 20. Section Review

    • 21. Section Overview - Storage

    • 22. Azure Blob Storage

    • 23. Uploading to Blob Storage

    • 24. Securely Access Blob with SAS Token

    • 25. Section Review

    • 26. What is Azure Serverless Architecture

    • 27. Azure Service Bus and Why

    • 28. Additional Azure Service Bus Features and Settings

    • 29. Building an Azure Function

    • 30. Publish Azure Function

    • 31. Section Review

    • 32. Section Overview - Azure AD

    • 33. Explore Azure Active Directory

    • 34. Create Azure AD User

    • 35. Register Azure AD Application

    • 36. Create .NET Core App with Azure AD Auth

    • 37. Setup OAuth 2.0 Authorization Code Flow

    • 38. Secure API with Azure - Part 1

    • 39. Secure API with Azure - Part 2

    • 40. Setup 'On Behalf Of Flow' (API to API Communication) w/ Token

    • 41. Section Review

    • 42. Section Overview - Azure AD B2C

    • 43. Azure AD vs Azure AD B2C

    • 44. Provision AD B2C Application

    • 45. Authenticate using .NET Core Application

    • 46. Section Review

    • 47. Conclusion







About This Class

Microsoft Azure is the premier cloud hosting platform for .NET applications. The modern .NET developer must be comfortable navigating its different services and features, and taking advantage of the cloud platform to produce top-notch enterprise applications.

In this course, you will get familiar with Microsoft Azure, its interface, and its various services. You will provision and then use Microsoft Azure resources and services, and gain an appreciation for how everything connects and can contribute to the stable, modern application you are developing.

Along the way, you will learn how to:

  • Navigate and customize Azure Portal

  • Provision and Manage Microsoft Azure Services

    • Provision Virtual Machines on Azure

    • Provision and use Azure App Services

    • Use Azure SQL and understand the different hosting models

  • Use Azure Blob Storage

  • Use NoSQL databases (e.g. Azure Cosmos DB)

  • Monitor web applications for performance and potential errors using Application Insights

  • Scale applications and databases based on load

  • Setup continuous deployment with GitHub Actions and Azure Web App Services

  • Manage application secrets in .NET applications

  • Use Azure Service Bus and Queues

  • Build and Deploy Azure Functions

  • Integrate Advanced .NET Application Security with Azure AD and Azure AD B2C
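Several of the tasks above have short CLI equivalents. As a taste of what's ahead, here is a minimal sketch of pushing an ASP.NET Core app to Azure App Service with the Azure CLI; the app name, runtime version, and SKU are illustrative assumptions, not values taken from the course:

```shell
# Run from the folder containing the .csproj.
# Creates a resource group, App Service plan, and web app if they
# do not already exist, then builds and deploys the current project.
az webapp up \
  --name my-sample-aspnet-app \
  --runtime "DOTNETCORE:8.0" \
  --sku F1   # free tier, fine for experiments
```

`az webapp up` is convenient for quick demos; for the GitHub Actions continuous deployment covered later in the class, the App Service Deployment Center in the portal can generate the workflow file for you.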

By the end of this course, you should have a fundamental understanding of what Microsoft Azure is and how its many services and third-party tools can be used to best serve your context.

This course aligns with the training required for Exam AZ-204: Developing Solutions for Microsoft Azure, though it is not an official training guide. It is perfect for you if you need to know enough about developing with Azure to be functional in your workplace, without taking the exam.

A foundation in ASP.NET Core development will be a plus, because we will focus less on the fundamentals and only make modifications to an existing application as needed to complete the tasks in this course. If you are unfamiliar with ASP.NET Core, you may visit the course Complete ASP.NET Core and Entity Framework Development, which gives a very beginner-friendly start to the ASP.NET Core ecosystem and will allow you to get up to speed quickly.

Along the way, we also author some original and unique applications to demonstrate how integrations work between our code and Microsoft Azure APIs.

Meet Your Teacher


Trevoir Williams

Jamaican Software Engineer





1. Introduction: Hello and welcome to the course, Microsoft Azure for .NET Developers. I am your instructor, Trevoir Williams. For the past 10 years I've been a software engineer, a systems administrator, and a lecturer. As a fellow .NET developer who has also been trained in using Microsoft Azure, I can assure you I understand that it sometimes comes as a big surprise when you hear about what Azure can do. In this course, I want to show you how Azure can be integrated into your .NET applications, and how its infrastructure and services can be used to maximize the potential of your apps. Throughout this course, we're going to be looking at the Azure portal in general. We'll look at how we can provision virtual machines and various apps and services, and how they all interact to give us a good experience. In the same vein, we will also be looking at how we can consume and use these services in our .NET applications, so we will definitely explore some development and some management. In order to make the most of this course, I do recommend that you have the latest version of Visual Studio, and that you have some C#, .NET, and database knowledge, because we are going to be talking about some concepts assuming that you're already familiar with most of the terminology. With all of that said, I want to once again welcome you to the course, Microsoft Azure for .NET Developers, and I'll see you soon. 2. What Is Microsoft Azure: Hey guys, we're going to kick-start this course by exploring what exactly Microsoft Azure is. You've probably heard of it, you've probably seen it, you've probably even used it before, but at this point I'll do an informal tour of what you can expect when you get started with Microsoft Azure. In a nutshell, Microsoft Azure is Microsoft's flagship cloud platform.
So you've heard about the cloud, and you've heard about the different providers that offer cloud services: Google Cloud, Amazon Web Services (AWS), Oracle, IBM; pretty much everybody has their own cloud platform. But when it comes to compatibility with .NET technologies, Microsoft Azure of course works best with Microsoft technologies, in my experience. You can easily get to the Azure website; I'm viewing it in English (US), but based on your region you may see a version of the site that is more localized for you. What they present from the get-go is the ability to start a free trial, which gives you up to $200 worth of credit. Most of what we're going to be doing in this course would normally cost money, so if you don't already have an Azure account, you can go ahead and sign up for the free account: you get 12 months of certain services for free, and up to $200 to be used on other services. For the duration of this course, I will teach you how to maximize the use of each service and how to minimize the cost per service as you go along. Before we move forward with creating that free account, I'm just going to scroll through the page. You might end up seeing a different page from me; the page changes quite a bit, but the information is still pretty solid. They give you an overview of certain services and the benefits that come with them. You can even run Java on it; you might not be a .NET developer, and there are still a number of things you can do using Azure. It's all very redundant and very stable, for the most part, and you can get access to tools, resources, and formal training right at your fingertips once you get to this page.
Before we get started though, I just want to look through some of the other things. If we take a look at the different regions: Microsoft has data centers all over the world, and you always want to choose the data center that is closest to you, or closest to the region where you'll be offering your services. For me, in the Caribbean/Latin American area, the recommended data center is always East US or East US 2 (I see there's a third coming online; maybe by the time you're doing this course it will be available). My point is that based on where you are, you may want to choose the data center that is best for you. The site lists different countries and will tell you, if you're in India for example, which data centers are available to you. When you're provisioning your services it matters, because you want the data center closest to where the bulk of your customers will be accessing your services and your site from; that helps with speed and reduces the latency between request and response. That is it for data residency. You can also take a look at the pricing calculator and get quotes on certain things: virtual machines, Cosmos DB, database costs. If all of these things are foreign, don't worry; we're going to get into how we provision virtual machines and databases, and how each of them can be maximized, so you don't have to worry if you're not very familiar with all of that right now. You can also look at the different free services you get when you sign up. The general offer is pay-as-you-go, where you pretty much pay for what you use, but upfront you're going to get certain free products: Cosmos DB, the App Service, Azure Functions. We're actually going to be using these free services.
So you probably won't have to use your credit much, and you'll see that there are a bunch of other things you get; you can always explore that. Right now, what I'm interested in doing is getting started for free, so I'm just going to walk you through that quickly. Once you click Start free, you're going to see the popular services reiterated. You only have the $200 credit for 30 days, so you can use it up as much as you want, and then you can always add your credit card and pay as you use. So just click Start free. You can sign in with an existing Microsoft account (anything that's Hotmail, Live, or Outlook), or you can just use your GitHub account; in my situation, I'm going to use my GitHub account, which incidentally is a Gmail account. In the next step you may be required to verify your account, so if you see a verification screen, go ahead and verify. You may also be required to provide your credit card information upfront. Of course, it's all free until you tell them you want to move over to pay-as-you-go pricing, so you don't have to worry about getting charged outside of maybe an authorization transaction to make sure the card is okay. You may or may not have to go through these steps; I've had the experience where I didn't. When we come back in the next lesson, after you've gone through all of this setup and you have access to the portal, we'll review how the portal is laid out and take a general tour. 3. Azure Portal Tour: All right guys, I'm guessing you're at this lesson because you've successfully created your account, you've been able to log into the portal, and you're seeing a screen that looks similar to this. Now, I've been doing things in my portal (removed tiles, put up tiles).
So you may not necessarily see the same thing that I have on my screen, based on the way your dashboard is laid out, but ultimately this is where you land. From here you can look at your dashboard and customize it, go over to see your resources, and see all the services you can access. The different services allude to the different offerings you can take advantage of. You can always see the free services you have access to; with your experimentation, be careful, because you have only so much allocated to you and no more. Once you burn through that free trial allocation, you already have your credit card on file, so if you so desire, you can transition to pay-as-you-go, or pay-as-you-go Dev/Test, which basically gives you the resources at a discounted rate; of course, you shouldn't use that plan for anything that is enterprise level or proprietary. From here you can always go to different things like virtual machines, resource groups, and App Services. You won't have to directly interact with all of these; some of them get created in the background when you're doing something else. For instance, I wouldn't necessarily have to go and create a resource group from scratch, because if I want a virtual machine, it is going to ask: do you have a resource group? If not, it will create one for me; pretty much the same for any of these other services. Cost Management helps you view your invoices and your subscriptions: the accounts that are there, how much cost you've incurred, and when. You can see the pay-as-you-go Dev/Test, which is my active one. You can view all your subscriptions; I've burned through my free trial and through my developer benefits trial, which was way back in the day.
And now I'm on the pay-as-you-go Dev/Test. So you can view all the subscriptions that you have; you can actually have multiple subscriptions under the same account, and you tend to see that in more corporate kinds of accounts, not so much in individual accounts. Now, everything you need to accomplish can be accomplished through this portal. There are many options, wizards, and tools to help you provision a resource, or anything else. But while it might be easier to do something one time through the wizard, it can become tedious in the long run if it's something you have to do repeatedly and you have to log into the portal every single time to accomplish it. That's where scripting comes in, and the great thing about Azure is that it allows for powerful scripting. Everything that you can accomplish through the portal, through this GUI, you can accomplish through Bash or PowerShell scripts, and there are lots of third-party tools that can access the underlying APIs and allow you to carry out certain things. Firstly, we can look at the built-in Cloud Shell that is right here in the portal, which allows you to do either Bash or PowerShell scripting. So let's choose Bash. Once you're there, it will ask which subscription you want to use. You select that subscription, and it will let you know that it needs storage; the storage is for any supporting files (maybe a file you upload, a script, anything like that; it needs somewhere to store it). So it asks: would you allow me to create the storage? Okay, I'll just create the storage, and as you can see, this comes at a small monthly cost, so just be mindful of that. Once I do that, I can click Close. Oh, I'm sorry, I shouldn't have clicked Close. You can reopen Cloud Shell, and once it has started up, you'll see an interface like this. You can always switch between Bash and PowerShell if you need to.
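A first Cloud Shell session typically looks something like the following (the same commands work in a local terminal once the Azure CLI is installed; the subscription name here is a placeholder, not one from the course):

```shell
# Authenticate (in Cloud Shell this is already done; locally it opens a browser)
az login

# See which subscriptions the signed-in account can use
az account list --output table

# Point subsequent commands at the subscription you want billed
az account set --subscription "Pay-As-You-Go Dev/Test"

# Typing the bare command lists every command group available
az
```

On the PowerShell side, the equivalent module is installed with `Install-Module -Name Az`, after which something like `Get-Command -Noun Az*` shows the full flood of cmdlets.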
And you can restart it if it froze or you just want a fresh instance; you can look at the help, change settings, upload and download files as we discussed just now, get a new session, open an editor, or open a web preview. To see what you can do here, you use az, the Azure CLI, with help to see your options. So let's just type in az; once you do, it's going to show us all of the commands we can run. There may be times when you want to stay on your own computer and run certain commands. You'll notice that Cloud Shell has support for PowerShell, and with that in mind, we can actually bring up PowerShell on our own machine. I would advise you run it in administrator mode, and then you can run the install command for the Azure PowerShell module. When you do that and press Enter, it will go through and install the entire suite of Azure cmdlets into your PowerShell. Along the way, you might get prompted about security issues regarding the trustworthiness of the repository, and if you're behind a corporate network, you may want to make sure you have clearance through the firewall; otherwise, it should go pretty well. Now if I run Get-Command with the Az noun and a wildcard and press Enter, the screen is going to get flooded with a whole lot of cmdlets, and from there you can see how many commands you have at your disposal when you use PowerShell. So once again: everything you want to do through the portal, you can script, and then execute that script. I'm not going to get too much into the details of writing scripts in this particular lesson; I'm just letting you know your options. The third option I'm going to look at is Visual Studio Code. Now, Visual Studio Code has wide support for various things.
And Azure is no exception: if you go to Extensions and search for Azure, you're going to see all of the possible extensions that you can install in Visual Studio Code, which will then take advantage of the same set of commands we have been using. You can go through and select the ones you need, whether you want all of them or just certain ones at a particular time. Once you're connected, you can manage all your databases and app services, access the tools, write functions right here and publish them to Azure — everything from Visual Studio Code. So those are the different ways you can interact, and we're going to be mixing and matching; we'll probably use certain methods more than others because, you know, once you have too many options, you get tied up in analysis paralysis and never actually use any of them. It's always good to pick one and stick to it, but be aware of the different options you have; my preference may well be different from yours. When we come back, we'll provision our first virtual machine in Azure; we'll look at the wizard, and we'll take a look at how we would have scripted it otherwise. 4. Create a Virtual Machine with Azure Wizard: Hi guys, welcome back. In this lesson, we're going to be creating our first virtual machine in Azure. To get started, we'll jump over to the menu in the top left corner and click Create a resource. Once you do that, we are led to a page that gives us the most popular templates. We could start off with a Windows Server 2019 Datacenter, an Ubuntu VM, and other resources.
However, if we want to see all of our options, we can always go to See more in Marketplace, or filter based on the category we might be interested in. If I start filtering here, I eventually land on Compute, which allows me to create a virtual machine. It might be that I don't want a Windows Server 2019 Datacenter; I may want a virtual machine with something else, so I can click on Virtual Machine, and from here it allows me to set up what I want. Firstly, I'll choose a subscription: which subscription do I want this virtual machine under? I only have one; like we discussed earlier, you might have multiple (a personal one and a company one, or the company may have several), so select the appropriate one. We also need a resource group. A resource group is pretty much a container for resources that are related — or at least that's the way I would recommend it's used: if you have a number of resources relating to one app, and they're all related and share things, then they should all be in the same resource group. In this situation I have no resource groups, so I could select an existing one or just allow it to create a new one. I'll say Create New, and we'll call this one test VM 1. All right, once it's in the clear (meaning you don't have two resource groups with the same name), you can click OK and move ahead. What's the name of the virtual machine? I'll just call this test VM 1. What's my region? We discussed earlier that you always want to choose a region closest to either you or the customers you will be serving. Based on my geographical location, East US or East US 2 would be best; I'll choose East US 2 because that is a newer data center.
But of course, make that decision based on your context. We can choose availability options: availability zones, or an availability set where we can have multiple virtual machines; that's where fault tolerance comes in. For this exercise, however, I'll just use one virtual machine, no redundancy required. Security: I will leave that on standard. And then the image: what do I want this virtual machine to have as its operating system? What happens is that you won't be buying the license outright; the price of the license is automatically implied in how much you'll be paying for the virtual machine per hour, per month, et cetera. So you won't have to buy a Windows license upfront; you effectively pay rent for it. You can view all images if you're not entirely sure or not seeing the option you want right there. Let's say I wanted Windows Server: we have Windows Server here under Featured, and if you scroll down, you see all the other operating systems, so it's not limited to Windows. Of course we're using .NET Core, which is very much open source and cross-platform, so any one of these that fits your business needs is fair game if you're considering using a virtual machine to host your .NET Core application. In this case, I'm going to stick to Windows, and from here I can choose from all of the possible images they have, from Windows Server 2012 all the way up to 2022. I actually have not used Windows Server 2022 to this point, so I'm going to pick Windows Server 2022; that is the one I want. You'd also notice this Gen 2 label; you saw an option for Gen 1 and Gen 2. If I click Configure VM generation, it will kindly explain the difference between Gen 1 and Gen 2, and it's really just about the support in the underlying hardware that will be virtualized.
You can learn more about that; let's stick with Gen 2. As for the size, this is where the real money-spending comes in. They'll estimate how much it will cost per month if you never switch off the machine. In other words, this D2s_v3 with two virtual CPUs and eight gigs of memory would cost me about $70 per month if I never shut it off — mark you, if I never shut it off. When we spin it up, I'll show you what that means, because in the event that I'm using it during the day but not while I sleep, I can deallocate the machine and I won't be paying for it during that time. However, once the machine is allocated, it is in a running state as far as the infrastructure is concerned, and you will be billed for it. So I'm going to stick with the standard size at some $70 a month, and then we can go to the administrative stuff. I'm going to keep it simple: my username is this, and my password is a nice and easy password to remember. I had to change that password, because the security requirements here are a bit tighter than what I'm used to; I just made it a little longer. Then we can configure the inbound ports. I will definitely need to allow RDP; I can also allow HTTP and HTTPS traffic, but for now I'll just enable RDP, which means I can connect to the machine when it is spun up. I can also bring my own license using the Azure Hybrid Benefit: if I have a license already and don't want to pay the full price of renting one from them, I can let them know I have a license, and they can work out a shared pricing scheme for me. I don't have one, so I'll just move along. Next, we'll go over to the disks. Here we get to choose the operating system disk type: do we want premium solid-state, standard solid-state, or standard HDD?
I'm going to use the standard SSD; it's a bit cheaper. Please note that what you're paying for the virtual machine is separate from what you'll be paying for storage and other networking resources; they all combine. So even though you see some $70 a month, it might end up being a little more, maybe more like 80 or 90 a month, given all the related resources needed for that virtual machine; the disk is no exception. With that done, I'll continue to Networking, and here I need to set up a virtual network. This is where it actually gets into networking a bit, and I don't like networking, admittedly. The virtual network is just going to spin up the underlying infrastructure to, quote unquote, fake a network: it's going to give you the subnetting and the IP address on that virtual network, and then it gives the machine a network interface card, which of course is a virtual card that represents the Ethernet port, the port that connects your machine to the network. Everything is done right here for you. You can also choose load balancing: if you have more than one virtual machine and you want the load-balancing scenario where traffic hits either machine at any given time, you can easily configure that here. Keeping it simple, I'll move on over to Management, which really just covers boot diagnostics, guest OS settings, and identity; I'm not going to change anything here. Here's a very good feature that was introduced in the not-so-distant past: auto-shutdown. You can choose to shut down your machine at X time every single day, and you can also choose to be notified about the shutdown. This feature basically helps you with cost saving; if it's a test machine, like this one is, then it doesn't need to be running 24/7.
So you can always enable auto-shutdown, and in case you forget to deallocate, it will do it for you. You can also enable disaster recovery and other options; I'm not spending too much time on those because they are pretty standard, unless you have very specific needs. You can add extensions, put in some custom data, and put on tags; at this point, these are more rudimentary tags that help you identify the machine. At the end of the day, we want to get to Review + create, where they give us everything about the machine all at once for us to review, with a nice flag to let you know that all your settings will work. They even give the by-the-hour charge, $0.09, roughly 10 cents per hour, for this machine. You can go ahead and review everything, and you can go back. You can also download a template for automation, which is really just a JSON file. This JSON file lines out every single parameter, every single setting that we just selected; we can download this and store it for later, we can do a deployment at any point when we bring it back, and we can do a number of things with this template. So you see that Azure is really supporting automation. Anyhow, let me come out of the template and continue with the wizard: I'll just go ahead and click Create. Now, this may take a few minutes, so in the next lesson when we come back, the VM should have been created along with all of its resources, and then we'll look at how we connect to and interact with it. 5. Connect to Virtual Machine: All right, so we're back, and our virtual machine has been created. At the end of the creation process, I'm sure you saw an option that said Go to Virtual Machine.
And even if you went there, it probably was not ready yet. Ideally, if you navigate back to the dashboard, you'd see that All Resources lists every single resource that had to be created; you can click See more just to make sure you have the full list. These are all the resources that had to be created within the resource group for the test VM. (You could probably have named the resource group better; let's say test-vm-rg or something.) My point is that all of these resources had to get created just to support one virtual machine. If I click on the virtual machine, it brings me to this management page, which you would have seen if you went to the virtual machine right after creation, and then you are able to connect. To connect, you choose RDP, which basically gives you the ability to download an RDP file and connect to the public IP address of your virtual machine. Going back to Overview, if you look through, you'll see the public IP address in the networking area. Now, that is another resource you will be paying for, so by the end of this lesson, I'm going to show you how to deallocate the virtual machine and release the public IP, to minimize the costs you incur when you're not using these resources. So once again, if we go to Connect and say RDP, we can download that RDP file, open it, and it will bring up the RDP client on your machine. When you click Connect, it will prompt you for the credentials for that virtual machine, which means I have to be careful what I type in as the username: the username has to be prefixed with the public IP address and a backslash (or a forward slash, I'm not entirely sure which), followed by the username we had selected, and then we put in the password we had used. The importance of this is that we are on our own network, on our own machine.
The domain on our machine is different from the domain on the virtual machine, so we have to make sure we specify a domain the virtual machine can recognize — which at this point is its own IP address — followed by the username, and then the password. When we click OK, we're prompted to accept the certificate, and then it allows us to connect. And there we go, we're now connected to the virtual machine. All right, so that's pretty much all there is to setting up a virtual machine. These virtual machines give you the opportunity to explore different operating systems and their features, which you probably couldn't do from your current context. But as you can see, it's just another virtual machine that starts off as basic as possible; you can configure it however you want and set up all the applications you need — it's just another machine. And you can reuse that RDP file every time, since it's already saved from the download. Now what I'm going to show you is how we can turn off this machine properly, to make sure it doesn't incur as much cost when we're not using it. So if I close this, that's fine; we'll just close that session. You'll see that you have the opportunity to stop the machine. Note that whether we're connected or disconnected, you can always shut down a Windows machine — or any machine, for that matter — from inside the operating system. But shutting down is different from deallocating: even though you shut down the operating system, the machine itself is still allocated (and still billed) as long as you see that Stop button clickable. So when we want to deallocate the machine, we have to make sure we click Stop, so I'm going to stop this machine now. And then it asks: do you want to reserve the public IP address? Because otherwise there's no guarantee that you'll get it back.
So that's another consideration. If you choose to preserve the public IP address, you will be charged for reserving it. If you choose not to reserve it, then the next time you start the machine you're going to get a different public IP address. At that point you might end up needing a new RDP file every time the IP address changes, because the connecting address changes, right? So you can decide which is more important. If you're actively using and connecting to this machine, you may want to reserve the address, because you may have other services that need to see that public IP; that way you can keep using one RDP file in the long run. Either way, for this lesson I personally have no attachment to that public IP address, so I'll just go ahead and stop the machine without reserving it and click OK. Once the stopping action has completed, you'll notice the Start button becomes clickable; you'd have to click Start in order to reconnect. Now, while we're here, I want to point out that there are many different options. You can adjust the size of the virtual machine on the fly. Maybe after provisioning you realize you need more juice, more space, whatever — you can always upsize it or scale it out. Upsizing is scaling vertically, meaning you're making the machine bigger, whereas scaling horizontally means you're providing more virtual machines within the same set to do more load balancing. So you have different options; once again, your context determines what you do. Under Monitoring, you'll see some key metrics and different things that you can also pin to the dashboard, which means you can see them whenever you log in and go to the dashboard. So you see that?
So like I said, the dashboard will come to life, and everybody's dashboard may end up looking different based on what they want to see. That's really it for how we provision a VM using the wizard and how we connect to it. At this point, I'm really just going to clean up everything that was created for this virtual machine. The easiest way to do that — and this is why it's always a good idea to keep related things together in a resource group — is to say: I'm finished with this virtual machine, and I don't want anything related to it — its network interface, the disk that was created, the network security group, the virtual network, the public IP address that was reserved for me. I don't want any of those things, so I can delete the entire resource group. So let me go back. If you look carefully, most things here are clickable, and once something is clickable, you can jump directly to that resource. By clicking the resource group, I jump directly to it, and it shows me all of the resources listed out. I could select all of them, but the easiest way is to delete the resource group itself. If I delete the resource group, it asks me if I'm sure I want to delete it, because everything here will also be deleted, and to confirm the deletion I need to retype its name, which I do, then click Delete. Now, that procedure may take a little time, so I'm going to jump back over to the dashboard. And mine has completed — or at least it's reporting that it completed successfully. At the end of it, I'm still seeing a few resources here. If I go to the test VM, which I know was in that resource group, you'll see that I can't do anything with it, because technically it no longer exists; it will soon be delisted. Then there's the Network Watcher, which was in a different resource group, so I have to remove that one manually.
Which, once again, is as easy as deleting the resource group that contains the Network Watcher service: typing in the name, making sure it matches, and deleting. All right, so that one is done. And if we click All resources — even though we saw resources a moment ago — I now see just the one storage account that was created for the Cloud Shell storage, and that's fine. So when we come back, we will repeat those steps of creating the virtual machine, this time using the Cloud Shell. 6. Create Virtual Machine using Azure CLI: Hi guys, welcome back. In this lesson we're going to look at creating the VM and the related resources using the command line interface. To get to the command line interface while on the dashboard, you can always click on the Cloud Shell icon, if you recall, and that will bring up the Bash/PowerShell interface. I'm going to switch over to PowerShell, the reason being that more than likely you're using a Windows machine, and I think PowerShell is a bit more intuitive. We'll just wait for it to do its preliminary checks, and then we're in. We can use Get-Module Az -ListAvailable to see all of our available commands; remember that we did that when we set up PowerShell on our own machines. If you didn't set up the Az module on your machine but you want to use PowerShell, well, here's where you can do that. So let's check out how we create our resource group. If you want this bigger, you can always maximize the shell so it fills more of the screen. I'm going to type New-AzResource, and if I press Tab, it will list out the completion options. If you start typing and you're not entirely sure what the command looks like, you just press Tab and it gives you its version of IntelliSense. What I really want is New-AzResourceGroup.
So I can just continue typing and press Tab once that's the only thing it could possibly be, and it will autocomplete. At that point I'm going to add -Name — once again, just press Tab for it to autocomplete — and I'm going to put the value in quotation marks. I'll call it test-vm-cli-rg, so it's clear that this is the CLI one, and the RG suffix makes it clear this is the resource group (which is what I should probably have done with the initial one as well). The next parameter is -Location: I want this resource group to live in the East US 2 region. Now, I'm not necessarily saying you need to commit all of this to memory. Sometimes you'll have to go back and look up what the command is or what the valid values are — like the proper location short codes — you'll have to check on those things. But eventually, practice makes permanent, and once you have the script, it's repeatable, so you don't have to keep typing it in every single time. Once I press Enter, it goes ahead and creates that resource group in East US 2 for me, and everything I didn't provide is defaulted. Now, the next thing I'm going to do is create a credentials object that I'll use to give the virtual machine the user it should be set up with. All you have to do is type $cred — which is basically just a variable; I'm declaring a variable, $cred or $credential, whatever you want to call it — equal to Get-Credential. Once I do that, it prompts me for the credential, and I'm going to use the same credentials I used last time, with a strong password in there. And then my credential object is created for me. So now I can move on to creating the new virtual machine.
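Written out cleanly, the two steps described above look roughly like this; the resource group name and location are the example values used in this lesson:

```powershell
# Create the resource group that will hold all the CLI-built VM's resources
New-AzResourceGroup -Name "test-vm-cli-rg" -Location "EastUS2"

# Interactively prompt for the VM's admin username and password
$cred = Get-Credential
```

Get-Credential pops up a prompt and returns a PSCredential object, which is why we can stash it in a variable and hand it to the VM creation command later.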
So I can now type New-Az and press Tab — but it offers to display all 1,229 possibilities, and I'm not willing to do that, so I'll just finish it myself: New-AzVM. Inside of that you have -ResourceGroupName, and the resource group name we know because we just created it: test-vm-cli-rg. Here's the completed statement, so let's just go through the parameters from the top. Resource group name: I gave it the name, and I tried to append "CLI" to every name so that we're sure which is which. Then the name of the machine itself, the location (East US 2 for me), the virtual network name (test-vnet-cli), and the subnet name. So it gives us the ability to name every single element that we know needs to get created, and just by naming them here, it will actually scaffold those resources for us, because it knows it needs them for the virtual machine. In other situations you may already have them existing — you'd of course want the new virtual machine on an existing VNet and subnet, et cetera — and then you can use the existing resource names instead of names of resources that don't yet exist. Continuing down, we get to the public IP address name, which is test-pip-cli, and then the credential object is passed in for the -Credential parameter — that's the credential we just created up top. Once I do all of that and press Enter, it lets me know that no size was explicitly stated. Once again, it will use as many parameters as you put in and assume the rest, so it assumes I wanted the default standard size, which is what we created last time through the wizard. So we can leave it for a bit and let it create what it needs to. And that didn't take too long at all, so now I can close this command shell.
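Put together, the New-AzVM call walked through above looks roughly like this. All the resource names are the example names from this lesson (the security group name is my guess at the naming pattern), and $cred is the credential object created earlier:

```powershell
# One command scaffolds the VM plus its VNet, subnet, NSG, NIC, and public IP
New-AzVM `
    -ResourceGroupName   "test-vm-cli-rg" `
    -Name                "test-vm-cli" `
    -Location            "EastUS2" `
    -VirtualNetworkName  "test-vnet-cli" `
    -SubnetName          "test-subnet-cli" `
    -SecurityGroupName   "test-nsg-cli" `
    -PublicIpAddressName "test-pip-cli" `
    -Credential          $cred
# No -Size given, so Azure falls back to a default VM size
```

Any of these names that match an existing resource will reuse it instead of creating a new one, which is how you'd attach the VM to an existing VNet.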
And when I go back to my dashboard, I now see all the resources that I would otherwise have clicked through the wizard for. I just wrote maybe four or five lines, and we can see all of those resources created for us. Of course, once again, the more information you put into the script, the more targeted and refined the decisions will be when the script is executed; whatever details we fail to put in, Azure is just going to assume, and put in the defaults that it knows about. We already looked at how to connect to the virtual machine and everything, so I won't go into that again. Notice we got a different public IP: if you committed the previous one to memory, you'll see that this one is different. But let's jump back over to our CLI and look at some other commands that could be helpful. If I wanted to stop the currently running virtual machine via script, I could type Stop-AzVM and then provide the name of the virtual machine that I wish to stop: Stop-AzVM -Name test-vm-cli. It would then ask me to confirm the resource group name. Instead, I'm going to press Ctrl+C and show you that I could have done it all in one command: I state that the name of the machine is test-vm-cli and the resource group name is test-vm-cli-rg. Then it says the machine will be stopped — do you want to continue? Yes, I do, and I press Enter. Once it has successfully stopped, you'll see the prompt come back. And starting it is just as easy: you use Start-AzVM instead of Stop, with the same parameters. So everything you can do in the wizard, you can do at the command prompt. Next, I'm going to tear down this entire resource group. I type Remove-AzResource, and if I press Tab twice, I'll see the options, so I Tab-complete Remove-AzResourceGroup. This one requires a name, and then of course you give it the name.
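The lifecycle commands just described, written out as a script — same example names as before; -Force skips the confirmation prompts:

```powershell
# Stop (deallocate) the VM without being prompted to confirm
Stop-AzVM -Name "test-vm-cli" -ResourceGroupName "test-vm-cli-rg" -Force

# Start it back up (it may come back with a different public IP)
Start-AzVM -Name "test-vm-cli" -ResourceGroupName "test-vm-cli-rg"

# Tear down the whole resource group and everything in it; prints True on success
Remove-AzResourceGroup -Name "test-vm-cli-rg" -Force
```

Deallocating via Stop-AzVM is what actually stops the billing for compute, which is why it matters more than just shutting down the guest OS.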
And then we finish off that statement with -Force. That means: I don't care what this resource group is doing, I don't care what resources might be locked up, I want to delete it, so I'm going to go ahead and remove it by force. At the end of that operation it returns True, which I think is a clear enough indication that it completed. So if I go back to my dashboard and look at all my resources, I notice that I only have the Network Watcher resource group left. To remove that one, it's the same procedure, same principle. You'll also notice that last time, when I had lingering resources, they only disappeared after I closed the browser and came back; this time they're all gone immediately. Not that it's really that big of a deal, but it shows you how much quicker the script can be than using the web interface. So if I want, I can go ahead and remove the Network Watcher as well, which I will do, and in similar fashion, once that completes, we see True. After refreshing, I see that I'm back to just my storage account needed for my PowerShell/CLI shell for Azure. So that's really it for how scripts help you do certain tasks more quickly and repeat them more efficiently in the long run. 7. Section Review: So let us review this section. We looked at how to get started with Azure: how to set up an account, how to log in, and all of those rudimentary things. Once we're logged in, how do we start creating resources? We looked at the fact that we have the user interface available to us. Everybody gets a dashboard, and what they see on the dashboard is up to them. I probably didn't mention that you can actually have multiple dashboards that show multiple metrics on different resources — it's a very highly customizable space. Otherwise, we looked at how you can create new resources and new services.
So we actually went through setting up a virtual machine and looked at the basic settings that are needed for it. We saw that some of the settings we put in orchestrated other creations in the background — storage, virtual networks, our public IP address, et cetera. We also reviewed the fact that everything we did through the wizard, while easy enough to do one time, can be tedious if it's something we have to do time and time again. So we looked at the fact that we could write a script, and with the script, in just a few lines of code, spin up a VM with all the required resources. We could stop and start it, and we could remove or create resource groups — we can do everything we want using just the scripts. At the end of this, I hope you've gotten an appreciation for some of the things we can do with Azure. When we come back in the next section, we're going to start looking at how we can use Azure to interact with our application. 8. Section Overview: All right guys, welcome back. In this section we're going to be looking at App Services. When we talk about App Services, we're talking about the underlying infrastructure that Azure provides to us developers who don't want to worry about creating a VM, maintaining it, updating it, all of those things, just to host a web application. There are situations where you do need your own virtual machine — that's not negotiable. But there are also situations where you have the application and you really just want to deploy it. From that point of view, you really just want to get an App Service plan and then host a web app on that plan. You'll notice that they have different templates here. The App Service plan is capable of housing — hosting, rather — many web apps. So it's almost like spinning up a virtual machine, just without the overhead of the maintenance and the upkeep.
And then the web app itself is just a container for a particular web application. Of course, web apps often need databases, so they also have a template where you can spin up the web app and the database together as needed. We'll be looking at all of these kinds of deployment scenarios: we'll look at how we do this with the command line, how we do it manually from the interface, we'll integrate with Git, and we'll also look at how we can push directly from Visual Studio. So stay tuned — we have a nice action-packed section ahead. 9. Setup Azure Web App: All right guys, so in this lesson we're going to be looking at how we create an App Service, or a web app, through Microsoft Azure. You'll also notice that my dashboard looks a bit different; I just customized it a bit earlier. The point is that you can have multiple dashboards. If you want a new dashboard, you can always come here, choose a blank dashboard, and customize it; once you save that new dashboard, you can flip between them at will. So maybe you have a dashboard for your systems versus the company's systems versus this particular system, et cetera. You can customize the widgets and what you want to display on each dashboard and simply flip between them. So here I have the one that you would have known, and then this new one, where I've put on a couple of tiles; you can put on charts, different metrics, et cetera, right there on the dashboard. That's up to you. Our objective here is to create a web app, so what we'll do is jump over to Create a resource, and from here we should see the template for Web App. If you don't see that template, you can always type "web app" into the search, and you'll get all of the options for potential web app templates. So we can just start off with the Web App.
We can click it, which brings us to the creation screen — or at least an informational screen — that lets us know what kinds of plans are available and what is supported. We can do Node.js, Python, PHP; it's not only .NET Framework and Core. We can see it has wide support for other stacks, so even though you may be a .NET Core developer doing this course, it's quite feasible for you to be a PHP or Node.js developer and still take advantage of these features. Nonetheless, we can go ahead and click Create, and no matter which route you took to get to Create, you will see a page like this. The first thing you want to do is create the resource group — we already know how to create a new resource group — and I'll call mine azure-web-app-test. They also let you know that if you need a database, you can spin up both at the same time; for now, I'm only going to focus on the web app itself. So what do we want to name the web app? You'll want to give it a unique name. If I typed azure-web-app, it would tell me I can't use that because it's not available — somebody else is using it — since the name is going to live at .azurewebsites.net. This name is not scoped to your resource group; it's global, so you have to make sure it's unique, and you can do whatever you need to do to make it so. Then they ask whether we want to publish via code or a Docker container; I'll leave it on Code. We can select our runtime: am I deploying a Java application, Node, PHP, et cetera, or .NET? Am I doing .NET 5, .NET Core 3.1, or .NET 6? Right now I'm going to focus on .NET 5, because .NET 6 is still in preview at the time of this course recording — though you can experiment with .NET 6. Up until recently, the Linux support was kind of limited, but at this point
It is basically on par with Windows in terms of its support for dotnet technologies. So I'll leave you to experiment, spin upon those windows, spin upon with Linux and see what different thereby be in performance versus any configuration. And we'll just stick to Windows though. My region is going to be East US 2 as we know. And then down here, they're letting us know that they have to spin up, spin up a plan. So remember that a plant can host multiple web apps. So web app can't exist without a plan. So I'm going ahead with the web up first in the process, I can see what kind of plan do I want. So they already gave it a name. I'm not going to change that name. But then they're asking me what kind of size do I want that web up to be? So I can click Change says that up a service plan, sorry, not give up that service plan. What size do I want it to be? So from here, I can look at the different pricing models. I can look at the deafness, I can look at production. And for very advanced circumstances like maybe you want to actually have a hybrid between your on-premises servers and the Azure Web App Service plan. Then you can look at the isolated networking scale, where it would have to sit on a VNet that sitting on a VPN that's connected. So a Global Gateway that's connected to your company. So that's a very special circumstance. However, I'm just lonely developer who wants to deploy to Azure. So I'm going to look at the deafness slash production plans. And then from DevTest you can see the different plans and how much they will cost, right? So once again, if you're on the free, you can go ahead and use that. You see the cheapest one or the most affordable rather would be the four to $1 per month estimated. And of course, if you need to change between them in need on the clique. Right. So and so blue bars are owned it with that sorry, that border I guess then it says the selected one. 
I will, however, jump over to Dev/Test, which offers shared infrastructure with 60 minutes per day of free compute. If you know about web hosting in general, you can have isolated hosting versus shared hosting, which is the most common and the most affordable — obviously there will be performance gaps, and they let you know what is included. Pretty much, you are still spinning up space on a virtual machine, and they're telling you what will be allocated to that space. Once again, you don't have to worry about the virtual machine, or setting up IIS, or any of the underlying infrastructure; you just need to pick the service plan that meets your business needs. Since we're in demo mode, I could go with the shared infrastructure, but I'll go with the B1, which gives me an SSL certificate, allows me to scale up to three instances, and has some computing power. Now, that one does cost some money, but that's fine. So I'll hit Apply, and then we can go over to Deployment. In this option, they're asking if you want to allow GitHub to deploy to this web app container — remember, I did mention that we can integrate with Git. GitHub has GitHub Actions, which is a CI/CD pipeline that allows you to deploy. If you've done my Blazor course, you'd have seen where we use GitHub Actions to deploy to the web app here on Azure. I'm not going to get too deep into that in this particular course — I just want you to be familiar with the different Azure technologies — but that's what GitHub Actions is really for: whenever a new commit is made in your repository, if your code is in GitHub, you choose the repository and it will automatically deploy here. So I'm going to enable that, and then we can authorize our GitHub account as needed.
And once I've signed in, I can choose my organization and the repository that I want to use. For this course, I was thinking of using one of my previous applications — my leave management app from my Complete ASP.NET Core and Entity Framework Core course — so I'll just go ahead and choose that one. And yes, we want to use the master branch, so I can hit Next to go to Monitoring. Once again, these are all steps you can also set up later, or you can just go to Review + create if you're satisfied. We can enable Application Insights, which I do believe comes with an additional cost, but that's fine — we're in demo mode, so that's no problem; I'll enable it. We can go to Tags if we want, but I'll skip ahead to Review + create. From here we can review and make sure everything is as we want it to be, then finalize by clicking Create. We have to give that some time to finish up, and once it's done, we can jump over to Go to resource, which gives us a management dashboard similar to what we had with the virtual machine. This is called a blade: any time you're looking at these kinds of screens and you switch between these tabs, they're really blades showing different bits of information pertinent to both the resource and the topic being highlighted. So let's jump over and view the URL. That is our website — and this is a default page. If you're familiar with IIS, once you set up IIS and browse to localhost, you see a default IIS page; well, this is your default App Service page. Now, what's happening is that it is currently deploying the code. If I jump over to GitHub and look at the repo that I linked it to, the Actions tab shows me that the workflow is actually in progress. It's been in progress for a while; if I click in, I'll see it going through — it already did a successful build, and now it is deploying.
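Incidentally, the plan and web app the wizard just created can also be scripted with the Az PowerShell module. This is a hedged sketch, not the wizard's exact behavior — the plan name is hypothetical, and the web app name must still be globally unique:

```powershell
# App Service plan (B1-class Basic tier, one small Windows worker)
New-AzAppServicePlan `
    -ResourceGroupName "azure-web-app-test" `
    -Name "azure-web-app-plan" `
    -Location "EastUS2" `
    -Tier "Basic" -NumberofWorkers 1 -WorkerSize "Small"

# The web app that lives on that plan; this name becomes <name>.azurewebsites.net
New-AzWebApp `
    -ResourceGroupName "azure-web-app-test" `
    -Name "azure-web-app-unique-name" `
    -Location "EastUS2" `
    -AppServicePlan "azure-web-app-plan"
```

This mirrors the VM lesson: the wizard is convenient once, but the script version is what you'd keep for repeatable deployments.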
And after a little wait, it completes and tells us where it has been deployed to. Now, when I jump back over and refresh, I'm getting this HTTP 500 error. I think I know the cause: we deployed an app that is database dependent without the database, or any indication of where the database might be, so it's just hitting an error and can't go forward. When we come back, I'm going to leave it in this error state and show you how you can troubleshoot these kinds of errors. For now, we have seen how we can deploy; if you had deployed an app that was not database dependent, it would be showing normally. We can see the metrics here, and we can even pin this to the dashboard. Like I said, we can pin whatever we need to the dashboards, so I'll pin it to that new dashboard I just created. If I jump back over to the dashboard, I'm now seeing the metrics for that particular app. That's how you can have multiple dashboards for multiple apps at any given time. So when we come back, like I said, we're going to assess why our App Service is not working — or at least I'll show you the tools that you can use to troubleshoot when your app is not working. 10. Explore Azure Web App Tools and Services: All right, so the last time we were here, we deployed our first web app to our App Service here in Azure, and it is not working — we got this HTTP 500 error, even though GitHub successfully deployed it. So we know that something is amiss and we need to fix it. Here we're going to see what tools are available to us in the Azure portal to help us troubleshoot these kinds of errors. Because if it's on your machine, it's easy to drop into your debugger; even if you deployed to IIS, it's easy to see logs there. But here, you're probably wondering: where do I see my files? Where are the logs being written to, et cetera?
So, you know, I'm hoping you at least explored a bit and looked through. You can see the Activity log, which shows you everything that's happening with your service. In the different areas, you'll see that you can change some configuration: you can set up new connection strings and new variables for configuring that particular app. If you have certain keys that need to exist — environment variables or, as I said, connection strings — you can set all of that up inside the Configuration section, because the Azure web app is going to use this as its context and look there for those environment variables. You can also set up authentication, but that's a different issue that we're not dealing with right now. However, to investigate the issue at hand, what we need to do is go down to Advanced Tools. If you look at Development Tools, you'll see a Console, you have Advanced Tools, and there's even an App Service Editor, which is in preview. I'm going to jump over to Advanced Tools, because that's the one I'm more familiar with, but in the meanwhile, let me see what exactly this editor allows me to do. And would you look at that — Azure is just something that gets better and better every day. This is the first time I'm seeing something like this, and it seems to be something like Visual Studio Code running in your browser that allows you to change files. If I go to change one of my CSS files, there I go — I can edit it, and it has auto save enabled (I just saw that option earlier). It has Git integration, I can search, I can edit my files right here, open a console, et cetera. So this is very powerful. Honestly, this is the first time I'm seeing it, and as you can see, it's in preview — a brand new feature. However, let's not get away from our objective, which is to look at what's happening with our application. When you go to Advanced Tools and click Go, you jump over to this tool called Kudu.
You'll see that its address is the name of the web app followed by .scm.azurewebsites.net, whereas the website itself doesn't have the .scm part. From here, you can actually get to see logs, you can see different things about the deployment, and everything relating to the deployed app is available to you here. You can click through and look at the different variables that exist in the environment; here are the app settings, with all of those fixed variables I mentioned earlier that we can change. As you scroll through, you're going to see different things that may or may not interest you, but it's good to know what information is available to you. We want to jump over to the Debug console, and I'm going to go to CMD. From here, it actually shows me all of the files that have been deployed to this container space. If I go to site/wwwroot, I'm able to see all of the files that were deployed through the whole deployment process from GitHub. Typically speaking, with any .NET Core application — .NET 5, .NET 6 — if you run the executable file, it will try to spin up an instance, and if there are errors, it will tell you what errors are there, which would explain an HTTP 500 or similar error. So let's try that experiment here in the command prompt. I'm going to copy the name of the executable, paste it into the command prompt, and press Enter. It may take some time, but here it is. It tried to execute, and you can see here that it's giving some errors and suggesting a diagnostic. Before all that red repetition, it was spitting out the error at the top: it says SqlException, a network-related or instance-specific error — it couldn't find the SQL Server. That's pretty much what it's saying; in other words, the red section is really just a reiteration of what occurred up top.
So like I said, this is a database-dependent app that was deployed without the database. It does not know where to get the database from, so it just gave up in the process, and that's fine. I deliberately did that because I wanted to show you how the tools here are geared towards helping you have the smoothest deployment and the smoothest bit of troubleshooting when things don't go exactly right — that is the reality of working with technology. So you can look through Kudu; I'm just clicking through so you can see. You can even do a restart of the site here, and you can add other plug-ins as you may need to — there are tons of things you can do right here. From this management area, there are lots of things you can do. You can set up alerts — you can set up an alert to say, when my traffic is at a certain point, let me know. You can even set up auto-scaling on some service plans, or at least ones where you can scale up or scale out. So remember, scaling up means making the machine more powerful, versus scaling out, which means making more machines available. So you can say, when there's traffic at a certain time of day, scale down because it's not peak; when it's peak, scale up or scale out, et cetera. There are a number of things here — I'll leave you to experiment with them, because to sit here and go through every single one of them, this course would never end. And most of them are not free; well, you already have the free trial or credit, or you've already accepted that this is going to cost some amount of money — some things that you put in will cost some money. But it's good to know what features are available to you and how to use them. So when we come back, we're going to create the Azure SQL database that we know our leave management system needs, then we'll make the necessary code changes, check in, let it redeploy, and fix that whole database connection issue. 11.
Setup Azure SQL Database: All right guys, so at this point what we want to do is deploy our website and have it connect to our Azure SQL database. I am using my leave management application from my .NET Core and Entity Framework course. I'm assuming at this point that you're either using it, because you copied it for the previous examples, or you're using your own app. It doesn't really matter to me, as long as you have an app and you can follow along with the activities. The startup for this application is quite straightforward: it just connects to the DB context through the default connection, which is just looking at LocalDB right now. So what we want to do — actually, let's go directly: I right-clicked the project and clicked Publish. And what you'll notice is that because it's already hooked up to the workflow options in GitHub, it already has a publish profile. I can create a new one, or I can just reuse this publish profile at will. Now what I'm going to do is jump down to Service Dependencies, where it also lists the database. So it has a dependency on the connection, DefaultConnection. I'm going to click Configure, and that allows me to select a dependency. I'm going to say Azure SQL Database, and I have none. It's showing our subscription; at this point it might ask you to sign in — go ahead and do that. If it allows you to go forward, just go ahead and indicate that you want to create a new database. For the database, I'll just leave the defaults that are there; I'll choose the resource group for the web app, and then the database server. So just like how, for the web app, we needed an App Service plan, which acted like a logical server, for the database we need a logical database server. So I'm going to click New.
And that's going to bring up another wizard asking what I want the server to be called, what location, and the administrator credentials — keeping it simple while adhering to the password rules, of course. Clearly it's having some problems with the name; it says this name is not available, so I'll just append a dash and something, which tends to be available for me. After a little validation process, we click OK, and now it lets us know what it will create for us. We click Create, and it will revalidate and then start creating the resources it needs. So you've seen the portal is one way, the CLI is another way, and of course we can come directly to Visual Studio, which is our development tool where most of the magic is happening anyway — it can act as our one-stop shop. Once that is done, we get confirmation that everything is set up. We can click Next, which brings us to a screen asking for our credentials. You see here that the connection string name is going to remain DefaultConnection — that's what we have in our code. Then I can just give it the username and password, and it will fill those into the connection string, as we see. This connection string shows us the data source (the server name), the catalog name, the user ID that was just provided, and whatever password was just provided. Then we can hit Next, or at this point we can just say Finish. It will then set up some configurations, and it will store that information in the app secrets, not the app settings. If you've done my security course, you'll know the difference between app settings and app secrets — it won't even be visible in GitHub after we check in. So when we close, all of those things are configured.
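Under the hood, Visual Studio stores that connection string in the .NET user-secrets store. A rough command-line equivalent — the connection string values here are placeholders, and you run this from the project directory:

```shell
# Adds a UserSecretsId to the .csproj if one isn't there yet:
dotnet user-secrets init

# Secrets live in your user profile, outside the repo, so nothing lands in GitHub:
dotnet user-secrets set "ConnectionStrings:DefaultConnection" \
  "Server=tcp:my-server.database.windows.net,1433;Initial Catalog=my-db;User ID=myadmin;Password=<password>"

# Verify what is stored:
dotnet user-secrets list
```

At runtime, the configuration system merges user secrets over appsettings.json in development, so the code keeps reading `DefaultConnection` with no changes.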
And just to verify, we can always jump back over to our portal and see if our SQL server is set up. When I jump back over to my dashboard, go to SQL databases, and click through, I'm now seeing my SQL database. So that is — well, this is the database, rather; the database sits on a server, so let me just open that server in a different tab so we can see what the server administration looks like. You can set the firewall settings, among a number of things. So in terms of accessing it: yes, you can access it through the portal; yes, you can run queries through the portal, but that might not be what you want to do. You can also connect with Visual Studio, and having the Azure workload installed in your Visual Studio would help — I'll show you at the end of this section how to get that integrated into Visual Studio, and we'll look at the plug-ins you can get for Visual Studio Code. You can get Azure Data Studio, or, if you're old school like me, you can just connect via SQL Server Management Studio. Now the procedure for that is very similar to connecting to any other kind of database on any other kind of server. We just need the server name — let me just go back and grab it; here is the server name while we're looking at the database, and we copy that to the clipboard. And we're the ones who set up the user just now, so we put in that one user and that password, and we connect. Then we're going to be hit with these firewall rules. It's letting us know that the server is not configured to allow access to just anybody, so it wants us to sign in to Azure, and we can do that. Once we're authenticated, it will create a new rule for us that we can see; we accept that rule, and then it allows us to connect to the database. So right here is where you want to start thinking about security.
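The firewall rule that SSMS creates for you can also be added explicitly. A sketch with placeholder names and an example IP — in practice you would use your office or home address, and keep the range as narrow as possible:

```shell
# Allow a single known IP through the logical server's firewall:
az sql server firewall-rule create \
  --resource-group my-rg \
  --server my-sql-server \
  --name AllowHomeOffice \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10

# List the rules currently in effect:
az sql server firewall-rule list \
  --resource-group my-rg \
  --server my-sql-server \
  --output table
```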
Because when you're working from home or from your office, then it's fine — it's probably just one or a few known IP addresses that you want to authorize to connect to your database. However, if you, or a developer on your team, go to a coffee shop or one of those shared work areas, you don't necessarily want to allow that kind of public IP to access your database, because then if somebody gets the credentials, they can get in without any problem from those spots. So you want to be careful and gauge it — that's one of the challenges, maybe, of working with Azure SQL databases over the internet. But from this point, I'm connected, I can expand, I can see the database, I can look at the tables, and if I want, I can add more security, like creating new SQL users. Anything you would do on a regular SQL Server — 90 percent of it, at least — is available through Azure SQL. Once again, Azure SQL abstracts away the infrastructural concerns like updating the database and backing up; all of those things are built into the portal services that you can set up. You can scale the database up and down if you need to, if you're running out of resources; you can do backups, failover, set up Azure Active Directory integration, spawn new databases — once again there's a server, so it's not limited to one database. Now, all of that is just showing you how Azure SQL databases work. If you wanted to do it manually, you could have easily done the same steps: go to Azure SQL databases, create a new one with the wizard, or do it through the CLI; you can use an existing server or create a new one — all of those options are available to you. Now we have to retest our app and see if it works this time. And there we go.
So now we're seeing our app, because now our app sees the database. If we jump back over to our web app, just to see what happened or if anything changed: when I jump into Configuration, you're going to see that we now have a connection string available, right? Remember I said that it would create that connection string, DefaultConnection. If we click it, we can see the connection string value, and we can keep it for later use if we need to — we don't really need to use it. But my point is that all of this happened without compromising our dev environment. If anybody accesses your GitHub — or even if you access my GitHub repo at this point — there's nothing there to indicate any of the connection strings to my now-deployed Azure service. So that is how you can deploy your web application, how you can connect it to a SQL Server, and the different things you can do to troubleshoot when it's not working and get it up and running at the end of the day. 12. Azure Web App Monitoring and Logging: Alright, so we're still looking at the web app, and what I want to transition into is more of the management side. So far we've looked at the deployment, what might go wrong and how to fix it, and how to get the database involved. Now we need to look at the longevity of the app: how we monitor it and make sure that everything is going okay. We have a number of tools available to us. One, you'd notice that from the Overview you have a bunch of charts. This one is showing us how many 500 errors we've experienced — this is for the one time that we experienced the 500 error; not since we fixed the deployment have we experienced it again, right? You can see the amount of data that has been transferred with each request, data in and data out, and you can see the number of requests. And once again, all of these can be pinned to a dashboard, so I can just pin this, and then from the dashboard I can view that chart.
So as soon as I log in, I'm going to see, okay, this is what's happening with my web app. You'll also notice that I pinned the web app itself. I can filter to the past hour, the past minute, et cetera — all of those charts are available to me. Now if we scroll down the side panel to Monitoring, we see that we have another option, Metrics. This allows me to build my own kind of chart, so I can see what's the average memory working set, I can see what the connections are. I can actually create multiple charts, and I can set the type of chart I want — area, bar, scatter, grid. I can have multiple metrics; as needed, I just click Add metric each time to add another. So if I want Data In, and maybe the average of the data in, then I add a metric again — as many as you need, going across; you just click that tile and it becomes editable. If I want maybe the number of 500 errors — those are things that will be very important for us to watch, along with the number of 400 errors — I can add that, et cetera. Now once I've completed this chart, or at least I think this chart is good enough for me, I'm going to roll back some of those metrics, because the more I add, the more skewed this graph becomes — if you see, connections is blue and CPU time is orange, and they kind of offset how it's displayed. At the end of the day, once you're satisfied with the chart you've created, you can pin it to the dashboard. Once you've pinned it, that chart is visible from the dashboard — whereas the moment you navigate away from Metrics and come back, the chart is gone. So once you build your custom chart, pin it to the dashboard, and you'll have access to it from that moment on. So when I refresh, I now have that chart, right? And remember that you can always customize this dashboard.
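The same metric data behind those charts can be pulled from the CLI. A sketch — the subscription ID, resource group, and app name in the resource ID are placeholders:

```shell
# Query per-minute request counts and 5xx errors for a web app:
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Web/sites/my-web-app" \
  --metric "Requests" "Http5xx" \
  --interval PT1M \
  --output table
```

This is handy for scripting checks that the portal charts only show interactively.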
So if you don't want the charts so far away from each other, you can always just hit Edit and drag them where you want them to be; based on your screen size, they may appear a certain way or not for you. Alright, so that's how the dashboard starts to come to life. Now let's talk about scaling — or rather, both kinds of scaling. I mentioned it earlier but didn't go into it. Remember that we can scale up, which means improving the machine that is being used, or we can scale out, which means choosing how many instances we want. So you can do a manual scale, meaning you want, say, three instances of this kind of machine running — you could have an instance here and an instance in another region, et cetera, or three instances here — and then it will use one of those load-balancing algorithms to determine which one the traffic should go to at any given time. You can do custom scaling, where you set up rules to govern when the scaling should occur and on what condition: maybe when there's too much CPU usage, spin up another instance to share the load, or at a certain time of day, just auto-scale. You have complete flexibility over all of that. So right now I could easily manually scale, say I want three additional instances, and click Save — of course my cost would go up relative to the amount of scaling I'm putting in. Scaling up, once again, means I want a new instance with more memory. And the good thing is that most of the time — I'm going to say most of the time — it will not affect your runtime. This will happen and the application will be agnostic to it; users more than likely will not experience any performance issues, and it will just move ahead as normal. Now let us look at Insights. So we have another option called Insights, and sometimes you have so many things you don't know where to find what you're looking for — you can just go ahead and search, and it will filter to Application Insights, right?
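Those custom scaling rules can also be scripted. A sketch of rule-based autoscale on an App Service plan — the plan, group, and rule names are placeholders, and the thresholds are illustrative:

```shell
# Create an autoscale setting on the App Service plan (1 to 3 instances):
az monitor autoscale create \
  --resource-group my-rg \
  --resource my-plan \
  --resource-type Microsoft.Web/serverfarms \
  --name my-autoscale \
  --min-count 1 --max-count 3 --count 1

# Scale out by one instance when average CPU stays above 70% for 5 minutes:
az monitor autoscale rule create \
  --resource-group my-rg \
  --autoscale-name my-autoscale \
  --condition "CpuPercentage > 70 avg 5m" \
  --scale out 1
```

A matching scale-in rule (e.g. below 30%) is usually added alongside, so the plan shrinks back once the peak passes.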
So here I can enable that, and this is basically just collecting data about whatever is happening. I could create a new resource, but of course I want it running against the current resource that we're using. If you made any changes, you apply them; I didn't make any big change. Either way, from this point I can always go and say View Application Insights data, which is going to give me even more metrics — failed requests, server response times, server requests, availability — a number of key metrics that you can use to figure out how your application is doing under certain pressure and in certain situations. Now, after you have your Insights created, this is probably where you want to start thinking about alerts. So we go back to Monitoring and we look at Alerts. From here you see that you have no alerts fired; however, I have at least one rule. You can create a new rule, and then you can say: under this circumstance, please let me know when something happens of a certain nature. So from here I can say what signal I want — let's say I wanted to track memory. I can choose available memory; we have the chart showing available memory over that period of time. And then I can say: if the available memory is, let's say, less than — and then I set the threshold, maybe 105 thousand bytes, whatever; this is measured in bytes. You want to make sure you set a sensible enough value so you won't be getting alerts on false positives. Once you set that up, you just click Done, and it will now start tracking that. You see, each thing comes with a little cost, right? This condition comes with an additional 10 cents per month, just to track that.
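The same kind of alert rule can be created from the CLI. A sketch — the names and IDs are placeholders, the threshold is illustrative, and metric names vary by resource type (here I use the web app's memory working set metric rather than the portal's "available memory" signal):

```shell
# Fire an alert when the average memory working set exceeds ~500 MB:
az monitor metrics alert create \
  --resource-group my-rg \
  --name high-memory-alert \
  --scopes "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Web/sites/my-web-app" \
  --condition "avg AverageMemoryWorkingSet > 500000000" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Average memory working set over ~500 MB for 5 minutes"
```

An action group (email, SMS, webhook) would normally be attached so the alert actually notifies someone.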
In the grand scheme of things, though, if you're working in very intensive environments and you need to make sure that your application's uptime is solid, all of these things will come into play. Next stop, let's look at App Service logs. If I go to Monitoring once again, App Service logs, we're going to see here that we have the opportunity to turn on logging. First it needs the ASP.NET Core site extension to enable logging; I can just click there and let it install. Once that has completed successfully, I can turn on logging. You can choose blob storage or you can choose the file system, and what I'm going to do is use the file system. Then we say what minimum level we want added to the log: do I want it to be verbose, do I only want information upwards, or warning upwards? I was going to choose warning, but actually, for demo purposes, let's make it verbose, because I'm not really anticipating any warnings or errors — really, it's well built. I'll just put it on verbose so that we can actually see the logs happening, right? So when I go over to Log stream, it will show me a console displaying what's being logged by the ASP.NET Core application. So I jump over, already logged into my app, and go to Home. If you've ever run a .NET Core application — which I'm sure you have by now — you know that for everything you do, there's a lot being written to the console in Visual Studio. Look at this: just by clicking through these different pages, with it on verbose, it's showing me every single thing that's happening — information, debug, trace, everything. So that means when you need to debug an error that has occurred and you're writing out the log for the error, you can always jump over to the Log stream, reproduce your action, and track what is happening.
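Enabling the logs and tailing the stream can both be done from the CLI as well. A sketch with placeholder names:

```shell
# Turn on filesystem application logging at the verbose level:
az webapp log config \
  --resource-group my-rg \
  --name my-web-app \
  --application-logging filesystem \
  --level verbose

# Tail the live log stream (the CLI equivalent of the portal's Log stream blade):
az webapp log tail --resource-group my-rg --name my-web-app
```

With the tail running, reproduce the failing action in the browser and watch the exception scroll past in real time.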
So that is the power of the logging and monitoring tools — we also looked at Kudu and how it can help you troubleshoot what's happening with your app. When we come back, I'm just going to show you what the Azure tooling looks like for Visual Studio. 13. External Azure Management Tools: All right guys. So we're just going to be reviewing some of the tools that are available to help us manage our Azure resources. If you're running Visual Studio 2019, what I have here is the installer, and if I click Modify, it allows me to pick and choose the workloads that I want. For Azure-related tools, you want to make sure you have the Azure development workload selected, so go ahead and modify your Visual Studio if you don't already have it. That gives you the option to get the Cloud Explorer, and once you're signed in, you can see all of your App Service plans, your Insights, everything relating to your subscription from here. You can see availability, you can open resources in the portal, you can open your databases in the SQL Server Object Explorer — you can do a lot from just this tool. Now, another thing to note is that Cloud Explorer has been retired from Visual Studio 2022, so unfortunately there is no similar feature in Visual Studio 2022. Instead, they encourage you to go ahead and get the standalone tools, the documentation for which I'm looking at right here. You can get the Azure Storage Explorer, which allows you to interact with your blob storage, your different storage accounts, and the different tables that you may have there. So you can go ahead and download that; we'll be using it later on. And as I indicated in the earlier parts of this course, Visual Studio Code also comes with great plugins that allow it to interact with Azure. You can just search Azure and install the plugins that you need.
I've installed some already, so it's installing the ones that aren't already installed from me doing them one by one. When you get the Azure Tools pack, you're getting about twelve plugins at one time, or you can split it up into exactly the ones that you want. Once you've done that, if you click the Azure icon that comes up in the sidebar, it allows you to sign in, which I'll do. And once you've authenticated, you can manage your virtual machines, you can look at your App Services, you can look at your databases — all of that from Visual Studio Code. Of course, Visual Studio Code is one of the best editors on the market right now, so you can also develop what you need to develop — maybe Azure Functions, which we'll look at in a few. So all of those tools are available to you to interact with your Azure resources and help you be as efficient as possible. 14. Section Review: All right, so to wrap up this section, we have taken a look at quite a few things. We looked at setting up an action in GitHub that does continuous deployment to our new web app service that we spun up in the portal. We took a look at how we can use Visual Studio to get in on the action: it will automatically identify the workflow that exists, while allowing us to create a database connection and do that deployment alongside each check-in. So each time you change your code and check in, it will deploy via the action and try to keep the database up to date as well. We also looked at how to spin up databases, whether we want to do it manually or through the app deployment. We looked at different tools and services that can help us to be as efficient as possible in monitoring our web app. And to top it all off, I just deleted the resource group with all of these assets — you saw that all of those things start incurring costs that add to the total as you go along.
Of course, you always want to make sure that you're checking your cost dashboard, your cost and billing options and alerts, whatever it is, to make sure that you are within your means. And ultimately, Azure is fun so far. So when we come back, we're going to look at different aspects of Azure Functions and serverless offerings. 15. Section Overview: Hi guys. In this section we're going to be taking a look at Azure databases with a bit more scrutiny. We've already looked at Azure SQL as the hosted model, relative to us hosting our .NET Core app. In this section, we're going to look at how to spin up those instances by themselves, and we'll also take a look at how Cosmos DB works. As you'll see, there are quite a few ways that we can host a database here on Azure, and we will be discussing all of those as required. So stick around — we have plenty to look at. 16. Azure SQL - Hosting Options: Welcome back. So we're going to be looking at the Azure SQL hosting models. If we want a database in Azure, we need only go over to the services and we'll see SQL databases; otherwise you can just go to Create a resource. I'm going to go the Create a resource route, because there are certain things I want to bring to your attention. If I click on Databases from here, you will see all of the options for databases. What should jump out at you is that you have what they call managed instances; you have SQL Database, Cosmos DB, PostgreSQL, MySQL. All of these are different options. SQL here means a Microsoft SQL Server model; however, between those first two, managed means that it is more like a PaaS or SaaS version of SQL Server, meaning they are hosting it for you and allowing you to make use of that hosted, managed instance. Managed instance means that they take care of all the backups and most of the actual hardware stuff, the infrastructural stuff.
You're usually charged based on the model, whether you choose the DTU model or the vCore model — we'll get into that in a few. The alternative to that, if you want a SQL Server, would be to spin up a machine that comes already outfitted with SQL Server. Here you see SQL Server 2017 running on Windows Server 2016; this means that you can actually get a virtual machine already outfitted and optimized for SQL Server related activities. Of course, the difference is that you're getting a virtual machine — we already saw what it takes to spin up a virtual machine — so in this situation you would actually be paying for the license of the SQL Server alongside the price of the virtual machine, should you choose this route. When I click on that option, you see that it's the same wizard we went through to create a virtual machine, except the image here, like I said, is already outfitted so that when it is provisioned, it will have SQL Server 2017 on it. The other difference is that it's a virtual machine that you have to manage: you have to deal with the updates, you have to deal with the security — everything is up to you. They're just giving you a virtual machine with the software installed; everything else after that is up to you, infrastructure and software wise. So based on your objectives, you will choose one and not the other, and sometimes you might even end up choosing both. I've seen lots of software where they have SQL Server running in a VM, but maybe they've provisioned, say, Redis or Cosmos DB as supporting databases. So there are no limitations; you can mix and match — of course, you want to be very targeted and very strategic with your selections. In this lesson, however, we're going to look at spinning up an Azure SQL database, and we'll also look at what tools we have available on our machine that allow us to connect to and interact with that SQL database.
So I'm just going to go ahead and click SQL Database. This brings me to a screen asking me to confirm the resource group and the database name, and then we go down to select a server. Remember that when we were provisioning a database for our application, we actually had to put in credentials and all of those things for it to create a server, and then the database — it's the same principle here. I have to create a server; I don't have one in existence, so I'll just go ahead, and you can go ahead and fill this out: the server name, the location you want, SQL authentication, and your admin login and password. All right, once you've done all of that, you've provided enough information for it to know which server it should create. When we jump back over to our page, we need to confirm whether we're using an elastic pool or not. Elastic pools provide a simple, cost-effective solution for managing the performance of multiple databases within a fixed budget. That is one way of saying that when you provision an elastic pool, you can have multiple databases running on the same server, sharing the same resources, but the price won't fluctuate much. Now, the Azure SQL hosting model lets us choose between two purchasing models for the underlying infrastructure: the DTU-based purchasing model versus the vCore purchasing model. With the vCore purchasing model, it's like we're provisioning a particular machine with a particular resource allocation, and then we know that that is how much we are paying — regardless of the usage, whether it's up or down, that is what we're paying to host these databases. That is what the vCore purchasing model is for. This is the General Purpose tier, and we're looking at two vCores and up to 32 gigabytes of allocated space; of course, we can put in more redundancy if we want.
Now, the DTU model has to do with actual usage. There are estimations, sure, but it's based on what you're reading and writing — DTU is short for Database Transaction Unit, and it has to do with read/write traffic, so higher-traffic websites would, of course, need more DTUs than lower-traffic websites. Based on your application, you can try to figure out how many resources the database will be using or not, and provision the option that is most cost-effective for your organization — of course, giving it enough resources to handle whatever operations it needs to. For this simple demo, I'm going to choose Basic. This is the lowest tier; it only gives me two gigabytes of space and five DTUs. Well, that's fine — I don't really need much more than that, so I'm just going to go ahead and apply. Now we'll go over to Networking next. On the networking tab, we can say we want a public endpoint, where anybody can access it if we allow it, no access, or a private endpoint — I'll leave it on no access. And remember we discussed that this is where the challenge of security can come in, because you want to allow certain IPs to connect, but not every IP, so you have to make sure you factor that into your risk assessment. Then you have other features you can enable, like Microsoft Defender and other security features — we won't get into that. Under additional settings, we can either restore a backup or spin up a sample database; I'll just do a sample, so we'll use AdventureWorks in this situation. Then we can put in our tags, then Review and Create, so I'll just jump ahead and create. Now, while that is being created, I want to introduce you to another tool: Azure Data Studio. Azure Data Studio is a standalone database management application similar to SQL Server Management Studio. The only difference here is that — well, not the only difference.
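The wizard steps above map neatly onto two CLI commands. A sketch — server, group, and credential values are placeholders:

```shell
# Create the logical server (the thing that holds credentials and firewall rules):
az sql server create \
  --resource-group my-rg \
  --name my-sql-server \
  --location eastus \
  --admin-user myadmin \
  --admin-password '<strong-password>'

# Create a Basic-tier database (5 DTUs, 2 GB) seeded with the AdventureWorksLT sample:
az sql db create \
  --resource-group my-rg \
  --server my-sql-server \
  --name my-db \
  --service-objective Basic \
  --sample-name AdventureWorksLT
```

The `--service-objective` flag is where the DTU-versus-vCore choice surfaces: `Basic`, `S0`, etc. are DTU tiers, while names like `GP_Gen5_2` select vCore configurations.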
A major difference is that this one is cross-platform, so it's available for Windows, Mac, and Linux — and, my favorite feature, it also supports dark mode. So once our database is up and running, we will see how we use Azure Data Studio to interface with it. Now, I've already downloaded and installed Azure Data Studio, and our database is up and running and ready to be connected. So I'm just going to jump over — there we go, there's the resource — and connect using Azure Data Studio. First I need the server name. Bringing up Data Studio, I can go ahead and look for connections. Like I said, this is a really cool tool: it's cross-platform, and it supports different features you can add on as you need to — there are extensions, which I'll get to later on. It's a pretty cool tool in my book; it kind of has the Visual Studio Code look and feel. All right, so I can just click New Connection, and then it's going to ask me: what's the server; what type of authentication — SQL, so we put in our credentials; and then I can select the database I want. After that, it's going to tell me that I am not allowed to connect. We saw that earlier with SQL Server Management Studio — it's the same principle. I cannot connect; I need to add an account. Once I've finished that authentication, I just click OK, it gives me that little addition to the firewall, and then I can select which database I want. So this is the database that I created, provisioned with that AdventureWorks demo, and I'll just go ahead and connect. You don't necessarily have to choose a database, because we're just connecting to the server, right? So at this point, here's the database, and here are all the tables. I can right-click, Select Top 1000, and you see I'm already in dark mode here — this is what it looks like once it runs a query. This is your editor screen. You know, I like this tool.
There's nothing really fancy about it beyond what SQL Server Management Studio can do. But for me, I like dark mode, it does that, and it feels a little lighter. It has Git integration — source control integration — and you can add extensions to it to help with your different tasks and the different other resources you want to interact with. So I think it's a really good tool to have in your arsenal as an Azure developer. You can also have many server connections, and you can add and remove them at will. So at any point I can always expand this as needed. I can also have multiple subscriptions and flip between them if I need to. So I think this is a really good tool, but ultimately that's it for provisioning a SQL Server database — or an Azure SQL database, rather. There are of course more things to consider. And oh, look at that: the graph is automatically updating, letting me know that I'm using up some of my compute allocation, right? There's the one query; it showed up in this format, and if I do a write, it will also show up. So those are the DTUs. It's showing me how much I'm using, and this was within the past hour. The things you want to consider would be tuning and intelligent performance, right, or alerts, because soon we will have to monitor — or it would be in our best interest to monitor — the web app. It would be good to also monitor the database and what it's doing. You can add more security, put in more auditing, and turn on Microsoft Defender to protect against attacks. There are a number of things you can do if you need to. Once you've provisioned the database and you need a connection string, it's right there; you just go ahead and get it, and it's available in all these other provider formats. And if you just want to run a quick query from within the portal, here's the Query Editor, which I can just connect to using my credentials, and voila.
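As a quick illustration of using that connection string from a .NET app — this is a minimal sketch, not from the course code; the server, database, credentials, and the `SalesLT.Customer` table (from the AdventureWorksLT sample) are placeholders, and it assumes the Microsoft.Data.SqlClient NuGet package:

```csharp
using System;
using Microsoft.Data.SqlClient; // NuGet: Microsoft.Data.SqlClient

// Connection string copied from the portal's "Connection strings" blade
// (ADO.NET format); all values below are placeholders.
var connectionString =
    "Server=tcp:myserver.database.windows.net,1433;" +
    "Initial Catalog=mydatabase;User ID=myuser;Password=<password>;" +
    "Encrypt=True;TrustServerCertificate=False;";

await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

// The same kind of query we ran from Azure Data Studio / the Query Editor
await using var command = new SqlCommand(
    "SELECT TOP 10 CompanyName FROM SalesLT.Customer", connection);
await using var reader = await command.ExecuteReaderAsync();
while (await reader.ReadAsync())
{
    Console.WriteLine(reader["CompanyName"]);
}
```

Remember that the client's IP must be allowed through the server firewall, just as we saw when connecting with Data Studio.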
So you see that's also in preview mode — I've been seeing this one for a while, but it's still in preview mode — and we can run our queries here: SELECT * FROM Customer. Let's see what that would look like. And you see it has IntelliSense, so it's not the dumbest tool around. It's a nice, quick and easy way to run your query right here while you're in the portal. So I think Azure SQL is a nice and mature tool. One thing to note: there are certain things that this hosted model does not support that the regular VM-hosted model, or a regularly installed SQL Server, will support. In my experience, I once had to build an application that needed distributed transactions, and we had scoped it to be built with the Azure SQL Database provider. But then when we got into the meat of the application, we realized that there were just certain things Azure SQL, at least at that time, did not support, and we had to roll back that design and instead use the VM-provisioned hosting model. So do your research. That's what I'll say: do your research and make sure that you have the features that you need for your application before you go ahead and make the decision. All in all, though, this is a really good tool. It allows you to scale up, scale down, and set scaling at different times. If you're using the elastic pool and you have multiple databases running, you can see that certain apps need more resources at certain times of the day, and you can manage all of that from here, all while automating backups, and you have plenty of redundancy. It is a very good hosting model for your database. Now when we come back, we'll take a look at Cosmos DB. That's another database type; it's more like a document database. And that's the one where we're going to get our hands dirty with development, just to show how that one works. 17. Azure Cosmos DB: In this lesson, we're going to be creating a database using Azure Cosmos DB. So let's get started.
Let's jump over and we can go to new services — or Create a Resource, rather. From there we can go down to the Databases category, where we will see Azure Cosmos DB. So let's click on that one. Now, in case you're wondering what Cosmos DB is: it is a fully managed NoSQL database service for building scalable, high-performance applications. You've probably heard of MongoDB; that's probably the standard for NoSQL databases, and even though there are other engines that support similar features, MongoDB has carved out a name for itself. Well, Cosmos DB is Azure's version, or Azure's implementation, of a MongoDB-like service — a document database in the cloud. So Cosmos DB can be provisioned with a family of APIs, with Core (SQL) as the recommended model. The Core hosting model provides native APIs for documents, and it gives us a querying language similar to what we would probably be used to as SQL developers. It's available for .NET, JavaScript, Python, and Java integration. So that's the recommended one. However, if we look at the other options, we have the API for MongoDB, which can just sit on top of an existing MongoDB workload so you can migrate over. We have Cassandra — Cassandra would be from Apache. We have Azure Tables, which is one of the storage forms from Azure; if you wrote an application using Azure Tables and you want to migrate to Cosmos DB, you can use that one. You have Gremlin, which is more for graphs, and PostgreSQL. So those are all of your options. Once again, your context determines what you do. However, for this lesson, I'm just going to go ahead and hit Create on the Core option, and I'll fill this out. So I still have my resource group from the Azure SQL database, I put in my account name, and my location is East US or East US 2 — because not every data center supports every service, right?
So there might be a situation where you don't necessarily see your preferred location when provisioning a particular service; just make sure you choose the best location relative to where you are. And here you see that it comes with a free tier. So if, while you're doing this course, you still have the free allocation, then you're good to go; you can just go ahead and use that. Now, we have two different capacity modes: provisioned throughput versus serverless. If you want to fully understand what those are, you can just go ahead and click "Learn more about capacity modes," and they give you a detailed comparison of what provisioned versus serverless modes have to offer. Of course, the billing will be different, but the underlying functionality remains fairly the same. Let's go ahead and leave it at provisioned throughput and move over to the next option. This one is talking about redundancy; in this situation, I'm not going to enable that redundancy. For networking, I'll leave this open, and there's the backup policy, where you can set up how you want backups to happen and what kind of backup should happen. We have encryption — I'll leave that one as is — and then we can Review and Create when we're done. So let's just go ahead and get that provisioned. Once that is finished and you go to the resource, you will be led to this Quickstart page, where you need to create a container. So let's go ahead and create this Items container. Then you see that you have two options after that. You can download and run your .NET app — so you can get a sample app that lets you see what that container looks like and how you can interact with it — and you can also open the Data Explorer, and this is what it looks like: an in-browser tool that allows you to interact with the assets provisioned by Cosmos DB.
But I'm talking about the Storage Explorer, which is also cross-platform, and it will allow you to access your blobs, your Table Storage, your Cosmos DB. So you have, like I said, different tools for different situations. So let us look at the Storage Explorer. After you've downloaded and installed it, you see that you have all of your disks, storage accounts, everything listed here. Under Cosmos DB, if I expand that, it's going to show me the container that was created on my behalf. So this is a to-do list. I'm not entirely sure I want a to-do list, at least not for the demo that's coming up. I can opt to jump to the portal, or I can create a new database, which is what a container lives in. So I'll do that. In this situation, I'll be using one of my existing applications, which was built in Blazor: a car rental management app. What I want to do is track the different branches of the organization, and I want to use Cosmos DB to do that. All right, so here I can create a collection, and I'm going to call it "locations," and then get that in there. So in the locations collection I'm going to have documents. If you're familiar with NoSQL databases, you know that you don't really have tables; instead, you have documents. A document is basically going to be a combination of a key, a value, and the actual data. I'm not going to go any further than this right now. Like I said, we have the application that we're going to be interacting with, and we'll use that to write to our locations and fill the documents in. And the cool thing about NoSQL databases is that they are not as strict as relational databases, because you don't necessarily have to follow the same rules. Of course it's in your best interest to, but if the model expands afterwards, then the document is going to expand with it. It just adjusts, and it's really stored as JSON as opposed to hard, structured data. So you're going to see how it works.
So when we come back, we'll look at the Blazor app, and then we'll start wiring it up to talk to our Cosmos DB. 18. Creating a Document in Cosmos DB: All right, so for our development activity we'll be using my car rental management system, which was built with Blazor WebAssembly in my course Modern Web Development with Blazor and .NET 5. So you can just go over to my GitHub account and grab the car rental management repo if you so desire — you can use your own app if you want, but just so you can follow along with me, you can go ahead and get the code and open it with Visual Studio. At the bottom of the readme you'll see how to configure it for local use: you change the connection string, run the database update, and it will automatically scaffold an admin user and the default password for you. So once you've done that, you can continue with me. All right. So this is the one that's up and running; when you press Run, you should be seeing something like this. You can always go ahead and register, or just log in with the admin user. Not sure what's going on with the styling here; I swear it worked yesterday. Alright, so we can go ahead and look through, and I would have already seeded some values into the database. But that's not why we're here. What we're here to do is introduce another component that looks at the different locations for this particular car rental agency. Once again, this is strictly a demo; I'm just doing this so we can see how Cosmos DB can be interacted with from a .NET Core application. So let's start our modifications. The first thing I'm going to do in the server project is go over to NuGet, and we're going to look for the Microsoft.Azure.Cosmos package. Once you have located it, you can go ahead and install it. Then we jump over to our Startup.cs file in the same server project, and we're going to be adding the Cosmos DB client as a singleton in our startup services. So I'm going to add these two lines of code.
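Those two lines, roughly as they go into ConfigureServices — a minimal sketch; the configuration key name "CosmosConnection" is the one we'll add to appsettings in a moment:

```csharp
using Microsoft.Azure.Cosmos; // from the Microsoft.Azure.Cosmos NuGet package

public void ConfigureServices(IServiceCollection services)
{
    // ... existing registrations ...

    // Create one CosmosClient for the whole app and register it as a
    // singleton; the client is thread-safe and meant to be shared.
    var documentClient = new CosmosClient(
        Configuration.GetConnectionString("CosmosConnection"));
    services.AddSingleton(documentClient);
}
```

Registering it as a singleton means any controller can simply take a CosmosClient in its constructor, which is exactly what we'll do next.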
var documentClient is equal to a new CosmosClient, and for that you're going to have to add a using — there it is, Microsoft.Azure.Cosmos. So you go ahead and Ctrl+. to add the using, and then Configuration.GetConnectionString("CosmosConnection"). After that, we just register the singleton for that document client. Now, of course, we are introducing a new connection here, so I'm going to have to go over to the appsettings and create a new block for the connection values. So this is going to be CosmosConnection, and for that connection string, we're going to jump over to our Storage Explorer. We're going to click on the Cosmos DB account that we have provisioned — note, not the collection area, but the actual account that we provisioned — and then we look for the primary connection string. So you can just unhide that, take that connection string, and paste it in as the CosmosConnection value. Here, if you scrutinize it a bit, you'll see AccountEndpoint= followed by the URL, with an AccountKey and so on. So if you want to keep that a secret — of course, you'd probably want to use application settings in Azure or manage user secrets — but for now, I'm not getting that complicated. Let's just keep it simple; I will just place it right there in the appsettings. Now, just to get through this example, I'm going to add this DocumentModels folder and introduce a new file called Branches. That's being put in the shared project of this solution. So we have Branches, with a Guid Id — not an integer like you're probably used to, and you'll see why in a few — and then we have the Name, the State, and the Country. So whatever it is you want this model to have, you can go ahead and put it in. So that should be Branch, actually, not Branches; apologies. And then I will just refactor the file name also. There we go. So let us get into the meat of it. So I'm going to spin up a new controller.
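The model itself, roughly as described — a sketch, with the namespace being a placeholder for wherever your shared project puts it:

```csharp
using System;

namespace CarRentalManagement.Shared.DocumentModels
{
    // Document model for a branch location stored in Cosmos DB.
    // Note the Guid Id, rather than the integer identity key you'd
    // typically use with a relational database.
    public class Branch
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
        public string State { get; set; }
        public string Country { get; set; }
    }
}
```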
And remember that this is an API project, right? So we want an API controller. I'm just going to use the standard read/write options and go ahead and add that. And then we have the controller; I called it BranchesController. So let's just start our setup. Firstly, we have our constructor, and we're going to have to do a few injections. So: an IMemoryCache, which I'll just call cache, and then we also want the CosmosClient, which I'll just call client. Of course we're injecting these as parameters; clear up the usings, and then we'll go ahead and create and assign the fields. So now they're injected. Then I have a private field of type Container — you may need to add another using — and once you have that, we're initializing it to be equal to client.GetContainer. And I want to see the overload for this. So here it expects the string databaseId and the string containerId. So what is the database ID? If we jump over to our Storage Explorer, our database ID will be the name that we gave it here, which would be "branches," right? So that is our database ID. And then it wants the container ID. So if we expand that — what did we call the collection? That is "locations." So that would be "locations." And there we go. So right here, we already have the client connecting to the Cosmos DB, courtesy of the startup file, and then for this particular controller, we're initializing a container reference that can communicate with this database collection. Now let us look at what the POST would look like, or the Create. If you're using a regular MVC app, you know that it would be in the Add or Create POST method; if you're using an API like I'm using, you'd be doing the same thing. Let me get rid of this unused using statement. There we go. All right, so inside of the POST, let us retrofit this to accept only a Branch, and we'll just call that parameter branch.
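The controller setup so far, sketched out — the database and container IDs must match what was created in the portal ("branches" and "locations" here); field names are my own:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.Cosmos;
using Microsoft.Extensions.Caching.Memory;

[Route("api/[controller]")]
[ApiController]
public class BranchesController : ControllerBase
{
    private readonly IMemoryCache _cache;
    private readonly CosmosClient _client;

    // Handle to the "locations" collection inside the "branches" database;
    // GetContainer(databaseId, containerId) doesn't hit the network, it
    // just builds a reference we can query against.
    private readonly Container _documentContainer;

    public BranchesController(IMemoryCache cache, CosmosClient client)
    {
        _cache = cache;
        _client = client;
        _documentContainer = client.GetContainer("branches", "locations");
    }
}
```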
So that's the object we're expecting when we get that POST request. And then we can do validation if we want — the normal stuff. Once again, I'm not really teaching MVC or anything; I just want to demonstrate connecting with the Cosmos client. So the first thing that we need to do when we're creating a new record is to give it an ID. So I have to say branch — or whatever you called it — .Id is equal to... and remember that we're using Guid, so Guid.NewGuid(). And then we'll have to say await documentContainer.CreateItemAsync, right there — we'll also, of course, have to change the method to be asynchronous — and then we give it branch. Now, there's a construct called a partition key, which I am going to introduce. So we can extend this: we already gave it the T item — it's generic, so branch is what we're passing in — but then there's the partition key, and you see that it takes other parameters as well. The partition key pretty much serves as an index within the documents. So if we're doing a search, it's like a high-speed lookup that Cosmos can use to partition the search results more quickly. So in this situation, I'm going to say new PartitionKey, with maybe branch... and I think in our case it would make sense to split it up either by the state or by the country. I'll do the country. And that's it; we just need to close that off. There we go. And then I'm going to convert this into an asynchronous IActionResult — so async Task of IActionResult. There we go. Then I'll return NoContent, an IActionResult. Alright, so that's pretty much it for creating — well, for us writing the code that would facilitate creating something in our Cosmos DB. To test this quickly, without all the overhead of creating a whole new component — we'll do that later on — what I'm going to do is use Postman and test this endpoint. So we can just go ahead and run.
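Put together, the POST action looks roughly like this — a sketch of the steps just described:

```csharp
// POST api/branches
[HttpPost]
public async Task<IActionResult> Post([FromBody] Branch branch)
{
    // Cosmos DB documents need an id; we generate one here rather
    // than relying on an identity column like a relational database.
    branch.Id = Guid.NewGuid();

    // The partition key value must come from the document itself;
    // we've chosen Country as our partition key.
    await _documentContainer.CreateItemAsync(
        branch, new PartitionKey(branch.Country));

    return NoContent();
}
```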
You can use Ctrl+F5 to run it without the debugger, and you can open up Postman or any similar API testing tool that you might have at your disposal. If you don't have any, you can go ahead and get Postman. Once you have it installed, you can follow these steps. What we want to do is connect to our endpoint — the address that would have come up when you ran the application in the browser. So that's this address, but we are going to append /api/branches, because we are able to get to that API controller by going to /api/ plus the name of the controller, which is branches. So while there, we're going to change the request type to POST, and then we're just going to create this JSON body underneath the Body area — of course, set it to raw. We're just sending over the fields that we know we need; we don't need the ID, because we are generating the ID once the data goes in. And then we can hit Send. And I'm getting a very nasty error here: I got back a 500, and it's saying that I sent over a bad request. The reason is that the input content is invalid because the required property "id" is missing. So it's saying id is missing, even though I know for sure the ID is there. But I suspect it has to do with the casing. So what I'm going to do here is modify this and give it an attribute of JsonProperty — and you may need to include a using, or install the package Newtonsoft.Json, for that. What I suspect is happening is that it's being very strict with the JSON keys: Id with a capital I is different from id with a lowercase i, and JSON usually has a bias towards lowercase first characters. And I suspect that if it has that problem with id, it's going to have it with country, since I'm saying that that is the partition key, so I'm going to put that attribute on Country as well. So we can just go ahead and build, and then we can start that again.
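The adjusted model, with the attributes mapping the C# property names to the lowercase JSON keys Cosmos expects — a sketch:

```csharp
using System;
using Newtonsoft.Json; // Newtonsoft.Json NuGet package

public class Branch
{
    // Cosmos DB requires a lowercase "id" property on every document.
    [JsonProperty("id")]
    public Guid Id { get; set; }

    public string Name { get; set; }

    public string State { get; set; }

    // Serialized as lowercase "country" so the stored value lines up
    // with the /country partition key path on the collection.
    [JsonProperty("country")]
    public string Country { get; set; }
}
```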
So after making those changes and trying again, I'm getting a new error. It's saying the partition key extracted from the document does not match the one specified in the header. I'm glad we're seeing these errors, so we can figure out where we might have gone wrong. And I think that at this point I probably went wrong when provisioning the collection itself. So what I'm going to do is remove the collection called locations — from here you can see it's just deleted — and then I'll just go ahead and create a new collection, again calling it locations. But this time, I need to say Unlimited here; I had it on Fixed, and now I'm saying Unlimited. So like I said, partition keys are there as indexes to help with huge read datasets. When you want a quick search but you have a huge dataset, you'd want to put in a partition key, or a few partition keys, so that Cosmos can go through the indexes faster. So I guess because we said Fixed, it didn't feel the need to ask us for our partition key. Now that we've said Unlimited, we are saying, well, we don't know how much space we want, so they're saying it's in your best interest to put in a partition key. So here I'm going to specify that partition key: country. So let's click OK and wait for it to recreate that collection, and now when we try again — look at that. Now we get a 204, which really means No Content, which means that our creation was successful. So if I jump back over to our Storage Explorer and look at the documents — I'll just close that tab and reopen it — and there we go. There is our new document created: we have the ID that was generated, we have the name, the state, and the country. And you see everything here is lowercase — see the name and state. So like I said, you can make the decision to just make everything lowercase if you wish.
At that point you saw how to set the JSON property name so it differs from the C# property name. But ultimately, that's how you can write information to Cosmos DB. What a lot of people do in hybrid situations is keep their main data store, but also replicate data to Cosmos DB, or some NoSQL store, and then use the NoSQL part for the reads. That way, your main data store is not also being pressured to produce queries at high speed. So when we come back, we will look at how we can actually pull the data from Cosmos DB. 19. Reading Documents from Cosmos DB: So we're continuing our interaction with Cosmos DB and our .NET Core application. Now, while I'm using a Blazor application to demonstrate this, what I'm doing is not unique to Blazor. This could easily have been a standalone API interacting with Cosmos DB; it could have been an MVC or Razor Pages app, but the code is pretty much the same. We need to register the service, register the client, and then we access the data, whether we're reading or writing. Basically, that's it, right? Of course, it can get more complicated based on the situation, but at the very least this is what we need to do. So now we're going to be retrofitting the API — well, the API and the Blazor app, two in one in this lesson — to read from Cosmos DB. But before we get into the reading, I want to show you the changes that are needed in the Blazor app to get the form up and running so you can call the API. And if you are familiar with this project, you know the basics needed. So the first thing I did was add a new endpoint to the static Endpoints file in the client application. I also had to go ahead and put in a new line for the imports, so it knew where to find that Branch model. So wherever you put it — if you didn't put it in the domain project, you can put it here — just make sure it's imported so that your components can see it.
I also just made a quick component. I didn't bother to put it in a subfolder; sorry, I didn't want us to put things in a folder under Pages, so I just put Branches right here at the root of the Pages folder — once again, just for demo purposes. Now, this new component is quite simple. We have the page — we get to it by going to /branches — we inject our HTTP repository client (once again, if you're not familiar with the project as a whole, no problem; you can just go ahead and do what I have done and inject that client), and then we have a heading that says Branches. So to create a new component, you just go to Pages, Add, Razor Component, give it the name, and then you get a blank file, pretty much, and you can start filling it out with this stuff. So we have a little if statement here that says if IsSuccess, then we display an alert saying "Branch created," and that is a notification that everything went okay with the communication with the API. And in the markup, we have our EditForm. The EditForm has the model, called branch, and OnValidSubmit we call HandleCreate. So once again, if you're not very familiar with Blazor, I'm not here to teach Blazor, but if you want to get more familiar with the basics, check out the course Modern Web Development with Blazor and .NET 5. So in this form, we have our very simple form — I'm not even doing any validation; once again, just demo purposes. We have our form-group divs, and we have our InputText components, which are bound to the model: branch.Name, branch.State, and branch.Country, because those are the three points that we're accepting user input for. And then we have our button at the end that says Create Branch. Underneath, I have some simple code to be executed — let me expand this. There we go. And then I have the model being declared there: private Branch branch. So once again, you need to have that import, right, so that it can pick up that Branch type.
And then I just have a boolean called success, and HandleCreate is my method: once the button is clicked, it will call this method, and we go ahead and hit that branches endpoint, passing in that object, and we just set success to true. So once again, I'm not doing any fancy error or exception handling and so on; I'm just doing something very simple to say: this is what needs to happen when we click Create Branch. So the expectation is that you fill out the form, you press the button, and then it calls the POST, which we already validated is working. So once you've done that, I'm just going to do a quick test. And remember to always log in — if you don't log in, you may end up getting errors from the API that you cannot really explain. Once you are in, we can just navigate. I didn't even bother to put it on the navbar, so for branches, we just go to /branches. Why do I know it's /branches? That's because I told it the page is found at /branches. So once there, we see that we have the Branches heading and we have the form, and I'm just going to fill out this one. I'm going to call it Blazor, "Blazor Test" is the state, and the country is Jamaica. And then I'll click Create Branch. It basically just did our round trip to the API, came back, and said: well, I saw no exceptions, so it was created successfully. All right. Of course it gets a bit more complicated than that, but once again, just demo purposes. So when I bring up my Storage Explorer and refresh, we'll now see another entry. If I click on that entry — here we go. That's the ID, Blazor, Blazor Test, Jamaica. Right? So it's that simple. We did it one time; all we had to do was wire up a form that sends over the data, and then the code that we've already wired up works. So now that we have this piece, what I'm going to do is read back from the API, and we're going to put the list underneath the form. So let's jump back over to the API; I have already written the code in the GET.
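Putting those pieces together, the component looks roughly like this — a sketch that assumes a generic HTTP repository client and a static Endpoints class with a Create method and a BranchesEndpoint field; the exact names in the course repo may differ:

```razor
@page "/branches"
@inject IHttpRepository<Branch> client

<h3>Branches</h3>

@if (success)
{
    <div class="alert alert-success">Branch created</div>
}

<EditForm Model="branch" OnValidSubmit="HandleCreate">
    <div class="form-group">
        <InputText @bind-Value="branch.Name" class="form-control" />
    </div>
    <div class="form-group">
        <InputText @bind-Value="branch.State" class="form-control" />
    </div>
    <div class="form-group">
        <InputText @bind-Value="branch.Country" class="form-control" />
    </div>
    <button type="submit" class="btn btn-primary">Create Branch</button>
</EditForm>

@code {
    private Branch branch = new Branch();
    private bool success;

    // No validation or error handling -- demo purposes only.
    private async Task HandleCreate()
    {
        await client.Create(Endpoints.BranchesEndpoint, branch);
        success = true;
    }
}
```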
So the first refactor is to change it from IEnumerable of string to Task of IActionResult. Then I have my first object, which is of type List of Branch. Then var branchDocuments is equal to our documentContainer.GetItemQueryIterator, and that should be relative to the type Branch. Then we say: while branchDocuments.HasMoreResults, we want to get the branch pages. So var branchPage is equal to await branchDocuments.ReadNextAsync(), and then we just AddRange that onto the list. All right, and for that I will actually want ToList, which of course means an inclusion for System.Linq. So after we've done all of that, I want to return Ok with the list of branches, and that's it. That's all it takes to read from Cosmos DB. And back in our component, we're going to make the following changes. I'm just going to put in a horizontal rule underneath the form, and then I'm going to put in an if statement — note the Razor syntax, the @ sign. And then I'm just calling it branchLocations; that's going to be the name of the list that will be housing the locations coming from the API. So if branchLocations is equivalent to null, then we just have a div that says "Loading branches." So while it makes that call, we'll see that notification. Otherwise, when that changes — when that call is completed and we now have data — then we will start seeing the list. So we have a table with name, state, and country, and for each of them, I'm just spitting out a row with the name, the state, and the country. In the code, I've added another property — another field, rather — called branchLocations, which is of type List of Branch. And then I've put in an override for the method OnInitializedAsync. So an easy way to start with that is just to write "override," and once you do that, your IntelliSense will suggest what you can override; you just select the one you want to override, and that's it.
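That GET action, sketched out as described:

```csharp
// GET api/branches
[HttpGet]
public async Task<IActionResult> Get()
{
    var branches = new List<Branch>();

    // Query the container; with no filter or partition key this
    // iterates over every document in the collection.
    var branchDocuments = _documentContainer.GetItemQueryIterator<Branch>();

    while (branchDocuments.HasMoreResults)
    {
        // Results come back one page at a time.
        var branchPage = await branchDocuments.ReadNextAsync();
        branches.AddRange(branchPage.ToList());
    }

    return Ok(branches);
}
```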
All right, so we're overriding OnInitializedAsync, and inside of that we're initializing branchLocations to be the GetAll from the branches endpoint. So this is just going to go ahead and call that GET method that we just created in the API, and then it will come back, fill that list, and give us the table. Now, I also modified HandleCreate, where I put in an await OnInitializedAsync() after the creation. So after everything is done, we want to re-call this, which will basically just refresh the data. So let's take a look. Now, a quick reminder: every time you load this up, make sure that you're authenticated. Sometimes even when it says you're authenticated here, you might end up getting an error, so if you want, you can always just check that all the other pages work as expected. And now let's jump over to our branches. When we go to branches, it authorizes, we see the form, and while it's in the API call — and then it's done — we will see our data as expected. So let us try and create a new one. This is a new location; this one will be in, let's see, Clarendon — that is one of the parishes of Jamaica. So let us create the branch, and look at that: we get our notification, and we get our new location added almost in real time. Let me try that again — Create Branch — and there it happens again. Now let's continue to build on this with one more activity, and that is implementing a search. I'm just going to put in another text box, right between the create form and the list, and allow you to search by country. And remember that country is the partition key, or the high-speed lookup key, on the records in our Cosmos DB. So it's always good, whatever it is that you're going to allow the customer or the user to search by, to use that as the partition key. So this is the end result of me tinkering with the interface a bit — nothing too fancy, but I've just added this new text box and this button that says Search By Country.
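The component-side additions for reading, sketched with the same assumed client and endpoint names as before (GetAll is assumed to be the generic repository's list method):

```razor
@code {
    // Null until the first API call completes, which is what drives
    // the "Loading branches..." message in the markup.
    private List<Branch> branchLocations;

    protected override async Task OnInitializedAsync()
    {
        branchLocations = await client.GetAll(Endpoints.BranchesEndpoint);
    }

    // Modified HandleCreate: re-run the initial load after creating,
    // so the new branch shows up in the table right away.
    private async Task HandleCreate()
    {
        await client.Create(Endpoints.BranchesEndpoint, branch);
        success = true;
        await OnInitializedAsync();
    }
}
```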
So on the component, all I did was introduce another EditForm, with a couple of self-closing break tags in between the previous form and the search form. Once again, you can do all of what you need to do in terms of aesthetics; that's not the focal point of this activity. But I have a new EditForm, which is using a model called branchSearch — you'll see that field, branchSearch — and OnValidSubmit we will call Search. So we have the InputText that is just taking the country. If you want to add more cues to let your users know what exactly this text box is for, that's fine, but for this activity I'm not going through all of that. So there's the text box to allow for the search by country, and I have the button — I just copied the other button, dropped it down here, and changed the text. Going down to our code, what I've done is introduce a new object called branchSearch. The only reason I separated them is because, for this activity, I just wanted to make sure that we are searching based on the country that is entered into the search here, and not the country that is up in the create form. So while I'm using branch as the model object up there, I'm using branchSearch in this one; it will bind to the country coming from branchSearch. And then branchLocations, which is the same list, will now be equal to client.Search — so that's a new method on the client — and we have a new endpoint called the branches search endpoint, and we pass in branchSearch.Country. So when you're writing these in, you're probably going to be getting a lot of red lines; that's fine. To fill out the red lines, it starts with the endpoints: the search endpoint. All I did here was introduce a new endpoint address. You could do this in different ways — maybe you want a new endpoint here, or you do it somewhere else — but because I'm using a generic repository and didn't want to muddy up the repository too much, I just gave it its own endpoint: branches/search.
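The client-side search wiring, collected as fragments — a sketch under the same assumptions (a static Endpoints class, a generic HTTP repository wrapping an HttpClient; names are illustrative):

```csharp
// In the static Endpoints class (client project): the new address
// alongside the existing branches endpoint.
public static string BranchesSearchEndpoint = "api/branches/search";

// In the generic repository (interface + implementation): one new
// method; _client is the HttpClient the repository already wraps.
public async Task<List<T>> Search(string url, string country)
{
    return await _client.GetFromJsonAsync<List<T>>($"{url}/{country}");
}

// In the component's @code block: a separate model backs the search
// form so its Country doesn't share state with the create form.
private Branch branchSearch = new Branch();

private async Task Search()
{
    branchLocations = await client.Search(
        Endpoints.BranchesSearchEndpoint, branchSearch.Country);
}
```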
Then in the repository, I introduced a new method: a Task returning List of type T, called Search, and we're passing in the URL and a string country. Of course, if we have that in the interface, we have to implement it, and here's the implementation of Search. This returns, by way of the client's GetFromJsonAsync, the same kind of list, but we're passing url slash country. That's basically the only bit of change; like I said, I wanted to keep the changes minimal, which is why I just introduced the one new endpoint and the one new method. Now, in the API, I'm basically replicating the GET, but with a few modifications. So I took the GET, copied it, and pasted it down below. Here is what this new GET method looks like: in the HttpGet attribute I'm defining a new route, saying that this can be found at search slash, and then we have the route parameter for country. Then in the definition of the method, which I have now renamed to SearchByCountry, I have string country as the parameter. Inside the method it's almost the same code; the only difference is that I'm introducing the partition key in the iterator. Remember, we have the branchDocuments line; I'm just building on that. So here branchDocuments is equal to documentContainer.GetItemQueryIterator relative to Branch, and we're passing in the request options as a parameter of that method. You see here requestOptions is a parameter, and it's optional. We're saying new QueryRequestOptions, and in that object we're passing a PartitionKey which is initialized to the name of the country, the value being passed in. Everything else remains the same. 
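The two pieces just described might look like the sketch below. The variable names (`documentContainer`, `branchDocuments`) come from this walkthrough; the generic repository interface and the `_client` field are assumptions about how the surrounding code is structured:

```csharp
using System.Net.Http.Json;
using Microsoft.Azure.Cosmos;

// Client side: one new method on the generic repository
public async Task<List<T>> Search(string url, string country)
{
    // e.g. GET api/branches/search/Jamaica
    return await _client.GetFromJsonAsync<List<T>>($"{url}/{country}");
}

// API side: same query as the plain GET, but scoped to one partition
[HttpGet("search/{country}")]
public async Task<List<Branch>> SearchByCountry(string country)
{
    var results = new List<Branch>();
    var branchDocuments = documentContainer.GetItemQueryIterator<Branch>(
        requestOptions: new QueryRequestOptions
        {
            // Country is the container's partition key, so this restricts
            // the iteration to a single (fast) partition before anything else
            PartitionKey = new PartitionKey(country)
        });

    while (branchDocuments.HasMoreResults)
    {
        results.AddRange(await branchDocuments.ReadNextAsync());
    }
    return results;
}
```

Note that the `PartitionKey` value must match exactly, which is why the lowercase-j "jamaica" search in the demo returns nothing.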
So basically we're saying: when you're about to iterate, first partition what you're going to iterate through by the country, and then give me the results; you can filter afterwards. Do this first, and then do whatever else you have to do. As I said, having the partition key is very efficient when it comes to these read operations. So let's jump over to our interface and test. I have three entries, all for Jamaica. That means if I type in England and search by country, you're going to see no results. Let's try searching for Jamaica with a lowercase j: you're probably still going to see no results. But if I use a capital J, I get them back. So right there, you see it's case sensitive. Let's try something else: England, and this is in the state of, let's say, London, and that's in the country of England. We create that branch and it gets added to our list. Now if I want to filter on just the England locations, there we go, it's filtered. Once again, if I did that with lowercase, that would be a problem, right? So that is pretty much how we can navigate the data that is in Cosmos DB. Once again, it makes for higher-speed read operations as we go along. You can play around with this app; you can put in other mechanisms to maybe do full CRUD against the Cosmos DB data set, and you can fix up this interface should you feel the need to. So we'll stop right here for this one. 20. Section Review: So that's it for the Azure database section of this course. Here we looked at Azure SQL and the different hosting models. We also looked at Cosmos DB, and we wrote some code to interact with Cosmos DB. So I'm going to challenge you to expand on it. You have the Blazor app, you have the codebase so far, and the logic; what I want you to think about doing is finishing up the whole CRUD. 
So we already know how to read, and we already know how to write; what if we wanted to update, or if we wanted to delete? You can look at all of these things. You'll see here that I have a record that looks very similar to the record that was there before. What I did here was an upsert, which actually created a whole new record with the new values, so now two records are in the database; you can work out those nuances yourself. I would also challenge you to try a delete. Here I have the two methods; you can pause, replicate them in the API, test them using Postman, and then try to enhance the user interface to interact with them. Here I'm just doing that upsert. Based on the way the HTTP repository is already designed, it's looking for an int id, which there is none of here, but I didn't bother to change that; I did put in the branch from the body. So it will take that JSON object and then just do an upsert. Like I said, the end result of that was two records with the same name and just different values. Then you can look at how to modify the read operation to not just take everything, but to filter based on that ID. Also, the delete takes an id and a key; remember that we needed the partition key when we were posting, which was the value of the country, and in the same way I need to provide that key here. So when you're deleting, bear in mind you have to pass both the id of the record and whatever value is in that partition key. So challenge yourself: you have the API calls here; challenge yourself to modify your user interface, whichever kind of app you're using, and go ahead and get that done. When we come back, we'll continue looking at Azure features, and next up is storage. 21. 
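For the challenge above, the upsert and delete might be sketched as below with the Cosmos .NET SDK. This is a hedged outline, not the author's solution; the routes and the `documentContainer` field follow the pattern of the earlier actions:

```csharp
using Microsoft.Azure.Cosmos;

// Upsert: creates the document if it doesn't exist, replaces it if it does.
// With a fresh id you get a second record, which is exactly the
// duplicate behaviour described above.
[HttpPut("{id}")]
public async Task<IActionResult> Put(int id, [FromBody] Branch branch)
{
    await documentContainer.UpsertItemAsync(branch);
    return NoContent();
}

// Delete needs BOTH the document id and the partition key value (the country)
[HttpDelete("{id}/{country}")]
public async Task<IActionResult> Delete(string id, string country)
{
    await documentContainer.DeleteItemAsync<Branch>(id, new PartitionKey(country));
    return NoContent();
}
```

Testing these in Postman first, before wiring up the UI, matches the suggestion in the lesson.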
Section Overview - Storage: All right guys, so for this section we're going to be taking a look at Azure Storage, and we get to that through the construct of storage accounts. A storage account allows us to store files. It offers a table-like structure for NoSQL-related services (which, really and truly, Cosmos DB is probably more efficient for), we have the ability to use queues, and we have the ability to do file shares. Right now I'm looking at the existing storage account that was created to support our CLI needs. However, for this section we are going to be creating one from scratch, and we'll be using Blob Storage to see how we can store files and serve them up across the internet. If you want to stream movies or other audiovisual content through your app hosted in Azure, then Blob Storage is where you'd want to keep those files. If you want to create something like a CDN structure for static files, pictures, et cetera, Blob Storage is probably what you would look to. And like most Azure services we've seen up to now, it's durable, it's managed by the Azure team so you don't have any infrastructure worries, and it is rather secure. So we will be taking a look at setting up all of those things from scratch using Microsoft Azure. Stick around; we have some fun activities for this section. 22. Azure Blob Storage: All right guys, welcome back. So let us get started creating our storage account in Azure. I'm in All Services; once again, from the menu at the top left you can go to All Services, then Storage, then Storage accounts. We have one already, which we just pointed out, but I want a new one for this section. So we have our subscription, and I'm going to create a new resource group for it. There we go: a storage account resource group. Of course, the storage account name must be unique, and if you look at the icon here, it says the name must be unique across all existing storage account names. 
Once again, this is something that will actually be served up, so its uniqueness is very important. For the region I'm going to choose East US 2, and for performance, this is a SKU: you can choose Standard for general purpose, or Premium for lower-latency scenarios; if you wanted streaming or something like that you'd probably want Premium. Also redundancy: we can choose geo-redundant, locally redundant, et cetera. Right now I'll just choose the lowest-cost one, locally redundant, but of course, based on your situation and your needs, whether it's personal or for your company, make sure you make the right decision at the time. So let's go to Advanced and see all the options we can set. We can require secure transfer for REST API operations. Basically, when we provision this storage account, we get API endpoints that let us interact with the contents of the blobs. So it's quite universal; you don't need to be using .NET Core. If you have anything that can read from and write to REST APIs, you can use blobs to securely store your files. Then there are other options which you can read through; I'm not going to spend time on each, and I think they're self-explanatory for the most part. So we can go to Networking, and you see here I can choose my connectivity method and my network routing; I'll leave that alone. Data protection has more security settings; I'll leave that alone also, but you can take your time and go through it. We can even set up some access-control options and tags, and then we just go to Review and Create. Once it has finished validating, we hit Create and wait for a bit. When it's created, go to the resource and have a look around. So once again, in an Azure storage account we have access to containers, we have access to file shares, and we have access to queues, and queues act like a messaging service between components. 
We have access to tables, though I would probably use Cosmos DB more than this table structure, but it is available. You can set up a CDN, which, if you're not familiar with what those three letters mean, is a content delivery network — high-speed delivery. There we go; it took a while to load. So, Azure Content Delivery Network, which allows you to serve audiovisual files faster and more reliably across different servers. It's a really nice way to share static files. You have access to the access keys; when we write some code, you'll see the use of the access keys when we want to read from and write to Blob Storage. You can set up security and encryption and your replication protection, so everything you would have configured previously you can do here. You even have static websites: if you have a landing page you want to put up, you can enable the account to serve static files, even a WebAssembly Blazor app or an Angular app, et cetera, directly from Blob Storage. So it's a very, very powerful construct to have. When we come back, we're going to go over to our Blazor app, and then we will retrofit it to speak to our Blob Storage. 23. Uploading to Blob Storage: So when we were here last, we set up our storage account, and our objective is to retrofit an application to speak to the Blob Storage in that storage account. Before we get to that, we need to set up a container. I came to Containers, and you see the logs container. You can come here and add a container, or you can use the Storage Explorer that we had been using for our Cosmos DB access. You'll see a note on it here; I'm not entirely sure what they plan to do with that feature, but I'm not worried about that right now. Instead we're looking at Storage Accounts. We have the storage account that was created for the CLI support; that's fine. 
We still have the test account that we just provisioned, so we can go down to Blob Containers, and you see the logs container. If we want a new one, we just right-click and say Create Blob Container, and I'm going to call it carimages, because for our car rental management app we are going to be storing the images of the vehicles. So, carimages. Then if we look in the properties, we'll see that we have a URL we can attempt to browse to, although it's going to give us a 404; that URL will actually be prepended onto whatever name the image is stored under. So let us jump over to the app once again, and we're going to get a fresh copy of our car rental management app. The project is the CarRentalManagement Blazor WebAssembly solution. You can use the previous one that we already started modifying for Cosmos DB, but I'm going to start off with a fresh one. When you have your project open, let us take a little tour and see what exactly is going on. In our vehicles controller, in our server application, our API, you will see that under CreateVehicle we do have a part where we create the file; so we actually do a file upload here. But this file upload goes straight into a folder right here on the server. So let us say we want to change that: we don't want to store things on the actual server; we want to start storing our files, our images, in our Blob Storage. That's what we're going to be retrofitting at this point. So what we want to do is jump over to our server project, Manage NuGet Packages, and look for Azure Blob; it's the first one that comes up, Azure.Storage.Blobs. We'll go ahead and install that. Then let's jump back over to our vehicles controller; we're going to inject our configuration object into the controller. 
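The lesson creates the container through Storage Explorer; for completeness, the same thing can be sketched in code with the Azure.Storage.Blobs package (the `"BlobConnection"` configuration key here is an assumed name, matching the secret set up later in the lesson):

```csharp
using Azure.Storage.Blobs;

// Sketch: create (or reuse) the "carimages" container from code
// instead of Storage Explorer.
var serviceClient = new BlobServiceClient(configuration["BlobConnection"]);
BlobContainerClient container = serviceClient.GetBlobContainerClient("carimages");
await container.CreateIfNotExistsAsync();
```

`CreateIfNotExistsAsync` is idempotent, so it is safe to run on every startup.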
In case you didn't know, you can always just type the parameter out here, press Ctrl+. to initialize the field, and then we can move ahead. So like I said, we're going to retrofit the controller to upload to Blob Storage instead of the local storage. Let me just condense the code; Ctrl+M, O collapses all of your code. I'm going to go down to a section I've already put in, because I don't want to bore you watching me type; I'm just going to explain this bit of code. I have a new method; let me put it where it's more appropriate. It's an async method returning a string, called UploadFileAsync. It takes the image as a byte array and a string imageName, the same two parameters present in the original CreateFile. So the version of the code you have would have CreateFile with byte[] image and string imageName; same thing. Now, this method initializes a BlobContainerClient. You see new BlobContainerClient: we look for a connection string in the configuration (we'll look at that in a few), and we get a blob container called carimages. We just created carimages and we're connecting to it right here; we'll see how we get that connection string in a moment. Then we initialize a new blob client relative to the image that we're going to be storing. It is always best practice that before you upload to the blob, you give your file a name. In general, it's good practice to upload files with custom names, and we are already doing that, because we actually transform the file name before it reaches the API. So by the time it hits this controller, the image name would already have been generated on the client side. We have the custom name and we have the data, so we get a blob client relative to that custom name. It's always advised to do that. 
Then I'm using a stream to do the upload. This is kind of like a shorthand using statement: usually you'd have seen using, then open and close curly braces; this does it in one line, where we open a MemoryStream over the data coming in with the image. Then we're saying: BlobContentInfo, please await blob.UploadAsync of this memory stream, because blob embodies the new blob that is about to get created in the cloud. So we're just saying: upload this data to that blob. And we want to allow it to be public with a Cache-Control header. This Cache-Control basically means: allow caching at servers that are close to the main hosting location, so it speeds up access for anybody else in the vicinity. There are other HTTP headers you can add, other things to let the blob be more aware of the type of file you're uploading, the language, and other metadata; the blob builds up a bit of a metadata record about the content being uploaded. In this case, I only have the byte array, and I'm not willing to go into the computation of trying to reverse engineer it and find the MIME type, so we'll just leave it as is. Then I'm going to return blob.Uri.AbsoluteUri. Remember, blob embodies the file that is now uploaded; Uri.AbsoluteUri is a string representation of the direct link — the container's URL, slash, the image name — which would bring up that image. If you compare this with what we did in CreateFile: yes, we uploaded to a path, but I also passed back the direct path, from the website to the folder to the name of the file. That's the way I did it; I'm just trying to retain that form factor, where we're using URLs to load the images. 
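Putting those pieces together, the upload method might look like this. It is a hedged reconstruction of what is narrated, not the author's exact source; the `"BlobConnection"` key and the `_configuration` field name are assumptions:

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

private async Task<string> UploadFileAsync(byte[] image, string imageName)
{
    // Connect to the carimages container using the secret connection string
    var container = new BlobContainerClient(
        _configuration["BlobConnection"], "carimages");

    // Blob client for the custom, client-generated file name
    BlobClient blob = container.GetBlobClient(imageName);

    // Upload the bytes; Cache-Control lets intermediate servers cache the file
    using (var ms = new MemoryStream(image))
    {
        await blob.UploadAsync(ms,
            new BlobHttpHeaders { CacheControl = "public" });
    }

    // Direct link: https://<account>.blob.core.windows.net/carimages/<imageName>
    return blob.Uri.AbsoluteUri;
}
```

A `BlobHttpHeaders` instance can also carry `ContentType` and `ContentLanguage` if you are willing to work out the MIME type, which the lesson deliberately skips.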
So I'm just going to pass back the direct URL to that file while it's in the Blob Storage container; once I've done that, it is stored. I do this in both PUT and POST, but we'll just look at POST. I've commented out the original line where we say vehicle.ImageName is equal to CreateFile, and replaced it with an await of UploadFileAsync with the same parameters passed in. So there are a few things now. One, the connection string: we need to set it. If you look in my appsettings.json, you will not see the connection string. I had mentioned before that you probably want to store these connection strings in secrets, not in the app settings, so I'm going to practice what I preach this time; we're going to put it in the secrets. To get to the secrets file, you right-click your project — so we right-click the server project — and go to Manage User Secrets. Once there, we have a very similar configuration file, and we just put the connection string into it in the same key-value JSON syntax. Now, where is this value really coming from? I'm going to jump over to my Storage Explorer. There we go. It comes from clicking on the storage account: under Properties you will see the connection strings, a primary and, you see, a secondary. I took the primary, so you can unhide it, copy the entire value, and place it in the configuration file. Once it's in secrets, you can access it as though it were in the app settings, just through the configuration object, so you don't have to do anything more. The good thing is that when I check this in to Git, you won't be able to just go in and see the secret; you'd have to get the project and interrogate it. 
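The secrets file might then contain something like the fragment below. The key name `"BlobConnection"` is an assumption for illustration; the value is the primary connection string copied from Storage Explorer (placeholders kept here, never commit the real value):

```json
{
  "BlobConnection": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;EndpointSuffix=core.windows.net"
}
```

At runtime it reads exactly like an appsettings entry, e.g. `_configuration["BlobConnection"]`, because user secrets are merged into the same configuration tree in Development.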
Of course, this secret will be long gone anyway by the time you see this; my point is that this is a nice, secure way to hide your connection strings and any other secret values from prying eyes. So at this point we have the connection string in configuration and the carimages container. One more thing we'll want to do before going any further is modify the public access level. Containers, I believe, are private by default, and for good reason, because it may not be a situation where you want everything to be displayed. You might have files or documents that must be stored securely, and you're using Blob Storage for them. So by default, you can't just put in that URL and browse to it, which is why we saw the 404 error when we tried to get to carimages earlier. When I right-click and go to Set Public Access Level, by default mine was set to No Public Access; I don't know if yours will be set to that by default, but even if it is, we're going to choose Public Read Access For Blobs Only. If I chose the other option, Public Read Access For Container And Blobs, the URL for just the container would allow me to see what's in the container. With Public Read Access For Blobs Only, I will still get a 404 with the container URL, but the blob itself, that absolute URI we're returning, will actually give me the file. So I'm going to apply this, and with all of those modifications done, let's take this for a spin. Bring up the car rental management app, jump over to Vehicles, and create a new vehicle: choose an image, fill in the form, and submit, and there we go. Here is our new BMW. If I inspect this image, you see that it was indeed uploaded; that is the absolute URL: the blob endpoint, then the container, then that custom image name. 
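The same access-level toggle done in the portal and Storage Explorer can also be sketched in code, which can be handy if you provision containers programmatically (the `"BlobConnection"` key is the same assumed configuration name used earlier):

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Equivalent of "Public read access for blobs only":
// individual blob URLs resolve, but listing the container stays private (404).
var container = new BlobContainerClient(
    configuration["BlobConnection"], "carimages");
await container.SetAccessPolicyAsync(PublicAccessType.Blob);

// PublicAccessType.None          -> no anonymous access at all
// PublicAccessType.Blob          -> anonymous read on blobs only
// PublicAccessType.BlobContainer -> anonymous read plus container listing
```

`PublicAccessType.Blob` matches the choice made in the lesson; `None` is what forces the SAS-token approach covered next.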
If we jump over to our Storage Explorer, of course we see that image there as well. Now, the public access level setting that we changed is what allows us to access the file via the URL or not. If I set No Public Access, we'd get a broken link, because it would just be a 404; you wouldn't be able to see it at all. Otherwise, you can leave it public if it's not so sensitive; for something like an image or a static file, like a music file or a video file, feel free to leave it public as needed. Now, if it is a case where it's private and you need special permissions, where you want to restrict public access and only your application should be able to browse to that blob, then you're going to have to set up access tokens on demand. You would create a token, give it a time span, and for as long as that token is valid and being provided by your application, access to that file is granted. That is why, if you look in the Storage Explorer at the properties (let me just make that a little bigger): looking back at the storage account's properties, you're going to see the keys, a primary and a secondary key. These keys are used to generate those access tokens, and generally speaking, those tokens carry more information, like their lifespan, who the issuer is, et cetera. I want to stop here for now, because for our case study, we've done enough to know how to upload a file and how to read it from Azure Blob Storage. 24. Securely Access Blob with SAS Token: Welcome back guys. In this lesson, we are going to look at restricting access and granting it temporarily using a SAS token. Remember, SAS is short for shared access signature, and this is pretty much a token that goes over as a query string on the blob URL to allow access to the resource in question. 
Going back to our Storage Explorer, remember that we can always set the public access level. If we say No Public Access, all the URLs we have stored in the database to date are rendered useless; they'll just give 404s, because we no longer allow public access. Up to now we have allowed public read access, which is why that URL works. But like we said, there may be situations where you don't want someone to casually stumble upon the link and be able to access the file you have in the blob. You can turn off public access, and then retrofit your code to provisionally generate a token on that URL and send it along. This is a bit tricky, but I'm going to point out why it's tricky and what the general approach would be in a situation like this. In the Blazor app, we have a few endpoints that retrieve the details, and so the image, for the vehicles. We have the GET for vehicles, which is really just getting all the vehicles and returning them. If I browse to that page, we see the older images, but when we scroll down we can't see the newer ones. However, when I look at the details, I am able to retrieve it; so this is the image that comes up for that particular vehicle. Here's another one, and that's just another image. So you notice it's displayed when we view one, but not in the list. I'm going to show you what the code looks like to get it displayed. When we go to view a record, we're actually hitting this endpoint, GetVehicleDetails, id slash details. What happens is that we pull the vehicle record (you should have that bit of code), and then I've introduced this new line of code that says vehicle.ImageName is equal to BuildSasUrl, passing in that image name. 
This is a method I created for this purpose, and it's right here. You can actually just write that call, and then if you press Ctrl+., it will build the method stub for you with the parameter imageName; you can do that as a quick win. All right, let us look at the anatomy of that method, and let me put it down here in the private section where everything else is. BuildSasUrl takes a parameter of type string called imageName and returns a string. In it I'm initializing a blob client; I do it once in this method because we only need it when building a SAS URL, and I don't want to do it in every single method. Of course, there might be more efficient ways to structure this in a bigger application, where you don't repeat it all over the place, but I'm not getting into all of that architectural stuff right now; we just want to know how to connect and interact. We basically repeat the same steps, creating the client, looking for the blob connection string that we placed in the secrets, and calling on our carimages container. Then I have a variable called blobName, where I'm passing the image name into Path.GetFileName. This is kind of peculiar, and here's where one of those explanations comes in. What happens is that the blob gets the name of the file. If we look in the Storage Explorer, this file is the blob, right? The name of the blob is this file name. That's why it's a good idea that we control the file name, so we are definitely sure this is the name given to it. 
The problem, though — a quote-unquote, self-imposed problem — is that we're actually storing the URL directly in the database. We're storing the absolute URL, by design, so that we didn't have to initialize a blob client every time and then figure out the URL after that. If we had only stored the file name in the database, then every time we needed to get the file from the blob we'd have to repeat steps similar to these to build the URL back. That would mean changes every time we render the index, every time we look at one vehicle, and so on. You saw that we didn't have to make any changes to the index when we did our first upload exercise; everything just worked, because the URL was publicly accessible and the URL was in the database. However, because of that design, I cannot use that stored URL as the blob client's name here. Remember, when we set the blob client name during the upload, we got the image name directly from the Blazor client application, which gave us the file name, and then we could do the rest after giving the blob the name. In this situation, I need that name back. So I'm using Path, which is in System.IO, and calling GetFileName, giving it the value passed in, which is the value coming from the database: vehicle.ImageName. Given that URL, it will work out the name of the file from it. Then we can initialize our blob client with that file name. After we get through all of that, we can build the shared access token; we have this builder courtesy of Azure.Storage, so you have to include that namespace. 
BlobSasBuilder is equal to new; with newer C# I don't have to be that verbose, just new(). Then we initialize it with a few properties. One, the blob container name: from here I can just say blob.BlobContainerName. I could also have read it from the container client; it doesn't really matter which, so I'm using blob.BlobContainerName. Then blob.Name, the name of the blob, which is what we set with blobName. How long should it last? I want it to expire two minutes after this moment. I only want to allow the HTTPS access protocol; this is actually an enum with a number of values, and you can choose the one that best fits the situation — for security purposes, I think it's obvious we want only HTTPS. Then the resource we're accessing: "b" for blob. You see the documentation here: b if the shared resource is a blob, c if it is a container, et cetera; we're accessing a blob resource. After that, I set the permissions to Read, because in this situation I'm creating this token for you to be able to read the data. However, this is also an enum where you can see Add, Create, Delete, List, and so on. If you wanted the person to have full control — maybe something like a Blazor-based file management system for the blobs, where they can create new files, delete files, update files, whatever it is — then you set the permissions accordingly. In this situation, I only want read access. Then I return the generated SAS URI. This method takes the object from the SAS builder and generates the URI that has the token information included. Of course, that's a URI, so afterwards we do ToString so that it returns a string. What do we return that string to? 
Sorry, let me just find the method call; right, we assign it to vehicle.ImageName. Remember that vehicle.ImageName from the database is just the URL. After the details endpoint calls BuildSasUrl, this property will hold that original URL plus the query string with the SAS information, and that is what is returned to our client app. The only real complication I have introduced, because of the design of our application, is that we're storing the URL in the database and I have to reduce the URL back to just the name. Under normal circumstances, you just need the file name — it doesn't have to be an image. And remember, it's in your best interest to control the name the blob gets; you can always add the metadata afterwards. We discussed already that from here you can add more metadata to the file when it's getting uploaded, but my point is that, outside of that, you want to control the file name so you can be sure it's the file you're getting, and then you send it over. This part is really just because we're using the URL. With all of that, let us see this in action. We already saw that it works; what we'll do is test it one more time, and then change the list to also take advantage of that URL. Going back to the list, if I inspect here, you can see that that's the original URL we would have seen; it worked previously when we had public access enabled. However, when I go over to the details, I'm seeing the image here, and if I inspect, look at this URL: it has a query string afterwards. That's our original URL up to this point, and after it we have the query string with the information about the access token. All of that is a mixture of encoding, timestamps, the access protocol, et cetera. Without this URL, you cannot access the blob. 
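The whole method narrated across the last few paragraphs might be reconstructed as follows. This is a hedged sketch: the `"BlobConnection"` key, the `_configuration` field, and the exact two-minute lifetime mirror the walkthrough, but the code is an illustration, not the author's verbatim source:

```csharp
using System;
using System.IO;
using Azure.Storage.Blobs;
using Azure.Storage.Sas;

private string BuildSasUrl(string imageName)
{
    // Same connection and container as the upload
    var container = new BlobContainerClient(
        _configuration["BlobConnection"], "carimages");

    // The database stores the full URL, so reduce it back to the file name
    string blobName = Path.GetFileName(imageName);
    BlobClient blob = container.GetBlobClient(blobName);

    var sasBuilder = new BlobSasBuilder
    {
        BlobContainerName = blob.BlobContainerName,
        BlobName = blob.Name,
        ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(2), // short-lived token
        Protocol = SasProtocol.Https,                    // HTTPS only
        Resource = "b"                                   // "b" = blob, "c" = container
    };
    sasBuilder.SetPermissions(BlobSasPermissions.Read);  // read-only access

    // Original blob URL plus the SAS query string; valid until ExpiresOn
    return blob.GenerateSasUri(sasBuilder).ToString();
}
```

`GenerateSasUri` works here because the client was created from a connection string containing the account key, which is what signs the token.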
And then, once again, that URL is only available for a limited amount of time. So let us now just retrofit the index action to use that code, and this is what it looks like. So for each vehicle in vehicles: all we did was get the list of vehicles, and then for each one we're just going to build a SAS URL using the information we have. So with that change made, let's test. And look at this, it worked, at the expense of the database records that are storing full URLs, but we can work around that. I wouldn't necessarily see a case where you're going to mix and match. You'd probably migrate everything, update the database and the URLs, and then start using the Blob Storage anyway. So, at the expense of the URLs from the database records, it's very interesting to see what was generated for these, right? The URL here would have been the blob URL and then the query string nonetheless. But of course there is no blob by that name, so that's fine. But the ones that we care about, the ones that were uploaded, there they are. So we can always look at that. We see that that is the same type of URL, the filename that was uploaded, et cetera. When we go into the details, it's still pretty much visible because of the shared access token. So that is really it. Of course, the amount of time you give to the token matters based on your context. But that is it for how you can secure the files and securely interact with them while they're in your storage account. 25. Section Review: All right guys, so we've come to the end of this section, and I hope you enjoyed exploring what Azure Storage accounts are all about. You got introduced to a tool that allows you to explore your storage accounts and manage your files. You can even do manual uploads if you have to do migrations.
Remember that the storage account is good for file stores, or storing static content, or even music and videos, anything that you'd probably want to stream from your website. This would be the ideal place for those. Otherwise, we looked at how we can manage access to these blobs by changing the access level on the container. We can go to the container and set the public access level, and we can change it from public access on the container or the blobs to no access whatsoever. When it's on no access, the onus is on us to put in SAS tokens that will govern how we interact with those resources. That being said, play around with it, right? There's a lot more that you can do. You can look at the properties and get some ideas as to more metadata that you can put on the file before it gets uploaded. You also have access to other resources in the storage account, like file shares, queues, tables. We won't get into all of them right now; we're just focusing on blob storage. But once again, based on your context, you may end up interacting with one more than the other, et cetera. So that's it for this section. When we come back, we'll look at more Azure services. 26. What is Azure Serverless Architecture: Welcome back guys. In this lesson we'll be talking about Azure serverless solutions. Up to now, most of what we've been looking at counts as serverless solutions. I think the only thing we actually looked at that wasn't a serverless solution was when we provisioned the virtual machine back in the first module. But everything else, you notice, was just a service being offered to us: the database, the storage accounts, the App Service. All of those things we didn't have to provision on physical servers; they're just, like, logical or conceptual servers. We just provisioned the service and we know that it's running on top of a server. That's what serverless really means.
So it's not that there are no machines present; it's just that they have abstracted that physical machine management away from you. With that kind of architecture, you can increase your productivity and your velocity when you're doing tasks. When you're provisioning new services, you're able to deliver a solution in a quicker time, and you have far fewer infrastructural concerns. It can boost performance, and it improves organizational impact, because you can deliver high-quality solutions in a much shorter time. Another benefit of serverless solutions in general is that they are billed relative to usage. So generally they tend to be a bit more cost-effective in the long run: they cost more when they're used a lot, and they cost less when they are used less, right? All of those things are advantages of serverless solutions. So we have offerings like Azure Functions, Logic Apps, Kubernetes, and Event Grid. All of these fall under the serverless solutions and services that Azure offers: API Management, Azure Search, Azure SQL, Event Grid, the App Service. Like I said, anything that is seen as a PaaS is pretty much a serverless solution given to us by Azure. And then, when you look at the bigger picture, you can see multiple of these services used in the same application, each one dealing with its particular nuance or its particular task. So this is actually an example from the Microsoft Azure website where they're talking about serverless solutions, and it's an example of an e-commerce site using serverless architecture. The only thing here that is probably not Azure-related is the browser. You see the CDN (we'll discuss that briefly) and the storage account; they support authentication without you needing to put in a server or much work. You have the web app, the Queue storage, Azure Functions (which is what we're going to be focusing on), and the other parts: Azure Search, Azure SQL, Blob Storage.
Most of these we've already touched on. All of these are components of one application, but they are different services that are just tied together. And on top of all that, you have Application Insights, which provides, like, overall monitoring to make sure everything is talking to everything else. At this point you start to think about microservices, and how every little part has its particular function within the whole ecosystem of your application. So in this module, we'll be focusing on Azure Functions. I just wanted to give you a broad overview of what serverless architecture is, because the fact is that we have been using it; I probably just didn't use the buzzword much up until now. But we will be building a function application that is going to be interacting with a queue (we'll be using the Azure Service Bus as the queue), and we will be creating a function that will interact with it. 27. Azure Service Bus and Why: All right guys. So we've discussed what serverless solutions are, courtesy of Microsoft Azure. Now let's take a closer look at this architecture, and what we want to focus on is the Queue storage, which could easily be swapped out with the Azure Service Bus. Pretty much, this architecture could represent any mission-critical system, or a system that needs to carry out mission-critical actions without disrupting the user's experience. So let's get a scenario going. This is our e-commerce website. The customer accesses the web app through his browser, and the customer places an order. What would happen is that the web app would write to the database and Blob Storage and wherever else it needs to write. Now, what happens when the server is down? If all of that was sitting on one server and the server was down, then the user would probably get an error message saying, well, you know, "service is unavailable, please try again later", which of course is not a very good user experience for anyone. You don't want that for your system.
So an architecture like this would actually allow the web app, in the case of a failure (maybe the database is down, maybe it failed to carry out a certain operation), to keep going. The Queue storage or the Service Bus would take the information from the web app and allow the web app to say: well, I successfully handed it off to the queue, so, customer, everything is okay, thank you for your order. Now, later on, there might be certain things that happen after the fact: you have payment, you have inventory, you have logistics, a number of little operations that need to be carried out after the fact. So then, while the information is sitting within the Queue storage or the Azure Service Bus (and I'm kind of using them interchangeably, but we'll look at the differences in a few), we can have another operation happening, courtesy of Azure Functions, that would pick information up off the queue and then do whatever execution it needs to do. That means all of that logic does not necessarily happen within the web app itself. So this is a nice distributed system with many fault-tolerant services involved, to help bring some amount of stability and shield the users from potential system errors. In this section of the course, we're actually going to be setting up a Queue storage, we'll also set up an Azure Service Bus, and we will look at how Azure Functions work and how they can be used to interact with those queues. So let us start off by looking at storage queues. Now, as the name suggests, storage queues are found in your storage accounts. Remember earlier, when we were looking at blobs, we also saw that you could have tables, you could have file shares, and you could have queues. So if you click Queues, you'll see I already created one, but to create a queue is really easy: you just go ahead, add a queue, give it a name, and then you have the URL with which you would subscribe to the queue. Queue, as the name itself suggests, is another word for a line. So think of a line, a line of people.
The first person in line is the first person to come off the line. That's exactly how a queue works, right? The messages go in, and they come out in the order that they went in, so you'd always be picking the least recent message from the queue. Now, the thing with queues in Azure is that you can actually just add a message manually right here. So I can go ahead and say add message, and give it, let's say, "test message". I can set the expiration time in seconds, et cetera; I can tell it that it never expires; I can encode it. So let's try one that expires in 70 seconds and is encoded, right? From here I can see everything about that message. I can also add another, let's say "test message one", and this one I want to expire in seven seconds. So the expiration time really means: how long should it sit in the queue until it is picked from the queue? If it's going to expire, then if nothing gets to it within the time that you set, it won't live in the queue anymore. So let's try one for five seconds. And as for the encoding, that's for visibility purposes: you don't want persons to be able to read with the naked eye what is being sent over while it's sitting in the queue. That's some security you can add to it. But let's try this five-second message, and look at that: when I refresh, the message is gone. It got added, and then five seconds later I refreshed and it was gone. So that's pretty much it for storage queues. I'm not going to do the coding tour on this queue; I just wanted to show you where you can go to set up your queues and interact with them. I'm just going to go ahead and dequeue that message also, so now there are no messages in the queue. Now let us look at, I don't want to say in contrast, but another level up: the Service Bus. The Service Bus is more the enterprise-level solution, right?
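Although the lesson skips coding against the storage queue, the add/peek/dequeue flow we just clicked through in the portal maps to the `Azure.Storage.Queues` SDK roughly like this. This is a sketch under assumptions: the connection string and queue name are placeholders, and message encoding is left at the default.

```csharp
using System;
using Azure.Storage.Queues;

// Client for one queue in a storage account (values here are placeholders).
var queueClient = new QueueClient("<storage-connection-string>", "myqueue");
await queueClient.CreateIfNotExistsAsync();

// Add a message that expires in 70 seconds if nobody picks it up.
await queueClient.SendMessageAsync("test message", timeToLive: TimeSpan.FromSeconds(70));

// Peek: look at the front of the queue without removing anything.
var peeked = await queueClient.PeekMessageAsync();
Console.WriteLine(peeked.Value?.MessageText);

// Receive + delete: the actual dequeue, first in, first out.
var received = await queueClient.ReceiveMessageAsync();
if (received.Value is not null)
{
    await queueClient.DeleteMessageAsync(received.Value.MessageId, received.Value.PopReceipt);
}
```

Note the two-step dequeue: a received message stays invisible on the queue until you delete it with its pop receipt, which is what makes the portal's "receive" destructive.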
So if you've ever heard of RabbitMQ or the Java Message Service, this is the comparable technology that Azure offers. It allows you to connect your apps across private and public clouds, right? It's a cloud-based messaging service that provides queues and topics with publish/subscribe semantics, and rich features that allow you to build very reliable solutions. Like I said, it's really like a middleman between your main application and whatever background operation needs to happen. You don't want your main application to fail when some of the background services are unavailable. Of course, if the queue itself is unavailable, that's another issue, but at least the queue can sit in between those main components that have a higher rate of failure. So let us try and create this. Let us create a new resource group and give it a name and a location. And of course I'm choosing the Basic tier since we're just doing dev, but you can look at the pricing details and see the difference between premium, standard, and basic. I'll just go ahead and review and create that, and once it's validated, click Create. Now, once we go over to the created resource, you're going to see that it has an entirely different makeup. We are seeing much more analytics about the messages that might be on the queue: the incoming and outgoing messages, any server errors, user errors, et cetera. You see that you have geo-redundancy; that was kind of baked into the storage accounts, but on the Service Bus you can manage it locally, uniquely for this resource. And then you see that you also have the ability to add queues to the messaging service. So let us add a new queue. Here I can give it a name, of course, but more uniquely, I can set a max size, right? I can set the delivery count. What does that mean? The maximum number of deliveries, and it can be between 1 and 2,000. I can set the time to live for any message, the lock duration, dead-lettering.
So that means: if messages are not delivered, do we delete them, do we let them expire, do we keep them? Think about a scenario where something needs to be processed, and every time it is reprocessed it fails, until the time that it would usually live for (which in this case is 14 days) has expired. Do we delete it, or do we put it over in, like, a reject pile? I will go for the reject pile, right? So enabling dead-lettering on message expiration means: if a message has expired, clearly it has not been processed as we think it should have been, so put it in the reject pile. And then you could set up alerts around that reject pile to say, hey, you have something in the queue that was not processed, please give it some attention, stuff like that. So let us go ahead and create that one. After the queue is created, if we click it, we can see more details about what the queue itself is doing. You see they even have an explorer over here, and with queues in the Service Bus you can do a little bit more robust testing. Firstly, I can say what kind of data I'm sending. So let's choose plain text, and I'm going to type "plain text". I can send that, and it has now been sent to the queue. Now, when you receive a message, that's when you're plucking it from the queue; but if you want to look at it without actually plucking it, like a preview, then you'd want to do something like peek. So if I do peek, you'll see the message is right there in the queue, and if I click it, I will be able to see its content. Let me close that and let us go and say receive. If I say receive, and I want to perform a destructive receive, you see that once I do that, it is now receiving the message: that's it taking it off of the queue. And if I go back to peek, there is nothing to peek, because I just received the message off the queue. So let me try this again: plain text, again, send. Now, what I didn't do was point out the stats.
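The queue settings we just chose in the portal (delivery count, time to live, dead-lettering on expiration) can also be applied from code through the Service Bus management client. A sketch with assumed values; the connection string is a placeholder for a namespace-level policy with Manage rights.

```csharp
using System;
using Azure.Messaging.ServiceBus.Administration;

// Management operations need a connection string with Manage rights.
var adminClient = new ServiceBusAdministrationClient("<namespace-connection-string>");

var options = new CreateQueueOptions("orderqueue")
{
    MaxDeliveryCount = 10,                             // delivery attempts before giving up
    DefaultMessageTimeToLive = TimeSpan.FromDays(14),  // how long a message may live
    DeadLetteringOnMessageExpiration = true,           // expired => moved to the "reject pile"
    LockDuration = TimeSpan.FromSeconds(30)            // how long a receiver holds a message
};

await adminClient.CreateQueueAsync(options);
```

Dead-lettered messages land on a built-in sub-queue (`orderqueue/$DeadLetterQueue`), which is what you would monitor or alert on.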
You see it's telling me how many messages are here. The dead-lettered messages, once again, would be what wasn't processed and is now expired, and scheduled would mean that, okay, it's there, but it's supposed to be delivered at a later date. So that's it for this demo. When we come back, we will look at how we can construct Azure Functions to interact with our Service Bus. 28. Additional Azure Server Bus Features and Settings: So we're here to look at some additional features of the Azure Service Bus. I know that last time we said we were going to jump into looking at the functions and how we can integrate. But before we get there, I just want to point out that the tier we chose was Basic, and of course we know that the costing is different for each tier. Here I'm going to demonstrate some of the things that would not be evident in Basic, but that you would get in Standard and above. So if you look at the pane to the left, you'll see that you have far more blade options than you did. You see that you have queues, you have this one called topics, you have the ability to migrate to premium, and I'll also take a look at shared access policies, which is something that both tiers have in common. We already looked at queues, so you know how queues work: we know that we can peek, we can send, we can receive, and through the portal you can actually view certain things about the queue. Now, a shared access policy is a security policy that you can apply to your queue. So let us say we add a policy, and I'm just going to call it read-write. This policy says: what does the client interacting with this queue have the ability to do to this queue? You can manage, meaning you can do everything, or maybe you can only send and listen, right? So let us say I create this one that can only send and listen, but cannot manage; it doesn't have full management rights.
Now, after it's created, if I select it, then I will see that I now get the connection strings and keys that correspond with this particular policy. So if I wanted a client to only be able to listen, I create a policy, give it only listen, and then that is the connection string and key that I'm going to use in that client. That is one of the things with shared access policies that I did not demonstrate in the previous demo. Now, moving on, let us look back at the queues. If I say I want to create a queue, you'll see that I have even more options available to me. Now I can enable auto-delete on the queue, I can detect duplicates, I can enable sessions, partitioning (well, that was there before), dead-lettering (that was there before too). I can forward messages to queues and topics, meaning if this queue gets a message and it meets a certain criteria, it can just forward it on to another queue or to a topic. Now let's talk about topics. If you look closely at the emblems, you'll notice there's a difference in what they depict. A queue represents a one-to-one communication line, right? You have one sender to one listener, one to one. A topic, however, has a subscription model, meaning that as many clients as are subscribed to the topic can get the message. Remember that with a queue, once a message is picked, that's it, it's gone; it's not available for anybody else who might be on the queue. That's why it's one to one, because only one client will actually get that message. However, with a topic, something coming in can go to multiple places. So once again, a scenario for that could be that when somebody places an order, a number of things need to take place. When that message goes in on that topic (call it the order topic, right), you're going to have the fulfillment department that needs to get, you know, some information.
The payments department needs to get something, the logistics department, et cetera, et cetera. So everybody who is subscribed to that order topic will get their message whenever an order is placed. That's really the difference between the two, and this is what comes with Standard. As you can see, you can migrate to Premium, which is probably going to have a few more things; I have never dared to go to Premium, to be honest. Of course we want to be careful with the costing at this stage, but you still want to get an idea of what your options are. So when we come back, we will look at how we can start writing code around our queues. 29. Building an Azure Function: All right. So what we'll be doing in this lesson is two-pronged. One, we'll be writing a console app which will double as a client sending to the Service Bus. Then we will look at how we can construct a Service Bus-triggered Azure Function. We'll be doing everything in Visual Studio, so let us get started. I'm going to create a new project, and like I said, it will just be a console app. This one is going to be AzureServiceBusTest, and if anything, I'll qualify it with .ConsoleApp so that we're sure; I'll leave the solution name without the ConsoleApp. And we can use .NET 6 or whichever one you have support for; the code is going to be fairly much the same. Now let us start off by getting the NuGet package that allows us to interact with our queues. Like we've been doing up to now, anytime we need to do some Azure coding, we look for the NuGet package, so I'm just going to search for "service bus", and there we go: Azure.Messaging.ServiceBus. Once that's installed, we can get rid of that Console.WriteLine line, and let's create a new variable called connectionString. This connection string is going to house the connection string coming over from our queue; actually, that connection string would be under the shared access policies.
So if I click that and I choose that access policy, I'm just going to copy the primary connection string and then paste it inside there. That's what our connection string looks like, and I'll just add some comments so we know exactly what each line is doing. The next line would be to add the name of the queue; these are the static values for this setup, right? The name for our queue here is "orderqueue". We're going to be creating that, but for now let us just work with the fact that we know the name of the queue. After that, we want to initialize a Service Bus client. Once you put in ServiceBusClient, you're going to have to add the using for the Service Bus namespace, and then you can go ahead and initialize that client. We're also going to need a sender: so, ServiceBusClient, ServiceBusSender. Now, between the two of them, they need to be set up so that the client is equal to a new ServiceBusClient taking that connection string, and the sender is equal to client.CreateSender with the queue name. Pretty much the sender is going to be doing all the work. Our client object is instantiating a new client based on the connection string, which then spawns up a new sender instance for that queue name, and the sender takes over from there. As an experiment, what we can do is take a user input for name, and I'll just add a little prompt there to make it look professional. But before we move any further, let us make sure that our queue exists. So I'm going to jump back over to Azure, jump to Queues, and add the new queue; I'm calling it "orderqueue". And remember that we had all of those other options that we could enable: I could enable auto-delete and give it the parameters for that, but I'm not going to enable any of these in this context. We don't really need that.
We can test forwarding later on, but let me just go ahead and create, and we can confirm that it is created. Jump over, and we can monitor it from here. All right, so let us run our console app. I'm going to put in my name, press Enter, and our console app has completed its execution. Let me jump back over to the queue, and it did not work. Well, this is embarrassing. No matter how much I refresh, I'm not seeing it, and I suspect it has to do with the connection string, because... okay, good, this is a good error. I had the shared access policy on the whole Service Bus namespace, but I didn't have one on the queue itself. So I created the queue, but I did not give it its own connection strings. Let me go ahead and do that: read-write, and I want it to send and listen, create. Then we copy this new connection string from this new shared access policy, and I'm going to replace it here. I'm also going to remove EntityPath from the connection string, because EntityPath states what the queue name is. And you see that even though I wrote the queue name in uppercase, it's all in lowercase, so I'm actually just going to replace it with the lowercase representation. I'm removing EntityPath because it represents the queue name, which is already a parameter going into creating the sender. One more thing: let us add the await. That was an oversight on my part, because we're doing SendMessageAsync, so we definitely need to await it, right? So I'm going to go ahead and test this. When the console app comes up, we can just put in our text and press Enter. Then, to prove it worked, we come over to our queue and refresh, and there I have one active message, right? So that's a practical example of taking input and sending it over on the queue. Now I'm going to try something else. How about if we had something automated?
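Pulling together the console client described over the last few steps (the connection string from the queue's shared access policy with EntityPath removed, the client, the sender, and the awaited send), it would look something like this. The connection string is a placeholder, and the loop at the end corresponds to the automated test that follows.

```csharp
using System;
using Azure.Messaging.ServiceBus;

// Connection string copied from the queue's shared access policy.
// Note: remove any "EntityPath=..." segment; the queue name is passed separately.
const string connectionString = "<shared-access-policy-connection-string>";
const string queueName = "orderqueue";

// The client spawns a sender for the queue; the sender does the actual work.
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender(queueName);

Console.WriteLine("Enter your name: ");
string name = Console.ReadLine() ?? "anonymous";

// SendMessageAsync is asynchronous, so it must be awaited.
await sender.SendMessageAsync(new ServiceBusMessage(name));

// The automated variant: push ten messages onto the queue in a loop.
for (int i = 0; i < 10; i++)
{
    await sender.SendMessageAsync(new ServiceBusMessage($"message {i}"));
}
```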
I just have a for loop that is going to go through and send a message ten times. So let's try that one. After running it and letting it go through ten times, now I see eleven active messages. So you see how easy it really is to put a message on the queue. Now, like I said, this queue is collecting the messages. It's holding them; it's not losing the information, and it's waiting for something to come and take the information from it. That is where an Azure Function would be a good solution, because, once again, that's another serverless solution, and you wouldn't have to worry about maintaining the uptime of whatever is picking up the messages. So now we'll look at how we can create an Azure Function that subscribes to our queue and then processes each message as needed. Back in our solution, let us create a new project, and we can just search the templates for Azure Functions. Now, if you do not have this option, whether you're using Visual Studio 2022 or 2019, what you need to do is go to the installer (it would probably still be on your machine) and click Modify. You see I have both 2022 and 2019 installed. If you go to Modify, you can proceed to get the Azure development workload, right? Once you get that, you get all of those built-in Azure templates for your projects. So after you've done that and you know you can get the Azure Functions template, you can go ahead and create that Azure Function. I'm just going to say AzureServiceBusTest.FunctionApp1 and create. Once again, you choose your edition of .NET Framework or .NET Core, but what we're looking for is the Service Bus Queue trigger. You see you have a number of triggers that you can use with an Azure Function, right? The queue trigger here would be the Queue storage trigger; there's the HTTP request trigger, the blob trigger, et cetera. So Azure Functions can sit in Azure and you don't have to watch them; you just set it and forget it.
Of course you maintain it, but once you set it, it will just sit and listen on a particular connection for any activity. All right, so in setting this up, you need the connection string name, and I'm just going to say "Con"; then we need to give it the queue name, which would be "orderqueue". Then you can go ahead and click Create. So this is the code file that we get. It may take a little while to sort out all of the libraries and so on, and you see I have a little error here: that's because this is static, and this also needs to be static, so I'll just make that static. There we go. So what's happening here? When it runs, it's going to say: okay, I have a trigger; I'm looking for the queue by this name, orderqueue, and I'm going to use this connection to connect to it. The myQueueItem parameter represents the data itself, and pretty much we're just logging that data. The next thing we need to do is make sure that this connection exists. We're going to use the same connection string we had up here to connect to our queue, and we will just jump over here and paste it; you can go back to the portal and get it if you need to. Just make sure you add "Con", or whatever name you chose, and that connection string. So I'm going to make this my startup project, and then I'm going to execute. Now you might get the firewall warning; of course you want to allow access. And I'm getting an error here. This is one of those good errors: it's telling me that the connection string should not contain EntityPath. That's exactly what I mentioned previously, and then I got caught in my own web: I copied the connection string directly from the portal, and that once again came with the EntityPath. So let's remove that and try again. When it finally executes, it may take some time processing, because remember this is going from your machine to Azure, so it's connecting, it's making sure everything is okay.
And then it's just going to spit out a bunch of things to the screen. But if you look very closely, you're going to start seeing lines that look like this: "C# ServiceBus queue trigger function processed message", and then it shows the content of that message, right? So message 1, message 0, message 5, message 7. All right, so here's what I'm going to do: I'm going to put this to the side and run the old console app. Let's go to Debug and start a new instance without debugging, and then, while it is running, just watch what happens in the Azure Function. Because I'm switching screens, I'll press Enter for it to go through. But you see here: as soon as I generated new messages, it picked them up. I'm not doing anything at that point, right? On the left, the client is handing messages off, and the Azure Function is sitting there watching the queue and knows to process them. If we jump over to our portal, we see that there are zero messages, right? We know we just generated at least twenty messages, and we saw the Azure Function picking them up, so you probably wouldn't even get the time to see them arrive here. And that's really how easy it is to develop an Azure Function. To push it to the cloud, though (because it's evidently running from our machine at the moment), you'd have to publish it. So you right-click, click Publish, and you get this dialog box asking you what your target is. Yes, it would be Azure, and then for the specific target I'll just use a Windows function app. Then it'll ask me, okay, do I want to create a new function app? You see that it generated a name for me based on the project. Then that's the subscription, that's my resource group, the type, the plan. So you have the App Service plan versus consumption, meaning you pay per use.
So each time it reads a message, it will actually charge you, versus premium, which would probably be better for a high-volume situation where you're paying more like a fixed rate for messages coming in all the time. And of course, you set your location to the one that's best for you, and you choose the storage if need be; then you can go ahead and hit Create. So we'll leave that to create, and when we come back, we'll run a few more tests. 30. Publish Azure Function: So this is our portal, and you see that I have my order queue function app, and I'm looking at the functions. This is really just a function app; it's like an App Service for Azure Functions. This is not the function itself just yet. Remember that when we were doing our web apps, we had to first provision an App Service plan and then put a web app in it? It's the same concept. What we just did through our publish was to create the actual app and plan; now we need to publish the function itself. Back in Visual Studio, you see that the function is ready to publish, because it has all the connection strings and everything it needs for the very plan we just created; it's using the same domain. So I can go ahead and click Publish. That might take a while, but now it has built successfully, and after waiting for a while, it is ready. Let's jump over to the portal. When I go into the function app, I'm seeing this error message saying that "Con" is missing; the connection string for our queue is missing. Poorly named, but we get the idea. So what I'm going to do is jump down to Configuration, and I'm going to add it right here in the function app. I'll just go ahead and add a new application setting, give it the name that I know it needs according to the code, and paste the connection string as it was in the code. Click OK, Save, Continue.
And then if you want, you can go back up to the overview and do a restart just to make sure that everything is okay — and that error is gone. So let us test and see. Let's jump back to Visual Studio. You can go ahead and execute the application, which of course we know is going to write 10 messages to the queue. And then finding where the function wrote the log is going to be the tricky part. So remember — just to backtrack — I'm not doing anything substantial when this function is triggered. All I'm doing is logging to a file, or logging to something, about what is happening. So this is actually going to create a log file in the storage account that is associated with the function app. So when I bring up my Storage Explorer and I go down to my storage account, I see File Shares, and I'll see a folder created for the function. Now, it's noteworthy that the code files are not being hosted in the portal currently. The reason being, we built the function locally and then we just published it. So even if we wanted to modify the function — let's see, before I get to the testing, if I wanted to see the function: that's the function app, and this is the actual function. When I click on Functions, I'll see the actual function there, and then I'll be able to see different things. If I go to Code + Test, this is where I would actually be able to construct or modify the code of the function, had I constructed it in the portal. But because I didn't, you see that it's only in read-only mode, and I have to make changes locally and then push the changes. So I can't do much modification here. Now let's jump back over to our Storage Explorer. When I go to the file share with the name of the function app, I see a folder that says LogFiles. I can go into Application, then Functions, then the name of the function — I didn't change it, it's still Function1 — and then we'll see the log file here.
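The application setting being added in the portal mirrors what the function reads locally from local.settings.json — a sketch with placeholder values (the setting name `ServiceBusConnection` is an assumption; use whatever name your binding references):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage-account-connection-string>",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "ServiceBusConnection": "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>"
  }
}
```

local.settings.json is not published with the app, which is exactly why the same key has to be re-created as an application setting in the portal.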
So when I open up that log file, it will download to my machine. And sure enough, here are the messages that I just triggered to the queue. So it is working. It is now published in the cloud and it is watching the queue. So like I said, it's all one big domino effect. You have a function that is watching a queue. You put something on the queue — or the topic, whichever one it is; maybe it's more like a topic if you have multiple things that need to happen — and then you have one Azure Function that's in charge of dispatching emails, one that's in charge of generating invoices, you know, different things being handled in different tranches and almost autonomously. So that's the power of Azure Functions. 31. Section Review: All right, so that concludes this section. We took a good look at what serverless technologies are. We already interacted with some and we looked at others. And once again, each one has its own particular place. In your serverless architecture, be tasteful and be smart about which ones you use and how you use them. So in this section, we took a look at queues in the context of the storage queues, which are a little simpler and easier to interact with, versus the Service Bus, which we saw is a bit more enterprise-ready with a few more features and more security. We also took a look at how we can build a client that publishes to the queues — the Service Bus queues, that is. We also looked at the fact that you have topics, which means that you can have multiple listeners to a particular topic. And then we built an Azure Function, which was just an autonomous host for a block of code that's listening for a trigger and carrying out an action. So you can experiment with those things. Of course, once again, anywhere you see how they can help you in accomplishing system resiliency, then by all means implement them so that your user experience is the best possible. So that's it for this section. See you in the next lesson. 32.
Section Overview - Azure AD: In this section we're going to be taking a look at Azure Active Directory. This is one of the hallmark features of Microsoft Azure. It is widely used in many companies and many applications, and it is a very powerful security tool that you can use to secure your applications, whether it's for a company or for personal use. And on top of it all, it is fairly easy to integrate into your application. Now, what we're going to be doing in this section is getting a complete understanding of Azure AD: how it works, how it can be leveraged, and the security guarantees and uptime guarantees that it professes. And then we'll actually look at integrating it with our own application. Of course, in the absence of a full corporate organization, we're going to have to simulate certain things, but we will definitely get the foundation down. Getting to Active Directory through the portal is fairly easy: you can just go ahead and say All Services — in this case, I click Home — and then you'll see Azure Active Directory there. And you will see that it gives me a whole pane showing some of my user details. So when we come back, we'll take a more in-depth look at what the features are, what we can do from this pane, and get a full understanding of what Azure Active Directory has to offer and how it can help us with securing our applications. 33. Explore Azure Active Directory: Now let us explore what Azure AD has to offer. But before we get to that, let us discuss the concept of single sign-on. Single sign-on means that you can use one set of credentials and get access to multiple places. So think about your Google account, or even your Microsoft account. You can use that to access your Facebook account; you can use that to access various third-party websites. So it's not limited to just Gmail or Hotmail or wherever your email address is being hosted.
So that's really the concept of single sign-on: with the same credentials, you can access multiple places. Now, Azure AD supports that kind of single sign-on, where you can use third-party credentials — meaning non-Microsoft-issued credentials — to access Azure just the same way. I am an administrator of my own account, and I can give third parties access to my Azure account or my Microsoft services. The mere fact that you have signed up for Microsoft Azure means that you have an Azure AD account. You can verify that, after signing in, by clicking at the top — you'll see that this is your account. Even if you use a Gmail account or a Yahoo account, you can still sign up for Microsoft services using that email address. And then I can always go and say Switch Directory, which allows me to see all of the different directories that my user credentials might be a part of. Now it's only me — I'm a solo developer, so I'm only in my own default directory. However, a company could have added me using the same credentials; I could have had another account, et cetera. Each one of those directories would actually get listed here, and then at will I could switch between them, having signed in only once with one account. So that's what Azure AD brings to the table. Now, back to my management screen: from here I can add a user, I can add a group, I can add enterprise applications. And then you have different offerings of Azure AD, in the sense that you can have it for enterprise settings, meaning it can act as the Active Directory for a company. It can even synchronize with an on-site Active Directory — an existing one — so that you can use those credentials on any other application across the network. It can be used with your Microsoft 365 account and your Azure DevOps; it can be used everywhere. So that's really what is happening in the background. You're probably oblivious to it if you didn't know that Azure Active Directory existed.
But that is what is really pulling all of those strings, allowing you to use this one set of credentials to access so many systems across the Microsoft suite of products. Now, needless to say, the investment into this has been very heavy, and it is very secure. They have active algorithms watching for brute-force attacks and even more subtle attacks that might be coming in. It is replicated, so even if one server is down, there's still another one to pick up the slack. So your Active Directory would have very minimal downtime. And it's something that I think you should look into — in keeping with the whole serverless architecture — because you probably want to secure all the accounts in your organization against an Active Directory, and on-premises that is a huge upfront investment. But as you've been seeing, these services are almost always pay-as-you-go, so as big or as small as your organization might be, you're going to be priced accordingly. Now, in the next lesson we're going to be looking at creating another directory, because even though the company might have the Azure Active Directory account that is for the entire organization, you probably don't want to develop directly against that global one. So when we come back, we'll look at how we can spin up our own Azure Active Directory within this account and use that for our application and development. 34. Create Azure AD User: So let's look at creating a new Active Directory. We'll get away from this pane, which is showing the one we're currently using — that's our default directory. We can go to Create a Resource and look for Active Directory, and once we find it, we're going to choose Azure Active Directory, which takes us into the pane. And here we're faced with a very interesting choice: we have Azure Active Directory versus Azure Active Directory B2C. So we're going to be looking at both of them.
But right now, I'm just going to go with Azure Active Directory. Azure Active Directory would be for the more enterprise-level scenarios, where it's within the company and you want to manage it like that. Whereas B2C would allow you to spin up your own instance of an identity provider that you can use across multiple apps, to support your own single sign-on for customer-facing applications. So right now we're going to go with the first one, and then we can go over to the configuration. Here we can set an organization name, a domain name, and our country/region. So just give it an organization name and the domain name — that has to be unique, of course, and you'll notice that it gets the .onmicrosoft.com extension. And then you choose the country that is closest to you or best represents you. I'm not in the United States; however, the datacenters there are closest to where I am. So after doing all of that, you can go ahead and review and create. And after a little time, you will see that it gets created, and you can navigate over to your brand new Active Directory. You'll see here that the organization name is different: first it was Default Directory, now it's the organization name that I set. And you see that it's not in dark mode, because, well, by default it's not going to be in dark mode. So if I wanted to switch between them, I can just go ahead and click my avatar and say Switch Directory, and then you'll see that you have the default directory and your new organization listed there. You'll also see that the domain is different: the default directory's is based on my account, versus the new one, which is based on the configuration that I set. But if I jump over to Home, you see here that it says I have no subscription associated with this account — or while it might not say that explicitly, if I navigate to Subscriptions, then you'll see I have no subscription. So I would have to go ahead and add a new subscription.
And that would enable me to start creating resources and start moving ahead with things relative to this account. Now, while I'm here, I may need to have other users. Maybe I created this account for my team so that we could work on our project together. What if I needed to add somebody to this Active Directory? If I go and manage this Active Directory, I do have the option to go ahead and add users. And once I'm here, I can see what the username is going to be. So at this point, I'm going to be adding them to the domain that I have created — this is the name of the domain based on what we set up — so I'm adding a new user to this domain. And you can see here that we can set up first name, last name, and password. I'm just going to auto-generate the password, and I click Show Password so I can copy that and use it later on. You can add her to roles and groups, block sign-in — all of these options are available to you as you need. So when you click Create, you go back to the dashboard. You might need to refresh so that the user count can increase, but we now have two users. Now, jumping back over to the new user creation screen, you see that you also had the ability to invite users. So you could invite a guest to collaborate with your organization, and this would be a third party. So their credential verification would not be managed by our domain or by Microsoft's AD network; it would be managed by whatever other AD network or identity provider is being used to verify their credentials. And that's when we talk about federated authentication and federated identity. Because once these identity providers have a solid or common ground upon which they can verify a user's identity, that's federated authentication — they are technically a part of the same federation and bound by the same rules. And then an identity that can be used across all of these providers is a federated identity.
So all of the claims and the information — like the first name, the last name, the email address — can be shared information across all of the providers in the same federation. So in another browser window, I went to test that user's login. We just go to the portal, put in the address — the username at the domain — and then we put in that password and sign in. And then, after being prompted to update her password, she's able to sign in and gain access to the portal. You can see here that she's being prompted to set up additional authorization — she can do that later — and there's just nothing there to show, because there's no subscription. You'll see that notification: you do not have any subscriptions. So we'd have to set up the subscription with this account, and then that would allow any user who is invited to the domain to start doing what they need to do. So that's really just a nice quick tour of how you can spin up your own Azure AD account — of course, there are measures that need to be taken in order to actually put it to good use — and how you can manage users. Now, when we come back, what we're going to do is look at how we set up and facilitate authentication using OpenID Connect and Azure AD. 35. Register Azure AD Application: All right guys, so now that we have our AD, we need to have our apps registered. So just to backtrack: when we want an app to be able to authenticate against our Active Directory, we first have to have the Active Directory running, and then we need to register the apps that we want to use this Active Directory for authentication purposes. So what we're going to do is jump down to App Registrations, and from here we can click New Registration. This gives us the ability to put in a new name — and for this particular exercise, we're going to be using the classifieds app. We can say how we want to control who can access this API or app: accounts in this Active Directory only, versus accounts in any organization.
Meaning, if we have different organizations connecting to the one AD tenant, then all of those organizations can use their individual emails and their individual credentials and get access through the Azure AD, which is like a consolidation of all of those accounts. We can also extend that to personal accounts — so like your Skype or your Xbox or your Hotmail account, et cetera — and then we can have personal accounts only. We'll just leave it on accounts in this organizational directory only. Based on your business rules, as usual, you make the best decision. We also choose what kind of application this is going to be; in this case, it's going to be a web application. Then we click Register. Now once that app is created, we get a new management pane dedicated to our app. If we click Endpoints, then we'll see that we have different endpoints. And one that I want to zero in on is the OpenID Connect metadata document. If I copy that and then open it up in a new tab, you'll see here that you have different endpoints. It's not the most beautiful document — you can probably get it formatted; it is only a JSON document — but what it does have are key endpoints that help a client know where to go to access what. So you have a login endpoint, you have a logout endpoint, an issuer, those kinds of things, right? It also says what kind of claims will be included once it generates its tokens. So once again, this is just a well-known document that says: here's what you need to know when you are conducting your authentication flow. Otherwise, let us go down to Authentication. And what we're going to do here is add a platform. When adding the platform, we're first going to say what kind of app we're expecting to be interacting with — well, in this case we're going to be doing a web app. Next it's going to ask: what are the URLs that should be used for the redirect and for logout?
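To give an idea of the shape of that metadata document, here is an abbreviated sketch of the well-known discovery JSON (field names come from the OpenID Connect discovery specification; the values shown are placeholders, and a real document has many more fields):

```json
{
  "issuer": "https://login.microsoftonline.com/<tenant-id>/v2.0",
  "authorization_endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize",
  "token_endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token",
  "end_session_endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/logout",
  "jwks_uri": "https://login.microsoftonline.com/<tenant-id>/discovery/v2.0/keys",
  "response_types_supported": ["code", "id_token", "code id_token", "id_token token"],
  "claims_supported": ["sub", "iss", "aud", "exp", "iat", "name", "preferred_username", "email"]
}
```

Client libraries read this document at startup, which is why you only configure the instance and tenant ID in your app rather than each endpoint individually.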
So for context, the redirect URL works like this: when I click Sign In, based on the configuration, the app is going to say go to the Azure AD — or the OpenID Connect provider — that is where you should go to get authenticated. Just like, think about when you're seeing "sign in with your Google account": you click sign in with Google, it redirects you to Google, you put in your credentials, and then, once you're verified, it redirects again to the application that called it initially. That's what this redirect URI is. So once it redirects, we go from our classifieds app to the Azure AD challenge screen, and once you're verified, we're saying where we want the app to go back to. Even if you have your own project that you're experimenting with, what you'd want to do, at least for testing purposes, is give it the localhost address that the project runs at. It has to be HTTPS also. So it's https://localhost, then the SSL port that it would run at, then /signin-oidc. That's what I'm using to say: when you're finished doing what you need to do, redirect here. Similarly, I'm just going to have the same URL, except it's going to be signout-oidc, for the logout URL. Moving on, we're asked what kind of tokens we want to use. I'm going to use ID tokens. And you can see a bit of reading here: if you need to use it with a single-page architecture — maybe an Angular app, or maybe even a Blazor WebAssembly app — then you'd probably want to select both. Otherwise, for regular .NET Core web apps and other web apps that use hybrid authentication, select only the ID tokens. You can do further reading there if you wish. So let's just go ahead and configure, and that really does it. You see here it's added, and you can always go back and modify it if you need to. Be mindful of the fact that when you do publish your application, however, you will need to change this from the localhost address.
You will need to put in the published address of the app. Now let's jump over to API Permissions, where we can actually go and manage the permissions that a client would have when it accesses this app that is a part of this domain. We can modify the Microsoft Graph permissions and say: okay, when an app comes in, what are the data points that we are allowing it to see? So if we want to expose more information, that's fine. We can also allow them to access other third-party applications that are part of the whole suite, and we can extend those permissions over to our own APIs — we'll be looking at that in a few. All of those options are available to you; of course, based on your context, you want to make the best decisions possible. Now, when we come back, we will be building a .NET Core app from scratch, and we will see how easy it is to integrate it with our Azure AD. Bear in mind that this is built on top of OpenID Connect, which has a number of configurations and a number of operations happening in the background. But ultimately, the libraries given to us by .NET Core, combined with Azure AD, abstract away a lot of those inner workings and configurations, and they make it very easy for us to integrate. So when we come back, we'll be completing that task. 36. Create .NET Core App with Azure AD Auth: All right guys, so let us get into creating our app. We're going to do this one from scratch, and what we'll do is create a new project. I'm going to be using the ASP.NET Core Web App template, which means it's a Razor Pages app, and we'll be calling this one Classifieds Azure AD Auth, with Client on the end; for the solution name, I'll take the Client part off. Now, I'm using Visual Studio 2022; you might be using 2019, so just so that it's consistent, I want to use .NET 5. That means no matter where you are, you can still do this activity. I'm also going to choose the authentication type to be Microsoft Identity Platform.
So you see here you can choose Individual Accounts, you can choose Windows for local Active Directory, but you can also choose Microsoft Identity Platform, which comes pre-configured with certain things for Azure AD. I'll choose that one and leave it on HTTPS. We won't need to set anything else, so let's hit Create. Alright, so the first thing that hits me is that there's a missing component to this project — a tool which can be used to configure .NET projects to use the Microsoft Identity Platform. I can click Finish, but I'm getting this error. And to be fair, I don't think that this SDK is absolutely necessary, so I'm just going to go ahead and hit Cancel. Let's get started. So what did we get out of the box? Firstly, in the appsettings.json, we see that we have a configuration section that already has certain key elements that we would have seen during our configuration, right? We see here that the instance is login.microsoftonline.com, and then it's asking for the domain, the tenant ID, the client ID, and then a callback path. All of that is being initialized, or bootstrapped, in the Startup file — right here in ConfigureServices: AddAuthentication, using the OpenID Connect default authentication scheme, and then simply AddMicrosoftIdentityWebApp using that configuration section. It also has authorization that can be added, and then it is going to use the Microsoft Identity UI. Now, a lot of these things might seem unfamiliar to you — I do go through the identity configurations in depth in my User Security Essentials course — but for now, we will just be focusing on using what came out of the box so that we can see how the authentication flow works. Let's go between the app and the portal and fill out the blanks. So we want the application (client) ID — we can just copy that and replace it in the appsettings.json. So that is the client ID. The tenant ID refers to our directory, and there it is, right there.
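Putting the pieces together, the out-of-the-box wiring looks roughly like this — a sketch of the AzureAd section and the Startup registration, with the GUIDs as placeholders for the values copied from the portal (the template adds a bit more, such as a global authorize filter):

```csharp
// appsettings.json (shown as a comment for context):
// "AzureAd": {
//   "Instance": "https://login.microsoftonline.com/",
//   "Domain": "<your-tenant>.onmicrosoft.com",
//   "TenantId": "<directory-(tenant)-id>",
//   "ClientId": "<application-(client)-id>",
//   "CallbackPath": "/signin-oidc"
// }

using Microsoft.AspNetCore.Authentication.OpenIdConnect;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.UI;

public void ConfigureServices(IServiceCollection services)
{
    // Binds the AzureAd section above and configures OpenID Connect
    // sign-in against Azure AD.
    services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
        .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"));

    // Razor Pages plus the built-in sign-in / sign-out UI.
    services.AddRazorPages()
        .AddMicrosoftIdentityUI();
}
```

Nothing here talks to a specific endpoint directly — the instance and tenant ID are enough for the library to discover the rest via the metadata document.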
So you can just go ahead and copy that. And then our domain — if you don't remember the name of the domain, you can always go back to the Active Directory and go to the Overview, and there is a primary domain name. So we take that, and that's what we use for the domain. Now, typically with any business solution, the users would need to authenticate against the domain to be able to gain access — that's typically how it works when you protect an application with Windows authentication, right? In this application, based on this template, what is going to happen is that it's going to, by default, prompt us to sign in before it allows us into the application. If you're not signed in, you will be prompted; if you already have your account authenticated in the browser, then it will allow you to proceed. So here you're seeing my sign-in options as soon as I browse to the application — it's just asking me to sign in. If you look at the URL, you're seeing the tenant ID, the client ID — everything is being passed in, even the redirect, where it should go: signin-oidc. All of that was settled when we configured the portal, and those are query strings being passed in the URL, matching what the configurations are, right? So if I choose to sign in with the account that I've been using, then here I am, authenticated into our AD client app. So this is our web app, completely secured by Azure authentication. And if I sign out, they're telling me that it's a good idea to close all my windows to complete the sign-out. So in that situation I would want to just close them. Either I'm in the app or I'm not — I'm either signed in or not. So this is a nice, airtight way to secure your applications within your enterprise setting. Now I'm going to run one more test so we can see what's happening in the HTTP context once the user has signed in. What I did was sign in and set a breakpoint on the index page.
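Broken out, the sign-in URL you see in the browser is the authorize request; a sketch of its query string with placeholder values (the exact parameters vary by template version):

```text
https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize
  ?client_id=<application-client-id>
  &redirect_uri=https%3A%2F%2Flocalhost%3A<port>%2Fsignin-oidc
  &response_type=id_token
  &scope=openid%20profile
  &response_mode=form_post
  &nonce=<random-value>
  &state=<random-value>
```

Everything in it maps back to what was configured: the tenant ID picks the directory, the client ID picks the app registration, and the redirect URI must match one registered in the portal or the request is rejected.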
So once I'm signed in, I can actually add a watch, and I'll see here that this identity got created — the cookie got created with my identity. If you want to understand more about identities and claims principals, then check out my User Security Essentials course, where we go through ASP.NET Core Identity and how it works. In this situation, we're not using the local identity libraries; we're focusing more on a third-party OpenID Connect provider, which in this case is Azure AD, and it is just passing back that claims principal — or enough information for us to create that claims principal based on the information stored in the cookie. So when I open up that watch on the user — let me just pin this so we have a bit more real estate — we can see all of the claims coming back. And these are all bits of information that came back in that ID token from Azure AD. So it knows my name, it knows my preferred username; everything that we would have used to set up that user in the Azure AD account is being used in this situation. All right, so let me just sign out of that account and try again — let us use the other user that we had created directly inside of that AD account. So this time I'm going to use another account: the user at the custom domain that we had created. Let's hit Next and enter that password. And this time around, they're informing us what permissions are being requested by that third party: it would like to see the basic profile information, and it wants to maintain access to the data that you have permitted. So that's fine, I can click Accept. And then it hits our breakpoint again, where we get to look at that identity, and we can now look at the claims.
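As an aside, instead of a watch window you can dump the same claims from code — a minimal sketch in a Razor page model (logging them rather than rendering is purely illustrative):

```csharp
using Microsoft.AspNetCore.Mvc.RazorPages;
using Microsoft.Extensions.Logging;

public class IndexModel : PageModel
{
    private readonly ILogger<IndexModel> _logger;

    public IndexModel(ILogger<IndexModel> logger) => _logger = logger;

    public void OnGet()
    {
        // User is the ClaimsPrincipal rebuilt from the auth cookie,
        // which was populated from the ID token Azure AD returned.
        foreach (var claim in User.Claims)
        {
            // e.g. "name: ...", "preferred_username: ..."
            _logger.LogInformation("{Type}: {Value}", claim.Type, claim.Value);
        }
    }
}
```

This is exactly the set of values you see in the debugger watch — the claims are just key/value pairs carried over from the token.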
So this one is going to have maybe more claims. You see here the name is Jay Williams, because that's what we put in the profile in Azure AD, and the preferred username is the one at our custom domain. So you see: once you add somebody — a new user or otherwise — to your Azure AD, you are essentially permitting them to authenticate in your app against your Active Directory. So it gives you full management of your users, just like an Active Directory would in an on-premises situation. 37. Setup OAuth 2.0 Authorization Code Flow: All right, let's take our security considerations up another notch and look at implementing the authorization code flow with Proof Key for Code Exchange, or PKCE — "pixie" for short. Now, this flow is implemented when we have public clients that need to access our resources. In this situation, those public clients may not be able to securely store the kind of secrets being exchanged between, say, our web app and our authorization server. So what happens is that they generate a secret which they pass on to the authorization server, which then gives them back a code that says yes, you can access — containing enough information to make sure the request is coming from that particular source. I'm just on the Auth0 documentation on the same concept, so you can actually come over here, read up on it, and see when it is best to use this kind of flow. You'll see a diagram depicting the flow and get to do some further reading on it. But for now, let us look at implementing this flow using our Azure AD authentication. Let's jump over to our portal; from our Active Directory we're going to go to the app that we're using — the classifieds website — and then we need to go to Certificates & Secrets. So we click that one and head over to Client Secrets. You can just say New Client Secret, give it a name, and then set an expiration time.
Of course, based on your business needs you set one, but six months is the recommended period. And then once you've created it — you give it a name, whatever you call it — you can just take that value, so go ahead and copy it. Now in our code, we have that app settings block that we can just put everything in, and we have to put in these additional lines. You have to say UsePkce — or "pixie" — and set that to true, and then our response type, which we need to set to code. By default it would have been the ID token — it would have defaulted to that — so we have to explicitly say that we're now using code, and we are enabling PKCE. And then for that secret: because it is a secret, I don't want to store it here. So I'm going to put it in the secrets file. Right-click and go to Manage User Secrets, and that will bring up the secrets.json. There I'm creating an identical section for the config, and now I have the key ClientSecret with the secret value that was just copied from the portal. Now, once all of that is done, you can just do Ctrl+Shift+B to make sure that everything still builds. We don't have to change any code anywhere else. But when I go ahead and run, we get prompted to sign in. If you're already signed in, it would probably just go straight to the authenticated page, so I signed out of all the accounts. And remember that I'm using my personal account and the AD account that we created. So if I click that one, enter the password, and sign in, we can proceed as though nothing has happened. In the background a much more secure exchange has occurred, but the user, of course, is oblivious to all of that — they're still just authenticating the same way that they're used to. So that is really it for implementing the code flow with PKCE protection. Now, just for experimentation's sake, just to show that something really did happen:
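The extra settings amount to this — a sketch of the AzureAd appsettings block with the two added lines (placeholder values; exact binding behavior depends on your Microsoft.Identity.Web version):

```json
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "Domain": "<your-tenant>.onmicrosoft.com",
    "TenantId": "<tenant-id>",
    "ClientId": "<client-id>",
    "CallbackPath": "/signin-oidc",
    "ResponseType": "code",
    "UsePkce": true
  }
}
```

The secret itself goes in secrets.json as an identical section holding only `"AzureAd": { "ClientSecret": "<value-from-portal>" }`, so the secret never lands in source control.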
if I mess up the key — let's say I use a different key for that secret — and then I try to run the app: let me sign out of the already authenticated session and try to sign in again. Then I get this error about an invalid client, right? An invalid client secret was provided, plus some other information that, on reading it, really just tells me to contact somebody else. Of course, a user shouldn't be seeing a screen looking like this — this is only for dev mode. I'm just showing you that getting that secret right is paramount to this exchange being successful. Now, our last activity on this note would be to jump back over to the portal and disable the implicit flow. The implicit flow is less secure than what we're doing now, so we really don't need to keep it enabled. We can just untick the ID tokens, go ahead and save, and click Disable. And that's it — if we're not using it, it makes no sense to keep it enabled. 38. Secure API with Azure - Part 1: All right, so we're moving on. In this lesson, we're going to look at what it takes to secure an API using Azure AD. Now, the context for securing an API would be that when you have an API being accessed by public entities, you want to ensure that you are giving them tokens — bearer tokens — that contain strict enough measures to ensure that the correct app has the correct permissions and is doing the correct thing in your API. So this is really just going to act as an authentication scheme for our API, controlling who can do what based on the token that they get. So let us start off by registering a new app, and we're going to call this one — to keep it consistent — the classifieds API. We leave it on web; we don't need a redirect URL because it's just going to be an API. So we just register. And then, having registered, let us look at the permissions. If we jump into permissions, you see that the default permission is there. Let's go to Expose an API.
And what we're going to do is add an application ID URI, so we can set that. We're going to need that as the audience claim in the token, so we need to make sure that we have that. We save it, and we're going to come back for it later on. Next up we need to add a scope. Now, our scope basically says who can do what. So when I click Add a scope, it's going to ask for a name, it's going to ask who can consent, and it wants some other values. So let's fill those in. This scope is going to be all.access, and I'm just going to say that admins and users can consent. And then these values are really what are going to show up on that consent screen. I'll just go ahead and add that scope. And once all of that is done, the next step is to jump over to the code. All right, so in Visual Studio, let us create a new project. We're just going to add an API project; you can search the templates for API. And, following the same naming convention, this one is going to be Classifieds AD Auth dot API. Then we hit Next. Now, I am going to use .NET 5 as the project template. I don't need any authentication type, and I'll leave everything else at the defaults. That being said, of course, if you choose .NET 6, barring the difference between the Program.cs file and the new Program.cs file, you should have absolutely no problem replicating the bits of code needed. Now I'm going to jump over to NuGet, and I'm going to get a package for JWT bearer management. So I'm just going to browse and look for "JWT bearer", and we're using Microsoft.AspNetCore.Authentication.JwtBearer. Make sure that you're choosing the correct version for your .NET version; I'm going to go with 5.0.12 since I'm on .NET 5. Now let's jump over to the startup and start configuring our API to add authentication for a bearer token. So I'm going to say services.AddAuthentication, and in that I'm going to specify that we're using Bearer — don't make any mistakes with the spelling here: Bearer.
And then after that I'm going to say AddJwtBearer with options. And in that options block, we're going to have two options, at least for now. So option number one is the authority — try that again: audience, sure, no problem — and then we have the authority. So back in the portal, let us grab that application ID URI, and that is our audience value; we paste that here as the audience. And then for the authority, I'm just going to borrow, from the app settings of our previous project, the URL for the instance, which is this, and that — let me jump back — that, combined with the tenant ID, is our authority. The last thing that we want to do in this startup file is to go down to our Configure method and add UseAuthentication. All right, so with all of that done, let us take it for a spin. Changing the startup project to the API, when I run this, I get my Swagger doc that came by default with the .NET 5 web API project once you enable OpenAPI support. And when I try it out, we see that we get some data. Now let us test it with authorization in place. If I go back over to the controller — the weather forecast controller — add the [Authorize] attribute, and then try this again, when we go back to retest, we're going to get a 401 error, right? So now that we have authentication enabled, we need to start with that whole flow to get the JWT token before we're able to access the API. Alright, so let us jump back over to the appsettings.json file of the client. And we're going to add this one extra line that says SaveTokens, and set that value to true. Because what's going to happen is that when you authenticate in the client application, we're going to be getting a token from our Active Directory. And we need this token, which we will then pass along to the API to say "here is who I am, please allow me access". It's like a daisy chain, so we do need to save the tokens in between that transaction.
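Pulling the steps above together, the API's Startup might look roughly like this. This is a sketch; the tenant ID and the Application ID URI are placeholders you take from the portal, and the exact audience string depends on what you set under "Expose an API".

```csharp
// Startup.cs of the API project (.NET 5) – a sketch of the configuration described above
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // "Bearer" is the scheme name; JwtBearerDefaults avoids misspelling it
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                // Authority = instance URL + tenant ID, taken from the portal
                options.Authority = "https://login.microsoftonline.com/<tenant-id>";
                // Audience = the Application ID URI set under "Expose an API"
                options.Audience = "api://<api-client-id>";
            });

        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseAuthentication();   // must come before UseAuthorization
        app.UseAuthorization();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```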
Now, in the index page of the client — remember that by the time we hit the index page we will have been logged in; on opening the app, we're going to be prompted to log in, and that's courtesy of the default authorization policy that is being enforced here. This is forcing us to log in, so it doesn't really matter. But what we want to do in this index page is, one, inject our IHttpClientFactory. And along the way you're going to have to end up adding some usings, so you just use Ctrl+. and add the using statements, and make sure you initialize that field. Then in the OnGet, what I'm doing here is getting the access token. So I'm using the HttpContext and a method called GetTokenAsync, and the key by which we are accessing this data is called "access_token". So we'll just do that, and then we'll create a client and create a request — a GET request to the endpoint of our API. If you want to know how to get that endpoint for the API, just go to your API project, look in the launchSettings.json, and you want the HTTPS version of this URL with the SSL port, right? Once you have that, you append the slash weatherforecast. It's a GET, so that's the first endpoint that it's going to hit — frankly, the only one that's there anyway. And then we're adding an authorization header of type JwtBearerDefaults.AuthenticationScheme, where you would otherwise see the text "Bearer" in plain text. Of course, you want to avoid magic strings where possible, so I'm using the default from that constants class that is made available to us. And then we pass in the token value that we just got from the HttpContext. After that we send over our request, and then, if we didn't get a successful status code, I'm just going to log the error. But that is basically the operation. So what I'll do is set a breakpoint right here at the call where we send the request. And in the solution properties, I'm going to set multiple startup projects.
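As a sketch, the Razor Page handler just described might look like this. The port is a placeholder taken from the API's launchSettings.json, and the error-logging detail is left as a comment since the transcript doesn't show it.

```csharp
// Index.cshtml.cs of the client – a sketch of the call described above
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Mvc.RazorPages;

public class IndexModel : PageModel
{
    private readonly IHttpClientFactory _httpClientFactory;

    public IndexModel(IHttpClientFactory httpClientFactory)
        => _httpClientFactory = httpClientFactory;

    public async Task OnGet()
    {
        // "access_token" is the key under which SaveTokens stored the token
        var accessToken = await HttpContext.GetTokenAsync("access_token");

        var client = _httpClientFactory.CreateClient();
        var request = new HttpRequestMessage(HttpMethod.Get,
            "https://localhost:<ssl-port>/weatherforecast");

        // JwtBearerDefaults.AuthenticationScheme avoids the magic string "Bearer"
        request.Headers.Authorization = new AuthenticationHeaderValue(
            JwtBearerDefaults.AuthenticationScheme, accessToken);

        var response = await client.SendAsync(request);
        if (!response.IsSuccessStatusCode)
        {
            // log the error here
        }
    }
}
```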
I'm going to set both the API and the client app to start, and then let us start those applications. All right, so having run the applications, I logged out and logged back in, and we've seen that flow of tokens happening in the background. And if you look at what is coming back in my watch window, you see here that I am getting this value, which is really the token. Now, I just ended the debug session that I had active, and I went to Inspect Element. I'm on the Network tab, and I've set it to record and preserve logs, because I want us to see the flow of calls between our local application and the Active Directory service. So when we click Sign in, you see that a whole bunch of activity is happening. We choose our account, go ahead and authenticate, and we're getting this prompt — I'll just skip that for now, and no, I don't want to preserve that — and then we are signed in. But I do have the breakpoint, so when I jump over and look at what is coming back in that token, we see here that we are getting a value. Alright, so this value, generally speaking, always comes back; it's just that we never really tried to get it before, so that is our value. I'm going to take a copy of this value temporarily, and we're going to inspect it in a few; I'll just paste it in that window for now and let the application continue. So before we go and inspect that token, I want us to look at what exactly was happening between our two apps. The first thing you'll notice is that it hits the sign-in. Alright, so that's the sign-in from our local app, and that's a 302. Then it goes over to the authorize endpoint. Authorize is one of the endpoints that is in that well-known document, right? If you go back and look at that well-known document, you'll see that URL.
And so the query string parameters that are worthy of note would be — here we go — the client ID, so we know the client ID, that is the application ID that we had to take from the portal; the redirect URI, which is what we specified — that helps to prevent replay attacks and hijacking attacks, right? So we always want to put in that redirect URI, which says "here is where I am going back to after I've authenticated". Our response type is code — we recently changed that — and we have the scope. So this is saying that we're using OpenID and we want the profile information. Now let's inspect that token by going over to jwt.io, and on that site we can just paste our token. So let me go get it. And I got it with the quotation marks, so remove those quotation marks, take that entire block of string, paste it, and then it will show you the breakdown of all the information coming back inside of that token. So we know the audience and the issuer; we have the time to live, the expiration date, the issuance date, et cetera. We have information about the user: we have the user's name, their full name, their IP address — quite a few bits of information about this user. Now, what we want to accomplish is letting this token be passed along to the API so that we can get access to that endpoint. So I'm going to move that breakpoint down to the if statement, and I'm just going to refresh our index page, and then it hits the breakpoint. And we see here that we're getting a 401 response. So the token is very much present — we know that we are passing it along — but it is not allowing us access to the API. So at least we've solved one part of this puzzle. When we come back, we will look at how we can allow our app to start interacting with the API through the use of this access token. 39. Secure API with Azure - Part 2: All right guys, so we're back in the portal for this activity.
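To make the jwt.io step concrete, the decoded payload of an Azure AD token generally has this kind of shape. The claim names shown are standard Azure AD claim names, but the values here are placeholders, and a real token carries more claims than shown:

```json
{
  "aud": "<application-client-id>",
  "iss": "https://login.microsoftonline.com/<tenant-id>/v2.0",
  "iat": 1650000000,
  "nbf": 1650000000,
  "exp": 1650003600,
  "name": "Test User",
  "preferred_username": "user@<tenant>.onmicrosoft.com",
  "ipaddr": "203.0.113.10",
  "sub": "<unique-subject-identifier>"
}
```

The `aud` (audience), `iss` (issuer), `iat`/`exp` (issued-at and expiry), and the user claims are the fields referred to in this lesson.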
And what we want to do is to allow our Azure AD application — the Classifieds website — to know that it should request permission to the API. To do that, jump over to API permissions, and then we're going to add a permission. And remember that we briefly looked at this screen, where we could modify the Graph permissions and talk about application permissions — what anybody would be able to do. We can get very granular with the permissions and the information that is allowed across, whether with the most commonly used Microsoft APIs or any third-party APIs. However, in this case we want My APIs. So when we click My APIs, we want to give it the all.access permission — remember, when we set the API app up, we created that — so we want to allow that permission. Now, after adding that, if we click on all.access, we see here our actual scope value, so we copy this. And I just wanted to point out one very important thing before we move on, and that is that if we click on our app, we see here that we're actually setting up a delegated permission. If you look at the difference between delegated permissions and application permissions, the delegated permission needs to know who the user is. We know who the user is through the token — a while ago we inspected the token and we saw all of that user information, so we can just pass that along. So this permission will actually hinge on being able to say, "yes, I know who is accessing, via the information in the token". However, if you had, maybe, a service that you really wanted to just be able to access the API — an access permission without having user information in it — then you'd probably want to look at the application permissions, which would facilitate that kind of operation. So, to configure our app to know that it should add this scope permission value, we need to adjust our Azure AD settings once again. This time we're going to say Scope, and we're setting up an object body.
And this is very important, because this is actually a collection of values — this Scope node can hold multiple values that you separate into key-value pairs, right? So you just say Scope, open up the body, give it a name, and then give it that value. Afterwards, of course, in the startup, all of that is being ingested by this operation. And the expectation is that when we log out and log back in, the token that we get should now give us that scope value as the audience. If you remember, when we just inspected the token, the audience was all zeros — maybe one or two characters that weren't zeros. But now that should be replaced by our API's application ID URI as the scope of access; that is now our audience. So let us start. And you want to ensure that you log out and then try to log in again every time you make configuration changes, because it's almost like you're changing the rules of engagement — the agreement between your app and the authority. So when we sign in — and I actually completed this step before — what you would get is a new consent screen, asking if you are sure that you consent to having access to the new scope. You can just go ahead and accept that consent, and then you proceed. Now, when it hits the breakpoint this time around, if we take that token and inspect it on jwt.io, we can see that the audience value now has that API scope value, right? And then if you scroll down, you'll see the subject — that's like a unique identifier — and you'll see the scope is there. So all of those values are now coming back. And when I go back and look at the response, I'm seeing that I'm now getting a 200, which means it is no longer seeing me as an unauthorized person — we were getting a 401 before. Now it is seeing me as a valid user, and this is a valid request. Now let's just confirm that it's actually talking to the API. So what I'm going to do is set a breakpoint in the controller for the API.
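As a sketch, the adjusted client configuration described here might look like the following. The shape of the "Scope" object and the key name inside it are assumptions based on the transcript; the value is the scope string copied from the portal:

```json
{
  "AzureAd": {
    "Scope": {
      "ClassifiedsApi": "api://<api-client-id>/all.access"
    }
  }
}
```

The point being made is that Scope is an object, so you can list several named scopes as key-value pairs if the client needs access to more than one API.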
And we know that the homepage call is what calls the API — and there we go, it's hitting the API successfully. Now, that covers the basics of calling an API with an access token. When we come back, we're going to look at some more OpenID Connect features that we have available to us. 40. Setup 'On Behalf Of Flow' (API to API Communication) w/ Token: All right guys, so in this lesson we are going to be discussing downstream API communication. Essentially, this has to do with machine-to-machine communication, or API-to-API communication. Think of a microservice architecture, where you, as the user, might initiate a call through your web interface, and that call calls an API — we already covered that part — but then that API has to call another API, and you are not involved in that particular communication. So there is a mechanism that we can implement to facilitate that kind of API-to-API communication, which, even though it's machine-to-machine, will actually be happening on your behalf as the initiating user. Before we move on, though, I just wanted to show you that I went ahead and changed some of the words and phrases being used in the API, since "classifieds" is the recurring theme here. So instead of relying on weather forecasts, I changed WeatherForecast to ClassifiedListing, changed the summaries to some other values, and, in the class file for ClassifiedListing (which was WeatherForecast), renamed some of the properties. If you want to do that, then that's fine, no problem; otherwise, that's not the focal point of why we're here. So let us say we need another API in this same solution. Let's just create that API, and I'm basically giving it the same name, except this one is going to be dot API 2. Hit Next, and the same rules of engagement apply. Now, with that API created, we want to retrofit it to follow the same authentication scheme that we used on API 1. So I'm just going to copy this from the startup of API 1.
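The renaming described above might look something like this. The property names here are illustrative, since the transcript doesn't list the exact replacements:

```csharp
// Formerly WeatherForecast – a hypothetical renamed model
using System;

public class ClassifiedListing
{
    public DateTime DatePosted { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
}
```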
We can jump over to the startup of API 2, paste, and go ahead and include any missing references and libraries accordingly. One thing, though, is that we remove the audience value, because, remember, the audience value is unique to each API. The next thing that I'm going to copy is the code that calls the API, which is originally in our web app. So I'm just going to copy that from the client, and I'm going to paste it inside of our classified listings controller. So yes, we call the API, but what I want to simulate is that when you hit this particular endpoint, it has to call another API. So before that default code, I'm going to put in code that handles that API call. Of course, for the HTTP client factory, we need to inject our IHttpClientFactory into the constructor of the class and then add the missing using references. So what we're simulating here, once again, is that when we browse through our client, we're going to call API 1 — this is API 1, and we know that we're calling this endpoint — and API 1, to complete its operation, needs to call API 2. That is what we are going to simulate here. And there are certain bits that are going to have to go in, but we'll get there soon enough. Now let's jump over to our AD dashboard, go to App registrations, and create a new app which will represent our second API: Classifieds API 2. We'll leave it on web and register. Let's then jump over to Expose an API, and we are going to be adding a new application ID URI. That has to be taken, so copy that and save it. And while we're here, let's go ahead and add a new scope. This one we can also call all.access, with admins and users able to consent; fill out the consent display fields and then add the scope. Now, once that's done, we copy this scope to the clipboard and then jump over to our first API. So we have to go back to Classifieds API, and then we're going to have to give our API permission to call the other API. So we add a permission.
And just like how we had to set this up with the Classifieds website to talk to the API, we have to set up API 1 to talk to API 2. So I'm going to go over to My APIs, and then I will select that API 2 all.access scope, so that API 1 has delegated permission to access API 2 on behalf of the user. So we'll just add permissions. Now, before we move on from this particular screen, there is this option that says "Grant admin consent for" this domain. Pretty much what this is saying is that for any user in this domain accessing this application, we're granting admin consent; they don't have to see that consent screen and accept again. So I can actually do that, and everybody, once they're logging in, can just bypass it — it would be implied that they've already accepted and given consent to the fact that these security policies are being applied. Next up — and at this point we're really just repeating some of the steps that we've done before — we need to set up our client secret, because we are going to be doing a code flow. So we give this secret a name, click Add, and ensure that we take a copy of this client secret value. Now let's jump back over to our code, and in the configuration in the startup file, we're definitely going to set that audience to the scope value, or the application ID URI value, for API 2. So, just another teachable moment before we continue writing code: what we're about to perform is an on-behalf-of flow. Now, this is an OAuth 2 protocol, or mode of communication — it's not unique to Azure AD. Azure AD, just like any other OpenID Connect provider, simply facilitates this kind of operation. So you can read up on the details of it. But I wanted to scroll down to what the request is going to look like and what it requires. So we have to give a grant type, we have to specify the client ID, we need a secret (we already got that), we have an assertion, we have a scope, et cetera.
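From the Microsoft identity platform documentation, the on-behalf-of token request that this lesson builds up has roughly this shape. Everything in angle brackets is a placeholder, and the line breaks in the form body are added for readability; note that, per the documentation, the client ID and secret belong to the calling (middle-tier) API:

```http
POST /<tenant-id>/oauth2/v2.0/token HTTP/1.1
Host: login.microsoftonline.com
Content-Type: application/x-www-form-urlencoded

grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer
&client_id=<calling-api-client-id>
&client_secret=<calling-api-client-secret>
&assertion=<the-incoming-access-token>
&scope=api://<downstream-api-client-id>/all.access
&requested_token_use=on_behalf_of
```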
So all of these things we have to put into a whole request before we actually try to talk to API 2. So let's get to coding. I'm going to close all tabs that are not immediately important, and in API 1, jump down to NuGet packages and get this package called IdentityModel. Now, this is a library that has a lot of extension methods to help us with OAuth — there we go, I fixed the spelling: IdentityModel — and this is what we're looking for, by Dominick Baier and Brock Allen, ninety-eight million downloads. So this library has a few extension methods to help us with our operation. While it could be done manually, of course, you want to make use of the third-party libraries that make it much easier for us to execute. Now let us review again, within the context of the API, what exactly is happening here. The website called API 1 and hit this endpoint. API 1 has the need to call API 2, but in order to make that call, it needs, one, the original token that was used for access — which is why we're still getting back an access token here, even though we're in the API. But then we need to go to the token endpoint and get another token. So I'm going to say var tokenFromEndpoint is equal to, and then I can await my HTTP client — sorry, I need to do that after initializing the HTTP client. There we go. So my HTTP client has a method RequestTokenAsync, which is coming from our IdentityModel library, so Ctrl+. and add the missing reference. And this takes a parameter of type TokenRequest. So I need a new instance of TokenRequest, and I'll just open up that object initialization block and move it to the next line. So inside of this token request, we're going to say Address is equal to the URL where we know we get the token from — we'll circle back to that in a bit.
We also need to specify the grant type to be used, which, coming from the documentation, is actually a static value that is always used for these kinds of operations. So you can just go back to the documentation, and in that example you see grant_type is equal to that value, so you copy that, and that is what we're going to paste in the code. Next up we have the client ID and client secret — we noted the secret earlier, and we're using the API 2 client ID. Now, for the token endpoint, or the address, we are using the endpoint for tokens, which we can find from our well-known document or from the Endpoints blade. So if we go back to our apps and click on Endpoints, you will see that token endpoint. Just copy that, and that is what we're using as the address. So we're looking good so far, but we have some additional parameters that we do need to include. The parameters block is going to look something like this: Parameters is equal to — and this is really just a dictionary of string-to-string key-value pairs. So what we have here would be the assertion. And for the assertion, well, I don't need a static string here: our assertion would be the token. I'm going to name this a bit more clearly, so this would be currentToken, meaning the token coming in from the calling application — the token that we have access to right now. And the other two would be the scope, having the value of that scope — the application ID URI for API 2 — and the requested_token_use, set to "on_behalf_of". As these are magic strings, be careful with the spelling at all times. Now, one quick correction: this client ID should not be the ID of API 2. It should be the ID of the calling API, so API 1. So let me make that quick adjustment — this client ID should be API 1's, the one making the call, right?
So this is API 1 — the client ID of the application that's making the call. And then down here we have the scope relative to API 2, which is the API we're calling. The next modification is to the line where we're adding the bearer token. We're not going to be adding the current token; instead, we're going to be taking the token from the endpoint response — and I probably should rename that, since it's the access token coming from this call. So let us call this tokenEndpointResponse; that's a better name, just so it's very clear what each variable stands for and what each object is doing. Fix the misspelling — there we go. So tokenEndpointResponse is what comes back from making this call to the endpoint, and it gives us the access token that has now been granted, which grants access to the next API. Just ensure that you also update the request URL — now I'm going to port 44313, the address of my new API, and I'm hitting that with the forecast endpoint, which, as we know, comes by default. So what I'm going to do is set a breakpoint right here at the response. I'm also going to make sure that breakpoints are in the other parts of my apps. We know that this API, API 1, is making the call to API 2, so I'm going to go over to API 2 and its controller and set a breakpoint there, just to make sure that it's actually hitting that endpoint. You also want to make sure that in the Startup.cs you are registering the HTTP client — that is because we're using that client factory. And, on all points of information, once again make sure that you go and set all three projects as startup projects. So we land on our first breakpoint; that indicates that we're now in our API 1 controller — this is the classified listing controller, right? So let me go ahead and press F5 so it can continue, and then, boom, it hits the second API's endpoint. Actually, I did that wrong.
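Putting the on-behalf-of pieces together, the code inside API 1's controller action might look roughly like this. It's a sketch: the IDs, secret, and port are placeholders taken from the portal and launchSettings.json, and, following the correction above and the Microsoft documentation, the client ID and secret are those of API 1 (the calling API).

```csharp
// Inside API 1's controller action – on-behalf-of flow via the IdentityModel package
using System.Net.Http;
using System.Net.Http.Headers;
using IdentityModel.Client;
using Microsoft.AspNetCore.Authentication;
using Microsoft.AspNetCore.Authentication.JwtBearer;

// The token that the client app sent to API 1
var currentToken = await HttpContext.GetTokenAsync("access_token");

var client = _httpClientFactory.CreateClient();

// Exchange the incoming token for a token valid against API 2
var tokenEndpointResponse = await client.RequestTokenAsync(new TokenRequest
{
    // Token endpoint from the portal's "Endpoints" blade
    Address = "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token",
    // Static grant type for the on-behalf-of flow (from the documentation)
    GrantType = "urn:ietf:params:oauth:grant-type:jwt-bearer",
    // Credentials of API 1 – the API making the call
    ClientId = "<api-1-client-id>",
    ClientSecret = "<api-1-client-secret>",
    Parameters =
    {
        { "assertion", currentToken },
        // The scope exposed by API 2
        { "scope", "api://<api-2-client-id>/all.access" },
        { "requested_token_use", "on_behalf_of" }
    }
});

// Use the newly issued access token when calling API 2
var request = new HttpRequestMessage(HttpMethod.Get,
    "https://localhost:44313/weatherforecast");
request.Headers.Authorization = new AuthenticationHeaderValue(
    JwtBearerDefaults.AuthenticationScheme, tokenEndpointResponse.AccessToken);

var response = await client.SendAsync(request);
```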
What I need to do is try this process one more time. So what I want to do is look at the access token that was gotten from the endpoint, right? Once again: the client app called API 1, API 1 got hit here, then we used the current token to make a call to our token endpoint, sending in all of these parameters, and we got that access token at the end of the call. And I think that I may have lost it in this debug session, so if anything, we'll just test again. Okay, there it is. So here's that new access token. I'm just going to copy that value and jump over to jwt.io. If you copied it the way I just copied it, just make sure you clean it up before you try to paste it; otherwise, you'll end up like this. Put it in Notepad and you'll see the extra information in it that I don't need. Now, when we observe what comes back in this token, we see here that the audience is now set to API 2 — that was the URI for API 2. We can see the app ID, and we see all of these values coming back. And somewhere in there you'll still see some user information. So, like I said, it's acting on behalf of the user from one API to another. And that's really how you do requests on behalf of a user, or allow API-to-API communication, in a very secure environment. Once again, this would be perfect for a situation like microservices. 41. Section Review: All right guys, welcome back. And that concludes this section on Azure AD — and what a journey that was. So, just as a quick recap of what we looked at: we explored Azure AD. We looked at how we can spin up our own Active Directory separate from the one that we probably have by default, and how one user can be in multiple directories. We also looked at how we can manage users — you can set up groups, roles, administrators, all of these things, all from this dashboard. Further to that, you can go ahead and register apps, and these apps can be a web app or an API.
Of course, the template is, generally speaking, kind of the same, but there are particular options in between that help you to differentiate what the application will be used for. In our adventure, we had a website application and we had two APIs. We also looked at how we delegate permissions between the apps themselves, versus how the users interact with them. We looked at the different kinds of flows in terms of how we can authenticate from our web application against Azure AD, and we also looked at how we use access tokens to pass along between resources, like from one API to another API. So it was quite a bit of fun. I hope you learned something new, and that you can go ahead and apply what you've learned here. Once again, our implementation of Azure AD features is more geared towards an internal situation, or at least an enterprise situation, where a company needs to interact with an Active Directory, or multiple companies need to collaborate on the same Active Directory — it can support those kinds of scenarios. And we've seen how we can write applications to interact with this kind of security. 42. Section Overview - Azure AD B2C: Hey guys, welcome back. In this section of the course, we're going to be looking at Azure AD B2C. So I have the portal open and I searched for Active Directory; you will see that you have a number of Active Directory options. But the ones that I want to point out are Active Directory B2C versus Active Directory. So we just went through a whole round of looking at Active Directory, setting it up, and how it's used. Now we're going to look at the B2C version of that. When we come back, we're going to compare them and see what exactly is different between them from a theory perspective, and then, through the rest of this section, we're going to go through and look at how the integration differs. 43. Azure AD vs Azure AD B2C: So we have gone through quite a bit looking at the Azure Active Directory service.
Azure AD, for short, is an identity service provider aimed at giving organizations control over user access to resources, and we've seen how that works. We've seen that we can go and set up users, register them in the portal, and see what they can do and what resources they can and cannot access based on their account. And we've seen how that token can be passed along to all of those application resources accordingly. Now, you would have also heard me mention B2B — that is Azure AD B2B, which is just another offering on top of the Active Directory that we already know and love, and it allows for cross-organization collaboration. So different entities, different companies, or third-party individuals that are not directly inside of your enterprise setting can actually be given access, on a temporary basis, to certain resources. That's what Azure AD B2B brings to the table. Now, when we talk about Active Directory B2C, that really means business to consumer. So the first level is for a single business, B2B means business to business, and B2C means business to customer. In short, this is a very similar offering with very similar security features, but this one would be used for external-facing customers whom you wouldn't necessarily want to have to manage in a portal all the time. You allow them to self-onboard, and you can control what they can and cannot see at a global scale. Both ultimately allow you to set up many applications that can be accessed through one set of credentials, all backed by our Azure Active Directory B2C offering. It does not offer as many authentication flows as Azure AD; however, it does offer you the ability to customize your interface and customize how the user experiences the onboarding process. So this is what we're going to be working with for this section. 44. Provision AD B2C Application: Now let's create our Azure B2C instance. So I'll just go ahead and hit Create, and I'm creating a new tenant.
Go ahead and fill out the information — your organization name and a domain name, of course. You'd want that to reflect either the application that you intend to support with this instance or the organization that you represent, because it could be that you have multiple apps that will be using this one instance of Azure AD B2C to facilitate a single sign-on across a suite of apps. So you want to make sure that the name is representative. Then you choose your subscription and select a resource group. When you're done, go ahead and hit Review and Create — and I'm getting this error. This error says the subscription is not registered to use this namespace. So I think I'm going to have to get my hands dirty here. All right, so after some experimentation in the shell, what I had to do was run the command that registers that namespace inside of my current subscription. You just need to run `az provider register --namespace`, followed by the name of the namespace as it is printed out in that error message. Afterwards, it will tell you that the registration is ongoing, and you can run the command again to see some of the details depicted here. So after doing all of that, I'm going to start over this process. If you didn't have to do all of that, then I'm sure you're a happy camper right now. And in redoing all of this, I just wanted to point out the fact that they do tell you how much of it is free: the first 50,000 monthly active users can access this for free, and you can view additional pricing details accordingly. So let me try and create this again. And after a few minutes, we have our new tenant, which we can browse to from our all resources, and then I can open the B2C tenant. Once here, you will see that kind of familiarity that you would have with this interface based on our experience with Azure AD. You see here that we can still register apps, and we have a few more options.
For instance, we have these identity providers, which are the ones most commonly used when you want to authenticate on an application on the internet; out of the box, B2C supports authenticating against them. You can add your company branding, so you can actually configure and customize how that whole experience will look to the users. And you do have full-fledged user management over everything that happens here. So like I said, it has some limitations compared to the Active Directory, but for customer-facing apps I think it is still very powerful and just as secure. While we're here, I wanted to go back to the identity providers blade, where we have the list of all the providers. Let's start with Local account. You see here that I have the ability to state what I require to know about a user who is signing up using a local account. A local account would be just like the regular setup you get if you spin up a .NET Core app and say you're using local accounts, where you can customize what that user needs to provide; that's pretty much what is happening here. So by default, I will need the email; I can say I want a phone number; I can say I want a username separately. Now if I look at one of the third-party ones, like Facebook for example, you're going to see that it comes with the predefined requirements for Facebook integration. So if I wanted Facebook authentication or Google authentication, et cetera, then I would definitely have to go onto one of those websites, register an app, and get that client ID and client secret, which are phrases that we're more familiar with, right? So the same standards that we would have been enforcing in our app via Azure AD are the same standards that all of these OpenID Connect providers enforce also. Each one has its own set of requirements. Google, I think, is much like Facebook, but then maybe Twitter would have something different. No, it's the same thing, essentially. You can just look through and see what each one requires.
Now, we can also go and add another provider that is not predefined. And of course, we would have to fill in a few more things, because B2C doesn't know what to expect or what that provider gives back; it presents you the template of what is expected, but it doesn't know exactly what values will be available, so you would have to fill in more. So that's really how you would go about allowing LinkedIn authentication, Google authentication, et cetera, through your Azure AD B2C offering. So even if you don't want to use local accounts, you could still provision this and outfit each of these providers with the client ID and client secret. Your users will still feel that they are just authenticating with Google, which they are; it's just in a federated mode against your Azure AD B2C. Another thing that we can look at while we're here is user flows. User flows pretty much refer to the stages, or the steps, or the actions that we're allowing users to have. So you see here, we can select a sign-up and sign-in flow, a profile editing flow, a password reset flow, et cetera. So let's say that I wanted a sign-up and sign-in flow. I'm going to go with the recommended option and hit Create. Then I'll provide a name, and I'm just using the same naming convention so I'm consistent with the app name in this section. Then we can say we want local accounts. Because we did not allow any other identity provider (we didn't enable any others), the local account sign-up was automatically enabled, and that is what we see here. If we had enabled Facebook, Google, et cetera, we would have the ability to select all of the ones that we want to allow the user to use during this flow. So I'm just going to go ahead with the default email sign-up. Then I can choose the method by which I want the multi-factor authentication to happen, via email or phone call, et cetera. Do I want to enforce it? I can make it conditional, et cetera. I'll just leave it off for now.
And then I'm going to ask: what are the token claims that I would like to support? Here, we can choose what we want to collect from the user. The collect attributes refer to what we are asking the user to tell us: what is your given name, what is your surname (so the first name and last name), maybe we want a country, the email address. Well, email is already implied, which is why the return claim there is not clickable. What the return claim column represents is this: after we get all of this information from the user, what do I return as a claim with the token? So if I said I want back the given name and surname, while I'm also asking for a city but don't need to return the city, that means the token is going to return the first name and the last name, even though the user is providing all of these at sign-up. Of course, you can always add more; these are some preset ones that you can include in that sign-up form. In this list, you'll also notice that you have a mixture of some that you can choose and some that you can't, on either side, right? So you can always choose to collect more things and return them if necessary. So I'm just going to go ahead and click OK and create that user flow. After creating it, if we wanted to modify it, we can always click on it and we'll see all of those options. You can go to the properties and change the password configuration, set up age gating (that means, you know, checking, making sure persons are of a certain age), and we can even change the page layouts if we want. That would allow us to set up custom pages, even set up a template for what it should look like. It's a very, very powerful and highly customizable tool and feature that Azure AD B2C offers us. So now that we've set up the tenant and we've also set up a user flow, let us continue. And when we come back, we will look at how we can retrofit an application, really any application, to authenticate using Azure AD B2C. 45.
Authenticate using .NET Core Application: All right, so in this lesson, what we're going to be doing is retrofitting our .NET Core application to use Azure AD B2C for authentication. The first thing that we're going to have to do is register an application, and this will be a very similar flow to what we would have gone through with Azure AD just now. The registration form is going to be very similar. So I'm just giving it a name, and we want to choose the very last option, which allows for any identity provider or organizational directory. And with all of that, we can just go ahead and hit Register. So this should all feel very familiar to you; it's pretty much the same interface from when we were setting up our app in the previous section. I do need redirect URIs and some kind of credentials. I'll just go ahead and add that redirect URI, and that one would be for web. And our URLs are going to look pretty much the same way they did in the previous section because I'm going to be using that same app. I'm not going to enable access or ID tokens, because we do want to use the code flow. After doing that, I'm going to jump over and register a secret also. So: new client secret, give it an appropriate name, and then click Add. And as usual, we take a copy of that while we have it. Now, jumping back to our Overview blade, let's look at the endpoints. If we take a look, we're going to see that they do look slightly different; you'll see that they have this thing called the policy name in the middle of them, right? That policy name actually refers to the user flow that you would have created. If you want to go back and look at that user flow, just to remind yourself, that is the name of the flow. So technically, that's a policy name. So if we wanted to see the well-known document for this application, what we would have to do is put in the URL, slash, the name of the user flow, and then the rest of the URL.
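In other words, the B2C metadata URL follows this general pattern, where the tenant and policy names are placeholders for your own values (depending on when your tenant was created, the portal may instead show a login.microsoftonline.com host; go by whatever the Endpoints blade displays):

```
https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/v2.0/.well-known/openid-configuration
```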
So when we do that, I'm going to end up with the URL, slash, B2C_1_ followed by the user flow name, since that was the name I gave it earlier. And that's the well-known document. You can see it's much better formatted than the previous one, and it has a bunch of information, as you would expect. It has all the endpoints, as we know and expect. You'll also see all of the response types listed, including code. Now let's bring up our app and start modifying it to use our new B2C instance. What I'm going to do firstly is make a copy of this configuration block. I don't really want to lose it, so let me just comment this out, and underneath that, I'm going to paste. And just so that we know one is Azure AD and one is Azure AD B2C, I'm going to rename that section and update the reference in the startup so it gets the appropriate section. Now let's look at what we're going to be modifying here. Just a warning: I'm going to be bouncing between the portal and the configuration, right? So, the instance name; let me just double-check. Yes, the instance is no longer login.microsoftonline.com; it's the custom URL for my instance. My domain would be my tenant name followed by .onmicrosoft.com; that's my domain. Then the tenant ID, of course, changes to the new tenant ID. And I'm just doing it line by line so you can see exactly where I'm getting these values. The client ID, we get that from the dashboard also. You can leave the callback path, we can leave it to use PKCE and the response type of code, we do want to save our tokens, and we want to update our app secret. So let me jump over to Manage User Secrets, and I'm going to just create another section, which I'm going to call AzureAdB2C, where I will also paste the new secret. I won't bother to comment the old part out; that's fine. And we can go ahead and save all of this. So you see, nothing much changed. One thing I do need to remove, however, is the scope.
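As a quick aside, if your project happens to use the Microsoft.Identity.Web package (an assumption on my part; your app may instead be wiring up the OpenID Connect middleware by hand), pointing the startup at the renamed section is roughly a one-liner:

```csharp
// In Startup.ConfigureServices: read the renamed "AzureAdB2C" section
// instead of the old "AzureAd" one. The section name must match whatever
// you called the block in appsettings.json and user secrets.
services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
    .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAdB2C"));
```

Either way, the important part is that whatever reads the configuration now points at the B2C section rather than the Azure AD one.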
To me, that was an oversight on my part, so let me just remove it while we're here; we don't need any scope. Now, I'm going to set the client project as our startup project, since that's the only one that we need in order to test and verify, and I removed the commented code so it's clean; we start with a clean slate and test it out. And what we get here is an error. Now, at first sight, this can be a smokescreen. It suggests that it couldn't retrieve a document, and a similar error to this is displayed in a .NET Core app that has faulty app settings. However, if you go down into the raw exception details and read, you'll notice that everything it is saying relates to it trying to retrieve some configuration for OpenID Connect, right? So the error is not with the app settings themselves; it's more about the retrieval of the well-known document from the OpenID Connect server. What I'm missing from my app settings is this line that says SignUpSignInPolicyId, which takes the name that we had to go and retrieve when we wanted to see the well-known document. So you can go ahead and add that line and then try again. And this time we're presented with a sign-up screen. If you look at the URL, you see that it is going to our new B2C URL and domain, et cetera, right? So it's going to the authorize URL from that well-known document. So if I go ahead and put in an email address... well, actually, I can't put one in because I don't have an account, so I'll hit Sign up now. And that allows me to put in an email address, a password, and some basic information. As we said, we wanted a city, we wanted the given name, we wanted the surname. So you see, my browser is even auto-filling all of that form, because those are just common fields. I can send my verification code, and when I retrieve and enter it, I'm verified. After all of that is done, I can hit Create. And this is very, very unfortunate: now, I have an error.
And because of the mode I'm running in, I'm not seeing the details of the error, but I can tell you what it is. What happens is that, by default, Azure AD B2C is not sending over an access token, which is weird considering that we just went through that whole flow of Azure AD being expected to send over an access token. Nonetheless, we can correct that by adding a scope, the same way that we had to add a scope section to the previous configuration block. I'm just going to retrieve that particular configuration block, but this time it's not necessarily an API scope; it's actually just the scope, and the value here would be the app's client ID. So let's try that again, and this time we're signed in. So if you go through that whole flow again (if you have to go through the whole flow), you will get signed in. Notice it doesn't have anything to display; that's because I didn't tell it to give me back the exact information that I know my app is looking for. Granted, I do have the user claims with a bunch of fields that I could use instead. So I just put a breakpoint in the index page, and I'm going to refresh. Remember that we have the code here that got the access token; I'm also just going to show you what comes back in the user claims object. So if we go to claims and the results view, we can see that we get back the claims for the given name and the surname, we get back the name of the sign-in/sign-up flow, and the auth time. So we do get quite a bit of information. Once again, we could extend this information based on the bits of information that we would have ticked off as part of the user flow. By the way, we also get back the token, which, if I just take a look at it, I can see contains bits of information that I am very familiar with. So you see here, all of those things are there. So once again, Azure AD B2C does support the same kind of token auth, barring one or two additional configurations, and some that are not required.
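Pulling together everything we changed in this lesson, the final AzureAdB2C configuration ends up looking roughly like this. All values here are placeholders, and the exact key names depend on the authentication library your app uses, so treat it as a sketch:

```json
{
  "AzureAdB2C": {
    "Instance": "https://<tenant-name>.b2clogin.com",
    "Domain": "<tenant-name>.onmicrosoft.com",
    "TenantId": "<new-tenant-id>",
    "ClientId": "<app-registration-client-id>",
    "CallbackPath": "/signin-oidc",
    "SignUpSignInPolicyId": "B2C_1_<your-user-flow-name>",
    "Scope": "<app-registration-client-id>"
  }
}
```

The client secret itself lives in user secrets rather than in this file, just as we did in the lesson.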
So you know, it's a mix and match, but at the end of the day, it's basically the same protocol and the same level of security, whether you're building an internal enterprise application or an external business-to-consumer application. 46. Section Review: All right, so that's really it for this section. What we have looked at is the fact that the steps towards setting up Azure AD B2C are very similar to setting up Azure AD; granted, in my opinion, it's a bit simpler, because there's a lot less overhead and fewer things to consider with certain configurations. But at the end of the day, we've seen that they comparably offer the same service, just for different categories of customers, whether internal or external; B2C is for external customers. We also looked at the slight differences between the configurations needed for Azure AD B2C in your .NET Core application. Namely, first, we need that sign-up/sign-in policy, which is the name that we give to the user flow, wherein we control the bits of information that we expect to exchange between our system and the user. And finally, we need to add the app scope, which is really just the client ID, and that allows us to get the access token as needed. Ultimately, however, the management of our Azure AD B2C is very similar to Azure AD, so you could use them simultaneously and not feel like you have to learn two entirely different things. And we do get more customization as it relates to the look and feel, though we get fewer options for certain configurations; I think it all balances itself out. It is very powerful and it does what we need it to do, including prepackaged integrations with other very popular OpenID Connect providers. So with all that said and done, I am very excited to see what kind of app you build and how you secure it using Azure AD B2C. 47.
Conclusion: Well, with that, I want to thank you for coming on this journey with me, and I do hope that you would have learned a lot about Microsoft Azure and how it can integrate into your .NET Core applications. Now, there are a lot of little nuggets here and there, and Azure is pretty much a moving target in the sense that they are always improving on the platform, always modifying the user interface and the feature offerings. So there might be things that you would have done side by side with me where the button wasn't exactly where I clicked it, or the verbiage was slightly different, but the core concepts and the core offerings of the technology remain fairly consistent, and they're always being improved upon. That being said, I encourage you: go ahead and play around. Build an app end to end; integrate with your Blob Storage; integrate with your Azure Functions, with your Azure AD. All of those things are at your disposal. Spin up a few Virtual Machines and see how they work. Deploy an app; use your App Services. You know, I'm just giving you ideas, but at the end of the day, be creative and use Microsoft Azure to power your application and make it the best that it can be. Thank you once again for sticking with me and coming on this journey. And I'm excited to see what you build next.