Transcripts
1. Introduction: In this module, we're going to be looking at Azure storage accounts. For this module, we're going to explore the Azure storage account offerings. We're going to look at Blob Storage, Table Storage, and Queue Storage, and ultimately we're going to develop a solution that will interact with each of these components of Azure Storage. So as usual, we're going to go ahead and create our storage account, look at the different options, explore all the nuances, and explore how you can integrate these offerings into your own solutions. So stay tuned. This is going to be a fun module.
2. Understanding Azure Blob Storage: To kick things off, we're going to start by looking at the Azure Blob Storage offering. Now, Azure Blob Storage is a cloud-based service for storing unstructured data like text files, images, videos, etc. It is a scalable, secure, and very cost-effective solution for storing large amounts of data, and it supports security features such as encryption, authentication, and authorization. There's built-in integration with Azure AD, and your data will be encrypted both in transit and at rest. Data can be accessed via HTTP or HTTPS protocols anywhere in the world, and that makes it a suitable candidate for CDN scenarios. CDN stands for Content Delivery Network, and because of this flexibility, Blob Storage is perfect for that. Integrating or building apps to support the Blob Storage offering is very easy if you're using .NET, Ruby, Java, Python, and several other frameworks; you can always go and check whether the appropriate SDK is there for your stack. But of course this is a .NET developers' course, and it has us covered.

Now, it's commonly used for scenarios where we want to serve images or documents directly to the browser, where we need to facilitate distributed access to a set of documents, if you want to stream content like video and audio, if you want to store log files and continuously write to those log files, or if we want to store data for backup, disaster recovery, and archiving. Now, if you're into data science and big data, you'll realize that there is an offering called Azure Data Lake Storage Gen2, which is Microsoft's enterprise big data analytics solution for the cloud. This offers a hierarchical file system as well as the advantages of Blob Storage, and those advantages include the fact that it's low cost, it has tiered storage availability, strong consistency, and disaster recovery capabilities.
Now, we just mentioned storage tiers. There are three kinds of storage tiers. The first one that we look at is hot. The hot storage tier is used for frequently accessed data, and as such, you'll see that it's the most costly option. So if you know that you're always going to the Blob Storage to access a file, maybe videos that are being streamed or audio that is being streamed, then you would want it on the hot storage tier. Then we have the cool storage tier, which is used for less frequently accessed data, and of course it is a little less costly than hot because the traffic is not there against the file or files. So whenever you try to access a file, it may take a little longer to load, but this is better for files that are to be stored long-term and not interacted with as much. Then we have the archive tier, and this is for rarely accessed data. Generally speaking, organizations need to keep files or records for up to seven years, so you'd want to archive files, maybe after the third year, into archive storage because you won't be accessing them as much. And if you do go to access them, it is going to take much longer: it has to rehydrate that file or that blob before it can bring it back. So of course, that tier is used for extended backups, and it is the least costly option.
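As a quick illustration of how a tier decision plays out in code: with the .NET SDK you can move an individual blob between tiers after the fact. This is just a minimal sketch assuming the Azure.Storage.Blobs package, with placeholder values for the connection string, container, and blob name:

```csharp
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

// Placeholder values - substitute your own connection string, container, and blob name
var blobClient = new BlobClient("<connection-string>", "<container>", "report-2020.pdf");

// A file we no longer read often can be demoted to the cool tier to save on storage cost
blobClient.SetAccessTier(AccessTier.Cool);
```

The same call accepts AccessTier.Hot or AccessTier.Archive, so the tiering decision doesn't have to be final when the blob is first uploaded.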
Now, a storage account has a couple of main components that you'll see across all the other storage types, but since we're talking about blobs, I'll tailor the information relative to blobs. The first thing you have is the actual account. This provides a namespace in Azure for your data, and every object you store in Azure Storage has an address that includes this unique account name. Then we have the container, and a container organizes all the blobs, similar to how you would expect a file system like Windows or Linux to organize your files; you get that kind of folder-structure hierarchy available to you in a container. Then the blob represents a physical file which is stored inside of the container. There's a diagram showing that hierarchy.
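Putting those three levels together, the address of any blob follows a predictable pattern (the names here are only placeholders):

```
https://<storage-account>.blob.core.windows.net/<container>/<blob-name>
```

So the account name gives you the namespace, the container is the first path segment, and the blob name, which can itself contain slashes to mimic folders, completes the URL.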
There are several types of blobs. A blob is a blob, I mean, it's a file that is stored in the storage account, but different blobs need to be treated differently for different reasons. So the first one we have is the block blob, which is probably the most common one. We use that to store text or binary data, and it's made up of data blocks that can store up to about 190.7 TiB (tebibytes, not terabytes). Then we have append blobs. Now, these do have blocks similar to the block blobs, but they're optimized for append operations. So this would be ideal for a scenario where we need to be logging data from a virtual machine or from your App Service. If you remember, when we were setting up App Service logging, we could have chosen to log to a blob, right? So this is what we would use; it would provision an append blob for that situation. The next type would be page blobs. Page blobs store random-access files up to 8 TB in size, and these are generally used as virtual hard drive files for our virtual machines in Azure.
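To make the append-blob logging scenario a bit more concrete, here is a small sketch (the "logs" container and "app.log" blob are made up for illustration) that keeps adding log lines to the end of a single blob without rewriting it:

```csharp
using System.Text;
using Azure.Storage.Blobs.Specialized;

// Hypothetical "logs" container and "app.log" blob; substitute your own names
var appendBlob = new AppendBlobClient("<connection-string>", "logs", "app.log");
appendBlob.CreateIfNotExists();

// Each AppendBlock call tacks a new block onto the end of the existing blob
using var entry = new MemoryStream(
    Encoding.UTF8.GetBytes($"Request handled at {DateTime.UtcNow:O}{Environment.NewLine}"));
appendBlob.AppendBlock(entry);
```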
Next up, let's look at Azure Table Storage.
3. Understanding Table and Queue Storage: Now, Azure Table Storage is a service that stores non-relational or non-structured data, also referred to as NoSQL data. It is a cloud service, once again, that gives us key/attribute storage in a schema-less design. Now, you would have seen Azure Table Storage referenced when we were looking at Azure Cosmos DB, and it is one of the APIs that Azure Cosmos DB supports. So if you start off with Azure Table Storage and then you need to migrate for more storage and more redundancy, you can move over to Cosmos DB. But this is an excellent place to start. If you're not sure of your data needs and you need to be able to adapt to new requirements regarding your data, then this is an excellent solution to start off with, because it's schema-less, so it will always be able to evolve with your needs. Now, access is fast and cost-effective, and it is generally cheaper than other SQL offerings for similar volumes of data. You can also consider that if you wanted to start off with a relational or Azure SQL instance, and you know that you're not going to be storing that much data upfront, you might start with tables and then go up. But of course, all of those things will affect your decisions.
The next storage type that we have is Azure Queue Storage. Now, this is a service for storing large numbers of messages. These messages can be accessed from anywhere through authenticated calls via HTTP or HTTPS. Essentially, when you set up your Queue Storage, you're going to get a URL that allows you to subscribe to the storage and read and write messages off the queue. A queue message can be up to 64 KB in size, and you can have as many messages as you like, up to the capacity limit of your storage account. Now, generally, Azure queues, or queues in general, are used to create a backlog of work items to process asynchronously. This means that, let's say you're building an application that needs to send an e-mail: do you want the application to pause while you connect to the e-mail server, figure out the contents of the e-mail, dispatch the email, wait to see if the dispatch was successful, and only then let the user continue? Maybe not. So a queue would allow you to take that information, hand it off, and then move on with your application. And then you'd have another service that's probably going to take that message off the queue and send the email eventually, so that the user experience is not affected. And that's just one example. Queues can be used in a number of ways, especially if you're talking about microservices. But we're going to go through some examples with this stuff; there's a rough sketch just below, and I'll see you in the next lesson where we get started.
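As promised, here's roughly what that producer/consumer hand-off looks like with the Azure.Storage.Queues SDK. The queue name and message payload are just illustrative:

```csharp
using Azure.Storage.Queues;

// Hypothetical queue for outgoing e-mails
var queueClient = new QueueClient("<connection-string>", "email-queue");
queueClient.CreateIfNotExists();

// Producer (the web app): hand the work off and move on immediately
queueClient.SendMessage("{\"to\":\"user@example.com\",\"subject\":\"Welcome\"}");

// Consumer (a background worker or console app): pick the message up later
var received = queueClient.ReceiveMessage();
if (received.Value is not null)
{
    Console.WriteLine(received.Value.MessageText);
    // Delete the message only once it has been processed successfully
    queueClient.DeleteMessage(received.Value.MessageId, received.Value.PopReceipt);
}
```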
4. Create an Azure Storage Account: Now, from our portal we're going to create a new storage account, so let us jump right into it. We're going to go over to our portal menu and choose Create a resource, and from here we're going to look for storage. So I can just type in storage, press Enter, and, okay, let's just use the autocomplete for storage account. We want a storage account, so let's select that and create. From here, let us fill out our form as usual. So Azure Storage is a Microsoft-managed service that includes Blob Storage, Data Lake Storage Gen2, Azure Files (we didn't really talk about Azure Files, but I will explain that in a bit), Queues, and Tables. Now, it's important that you get these settings right, because some things will be available based on your tier and based on the type of storage account that you provision.

So the first thing, as usual, is to select the correct resource group, so let's go ahead and do that. Then for the storage account name, I'm going to give it a name, and I'm of course going to choose the best region based on my solution's needs. Then we get to choose the performance: do we want standard, which is the recommended one, or do we want premium, which is for scenarios that need very low latency? For this demo, we will keep standard; of course, if you have other needs, you choose the one that is appropriate. Also, for the redundancy, I'm just going to choose locally redundant. Redundancy is something that you should always consider with every service, once again based on your needs. So I'll choose locally-redundant storage and go over to the next option. I'll leave everything that was ticked as it was. Then here, if we were doing some big data operations, we could choose Data Lake Gen2 by enabling the hierarchical namespace; we're not doing any big data operations, so I'll leave that alone. And then when we scroll down, we can see here: do we want the hot or the cool tier? So remember, hot is for frequently accessed, day-to-day usage scenarios, maybe like a CDN, or maybe where the entire organization is storing files and reading and writing. Versus the cool tier, which is probably where you would put files that are accessed once in a while. Alright.
Then we have Azure Files. Azure Files is a provision of the storage account that gives us a fully managed file share in the cloud, accessible through industry standards: the Server Message Block or SMB protocol, the Network File System or NFS protocol, or even just a RESTful API. In more plain English, Azure Files would be like an in-cloud file share that you could set up for your organization to replace the local share. Because we're not facilitating file shares, I will not enable large file shares. Then we can go over to networking, and everything there that is default is fine. Data protection: here we can set up options for how we deal with the data. I will leave the defaults ticked, but I encourage you to read through each one just to see what it's saying and change those values based on your own needs, of course. We can also enable version tracking for our blobs: if you have several files, or several versions of a file, it will keep that log. And you can enable a change feed just in case you have some service that needs to subscribe to the blob state: okay, this changed, I need to trigger an action. So those are things that you can consider if you want to automate some other workflows based on the blobs and the activities done against your blobs.

Now, when we move over to encryption, I did mention before that data is encrypted at rest and in transit, so they are just showing you the type of encryption. I'll leave the defaults there, and then we can choose which services support customer-managed keys; I'll leave those as default. Once again, next up would be our tags, but I'll just jump over to review. And once that review has been completed successfully, we can go ahead and create our storage account. Once the deployment is complete, we can click Go to resource, and once we're there, we will see that our storage account has been created. From here we can do several things. So when we come back, we'll explore what the different options are and how we can interact with the different components of our storage account. If you prefer scripting to clicking through the portal, there's a rough command-line equivalent sketched below.
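A minimal sketch with the Azure CLI, using placeholder names and roughly the same choices we made in the portal (standard performance, locally-redundant storage, hot default tier):

```bash
az storage account create \
  --name <account-name> \
  --resource-group <resource-group> \
  --location <region> \
  --kind StorageV2 \
  --sku Standard_LRS \
  --access-tier Hot
```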
5. Azure Storage Emulator: Now, it is worth mentioning that we have access to a local emulator for Azure Storage, and it's called Azurite. This is open source and cross-platform, so you have several installation options, and you'll see here it provides cross-platform support on Windows, Linux, and macOS. It allows us free local access to an environment for testing Blob Storage, Queue Storage, and Table Storage. So all the activities that we're about to do involving these services, we could actually do against this local emulator if we provision it for those services. Now, setting up the services is relatively easy. If you're using Visual Studio 2022, then you get it automatically. If you're using Visual Studio Code, you can get it through extensions. If you're using npm, you can run a command; if you're using Docker, you can pull the image and install it. And you can always go directly to the Azurite Git repo and clone it down. To run it, you have several ways to interact with Azurite, whether you're using Visual Studio, Visual Studio Code, etc. So you can go ahead and peruse this documentation if you just want a local environment to set up, and then you can go ahead and access it. Now, we're going to explore the Storage Explorer in a bit more detail as this module goes on, but I just wanted to point out that this is available if you want a local environment where you can simulate interactions with each of the different components of a storage account.
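As a small, hedged example of what that looks like in practice: after installing and starting Azurite (for instance globally through npm), the SDK clients can point at it using the well-known development connection string instead of a real account:

```csharp
using Azure.Storage.Blobs;

// "UseDevelopmentStorage=true" is the shorthand connection string for the local emulator
var blobServiceClient = new BlobServiceClient("UseDevelopmentStorage=true");

// Works just like a real account, but everything stays on your machine
var containerClient = blobServiceClient.GetBlobContainerClient("local-test");
containerClient.CreateIfNotExists();
```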
6. Storage Account Management: So from this dashboard, you can see that several options that we may not have enabled, or enabled and perhaps want disabled later on, are available for change here. So if I wanted to change the default access tier from hot to cool, I could easily click it and then go ahead and change that. Certain things can't be changed: you can't just jump up and change performance from standard to premium. But you could change some other options. So let's say later on you decided, or the organization decided, that it needed to allow large file shares; you'll see here that it supports file shares of up to 100 TiB. You could go ahead and do that. You could also change the access tier here, but we'll leave it hot for now; later on, we can either remove the resource or drop it down to the cool tier. Now, back on our overview pane, we see that we have the option to open in an explorer. If we click Open Explorer, you'll see that it is something that you can download that will allow you to manage your files and your containers and everything from your computer. We can also use the storage browser. So we can click on storage browser, and that will bring us to a similar enough interface that shows us the number of containers, file shares, tables, and queues in the account. We can also go ahead and interact with those components. So I can go ahead and create a new container here and give it a name. I can set the access level: private, so nobody else should get to it; blob, for anonymous read access for blobs only; or container, for anonymous read access for containers and blobs. I'll leave this as private. And in advanced, I can set the encryption scope and immutability support. I'll just go ahead and click Create.
Now, once that container has been created, I can go into the container, and then I can add a directory or I can upload a file. Remember that the container is where the files really go, and it tries to give you that hierarchical view of files and, quote unquote, folders or directories if you must. So I can add a directory and give it a name; there we go. Then inside of this makeshift directory, I can add further directories if I want, or I can just upload. When uploading, I can choose to overwrite if the file already exists, and I can drag and drop or browse for files. Then when I look at advanced, I can choose the type of blob that I'm uploading: block blob, page blob, or append blob. I generally don't change this because I really only upload block blobs. You can also choose the block size if you wish, and the access tier, so hot, cool, or archive. And we can upload a folder; we can do several things. I'm just going to do a simple upload, and I'm just uploading a random PDF that I found on my computer, nothing too fancy. And there we have it: I just uploaded my first blob file, and it is in the hot access tier.

Of course, I can go ahead and edit it, I can get a URL, I can generate a SAS token (and we're going to look at all of those and what they mean), and I can even delete. Now, remember that we enabled soft delete. If I delete this, I can actually go back and show active and deleted blobs, and that will bring back anything that I have deleted. It will show me the amount of time that it's being retained for. And then, guess what, I can actually undelete it: if I right-click, I can click undelete and it will be restored. So that's a nice little fail-safe for if we accidentally remove any files. Now, recall that file shares will require a bit more finesse in setting up, but it is an excellent way to move your file share from your local computer, or your organization's file share from the local servers, and put it in the cloud. And this is especially useful for companies that require distributed access to the same set of files. Alright, so if you want a private file share or a distributed setup with redundancies in place, this is an excellent option for you.
Let's move on to queues, though. I want to add a new queue, and I'm going to give it a name; let's go ahead and click Okay. Then you'll see, right off the bat, the URL. We are going to get around to looking at the URL for our blob container and the blobs and everything when we're doing development, but right now we're just doing an overview of the storage account. So from this queue URL, I can actually subscribe to this queue, and I can put messages on the queue and read messages off the queue. Now, when we say message, what exactly do we mean? When I select the queue, I can click Add message. So if I add a message and say this is a random test message, I can set the amount of time after which it should expire, say seven days: if this message is not processed within seven days, then remove it. Of course, once again, your needs will differ, so you choose what is appropriate. You could also say that the message never expires, so once a message is added to the queue, never delete it until it's processed. Alright? And then you would choose whether to encode the message body; we'll do that. I'll click Okay, and we now have a message sitting on this queue.

Once again, when we say messages, we're talking about data that needs to be processed. So this message, as you can see, is really just text (and I just accidentally added the same message twice). It's really just text, alright, so that means that this could be a sentence, it could be a whole paragraph, it could even be a JSON object, which is a very popular way of transporting bodies of data onto queues. So what could happen here is that you could send over a block of JSON, a JSON object as text, and then you can use a third-party service or another service to actually process this JSON message, which will have all the information needed for the operation at hand. There are several things and several ways you could use queues. Once again, the major use for queues is to offload processes that might cause delays in the actual application's runtime. So, using the email example again, it's quicker to hand off that e-mail's body, which could have been this JSON object with the sender and receiver email addresses and the body of the message and everything else, hand it off to the queue, and then continue with the application. Then in the background, eventually this message will be picked off the queue and sent. When I say eventually, I mean it could happen at that same time, or it could happen 20 emails later; if you have hundreds of users sending e-mails in the system, you don't want hundreds of users sitting and waiting on the system. There are several ways to use this, and we're going to look at some examples in the rest of this course. So that's how we add messages to the queue. And of course we can clear the queue, and we can also dequeue a message: I could click this message and I can say dequeue.
Now, the next one would be tables. Table Storage, once again, is a schema-less design. So I can click Add Table and give it a name. And wherever you see me not using dashes, it's because certain services have stricter naming conventions than others: some of them will allow the hyphens, some of them will not. You will figure it out as you go along; it will always let you know when you are violating the naming convention. When I click Okay, that creates a new table, and then once again we get a URL for this table where we can subscribe to putting data in and reading from the table. And then if I click on the resource itself, we have a very similar layout to what we saw when we were using Azure Cosmos DB: we have partition keys, we have row keys, and it will have a timestamp. So basically, when I click Add entity, I have the option to give a partition key and a row key, and I can add other properties accordingly.

Alright, so let's go ahead and add a new entity. Now, let us say that I wanted to store some grocery items. What I'm going to do here is say that the partition key is the category of the product. So if I'm storing something that's dairy, then, let's say, property number one is the price, and I can choose the data type: price would be a double, let's say it costs $2.50. And then the next one would be the name of the item. I don't know why I put the price before the name, but let's just say that this is milk. And then insert. Now, I could have generated a new row key, I could have put a one, or I could have used a GUID. I intentionally left it blank, but I'm just showing you the point: we can enter our partition key and row key, and then we can enter as many other columns, key/value or key/attribute pairs, as we need. And like we said, as your needs grow, so can the data. So I can add another entity where maybe later on I realize I need to store more properties than just price and name, right? So if this time it was, say, staple, let's give it a row key this time of one, the price is ten, and let's just say the name is bread. And what else would we need to store about this? Maybe I want to actually put in a category; once again, that's a staple. Maybe an expiration date; I'm just making up stuff, right, because once again, your needs will differ. An expiration date has a date and time, right, so you choose the appropriate data type, and then I put in an expiration date, and I'll just delete the last one. So let's say that this was a new record. As our needs increase, the number of columns we put in can increase with very little consequence. And that is where that NoSQL aspect comes from. So it still looks relational, but we still have that advantage of that schema-less design.
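For comparison, doing roughly the same grocery-item insert from code with the Azure.Data.Tables SDK would look something like this sketch; the table name and property values are just the ones we made up in the portal:

```csharp
using Azure.Data.Tables;

var tableClient = new TableClient("<connection-string>", "groceries");
tableClient.CreateIfNotExists();

// PartitionKey = category, RowKey = unique id; everything else is just key/value pairs
var milk = new TableEntity("dairy", Guid.NewGuid().ToString())
{
    ["Price"] = 2.50,
    ["Name"] = "milk"
};
tableClient.AddEntity(milk);
```

Notice there is no schema to define up front: a later entity can carry extra properties, like an expiration date, without touching the existing rows.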
Now, that completes our tour of our storage account. When we come back, we'll get into looking at how we can go about creating solutions that interact with these different components.
7. Storage Explorer: Now, if you opt to use the Storage Explorer, you're going to get that Visual Studio Code, or rather Azure Data Studio, kind of feel. So first of all, you may want to manage your accounts and authenticate accordingly, and once you're authenticated, you can drill down into your storage accounts. So once you have sorted out your authentication, you'll be able to go ahead and access the different parts. Here's that storage account that we created earlier, and once again we can drill down into it and look at the different file shares, the queues, the tables, all of those resources that we had looked at earlier. You may even see some other ones that you didn't see in the browser explorer. So you can go ahead and do that, and all of the options that you had in-browser are also available here. You can actually manage your files, your blobs, your folders, everything right here, which makes for much more convenient management of your Blob Storage. So I think that getting the Storage Explorer is a very good idea, and it does make management of your storage account much easier than having to go to the browser every time.
8. Web Application Overview: Now, we're going to develop a web app that's going to use the different components given to us by the storage account. Our web project idea is a registration form for conference attendees. We're going to use ASP.NET MVC; before this we have used Razor Pages and we've used Blazor, so this time we're going to use MVC. But once again, since it's .NET Core, the concepts will be transferable. Now, what we're going to do is use Azure Table Storage to store the registrations: when an attendee registers and clicks Submit, we're going to store that data in Table Storage. We're going to allow them to upload an image, and that image upload will use Azure Blob Storage. Usually you would store it locally, but we don't want it stored locally on the server, so we're going to offload that to Azure Blob Storage. We're also going to use Azure Queue Storage because we want to send off an e-mail. I used that example earlier in this segment of the course: sending an email is something that would require the app to freeze, get that done, and then move on. So what we're going to do is just hand it off to the queue. And then we're going to create a console app that is going to simulate a client that would read the queue and then process the message, or the email data, from the queue for dispatch. So we're going to do all of those things coming up.
9. Create and Setup .NET Core Project: So, using the same solution that we've been using for this entire course, I want to create a new project. Of course, this is optional; if you want to separate each one into its own solution, then you can go ahead. Once again, I'm using Visual Studio, but you can also follow along if you're using Visual Studio Code. So go ahead and create a new project, and this time I'm going to use an MVC web app. I'm just going to type the letters MVC into the search, and then choose ASP.NET Core Web App with Model View Controller. Next, I'm going to call this one MVC Storage Account Demo. Next, everything can remain as default, and then we create. If you're using the terminal, for instance with Visual Studio on Linux or macOS, you can navigate to your desired directory, say dotnet new mvc, output it to a directory with the name of the project, and then hit Enter.

Now that the project has been created, let us go ahead and add some of the libraries that we'll need to work with. Of course, we know by now the different ways that we can add libraries, but I'm just going to use the terminal, since the terminal will work for everyone, whether you're in Visual Studio or Visual Studio Code. So I'm going to say dotnet add package, and the first package we're adding is Azure.Storage.Blobs, so we can just hit Enter. Once that is successfully installed, I'll just clear the screen, and this time we're adding Azure.Storage.Queues. And then finally, we're going to add Azure.Data.Tables: dotnet add package Azure.Data.Tables. Now that we have all of those set up, let us go ahead and add our storage account connection string. I'm just going to jump over to appsettings.json (let me hide the terminal so we can get some more real estate), and I'm just going to call this setting StorageConnectionString. The value for our storage connection string is going to come from our portal, so let's jump over to our portal, and then we jump down to the Access keys blade, and then we're going to select the connection string. I'll hit Copy to Clipboard, jump back over to our code, and then we can paste that in. There we go; my clipboard was not primed just yet. So that is our connection string.
create a new folder. And this new folder
is going to store our data model for
the storage tables. So I'll just call it data. In keeping with our already established
naming convention, we're going to add a new class. And I'm just going to
call this one attendee. Know, because we are going
to be using Table Storage, we need to go ahead
and implement the IT entity interface. So attendees going to inherit off the bat from i table entity. And this comes to us from
our data tables class. And then by using Control dots, I can implement the interface. So this now gives us these
specific columns, right? Remember that these
columns were there before we started adding
other attributes. Alright, so of course I
can just clean this up and let they get B I
get and the set, set. And remove all of these through new throw nuts
implemented exceptions. Alright, so after
we've cleaned that up, let's go ahead and add
our own properties. So this is helping me out here. So I have name and I
would probably want firstName separate
from last name. And let's put in
an e-mail address. And what else would you
need a bolt on attendee? Let's work with this for now. So as our requirements, cms will mean putting more. But let's start off with this.
10. Add Table Storage Service: Now, let's configure our Table Storage service. I'm going to create a new folder, and I'm going to call this one Services, and in here I'm going to create a new class. This class is going to be called TableStorageService. And while I'm here, I'm just going to make a quick adjustment: I'm going to qualify the model a bit and call it AttendeeEntity, just so it's not just Attendee. That's just my little adjustment; if you feel the need to make it as well, then do so. I'm renaming it here and renaming the file to AttendeeEntity. So for our methods here, I'm just going to write out all the method names first, and then we fill in the blanks. I'm going to have a task that returns an AttendeeEntity, and this task is going to be called GetAttendee. It will take a string id, because I intend to use GUIDs for the ID values. So we're going to have GetAttendee, then GetAttendees, which takes no parameters and returns a list of AttendeeEntity objects, and then two more methods: one to upsert and one to delete. So between these methods we have full CRUD functionality. I'm just going to extract this into an interface so that we have that abstraction and that inheritance. There we go. Now we can start wiring this up. I'm going to create a constructor (ctor and Tab twice to generate that constructor), and I'm going to inject my IConfiguration object into this service so that we can get access to that configuration file and the values therein. Let me just rename that. I'm also going to have a private const string for our table name, and I'm just going to call this Attendees; that's what we're going to be calling the table in our Table Storage. And we have the configuration. Now I need a client.
So I'm going to create a method; I'll make it a private method down here. So: a private task that is going to return a TableClient object. And of course, as you go along, you add the missing using statements. We're going to call it GetTableClient. This method is going to firstly define a client — well, actually this is a service client, I apologize, let me rename it to serviceClient. So this is a new TableServiceClient, and then we're going to pass in that configuration value for our connection string. Just in case you don't remember the key, you can always jump back over to appsettings.json; I don't remember, so I'm jumping over there to get it, and that is what we're going to use as the configuration key. Alright, so now that we have the service client, we're going to say our tableClient is equal to serviceClient.GetTableClient with the table name; the table name is the same value that we just stored up top. Then we're going to await the table client's create-if-not-exists. So look at this: if the table doesn't exist when we are trying to communicate with our storage account, we have the option to just create it if it does not exist, so I can choose that; of course, I'll choose the asynchronous option, and then we can return the table client. So if the table doesn't exist, then it will go ahead and create it and connect to it, and our client then embodies that connection.
that connection object. So let us figure out
all we're doing this. Get attendee by ID. Now here we're going to have to get unstable client instance, and then we're going to go ahead and do I get Entity Sync? Know, it just looks like this. And of course if we're awaiting than the methyl
and must be async. So var table client
is equal to two. We'd get table client to choose
the method we just made. And then the second line
you notice has an error. So it's get entity a sync and all of those put the
ID and we have an error. Why do we have an error? Because it requires
a partition key. Alright? So we have to give
a partition key and row key in order
to get the entity. So attendee entity does not have any partition key
value here, right? I mean, yes, we do have the built-in field
for a partition key, but we don't have
a dedicated field that will act as
our partition key. So as part of good design, we would actually give
it another property. And let us see that
each attendee to our conference would be professional in a certain
industry, for instance. So what we can do
here is say industry, we'll just call it the name of the industry
that you're in, whether it's IT,
education, et cetera. And then industry here can be our partition key to partition all the attendees
by the industry therein. It makes sense, right? So as you build out your
application for your purpose, you always want to make
sure that you're making the best decisions possible for your design industry here
will be or partition keys. So back to our method and
what I'll do is that we should pass in string
industry and string ID. And then we will pass
in the industry and the ID to go ahead and
For the next method, which is GetAttendees, we of course start off with the table client, and then we do a slightly different bit of code. I'm going to say a Pageable of AttendeeEntity objects called attendeeEntities is equal to tableClient.Query of AttendeeEntity, and then we can return attendeeEntities.ToList(), and that will satisfy the return type. Now, let's look at our upsert. As usual, we start off by fetching our table client object, and if we await, the method must be async. Then we're just going to await tableClient.UpsertEntityAsync and give it the attendee entity. We already saw how upsert works when we were doing Azure Cosmos DB, but in case you're not so familiar with it, upsert is basically a portmanteau, a combination of the words update and insert. So if the entity exists based on the information provided, especially the row key and partition key, then it will just replace the details; if it doesn't exist, then it will create a new record. Alright, so that's how upsert works. For our delete attendee, of course, we make it async, we fetch our table client, and we await the table client's delete call. What I didn't do for this one is provide the parameters that delete needs: we have to pass in the partition key and the row key values, and those are the values that we pass into our delete operation. Now, if you're ever in doubt as to what you should pass in, you can always look at the overloads of the method, and that will help to clue you in. So go ahead and pass in the parameters, and that should be DeleteEntityAsync; I apologize, not DeleteAsync but DeleteEntityAsync — DeleteAsync could have deleted the table. Alright, so DeleteEntityAsync will take those two parameters.
Now we have our service for our Table Storage created. We made a slight adjustment to our entity here, and of course we need to update our contract, which is the interface, because I changed the signature for this method. So I'll jump over there and change the delete to take the two parameters as expected, and for GetAttendee we also need to update it with the industry parameter as well. Alright, so once the contract is happy, then the service is happy, and we have no conflicts anywhere. And then, as usual, we have to register this. So I'm just going to jump over to our Program.cs and say builder.Services.AddScoped: a scoped service for the ITableStorageService interface implemented by TableStorageService. And that's it. So when we come back, we're going to go ahead and create the controller and corresponding views that will then allow us to facilitate that registration process.
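Assembled from the steps above, the service ends up looking roughly like this. Treat it as a sketch rather than the exact course listing: the interface, field, and method names are the ones we discussed, and the configuration key is the StorageConnectionString we added to appsettings.json.

```csharp
using Azure.Data.Tables;

public interface ITableStorageService
{
    Task<AttendeeEntity> GetAttendee(string industry, string id);
    Task<List<AttendeeEntity>> GetAttendees();
    Task UpsertAttendee(AttendeeEntity attendeeEntity);
    Task DeleteAttendee(string industry, string id);
}

public class TableStorageService : ITableStorageService
{
    private const string TableName = "Attendees";
    private readonly IConfiguration _configuration;

    public TableStorageService(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public async Task<AttendeeEntity> GetAttendee(string industry, string id)
    {
        var tableClient = await GetTableClient();
        // Partition key (industry) + row key (id) uniquely identify the entity
        return await tableClient.GetEntityAsync<AttendeeEntity>(industry, id);
    }

    public async Task<List<AttendeeEntity>> GetAttendees()
    {
        var tableClient = await GetTableClient();
        return tableClient.Query<AttendeeEntity>().ToList();
    }

    public async Task UpsertAttendee(AttendeeEntity attendeeEntity)
    {
        var tableClient = await GetTableClient();
        await tableClient.UpsertEntityAsync(attendeeEntity);
    }

    public async Task DeleteAttendee(string industry, string id)
    {
        var tableClient = await GetTableClient();
        await tableClient.DeleteEntityAsync(industry, id);
    }

    private async Task<TableClient> GetTableClient()
    {
        var serviceClient = new TableServiceClient(_configuration["StorageConnectionString"]);
        var tableClient = serviceClient.GetTableClient(TableName);
        await tableClient.CreateIfNotExistsAsync(); // create the table on first use
        return tableClient;
    }
}
```

The registration in Program.cs is then the single line builder.Services.AddScoped&lt;ITableStorageService, TableStorageService&gt;();.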
11. Create Controllers and Views: Now let's create our controller. We're going directly to our Controllers folder, selecting Add Controller, and we'll just do an MVC controller with read/write actions. Click Add, and let us call it AttendeeRegistration; Controller, of course, gets appended. And there we have it: AttendeeRegistrationController. Let us inject our service, ITableStorageService, and we can go ahead and initialize it as a private read-only field. Alright, and now we can start with our actions. Now, generally speaking, you'd want to use view models to keep what's happening on the front end separate from the backend. But once again, this is a quick demo, so forgive the lack of best practices and abstractions; we're just doing a simple demo to show the connectivity. Afterwards you can go all out with all of the best practices and separations of concerns. For our Index action, the first thing we'll want to do, of course, is get the data. So I want to say var data is equal to _tableStorage.GetAttendees. Now, this is asynchronous, so I need to await, and then of course convert this method into an asynchronous method as well. Then I can pass that data over to the view. Now let's generate the view: right-click the word View, click Add View, and I'm just going to go with a Razor view. I'm going to use a List template, and the model class is going to be AttendeeEntity. I can leave everything else; I don't need to reference script libraries. Then click Add, and in no time our Index view has been generated.
So from here, I won't bother to modify what we're looking at just yet. What I will modify is this section where we pass over the primary keys, so we can use the row key, and of course we need the partition key, so let me name these appropriately. We need the id and the partition key, which is going to be called industry. I can say industry is equal to item.Industry, comma, and id is equal to the row key. Alright, so that's what we're going to have to pass over to our Edit, to our Details, and to our Delete methods. Why? Because all of these need both the partition key and the id value: we need to fetch in order to edit, we need to fetch in order to look at the details, and of course for the delete we also need that data. So that is how we're going to do the Index. Now let's move on to the next one, which is the Details. The Details would follow a similar operation, but remember that here we're taking the id and we're taking the string industry value; and actually the id is a string, because we're going to be using a GUID. We need to make some adjustments here: this needs to be an async Task. Of course, I could have just used the IntelliSense to convert this, so let me just make the method async to spare us that typing. And then we're not getting attendees, we're getting a single attendee, which is going to expect us to pass over the industry value and the id value. So we pass those two over, we return the view with the data, and then we can generate this view. We'll use the Details template against the same entity and go ahead and add that. Now we have our Details view created, so let me close anything that we don't need at this point.
For Create, we can leave the GET action alone because we don't need any data upfront. Or, let us look at this: if we want them to provide the industry value, we could theoretically give them a drop-down list — not even theoretically, based on your needs. You could give them a drop-down list so that they can choose from a curated list of industries that you would like to have as your partition keys. For this simple demo, we'll just go ahead and type it out, but in the future you may want to have a curated list of what you recognize as your potential partition key values, right? So let us leave the Create action alone, but let us generate its view. I'm just going to right-click, Add View, Razor view, and then we'll use the Create template against the same entity; go ahead and add. After that Create view has been added, notice it's not saying async; we didn't need anything asynchronous in the Create GET, but we will need to do some asynchronous work inside of the POST method. So in the POST method, we're expecting to get back an AttendeeEntity object. There we go. Then, before we even get to the inside of the try, before we even call the service, we need to put in some data. So here I want to say that the partition key, attendeeEntity.PartitionKey, is going to be equal to attendeeEntity — dot — what did we agree on? I think it was Industry; that's correct. And then I'm going to state that the row key — so we add the row key, there we go — is going to be equal to Guid.NewGuid().ToString(). With those two set, we can go ahead and call our Table Storage service's UpsertAttendee and give it the attendeeEntity object, because all the other fields would have been set through the filling out of that Create form. I have an error here because the conversion to async was not completed, so let's go ahead and make that a Task, and that error goes away.
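Put together, the POST action for Create ends up looking something like this sketch (the field and method names follow what we set up earlier; the _tableStorageService field name is illustrative):

```csharp
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Create(AttendeeEntity attendeeEntity)
{
    try
    {
        // Industry doubles as the partition key; the row key is a fresh GUID
        attendeeEntity.PartitionKey = attendeeEntity.Industry;
        attendeeEntity.RowKey = Guid.NewGuid().ToString();

        await _tableStorageService.UpsertAttendee(attendeeEntity);

        return RedirectToAction(nameof(Index));
    }
    catch
    {
        return View();
    }
}
```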
So now we can move on to the Edit. The Edit will have to do a fetch similar to the Details, so we can just go ahead and do that, and of course the parameters need to be similar; why type when you can copy and paste? And afterwards, oh, we need this to be asynchronous, so let's convert that. Then, in the POST operation, we're actually going to do something similar to what we did in the Create: we have to make sure that we set the partition key and row key, and then we go ahead and upsert. So you'll see that Create and Edit basically follow the same kind of procedure, but we have to be a bit more careful with the Edit. I'm just going to change this parameter, and it will go through and make sure that everything operates the way we expect. So once again, we're setting the industry, whatever the industry is, to be the partition key. Do I want to have a new row key, though? I wouldn't want a new row key, so I'm going to remove that line, because the row key is already coming in with this entity, and we can always make it hidden on the form, right? So instead of setting the row key again, I'll just be resetting the partition key, in case that value changed when it was edited. Now, I need to make this asynchronous, of course. There we go; we can move along. Now, for the Delete, I'm going to forgo the delete confirmation page, at least for now. So I'm going to remove the GET method, and then we implement the POST method. I've already converted it to an asynchronous method, and I placed in the string id and industry parameters, much like we had for the Edit. And what we'll do is await DeleteAttendee after it receives the industry and id values. Then, what we'll do in the Index is refactor this from a link to an actual form-submission button; I'll just place the form here. What we have is a simple form tag with the asp-action as Delete, so it will hit the POST method, and then we have two inputs, both hidden. The values here will tie in, and let me make this correction: one should be item.RowKey and the other one is item.Industry as its value. The names, of course, correspond with the parameter names that we outlined in our action parameters. And then we have an input that is a submit button, which just looks like a danger button. I can comment out this link because we don't need it. So that is how we're going to handle the delete.
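For reference, the little delete form in the Index view looks roughly like this; the hidden input names must match the action's parameter names, and item is the loop variable from the generated list view:

```cshtml
<form asp-action="Delete" method="post">
    <input type="hidden" name="id" value="@item.RowKey" />
    <input type="hidden" name="industry" value="@item.Industry" />
    <input type="submit" value="Delete" class="btn btn-danger" />
</form>
```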
Now, I have one more thing that I want to point out, and this is a little bit of a refactor; it's kind of a criticism that I have of Visual Studio and the way it refactors methods when it makes them async, appending that Async keyword to the end of the name. I don't like it, per se. Also, you'd notice that all of these are async methods and I did not say GetAttendeeAsync. So if you want to keep that naming convention with your methods, that's fine; it's not something that I necessarily practice, beyond my personal preference. I've noticed that MVC does not do a good job handling navigation when Async is added to the name of the action and the view. So, to be on the safe side, my suggestion is: if it got renamed to Async, just remove that Async suffix. That's one; do that for all the actions. You'll see that I already went through, and all of those that were EditAsync and IndexAsync and so on are renamed to just the original name without Async appended. Two, rename the views, right? All of these views will actually try to go to the Async version, or they're named Async, so you can just go ahead and remove that Async keyword; I'm just clicking each one and pressing F2 on my keyboard to rename. And the action inside the ones that are forms should just go to Edit, right? So this is just a quick refactor that I'm suggesting, just to make sure that we don't have any problems. I've never really sat down and said, let me try and make the Async naming convention work; I just prefer it without, so it's something I try to avoid. Right. And I had generated this one before: if you wanted to have the confirmation page, we could have kept the GET and generated the view accordingly. This is the actual form that I just used on the Index, to make sure that we can just go ahead and delete through the POST. The final activity is to go over to our layout, and let us change our menu to have an option that goes to the AttendeeRegistration controller and its Index view, and then you can give it a name. I just removed the Privacy link and replaced it with the attendee registration link. So when we come back, we can go ahead and test.
12. Testing table Storage CRUD: Alright, so I'm running the application, and just in case you're wondering how to run it: once again, make sure that the new project is your startup project, and then you can just run without debugging; if you want to debug it, by all means, go ahead. So I'm running without debugging, and I'm just going to go ahead and hit attendee registration. We see that our index page loaded and we got no errors, so that means it's connected to our storage account. The table has nothing in it, so nothing came back and it's showing us a blank list. That's fine; let's hit Create New. I'll create a new test record, let's say test, test, and a test e-mail address, and the industry is IT, and of course I won't provide a partition key, row key, or timestamp; those are parts of the cleanup that we'd need to perform on our UI. But that's fine, let's hit Create. And here we see that it came back with the record: the industry and the partition key have the same values, and we got a new GUID for our row key and a timestamp value. If I go to Details, we see everything about that record, so we know that the fetch-one by ID and partition key is working. If I go to Edit, let's try and change something here: so this is test 123, and I'm going to save, and that has edited the record. Alright.

Now let's take note of something. If I go to Edit again and I change the industry this time — let's try something real, education, right? So it was IT, now it's education — and then I try to save, notice that I get an entirely new record. It's the same row key, but a different partition key. So you need to be very careful with that, because we did specify that the table storage is using a composite key: it's looking at both partition key and row key. So a partition could have several records with — sorry, we can have several partitions with records that have the same row key value, and that's what we're seeing here. This is a different partition, and we are getting by the row key value here, right? So it just went ahead and created a brand new record. That's something you need to be very careful with when managing that partition key value, and for that reason, you could probably just use the row key as both partition and row key, to make sure that it's the same record all the time.
Now, I'm going to go ahead and delete, and I'm not bothering to fix up the aesthetics or anything; this is really just to have a demonstration of the cloud integration, not necessarily to make it stylish, right? So I'm just going to go ahead and test the delete and remove one of these records. And there we go, the delete worked. So right there, we just validated that our Table Storage CRUD works. Now, what you can do for further consideration is go ahead and clean up things like the Create and Edit forms, so you don't need timestamp, row key, and partition key displayed: we can remove those from the Create, and we can remove those from the Edit. Then, for our Index listing, we probably don't need to see those things either. So I don't need to see the partition key or the timestamp; the row key could probably stay, but since it's more of an ID, I'd probably want to move it to the front. And even then, do you really want to see that GUID? No, let's remove that as well, so none of those three items exists here either; we just keep it nice and clean. And right there, we just implemented CRUD for Table Storage. Now, let us say that our requirements have shifted, and now we need to add some form of file upload to this form, because the attendees need to upload an avatar or some image that we'd need to store securely and then retrieve for display. So next, we're going to look at how we can leverage Blob Storage for something like that.
13. Add Blob Upload Service: Alright, so our requirements have changed. Now we need to store an uploaded image for the user, but we don't want to store it on our servers, for several reasons. So what we're going to do is offload that responsibility to Azure Blob Storage. I'm going to add a new field to this table entity, and I'm going to call it ImageName. What we're going to do is rename the uploaded file to have the same value that the row key would have, and we'll just store that image name here; that will serve as a point of reference going forward, whenever we need to reference the blob. Now, let us create a new service, and I've done that already: we're going to call it BlobStorageService. So just right-click Services, add a new class, and we're calling it BlobStorageService. In my BlobStorageService, I've written some of the code already, and I want to walk you through it so that you don't have to watch me type.

What I've done is, firstly, create a constructor where I injected the configuration, so you can go ahead and do that. I also have a field for the container name, which I'm setting to attendeeimages. Now, the reason I'm using all lowercase is that (and I alluded to the naming conventions earlier) with Blob Storage you cannot use uppercase characters in your names. So anything that you're naming, you have to name lowercase; you might be able to use some special characters, but to be safe I'm just keeping everything lowercase: attendeeimages. Then I have my method that is going to create a container client. You can wrap it in a try-catch so that you can catch any exceptions, especially regarding the naming, because what we're doing here is saying: try to create a container client, a BlobContainerClient to be specific, and pass in that connection string from configuration and that container name. Now, if your name doesn't follow the rules, then you will definitely end up with an exception, so you can do that try-catch and put a breakpoint there just so you can see what the exception is; you'd probably get a 400 error at that point. Next, we're going to go ahead and await CreateIfNotExistsAsync. So if the name is bad, it would hit the exception when it tries to do this, and if it gets this far, we return the container client. So that's our first method in our new service.
Now, the next thing that we want to do is upload a blob. So we are going to have a method called UploadBlob; it's going to be an async task that returns a string, and it takes an IFormFile as the parameter type. When we upload using MVC, or .NET Core in general, IFormFile is the data type that receives the uploaded file. And then we're going to pass in an image name. The first thing I'm doing is establishing what the blob name will be, and the image name here represents whatever name I want to store the file as. The reason it's a good idea to rename your file before uploading it is that if you have multiple users who are trying to upload files with the same names, you're going to end up with collisions, so it's better to rename it to something that is unique to the record. Then we can go ahead and establish that blob name: I'm taking the image name value (I'm going to pass over a unique value), and then I'm appending the extension from the original file that was uploaded. So Path.GetExtension on formFile.FileName will look at the entire file name, get the .jpg, .png, whatever it is, and we append it to our unique name. We get our container client. Then I'm going to establish a memory stream object, copy from the form file to the memory stream, reset the position to zero, and then we're going to use the client to upload the blob, with the blob name, from the memory stream. That default parameter represents a cancellation token; you could actually do without it, so since we're not using cancellation tokens, you can remove it. And then I'm going to return the blob name. Now, the upload call here actually returns a response of BlobContentInfo, so from that object you could look at the value and see different bits of information about the blob that was just uploaded. If you need more information than just the blob name, feel free to use that; we could also use the container client to get more information afterwards as well. But for now, I just want to store the name of the file.
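In code, that upload method comes out roughly like the sketch below; the container name, configuration key, and field names follow what we set up above, but treat it as illustrative rather than the exact course listing:

```csharp
public async Task<string> UploadBlob(IFormFile formFile, string imageName)
{
    // Keep the original extension, but store under a name unique to the record
    var blobName = $"{imageName}{Path.GetExtension(formFile.FileName)}";

    var containerClient = new BlobContainerClient(
        _configuration["StorageConnectionString"], "attendeeimages");
    await containerClient.CreateIfNotExistsAsync();

    // Copy the uploaded file into memory, rewind, then push it up as a block blob
    using var stream = new MemoryStream();
    await formFile.CopyToAsync(stream);
    stream.Position = 0;

    await containerClient.UploadBlobAsync(blobName, stream);

    return blobName;
}
```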
Now that we have uploaded our file, let us look at how we would retrieve our blob. Retrieving the blob is a special situation, because when we set up the blob container it defaults to private. And we have already established that we use URLs to access the different blobs once they're uploaded; we're going to look more at the URLs later on. But the fact is that if the container is private, then you cannot just access a blob by its URL. And you don't want to just make it public, because you might not want everybody to be able to access it. So when retrieving our blobs, we need to generate what we call a SAS token, or shared access signature token, and we're going to look at that in this method. Our new method here is GetBlobUrl, which is an async task that returns a string, and once again it takes the image name. We initialize a container client, and then we initialize a blob client as well, relative to that image name. Now we have what we call the blob SAS builder, and this is the builder used to generate the shared access signature token that I just alluded to. So what we do here is set blob SAS builder equal to a new object, which is going to take the container name that we want to access and the blob name that we want to access. Then we set the expiration on this token, which means that after this time passes, nobody should be able to use this link to access the resource anymore; I'm just setting this to 2 minutes. I'm also setting the protocol to HTTPS and the resource to 'b'. If you hover over Resource, you will see that 'b' is specific to blob; if it's a different type, you can use, for instance, 'c' for container, and you have other options. But for now we're just doing blob storage, so we're setting 'b' as the resource type. And then I'm going to set the permission for the blob SAS builder to read. For this enum, you may need to include a missing using reference, but we have several options here: you can generate a SAS token that allows delete, add, and so on — those are all permissions. Right now we just want read, because we only want to be able to look at the image when we retrieve it. And then we return the SAS URI generated from the blob with the builder options, converted to a string. So this is going to go off, generate that SAS token, give us the full URL as well, and then it's going to return that entire URL to us.
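As a reference, here is a minimal sketch of that SAS-based URL generation; the method and parameter names are illustrative, and it assumes the container client helper from before was created from a connection string (so the client can sign the SAS).

```csharp
using Azure.Storage.Sas;

public async Task<string> GetBlobUrl(string imageName)
{
    var container = await GetBlobContainerClient();
    var blob = container.GetBlobClient(imageName);

    var sasBuilder = new BlobSasBuilder
    {
        BlobContainerName = container.Name,
        BlobName = blob.Name,
        ExpiresOn = DateTimeOffset.UtcNow.AddMinutes(2), // link stops working after 2 minutes
        Protocol = SasProtocol.Https,
        Resource = "b" // "b" = blob, "c" = container
    };
    sasBuilder.SetPermissions(BlobSasPermissions.Read); // read-only access

    // Returns the full URL, including the SAS token query string.
    return blob.GenerateSasUri(sasBuilder).ToString();
}
```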
Moving on, we want to remove the blob whenever the record is deleted. So we're going to have one more method here, RemoveBlob. It's a public async task, and we're taking that image name once again, connecting to the container, and creating a blob client. And then I'm just going to say await blob.DeleteIfExistsAsync. This is just a fail-safe: there is DeleteAsync, but DeleteIfExistsAsync deals with the case where the blob doesn't exist, so there is no error. Then we can also delete all the snapshots. Remember that with Blob storage we have versioning, and each time there might be a collision it might create a new version and keep that history. We can choose to delete everything once we are removing the blob.
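A minimal sketch of that delete method, again assuming the container helper from earlier; names are illustrative.

```csharp
using Azure.Storage.Blobs.Models; // for DeleteSnapshotsOption

public async Task RemoveBlob(string imageName)
{
    var container = await GetBlobContainerClient();
    var blob = container.GetBlobClient(imageName);

    // DeleteIfExistsAsync avoids an exception when the blob is already gone;
    // IncludeSnapshots removes any snapshots along with the base blob.
    await blob.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots);
}
```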
So that's it for the code of our Blob Storage Service. I'm just going to go ahead and extract this interface. And with that interface extracted, we jump over to our program.cs and register that service the same way we did for the Table storage service. Now, we can refactor our controllers and views. Let's start with our controller. I'm going to inject the IBlobStorageService and initialize it; we know how to do that already.
Then, for the index action, I'm going to add a foreach loop: for each item in the data — because remember, this is returning the list of attendees — I'm going to get the image name. So item.ImageName is going to be set equal to blobStorageService.GetBlobUrl(item.ImageName). Why are we replacing that value? Well, I'm not storing the entire URL, I'm just storing the name, right? So that's uniqueName.extension. I'm passing in that name, which the blob will have, then I'm getting back the entire SAS URL, which I then assign in its place before returning to the view for display. Of course, in a cleaner setting we would have created an abstraction and used separate view models so we could massage the data better and so on. But forgoing all of those best practices, this is a clean example of how we can just go ahead and get the URL for our purposes here.
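A minimal sketch of that index action adjustment; the service and entity names (GetAttendees, ImageName) are assumptions based on the walkthrough.

```csharp
public async Task<IActionResult> Index()
{
    var attendees = await _tableStorageService.GetAttendees();

    foreach (var item in attendees)
    {
        // Swap the stored blob name for a time-limited SAS URL the view can render.
        item.ImageName = await _blobStorageService.GetBlobUrl(item.ImageName);
    }

    return View(attendees);
}
```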
Next we have the Details action. For Details we're doing something similar: we've got the record here, and then I'm just going to say record.ImageName is equal to GetBlobUrl once again. Now, for the Create POST method, I've added a new parameter, and that is the IFormFile parameter. This IFormFile will capture the file upload from the form, which we will modify in a few. I've also refactored the code a bit. I started off by setting an id to a new GUID, then I'm setting the RowKey to be this id. Now, my unique name for naming my blob files is going to be the id. It doesn't have to be the id, but the id is going to be unique every time, and I think it's an easy way to look into Blob Storage and identify what's what whenever I need to. So I'm saying: if formFile.Length is greater than zero — meaning something was uploaded — then I'm going to set that image name to be equal to whatever the upload returns. Remember that our UploadBlob, if I go to the implementation, takes the form file and the image name, which is going to be the id. We go ahead and upload it, and then it returns the result of the concatenation of the id and the extension. So that is what we're going to end up with as the image name. And if nothing was uploaded, we can set up something like a default image. Alright, and then we go ahead and upsert the attendee.
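A minimal sketch of that Create POST action; the entity, service, and default image names are assumptions for illustration.

```csharp
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Create(AttendeeEntity attendee, IFormFile formFile)
{
    var id = Guid.NewGuid().ToString();
    attendee.RowKey = id;

    if (formFile?.Length > 0)
    {
        // Store the blob under the unique id; the returned value is "<id>.<extension>".
        attendee.ImageName = await _blobStorageService.UploadBlob(formFile, id);
    }
    else
    {
        attendee.ImageName = "default.png"; // fallback when nothing was uploaded
    }

    await _tableStorageService.UpsertAttendee(attendee);
    return RedirectToAction(nameof(Index));
}
```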
Let's move down to our Edit action. In Edit, we're just doing that same if check: if the form file length is greater than zero, then we go ahead and upload our blob. I'm passing in that row key value and the form file, which we added as a parameter as well. Now, one thing in the view that I don't think I pointed out: if I go to the view here, remember that we had removed the input fields for the row key, partition key, and the timestamp. But to retain the original row key for the record, make sure that you have a hidden input with asp-for="RowKey" in the form. Alright, so that's Edit.
And then finally we go to the Delete. Delete is a bit tricky now, because we need the original image name in order to remove the blob. So what I had to do was fetch the attendee first. Then we can delete the record, and then we can remove the blob by passing in that image name; because we fetched it first, we have it in memory. We remove it from table storage and then use what we stored in memory, passing that data along for the blob deletion.
Finally, let us look at the different changes we made to the views. For the create view, I've added a new section that will allow us to upload the avatar. I just created a new form group: we have the label here where I typed in 'Upload Avatar', and then an input of type file with the name 'formFile', so that it corresponds with the parameter that we're looking for in the POST. And that's it for our Create. My copying and pasting skills got out of hand, so let me remove that validation markup. And I need this same control inside of the Edit, so I'll just add this form group to the edit form as well. If nothing gets uploaded, that's fine, because we keep the same blob and everything is good. However, if something gets uploaded, then we upload it and it will do the replacement accordingly. Now, in the details view, I've added another dt/dd combination to this data list. Here I have 'Avatar', and now I have an image tag which takes its source from Model.ImageName, and I'm just setting a width of 150 and a height of 150. Then in the index, I've added another table header; if you want to label it, you could say Avatar or Display Picture, whatever it is. And then in the tr for the data, I've added a new td with a similar image tag. The source is going to be item.ImageName, and the width is 50 and the height is 50. Alright? I've also refactored the form a bit to put the HTML links inside the form so that they'll display left to right instead of that haphazard display we had earlier. I know I said we didn't need to fix it, but I'm sorry, I couldn't continue without fixing it. So I just put those two links inside of the form, right next to the Delete submit button, and I've added the new HTML attributes for the class: btn-warning for Edit and btn-primary for Details. So with all of those changes made, I'm just going to save everything, and when we come back, we will do our tests.
14. Test Blob Service Features: One quick adjustment before we jump into testing. Remember that whenever we want to facilitate file uploads in a form, we need to include this attribute on the form. I neglected to point it out in the previous lesson when we were retrofitting our views: enctype is equal to multipart/form-data. So make sure that that is present in the create form as well as our edit form. I'm also going to add a hidden field to the edit form to make sure that we can retain the image name value. Remember that we need to retain the original image name: if the user uploads a new image, then we will change the old one, but we need to keep the original image name so that when we do an upsert, that value is not lost.
Now let's jump over to our Edit POST action; a couple of adjustments are needed here. One, I want to move this if statement up, because what we did was upsert before we even tried to check whether we should update the file. And we should also update the image name field, right? We neglected that part in our run-through, but we can correct it now. So we should check if the form file length is greater than zero. Another thing: the form file might also be null, so I'm going to make this null-safe and put that question mark there. If formFile has a value, we check whether the length is greater than zero. Of course, we could also just check that it's not equal to null, because obviously then it would have a length — whichever check you feel more comfortable with. Then we go ahead and set the image name to be equal to whatever was uploaded. If we upload something, then we assume it's new and we need to update that image name. And then we can go ahead and upsert.
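A minimal sketch of the corrected Edit POST action; entity and service names are assumptions based on the walkthrough.

```csharp
[HttpPost]
[ValidateAntiForgeryToken]
public async Task<IActionResult> Edit(AttendeeEntity attendee, IFormFile formFile)
{
    // Check the upload *before* upserting, and guard against formFile being null.
    if (formFile?.Length > 0)
    {
        attendee.ImageName = await _blobStorageService.UploadBlob(formFile, attendee.RowKey);
    }

    await _tableStorageService.UpsertAttendee(attendee);
    return RedirectToAction(nameof(Index));
}
```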
Now let's go ahead and test. I cleaned out all of the existing records because I was testing everything, and I'll create another record for conducting this test. If you want to clean out all of your existing table storage records, you can always go back over to your storage browser, or use the Storage Explorer that you may have downloaded: go down to Tables, you'll see the table that we had previously created, and then you can delete whatever records you feel you need to delete. Now, when I run this application and navigate to the attendee registration, we see our table. Let me create a new record, and I'm going to upload an avatar. I'm going to choose an image here, give it a first name, last name, email address and industry, and hit Create. And there we go: now I'm seeing an image displayed here under Avatar. Look at that. And I'm seeing all the data that we're used to. Now, what is with this image? If I click Details, I get a broken link, so clearly we need to do some work here. What I'm going to do is inspect the element, and there we go — that's the error: it's actually printing the words 'model.ImageName'. So let's jump over and fix that: there should actually be an @ sign, and that shouldn't be lowercase 'model', it should be capital-M Model to reference the data. Let me go ahead and fix that and rerun it. And when I reload that record, here is the image being displayed properly. Now, I still have my Inspect Element open, and if you look at that URL, you will see that this is the URL to the actual blob.
So let us review what this whole URL thing is about. If I jump back over to the portal and refresh, we will see our record, and our record has an image name. That is the same value as the row key, plus the extension, of course. That's fine: once again, we renamed it so that we can have uniqueness across every upload. Now, if I jump over to the blob containers, the first thing you will see is that attendee images was created, and it is created as a private container, right? If I go into attendee images, every block blob that is inside of this container is also going to be private. So I can go here and copy the URL. If I try to go to that URL, I'm going to get a message that the specified resource does not exist. Now, we know that that is not true, because here's the URL to the blob storage account, then the container, and then the name of the file — and we just copied it, so we know that the file exists. It's because the access level is set to private that we cannot directly access it via the URL. Once again, based on the access level, we can enable or restrict wholesale access to all of the blobs in our container. This one is private; that's why we generate that SAS URL. And when I click Generate SAS, you see that we can set up all of those parameters and then generate the SAS token and URL. Once that token and URL have been generated, if I copy it and then try to go to it in the browser again — well, my browser is actually downloading the file. Alright, so that's what will happen if you're using certain browsers; in other browsers, it will actually open the file. But let's look at the URL for a bit. I've pasted it in Notepad++ so that we can inspect it a bit more.
Now, we start off once again with the same parts: the storage account, the container, and then the filename. Then you'll notice that you have a query string. We have sp, that is the permission, which here is read. When does it start? That's a time. When does it end? That's a time. What is the protocol? HTTPS. And then we have other parameters: the resource type is blob, and the signature is that randomly generated string. And sv there represents the signed version, right? You wouldn't set that; it will be set while the signature is being generated. That is what this SAS token brings to the table: we get this special URL that allows access for as much time as we set between the start and end time. And you see here, this one was set for one day — that was just using the default options. But of course we can set a time span that is specific to the purpose we need that token for.
Now that you have some insight into what is happening behind the scenes when we're getting that URL, let's jump over to Edit. When we edit, we fetch the record; let me try to change this file. And after clicking Save, I encountered an error. Of course, I want to show you the errors and then we're going to work through them together. The error here, caught after we tried to upload the blob, was that the specified blob already exists. Alright? So of course we can't upload a blob that already has that name. What I'm going to do is remove the blob if it already exists, and then we can go ahead and create. So we're going to jump over to UploadBlob and modify what is happening inside of this method. As it stands, we're just going to the container and trying to upload, and if you hover over that call, you'll see where it says that it creates a new block blob, which means it will obviously return an exception, as you've seen, when the blob already exists. That's why reading documentation is very important: to overwrite an existing block blob, get a blob client and then upload. Alright, we'll take those instructions, reuse the line where we got that blob client, and replace the original upload line with a new one that says await blob.UploadAsync. I'm using named parameters so we know exactly what we're passing in: the content is going to be the memory stream, and then overwrite: true. By setting overwrite to true, it will overwrite any existing blob. Without setting that, it would be false, so we'd end up getting the same exception that we got if we neglect to put in that overwrite parameter.
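A minimal sketch of that change inside the upload method, under the same assumptions as the earlier sketches.

```csharp
var blob = container.GetBlobClient(blobName);

using var memoryStream = new MemoryStream();
await formFile.CopyToAsync(memoryStream);
memoryStream.Position = 0;

// Unlike UploadBlobAsync on the container, BlobClient.UploadAsync can replace
// an existing blob when overwrite: true is passed.
await blob.UploadAsync(content: memoryStream, overwrite: true);
```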
Now, there's another twist to this. What if I uploaded a JPEG initially and then I change it to a PNG? At that point, even though the id value — the image name — is the same, we're going to end up with two different files. And do we really need to store old files, right? So technically speaking, you could also run a delete before you do the upload, just to make sure that you don't end up with two different versions of what should be one person's avatar. You can consider that as well. To make that happen in the edit action, I'm going to send over the original image name as a parameter to our UploadBlob method, which means we now have to introduce a new parameter. Our new parameter is going to be a string, which is a nullable originalBlobName, and I'm initializing it to null. Of course, if we update the interface, then we must also update the implementation, so we have the same parameter listed there. Now that we have this original blob name value, I'm going to check if it exists. So I'm going to say: if not string.IsNullOrEmpty, passing in that value, then we want to await RemoveBlob and give it that original blob name. We remove the existing blob and then we can go ahead and upload. So you see, there are many dynamics to this — in our case, because we might end up with different file extensions, we can't guarantee that the upload is always going to overwrite the existing one. It is safer for us to remove the old one and then add the new one. Whereas in a situation where the file name is always going to be consistent regardless, then you probably don't have to go through all of that. But once again, options.
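Putting those pieces together, here is a minimal sketch of the extended upload method; parameter names are illustrative.

```csharp
public async Task<string> UploadBlob(IFormFile formFile, string imageName, string originalBlobName = null)
{
    var blobName = $"{imageName}{Path.GetExtension(formFile.FileName)}";
    var container = await GetBlobContainerClient();

    // If the record already had a blob (possibly with a different extension), remove it
    // first so we never end up with two files for one attendee.
    if (!string.IsNullOrEmpty(originalBlobName))
    {
        await RemoveBlob(originalBlobName);
    }

    var blob = container.GetBlobClient(blobName);
    using var memoryStream = new MemoryStream();
    await formFile.CopyToAsync(memoryStream);
    memoryStream.Position = 0;
    await blob.UploadAsync(content: memoryStream, overwrite: true);

    return blobName;
}
```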
So now that we've made these adjustments, let's go ahead and test our edits again. I've loaded up the form, I've already chosen my other file, and then I'll hit Save. And there we go: now I have a brand new avatar. I've successfully changed the upload, the blob image associated with that record. So that's nice and easy. Of course, it had some nuances. You know the very example that I gave, where the file extension may be different and we end up with multiple files for the same record? I actually had that happen to me. Here's one version with JPEG and another version with PNG; the original file was PNG, the new file was JPEG, and instead of replacing it, it just created a new one that I need to dereference. That is why you may want to remove before you upload, to make sure you don't end up with multiple potential files. Even this one was uploaded without an extension earlier. Those are little things that you'll want to be careful about and pay attention to when handling blobs. Finally, I'm going to test the delete. So let us delete this record, and we know that the Table storage record was deleted. Let's jump back over here, and at least one of these files should go. Okay, two of them went, because I removed the original one and then I also removed the one that was attached to the record. This one was attached to nothing, so I can delete it manually. That's fine. But at least now we know that we are successfully writing to our blob storage, reading from our blob storage using a SAS token — which gives us limited access, just for that period — and we are able to remove blobs, upload new ones, change file associations, and everything like that. Now, when we come back, we're going to implement our queue service. And the scenario is that once an attendee has registered, we want to send off an email, but we don't want to connect directly to the email service. Instead, we want to add it to a queue so it can be consumed later on and processed.
15. Add Queue Service: So now we're going to implement our queue mechanisms. The first thing I'm going to do is create a new model. I'm creating it as a model rather than a data entity because it's not something that we're going to store, but I do want a class so that we can have a strongly typed message. I'm going to call this one EmailMessage, and EmailMessage is going to have three properties: the first one is a string, which is the email address; then we have a DateTime, which I'll call Timestamp; and the final thing is the actual message. That's good for now. Then, as usual, we're going to have to set up a new service, so let's go ahead and add another class to our services folder. This time, we're going to call it QueueService. Our queue service is going to have a constructor that injects our configuration, similar to the other services, and we're also going to have a set queue name, which I'm calling attendee emails. Now, I'm just going to have a public task called SendMessage, and SendMessage is going to take the EmailMessage object; I'll just call the parameter emailMessage. The thing is that I am hard-coding this to one queue, but you may also want to keep this flexible for other queues that you may need to interact with. What you could do for this queue service, instead of fixing that queue name, is take a string name as a parameter here, so that you can set the queue name when you're passing over the message. But this is a specific one, so I'm just making everything specific to our email messaging operation. Now, I'm going to initialize a queue client, and I'm not going to bother creating a whole method for that this time, because we only have one method, which is SendMessage. So I'm going to say var queueClient is equal to a new QueueClient, where we need the connection string — we get that from the configuration — we get the queue name based on this value, and then we set the queue client options to use Base64 message encoding. Remember that we could turn this on and off when we were creating the queue itself. Then, as usual, we're going to say await queueClient.CreateIfNotExistsAsync, so we make sure that it exists before we try to send a message. To send a message, we can quite simply say queueClient.SendMessage — and of course there is an async option — but let me go ahead and put in what the message should be. Notice this expects a string message text, but we have an object of type EmailMessage. Let me go ahead and add the await first. Now, how do I convert this into something that can be sent as a string? Well, we can serialize it to JSON. I can say var message is equal to JsonConvert.SerializeObject, and then we pass in our email message. That will just convert our class into a JSON string, and then we can pass that string over as our message.
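For reference, here is a minimal sketch of that queue service and model. The class names, configuration key, and queue name are assumptions for illustration, and it assumes Newtonsoft.Json (suggested by the JsonConvert call).

```csharp
using Azure.Storage.Queues;
using Microsoft.Extensions.Configuration;
using Newtonsoft.Json;

public class EmailMessage
{
    public string EmailAddress { get; set; }
    public DateTime Timestamp { get; set; }
    public string Message { get; set; }
}

public class QueueService
{
    private readonly IConfiguration _configuration;
    private const string QueueName = "attendee-emails";

    public QueueService(IConfiguration configuration) => _configuration = configuration;

    public async Task SendMessage(EmailMessage emailMessage)
    {
        var queueClient = new QueueClient(
            _configuration["AzureStorageConnection"],
            QueueName,
            new QueueClientOptions { MessageEncoding = QueueMessageEncoding.Base64 });

        // Make sure the queue exists before trying to send.
        await queueClient.CreateIfNotExistsAsync();

        // The queue expects a string, so serialize the strongly typed message to JSON.
        var message = JsonConvert.SerializeObject(emailMessage);
        await queueClient.SendMessageAsync(message);
    }
}
```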
We can extract our interface — that's the only method we need in the interface — go ahead and register it in program.cs as usual, and we're adding it as scoped, just like the other services before it. Then we can modify our attendee registration controller. After we have injected our new IQueueService — IQueueService, I like that name — we can modify the hotspots. Obviously Create would be a hotspot: after everything is done and we've saved, we want to create our email message. So I can say var email is equal to a new EmailMessage, and I'm going to set the email address to be the attendee entity's email address that was entered. We can set the timestamp to DateTime.UtcNow, and then we can set our message to be something like 'Hello FirstName LastName'. And there we're just concatenating — well, interpolating; we're using string interpolation here. Then I break the line with \r\n, add 'Thank you for registering for this event', break the line again, and then 'Your record has been saved for future reference.' Now that I have this email object, I can call on my queue service, say SendMessage, and just give it that email object.
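A minimal sketch of the lines added to the Create action; property and service names are assumptions based on the walkthrough.

```csharp
var email = new EmailMessage
{
    EmailAddress = attendee.EmailAddress,
    Timestamp = DateTime.UtcNow,
    Message = $"Hello {attendee.FirstName} {attendee.LastName},\r\n" +
              "Thank you for registering for this event.\r\n" +
              "Your record has been saved for future reference."
};

// Publish the message to the queue rather than calling an email service directly.
await _queueService.SendMessage(email);
```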
Of course, we can do similar things elsewhere in our application. If a record is edited, then we could let them know that it has been modified: 'Hello FirstName LastName, your record was modified successfully', and then we send that message. And if it gets deleted, I can also do that, because remember, we would have pre-loaded that data here. So after everything has been removed, we go ahead and create our email message — I copied and pasted, so obviously we would use the pre-loaded data as the object here for the email address, first name, and last name. Then the email message is modified to say it was removed successfully, and we send it off. So that is how we would integrate our queue service into our operations. Now, when we come back, we'll validate all of this.
16. Test Queue Service: So let us go ahead and test. I'm going to create a new record and upload an image. Of course, we're just using test, test, test, and the IT industry, and hit Create. Once we validate that we have a record and our blob uploaded, we can jump over to our Storage Explorer. I'm going to Refresh All so that I can see the new queue; I go to Queues and hit Refresh. And now I see attendee emails as my queue, and I jump in there. I have several messages because I was testing, right? You can see here that we have several messages, and I did full CRUD operations to make sure that each one worked. So I have one here that shows when a record was created: 'your record has been saved for future reference'. That's good. Then we have another one here that says a record was removed successfully. We have another one here — okay, I didn't do any modifications, but you can see that all of these messages are actually queuing up. So let us imagine now that the email service was actually down, right? These should have been emails that were sent immediately, but our email service was down. That means the user would have registered, and we might have left the operation with the impression that his or her registration failed, because the email service was down, the website failed to connect, threw an exception, and probably gave them an error message. We don't want that to happen, right? Another scenario is that, okay, maybe we have proper error handling in place, so even though the email service is down and we tried to connect, we didn't throw an exception — but then that email is lost forever, right? So a queue is an excellent way to hold our messages for our operations until we can actually process them later on. When we come back, now that we know that our queue is working as we expect it to, we're going to build a little utility that will actually pick the messages off the queue. And once we get the messages from the queue, whatever operation you need to carry out can be carried out.
17. Logs and Services Calls: Now, while we're testing our app, I just wanted to take a few seconds to point out what is happening in the console, just so you get a feel for what is happening behind all of these service calls. We've wired up our table service, our Blob Storage Service, and our Queue Storage Service, and if you look at the console, or the logs being spit out by the app for each request, you'll see that we're just sending GET and POST and other kinds of HTTP or HTTPS requests to a bunch of URLs. Really and truly, it's not much more complicated than that. Essentially, our service clients and all of those are really just wrappers around HTTP clients that are communicating via RESTful APIs with each of our services. And we did say at the start of the course that the RESTful APIs allow us to interact with and manage our resources, and here is a live example of how that works. So when we're talking to our table service, when the app loads, we try to get the data. Alright, and later on you'll see the other calls we make. Here's the one that I wanted, where I deleted the record, so it deleted from the blob; if you look at the URL, you'll see that that's the blob service, and it deleted that blob name — that's a DELETE request. Here's the output where it updated something on the queue — that's the URL to the queue service — and here's a POST request where it created something on the queue. And here's a GET, getting from the tables. So I'm just pointing out that everything is really just a RESTful API call happening in the background.
18. Setup Queue Message Reader: Now that we know that our service is working, let us build a utility that will subscribe to this queue and consume the messages; whatever it is that you need to do with them afterwards, you can. What I'm going to build is a console app. So in the solution, I'm going to add a new project, and this one is going to be a simple console app, so choose a console app template. We're going to call this one Console.QueueConsumer.Demo, and then we'll go ahead and hit Next. We'll use the latest .NET 7 available at the time of recording, and Create. Now I need to add a package, and it's the same package that we had added earlier for Queue storage, which is Azure.Storage.Queues. I think by now we all know how to add our packages quickly, so I'm going to choose the route where I just create a new ItemGroup and paste the same package reference that was in the MVC app, and save that. NuGet will go off in the background and resolve that dependency for me. Now let's write some code.
The first thing that we would want to do, of course, is establish the connection to our queue storage, and that is done through our storage account connection string. Now, you have a few options when you're doing this. The easiest option would just be to create a variable here, call it connection string, and give it the value. But for the obvious reason that you don't want to put your connection string directly in code, hard-coded, we're not going to take that approach. For this situation, we could use user secrets, which means that we would have to install a few additional packages, store it in a configuration file, and then reference it accordingly. By extension, we can also just set a global environment variable on our machine — and of course, if we're going to deploy this application to any other machine, we will have to make sure the environment variable also exists there. I'm going to take that approach for this particular exercise. To do that, we're going to use our terminal window. If you're on a Windows machine, we're going to use the following command, and then I'm going to show you the corresponding one for Linux and macOS. On a Windows machine, whether you're using Visual Studio or Visual Studio Code, you can say setx, then the name — I'm using AzureStorageConnectionString — and then we place the value. I'm just going to jump over to the web project where we had the connection string value from previous exercises, use that here as the value, and set it. Be sure to get the formatting right; you can see that I had some errors because I copied over a trailing comma and added extra quotation marks, so just make sure you have only the quotation marks and the value in between. Now, if you're using Linux or macOS, the command is basically identical, but instead of saying setx, you're going to say export, and it's the same format: export, the name of the variable, and then the value. One downside to this method is that you may need to restart your computer afterwards, so you can hit pause, restart, and when you come back, we'll pick up where we left off.
So now that we have set that environment variable, we can set the connection string variable here. I'm going to say string connectionString is equal to Environment.GetEnvironmentVariable, and we find it by its name. Then we're going to initialize a new queue client, and that's going to be similar to what we've done before: QueueClient queue is equal to a new QueueClient, referencing the connection string and the name of the queue.
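A minimal sketch of that setup in the console app; the environment variable name matches the one set with setx above, and the queue name is an assumption.

```csharp
using Azure.Storage.Queues;

string connectionString = Environment.GetEnvironmentVariable("AzureStorageConnectionString");

// Connect to the same queue the web app is publishing to.
QueueClient queue = new QueueClient(connectionString, "attendee-emails");
```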
Now I'm going to try to get messages from the queue. The first thing I want to do is make sure that the queue exists, which is just rudimentary, right? If await queue.ExistsAsync — then inside of this if statement, I'm going to check how many messages there are. First, I get the properties: I say queue properties is equal to await queue.GetPropertiesAsync. Once I have those properties, I can check what the approximate message count is. I'm going to say while properties.ApproximateMessagesCount is greater than zero — that's just a number that indicates roughly how many messages are in the queue — so I'm setting up a while loop to say: while there are messages in the queue, we want to do something. I'm going to go ahead and fetch the messages. I'm going to say string message is equal to await a method that I'm about to create, RetrieveNextMessageAsync — we don't necessarily have to follow the async naming convention, but I do want 'retrieve'; let me get my spelling right. And then I'm going to say Console.WriteLine and print that message, and I'm just prefixing it with the word 'Received'.
Let us implement this method. I'm just going to generate the method below; it's going to be an async method that returns a string. Inside of this method, I'm firstly going to retrieve a message. Now, you can retrieve up to several messages at a time, right? So I'm going to say QueueMessage[] retrievedMessages — this is an array — and then I'm going to await queue.ReceiveMessagesAsync, and we can give it a count. This count is the max number of messages: the minimum is one and the max is 32. So I could actually say 'give me up to 32 messages at once', but for now I'm just going to do one, and we're going to process them one by one. Then I'm going to have to convert from Base64 into a byte array, and then from a byte array into a string. I'm going to explain all of this. In our Storage Explorer, when we're looking at our messages, we see the message text in plain text and that looks fine. However, when you double-click it, you'll see that it is showing you that it has decoded it as Base64. So it's showing me what's in the message, but really and truly, if you were to look at it at rest — if anybody got access to the queue and tried to see the messages, or tried to intercept them — they would just see a Base64-encoded string; they would not actually see the text like you're seeing it now. For that reason, when we are processing these messages, they're actually coming over as Base64-encoded messages, and I have to convert from Base64. So this is the Base64-encoded string, which converts to binary data, right? And then I can convert back to UTF-8 text from that byte array. That whole flow has to work if we're using Base64 encoding. Then, after I have retrieved the message from the queue, I want to remove it. So I'm going to say await queue.DeleteMessageAsync, and we're going to remove it using the retrieved message's MessageId and the retrieved message's PopReceipt. If you hover over these properties, you'll see what each one of them means. Essentially, after we've processed the message — or we've retrieved it and we can remove it from the queue — I'm going to return the message, which we can then process.
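For reference, a minimal sketch of that helper; the method name is illustrative. Because the messages were enqueued Base64-encoded (and this client wasn't configured with Base64 encoding), we decode the body manually.

```csharp
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;
using System.Text;

static async Task<string> RetrieveNextMessageAsync(QueueClient queue)
{
    // Ask for one message at a time (ReceiveMessagesAsync accepts 1 to 32).
    QueueMessage[] retrievedMessages = await queue.ReceiveMessagesAsync(1);
    var retrievedMessage = retrievedMessages[0];

    // The body arrives Base64-encoded; decode it back into the JSON text we sent.
    var bytes = Convert.FromBase64String(retrievedMessage.Body.ToString());
    var message = Encoding.UTF8.GetString(bytes);

    // Once retrieved and processed, remove the message from the queue.
    await queue.DeleteMessageAsync(retrievedMessage.MessageId, retrievedMessage.PopReceipt);

    return message;
}
```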
Now remember, yes, we're just going to Console.WriteLine it here, but this could easily be 'send the email' or 'store it in the database', because it's an asynchronous operation and it's something that needs to happen afterwards. Like we said before, especially in a microservices situation, this could be the agent or node that runs the anchor leg of whatever process the queue is driving, right? This is just a simple console app. In a real-life scenario, you'd probably use a worker service that's always on, rather than something you'd have to manually run every time; the code would look pretty much the same in a worker service. If it were an Azure Function, which we'll look at later on in the course, then we'll see how we can handle this kind of situation as well. So let us go ahead and test, and I'm going to set some breakpoints at some key areas, just so that we can track exactly what is happening as the code goes through. I'm going to make sure that the console app is my startup project and go ahead and run with debugging. We hit the first breakpoint, so we can rest assured that the queue does exist. Then I'm just going to jump down to the while loop. It went ahead and fetched the properties; let's look at that object, and we see here that the approximate message count is five. Because of that, we jump into our while loop, and then we go ahead and retrieve the next message. I'll just press F5 so that it jumps down, and let us assess. First, we got one message — and because this is ReceiveMessagesAsync, even though we specified one, it's still going to return an array or collection type. Really and truly, it's going to return an array with that number of messages, so we get an array with one message in this case; that's fine. Now, when we look at this retrieved message object, we'll see that we have our body, we have a dequeue count (meaning how many times this message has been read), we have an expires-on based on the different configurations that we would have set up, and we have the message text — so we have all of those properties on this QueueMessage object. Notice that the body is an encoded string. This is what we're talking about: this makes no sense to any human being, unless you are a walking computer. So what we're going to do is convert that body to a string, then convert it to a byte array, and then convert from the byte array into an actual, readable string. Then we're going to delete the message, because now that we've retrieved it, we no longer need it on the queue. So this is the message — that JSON body that we talked about, or that we saw in the preview. Now we can proceed to delete it, and then the message gets returned. I'm just going to press Continue and remove the breakpoints so that all of the messages will get printed out to the console. And here I have an exception.
19. Code Cleanup and Optimization: All right, welcome back guys. In this lesson we're just going to do some code optimizations and cleanup. The first optimization-slash-cleanup activity brings us right back to the console app that we just did. What I've done is replace the while loop with a for loop. Since I'm getting a definite number of messages, it makes sense to just loop a definite number of times, right? That will definitely help us get past that index-out-of-bounds error, as we'll only go as far as the count allows; a sketch of the corrected loop is below. Of course, there are several ways we could do this, and it was just a lapse in judgment that I ended up doing the while loop in the first place. Hey, we live, we learn. So that's our first correction.
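A minimal sketch of the corrected consumer loop, assuming the queue client and the RetrieveNextMessageAsync helper from the earlier sketches (QueueProperties comes from Azure.Storage.Queues.Models).

```csharp
if (await queue.ExistsAsync())
{
    QueueProperties properties = await queue.GetPropertiesAsync();

    // Loop a fixed number of times based on the approximate count, instead of the
    // earlier while loop that could try to read past the last message.
    for (var i = 0; i < properties.ApproximateMessagesCount; i++)
    {
        var message = await RetrieveNextMessageAsync(queue);
        Console.WriteLine($"Received: {message}");
    }
}
```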
Now, I want to turn my attention back to the web project, and the fact that, okay, we have the storage connection string here, but in our services — Blob storage, Queue storage, and Table storage — we are instantiating a client every single time. So we can probably refactor that a little bit. Let us start off by getting a new package, and I'm just going to use the developer console here so that we can all be on the same page, whether you're using Visual Studio or Visual Studio Code. In the web project, I'm going to dotnet add package, and the package is Microsoft.Extensions.Azure.
That will give us the ability to go over into our program.cs, and while it's installing, I'll start writing the code. It gives me the ability to go into program.cs and say builder.Services.AddAzureClients. Then I initialize this with a builder and a lambda body and end it with a semicolon. So what are we doing inside of this AddAzureClients? Well, I can now say builder-dot and add clients. You see here, I can add a blob service client, I can add a queue service client, and I can add a table service client — all of that right here. Let's start off with AddBlobServiceClient, and we have several overloads. The easiest overload to use is the one that takes a connection string. I can jump over to the configuration, where we just get this storage connection string: we go to builder.Configuration — let me get my spelling right — and the key that we're looking for is our storage connection string. Now that I have that in one place, I can reuse it elsewhere. So builder AddBlobServiceClient, and I'm passing in that connection string value. Alright, I can also say AddQueueServiceClient, and guess what? It takes the connection string. And guess what I can also add — if you said table, then you are absolutely correct.
Now, let's zone in on the queue service client a bit. If we go over to the queue service, remember that when we created a new client, we had some options, and that options section included the default encoding that we want. Back in program.cs, we can actually extend this registration by adding ConfigureOptions to the end of that line, and then we create a little configuration body — I'm just using c as my lambda token. Inside of that configuration section, I'm setting c.MessageEncoding equal to the same Base64 encoding.
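A minimal sketch of those registrations in Program.cs. It assumes the Microsoft.Extensions.Azure package plus the Blob, Queue, and Table SDK packages are installed, and that the connection string key is the one used earlier in the module.

```csharp
using Azure.Storage.Queues;
using Microsoft.Extensions.Azure;

var connectionString = builder.Configuration["AzureStorageConnection"];

builder.Services.AddAzureClients(azureBuilder =>
{
    azureBuilder.AddBlobServiceClient(connectionString);
    azureBuilder.AddTableServiceClient(connectionString);

    // The queue client needs the same Base64 message encoding we used when creating it manually.
    azureBuilder.AddQueueServiceClient(connectionString)
        .ConfigureOptions(c => c.MessageEncoding = QueueMessageEncoding.Base64);
});
```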
Now that we have all of that done, we can inject these registered clients into our existing services instead of provisioning them on the fly. We can start with the Blob Storage service. What we're going to do is use the injected service client to get a container client, so we don't have to call our manual method every time. I'm going to inject the BlobServiceClient into this constructor and initialize a local field; we know how to do that already. With that injected, instead of saying var container is equal to await GetBlobContainerClient, I can replace it with blobServiceClient.GetBlobContainerClient — and this is not async, which is why we're getting that red line. Alright? I can go through and replace all instances of that manual method with the injected client. And then, if I want, I can erase that method; but I'll leave it there for posterity so that we can see what used to be. We have the TableServiceClient, which we inject and initialize, and then we can replace our await GetTableClient: I can now just call the service client's GetTableClient, reference the table name, and correct that spelling. There we go. So now all of these references are in place — and this is quite similar to the Blob Storage client initialization, and likewise to the Queue storage client initialization, so that's fine. Of course, the old method is here for reference. I did go through and replace all of them, but I just wanted to point out that what we're not doing here is calling create-if-not-exists, so there is a possibility that, in getting the client, it refers to a table or container that doesn't exist. Another refactoring option could have been to keep the method, drop the create call, and still return the client based on the injected service — so we could retain the method call and return the client for each call, or for each method. It's up to you, right? We could also have just defined a table client field up top, initialized the field inside of our constructor, and then just used the field. So there are several ways to do it.
at another way that we could approach this. What we're doing
here is injecting the client, the service client. And then because we're injecting the service plan to
have to go and create a new instance of the actual client that will
connect us to the resource. So what if we wanted
to just registered the direct client
to the resource? So we would have to take a slightly different approach with what we're just
doing our own here. And I'll just do
it in another ad as your clients section. Just so that we can
have some deal initial. Alright? So let us say that I wanted the queue
client directly, not the Queue Service plan, not the service name,
but the actual client. I would actually use the
builder dot add client. Then this allows me to
specify the data types. So I want the queue, client and Q options. And then I'm going to
initialize a delegate using placeholders for the expected
parameters for this method, if you hover over the AD plant, you see what the parameters
are either represent. And then we're going to use a lambda expression
for our object body. And then inside of
this method were expected to return a new
instance of Q client. So that means if I jump over to this queue service
where we would have been initializing
our clients. I could actually use
this code right here and return a new client
which is using the configuration or the
storage connection string. I could put the queue name
in my configuration as well. So I don't have to have it
hard-coded in the file. So let's do that. In fact, I'm going to create
a whole new section in the configuration and
I'm going to call it as your storage. And I'm going to put the
connection string there. And I'm going to just take off the word storage
is kind of redundant. So now the key and
I'm doing the same, the same time because I
don't want to forget. So now the key to
the configuration for the connection
string would be Azure Storage colon
connection string, right? Because now it's in
that subsection. And I'm doing all of this, no. So that put a comma there. So now I can put the queue
name or several queue names, whatever Q names and will
need for my application. I can put them here. So the queue name
is going to be, don't tell me, there
we go. Me too to it. There we go. So the queue name is going to be attendee emails hour and I'll
just use the Blob Storage. I just set the keys from
null and table storage. Because this technique
So back in program.cs, I'm going to make a quick adjustment, because I'm going to need the configuration values here. I'm going to change this parameter from 'builder' to just 'b', and then the queue name is going to come from builder.Configuration, where we reference that AzureStorage section and the queue name key. Alright? And of course we need a return, so this needs to return a new QueueClient with all of those values.
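A minimal sketch of registering the QueueClient itself, rather than the service client, as described above. The configuration section and key names are assumptions, and the exact AddClient overload you use may differ slightly.

```csharp
var storageSection = builder.Configuration.GetSection("AzureStorage");

builder.Services.AddAzureClients(azureBuilder =>
{
    // Register the QueueClient directly so services can inject it without building it themselves.
    azureBuilder.AddClient<QueueClient, QueueClientOptions>(options =>
    {
        options.MessageEncoding = QueueMessageEncoding.Base64;
        return new QueueClient(
            storageSection["ConnectionString"],
            storageSection["QueueName"],
            options);
    });
});
```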
Now that we have this QueueClient registered, in the queue service I can inject that QueueClient directly. I don't need the direct reference to the queue name anymore, and I can use this queue client to do all the communications accordingly. So I don't need to do that initialization inside of SendMessage, or inside of this service at all. Now, I can do something similar for the Table Storage: I can just take this table name and put it inside of our appsettings.json. Then, in our program.cs, I'm going to copy and paste the queue client registration in the same section, and we're going to add a client for TableClient with TableClientOptions. What we're doing here is returning a new instance of TableClient with the storage connection string and our table name key — AzureStorage:TableName, or TableStorage:TableName, whatever it is that you called it.
So now I can inject this TableClient directly into the Table storage service. Instead of injecting just the service client — well, let's do both. So TableClient, and I'm going to initialize it. I'm leaving all of this code here so you have it for future reference, right? It's not that you necessarily need all of it, and I think we know that by now. But I don't have to initialize a table client every time; I can just use my injected client, much like what we did with the QueueClient. So now I can just use the TableClient right throughout this code base.
optimizations that you can make to your code
to make sure that, oh, well, I don't have
to change this one. So this method would
now be archived, so to speak, right? So these are some
optimizations and cleanup activities that
you can due to remove some of these hard-coded
references to the configuration. And then of course we discussed the fact that you
probably don't want to store all of these keys
inside of the app settings. So you could actually
use secrets. And to use secrets, you just right-click
the CSP image file and you go down to
manage users secrets. And then you can add
that section there. So it is not one to get checked into source
control for one, and it's not immediately
visible to prying eyes, but it is still
accessible through or configuration in our startup. No one reason I'm
not configuring a blob client while I'm closing. I'm not going to configure
our Blob client. Because remember that
the Blob client gets created directly against
the Blob based on the knee. So while this is a
bit more efficient to just go ahead and inject
the service lanes. We still need to
create a client on the fly based on the blob that
is about to be processed. Because once again, we
don't know which blob is going to be processed in
order to start up with that. So you can go ahead
and validate that is incentives and these
new registrations work. And I'm just going to
comment on one of them. And that's it for this activity. Let's close out this module.
20. Conclusion: So we've hit another milestone, and we have completed this module on Azure storage. For the duration of this module, we reviewed how to create and manage a storage account. This involved us interacting with the Blob, Table, and Queue storage services, all part of the storage account offering from Microsoft Azure. We looked at how we can use the Azure portal, or the storage browser that we could download to our computer, to interact with the different services — once again, Blob, Table, and Queue storage. We saw that we could manage the contents of those, whether from the Azure portal or locally using the storage browser. We also looked at how we can develop a .NET Core solution that uses the storage account. Right here, we looked at storing and retrieving the connection string to the storage account, how we can provision clients and write code to interact with the different services, and the nuances in between. And ultimately, we can chalk it all up to the fact that each one of these services is being managed through RESTful API calls, and in code all we're doing is provisioning HTTP clients to interact with the underlying services. We also topped it off by looking at how we could facilitate asynchronous communication using queues, and this had us implement something of a pub/sub pattern, where we published a message to the queue and then had another app subscribed to the queue that would pick the message off, process it, and move on. So that's an example of the pub/sub pattern and how we can implement asynchronous communication. I thank you for joining me in this module. I'll see you in the next lesson.