Transcripts
1. Introduction: Hello and welcome to my course, Understanding Semantic Kernel SDK. I'm your instructor, Trevoir Williams. I'm a software engineer, lecturer, author, and enthusiast, and it is my absolute pleasure to
deliver this course to you. Before we jump into the course, let's review some of the main objectives that I hope to help you accomplish. Now, one thing we want to do is gain a comprehensive understanding of the Semantic Kernel SDK. This means we're going to explore the core concepts, architecture, and capabilities of this SDK relative to .NET development. We're also going to look at how we can develop AI-powered plugins that models can use, enhancing application functionality with AI capabilities. We also want to look at how we can integrate AI models into our applications using LLMs, like those available from the Azure OpenAI service. We also want to make
sure that we leave this course understanding how we can use this SDK to implement real-world applications, like virtual assistants, career coaches, or systems that help with decision making, powered by AI, of course. I also want to make sure that you get to apply all of the skills you learn through hands-on activities and projects that reinforce the learning. So, some of the key features this course has to offer: one, an overview of the Semantic Kernel SDK, with in-depth explanations of its purpose, its role, and how it can be used to bring AI integrations into your modern applications. We also have hands-on guided exercises where we put everything we learn about the SDK into practice. And, of course, we're going to be building on top of LLMs provided to us through Azure OpenAI; in this course, we're focusing on the GPT LLM. We also have real-world use cases, because we're actually going to be building a coaching system that can process a user's query, retrieve historical data, and then offer a personalized response. We're also going to port this into an application for the web. And we're going to look at advanced features as we go along, like memory, planning, and reasoning capabilities, and configuring AI with long-term retention, and several other things you hear about with AI; we're going to be looking at how we can use the Semantic Kernel SDK to accomplish all of these. Now, before you
take this course, I advise that it is good if you are already familiar with .NET and C# programming, and you are comfortable with foundational concepts like object-oriented programming, how to incorporate packages and libraries, and how to use APIs. It's good if you have an appreciation of all of that before taking this course. Also, if you have
an understanding of Microsoft Azure,
that is good, though you don't need an in-depth understanding, because whatever you need to understand, I will explain, and I will use only what is required for this course. It's also good if you have a basic understanding of AI concepts and keywords. I will be using some keywords, and I'll try to explain them as I go along, but if you already have that knowledge, it will be easier for you to assimilate this information. Now, as a recommendation, I suggest that you
take my course, Generative AI for .NET Developers with Azure AI Services, where I go from scratch to completion through the different concepts that make up AI and generative AI, and the different services that Microsoft Azure offers us. By the time you complete that course, everything we're doing here will come naturally. Now, if you already have the foundations of the previous three points, you probably don't need to take that course, but I do make the recommendation so you can see the whole picture as you go along. With all of that said and done, I want to welcome you to this course. And, well, let's get started.
2. What is an LLM?: All right, so let us start off by discussing what an LLM is. LLM is an abbreviation that you see a lot when you're dealing with artificial intelligence, especially nowadays. A large language model, or LLM, is an artificial intelligence model that is designed to understand and generate human-like text based on vast amounts of data. You'll find that a lot of generative AI applications are powered by LLMs so that they can perform natural language processing tasks. Natural language means we are speaking to the AI the way I'm speaking to you: I'm using English. It's not about using a programming language, which is traditionally how we talk to computers. No, LLMs allow us to speak English, or our native language, to the system, and it can understand what is being asked of it and then produce a response as best as possible. And, of course, any AI engine is only as powerful as how it has been trained, so LLMs tend to be trained on a lot of data about a lot of things, so they can understand many variations of our natural language prompts. Now, models like OpenAI's GPT series and Google's BERT are prominent examples of LLMs used in various applications. An LLM uses deep learning techniques and is capable of performing things like language translation, summarization, conversation, and sentence completion.
3. What is Generative AI?: Now let us briefly look at what is meant by generative AI. Artificial intelligence is designed to imitate, as best as possible, human behavior by using machine learning to interact with the environment and execute tasks without explicit directions on what it should produce. Generative AI describes a branch of AI that allows the system to generate original content. Now, people typically interact with generative AI through chat applications; examples of this would be Microsoft Copilot or ChatGPT. Generative AI applications accept natural language input, like English, and return appropriate responses in natural language, images, or code. And what's most important is that it generates this on the fly. It's not going to do a predetermined thing all the time; it is trained on a lot of data, like we said, backed by the LLM, and when it gets a prompt, it can look at that vast amount of data, make the connections, and put them all together to return a relevant response. And that is why, when you're using Copilot or ChatGPT or any other AI engine based on generative AI technology and LLMs, you'll see that you can generate original content just by making a suggestion or asking a question.
4. What is Semantic Kernel?: All right, so now that we've laid a little bit of the foundation, let us jump into the main topic of why we're here: the Semantic Kernel. Now, the Semantic Kernel is an open-source SDK that empowers developers to build custom AI agents. The Semantic Kernel allows developers to combine LLMs with native code and create AI agents that understand and respond to natural language prompts. It is an efficient middleware that enables rapid delivery of enterprise-grade solutions, AI-powered, of course. And best of all, there's support for it across C#, Python, and Java. Here's a little graphical representation of how it works; I've taken this diagram from the Microsoft documentation. There is what we call the AI orchestration layer. This is the core of the Semantic Kernel stack, and it allows seamless integration of AI models and plugins; this layer is responsible for combining these components to craft innovative user interactions. Then we have the connectors. The Semantic Kernel offers a set of connectors that enable us as developers to integrate LLMs directly into our existing applications; these connectors serve as the bridge between the application code and the AI models. Then we have the plugins. The Semantic Kernel operates on plugins, essentially serving as the body of the AI app. Plugins consist of prompts that you want the AI model to respond to, and functions that can complete specialized tasks. There are some built-in ones, and, of course, we can build our own, and we'll be looking at both of them in this course.
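To make those three pieces a little more concrete, here is a minimal sketch, assuming the Microsoft.SemanticKernel NuGet package; the plugin class, its name, and the placeholder connection values are hypothetical, not from the course itself:

```csharp
using System.ComponentModel;
using Microsoft.SemanticKernel;

// A hypothetical plugin: plain C# methods exposed to the AI via attributes.
public class TimePlugin
{
    [KernelFunction, Description("Returns the current UTC date and time.")]
    public string GetUtcNow() => DateTime.UtcNow.ToString("R");
}

public static class KernelSketch
{
    public static Kernel BuildKernel()
    {
        var builder = Kernel.CreateBuilder();

        // Connector: the bridge between our code and the LLM (placeholder values).
        builder.AddAzureOpenAIChatCompletion(
            deploymentName: "<your-deployment-name>",
            endpoint: "https://<your-resource>.openai.azure.com/",
            apiKey: "<your-api-key>");

        // Plugin: functions the orchestration layer can call when needed.
        builder.Plugins.AddFromType<TimePlugin>("Time");

        return builder.Build();
    }
}
```

The orchestration layer is the kernel itself: it decides when to render prompts, call the connector, and invoke plugin functions like the one above.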
5. Why use the Semantic Kernel?: Alright, so now that we know what the Semantic Kernel is, you're probably wondering, okay, so why should I use this thing? Well, for one, it integrates with LLMs like OpenAI, Azure OpenAI, and Hugging Face, to name a few. Basically, it allows developers to create plugins to interface with the LLMs and perform various tasks. The SDK provides built-in plugins that can quickly enhance any application, and it allows us developers to easily utilize LLMs in our applications without needing to learn the intricacies of each LLM's API. And that's the most important part: it provides an abstraction layer that allows for the simplified integration of AI capabilities into existing applications. It also allows us to develop streamlined, reliable AI experiences while avoiding the potential unpredictability associated with prompts and responses from LLMs when we use them directly. Finally, it supports the ability to fine-tune prompts and plan tasks to create controlled and predictable user experiences.
6. Understanding AI Agents in Business Context: Now, another key expression that you're going to see is AI agent. An AI agent is a software entity that can make decisions, take actions, or interact with an environment autonomously to achieve specific goals. These agents use artificial intelligence, including machine learning and natural language processing, to understand and respond to their environment, often adapting based on feedback. They're designed to execute specific tasks or goals, like recommending products, answering customer queries, or controlling a robot in a physical space. Some tangible examples of AI agents include chatbots and virtual assistants, which are AI agents that interact with users; they can answer questions, provide information, or help with general tasks. You also have recommendation systems, which suggest products, movies, and content based on user behaviors and preferences. We also have robotic process automation; these agents can handle repetitive tasks like data entry, form processing, or scheduling. And you have robotic AI agents that operate in a physical environment, like a warehouse robot, an autonomous vehicle, or a home assistant. Now, in the context of a business, AI agents can help to streamline operations and improve customer service, because they make applications smarter and more responsive to user needs. They can also help with decision making by automating tasks and providing insights.
7. Visual Studio Code: Now, we have Visual Studio Code as the alternative. This is a great cross-platform alternative, which means it will work for Windows, Linux, and Mac, versus Visual Studio, which is Windows only. And, of course, it is compatible with .NET. Then there's an extension called C# Dev Kit, which is very useful for giving us that Visual Studio feel; it bootstraps .NET-related operations like creating projects and interacting with our project files. For some things, you will be required to write some CLI commands, which is fine; we'll go through those. And actually, for this course, I'll be using Visual Studio Code more than Visual Studio. Now, to get Visual Studio Code, you can go to code.visualstudio.com, and from here you can download for Windows, or you can download for other platforms. I think this will actually default based on what platform you are on when you browse to this page. After a very simple and straightforward
installation process, you'll be given your Visual Studio Code window. It's really just a text editor, but it can be enhanced through extensions. Once you're here, you can go to the Extensions tab, and from there you can look for the C# Dev Kit. Once you click on the C# Dev Kit, you'll see the button there to install. Here, mine is updating, so you'll see that it's currently installing, but for an extension you don't have, you'll see a button that says Install. So when you go to the C# Dev Kit, you'll see Install; click that. Of course, if you don't see it readily, you can always search, and once you find it, go ahead and install it. It may take some time.
Now, you'll also need to install the .NET SDK manually. This comes with Visual Studio, but if you're using Visual Studio Code, or you're not able to use Visual Studio, then you have to go to dotnet.microsoft.com and get the SDK that you want to use. Now, you can install both versions. This one is the recommended one and, as I said, Standard Term Support, and it was released very close to when I'm doing this recording. But what I'm saying is that either one works: you can install both, and you can use either one to create your projects, and it's quite easy to upgrade from .NET 8 to .NET 9 if you decide to do so later on. Now, you'll see that this page defaults to Windows, but you can get all versions. All you have to do is go to all .NET 8 downloads, and from here you'll see that you do have versions for operating systems that are not Windows based. So, once you have all of those installed: Visual Studio Code and the .NET SDK. And actually, I got the order mixed up; I apologize. Do Visual Studio Code first, then install the SDK, and then attempt to install the C# Dev Kit, as it does say that the .NET SDK is one of its requirements. So to avoid any problems, do it in that order, and then you can continue.
8. Microsoft Azure Account: Now, what is Microsoft Azure? Microsoft Azure is Microsoft's flagship cloud computing platform that provides services like computing, networking, storage, and databases. It offers seamless hybrid cloud solutions, allowing businesses to integrate on-premises data centers with cloud resources. It also provides easy scalability, so businesses can quickly scale resources up or down based on demand and other factors. Most importantly, Azure includes built-in AI and machine learning services, like Azure Machine Learning, Cognitive Services, Bot Services, and others, and all of these can be used when developing intelligent applications. Now, I'm just going to quickly walk you through what it takes to set up an Azure account if you don't already have one. First of all, you need to create a Microsoft account or sign
in with an existing one. Your journey starts right here at azure.microsoft.com, and I'm sure they're going to redirect you to the site that is most appropriate for where you are in the world. From here, if you already have an Azure account, go ahead and sign in. If not, you're new to Azure, and you can click Start free. Then you have to choose your subscription. You do have the option of a free trial, which allows you to explore the platform and its services without any upfront costs. You do get some amount of credit, and they give you the lowdown down here, so let me not try to quote from memory and mislead you... there we go: $200 for 30 days to try out services. And over the course of a year, you actually get access to a lot of free services throughout that 12-month period. Alright? So you can start off for free. They do require that you add some payment information, but there's no upfront commitment. You can cancel anytime, and your credit card will not be charged when that free subscription is done. So you don't have to worry about the credit card being charged; once the free subscription is done, you'll just lose access. But then, if you don't want the free tier and you
just want to go straight into the paid tier, or when the free credits are exhausted and you have to start paying to continue using the services, you do have the pay-as-you-go option. Now, this gives you more resources, more features, and, of course, it's paid. There are far fewer restrictions on what you can and cannot do, and you only pay for what you use; that is a great thing. A lot of these services you can spin up, especially while you're learning, and for the few minutes or hours that you may use a service, you will be charged. But once you destroy the service, which I will be showing you how to do, how to decommission or deallocate the service, then you will no longer be charged. So this is a great way to go as you're learning, when you don't want to spend too much money but you do not want the restrictions that the free account might give. However, if you're just starting with Azure, I do suggest that you start with the free account, so you can see what it is capable of, and the free account is good
enough for this course. After you've selected the suitable subscription plan based on your needs and your budget, we're going to go ahead and set up what we call a resource group, which is used to organize our resources into one logical container, and then we're going to explore and utilize different Azure services, like machine learning, the App Service, SQL Database, and storage.
9. Semantic Kernel Overview: Alright, now we're going to get started, and we're going to build our very first kernel in this section of the course. Here's a brief overview of what we're going to do. We're going to look at the kernel and what it is. We already discussed the concept behind the Semantic Kernel; now we're going to get a little more into the specifics of how exactly it works. We're going to look at using Azure OpenAI, and we're going to complete a simple chat app with the Semantic Kernel. So stick around; we're going to explore some concepts and get into some practical examples in this section.
10. What is the kernel?: Now, what is the kernel? The kernel is the central component of the Semantic Kernel. Remember, the Semantic Kernel is really the SDK, so what exactly is the kernel? It can be considered a dependency injection container that manages all of the services and plugins necessary to run the AI application. Remember that the Semantic Kernel as a whole is really a wrapper around the underlying SDKs that each AI engine provides. OpenAI, Azure OpenAI, Hugging Face: all of those have their own SDKs. The Semantic Kernel as a whole is a wrapper around them, and the kernel itself manages the different services in your application. All the services that you expose to the kernel will be seamlessly used by the AI as needed. Any prompt or code that is run in the Semantic Kernel will use the kernel to retrieve the necessary services and plugins. Now, the kernel is extremely powerful, because it means that, as developers, we have a single place where we can configure, and also monitor, the most important parts of our AI agents. Let's say, for example, we have an application; we're going to look at a high-level diagram taken from the Microsoft documentation. We have our application, and then we need to run an AI prompt. The kernel is going to select the best AI service to run the prompt. It will build a prompt using the provided prompt template, send the prompt to the AI service, receive and parse the response, and finally return the response from the LLM to the application. And throughout this entire process, you can create events and middleware that can be triggered at the different steps. This means you can perform actions like logging and providing status updates to users, and, of course, implement everything that is required to make sure that you're practicing responsible AI usage. Once again, all of this can be done from a single place, courtesy of the Semantic Kernel.
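As a taste of that middleware idea, here is a minimal sketch, assuming Semantic Kernel 1.x, where function invocation filters are one mechanism for hooking into these steps; the filter class name is hypothetical:

```csharp
using Microsoft.SemanticKernel;

// A hypothetical filter that wraps every kernel function invocation,
// letting us log (or audit, or time) each step of the pipeline.
public class ConsoleLoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        Console.WriteLine($"Invoking: {context.Function.Name}");
        await next(context); // run the actual prompt or function
        Console.WriteLine($"Completed: {context.Function.Name}");
    }
}

// Registration on an already-built kernel:
// kernel.FunctionInvocationFilters.Add(new ConsoleLoggingFilter());
```

Because the filter sits on the kernel itself, every prompt and plugin call flows through it, which is exactly the "single place" advantage described above.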
11. How to build a kernel: Now, let us review how we build this kernel. Using the Semantic Kernel SDK, it takes minimal setup. We need the Semantic Kernel SDK package. We need an endpoint for an LLM, one or more, but in this case we're just going to use one, which is Azure OpenAI. We need the SDK to connect to the LLM via that endpoint and run the prompts, and the Semantic Kernel SDK supports different LLMs, like Hugging Face, OpenAI, and Azure OpenAI; the sketch below shows what that minimal setup looks like.
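Here is that setup as a short preview, assuming the Microsoft.SemanticKernel package is installed; the deployment name, endpoint, and key are placeholders you would take from your own Azure OpenAI resource:

```csharp
using Microsoft.SemanticKernel;

// The kernel builder collects everything the kernel needs.
var builder = Kernel.CreateBuilder();

// Point the kernel at one LLM endpoint; here, Azure OpenAI (placeholder values).
builder.AddAzureOpenAIChatCompletion(
    deploymentName: "<your-deployment-name>",
    endpoint: "https://<your-resource>.openai.azure.com/",
    apiKey: "<your-api-key>");

// The finished kernel, ready to run prompts against that LLM.
Kernel kernel = builder.Build();
```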
And if you're wondering what the difference is between OpenAI and Azure OpenAI, or what LLMs really are, and you want a bit more information, you can always check out my course on Generative AI for .NET Developers; I go in depth in that course, from scratch, about how to develop AI solutions. For this course, we're going to be using Azure OpenAI, particularly in the example that's coming up. Now, while we're on the
topic of what is needed to build our kernel, let's explore what the Azure AI services are. We went through what Microsoft Azure is and how to set up the account, so assuming that you've done all of that, I'm just going to let you know what the AI services are all about. You have Azure AI Cognitive Services, and you have the Azure OpenAI service; I'm going to focus on the OpenAI service, since that's what we're using in this course. Now, the OpenAI service offers access to powerful language models like GPT, and I'm sure we've all heard of or used ChatGPT by now. These can be used to develop intelligent applications like chatbots, content generation, and language translation applications. Natural language processing is at the heart of how LLMs work, and Azure provides robust NLP capabilities that allow us as developers to analyze text, extract insights, and perform analyses like sentiment analysis through services like Text Analytics and Language Understanding, or LUIS for short. A little more about
the Azure OpenAI service. Well, let's take a step back. OpenAI is an AI research company that has produced several groundbreaking AI-powered applications, ChatGPT being one very powerful example. Now, Azure OpenAI is a service that gives us RESTful APIs and access to OpenAI's powerful language models. You may not know that these are the different models available behind ChatGPT, but if you use ChatGPT, I'm sure you'd be familiar with at least 3.5, 4, and 4o. Just to show you, there are other models that you can access from a development standpoint, and Azure OpenAI exposes these to us, giving us the backing and the guarantees of Microsoft's Azure hosting and infrastructure, of course. Now, with these models, we can complete things like content generation, summarization, image understanding, semantic search, and natural-language-to-code translation. So there are several things you can do with these GPT models; once again, if you want to go in depth on them, you can check out my course Generative AI for .NET Developers. All right, so now that
we have an understanding of what the requirements are for building the kernel and starting to build
an application, we're going to jump right
in and start off by provisioning our Azure
Open AI resource.
12. Create Azure OpenAI Resource: Alright, so I'm assuming that you've already set up your Microsoft Azure account and you are able to log in. Once you log in, we're going to go straight into creating our OpenAI resource. I'm going to search up here for OpenAI; I actually have it right here, but you can always just type it in the search, and you're going to click on Azure OpenAI. From here, we're going to go ahead and create that Azure OpenAI resource. Do note that this resource will incur costs. If you're on the free tier, then of course you have the credits, and you might get very limited capabilities, but otherwise, you will incur costs. So after you're finished with a demo, do remember to tear down your resources to make sure you don't incur anything unnecessary. For this course, I'm going to create a new resource group, and I'm going to call it Semantic Kernel RG, just something that I can identify. When I'm removing the resources, I just need to remove this resource group, which I've named very specifically based on the examples that we're going to be using. So I'm just going to go ahead and do that, and then give the resource itself a name. You generally want this name to be unique, so you could use something else to make yours unique. By the time you're doing this course, my resource will no longer exist, but you can always append your name to it to make it a bit more unique, just in case it says the name is not available. And for the pricing tier, I'm going to go
with Standard. Now, if you want to see the full pricing details, you can do so via this link. Then I'm going to proceed: I'm going to confirm that I want all networks to be able to access this resource, I don't need to tag it, and then I hit Next. From here, it's going to validate everything that has been entered, and then I can create. Once that step is done, we can go to the resource. Now, this resource gives us access to Azure OpenAI Studio, so let's go to the studio. While it loads, you might be required to reauthenticate; you can authenticate using your Azure credentials, and in no time you should be allowed in. Once we're here, what we want to do is create a new deployment. This is us setting up the LLM that's going to power our kernel, right? The kernel is really just a code-based connector to some LLM that we have; once again, we're using Azure OpenAI. Here, I'm going to go to Deployments, and we're going to create a new deployment: we're going to click Deploy Model, then Deploy base model. Just for context, you can deploy a base model because Azure OpenAI has several base models that are pre-trained with certain information and for certain purposes. But we can also fine-tune a base model, or another model, to make it more unique to our needs. The needs of an airline company would be entirely different from the needs of a school, right? The base model can be a general go-between for both of them, because the base model has been trained on a little of everything, but sometimes you really want to fine-tune your model to fit your specific business needs. In this example, though, we're going to go with a base model. So let's deploy a base model, and we're looking for GPT. You see here that there are several GPT models available to you, in several versions. Each version has its own capabilities and has been trained on certain information, up to a certain point, so when choosing a model, be very, very deliberate. However, I'm going to go with GPT-3.5 Turbo 16K, and if you don't see 16K, you can go with the regular GPT-3.5 Turbo. From here, I'm going to confirm, and then I can give the deployment a specific name. In the case where maybe I have multiple deployments of the same type of model, I would want to give them unique names; in this case, I'll just leave it with the default name, which is indeed the name of the model anyway. I don't need to change anything else. If you want to change stuff, you can go to Customize and modify the number of tokens sent across per minute. You can change the version (right now, default is the only option), and we have Default and Default V2 for content filters. I'm not going to enable the dynamic quota. And yeah, I can now deploy.
13. Build a chat completion app: Alright, so now we're going to be writing some code. In this lesson, I'm going to use Visual Studio Code and the .NET CLI, just because it's a bit easier, and it's easier for everybody to follow along regardless of your OS. However, if you're more comfortable with Visual Studio, feel free to follow along there, because we're simply creating a new console application. Using the .NET CLI, I'm going to run the command dotnet new console -o BuildYourKernel. This means: create a new console app, and output it to a folder called BuildYourKernel in whichever destination you have selected. So let's go ahead with that. Now I can cd into BuildYourKernel and then run code . to launch Visual Studio Code in that location. Once Visual Studio Code is launched, the next thing I want to do is bring up the terminal inside of Visual Studio Code. I could also do this inside of the original terminal window, but why not use the one inside Visual Studio Code? Bringing up this terminal allows me to type the command dotnet add package Microsoft.SemanticKernel. This will go ahead and fetch the package that allows us to support the Semantic Kernel in our application; NuGet fetches that package. And once that is done, I'm just going to confirm the version that
we're working with. If you go to the .csproj file, you'll see we are now on version 1.29.0. Now, you can use this editor view, or you can use the Solution Explorer, which feels a bit more natural if you're used to the way Visual Studio is laid out; this is available if you have installed the C# Dev Kit. Alright, so let's get
into our example. I'm going to start off with a using statement for the Semantic Kernel namespace. I'm also going to go ahead and initialize a builder, and this builder is going to be Kernel.CreateBuilder(). Remember, when we started off discussing the kernel, we said that it's like a dependency injection container, so if you've used .NET Core before, this shouldn't seem too unfamiliar. Every time you open Program.cs in a preconfigured ASP.NET Core application, let's say a Razor Pages or MVC app, you would always see some builder object that allows you to add dependencies. This kernel builder is allowing us to add those LLM dependencies that we need for our application. So here, I'm going to type add, and then you'll see AddAzureOpenAI and the different kinds of Azure OpenAI services that I can add off the bat, right? I can also add plugins and services right here. For this example, we are doing an AI chat completion app, so I'll use AddAzureOpenAIChatCompletion, and this method has some overloads. If we look at the overloads, it's expecting a deployment name, an endpoint, and different things like token credentials, service ID, et cetera. So you have different overloads for this method. But before I fill those in, I'm just going to finish this up and say var kernel = builder.Build(). This is saying: alright, take all of the different LLM endpoints and everything that we've added, and then just create one final kernel object. Now, just to speed this along so you don't have to sit
and watch me type: I'm going to go ahead and use Console.WriteLine to ask the user to enter their inquiry, and then I'm going to receive that inquiry with Console.ReadLine and store it in a variable called prompt. Now, once I have that input from the user, I'm going to pass it along to the kernel. So I'm going to say result is equal to, and then we're going to await the kernel. And just so you know, the kernel has several methods, asynchronous and synchronous alike: it can create functions, it can create plugins, and we're going to get into all of those things as well. We can invoke a prompt, invoke async, invoke streaming, et cetera. Without getting into the details of every single method, what we're going to do here is invoke a prompt. We're using the asynchronous method, InvokePromptAsync, hence the await, and we're going to pass in that prompt variable, that value. And then we're going to go ahead and Console.WriteLine the result. So once we've done all of this, let's backtrack and fill in the missing information. Let us look at the different parameters that are needed. Firstly, we need the deployment name, and I'm going to state the parameter name here so that we know what value we're putting in. What's next? We could use the OpenAI client overload, but I want to use the API key, so I'm looking at the next overload. We could put in credentials, but that's not the one I want; I can put in the API key instead, right? So I can put in the string endpoint and then the API key. Let's put the endpoint there; right now I'm just leaving them blank, because, of course, we're going to go and fetch the values, and then the API key would be the next parameter. All right, so these are the three minimum bits of information we need in order to allow the kernel to connect to our chat completion deployment. So let's jump back
over to the studio. In the studio, we can first fetch the deployment name; whatever name you gave it when you created it, that's the name we're talking about. Back in the code, the deployment name is the GPT 3.5 one; I didn't change it, so it's the same name, right? Then the endpoint and the API key. Now, it would be easy to think, oh, here's a target URI, that's the endpoint, but that's not quite the endpoint. We have to jump back over to the portal, and in our Azure OpenAI resource, go down to Keys and Endpoint, and we'll see the endpoint here. This endpoint is actually the first part of the target URI; the rest of the target URI is specific to this deployment, which you can use in certain situations. In this situation, we don't need all of that, so it's easier, or better, to target the OpenAI endpoint rather than the specific deployment's endpoint, right? So we're
going to come back, and that is our endpoint, and then we can use the API key. In testing, I realized this key is actually the same as Key 1 here; at least that was my experience. Either way, it's better to use the key associated with the endpoint that we want, so let's go ahead and copy that one and paste it. And do note: whatever values you see here, you don't generally want to store them directly in code. You generally want to put them in an environment variable or use secrets. This is a simple example, a simple demo, so I'm not going to go into all of that complication right now, but do know that by the time you're doing this course, I will have expunged these values from my resources. So make sure you're using your own, and don't try to copy the exact values
return, some result, courtesy of our GPT three, five, Turbo deployment,
which we have in our open AI resource
on Microsoft Azure. So let's go ahead
and test this out. All right, so to your
inquiry, let's say, list the best places
in Jamaica to visit. All right. So I think by
now, you know I'm Jamaican. So let's see where in my country would be the best place
for you to come and visit. And see they list out
some really nice places Montego Bay Grill Oraios. Kingston is the capital. You can always hike
or blue mountains. Go to Port Antonio to relax. Trelon is rich in country. Culture, apologies
and flora and fauna. Treasure Beach is great. All
of these are great places. So yes, I approve this list, and thank you to our chat
completion app that we have just created using
our semantic kernel.
14. Section Review: Alright, so that section was short but hands-on. What did we do? We reviewed what the kernel is, which is the pivotal, central piece in the whole concept of the Semantic Kernel. We looked at how we can provision an Azure OpenAI resource, and then we looked at the Azure OpenAI Studio, where we created a deployment of a GPT model based on the 3.5 version. Then we connected to it using our Semantic Kernel SDK in code and created our very first chat completion application using the Semantic Kernel. That is really how simple it is: once you have the LLM and all of the information for the endpoint and so on, you don't have to worry about any SDKs, or any complications or uniqueness relative to the LLM being connected to, because the Semantic Kernel abstracts all of that away and gives us a simple way to connect and interact. So now that we have seen a practical example, let us go on to the next section, where we'll look into plugins.