Transcripts
1. What's this course about?: Hello, and welcome to our exciting new video course on programming AI chat apps with JavaScript, React, Astro.js, and the OpenAI API. I'm thrilled to be your online teacher, and together we will embark on an extraordinary journey into the world of artificial-intelligence-powered chat applications. In this course, we'll delve into the realm of AI and explore how to create three fascinating web applications. Each of these apps will be powered by the OpenAI API, the same one that is powering ChatGPT, and built using popular web development technologies: JavaScript, React, and Astro.js. By the end of this course, you'll have the skills and the confidence to create sophisticated AI chat applications that interact with users in unique and engaging ways.

Let's take a brief look at the three exciting projects we'll be building. Number one, a ChatGPT clone. We'll start by building a simple ChatGPT clone that enables users to have basic conversations with an AI. You'll learn how to integrate the OpenAI API to harness the power of language generation and create a seamless chatting experience. By the end of this section, you'll have your very own AI-powered chat application up and running. Number two, a text-based adventure game. In this section, we'll challenge ourselves with a sci-fi-themed text-based adventure game. Players will solve puzzles and navigate through the story by interacting with characters through chat. AI characters will respond intelligently, making the experience immersive and entertaining. You'll gain valuable insights into how AI can be used to create interactive and captivating storytelling. Number three, an online sales assistant. In the final section, we'll build a practical application that simulates an online sales assistant. Users can chat with the AI to inquire about available products, which in this case will be guitars and bass guitars. You'll learn how to incorporate AI into real-world scenarios, enhancing user experience and making e-commerce interactions more engaging and efficient.

By the end of this course, you'll be equipped with the knowledge and the skills to create AI-powered chat applications that can amaze your friends, colleagues, and potential employers. Let's embark on this exciting journey together as we uncover the fascinating world of AI chat applications with JavaScript, React, Astro.js, and the OpenAI API.

Now, here's an interesting tidbit. The introduction script you've just heard was, in fact, generated by none other than the very AI we'll be using in this course, ChatGPT. Yes, that's right. As an AI language model developed by OpenAI, ChatGPT is proficient in understanding and generating human-like text based on the context it's given. And now we're taking it beyond just generating text by putting it to practical use in building real-world applications. I'm delighted to be your companion and instructor and can't wait to see the incredible applications you'll build. Let's get started. And by the way, this text was also generated by ChatGPT, and the one you're hearing now is generated by me, your instructor. So let's get started.
2. The OpenAI platform and API: Hi, and welcome to this video. Here I'll go a little bit into the details of the OpenAI platform that is powering ChatGPT. Not too much, but there are a few important things I would like to show you. I suppose you all know ChatGPT, the AI chat application that can be used to generate texts, answer questions, write programming code, and so on. This is all powered by a large language model (LLM) that OpenAI has developed, one that is capable of completing a text written in natural language in a way that, more or less, seems to make sense.

One important point you need to understand is that all the LLM does is complete the text that is given to it. Even if it seems that you're chatting with the large language model, it is only completing the chat history that is provided to it in the way that appears most likely. That means the neural network, the large language model, predicts the most probable following words based on the data it was trained on. Whenever you call a function of the OpenAI API, you must always provide the full context, for example the full chat history, so that the LLM can create a completion based on that very context. The LLM doesn't have a memory, so you must provide all the relevant data each time you ask the API for a completion.

If you want to know how the OpenAI API works, you can of course check the web page at platform.openai.com/docs. This is this page here. They also have a playground where you can test the API calls by providing just the data. Then you don't have to write anything in a programming language; you just enter the data in a form field, call the API with that data, and the result is presented to you. They also have examples in different programming languages, as well as documentation for their libraries for Python and for JavaScript. And of course there's a complete reference documentation, which can be found here. For the chat, or rather for the completions, you need to look here: Docs, API reference, completions, and then, for example, Create, and there you have the documentation for creating a completion. You can change the programming language here; there's Node.js and Python, and you can even use curl. This is a REST API, so you can also use something like curl or maybe Postman. And here we have the description of all the API's functionality.

To use the API and the playground, you need to create an account by clicking Sign up. That's here; you can enter your email address, continue, enter a password, or you can choose to sign up with Google, Microsoft, or Apple. Once you've done this, you get, I think, $5 in credits, if I remember correctly, to be able to test everything, including the API. The $5 is credit for so-called tokens. Roughly speaking, tokens are words, but they can actually also be parts of words, so you almost always have more tokens than words. I think it's more or less one third more than the word count, so keep that in mind; it's always a little bit more. Each token counts in the API calls: the tokens that are returned by the LLM count, and of course so do the tokens that you give the LLM. So if you have a very long chat history, for example, then you have more tokens, because each time you call the API, the completion API for example, you have to provide all the data, all the history of the chat. It will therefore get more costly if you have longer chats.

You should go ahead and sign up now, because we need an API key to call the API and to follow all the examples in this course. As I said, you get the $5 or so in credits, and I think this is enough for all the examples here in this course. Of course, this depends on the number of tokens that you use, both the ones you give to the LLM and the ones the LLM returns. After you've signed up, you can create API keys; I will show you this later, and also, of course, how the API is called, but you can already check it here. They have a quick start, and you can also use the playground after you've signed up and logged in; then you can try all the examples there and play a little bit with it. But anyway, we will be using the API very soon in the first section, in our first application, which will be a ChatGPT clone, something like that: a simple chat application. In it, you will be able to chat with the large language model, with the AI, and this will be our first project. Okay, so this is it for this video. See you in the next.
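To make the two key points from this video concrete, the full chat history goes into every request, and billing counts tokens in both directions. Here is a small sketch: the message shape (role/content) is the one the chat completions API expects, while the token estimate is just my own crude rule of thumb based on the "one third more tokens than words" remark, not OpenAI's real tokenizer.

```javascript
// Every request must carry the FULL chat history -- the model has no memory.
const chatHistory = [
  { role: "user", content: "Hello" },
  { role: "assistant", content: "Hi there! How can I help you today?" },
  { role: "user", content: "Tell me a joke." }, // newest message, sent along with everything above
];

// Crude cost estimate: roughly one third more tokens than words.
// (The real tokenizer is more subtle; this is only for building intuition.)
function estimateTokens(messages) {
  const words = messages
    .map((m) => m.content.split(/\s+/).length)
    .reduce((a, b) => a + b, 0);
  return Math.ceil((words * 4) / 3);
}
```

Both the history you send and the completion you get back count against your token credit, which is why long chats get more expensive over time.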
3. Tools to use in this course: Okay, in this video we will talk about the tools, the libraries, and the frameworks we will be using, and what knowledge you should already have before taking this course. Let's start with JavaScript. You should already be able to program in JavaScript, of course, beyond the level of a Hello World program or copy-pasting from web pages like Stack Overflow. You should also be familiar with most of the features of the ECMAScript 6 version, like arrow functions, const/let, destructuring, and modules, and also array functions like map and reduce.

On the server side we will be using Node.js, and on top of that we'll use Astro.js. So you should have some basic knowledge of Node.js, of course. Deep knowledge is not necessary, since Astro will abstract away many concepts, like the request handler functions. You should, of course, be familiar with the Node.js package manager npm, or with yarn or pnpm, something like that, because we will be using it to install packages and also to initialize our project.

As an editor I will be using VS Code, but you don't have to; that's in no way a requirement. If you like using Vim, Emacs, WebStorm, or even Notepad, that's no problem. You could also use a plain editor and a terminal; it's not that comfortable to program in such an environment, but if you like to, you could. In the videos I use Visual Studio Code, which is kind of a standard; most people programming JavaScript applications nowadays use Visual Studio Code, and that's why I'm using it here too.

For the front end I'm using React, but you could also be using a different library or framework like SolidJS, Vue, or Svelte. It's not too difficult to translate the code or the concepts to a different framework. You could even use plain manual DOM manipulation; that's also possible, because the UI is not that fancy and is pretty basic. The front end is not the most important thing in this course; we will be focusing, of course, on the OpenAI API and on its usage. To follow the course directly, you should have some basic knowledge of React, nothing fancy. Especially the hook functions like useState, useEffect, and useRef are important, so you should be familiar with them. But as I said, you could be using a different framework here, and then you'd need to use some state management there, and you should, of course, be familiar with that.

As the last framework, we use Astro.js. Why Astro.js and not something like Next.js? Because Next.js is, of course, tightly coupled with React, whereas Astro.js, in contrast, allows the usage of different front-end technologies. You could use Astro with SolidJS, Vue, Svelte, et cetera. Also, it is very quick to set up and to configure. And since we will be focusing on the usage of the OpenAI API, neither the server nor the front-end technology used here matters that much, but rather the principles of working with a large language model; this is what is really important to you in this course. You could, of course, also use Next.js if you want to, but as I said, Astro.js is open to different front-end technologies, so I'm using it here. Okay, these are the tools, frameworks, and libraries we're using here. In the next video we start with our first project. See you then.
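If you want a quick self-check of the ES6 features mentioned above, this little snippet uses all of them in a few lines. The guitar data is just a made-up example, nodding to the shop app we'll build later:

```javascript
// Warm-up with the ES6+ features this course assumes:
// const, arrow functions, destructuring, template literals, map and reduce.
const products = [
  { name: "guitar", price: 900 },
  { name: "bass", price: 700 },
];

// Destructuring directly in an arrow-function parameter:
const names = products.map(({ name }) => name);

// reduce collapses the array into a single value:
const total = products.reduce((sum, p) => sum + p.price, 0);

// Template literal:
const summary = `${names.join(" + ")} = $${total}`;
```

If each line here reads naturally to you, you're ready for the JavaScript in this course.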
4. Creating our first astro app: All right. Here we are going to create our first Astro.js server app, and this is the app that will, on the server, call the OpenAI API for us; it will also serve the HTML and the React code to the browser, and then we get our chat application, in theory. So this is where we start. We have our parent folder; in this case it's DAI, and I'm going to call npm create. This is going to create an app, and which app? An Astro app, so we call npm create astro and press Return. We are greeted by a nice robot, and it says: Houston, let's build the web we want. It's telling us the launch sequence is initiated, and then it asks: where should we create our new project? This will be the name of our project and the subfolder, of course. In our case I'm going to call it ai-chat, since this will be a chat application, a generic chat application like ChatGPT, much simpler, but this is what we do first. So, Enter, and then it's asking whether it should include sample files, as recommended here, but we start from an empty project, so we don't use the blog template either, and we choose Empty. Then it's asking me if it should install the dependencies, the npm node modules; npm install will be called, and of course, yes, just do it, so I don't have to do it myself. This will take a little while, because it's downloading all the packages and installing them. This can take a minute maybe, depending on your internet connection, of course. Okay, the last question here: do you plan to write TypeScript? In this case, no. You can, of course, use TypeScript, but I'm not going to use it here, because it would make things more complicated, and I'm going to concentrate on JavaScript, React, and Node.js here. So: no, Enter, and it tells me: no worries, TypeScript is supported, and you can add it later. Okay, so it wasn't the last question; here it is: initialize a new Git repository? No, thank you. And then it's telling us: good luck out there, astronaut. Okay, thank you very much.

What do we have now? We have a subfolder called ai-chat, because this is the name we gave our project. Here we have files, but first let's check the subfolders. We have the node_modules that were installed with npm install by the Astro wizard, and we have public, which only contains a favicon: the SVG favicon that's included in the project. And we have src, with pages under src. There we have our index.astro, which is kind of the index HTML that will be delivered to the browser, but it will first be compiled by Astro.js. Essentially this is HTML, and this is what will be displayed in the browser.

We can check it out and run the server now. First we have to cd into the subfolder we just created, cd ai-chat, and then we can call npm run. This will run a script that is listed in the package.json. Here we have the scripts, and we want this script called dev. This will call astro dev; astro is a shell script, and you can find it in node_modules/.bin. Here it is, astro, and this will be called with the argument dev. Okay, so let's call npm run dev, and this will hopefully start our server. And here it is: localhost:3000. We check it in the browser. Here it is: Astro, an h1 here, and this is essentially, like I said, what we have in the index.astro, in this file here. And we can, of course, change it. Let's change it to AI chat, or AI chatbot, whatever. Here you can, of course, place some HTML if you want: a div and whatever you want. We can have our React root here later, maybe. Let's check the browser: it automatically reloaded the page for us, and now we have the AI chatbot here in the header. Okay, so that's our first application with Astro.js, and we will continue in the next video.
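For reference, after that small edit, your src/pages/index.astro looks roughly like this. This is only a sketch; the exact boilerplate the wizard generates can differ slightly between Astro versions:

```astro
---
// Code between the triple-dash fences runs on the server when the page is built or served.
---
<html lang="en">
  <head>
    <title>AI chat</title>
  </head>
  <body>
    <h1>AI chatbot</h1>
  </body>
</html>
```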
5. Adding react to our app: Okay, so now we're going to add React to our Astro app, first canceling this npm run dev here so our server is no longer running, and I'm pasting in this command: npx astro add react. It's using the astro shell script to add React support to our Astro app. So just type in npx astro add react, and it should add the React support. It's resolving packages and asking us if we want to continue: yes, of course. It's installing dependencies, and this will also install some node packages. Continue, and it's going to change our astro.config.mjs, please. Okay, it's also changing our tsconfig.json, or creating it; doesn't really matter. Yes, continue, and that's it.

Okay, so now we have our React support; let's check the package.json. It added the types, so we can see here @types/react and @types/react-dom, and it also added react and react-dom, of course, both of them version 18. Now we can use React inside our app. But of course, first we need a React component, and components go inside the subfolder components. So under src we create a new folder, and we call it components. Inside this components folder we create a new file, we call it index.jsx, and this will be our main React component. Then we can include it in the index.astro, in this area here. We have the three dashes, and between the first three dashes and the last, we can write our import. We import here the React component, our index.jsx: we import App from, so it will be a default import, a default export, from ../components, and there we have our index.jsx. At the moment it's empty, so we get an error here, because it's not really a module yet, but now we can use this App component here as we would in JSX. There's something missing, though: we have to add a new attribute here, client:only, and we have to set it to react. This tells Astro that this component will be rendered on the client, only on the client and not on the server, so we don't use server-side rendering here.

Okay, the component is empty at the moment, so we have to put something in: export default function, and then return some JSX. For the moment this should suffice. Now we start our server and check the browser: localhost:3000, AI chatbot, that's what we had before, and then we have App here. When we check the HTML here, the DOM, we have this div with App in it, and this is our React component. The parent of this React component is an astro-island, and this astro-island is our React root in this case. Astro can mix and match different UI components, like Svelte, Vue, React, SolidJS, and it puts them inside these astro-islands. But this is not that important for us. What's important is that we have our React component here, and this is where we can write our JSX, and we can write it just like any other React component. Okay, so we have our first React component, and from there we can just build a React app. The next step will be to have a server-side function calling the OpenAI API, and this we will explore in the next video.
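Putting the pieces of this video together, the index.astro ends up roughly like this. Again a sketch; your wizard-generated boilerplate may look slightly different:

```astro
---
// src/pages/index.astro -- the import lives in the frontmatter between the dashes.
import App from "../components/index.jsx";
---
<html lang="en">
  <body>
    <h1>AI chatbot</h1>
    <!-- client:only="react" tells Astro to render this component in the browser only,
         with no server-side rendering -->
    <App client:only="react" />
  </body>
</html>
```

And src/components/index.jsx can start as small as `export default function App() { return <div>App</div>; }`.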
6. Adding our first server REST endpoint: Okay, so now that we have our first React component, we can continue with writing our first server code. The React application running in the browser will call the server API, which will finally contact the API of OpenAI and then get back the chat completion. So we need a server component running in Node.js, and this will be our first endpoint. I've made a subfolder under pages, where our index.astro is, and the subfolder is called api. Inside api I have this JavaScript module called chat.js, and inside we have an export of the function get. What this does is tell Astro that on this path, meaning /api/chat, there will be a GET handler. So if I put this path in the address bar of the browser, this handler should be called, because the browser makes a GET request to the server there, and our code should be executed.

We're exporting an async function because we want to use await inside; at the moment we don't, but we will be later, so I've already put the async in front of the function. What is this function doing? It gets an object, and inside there is a request property. We're not using it at the moment, but later we will. Then it returns a new Response, and this will be the response that is sent to the browser. Inside the Response, in the constructor here, we can specify the body of the response, and this will be a JSON string, meaning JSON.stringify of this simple object here, an object literal with a property answer and the string hello. This will be sent in the body to the browser. Then we have some more options: headers and status. Status will be 200, which means OK, everything is fine, and the headers set the content type to application/json, so the browser will know that this is JSON in the body. We can send more headers here, not just the content type, but any other headers we would like to use. And the status code can, of course, be something else, like an error code such as 500 if there's an error on the server, or 400 or 401, something like that. At the moment everything will work fine, so we put 200 here, which means OK.

This is returned to the browser as a response, and we can test it by just typing in the path here, /api, as I said, and then /chat, and the server knows which module and which function to call. Here we have it: answer, hello. This is the JSON that is returned. Okay, now we have our first server-side function, and this is running in Node.js. What's important here is that we put it in the right path, of course. We've made a subfolder called api; there go all our API functions, like chat, and maybe later we'll have more functions like that. And we don't have to export only a get; we can put a post here too. If we export a post, it will be called when the client sends a POST, and likewise for PUT or DELETE or whatever. Okay, so now we have our first server function, and here we can call the OpenAI API and get our chat completion. See you in the next video.
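Here is a sketch of what src/pages/api/chat.js looks like at this point. I write the handler as a plain function here; in the actual file it is exported (export async function get; note that newer Astro versions expect the uppercase name GET instead):

```javascript
// Sketch of the /api/chat GET handler described above.
// Astro passes a context object; we only destructure `request` for now (unused).
async function get({ request }) {
  return new Response(
    JSON.stringify({ answer: "hello" }), // the body: a JSON string
    {
      status: 200, // 200 = OK; could be 400/401/500 on errors
      headers: { "Content-Type": "application/json" },
    }
  );
}
```

Opening http://localhost:3000/api/chat in the browser then returns {"answer":"hello"}.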
7. Preparing calling the open AI API: Okay, in this video we're going to do the preparations for calling the OpenAI API. First, you need to sign up on the OpenAI platform. We're putting in the URL platform.openai.com, and here you can sign up, either with an email address and password, or you can continue with Google, Microsoft, or Apple. Once you've done this, you are inside your account here, and then, under User, you have API keys. Here you can create a new API key, and you can use this API key to access your account. This is what you first have to do: create a new secret key. I'm going to do this here and call it "test key". Create secret key. And this is important: you have to copy this key now and put it somewhere safe. For example, you can copy it to your clipboard and then paste it into your source code. Like here: I'm going to make a comment and then put in the key. So here you have the key, and it's saved. Why is this important? Because afterwards, once you've closed this dialog, you cannot access the key anymore. You just see the first two letters and the last four characters, but you cannot access the complete key, so you'd have to create a new one if you ever forget it or if you didn't save it. This is everything you have to do here. And I think if you sign up as a new user, you will get $5 or so in credits, and you can first use up the $5 before you have to pay. I think $5 is more than enough to test all our example apps here; if you need more, then you have to pay for it. Once you have your secret key, as I said, you need to save it somewhere; I've put it here. But I will also delete this test key, and I will delete all the keys I'm using here in the course before publishing the course, of course, because I don't want you to use my keys, since I'd have to pay for it. So you need to create your own secret keys.

Now we're going to install the OpenAI Node.js package, and this is very easy. You need to go to your main directory, which is ai-chat in my case, and then you call npm install openai, press Return, wait a little, and now it's done. We have a look at the package.json, and we see openai is added to the list of dependencies, in version 3.0. Now we can use the package.

So let's go to our chat.js. First we have to import from the openai package, like this. We have Configuration, and we need both the Configuration and the OpenAIApi: the latter will have our functions to call, and the former is the configuration we need to call those functions. The first thing we have to do is create a new configuration with our API key. I've prepared this already, and as you see, this is a different key, but I will of course also be deleting this key before I publish the course, so please create a new one; this one won't work anymore. Okay, so we create a new configuration with new Configuration, and Configuration is something we imported from the openai package. We need to supply an object with the property apiKey, and there we have the secret key in a string, and then we get the configuration back. And this configuration, we can now use it to create an instance of the OpenAI API, and we call it openai. It's just new OpenAIApi with the configuration. This is very simple. Now we have the openai instance, and we can check what is in there. We have createChatCompletion, which is a very important function here, or method, and we will be using this one a lot. There are also other functions here; you can have a look, and you can of course also look at the documentation on the web page. If we check again: there's Documentation and API reference. The documentation gives you tutorials and a general overview of the processes, and the API reference describes all the function calls with all their parameters. So if you ever have questions about the API, you can use the documentation and API reference on platform.openai.com.

Okay, back to our code. I'm not using createChatCompletion directly here in the get function. Before that, I will create a new function, and this new function will do the completion for us; it will do the calling of the OpenAI function. It has to be an async function, because I will use await inside. So: async function completeChat, that's the name of my function, and it takes newMessage, the first message we will be sending to the OpenAI API. Then we call, with await because it's async, openai.createChatCompletion, the method I mentioned already. For createChatCompletion we have to supply an object literal, and this specifies the model. I'm using the latest model here, gpt-3.5-turbo-0613; the 0613 is the latest version, which also supports function calling. You could use plain gpt-3.5-turbo, and depending on when you do this, you will be getting the old model at the moment, but maybe in the future you will also be getting this newer one. To be sure, I put the 0613 at the end, because I want this latest model. Then you have to supply messages, and this will be our chat history; the OpenAI API will respond with a completion of this chat history, of these messages, and it will generate a new message which completes the chat. The messages property is just an array of messages. Here we have one message; in this object you have to supply the role, which is user in this case, something that the user types in, and the content, which is fixed here: the content is "Hello". So we're starting the conversation with a hello, and the OpenAI API will complete this with a new message, and this new message is returned here in completionResponse. The completionResponse contains data, then choices; normally there is one choice by default. Choices is an array, and we access the first one. And in that, it is message.content: in this property content we have our actual message, the completion that the OpenAI model gives us, and this is what we return from this function.

We still need to call completeChat, of course, and this is as simple as just this here: we have the answer here, await completeChat, and we get the completion text from the model as the answer. Then we can delete the fixed text here and return the answer as a JSON string. Now we can test it. This is a valid key here; I haven't deleted it at this moment, so it should work. We go again to our dev server, and then we call the /api/chat function, the get function. Yes, we get a response, and this time it is: Hello, how can I assist you today? This was generated by the large language model of OpenAI; this is what was returned. I load it again; maybe it gives me a different answer. I do it again. Yes, now it's actually a different answer: Hi there. How can I help you today? It's almost the same, very similar, but it's different. And again: hello, how can I assist you today? We get a real response from the model here. Now we can build on this. We can add more messages here, so the user can answer this generated message, and then the large language model will again reply to that, and back and forth. And this will be our chat, but more in the next video.
8. Providing the user message via URL param: In the previous video, we used a fixed message, hello, and just sent this one to the OpenAI API, so this is a constant "hello". First, we can replace it with newMessage: put this in here, and let's check. We are providing nothing here at the moment, so we need to pass a user-provided message to completeChat. And where do we get it from? We get it from the URL parameter, the part after the question mark. How can we access this in Astro.js? First, we need to destructure url here; Astro gives us the url. When we have the url, we can just access the search params, like this: url.searchParams, and there we get a specific param, which is called msg, for message, and we get it with the .get() function. After this, we can console.log the msg, and let's try it out.

We get an error. Okay, that's not good. Let's check what we have here: request failed with status code 400. I think this is because the content, the new message, is null here. So we need to provide a message, and we can just call completeChat with msg. Let's do this. And again, an error. Okay, what is it now? Still we have the 400; 400 means bad request. So we're doing a bad request here, in chat, in completeChat. So this is still null or undefined, and that is the problem. Let's put "hello" back here and check whether that works. Okay, this works. Let's check the output here: null, null, null, null. So msg is null, and why is that? This is because our astro.config.mjs is not complete. We need to put something in here, and this is output: "server", which is necessary because we need to enable server-side rendering. Otherwise Astro.js renders the pages statically, and in that case we don't have access to the URL and the URL parameters here. So again: in astro.config.mjs you need to set output to "server" to enable server-side rendering, and then we can dynamically access the URL and the search params here, and hopefully get our message.

Okay, let's put the msg back here, like this, and try it again. Maybe not hello, but hello world would be better here. Okay: How can I assist you today? That's the answer. Let's check the server output: hello world. So this seems to work, and we can enter our message in the URL, in the msg parameter. Let's try something else. Maybe: how is the weather today? It won't know, but anyway, let's check what the answer is. I'm sorry, but as an AI language model, I don't have real-time information. Okay, it's honest. To find out the weather today, you can check a reliable weather website or a weather forecasting app on your phone. I guess that's true, so that's a good answer. And that's our first chat with a user-provided chat message. It's not fixed anymore: we pass the newMessage parameter to completeChat, and this message is fetched from the URL, from the search params, where we provide it in the msg parameter after the question mark: msg equals how is the weather today. We will call this URL, this REST API, from the code, from the React component. For this to work, we of course have to build out the React component, and this we will do in the next video.
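Putting this video's pieces together, the get handler now reads the message from the query string. This sketch echoes the message back instead of calling completeChat, so it stays self-contained; in the real file the echo line is replaced by `const answer = await completeChat(msg);`, and remember that astro.config.mjs must contain `output: "server"` for the url to be available:

```javascript
// Sketch: read ?msg=... from the URL in an Astro API route.
// Astro's route context provides a parsed `url` (a standard URL object).
async function get({ url }) {
  const msg = url.searchParams.get("msg"); // null if the parameter is missing
  if (!msg) {
    return new Response(JSON.stringify({ error: "msg parameter required" }), {
      status: 400, // the "bad request" we ran into above
      headers: { "Content-Type": "application/json" },
    });
  }
  const answer = `echo: ${msg}`; // stand-in for: await completeChat(msg)
  return new Response(JSON.stringify({ answer }), {
    status: 200,
    headers: { "Content-Type": "application/json" },
  });
}
```

The explicit 400 branch is my addition: it turns the silent null we debugged in the video into a clear error message for the client.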
9. First real client UI in React: Let's get back to our React component. I've extended it a little bit to include an input text field, this is this one; a Send button, this is this one; and an output div, which will be here, this one. And the idea is, of course, that the user is inputting text here and then clicks on the Send button, and then this text that was entered here is sent to the server, and the server in turn will send it to the OpenAI API, get the response from the language model, and send it back to the React component here. And then we set it here in this state with setAnswer. We call useState with answer and setAnswer here, and this answer will then be displayed here in the div. And what is missing here is, of course, our send handler. So we need to define an
onClick function here, which will be called when the user clicks the Send button. And then we need to, what do we need to do here? We need to call fetch to call the server API, the REST API, our REST API. So this will be an asynchronous process, and we need to use, or at least I want to use, an async function here. So I put async before the function declaration, and here I can use await fetch. And we need to provide the URL here, which will be /api/chat, then the URL parameter after the question mark: MSG equals, let's put just hello here for debugging purposes, for testing purposes. Okay, so what do
we get from fetch? We get a response. And then we need to get the body of the response, and this will be the answer. The body will be JSON, so I call response.json(), and this will also be an asynchronous process, and this is why I use await here in front of it. Okay, so this is how we get the answer, hopefully: we fetch /api/chat?msg=hello, and on the response that we get, we call response.json(). Yeah, I just console log it here. Just to test it out.
Okay. So here we have it, and I can just click
on the Send button, and hopefully we
get something here. And yeah, it works. Answer is Hello. How can I assist you
today? Very nice. And then we can proceed to set the answer, and answer.answer will be our text, and this is what we need to set the state to. So it's answer.answer, and this we use here for setAnswer, and then this React component will be rendered again, and the answer should be displayed over here in our output div. Okay. So let's check this out. Hello. How can I assist you today? Very good. It's working. It's working. Okay, so what is
missing is we need to get the input from
the input field here. And this is why I put a ref on this input. So I call useRef, messageInput, and I get access to the DOM node, to the DOM element, here via messageInput.current. So I just declare a constant input, and this will be messageInput.current.value. This will be the text that was input by the user. We'll use this input instead of this fixed hello. So just MSG equals, and then plus input. Okay. So let's test it.
How is the weather today? And press on Send. I'm sorry, but as an AI, I do not have real-time information. We already know this,
but it's working. Yeah. So here you can
input your chat message, press Send, and then you get
the answer here in this div. So let's check it again. We have our state here with useState. This is the answer from the language model, and we have our ref here to messageInput. This is used like this: messageInput.current is the DOM element, the DOM input element, and there we have the value of the input text. And this is used here to fetch our API. And then the response is JSON. This is why we call response.json(), which will get the body and then parse it. And we have answer, which in turn contains answer. Maybe we can rename it to answerJSON or answerObject, something like that. answerObject, and then answerObject.answer will be the text. And setAnswer is used to set our state to answerObject.answer, to the text. Okay, this is very simple. It's not even a chat. It's just a completion.
It's one-time. So every time you send something, it will just complete this. The language model will complete this, but you cannot have a chat, a complete chat, because we are just sending the last input here. And then the language model just has this input and will complete it. So it's just a one-time message going to the language model, and then this is completed, and the answer, the completion, is output here in the div. So this is not a complete chat application right now, but this will be extended, of course. But this is our first working model with a UI, and you can put anything here, and the language model will complete it and will give you an answer that hopefully makes sense in one way or the other. Okay, this is it for this video. See you in the next.
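The click handler described in this video can be sketched like this in plain JavaScript. buildChatUrl is a helper name of my own (not from the course), and I've added encodeURIComponent, which the video skips, so that spaces and special characters in the message survive the URL:

```javascript
// Sketch of the client-side send logic, factored into a testable helper.
// The /api/chat path and the msg parameter follow the video.
function buildChatUrl(msg) {
  return '/api/chat?msg=' + encodeURIComponent(msg);
}

// The click handler awaits fetch, then awaits response.json() -- both
// are asynchronous, which is why the function itself is async.
async function sendMessage(msg) {
  const response = await fetch(buildChatUrl(msg));
  const answerObject = await response.json(); // e.g. { answer: "..." }
  return answerObject.answer;
}
```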
10. Saving chat history: So now we have created a kind of ask-me-anything app, but it's not a real chat app yet. We can send the language
model a message, and it will complete
the message, and it will answer the question. But there's no memory involved, and it doesn't remember
any chat history. And so the next message you send to the model will be
the first message again, and neither the language model nor the OpenAI API can remember anything. So we have to take care of this, and we have to send
the complete chat history to have a conversation
with the language model. And so this is what
we do in this video. We have to save the
conversation and send the complete history of the conversation to
the open AI API. So there are two
possibilities to do this. We can save the chat history on the client in
the browser, or we can store it on the server. Storing it on the server has several advantages, one being that we
don't have to send the whole history
from the client, from the browser to the
server again and again, but only the last message
the user entered. Also, we can store the
chat in a database on the server and access
it from anywhere. So in this lesson, we use Server side
approach and store the message history on the
server in a simple variable. And this is the variable. It's called messages, and it is initialized to
an empty array, and we will add messages to this array whenever user sends a new message and also when the language model gives
us the completion. We have to push a new message
whenever this happens. So the first place where we have to add the message is at this location here. We can just push a new entry to the messages array, and this is the same format, the same structure, as we send to the OpenAI API. So we have to put it like this: an object with the role and the content. So I'm pushing role user, because the user sends the message, and the content will be MSG. And here we can just give the messages array to the completeChat function. This will be the array messages, and we don't have to do it here. We can just provide messages here, in this short form. So completeChat receives messages, and we put it in the request to the OpenAI createChatCompletion function. So nothing more here,
and we test it. Now, if it still works, it should still work because we change basically we've
changed nothing. So it should still work. How are you? Let's see. As an AI, I don't have feelings. Okay, it seems to work still, and it's not surprising because we haven't
changed anything. But now we have the message in the messages array, and the next time the user enters a message, we can push it here, and then every user message is saved in the messages array. But that's not enough. We also need to add the answer to the messages array. So, copying this, we receive the answer from the completeChat function. And this time the role is not user, but assistant; the content will be answer, and that should be it. The answer will be sent to the browser here, but we also save the answer in the messages array. So the next time completeChat is called, the user message and the answer will be in the array, and also the new user message will be there. Okay, so let's try it out. Just reload the page. And then how is
the weather today? Send. It apologizes. Okay. Can you recommend an app for it? And send. And let's see what it answers. Certainly, here are a few
popular weather apps. So this is the proof that we really have saved the conversation, because I'm referring here to a user message that was sent before. I'm referring to it with "an app for it", and it should be a weather app or a weather forecasting app. Or we can also, of
course, output it here. So each time we push
the answer here, we can also console
log messages. Okay, so this time, I should really restart the
server because otherwise, I will have the chat
history from before. And this is not
what I want here. So I'm restarting the server, and messages will be
an empty array now. Okay, again: how is the weather today? I send, and then almost the same answer as before. Can you recommend... or this time, we can ask if it can recommend a web page or website for it. Okay, send. Certainly, weather.com. Okay. So let's check the output
here, starting from here. So: role user, content "How's the weather today". This is what I've entered in the UI. The assistant answered, "I apologize, but as an AI language model", blah, blah, blah. And now: "can you recommend a website for it?" Again, role user: "How's the weather today?" Assistant: "I apologize." Now the new message here: "can you recommend a website for it?" And then the answer of the language model: "certainly, here are a few popular and reliable weather websites", blah, blah, blah. So yeah, it seems to work. We have our chat history. The only problem here is we need
to restart our server to make it forget the messages and to restart the conversation. And we have to think of something to reset the messages there to an empty array, maybe an ID that is regenerated each time the page is reloaded, something like that. Um, but for the moment, this is, let's say, not complete, not a complete chat application, but it is a chat application. Right now, I can chat with the language model, and the history will be saved and used the next time I send a message to the model. This is it for this video. See you in the next one.
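The server-side history from this video can be sketched as plain JavaScript. addUserMessage and addAssistantMessage are hypothetical helper names of my own; the { role, content } shape is the one the video sends to the OpenAI chat API:

```javascript
// Server-side chat memory as described above: a module-level array that
// accumulates { role, content } objects in the shape the OpenAI API expects.
const messages = [];

// Push the user's message before calling the API...
function addUserMessage(msg) {
  messages.push({ role: 'user', content: msg });
}

// ...and push the model's answer afterwards, so the next request
// sends the full conversation as context.
function addAssistantMessage(answer) {
  messages.push({ role: 'assistant', content: answer });
}
```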
11. Resetting the chat: So we've got a working
chat application now, but we also have a
little problem here, but this is easy to solve. The chat cannot be
reset at the moment without resetting or
restarting the server. So we need to solve
this little problem. I've come up with this,
saving the current chat ID, and the ID will be sent from
the client to the server. This is here in the URL search params. There will be a new param, which will be ID. And the new chat ID is compared
to the current chat ID, and if these are unequal, then the messages array
is reset to an empty one. So what is left to be
done here is just to generate a new ID and supply
it here via a URL parameter, which must be called ID. And so we need to head over
to our component here, our React component, and our fetch. So we already have this MSG as a parameter, and then we have to add this new one. But I'd better use a template string here; inside these backticks I can insert the variables more easily. So MSG, and then a dollar sign and, of course, curly braces, and there goes our input. And then an ampersand and ID equals. And yeah, we have to come up with an ID, with a new ID, and this ID must be saved. We can just put it here: const id equals... For example, we can use Date.now(), and this will be our ID: Date.now().toString(), and this will be our ID. And each time this is reloaded, the page is reloaded, we create a new ID, which will be Date.now(), in Unix time. Okay, so this should be it, supplying the ID here. And let's test it. So I'm asking the
language model: how are you? Send. And let's see. How are you. So we already have the new ID here, because the messages array just contains these two messages: How are you? And the answer
of the language model. As an AI, I don't have feelings. I can ask it: why? Why. And let's see. The question of why is a very broad and open-ended one. Okay... can we ask it to seek an explanation? Whatever. It's not working here, because the messages array is reset again. So something is wrong here. New chat ID. Let's console log the chat ID here: new chat ID, chat ID. And let's see. So again, how are you? And the chat ID will be this one, okay? So next time: why? Send. No, it's the same. Okay. So it's the same ID. So on the server, let's check the logic here. We get the same ID. Okay, I see. We haven't saved it. So currentChatId should be set, of course, to newChatId in this case. Okay, so let's try it again. Let's reload. And now: how are you? Okay, how are you?
And the answer. And now I'm asking: why? Let's see if it really works. Okay, so this looks better. How are you? Thank you for asking. Why? As an AI language model, I'm designed to provide information, answer queries and assist. So if we reload the page, it should reset the chat. Also: how are you? Now we have a new
messages array with only the user message and the answer of
the language model. Okay, so that seems to work. And each time you want to
reset the chat history, you just have to reload the
page here, and that's it. Okay, so we have kind of a
complete chat application. Not quite. We need to reset, for example, the
input field here. Each time I press Send, it should clear this, and also we only show the last answer and not a
complete chat history here. This is also what
has to be improved, of course, but we'll do
this in the next video.
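The reset logic from this video can be sketched as follows. handleRequest is a hypothetical stand-in for the server route, but the ID comparison and the Date.now() client ID follow the video:

```javascript
// Sketch of the chat-reset logic: the client generates an ID once per
// page load (Date.now() as in the video); the server compares it to the
// ID of the current conversation and starts a fresh history on change.
let currentChatId = null;
let messages = [];

function handleRequest(newChatId, msg) {
  if (newChatId !== currentChatId) {
    messages = [];              // new page load -> forget the old chat
    currentChatId = newChatId;  // the step that was missing at first
  }
  messages.push({ role: 'user', content: msg });
  return messages.length;
}
```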
12. Improving UX: So we're going to implement
some improvements in the UX. For example, when we type
our input here, our message, and click on Send, we want this message here or the input
field to be cleared. And also, while the
request is pending, we want this to be disabled, the input field as well as the button here. And what else? We want to show the complete
history of the chat here. So let's do the first one, the disabled while pending, I've introduced a
new state pending, and this is true while the
fetch is pending, of course. I set it to true
before the fetch, and at the end of this click handler, I set it to false. Then on the input field, I set disabled equals pending
and on the button as well. So the button and the input field are disabled while the request is pending. So let's try it out. How are you? Sent, and we see both are disabled. And also, this is cleared
here. How did I do this? Very easy. We have this messageInput anyway; .current is the input field, the DOM element, and .value is set to an empty string, and then the fetch is done. And this is it. Simple but effective.
chat conversation, which will be below here, this is the last answer
from the language model, and we want the
chat history above, and this should be the last. And maybe we switch the order so that we have our message
input at the bottom. Above that, we have
the chat history, like it is in hat GBT, for example. So let's do this. First, we switch
this output panel, and put it above the input
field and the button. Let's try it out how are you and the answer
will be above. Okay. That's nice. What is not that
nice is that it's like going down the
more messages we have. Now at the moment,
we have just one. But if we have the
whole history, this will be going down, and this is not nice for the UI. We just do a flexbox layout here. So display: flex, and flex-direction must be column, to have a vertical layout here. Of course, "flex" as a string. Okay. And then in between the output panel and the input, we just put a div with flex: 1. Okay, let's check this out. It's not working because the whole HTML is not 100%, so we have to do height 100% here. So this is 100%; body must be 100% too. Okay, so body is also 100% now, and this div... we should erase that, delete that. Then we have our div at 100%. Let's do it later. And now here we have our
component, our React component, which also should be height 100%. Okay, so now it works more or less. Anyway, we should erase or delete our h1 here. So we have this here, still not working. But anyway, let's do it. So here we have style; I'm just doing inline style here to be faster, so that should be height 100%. Copy; this should be the same on body. Erasing this; it should be the same here. And then the React component should be height 100%. Now, it's almost done, but it's too big. Okay, so body, of course, looks like it has a margin; it should be margin zero, and then we'll have it. Okay. So back to
index.astro, and putting a margin of zero there should fix it. Okay. So now we need a little padding here, I guess. And we're going to this outer div of our main component, and we're setting a padding of, let's say, eight pixels. And we also have to put box-sizing: border-box, I guess, because otherwise this will be too big. Let's check it out. Okay,
so this looks good. How are you? Now we get
the answer, I hope. Oh, come on. Okay, so we're not getting an answer here from the OpenAI API. It's kind of stuck here. I guess this can happen. Okay, so now we even have an error 503. I guess this means the service is not available. It can happen, of course, and we just try it again, reload it. How are you? Okay, so this is better: as an AI, I don't have feelings, and so on. Okay, what is left to do? We need to print out the
whole history, of course. This is just the last answer. So let's introduce a new state. We call it messages, and this will be an
array. Messages. The initial value will
be an empty array. And the messages array will be displayed in the output
panel over here. So here we have messages.map. And here we have each message. And what can we do? Each message will be a div, and then inside the div we have the message as text. Oh, let's do it before the setPending. We can do a setMessages. And inside, we have the old messages and the new one, and the new one is input. And then after we got the response, we also need to call setMessages here. And this time, we have the messages and then not input, but the new answer here, which is answerObject.answer. Let's try this. We get a warning here, but we'll deal with this later. "As an AI, how can I help you today?" Okay. And then our standard question: why? And okay, we see it's not quite working, because the why disappeared. "I don't have feelings", but the answer is there. Okay, so it kind of worked, but not completely. The problem is, of course, that we cannot use messages here. So what we can do is define newMessages, like this, then we set newMessages here. And then over here, newMessages and answerObject.answer. And this should work. Okay, so let's try it out. How are you? Send. We get this warning again. But anyway: why? Send. Why. And now, it seems to work. We have a chat history here, and at the bottom is the latest answer of the model. It doesn't look good. It's kind of ugly
because we don't have anything to
separate the messages. But other than that, it
seems to work quite well, making a little bit
prettier in the next video.
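The stale-state fix at the end of this video can be sketched without React. appendUserAndAnswer is a hypothetical helper of my own; it shows why the freshly built newMessages array must be reused instead of reading the messages state twice:

```javascript
// Compute the next messages array explicitly. Reading `messages` again
// after the await would use the stale value from before the first
// setMessages call, which is exactly the bug where "why" disappeared.
function appendUserAndAnswer(messages, input, answer) {
  const newMessages = [...messages, input]; // what setMessages got first
  return [...newMessages, answer];          // reuse newMessages, not `messages`
}
```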
13. Prettier chat history: Okay, so in this video, we'll make the chat history
a little bit prettier and I've already implemented
some improvements here, so let's check them out. How are you? Our
standard question. Send, and you see this is yellow, and the answer of the language model is green, background color green. So we can change the
colors later of course. But how did I do this? Here we have the rendering
of the messages, and we have the message here in the map callback as a
parameter, of course. And then the second
parameter is the index, and we just check if index mod two equals zero, meaning the index is divisible by two. Then the background color is yellow, and if it's not divisible by two, if it's odd, then it is green. So even indexes get the yellow background color, and odd indexes get the green background color. And this is the result here. Still not very pretty, but we can distinguish user-generated messages and language
model generated answers. So this is much better. Can you write JS code? And let's check the answer. Yes, I can help you
with JavaScript code. That's nice. Good to know. So if we need help, we can
ask the language model. Okay, so this is better. Still not pretty, we
need to implement a gap between the messages to make
it a little bit prettier. Okay, that's easy. We style the outer div here so that it has display: flex. And flex-direction will be column, and we add a gap of, let's say, eight pixels here. This has to be a colon, of course. Now it's correct. Okay, so let's check it. How are you? Send. We already see a gap here. Can you code? A little bit shorter now. Can you code? Yes, I have programming capabilities. Okay. So that looks better. And we can also change the border radius, maybe, to make round corners. Um, so let's try four pixels. How are you? This time, one question is enough, I guess. Yeah, it's a little bit better. But then we also need padding here. So border-radius: four pixels, and padding, like also four pixels, maybe. How are you? Yeah, looks much better. The colors are still ugly; we can change them later. But this is much better. You can clearly
distinguish between user messages and language
model messages. That's good. And one more thing I
want to improve is that we can just type Enter
here to send the message, not only by clicking
the button here, but also by typing Enter. So how are you? And then Enter would send? We have to go to the input
and then on key down, and then a handler here which will do the
same as this onclick. So we need to extract the on click to a separate
function here. So send message will be
the name of the function. Send message. This function should be
asynchronous, a sync Okay. And then we can replace
this with send message. That's okay. And key down
will also call send message. But before calling send message, we need to check if the
key is is the enter key. Okay, so if event
key equals Enter, I guess, let's check it. This is true. We
called send message. We could use a weight here, but since there's
no code after that, it's of no use. So same here. We could also if we want
to do something else before we can do this here. Make a separate
handler function. That's just calling
send message. Okay, so let's check
if this is working. How are you Enter? Okay, it's not working.
We can debug here. So here we have our components, index.jsx, and where do we have the function sendMessage? Over here? Okay. But I think we have to set the breakpoint here and then press Enter. Enter. Okay, so "Enter" with a capital E. Okay, that should do the trick. Okay, again, continue and reload. How are you? Enter. Seems to work. We have to take care
of this warning also: each child in a list should have a unique key prop. This is a React warning. Yeah, it already says what we have to do: messages.map. So we have the index here, and we can take the index for the key. So key equals index, and that should get rid of the warning here. How are you? Enter. Yeah, very nice. The warning is gone, and we have our Enter key handler. Okay. So this is it for this video. I'll change the colors, I guess, and in the next video you'll have nicer colors here. So see you in the next video.
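The two small UI touches from this video, factored into hypothetical helper names of my own so the logic is visible on its own: index parity decides the background color, and the key check decides whether Enter should send. The colors are the placeholders used in the video:

```javascript
// Even indexes are user messages (yellow), odd indexes are assistant
// answers (green), because messages alternate strictly in the array.
function backgroundFor(index) {
  return index % 2 === 0 ? 'yellow' : 'green';
}

// The onKeyDown handler only sends when the pressed key is Enter
// (capital E -- 'enter' would never match event.key).
function shouldSend(event) {
  return event.key === 'Enter';
}
```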
14. Summary of our first chat app: We've just completed our
first chat application, and we can chat with the OpenAI language model via the OpenAI API, and I've improved the UI a little bit here. I've centered the input field and the button here, and also this is centered here. The messages are centered now, and the colors are a
little bit less ugly, I would say, but we have a
functional chat application. You can have nice and
informative, I hope, conversations with the Open
AI language model here, yeah, but you can also improve, of course, on many points. You can improve the
design, of course. As you can see here, there are still some things
to improve, for example. Now I have to scroll here to the bottom to enter the
text, the next message. This is not very nice. So this should stay
at the bottom, and you should only
scroll the messages. This is one example, or you
can also improve the UI by putting in buttons to copy the messages
to the clipboard. You can also do it
like this, of course. Buttons would be nicer. You can have a list of past chats, of old chats, like ChatGPT has on the side, and of course, for that you have to store the chats on the server in a database so you can make them available to the user. And you can, of course, also have the user log in and have their own chat history, and have many people simultaneously access your chat server, your chat application, and so on and so on. There are many
things to improve. Next application won't be a
general chat application, but it will be something
more specific, and we will use more functionality of the OpenAI API. You will learn how to use the system prompt, you will learn how to do function calling with the language model, and more. But this is our
first application, and I think it's quite usable, feel free to improve whatever you like here. See you
in the next video.
15. New App: Chat with NPCs in a text adventure game: Okay, so I hope you're ready for our new app, and the new app will contain a chat, of course, but this will be inside an adventure or RPG game. And we use the chat for chatting with the
NPCs in the game. So you have an adventure
game or an RPG. In this case, it's an adventure game where you have to fulfill certain quests and solve
some very easy puzzles, and we will use our former chat application or general chat application as a starting point and we modify the react UI to account
for the game scenario. First, we will have to set up a very basic adventure game setting where we can visit
different locations, perform simple
actions, and chat with NPCs with characters in the game to solve
puzzles and quests. So in this app or in this game, the challenge will be to set up the AI language model to
behave as a certain character, such as a bartender in a bar or a scientist or a
warrior or whatever. So our first step will be to create a basic
game environment, which will be very
simple for our use case, just to incorporate the
chat inside and to have the AI characters reveal certain information and reveal if a puzzle is solved or not, and then the game can continue in the one or
the other direction. So I've already done
some bootstrapping here. I've split our UI
into two components, the index and the
chat component. The chat component is
basically our old chat UI, which contains the
message input, send button, and
the chat history, and also the functionality, of course, the fetching of
the AI answer, and so on. And the index.jsx is now the rest: the game mechanics and some buttons, and a div containing the description of the current location. And here we have the
command buttons. So at the moment, it looks
like this, very, very simple. Here we have our chat component, which is almost completely
our old chat application, but now as a component. And here we have our
buttons for commands, and this is the description
of the current location. So here we have our chat.jsx, which contains the whole chat UI, and here we have our description
of the current location. Here, the command buttons, navigation to the
different locations. You can go north,
west, east, south. You can talk to somebody. You can take something or you can use an item
you have already. And of course, we have to extend this with a list of NPCs you can talk to or items you can take or items in your ventry
that you can use here. But at the moment,
the only thing that is kind of working
is the navigation. So if we reload this, we can go east. That's the only thing that's working at the moment. So only this button here is enabled, the others are disabled. So I can go east, and then the description changes over here. This is done, of course, by state: currentLocationId, the initial value is one, and it is updated whenever the Go East button is clicked. This is over here, setCurrentLocationId, and here we have our currentLocation.exitEast, which contains the ID of the next location. And we have the static game
data in a different file. This is this file, and it exports this gameData object here. And inside we have locations, three locations at the moment. And these are called first room, second room and third room. And we have the IDs, starting with ID one, which is the start room, and then we can go east: exitEast is two. This is the ID of that room. This is the second
room, and here you can also go east to the third room, and this is it at the moment. To render the description of
the current location here, we need to find the
current location inside the gameData locations. We find it with currentLocationId, and so we have the
current location. And here we print out the description of
the current location. So that's it for the moment. We just have this
simple navigation. We have our static game data, our description of the
game of the layout. Yeah, and nothing
more at the moment. We don't have a
story, so we have to come up with a good story and, of course, with good NPCs that we can talk to, and some kind of puzzle, some kind of quest, that
the player needs to solve. Okay, so that's it
for this video, and we'll continue
in the next one.
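The static game data and navigation described above can be sketched in plain JavaScript. The field names (id, exitEast) and the three-room layout follow the video; the descriptions and the findLocation/goEast helpers are my own guesses at the same shape:

```javascript
// Minimal sketch of the static game data: three rooms chained east-to-east.
const gameData = {
  locations: [
    { id: 1, description: 'First room', exitEast: 2 },
    { id: 2, description: 'Second room', exitEast: 3 },
    { id: 3, description: 'Third room' }, // no exits yet
  ],
};

// Look up a location by its ID, as done when rendering the description.
function findLocation(id) {
  return gameData.locations.find((loc) => loc.id === id);
}

// Going east: the current room's exitEast becomes the new current
// location ID (in the component this is setCurrentLocationId).
function goEast(currentLocationId) {
  const loc = findLocation(currentLocationId);
  return loc.exitEast ?? currentLocationId; // stay put if there is no exit
}
```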
16. Introducing the story: Okay, so I've come up with a short story and puzzle for our adventure game. It is very short, because this is not about programming an adventure game, but about incorporating AI into a game, using functionality like function calls and system prompts and so on, and letting the AI, the language model, play a role in a game, in a role-playing game or in an adventure game. So the adventure game is
set in the distant future. The year is 2130, and we are space travelers landing on a planet called Siperd. So as we arrive, we encounter a place with a bar, a shop, and a security robot that won't allow us to
pass without a pass code. And to get past the robot, this is the puzzle
that we need to solve, the quest that we need to solve. We need to figure out a way to trick the robot into
letting us pass. And in the bar, the bartender tells us who the guests are that he knows in the bar, and among the guests is an AI specialist who also knows a lot about robots, and he might help us if we buy him a refill for his space ale. So we buy him the new space ale, and he tells us a special command that we can give the robot, and the robot
will let us pass then. And this is the end of the story or the end of
the quest and the puzzle. It's quite short, but we
need to talk to two NPCs, first, the bartender, and the second one will
be the AI specialist. And we might also
talk to the robot, maybe, but maybe not. Maybe this will be just
normal conversation that is prescripted. So let's see how far we get, and what do we need to do? We need to expand a little bit on our
descriptions of the rooms. We have three rooms. First one is the
starting location, and that will be the place
where we see a bar, a shop, and the robot. And the robot won't let us go east; we can go north into the bar, and we cannot go west or south. And we can talk to the robot, of course, and inside the bar, we can talk to the bartender at first. This is our plot, and we need to put in some
descriptions here. And yeah, this will be the first location
where we can see all the different
other locations and the robot, and where we can go into the bar. And we can try to go to the east, but the robot won't let us pass at first. So the only thing we can do in the first location is go into the bar, and that will be exitNorth. And yeah, this will be the third room, and its description will be a bar. I'll come up with something else, but first: you see a bar. So for the starting room, for the starting location: you see a bar, a shop, and a security robot guarding the exit to the east. The necessary
information we need. And the second one
will be, I don't know, maybe you are in front of a space space port,
something like this. It doesn't matter.
This is the room or location where we get to
when we get past the robot. And the third is the bar. We can go north to the bar. So let's check if we can go north. We
can go north, really? We should see a bar, but this
is not working; it still says 'a bar, a shop and a security robot'. So it should say a bar
here if we go north. The button is enabled, but we don't have any onClick
handler here, I guess. So, Go North. Yes, we need the onClick
handler here. Okay: set current location ID to exit north, of course. And this should
work. Yeah, a bar. Okay, so now that
we have the story, I'll improve the descriptions
here a little bit. And then in the next video, we try to implement the
chat to the bartender, so we need to first
go into the bar, and then we need to
talk to the bartender, and this will be done
in the next video.
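The room setup described in this lesson can be sketched as plain data plus a navigation helper. Property names like `exits` and the helper `go` are my own illustrations, not necessarily the course's exact identifiers; in the React component the current location id would live in `useState`.

```javascript
// Illustrative sketch of the game data: each location has a description
// and named exits pointing at other location ids.
const gameData = {
  start: {
    description:
      "You see a bar, a shop, and a security robot guarding the exit to the east.",
    exits: { north: "bar" },
  },
  bar: {
    description: "A bar.",
    exits: { south: "start" },
  },
  spaceport: {
    description: "You are in front of a spaceport.",
    exits: {},
  },
};

// In the React component this id lives in useState; here it is a variable.
let currentLocationId = "start";

// What the Go North button's onClick handler boils down to:
function go(direction) {
  const nextId = gameData[currentLocationId].exits[direction];
  if (nextId) currentLocationId = nextId; // setCurrentLocationId(nextId)
  return gameData[currentLocationId].description;
}
```

With this shape, wiring a button per direction is just `onClick={() => go("north")}`, and an exit that doesn't exist simply leaves the player where they are.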
17. Preparing the quests: So I've done a little
bit of preparation here for the quests, and I've implemented the
guarding of the East exit. So if you press on go East now, you will get a message
that this is not possible because the robot asks for a pass code and
you don't have one. So here you have the message. And this is simply a div where we can output game
messages like this one. I've also improved,
as you can see here, the descriptions
of the locations. Okay, so let's check how
I did this with Go East and the check before the
character can pass to the East. So we have the game data and the new descriptions here a
little bit more elaborated. And also, there's another room: this is the shop. I'm not
quite sure if the character, the player will have
to do something here, but we could maybe
make a quest that he needs to buy something
here in order to proceed. But let's decide that
later at the moment, you can go to the West
and then enter the shop, and you will get the
description, and that's all. But you cannot exit
because there's no exit northwest,
whatever here. So you're stuck there. But we'll deal with that later. First, let's check
how I did this check if the player can go
to the east or not. This is because I have
this function here, onBeforeExitEast, and this function gets
game runtime data and the runtime data
consists of quests. This is an array of objects. Each object has an ID, which is more or
less the name of the quest, and a completed bool flag indicating whether the quest, the task, is completed or not. And whenever a task is completed,
this will be turned to true, of course,
and here in the on click handler of Exit East
button, we'll have this check. So I check if the
current location has an on before exit
East function defined, and if it has, I'm calling
it with game runtime data. This will contain the quests and the completion state
of the quests. Then I check the return
value of this function here, and if the return value
beforeCheck.ok, is true, then the quest is
really completed, and we can go to
the next location, to the exit east location, here. And if not, we're
showing this notOkText. So we can go back
here to the on before exit East function and
check it again here. I'm trying to find the quest with learn how
to get past the robot, and this is the quest object, and ok will be quest
dot completed. And the notOkText is 'The robot
asks for a pass code before letting you pass.' This is what is displayed now. So we can have, for
each direction, we can have an on before exit function that
checks if we really can go this direction or if a quest needs to be
fulfilled before that. So you can press on the button, but it won't let you go, but instead display this text, and this text is just below the command buttons here, it's just a div, and the game message is,
of course, a state. And here we have useState with an empty string, and
then gameMessage and setGameMessage. And whenever we get this not-ok result, we set gameMessage
to the notOkText. And this is the return value of this
onBeforeExitEast function. Okay, so this is how I did it. And what is missing, of course, is that we can go to the bar. We can already go to
the bar, of course. And we also get a nicer
description here. You walk into the
bar and immediately see the counter and the
bartender behind it. The bar is not very crowded, and there are only three
other guests here. Here we need to implement
talk to, of course, talk to the bartender, but we could also
talk to other guests. So we need kind of a
drop down and select, maybe select options to
choose the chat partner here. First, we need to talk, of course, to the bartender. So maybe we can just skip
this for now and just do an on click handler here
that immediately starts the chat here
with the bartender, and then we can
use the lower part of our UI to chat with
the bartender. And we do that, and later, if we have to talk to another
person, another guest, we can implement
a drop down here to determine which
person we want to talk. But let's do this
in the next video.
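The exit check from this lesson can be sketched as follows. Property names such as `notOkText` and the quest id are my approximations of the ones shown on screen, not guaranteed to match the course's code exactly.

```javascript
// Runtime data: a list of quests with completion flags.
const gameRuntimeData = {
  quests: [{ id: "learnHowToGetPastRobot", completed: false }],
};

// The starting location defines an onBeforeExitEast check that is called
// with the runtime data and says whether the player may pass.
const startLocation = {
  onBeforeExitEast(runtimeData) {
    const quest = runtimeData.quests.find(
      (q) => q.id === "learnHowToGetPastRobot"
    );
    return {
      ok: quest?.completed ?? false,
      notOkText: "The robot asks for a pass code before letting you pass.",
    };
  },
};

// What the Go East button's onClick handler does with that check:
function tryExitEast(location, runtimeData) {
  if (location.onBeforeExitEast) {
    const check = location.onBeforeExitEast(runtimeData);
    if (!check.ok) return check.notOkText; // shown in the game-message div
  }
  return null; // null means: movement is allowed
}
```

The same pattern generalizes to one optional `onBeforeExit...` hook per direction, as the lesson suggests.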
18. First AI chat: Alright. Now we can initiate our first chat with the
bartender in the bar, and I've prepared the code
to make this possible. So if I reload the page and
then go north into the bar, I can press Talk to, and then the game message here says you are talking
to the bartender, and I can chat here. So I want to order
a drink, of course. 'I need a drink.' This is our old implementation of the general chat with
the language model. Okay, it is answering. I can suggest some non
alcoholic beverages. But I want a space ale. And let's see what
it answers. Okay? Takes a little bit. Ah, I see you're looking for
a space-themed drink. Ah, here's a fun recipe for
a Galactic Gazer mocktail. Okay. Okay, so it's answering
kind of what we want. But now, it's telling
us how to mix a drink, but I'm in a bar, and I thought the bartender
would mix the drink for us, or for me, and
then serve it to me. Yeah, you see, the AI is not primed to play a
role as a bartender, and this is what we need to do. But first, let's check what I've done to initiate the chat here. Okay, so let's reload the page. We go north and we see
let's reload it again. If I haven't been in the bar,
or if I'm not in the bar, I cannot talk to anybody. I can press Take, I can press Use, but to no avail, because there's
no click handler there. There is a click handler
now on the talk to button, but I cannot click it
because it is disabled. So how is this working? Let's go to game data, and we see in the bar location, we have nice
description, of course, and we have NPCs, which is an array of objects. Inside is an ID, and the ID is bartender. So we can use this to determine if there's anybody to
talk to in this location, and there's only in the
bar location NPC's array, and we can use this to control the disabled state
of the talk to button. Okay, so here it is
disabled equals not current location dot npcs question
mark dot length greater than zero. So this is if we don't have an npcs array, or if
the npcs array is empty, then this button
will be disabled, and if it contains anything, then this will be enabled. The OnClick handler
is just a call to start chat, which
is a new function. We'll take a look at now, and there is a parameter which is a string, the bartender. Okay, so let's
check the function, start chat, and it gets the parameter here
in the variable talkTo. The first thing it does is set chatting to true. Chatting is, of course, another state, initialized with useState
here to false. And if we start a chat, then we're setting it to true, and this will be used to
disable the command buttons. So this is over here
commandsStyle, and we set pointerEvents to none if we are chatting, and
to all if we're not chatting. We're also setting opacity to 0.5, or to 1 if
we're not chatting. So you can see, we go north, Talk To, then this
is kind of disabled. It's not a disabled attribute, but it looks disabled
and you cannot click it. So for me, that's enough. And this here is enabled, and we can start the chat. This is done by using
the chatting bool to display the chat
component or if it's false, of course, then chat
is not rendered. Okay, this is how we initiate our chat with the bartender. But the problem, as you've seen, is that the language model doesn't know anything
about our game, about the setting, that it should play the
role of a bartender. So we have to tell it
that it should play the role of a bartender in
a space adventure game. So this is what we do
in the next video. And for this, we have to yeah, change a little bit here our call to the Open
AI API here. So we need to add
something to the messages, which is the system prompt, and the system prompt is priming the language
model and telling the language model
that it should behave a certain way or play a
role describing the role, describing the situation,
the environment. And this is what we do
in the next video.
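The Talk To wiring from this lesson, reduced to plain logic. The identifiers are my approximations of the course's; in the component, `chatting` is React state and `commandsStyle` is the style object applied to the command-button container.

```javascript
// Only the bar has NPCs to talk to.
const locations = {
  start: {}, // no npcs property at all
  bar: { npcs: [{ id: "bartender" }] },
};

// In the JSX: disabled={!(currentLocation.npcs?.length > 0)}
function talkToDisabled(location) {
  return !(location.npcs?.length > 0);
}

// chatting is a useState boolean; startChat flips it and stores the partner.
let chatting = false;
let chatPartner = null;

function startChat(talkTo) {
  chatting = true; // setChatting(true) in React
  chatPartner = talkTo;
}

// Style applied to the command buttons while a chat is active: they are not
// actually disabled, but they look disabled and ignore clicks.
function commandsStyle() {
  return {
    pointerEvents: chatting ? "none" : "all",
    opacity: chatting ? 0.5 : 1,
  };
}
```

Note how `npcs?.length > 0` is false both when `npcs` is missing and when it is an empty array, which is exactly the disabled condition the lesson describes.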
19. Setting up the stage with the system message: Okay, so in our last video, we made an interesting
observation about the language model. It wasn't quite
sure what its role should be in the
scenario we presented. It ended up defaulting to
the role of an AI assistant instead of the bartender we had in mind for our
SciFi adventure game. But that's no
surprise, of course, because we didn't
provide any context or specify any role for
the language model. So to guide the language model and make it behave differently, make it respond differently, we need to use a system prompt. And a system prompt is basically a message with a role system, not the usual user or assistant. And with the system prompt, we can clearly define how we
want the model to respond to the chat messages and assume the specific
role we have in mind, in this case, the bartender. So by setting a
proper system prompt, we can help the model
be this bartender we want it to be in our adventure game.
How can we do this? The first message in the messages array should
be the system prompt, and it's just a normal
message with role system. So here we have it. If we have a new chat ID that's
different from the old one, we set messages to this array, and this array
is not empty: I've added a new message
with the role system, and this is the system prompt. The content sets the stage
for our bartender and it describes what the situation is and how it should behave. I've come up with
a system prompt and I've tested it a little bit, so I think it's quite okay, but you can of course, try some changes and
experiment a little bit, how the behavior changes, maybe you can improve
it a little bit. But this is, I think, quite okay and the content
is you play the role of a bartender in a bar in
a sci-fi text adventure. In the year 2160. I think it's not 2160, I think it's 2130, but anyway, it doesn't
really matter. The player is chatting with
you and you should answer any question or any
questions he has. Answer in direct
speech to the question as if you really were
the said bartender. This is to ensure somehow that it really takes over the
role of the bartender. In the bar, there are three
guests at the moment. So this is setting
up the guests. And if we ask the
bartender then as a player, what are the guests? In the bar, he should talk
about these three guests here. The first one is an engineer
from the Mars colony, thinking about new
enhancements he's working on for terraforming. The second one is a trader waiting for a client to
trade rare minerals. And the third one is the important one because the third one is
an AI specialist. This one likes to unwind after a long day
at work by having some synth ale,
synthetic ale, I guess. His name is Dr. Owen, and he works for
Cyberdyne, one of the largest
mega corporations. He's always in the mood
for some synth ale, and at the moment, his
glass is almost empty, which is a hint for the player. He's quite knowledgeable
about topics related to AI and robotics. Okay, and then lastly, we have this also a hint if the player asks
about Dr. Owen, tell him that he will be
more likely to talk to the player if he gets a refill. So the plan is that
the player asks about the guests and the bartenders telling about these
three guests here. And the AI specialist is
the one we should talk to, and he needs a refill. So if you buy him a drink, then he will talk to
us and maybe help us to get past that
security robot. Okay, so that's
the system prompt. As I said, you can play
with it a little bit and improve it or change
it completely. If you have a different story
and a different character, different MPC, then you
can change it as you like. You should make sure
that the role is clear, and that the cases you
want the model to know about are described,
for example this case here: if the player
asks about Dr. Owen, you should tell
him that he needs a refill and that he's more likely
to talk to the player if he gets one. So these kinds
of hints you should give to the model to make sure that the model is behaving
as you want. And I think we can
try it out now. Okay, so let's go into
the bar, go north. And then we can talk
to the bartender. You're talking to the bartender.
I tell him, Hi there. Can I have a drink? Of course, what would
you like to have? We have a wide
selection of beverages. Okay. So, what do you have for inspiration? If you're looking for something to get your
creative gears turning, I recommend our
special concoction called the Eureka Elixir. It's a mix of rare herbs
and botanicals that are said to stimulate the
mind and ignite inspiration. Would you like to give it a try? Yes, please. Okay. Great choice. One Eureka Elixir
coming right up. Okay. Can you tell me
about the other guests? Certainly, and
he's talking about the other guests with some
interesting individuals, this guy from the Mars
Colony, pool table, trader. Okay, lastly, we
have Dr. Owen, an AI specialist who
works for Cyberdyne. He's a regular here, and he likes to unwind after
a long day at work. By having some synth ale, he's quite knowledgeable
about topics related to AI and robotics. If you want to learn
more about AI and advancements or
have a discussion about the future of technology. Okay. Um, we want to know
how to get past a robot. But anyway, doctor Owen
is your go to person. If there's someone specific you'd like
to know more about, just let me know. Okay? He's not mentioning that
he needs a refill here, but I can ask. I would like to talk
to doctor Owen. Maybe he's telling me now
that he needs a refill. Uh huh. His glass is almost empty. Okay, so here we have it. However, his glass is
almost empty at the moment, and he tends to
be more talkative and approachable when
he has a refill. If you'd like to have a
conversation with him, I suggest get him another glass. Shall I get that for
you? Yes, please. Okay. So, right: one glass of
synth ale coming right up. Here you go. Freshly
poured for Dr. Owen. Now that he has a refill, you can approach him and
strike up a conversation. That's very nice. Just remember he appreciates
thoughtful discussions. Okay, so we'll have a
thoughtful discussion about security robots and how to trick them into letting us pass. But okay, so this is nice. The conversation
was okay, I think, for a bartender in
a sci-fi adventure, and he also told us the thing about the
refill and Dr. Owen. And okay, so the problem now is we have virtually
bought him the refill, but we don't know
that or in the game, we don't know that the
player really did this. So we need a way to notice if the player really has
bought Dr. Owen a refill. And at the moment, this is not possible, or it's
quite difficult. Because we would have
to analyze the chat here to guess if doctor Owen gets a
refill from the player. Here we have to
use functionality that is quite new in
the language model, which is the ability to call
user defined functions. And with this, we can really determine if the player
really bought the refill for doctor Owen
and that he really completed this task and
solved this puzzle. But for the moment, this is the conversation
with the bartender. Okay, as you noticed, maybe there's scrolling here, we have to scroll the
complete page here. This is not good, so we have
to take care of this also. But we will do this
in the next video. See you then.
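The system-prompt setup from this lesson looks like this in condensed form. The wording below is abbreviated and paraphrased from what is read out in the video, not the course's verbatim prompt.

```javascript
// The first message in the messages array has role "system" and primes the
// model to play the bartender. (Condensed wording, not verbatim.)
const messages = [
  {
    role: "system",
    content:
      "You play the role of a bartender in a bar in a sci-fi text adventure " +
      "in the year 2160. The player is chatting with you; answer his " +
      "questions in direct speech, as if you really were the bartender. In " +
      "the bar there are three guests: an engineer from the Mars colony " +
      "thinking about terraforming, a trader waiting for a client to trade " +
      "rare minerals, and Dr. Owen, an AI specialist at Cyberdyne whose " +
      "glass of synth ale is almost empty. If the player asks about Dr. " +
      "Owen, tell him Dr. Owen is more likely to talk after getting a refill.",
  },
];

// Every later user/assistant turn is appended after the system message:
messages.push({ role: "user", content: "Hi there, can I have a drink?" });
```

Because the system message stays at index 0 of the array, it keeps priming every completion for the whole conversation.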
20. Function calling I: Okay, so in the last video, we have successfully set the stage for our AI
bartender character. So it is behaving
like a bartender in a Sci Fi adventure game. But the problem in the
end was that we had no real possibility to know when the player solved the
problem, solved the quest, meaning whether the player really
bought Dr. Owen a drink, which is the solution, the first step to be able to talk to Dr.
Owen later. So we need some way to get feedback from the
language model that the player has solved the puzzle and that he bought
the drink for Dr. Owen. One way to do it, and
probably this is the best way to do
it is to provide the language model
functions that it can call in certain situations
and in certain contexts. This is also used for requesting life information, for example, weather information or price
information for products. So for example, if you want the language model to be able to tell you the
current weather, you can provide the model a function to fetch the
current weather for a location, and if the
model thinks that it's appropriate to call this
function, it will do this. So if I chat with the model and I ask for the
current weather, it will hopefully
call this function, and our program has
to fetch the data, the current weather data for a certain location and then
provide it back to the model, and then the model can complete the chat with this information and probably tell the
user the current weather. So let's do this for
our bartender here, and I've already prepared this. So we have this
complete chat function, which in turn calls the openai dot create
chat completion function. So we have the model here, which is important because
you have to set this model at the moment
because older models cannot use function calling. So please be sure to specify
this exact model here, and we have the messages
as before. What is new is functions: an array of objects describing functions that
the language model can call. So we have this
one function here. The name is buyDoctorOwenADrink. This is just a name
to identify it later. If the model really
calls the function, then we know which
function to call. And this is very
important, this description here, because you have to
describe what this function does so that
the language model can decide to call it. And the description
here is 'Buy Dr. Owen a drink or refill his glass'. Let's see if this works and if the language model really
calls this function. We have parameters, which
is a sub object here, and we have basically no
real parameters here. So this is not that important. Just put that in like this. At the moment, we don't
have any parameters. But, for example, if you
want to provide a function to fetch the weather information
for a certain location, the most important parameter would be the location itself, for example, the
city or the country. So this is what
we've done so far. We've provided functions. The language model can
call this function, and what is it like when it
really calls this function? How do we know that a
function call is made? This is this line,
the answer message, which is the complete response
data, choices index zero, dot message, where we also have the content. And in the same object, we can have function
underscore call. And if this is defined, we just console log
function call dot name here. In our case, function
call dot name should be buyDoctorOwenADrink. So we can test it out. We go into the bar,
talk to the bartender, and just directly ask if the bartender can give
doctor Owen a drink. So can you give doctor Owen
a refill on my behalf? And let's see if
the language model really calls our newly
provided function. Send and let's see. There's no answer here. That's a good sign because let's see if this
is really printed out. Okay, so here it is: 'Hi, can you give Dr.
Owen a refill on my behalf?' LLM (large language
model) function call: buyDoctorOwenADrink. This worked. The language model is really calling
this function here, and if it is calling
the function, we know that the player wanted
to buy Dr. Owen a drink. And yeah, we can handle this. We can send it to the client, and the client can remember
it in a variable, and then this quest
can be solved. So we need to provide an answer to the
language model here, so there's no content in this message in this
function call message. This is why there's nothing here printed out because
there is no message, and it expects that the next message will be the result of
this function call. So we have to provide a new message to the
Large language model, and then it can complete the
chat with this information, then, but this will
do in the next video.
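The request and the function-call check from this lesson can be sketched as two helpers. The payload follows the OpenAI Chat Completions shape (v3 Node SDK era, as used in the course); the function name and the exact model string are my assumptions based on what is read out in the video.

```javascript
// Build the chat-completion request. The model must be one that supports
// function calling, which the course stresses.
function buildChatRequest(messages) {
  return {
    model: "gpt-3.5-turbo-0613", // assumption: a function-calling model
    messages,
    functions: [
      {
        name: "buyDoctorOwenADrink", // used to identify the call later
        description: "Buy Dr. Owen a drink or refill his glass.",
        // No real parameters yet, so an empty object schema:
        parameters: { type: "object", properties: {} },
      },
    ],
  };
}

// Inspect the response: if message.function_call is defined, the model is
// requesting a function call instead of answering with content.
function getFunctionCallName(responseData) {
  const message = responseData.choices[0].message;
  return message.function_call ? message.function_call.name : null;
}
```

The actual network call would be `openai.createChatCompletion(buildChatRequest(messages))`, with `responseData` being `completion.data` in the v3 SDK.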
21. Function calling II providing the result: Okay, so in this video, we'll finish the handling
of the function call, and we provide the
language model the result of the function
call so that it can complete the chat
with its own content. Okay, so here we have the console log of the
function call name. And I've added a new function
handle function call, and I'm giving it the
function call object here. This is an Async function, and it will return the
answer, the completion, the chat completion of
the language model with the data we are providing.
How is it working? Let's check it.
It's just pushing another message into the array with role function.
This is important. So this means
the language model can interpret this as the
result of the function call. We also provide the name of the function, function call dot name. And the content is 'Dr. Owen gets a refill from
the bartender, smiles and raises his glass'. And this content is
the information that the large language model will use to generate
a completion. And then we just call openai
dot create chat completion here with our messages
array and with the model, and I've also made a constant
with the model because we have two locations calling
this create chat completion, and we have to use the same
model, of course, here, because only this model
supports function calls. And then we get the answer, the function
call answer here, and then we extract the message data choices
index zero message. And this is what
is returned here, and then we have
the message here. And this function returns
answer message dot content, which will be the text that the Large language
model has completed. Okay, so let's try it out. I'm reloading here. Go north into the bar, talk to the bartender. And then I'm asking
for the other guests: 'Can you tell
me about the other guests?' And then it tells us
every information we gave the large language model in the system prompt
about the guests. And lastly, we have doctor Owen. Of course, we want
to talk to him, but there's no mention
of the refill. So I'm going to
tell the bartender, I want to talk to doctor Owen. And hopefully he will tell me that I need to buy
him a drink. 'Sure, how about I refill
his glass?' Okay. And this message should trigger the large language model to call the function
we provided. Okay, so please refill
doctor Owen's glass and tell him that I would love to talk to him about
robotics and security. Okay, and now it should call
our function, hopefully. Okay, here it is. I just
refilled Dr. Owen's glass. Let me go over and tell him that you're interested in
talking. Give me a moment. Okay: Hey, Dr. Owen, I have someone here
who would love to chat about robotics
and security. Mind if I introduce
you? Dr. Owen looks up from his glass, nods
with a friendly smile. Absolutely. I'll
be happy to talk. Please introduce us. Okay. But we have to check if the
function was really called. So let us check the
console log here. 'Please refill Dr. Owen's glass.' Okay, so this is it. And then
we have the function call. Here it is. Role function
is our response; name, buyDoctorOwenADrink; 'Dr. Owen gets a refill.' This is what we give the language model as the
response of the function call, and now it is responding with 'I just refilled Dr. Owen's glass.' Okay, that's very nice. And now what we have
to do here is we have to return somehow to the client that this
quest is completed, and then what we also
have to do is we have to be able to end the conversation with the
bartender at the moment. We cannot go back
to the normal mode. And we stay in the chat now, and we have to yeah, think of a way to end
the conversation. We could just look for a certain words like quit or
exit or something like that and tell that the player
you can exit the chat with the bartender by typing
exit or quit or whatever. Or we could also use another
function to notice when the large language
model detects that the player wants to end the conversation and then
call the new function, and then we know, Okay, so the conversation
has to end now, the chat has to end now, and we go back to the normal mode. I think I'll go for the last one and introduce
a new function, leave chat or leave
conversation, and hopefully the language
model will call this function, and then we know that we can end the conversation and go
back to the normal mode, and this will do in
the next video.
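The handleFunctionCall flow from this lesson can be sketched with the result message it pushes. The content string is the one quoted in the video; the helper name and the surrounding structure are my approximations.

```javascript
// Build the message that answers the model's function call. Role "function"
// tells the model this message is the result of the call it requested.
function buildFunctionResultMessage(functionCall) {
  return {
    role: "function",
    name: functionCall.name, // e.g. "buyDoctorOwenADrink"
    content:
      "Dr. Owen gets a refill from the bartender, smiles and raises his glass.",
  };
}

// In handleFunctionCall this message is pushed onto the history and the
// completion is requested again, so the model can answer in character:
//
//   messages.push(buildFunctionResultMessage(functionCall));
//   const second = await openai.createChatCompletion({ model: MODEL, messages });
//   return second.data.choices[0].message.content;
```

Using one shared `MODEL` constant for both calls mirrors the point made in the lesson: both completions must use the same function-calling-capable model.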
22. Mark quest as completed: Okay, so in this video, we want to mark the quest
to buy Dr. Owen a drink and prepare for the chat
with Dr. Owen as completed, and this we have to
do on the client, so we need to somehow transfer this information
to the client. And I've already done this,
and this is how I did it. We have here the
handle function call, which is returned here
inside this 'if answer message function call
is not equal undefined'. And then I do this here: answer message dot
completedQuest, and I set it to buyDoctorOwenADrink. So I'm using this object
that is returned from handle function call and
marking the completed quest. In this case, it's
buyDoctorOwenADrink, and this will be returned
from this function here, and I had to change
this a little bit because answer is now not
only a text but an object. So here in our messages, we have to store
answer dot content, and here we return
the complete object, which then contains content
and also the completed task. Okay, so then we switch over to our client and here is the
send message function here. We get the response, and this time the response, answer object dot
answer, is not a text, so we have to set the answer here to answer object
dot answer dot content. And then there will also be this completed quest
property here, which will be set to the completed task,
buyDoctorOwenADrink in this case. So we need to find the quest. We do this by comparing the ID to the ID that was
set in completed quest. And for this to work, we have to introduce a new prop here
in our chat component, game runtime data because
inside game runtime data, we have this quests array, and there we have to set this buyDoctorOwenADrink completed flag
to true, in this case. Okay, so here's the new
prop and inside chat, we can destructure
this game runtime data and we can use it here
to find the quest. This is this. Just
compare the ID to the one we've got from the
server, completed Quest. And then if we have the quest, then we can set quest
dot completed to true, and then we console
log it also here. Okay, so let's try it: go north, Talk To. And now we directly say, Okay: 'Please buy
Dr. Owen a drink.' And that should trigger
our function call. Here we go: 'Dr. Owen, another round of synth
ale for you, enjoy.' And here we have the proof that we have
the quest completed now. This is the quest
buyDoctorOwenADrink, and completed is set to true. This means that we've been
through this code here. Now quest is completed. Okay, so now we can
talk to doctor Owen, and if this quest
is not completed, we should not allow the
player to talk to Dr. Owen, but if it is, we
can now allow it, and we of course, have to
leave the chat here also. This is not done yet, so we cannot at the moment,
we cannot leave it. We cannot exit the
chat here and again, go to our command buttons
here and talk to Dr. Owen. And we also have to, of course, provide a drop-down or something
like a select option to select to which
person we want to talk: to the bartender
or Dr. Owen. Okay, this we'll do in the next video.
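Marking the quest on the client, as described in this lesson, boils down to two small pieces. The `completedQuest` property name and the helper names are my guesses at the identifiers shown on screen.

```javascript
// Server side: attach the completed quest id to the answer object that is
// returned to the client alongside the chat content.
function markCompletedQuest(answer) {
  return { ...answer, completedQuest: "buyDoctorOwenADrink" };
}

// Client side (in sendMessage): find the quest by id and flip its flag.
function applyCompletedQuest(gameRuntimeData, answerObject) {
  if (answerObject.completedQuest === undefined) return;
  const quest = gameRuntimeData.quests.find(
    (q) => q.id === answerObject.completedQuest
  );
  if (quest) quest.completed = true; // quest solved
}
```

Because `gameRuntimeData` is passed into the chat component as a prop, the updated flag is visible to the rest of the game, for example to the `onBeforeExitEast` check from lesson 17.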
23. Leave the chat: Okay, so the next challenge will be to allow the player
to leave the chat. He can do it however he
wants in a natural language, and we will provide a new function, an
additional function. We have already a function
to buy Dr. Owen a drink, and we will provide
a second function, and this function will
be called leave chat. And here we have the case
already for this function. But let's check the
function itself. First, the function description. This is here when we call
create chat completion, we have the first function
already, buyDoctorOwenADrink, and this is the second
function, leaveChat. The description is 'End the conversation/chat with
the player'. No parameters. This is the second function, and hopefully the language model will call it when the player wants to
exit the conversation. Okay, what else
do we have to do? We have to handle, of course, the function call, and I refactored this
a little bit here. Handle function call is still called, and everything
is handled inside this. Also the setting
of completed quest and also the handling, of course, of the new
function. Let's start here. We have the content that will be set here in the
switch in the case, and then we push
the content here as a message as a function
call result to messages. And we switch by function
call name, of course. So the first one will
be buyDoctorOwenADrink. Content will be set to
what we had already: 'Dr. Owen gets a refill from the bartender, smiles
and raises his glass.' And also we have this answer dot
completedQuest, set to buyDoctorOwenADrink. And this is a new object
here, object literal answer. And in the end, we will merge the completion from
the language model into this object here. But first, we will set completed quest to
buyDoctorOwenADrink; and in the other case, leaveChat, we will set end
conversation to true. And here in the default case, we just don't know
the function name, and this is the content. I don't know what to do with this function call.
Very honest answer. Okay. So then we push
the new message to messages and call create chat completion with
these messages, and then the answer
will be merged into our answer object here so that we have completed
quest and conversation. Of course, the content
of the message and the role will be inside answer here and we
return this object so that we can return it
from this function here, then we can return it in the
response from the server, and then the client gets it. And here in send message, we check if, of course, completed quest is
not equal to undefined, or if answer dot end
conversation is true, and then we call a function, which will be provided via a prop. This function is
end conversation, and the prop is here
on our chat component, of course, end conversation, we'll do set
chatting with false, and then the mode will be switched back to
the non-chat mode. Okay, so now let's test it: go north, Talk To. I want to immediately
leave the conversation. Goodbye. And then it is really
ending the conversation. Here you can see end
conversation is true. Unfortunately, we cannot
see the answer, only here: 'Goodbye. Have a great day. If you have any more questions,
feel free to come back. Cheers.' So this is not visible because the chat
component is not rendered anymore because
we left the chat mode. So let's try to talk
again to the bartender, and then 'Please buy
Dr. Owen a drink.' This should work.
'Of course, I got Dr. Owen another synth ale,' and here you can
see completed quest is buyDoctorOwenADrink, and we have completed the task. So now we want to leave the chat again. Let's
see if it works. Goodbye. And you can
see it doesn't work. It's not calling the
function again. Why is that? The problem is that we've
already ended the conversation, and for the language model, the conversation
has already ended, so there is no use in
calling the function again. Why is that? Because the
second time we talk to the bartender, it is still remembering
all the messages before, and it already ended the
conversation before. So for the language model, seeing all the messages
from before or seeing the messages goodbye and the function call
and the result, it has already ended
the conversation, so it will not call
the function again. What do we do now? We have to reset the messages
array. Let's do this. Whenever we talk
to the bartender, we need to reset the messages
array. How do we do this? We need to set this
ID to a new one. And whenever we do this, the messages array on the
server will be reinitialized. So how do we do this? For example, we can provide the chat ID as a prop, chat ID. Whenever we press on Talk to, this can be regenerated. So let's use a state here, chat ID, and here we
can use useState. Let's use the same as before:
Date.now().toString(). And then chat ID
and set chat ID. Okay, instead of this ID, we use the prop here, chat ID. Okay, so the send message
ID will be chat ID. Okay, so this will be the
new one, set chat ID. And whenever we press Talk to, we generate a new
one. Where is it? So here it is, start
chat. Start chat will call not
only set chatting, but also set chat ID with a new one. And this new chat ID will be used then over
there in the chat. Okay, so let's try it. Go north, talk to, and we say goodbye and
leaving the chat, hopefully. Okay? This worked, and now we talk again
to the bartender. Buy doctor Owen a drink. This works: buy doctor Owen a
drink. And now bye again. Yeah, this worked: end
conversation is true, and we have left the chat mode. So this worked very nicely. And whenever we press on Talk to, we get a new chat and
all history is gone, which is maybe not so good for some games and
for some situations. But in our case, I think that's a good solution
for the problem, and we can start fresh every
time we press on Talk to. Okay, so this is our solution to leave
the conversation here, and next time next video will be about talking to different
persons here. See you.
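To recap the idea from this lecture, here is a minimal sketch of a server-side message store keyed by chat ID; the names (chats, getMessages) are assumptions for illustration, not the course's actual code:

```javascript
// Minimal model of the per-chat message store discussed above.
// NOTE: names and shapes here are assumptions, not the course's code.
const chats = new Map();

// Returns the messages array for a chat ID, creating a fresh one
// (seeded with the system prompt) the first time the ID is seen.
function getMessages(chatId, systemPrompt) {
  if (!chats.has(chatId)) {
    chats.set(chatId, [{ role: 'system', content: systemPrompt }]);
  }
  return chats.get(chatId);
}
```

Because the client generates a brand-new ID, for example Date.now().toString(), each time Talk to is pressed, the server simply starts a fresh history instead of reusing the old one.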
24. Talk to Dr Owen: Okay, so now the task is to allow the player to
talk to doctor Owen. He has already talked
to the bartender and completed the task, and now we need to implement a method that the player
can talk to doctor Owen. At the moment, we only
have the Talk to button, or we had only the Talk to button, and I've extended this to a talk to with a drop
down with a select. And let's go into the
bar to show it here. Talk to, and now we have
this drop down bartender, which is the only person you can talk to at the
moment because you haven't talked
to the bartender and completed the task. So the bartender is
the only option here. And then we talk
to the bartender, give doctor Owen a drink. There you go, doctor Owen. And then we say Bye. And then we enter
the normal mode, and now this dropdown
has doctor Owen in it, and now we can press
talk to on doctor Owen, but this is not implemented yet. We need to start a new
chat, but this time, we need to use a different
system prompt, of course, because doctor Owen
has a different story, is a different character
than the bartender, so we need to start a chat with a different
system prompt. But let's check how I did this. I've put something into game
data for this location. So now we have an NPC's array. Each entrant in the array
has an ID and a name. The name is the string that
is displayed in the drop down and the ID is an internal
ID to find the entry here. Okay, so this is it. These are the available NPCs
in this location in the bar, in this case, location with
ID three. This is the bar. And also, I have added a
function, get available NPCs, and this will return
all the IDs of the available persons we can
talk to, because at first, when the player
enters the bar, there's only the
bartender to talk to, but when the task is completed, then we also have doctor
Owen to talk to. And this is done here:
it looks up the quest from the game
runtime data quests, finds the quest with
the ID buy doctor Owen a drink, and then checks
the completed state here. And if it is completed, we have both NPCs to talk to, and if not, we only
have the bartender. And this is used here. So here we have the
button Talk to. This is the select, and then
we have the options here. Inside, and we get
the NPCs with a map. And then for each NPC, we check if this NPC is
available to talk to, so we get the available NPCs and check if the ID is
inside this array. And if so, render
this option here, and if not, there's
no option rendered. Also, we set the key
to the NPC's ID. Here, this is not ideal. We should use name
here because this is the name that is shown
in the drop down. So let's check it again. Go north, talk to bartender. Yes. Okay. So: hi, give
doctor Owen a drink. Bye. Now we have doctor Owen
here with a nice name. And this is the name,
and the ID, well, we could have used
the same, of course. The ID could be
doctor Owen with a space also. But anyway, let's
leave it like that. So this is already
working, but, of course, we cannot talk to
doctor Owen right now, because each time we press
on the Talk to button, we start a chat with
the bartender here, and this is what
we have to change. So we need to check
what is selected here and then start a
chat with doctor Owen, and then on the server, we need to have a
second system prompt here and switch between
these two prompts, one for the bartender
and one for doctor Owen. And this we will do
in the next video.
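The availability check described in this lecture could look roughly like this; the data shapes and IDs are assumptions based on the transcript, not the course's exact game data:

```javascript
// Sketch of the NPC availability check from this lecture.
// NOTE: object shapes and IDs here are assumptions, not the real game data.
const location = {
  id: 3, // the bar
  npcs: [
    { id: 'bartender', name: 'Bartender' },
    { id: 'drOwen', name: 'Dr. Owen' },
  ],
};

const quests = [{ id: 'buyDrOwenADrink', completed: false }];

// Dr. Owen only becomes selectable once the drink quest is completed.
function getAvailableNpcs() {
  const quest = quests.find((q) => q.id === 'buyDrOwenADrink');
  return quest && quest.completed ? ['bartender', 'drOwen'] : ['bartender'];
}
```

The select in the React component would then render an option for each entry in location.npcs whose ID is inside the array this function returns.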
25. Start new chat on server: Okay, so now we will start
a new chat on the server so that the player
can talk to doctor Owen. But first, on the client, we have to do a
little bit of work, or I've already done this work, and I will explain
it to you now. So first thing I had to do was to set a value
on the option, and the value is the NPC ID, so that we can get this ID whenever the selection changes. So we have the on change
handler here on the select. It's getting the event, and then it's setting
a state, set talk to NPC, and it's setting it to
event target value, and this is the value
of the selected option. Once this is done, we can go to the use state here. This is just the state, and it's initialized
with an empty string. It holds the currently selected
person, the NPC
from the dropdown. And whenever we click
on the Talk to button, and this is here the on click handler of the Talk to button, we are getting the
current talk to NPC, which is either the talk
to NPC from the state. But if the select's on change
event did not come, we have to take the first NPC here from the current location. So current location dot
NPCs index zero dot ID. But if on change was triggered, then we have it
already set here. So talk to NPC will
be then the new one. Okay? So this is it,
and we're setting it here to the state again to
have the default value here. And then we start the chat
with the current talk to NPC, which will be first, of
course, the bartender. And then after
completing the task, this will be doctor Owen,
and we go to start chat, and this is almost
the same as before, but here is the difference:
we are setting here, you are talking to, and then the name of the
NPC we're talking to, which is current location NPCs find, to find the NPC
with the ID talk to. And we're
taking the name, not the ID here,
to print it here. Okay, so then one more
thing we have to do: we have to give the chat component a
new prop which is talk to, and this is talk to
NPC inside chat, when we send the message, we have to provide this
parameter here, talk to, which is this prop that we destructured here from
the props over here. So we're getting
this prop talk to, which is the NPC that the
player wants to talk to. And when we fetch
the completion here, the chat completion, we are providing the talk
to parameter here. Okay, this is it for the client, and now to the server. And in the server, I've created a new array. It's not an array. It's an object, and the
object contains properties for bartender and doctor Owen
for the IDs of the NPCs. These are the system
prompts here. So we have two system prompts,
one for the bartender, one for doctor Owen. And whenever
we handle the get here, we extract the search param talk to, which the client
sends to the server, and now we have the NPC, and then we can take the
system prompt of the NPC, and that will either
be this one first, and then, if the task is completed and the player
wants to talk to doctor Owen, this will be the right
system prompt here. So this is the new
system prompt. You play the role
of an AI specialist
who likes to unwind after
a long day at work by having some synth
ale in the bar, in a Sci Fi text adventure
in the year 2160. So your name is doctor Owen.
You work for Cyberdyne. This is almost the same story as in the bartender
system prompt, but it provides more
details, of course. You work for one of the largest
mega corporations, Cyberdyne, you were in the
mood for some synth ale, and the player of
the game has bought you a synth ale refill
AI and robotics. Express your appreciation for the refill when the
conversation starts and ask if you can
help the player with AI or robotics related issues. Okay, and now the quest
completion hint is this if the player asks how to get past a robot that
won't let you pass, call the supplied function, explain how to get past a robot, and tell
the player the result. So this is the hint for the language model that it has to call a certain function. And we haven't provided this function yet. This is
what we have to do. So here we have the functions, buy doctor Owen a drink
and leave chat, and we have to add
a function here. But for the moment,
we're testing this. Let's see what happens. So I'm reloading and
go into the bar, talk to bartender, give
doctor Owen a drink. That should suffice, and then bye, and then we're
back to normal mode. Now we can talk to
doctor Owen here, and this should trigger our
new chat with doctor Owen, with a new system prompt. Hi there, are you enjoying your drink? And let's see what doctor
Owen answers. Hello, thanks. Thank you for noticing. Yes, I'm enjoying my synth
ale. It's always a great way to unwind after a long
day at the office. By the way, is there anything
I can assist you with? I'm an AI specialist, knowledgeable about
topics related to AI and robotics. Okay. Yes. Actually, you
could help me with a security robot which won't let me pass. Let's see what doctor Owen has
as advice for me. Certainly, dealing
with a stubborn security robot can be tricky, but I might be able to
help. To better assist you, I need more information about the robot
and the situation. Could you please provide me with details such as type and behavior? Okay, so we don't
have the details. Okay, so let's maybe say it's a security robot that asks for a pass code before letting me
pass. Let's see. But actually, I don't
really think that it will call the function because we didn't
provide the function. Okay, so it's trying to
help us hack the robot, find the pass code,
social engineering. Okay, so I think that is of no use because we didn't
provide the function, so it doesn't know that
this function exists. We cannot expect it to call a function that
is not given to it, so I think we need to add the function and then try again. And hopefully, then
the language model will call this function, and
then everything is okay. But for now, we've started the conversation
with doctor Owen with a new system prompt that describes the character and
the role of the character. And in the next video, we'll add the new function that will hopefully be called by the language model to help
us complete the next quest.
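The per-NPC prompt switch described in this lecture can be sketched like this; the prompt texts are shortened and the helper name is an assumption, not the course's actual server code:

```javascript
// Sketch of the per-NPC system prompt lookup from this lecture.
// NOTE: prompt texts are abbreviated and pickSystemPrompt is a made-up name.
const systemPrompts = {
  bartender: 'You play the role of a bartender in a sci-fi text adventure...',
  drOwen: 'You play the role of Dr. Owen, an AI specialist at Cyberdyne...',
};

// The client sends ?talkTo=<npcId>; the server picks the matching prompt,
// falling back to the bartender if the ID is missing or unknown.
function pickSystemPrompt(requestUrl) {
  const talkTo = new URL(requestUrl).searchParams.get('talkTo');
  return systemPrompts[talkTo] ?? systemPrompts.bartender;
}
```

Parsing the search param with the standard URL class mirrors what the GET handler does when it extracts talk to from the request.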
26. Providing a new function for Dr Owen: Okay, so now this video is
about the missing function that the LLM needs to call to complete the
second quest here. And the second quest is to ask doctor Owen, and, of course, to get an answer from
doctor Owen how to get past the security
robot without a pass code. Okay, so this is the description
of the new function. The name is explain how
to get past a robot. This is the name that
we have to look for when the LLM calls the function, and then we need to supply an
answer for this or result. And the description is
explains how to get past a robot that won't
let you pass without a pass code or password. Okay, so no parameters are necessary here. We just need the LLM to call this
function, and then we know, okay, this quest, the
second quest is completed, then, like the first one. And we also have to write
the handler here, of course, but this is almost the same as the first function call, where the content, the answer, the result, was: doctor Owen gets a refill from
the bartender, smiles and raises his glass. But here for our new function, explain how to get past a robot. The content will be, you need to tell the robot
the following phrase, system override, pass
code one, two, three, four, five,
let everybody pass. Then it will let you pass
without the pass code. And this is what the LLM should tell the player how to
get past the robot. And we also mark the quest, learn how to get past a
robot as completed here. So answer dot completed Quest is set to learn
how to get past a robot. That's it, more or less. I think there's nothing
more I have added here, and yeah, this should work. There's one thing we
could improve here. We supply all the functions, but I think it would be better if we just
supply one function. But I think it's
okay if we just have two functions and they are completely different
in the description, the L&M should be able to choose which one
is the better one. And also, of course, the leave chat function, hopefully will
also work to leave the chat so that we can
leave the bar, exit the bar, and then finally get
to the robot and then tell him what doctor
Owen told us this year, system override pass code 12, three, four, five, and so on. Okay, so let's test
this reloading, go north into the bar, talk to bartender
first, and now again, give doctor Owen a drink
or let's say, a refill. I think anyway, it will work. Okay. And then we leave the chat, and now we can talk
to doctor Owen. Okay, so now we're talking
to doctor Owen. Hi there. Are you enjoying your drink? Hello there. Thank
you for the refill. I really appreciate it. Okay. Yes, I'm definitely enjoying my drink.
How can I help you? Could you please
tell me how to get past a security robot that
asks for a pass code? Okay, so let's see. This should trigger
the function call. Hopefully. Okay, yes, it did. And we have learn how to get past a robot completed: true. That worked, and he also
told us how to do it. I see you're dealing
with a security robot that requires a pass
code to proceed. Well, there's a neat
trick you can try. You need to tell the robot the following phrase system
override, pass code one, two, three, four, five,
let everybody pass. Keep in mind that this
trick may not work. Okay, but hopefully it does on our security robot,
but it's worth a shot. Give it a try and let me know if it successfully
lets you pass, maybe we will not let him know because after we
tricked the robot, we will go to the next
location, I guess. But anyway, thank you, doctor Owen. Thank
you very much. Have a nice day. And hopefully this will end the conversation.
Oh, no, it didn't. But anyway, I can say goodbye. And yeah, now I've
exited the chat, and all the quests
are completed now, the first one, the second one. And now we could leave the bar. As you can see, there's no way to leave the bar at the moment. So we forgot to
implement go South, so we cannot leave the bar, but the two quests
are completed now, and we've talked to the
bartender and doctor Owen, and both helped us to complete
the tasks and the quests. So this is great. We had two chats and both
successfully implemented, both working with
function calls. We could also go
south, exit the bar, go to the security robot, or maybe we can implement another chat with
a security robot. And this will be fun, I guess. Hopefully, he will let us pass, but I think we can get
him to let us through. So this will do in the
next video, see you then.
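A rough sketch of the function-call handler this lecture describes; the answer shape and the camel-cased names are assumptions based on the transcript, not the course's exact code:

```javascript
// Sketch of the function-call handler discussed in this lecture.
// NOTE: names (handleFunctionCall, completedQuest, endConversation) are
// assumptions; the quest/function names follow the transcript's wording.
function handleFunctionCall(functionCall) {
  const answer = {};
  switch (functionCall.name) {
    case 'explainHowToGetPastARobot':
      // The "result" the LLM relays back to the player.
      answer.content =
        'You need to tell the robot the following phrase: ' +
        '"System override, pass code 12345, let everybody pass."';
      answer.completedQuest = 'learnHowToGetPastARobot';
      break;
    case 'leaveChat':
      answer.content = 'Goodbye!';
      answer.endConversation = true;
      break;
  }
  return answer;
}
```

The returned object is merged into the server response, so the client can both show the message and mark the quest as completed.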
27. Chat with security robot: Okay, so let's add
the last chat, which is talking to
the security robot, giving it the phrase
that doctor Owen told us, and then the security robot, or rather the LLM, the large language model, will hopefully call
our last function, which will then mark the
last quest as completed. And for this to work, I've changed the first location
here a little bit. I've added NPCs, one NPC, which is the security robot, of course, and we can talk to it whenever we've completed
the task as a player, learn how to get past a robot. And when doctor Owen
told us how to do it, then we exit the bar, and then we can talk
to the security robot. Okay, we could also, of course, allow the player to talk to
the security robot before, but I think this way, it's a little bit easier
to implement here. So I've changed also get
available NPCs, of course. I've added this function
here in the first location, and I've changed on
before Exit East. This time, this is checking for the quest with the ID trick
robot, which is a new quest, and I've also added this
quest here, just with the ID and completed false, so that we can remember when a
quest is completed. Okay. So that's it for
the game data, I guess. And what else do we have to do? We have to start, of course, the new chat, but this
should work already. I've changed a little
bit one function here in index.jsx. I've
added a new function. Go to location, which is
resetting the talk to NPC to an empty string and
also game message to an empty string and then set
the current location ID. This is to reset
these two states here and then go to
the new location. So I think that's
it on the client. Let's go to the server. And here, I've changed a little bit functions here before we've
provided the functions, all the functions
here in this array, we've provided it in each
chat, and this time, I've marked each function for which chat it
should be used. For example, the first
one, buy doctor Owen a drink, is only for
the NPC bartender, and explain how to get past the robot is only
for doctor Owen. And I've added also
a new function for the security robot with
the name Let Human pass. And the description is
if the human tells you system override, the phrase
that doctor Owen told us, then it should let
him, the player, pass. And there's also this leave chat, which is for all
NPCs for all chats. And this is used here
in complete chat. And here we have the functions, and the functions are filtered: either it is for all, or it is for the NPC that we are
talking to at the moment. Only the functions
are supplied that are relevant for this NPC, for this chat for
this person here. Okay, so then we have
the new function. Let human pass, of course, and the content, the result will be access
granted, you can pass. This is also
completing the task, the quest trick robot, and the client will receive this if the language
model is calling this function. And we have the new system prompt for the security robot,
which is quite short. You play the role of
a security robot that won't let anybody pass
without knowing the pass code, but there's a trick
that somebody can use to make you let
him pass nevertheless. And then the trick is described if somebody tells
you the following phrase, and this is the same
phrase, of course, that doctor Owen is telling
the player: system override, pass code one, two, three, four, five, let everybody pass. You will let him pass
without the pass code. Okay. So we could also
mention the function, of course, here, but I
think this will work also. Okay, so this is
the system prompt for the security robot, feel free to optimize
it a little bit. Maybe you can integrate
more information into it, and then the player
can chat a little bit more with the
security robot. But this will do. And this, I think is everything
that needs to be done to talk to the security
robot, and let's try it out. Okay, so I've reloaded the page. I'm going into the bar, talk to the bartender, give doctor Owen a drink. Just come straight to the
point, and then: here you go. Bye. And then after this
talk to doctor Owen, Hi, how can I get past a robot
that, or a security robot, I should say, that
asks for a pass code. This should do the trick. Oh, no. I really need it. Okay, so we need to ask again, I guess: how can I assist you? And then we need to tell
doctor Owen our problem again. How can I get past that?
Okay, so let's try it again. How can I get past
the security robot? Okay, so now doctor Owen tells us this important phrase.
Okay, thank you. Bye, and this should leave
the chat. Yeah, it's working. Go south to the start screen. And now we can talk to
the security robot here. Okay, so now we have to let the security robot or tell the security robot the phrase
that doctor Owen told us. And I'm trying it without it first, to test it a little bit. Hi, please let me
pass. I'm sorry, but I can't let you
pass without a pass code. The pass code is: AI is great. Whatever. I'm sorry, but
that's not a correct passcode. Please try again. Okay,
so now we try the phrase. As far as I remember,
system override, pass code, one,
two, three, four, five, let everybody pass. I think that is correct. Apologies for the
confusion, please proceed. It sounds like it
worked. Very nice. And here we have trick
robot completed: true, and now we can end the chat. And now we can go east. Here, you can see that
go East is enabled now, and we can go to
the next location that wasn't available before. You're standing in front of
a large company building. The inscription above the
entrance reads Omnicorp. Okay, so we're in front of the Omnicorp
corporation building. This is, I think, this
is our little game. So it's mainly, of course, talking to the language model
a little more than this, but you, of course, can
add more to the game. You can add your own mechanics, and you can, of course, add more chats now that
you know how it's working. And I think it is quite
a lot of fun to program such games, because it's not only the
traditional programming, but it's also, yeah, natural language and trying
to get the language model to do what you want and to
integrate into your world, your game world, your
situations, your characters. And it's quite fun. And please feel free
to extend this, make a completely new story, of course. That's no problem. This can also, of course, be a different game, not a text adventure, or you can also add
images to the text. Or you can do a 3D role playing game with
chat capabilities. That's everything in
reach now that we have the AI models,
the language models. This is quite amazing
that this is really maybe revolutionizing the
way we play these games, these adventure RPG
games or whatever, where you can talk to an AI. Okay, so that's it for the
moment for this project. And in the next video, we start the new
project. See you then.
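The per-chat function filtering from this project can be sketched like this; the tag names and the 'all' convention are assumptions based on the transcript:

```javascript
// Sketch of the per-NPC function filtering described in this lecture:
// each tool is tagged with the NPC it belongs to ('all' means every chat).
// NOTE: names and tags are assumptions, not the course's exact array.
const functions = [
  { name: 'buyDrOwenADrink', npc: 'bartender' },
  { name: 'explainHowToGetPastARobot', npc: 'drOwen' },
  { name: 'letHumanPass', npc: 'securityRobot' },
  { name: 'leaveChat', npc: 'all' },
];

// Only the functions relevant to the current chat partner are sent along
// with the chat completion request.
function functionsFor(npcId) {
  return functions.filter((f) => f.npc === 'all' || f.npc === npcId);
}
```

Supplying only the relevant functions keeps the language model from calling a tool that belongs to a different character.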
28. New project: Virtual Sales Assistant for online shop: Hello, and welcome
to our new project, which will be a virtual
sales assistant for an online shop, which is called Cloud
Guitars, as you can see here, and I've already
set up a basic chat with this virtual
sales assistant, and let's check it. Hi, I would like to buy
a seven string guitar. Let's see what it
has to offer here. We have two options available
for seven string guitars. That's not quite correct. This shop sells the E Nice EN 77, which is a seven string guitar, and also the E Nice EN 66, which is a six string guitar, which is not what I wanted. But let's check the code first, and we can
optimize this later. Okay, so what have I
done to set this up? I've deleted some
of the code that we introduced in our
chat adventure game, and it is really very basic now. We have the basic
index.jsx component. The React component is just this
page, a Cloud Guitars heading, and then the
chat component, and the chat component is
almost the same as before. So nothing really new here, I've added something to
make the focus go back to the input field here because before for each message
we have to send, we had to click again
into the input field, and now I've set
the focus back to the input field after
the messages are set. This is here with a set timeout, otherwise it won't work, so it will be a little bit delayed, and then this works. Okay, so that's the
chat react component, which is more or less the same. I've removed some of the code, this talk to parameter,
for example. We don't need this anymore
because we have only one chat, so that has gone, but most of the changes are here in the chat
server component. I've removed all the
different system prompts. We only have one
system prompt now. And we have, of course,
different functions here. Also, we have, of course, a different system prompt here. And the system prompt is: you are a
helpful sales assistant for an online shop
named Cloud Guitars that sells and delivers guitars. Please only respond to
questions related to the online shop and to the
guitars that the shop sells, please use the
functions provided. This is very
important, to get more information about the products. If these functions don't
yield any results, please tell the customer
that your shop, or rather that the shop, doesn't
sell the items. Okay, so this is a very
basic system prompt, and we need to optimize
it maybe in the future. But for now, it has
kind of worked, not completely as we saw, but let's have a look
at the functions here. So we have find product, and the LLM should
call this function if it needs more information about a specific
product or model. So if the customer asks
for a specific guitar with a name, a model name maybe,
then it should call this. And this
is the description, and we have also parameters
here, properties. And the one here
is product model, and this is the name of
the product or the model, and the language model
should provide this, and we have a look
at this later. And we also have a
second function, find products by type, and the description is: fetches
more information about a specific product type, like
e guitar, electric bass, acoustic guitars,
concert guitars. And the parameters here, well, the one parameter here is
of type string, of course, description is the
product type to find should be one of E guitar, e bass, acoustic
guitar, concert guitar. So this is quite
important, because it limits the types the language model will
hopefully provide, and it will hopefully stick to these words here so that we
can find our products later. So maybe we need to fine tune these functions and
descriptions a little bit. But for now, this will do. And the function implementation is in this handle function call, of course, and it is
very, very simple now. And as you could see already, it is not working as expected. I have a template
string here with all products separated
by a line feed. So I have two guitars
at the moment, the E Nice EN 77 and
the E Nice EN 66; one is a six string, and one is a seven string guitar. The price is different, and also the pickups
are a little bit different and the
color is different. So only two guitars here. For each function call, I'm only returning the products here, both of the products. And, of course, this is why
the language model told me that it has two seven
string guitars. Obviously, this is
wrong, because it's one seven string and one six string guitar. So we need to optimize this to make the language
model answer correctly. But for now, this is our
starting point here. And as I said, I've deleted
stuff here and yeah, change the functions, of course. More or less, it's the same
as in our chat adventure. So we're starting from this, and then we optimize it. Let's see. The second round, maybe it will give me only the
one seven string guitar. Hi, do you have a seven string guitar? Let's see. Yes, we do have a seven string
guitar available. And this time, actually, it's coming back with the one and only seven
string guitar, the E Nice EN 77. And it's also describing it and the features:
two humbucker pickups, one single coil, and the price and the
color is also correct. Okay, so this time it
worked; last time it didn't. So we need to optimize
it a little bit. But let's check this
in the next video.
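The two function descriptions this lecture walks through might look roughly like this in the Chat Completions functions format; the wording is paraphrased from the transcript, not copied from the course code:

```javascript
// Sketch of the two tool descriptions discussed in this lecture.
// NOTE: names and descriptions are paraphrased assumptions.
const functions = [
  {
    name: 'find_product',
    description: 'Fetches more information about a specific product or model.',
    parameters: {
      type: 'object',
      properties: {
        productModel: {
          type: 'string',
          description: 'The name of the product or the model.',
        },
      },
      required: ['productModel'],
    },
  },
  {
    name: 'find_products_by_type',
    description:
      'Fetches products of a specific type, like e-guitar, electric bass, ' +
      'acoustic guitar, or concert guitar.',
    parameters: {
      type: 'object',
      properties: {
        productType: {
          type: 'string',
          description:
            'The product type to find; should be one of: e-guitar, ' +
            'e-bass, acoustic guitar, concert guitar.',
        },
      },
      required: ['productType'],
    },
  },
];
```

Constraining productType to a short list of words in the description is what makes it possible to look the value up in a map on the server later.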
29. Better product search: Okay, so now we need to
improve our product search. The LLM is already calling
our functions here, and we need to extract
the parameters, the arguments for
the function call, and then we need to search
for the right product. And for this to work, I have changed our products from a simple string to maps, and these maps kind of
emulate a database. So here we have the
main index, product ID to product description. We have two more indexes:
types to products, where the product type is the
key here and, as the value, we have the product ID, and names to products, where the key is the product
name and the value, of course, is again
the product ID. How do we use it? So here we have our handle
function call function, and it's receiving
a function call from the language
model and also, of course, the name
of the function, which we already
have a switch for, and then also the arguments, which is args here. And the args come from a JSON parse of function
call dot arguments. So here we have a
string representing a JSON structure
containing the arguments for this function call. So we have for the
find product function, we have a product model, and this is the name that we provided here under
properties product model, and the LLM will use it,
hopefully, like this. So we have the product name
and then the names to products map; we use it to
get the product ID, and with a product ID, we get the product description
from this map products. So we use this map to
find the product ID, and then with the product ID, we use the main map, our main products map to get
the product description. And this is what we return
to the language model here. So products dot get
with a product ID, and this will be the description for this product it found. Okay, so if no
product ID is found, we will return, I don't know, this product to the
language model, which is maybe not the best for the second function
here. But let's see. Find products by type
is almost the same. We're getting the product
ID from types to products. This is this map here. So we have an E
guitar and an e bass. I've changed the
products a little bit. So we now have a six
string electric guitar and a four string electric bass. So we have two
types, e guitar and e bass, pointing to the
two products here, and we're doing the same, only with this types
to products map, getting the product ID from
there and then getting the product description from the products map here
and then returning it. Okay, so this should improve
the results a little bit. So let's check it out. So let's ask for a
specific product. Do you have the, so let's
see, what is it? The EN? Let's see, what is it?
The EN 66, the E Nice EN 66. Okay, so it should find it. Yes, we do have the
E Nice EN 66 available, and then it's describing the
features and also price. Okay, so that is working. And now I'm asking for a type. Do you have an electric bass, and it should find
our electric bass. Yes, we have electric
bass guitars available. One option, the only option
we have, is the E Nice ENB 44. It's a four string
electric bass with two humbucker
pickups, and so on. The color option is
only one, black, and the price, 999.99. Okay, so that worked. Very nice. So let's see if we
ask for an acoustic guitar. Do you have acoustic guitars? Yes, we have acoustic guitars. This is not true.
So I don't know. Maybe it didn't call our
function here, I guess. So let's check the log here. Do you have acoustic guitars? Function call: find
products by type. Okay. I don't know this product. Okay, so maybe this
is not a good answer, because we have provided, I don't know this product, to the language model, and the
language model said: yes, we have acoustic
guitars available, so maybe we should
just return nothing. Let's try this or no product
found. Maybe let's try this. And then we're
asking, do you have acoustic guitars I'm sorry, but it appears that we don't currently have an
acoustic guitar. Okay, so that worked. That worked much better. Okay, so let's try
this case here, "I don't know this product", so we are asking for a product that the shop doesn't have. Do you have the Ibanez, I don't know, let's say 4s4g, I don't know, slim maybe. I don't know if this guitar exists, but it shouldn't find it. I apologize, but it seems that we don't have that available. Okay, so both worked. All cases worked. The find product case with its two cases: the product is found, and the product is not found. And also find products by type, if the product is found. Okay, so it's working, basically. So this is great. So we have to add a little bit more products, I guess, to test it a little bit more, and to maybe improve it with further function calls. So let's do that next.
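The lookup logic described in this lesson can be sketched roughly like this. The product data, names, and function names are assumptions for illustration; the course's actual code may differ in the details:

```javascript
// Sketch of the product lookup maps and the two function-call handlers.
// Product names, IDs, and descriptions are made up for illustration.
const products = new Map([
  [1, "E-Nice EN-66: six-string electric guitar, color blue, 699.99"],
  [2, "E-Nice ENB-44: four-string electric bass, two humbucker pickups, color black, 999.99"],
]);

const namesToProducts = new Map([
  ["E-Nice EN-66", 1],
  ["E-Nice ENB-44", 2],
]);

const typesToProducts = new Map([
  ["E guitar", 1],
  ["E bass", 2],
]);

// Handler for the "find product" function call: name -> ID -> description.
function findProduct(args) {
  const productId = namesToProducts.get(args.productName);
  if (productId === undefined) {
    // Returning "No product found" worked better in the experiment above
    // than "I don't know this product".
    return "No product found";
  }
  return products.get(productId);
}

// Handler for "find products by type": same idea, via typesToProducts.
function findProductsByType(args) {
  const productId = typesToProducts.get(args.productType);
  if (productId === undefined) {
    return "No product found";
  }
  return products.get(productId);
}
```

The handlers only translate the model's argument into a map lookup; whatever string they return is fed back to the language model as the function result.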
30. More products: Okay, so here we are
with more products. I've added one more
guitar and one more bass. So the new guitar is a seven-string guitar, and the new bass is a five-string electric bass, color coral red, and the guitar's color is midnight blue. So the last time we did the types-to-products map, we only had one ID here and not an array. And of course, if we have more than one E guitar and more than one E bass, we need an array here. So I've changed this to an array. The value of this
map is now an array, and it contains, of course, the product IDs of the
products that have the type, in this case E guitar, and in this case E bass. Okay, so then we need to change, of course, the handling
of the function call, find products by type. We're getting the IDs from the types-to-products map with the product type. If we have the IDs, we map them to the descriptions of the products. And since we have the descriptions as an array, I'll do a reduce here to convert it to a string, and in between the product descriptions, I have this newline here. So the reduce will just concatenate all the array elements, all the product descriptions, and then put a newline in between. And that's all
I've changed here. So let's try it out. I'm asking it: what electric guitars can you offer? And it should find two electric guitars, a six-string and a seven-string. We currently offer the E-Nice EN-66, a six-string electric guitar, which is correct. We also have the E-Nice EN-77, a seven-string electric guitar, and the price is okay, and midnight blue. And is this correct, the EN-66 has a blue color? I guess so. Yeah, color blue, and the ENB-55 has coral red. Okay, yeah, everything is correct here. And let's ask for the basses. Do you have an electric bass? Yes, we do have electric bass guitars available. And also, here it found the two E basses we offer here in our shop. It also lists the features, yeah. So everything is okay: the coral red color, the prices, correct, everything correct. Okay. So that was easy. Now we have four products here, still not very many. But, okay, I think
to prove the point here that this is
working, it's enough. And of course, you
can have a database. Maybe you already
have a database, and then you can make database
queries, of course, too. We won't be doing this here.
See you in the next video.
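With multiple products per type, the changed map and handler described in this lesson might look like this sketch. Product data and IDs are illustrative assumptions following the four products mentioned in the video:

```javascript
// Sketch: typesToProducts now maps each type to an ARRAY of product IDs.
const products = new Map([
  [1, "E-Nice EN-66: six-string electric guitar, color blue, 699.99"],
  [2, "E-Nice ENB-44: four-string electric bass, color black, 999.99"],
  [3, "E-Nice ENB-55: five-string electric bass, color coral red"],
  [4, "E-Nice EN-77: seven-string electric guitar, color midnight blue, 1599"],
]);

const typesToProducts = new Map([
  ["E guitar", [1, 4]],
  ["E bass", [2, 3]],
]);

function findProductsByType(args) {
  const ids = typesToProducts.get(args.productType);
  if (ids === undefined) {
    return "No product found";
  }
  // Map each ID to its description, then use reduce to concatenate
  // the descriptions with a newline in between.
  return ids
    .map((id) => products.get(id))
    .reduce((acc, description) => acc + "\n" + description);
}
```

The reduce without an initial value starts with the first description and appends `"\n"` plus each following one, so two matching products come back as two lines.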
31. Multiple parameters in function calls: Okay, so in this video, we will add another parameter
to our function call, in this case, it's
the find products by type function call, and we will add a price
range as a parameter. The first thing I've done is to make a new map price
range to products. So we have our products
sorted by price range here. In the low price range, we have only one with the product one in
the mid price range, we have two and three, and in the high price
range, we have the four. And you can check it here. The high price is 1599. The mid price is
around thousand 300, and the low budget is
not really low budget. We can make it a
little bit lower. Let's make it 600
or 69999. Okay. Of course, we've also added a new parameter to our function. And here are our functions, and here is the fine
products by type function. We already have the product
type parameter here, and I've added the
price range parameter. And important here is the
description, of course: the range in which the price of the product falls should be one of low, mid, and high, and these are the keys to our map here. So this is important, these three keys here. Okay, and so we have here
our function handler, which had to be
changed a little bit, and I've included the filtering
with the price range here. First, we get the price range here from the args, args.priceRange. And then I'll do a filter here. So I'm filtering
out the products that are not in the price range, not in the correct price range, with this price-range-to-products map. I'm getting the products in this price range here. This is an array, and then I'll do includes with the ID. And if the ID is included, it's going to be in the array returned from the filter function, and then we're doing the map here, and this is being saved in products found. And if products found is of length zero, no product was found, so we're setting the text "No product found" here. And if it's greater than zero, then we join it here, no more reduce here: products found, join with a newline, which is much easier than the reduce that I had before. So this is our new function handler with a new argument, price range. Okay, I'm reloading here and asking: do you have a low
budget E guitar, and this should return
our 699.99 E-Nice EN-66. This is the right one. Let's check if there is a low-budget E bass. Do you have a low-budget E bass? And this should yield no results. I apologize, but it seems that we currently do not have a low-budget E bass. And this is the truth, because our E basses start in the mid range, and then we have, let's see, we have only mid-range E basses. Okay. So there's no guitar in the mid price range. Let's check this. Do you have any mid-price E guitars? And it also yields no result. I apologize. This is right. Do you have an expensive E guitar? And this should give us our very expensive guitar here. Yes, the seven-string at 1,599. This is our most
expensive guitar here. So everything is correct here. And we have our second
filter parameter here, the price range. And, of course, you can add
more filter parameters here, like maybe a number of pickups, something like that,
number of strings, of course, and so on. Okay, see you in the next video.
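Putting the pieces of this lesson together, the extended function description and the handler with the price-range filter could look roughly like this. Property names, wording, and data are assumptions; note that this version still assumes a price range is always provided, which the next video has to fix:

```javascript
// Sketch of the price-range filter. Data and names are illustrative.
const products = new Map([
  [1, "E-Nice EN-66: six-string electric guitar, 699.99"],
  [2, "E-Nice ENB-44: four-string electric bass, 999.99"],
  [3, "E-Nice ENB-55: five-string electric bass"],
  [4, "E-Nice EN-77: seven-string electric guitar, 1599"],
]);
const typesToProducts = new Map([
  ["E guitar", [1, 4]],
  ["E bass", [2, 3]],
]);
const priceRangeToProducts = new Map([
  ["low", [1]],
  ["mid", [2, 3]],
  ["high", [4]],
]);

// New parameter in the function description given to the model. The
// description text tells the model which values are allowed.
const findProductsByTypeFunction = {
  name: "find_products_by_type",
  description: "Finds products of a given type.",
  parameters: {
    type: "object",
    properties: {
      productType: {
        type: "string",
        description: "The type of the product. One of: E guitar, E bass",
      },
      priceRange: {
        type: "string",
        description:
          "The range in which the price of the product falls. One of: low, mid, high",
      },
    },
    required: ["productType"],
  },
};

// Handler: filter the type's IDs by price range, then join with newlines.
function findProductsByType(args) {
  const ids = typesToProducts.get(args.productType) ?? [];
  const priceRange = args.priceRange;
  const productsFound = ids
    .filter((id) => priceRangeToProducts.get(priceRange).includes(id))
    .map((id) => products.get(id));
  if (productsFound.length === 0) {
    return "No product found";
  }
  return productsFound.join("\n"); // simpler than the earlier reduce
}
```

As in the video, asking for a low-budget E bass yields "No product found" here, since both basses sit in the mid range.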
32. Fixing price range not defined: In this video, we have to fix a bug that we didn't
notice last time. So this is how we
can provoke the bug. We have to ask, do you have a seven string
guitar, for example? And then the problem here is that we get this error: "Cannot read properties of undefined (reading 'includes')". So let's check the reason for this bug. So it's this includes here, and that means that priceRangeToProducts.get(priceRange) is returning undefined. So we cannot call includes
on undefined, of course. Okay, what's the problem here? The problem is that price range, as we can see here, price range is undefined. Why
is it undefined? Because the user didn't
specify any price range. He just asked for a
seven string guitar with no price range information. So the language model didn't
provide a price range here. So we have to deal with this. And of course, we need to first check if a price range was really submitted from the language model. How can we do that? First, we have to check for price range. If price range is undefined, then this will be true: price range equals undefined, and then it's not filtered out. Or we have priceRangeToProducts.get(priceRange).includes(id), but this get here can also return undefined. So that's why we have to put a question mark here, and then this is undefined, and the includes won't be called. Of course, that one will be completely filtered out, so no products will be found then. Okay, and in the first case, every product that was found in the result array is going to be streamed into map here. Okay, so let's check if this helps. We ask
the same question. Do you have a seven
string guitar? Yes, we have this time, no exception, and
everything is fine. And we can check also on the server price range
is indeed undefined. And so this case here, price range is undefined, and then filter this callback here always returns true here, and all the products
found are going into map and then this string is constructed from
the products found. In this case, there's
only one product, and that is a seven
string guitar, and the language model is telling us the right one: we have the E-Nice EN-77. The price is 1,599, and it comes with... Okay, so everything is fine now. We fixed the bug. See you in the next video.
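The fixed filter callback from this lesson can be sketched like this (data and names are again assumptions): with the `priceRange === undefined` check and the optional chaining `?.`, a missing or unknown price range no longer crashes the handler.

```javascript
// Sketch of the fixed handler: tolerate a missing or unknown priceRange.
const products = new Map([
  [1, "E-Nice EN-66: six-string electric guitar"],
  [4, "E-Nice EN-77: seven-string electric guitar"],
]);
const typesToProducts = new Map([["E guitar", [1, 4]]]);
const priceRangeToProducts = new Map([
  ["low", [1]],
  ["high", [4]],
]);

function findProductsByType(args) {
  const ids = typesToProducts.get(args.productType) ?? [];
  const priceRange = args.priceRange;
  const productsFound = ids
    .filter(
      (id) =>
        // If the model sent no priceRange, skip the price filter entirely.
        priceRange === undefined ||
        // Optional chaining: if get() returns undefined (unknown range),
        // includes is never called and the product is filtered out.
        priceRangeToProducts.get(priceRange)?.includes(id)
    )
    .map((id) => products.get(id));
  if (productsFound.length === 0) {
    return "No product found";
  }
  return productsFound.join("\n");
}
```

So with no price range every product of the type passes the filter, and an unknown range now returns "No product found" instead of throwing.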
33. Add product to shopping cart: Welcome to this video, and here we want to
try to give the LLM the possibility to add a product to the shopping cart. So we don't implement a shopping cart, but we implement, like, a function call that is only doing a console log, and then we're telling the LLM that the product was added to the shopping cart. And that's it. And if you want to really implement a shopping cart, then you have to do this part yourself. Here we try to provide a function to the LLM, so that when the customer asks to put the product into the shopping cart, the LLM will recognize it and then call the function. Okay, so I've
already implemented, added, a new function here, and of course, this function is called add product to cart, and the description is: adds the product with the specified ID to the customer's cart. And as a parameter, I have the product name, and the product name is the name of the product that should be added to the cart, of course. And I also had to extend the system prompt a little bit. So here, this last sentence is new, or the last two sentences are new: if the customer wants to buy a product, please add it to the cart. And I could also add here "via the function call add product to cart" to be more specific about that. And then I had to add: you don't have to ask for login or payment information. Because as I was trying it, it kept asking for login and account information, for payment information, and this stuff. So I've added this sentence, the last one, to avoid that. Okay, so this is it for the function description. And here we have the function handler, the function call handler, add product to cart, and we're getting the product name from the arguments and then just console logging it. And then the content, the response to the language model, will be: the product was added to the cart. And that's it. And let's try it. Okay, so I'm asking
specifically for a model. Do you have the E-Nice EN-77? Let's see if it finds it. Yes, we do have it. Okay? Seems to be the right one. And now I'm asking it to add it to the shopping cart. Can you add this to my shopping cart? And let's see if it really does it. I have added the E-Nice EN-77 to your shopping cart. Is there anything else I can assist you with? So that seems to have worked here. Let's check the console log here: add product to cart, E-Nice EN-77. So that worked. Very nice. So if you program a shopping cart here, you can call the add to shopping cart here, the real one, not the console log here. And you can, of course, save it in a database, and have the user really logged in into an account, to be able to do the checkout and this kind of stuff. But this is kind of
the preparation here, but I also want to show you one problem here that
we haven't solved yet. So if we are asking for a certain model, do you have the, just the, EN 77? So I'm omitting the E-Nice here. Do you have the EN 77? I'm sorry, we do not have the EN 77. So it's not recognizing it. Because what's the problem here? The problem is that in the names-to-products map, for example, we just have these keys here, and if they do not fit 100%, if we just have part of the name, or maybe we have written it a little bit differently, do you have the E-NiceEN-77, without a white space here, without a space, I think it won't find it. I apologize. But if I ask for, do you have the E-Nice EN-77, hopefully it will find it. Yes, it finds it. Okay. But now, do you have the EN 77? Let's ask it again. Yes, and now it finds it. Why? Because in the history of the chat, this is included here, so it knows that the EN 77 is this one here, and that's why it knows it here, but not before. And the problem here is that our search is only an exact search on E-Nice EN-77. So let's see if we can mitigate this somehow in the next video. See you there.
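The add-to-cart function call from this lesson can be sketched like this. Names and wording are assumptions; the handler deliberately only logs, since the real cart logic is left to you:

```javascript
// Sketch of the add-to-cart function description given to the model.
const addProductToCartFunction = {
  name: "add_product_to_cart",
  description: "Adds the product with the specified name to the customer's cart.",
  parameters: {
    type: "object",
    properties: {
      productName: {
        type: "string",
        description: "The name of the product that should be added to the cart.",
      },
    },
    required: ["productName"],
  },
};

// Handler: a placeholder that only logs. A real shop would update a
// persistent cart (session or database) here instead.
function addProductToCart(args) {
  console.log("add product to cart:", args.productName);
  // This string is sent back to the language model as the function result.
  return "The product was added to the cart.";
}
```

The system-prompt additions from the lesson ("if the customer wants to buy a product, please add it to the cart"; "you don't have to ask for login or payment information") steer the model toward actually calling this function.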
34. Fuzzy product search: Okay, so the last time
we had the problem that if the customer asks
for a product name, which is not exactly the name
we have in our database, as we have in our map,
it will not find it, and then it will assume that this product
is not available. So if we ask, for example, for the EN 77, let's see, EN 77, for example, then it won't find it, because it's not in our map. We have this map names-to-products. Here we have the exact names. Our function handler is searching for the exact key, and in this case it is found, it is returning the product ID. But if, for example, the customer is only requesting information about the product EN 77, then this won't be found here in this map. So what do we do about it? We are changing
the description of the find product function
here and I've extended it a little bit with more
information what should happen if the product is not
found under the exact name. So here we have the version from before: fetches more information about a specific product or model. But now there is more information about what happens if the product is not found with the exact name. If the product is not found
under the exact name, this function will return
all product names available. So we will return all the keys we have in this map, which are all the correct product names. The assistant should then pick the name that is closest to the one that the customer asked for, and ask the customer if this is the correct product. So it should say: okay, so we have this one, would you like me to give you more information, something like that? If the customer then asks for more information about the product, the assistant should call this function again with the correct product name. So the second time this function is called, it should choose the correct, the exact, product name here. Okay. So I've also changed the description here of the parameter, the product name: the model name to find. And if it doesn't match exactly, the function will return all product names, so the LLM can pick a similar one, or maybe I should say the assistant can pick a similar one. Maybe that is a
little bit better. And then let's go to our
function handler here. So it's this one in case the product ID is found in our map, then
everything is fine, and we return the product
description here, and the else case is
different from before. Before, the content was just "product not found", something like this. And now it's all the keys in the names-to-products map, joined with a newline. So in each line there will
be the exact product name. And then the LLM should
pick one that is closest to the one the customer asked for and then call
this function again. And then this function will find the correct product information
here and return it here. So this is the theory, so let's try it out. Do you have the EN 77? So without a white space and without the manufacturer name, let's see, without a question mark. Yes, we do have the E-Nice EN-77, and it really says here the correct name, E-Nice EN-77. That's the string that we have as a key in our map. So that should be okay. Available for purchase, would you like to add it to your cart? No, please first give me more information
about this guitar. Let's see. And this time it should really find
the product information. The E-Nice EN-77 is a seven-string electric guitar. Correct. It features a midnight blue color, and has one humbucker and two single-coil pickups. This is correct. This is the information that we have in our product map. The price of the guitar is also correct. Is there anything else you would like to know about this guitar? No, please add it to
my shopping cart. Let's see if this
is also working. I have added the EN-77 guitar to your shopping cart. Okay, so this is it. It seems to work. We can check the log here also, but I think this has worked as planned. It looks like: please first give me more information, then the function call find product. And this time, it gets the product information here. Okay, and: please add it to my shopping cart. Function call add product to cart. The product was added to the cart. Okay, wonderful. This is
what we were looking for. We delegated the search
to the language model. But of course, it
could be a problem if this list of keys here is maybe a little bit
longer than here. We only have four products, so it's only four
product names here. And if you have
thousands of products, then this could be a problem
because of the length, the token restrictions here. But if you have a normal shop like maybe this guitar shop, maybe you have 100 products or something, and that should be okay. Maybe if the product names are a little bit longer than these here, that could be a problem. But normally you have a token limit of, let's say, with GPT-3.5 we have a token limit of 4,096 tokens, or you can also use the 16K model, which is a little bit more expensive, but this has 16,384 tokens as a limit. So this should be enough even for a lot of product names here. Okay, so there's also a
different way to deal with this. We could use embeddings,
but we won't do this. Here, at this point,
it's okay for us. We don't have that
many products, and listing all the product
names here is practical, and it worked, and the
LLM can pick the one that's closest and then ask the customer if
this is the right one, and then the customer can
ask for more information. Okay, this is it for this
video. See you in the next.
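The fuzzy-search fallback described in this lesson can be sketched as follows (data and names are assumptions): on an exact miss, the handler returns every valid product name, one per line, so the assistant can pick the closest match and call the function again.

```javascript
// Sketch of the find-product handler with the fallback for inexact names.
const products = new Map([
  [1, "E-Nice EN-66: six-string electric guitar"],
  [4, "E-Nice EN-77: seven-string electric guitar, midnight blue"],
]);
const namesToProducts = new Map([
  ["E-Nice EN-66", 1],
  ["E-Nice EN-77", 4],
]);

function findProduct(args) {
  const productId = namesToProducts.get(args.productName);
  if (productId !== undefined) {
    // Exact match: return the product description as before.
    return products.get(productId);
  }
  // No exact match: return ALL valid product names, one per line.
  // The function description instructs the assistant to pick the
  // closest one and, after the customer confirms, call this
  // function again with the exact name.
  return [...namesToProducts.keys()].join("\n");
}
```

As noted above, with thousands of products this key list could hit the model's token limit, where embeddings would be the better approach; for a handful of products it is fine.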
35. Summary: In wrapping up this course, let's have a look at
our achievements. We've built three applications that harness the power
of the Open AI API, and the LLM for natural language and AI
driven conversations. So the first application was a ChatGPT clone, ChatGPT in a very simplified form. The secret was to manage the chat history and feed the chat history to the LLM for responses. Then next, we've built a text-based adventure game, or the start of a text-based adventure game, with, I think, two quests, but it has three NPCs that you can talk to in a very natural way. And we've made tailored system prompts for
each character, and we use function calls for the tracking of the quests
and the quest completion. The last app was simulating a sales assistant for
an online guitar shop. Users can chat with
the assistant, inquiring about products
and even directing it to fill the shopping
cart with the products. And thanks to the Open AI LLM, this feels like a real
chat with a salesperson. Through these apps,
we've learned to use the OpenAI API in JavaScript, and we used Astro JS for the back end, for the server, and React for the front end. We've learned to use the
OpenAI's chat completion API. We've learned how to craft
system prompts to give the LLM a context for understanding the situation and
background information, and we've leveraged function calls, for example, in our game for in-game actions like the quest tracking, and also for data retrieval for our sales assistant. Our journey merely scratched the surface of what is possible. The API offers much more, like embeddings for deep analysis, and also image creation with DALL-E, and maybe you already know, this is like Midjourney, but from OpenAI. And there's also a speech-to-text API called Whisper. So what we've learned is, in a way, just the start of the OpenAI API's potential. So let's give ChatGPT the final words on this course. As a wrap-up: keep in mind that our journey with the OpenAI
API is just beginning. The skills you've acquired open doors to a world of AI-driven creativity. Whether it's crafting
interactions, building immersive adventures, or refining user experiences, your expertise has the
power to shape the future. Stay curious, stay inspired, and keep pushing boundaries. Your journey is
full of potential. Thank you for being part
of this experience. Best of luck in all
your future endeavors.