Transcripts
1. Class Introduction: Hi there. Thank you
for showing interest in this class on CPT and AI. Here I want to give
you a short outlook and the structure of
the class itself. We're going to start with
some theory and talk about what CIP is
and how it works. If this is your first time
coming across this topic, I suggest that you skip a couple of lectures and directly, go to the one where we enter the tool and
show the interface. Then it might be wiser to come back and look at
the first lectures. Otherwise you might
find them confusing. After that, we're going to talk about the science of
prompt engineering, which basically explains how you should engage and interact
with these tools. Then we're going
to look into abuse cases and role playing. Towards the end, we're going to discuss some recommendations and disclaimers and we
will finish off by talking about the future
and what is yet to come. Without further ado, I hope to see you in
the next lecture.
2. Understanding the basics of ChatGPT and its underlying architecture: Hi, everyone, and welcome to this lecture on
artificial intelligence. In the first lecture,
we're going to talk about CAT EPT basics and its
underlying architecture. I'm sure that so
far you might have gotten confused with
the terms like GPT, LLM, RNN, NLP, and so on. We'll try to impact
them in this lecture. Let's start with the basics. CG EPT is an AI interlocutor. It generates human like responses in a chat
based setting. It has been trained on large amount of text data to understand and perform
these conversations. CGPT is essentially synthesizing all the information
into a single answer. The reason why people see it as a game changer is
because instead of searching for
something and getting a list of links or
contents to choose from, you get a conversational
output as if you asked your question to a
person, not a machine. One of the very first things
that we need to address and explain that PT is nothing more but a
statistical predictor, and I found a great explanation online to illustrate
what is meant by that. P is always fundamentally
trying to produce a reasonable continuation of
whatever text it got so far, where by reasonable, we
mean what one might expect someone to write after
seeing what people have written so far. Let's say that
there's this text of the best thing about
AI is its ability to. Now to determine the next word, imagine scanning billions of
pages of human written text and finding all instances of
this specific text so far, and then seeing what word comes next, what
fraction of the time. This is what GPT
effectively does, except that it doesn't
look at the literal text, it looks for things that in a certain sense match meaning. But the end result
is that it produces a rank list of words that might follow with probabilities. Now you might wonder k, which of these words should
it pick and you might think that it
makes most sense to take the highest rank word. But then if it always did so, the text would always
be quite monotonous, and if you use the same input, you would always get
the same output. But let's say that if
sometimes at random, it would pick the
lower ranked word, so here everything
besides a learn, then the text would be and the output would be more
creative and more interesting. This randomness is so called
temperature parameter that determines how often lower
ranked words will be used. Let's now zoom out
a bit and talk about LLMs or large
language models. LLMs are neural
network models that are trained on vast
amounts of text data. T billion and
trillion parameters to really learn pattern, structures, and relationship
within a language. They're basically
trained to predict the next word in a sentence
or fill in a missing word. If that already
sounds familiar, yes, it's because C GPT
is at its core, a large language model. GPT itself means generative
pre trained transformer. Let's explain each of
those three words. In the contents of
machine learning, which is this overall field, generative refers to the ability of a model to generate new data. In the case of GPT, it generates coherent and
contextually relevant text based on the given
input or prompt. Pre trained or pre training is a technique in a machine
learning where a model is initially trained on
a large dataset or amount of data to learn general patterns and
features from it. And lastly, transformer
refers to a specific type of deep learning
architecture that was introduced in a paper in 2017. This revolutionized the field of machine learning and
natural language processing because it introduced the self attention
mechanism that allowed the model to
capture relationship between words in a
sentence without the need for recurrent
connections or RNNs. I'll shortly explain that soon. But just remember
that transformers can do things parallelly. Now, if you think about how were things done before
the transformer. This is where the RNNs come in. RNN sends for recur
and neural networks, and they essentially did
the sequential processing. That obviously suffered
from certain limitations because it couldn't capture
the long range dependencies, was basically long sentences
or when there was a text where a lot of distant
words were connected. To help you understand
the difference between transformer and RNN, I prepared an analogy
to illustrate this. Imagine that you're
reading a book and really trying to
understand the full story. If you're RNN, you would read it page by page sequentially
moving from one to the next. While you're reading, you're maintaining the mental
state that carries information from
the previous pages and understands the
further context. This is how you would agree
probably humans do read. But on the other hand,
transformers can be compared to, let's say this imaginary
person who can instantly see and comprehend the entire book in one glance. They have the ability
to look at every page simultaneously and
understand the connections, relationships, and
patterns within the story. They do not rely on sequential processing
just as RNNs do, but they utilize this attention mechanism to give importance to different parts of the book while they're simultaneously
analyzing it. They're doing this parallelly. Finally, let's zoom
even further out and just shortly mention
NLP and machine learning. NLP or natural
language processing refers to a field of study and technology that deals
with interaction between computers
and human language. It involves developing different algorithms
that can enable computers or machines to understand and interpret
human language. Without going too
much into details, you can just see this as a very broad category and
large language models that we previously mentioned can
be seen as a tool or approach within the
broad NLP field. That would be all. Thank you for watching and hope to see
you in the next lecture.
3. Chinese Room Argument and AI alignment: Everyone, and thank you for continuing to watch this class. In this lecture,
we're going to talk about the concept
of intelligence because lots of people wonder to which extent these
current models like apt, and the future ones are
and will be intelligent. The Chinese room argument is a philosophical
tought experiment that questions the idea
that computers can truly understand language
or possess intelligence. Here is actually
an example of how the actual artificial
intelligence currently works, which might make you question to which extent the word
intelligent here is correct. Imagine you're in a room with a book of instructions
that tells you how to match up Chinese symbols with other Chinese symbols. You don't speak Chinese at all. You're just given a stack of
Chinese texts to work with, and the people outside of the room slide notes with
Chinese writing under the door. You followed instructions from the book to create a
response and give them back without actually
understanding what the Chinese text means. From the outside
to those people, it might appear
that you understand Chinese because you were able to produce an accurate response to their input to their note. But in reality,
you just followed a set of rules that have
been programmed into you. You do not actually understand the language you're
working with. You're naturally intelligent in the way that a human being is. This argument is that even the most advanced
computer programs may naturally understand
the meaning of the words they
process and produce because they are not possessing the true intelligence
as humans do. This challenges the idea that machines can achieve
human level intelligence and raises really
important questions about the consciousness
and so on. So it's very important here
to understand even though it might have been a bit too
simple of an example, that AI per se is not intelligent as we
perceive intelligence. It's important to understand that That's why it also
has a potential to harm, and this is where AI
alignment comes into play. AI alignment refers
to the problem of ensuring that artificial
intelligence systems are developed and programmed
in a way that aligns their behavior with the
values and goals of humans. This is important
problem to solve because AI systems become more
advanced and capable. Their actions may have increasingly
significant impacts on the human society and
the goals they're given. The goal of this AI
alignment concept is to make sure that
the development of AI system are beneficial
to humanity while still avoiding unintended consequences
of harmful outcomes. This can involve designing AI systems that are transparent, interpretable, and can explain
their decisions to humans. The best way to think of it is that it's important that
when you give a task to AI, and let's say that you give
them a task or give it a task like solve the
world hunger problem, that it doesn't
just decide to go for killing all the people on the world and
solving the problem, but rather that it follows the AI alignment principle and understands other
dependencies there are. These are two
philosophical concepts and I just wanted
to bring in into this overall discussion
before we dive more specifically
into Chachi PT. Thank you for watching
and see in the next.
4. Biggest players and overview: All right. Welcome back. And thank you for
joining another lecture. Let's now step away a bit from the philosophical and really
broad concepts and come back to maybe even the
topic of this class. Following what we're going to look are the biggest players currently in the AI field. Here's an overview of some of the Welcome back and thank you for following
the class so far. Let's now step back a bit from the broader overview and the big picture and some
philosophical concepts as well. Let's take a look at an overview of some of the biggest players in large language model
field and conversational AI. So first of all, Chat GPT is a
conversational AI model developed by Open AI. As we explained
earlier, it is based on the GPT architecture and designed to generate these
responses in a chat setting. It has been trained on the date until
sometime in the past, and us anything after that might not be
known to the model. At the moment, C GPT is run
by GPT four while initially, it was run by GPT three. These are the further
models that will be coming available and
available in the future, but the overall concept and
notions that we're discussing here will more or less stay the same because this is what
was developed at the core. The next one that we can
discuss is the PI chat, and I'm myself even
not sure whether to keep this as a
separate category or not. But the BIG chat is the conversational AI
developed by Microsoft. It is based on the GPT
architecture because there was a partnership
between Open AI and Microsoft. Currently, many see
it just as a GPT, but connected to the Internet. Because with Bin Chat, you can search for real
time information, whereas, at least while I was preparing
this lecture particularly, GPT still didn't have in its free version access
to the Internet. Lastly, of the three
currently biggest players, we should mention Bard is a conversational AI model
developed by Google. It came last to this party and it's seen as a
direct competitor to hechPT because Bing or Microsoft are in a partnership
with CechPT and Open AI, but Bard is on the opposite side and it's connected
to the Internet, so it offers real
time information. And all three chat pots here aim to provide human like
responses to questions. At the moment, they only
accept text as input, and they also generate
text as output. It's already announced soon these tools will generate
other media as well, so images, videos, audio, and that will further
enhance the user experience. I just want to add that
most other tools, plug ins, and extensions you encounter are in some form,
as of right now, using APIs for these models, meaning they're
in the background running on these platforms. In the context of
conversational AI models, you might hear about Lama, or Lamda or Sanford
Alpaca, and so on. Those are large language
models that are developed by various
companies and organizations, some of which are either
providing infrastructure or direct competition
to existing Chachi PT, Bing chat, and Bart. So that would be all,
just a short overview for the current lecture, and I hope to see
you in the next one.
5. How to navigate and use ChatGPT for beginners: Everyone. Now we're finally entering the Cat GPT tool and I'm going to show you
the interface of it. I don't plan to
deep dive into it. This will be a short tutorial
and a very basic one. If you're already use it, I trust that you will know 90% of the things that
I'm going to show now. Bear in mind that
interface changes a lot. Over time, and I don't know how it might look next
time you open it, but usually the functions should stay similar at the core also if we talk about
the other tools as well. First things that
you should see here is how to access the CPT, you should go to
chat open a.com. You need to make an
account with open AI initially and then be
able to access the chat. Once you come to it, you can click always new chat
to generate this window, where it gives you
some examples, capabilities, and limitations. And if you want to fur depending whether you're there for the first time or not, you can see the history of
your previous chats before. Interestingly, let's start with one example now of prepared, act as a physicist explained theory of relativiity
in simple terms. After a plays enter, I will get the response
generated here, and you will see here that that will be counted as a new chat. It will soon rename
itself exactly. Let's say that I want to
change something about it. If I want to change this name, I can do it here and
press and confirm. If I want to delete it,
I can do that as well, and if I want to share
this chat with someone, then I can generate a link. They will not be able to change anything about your prompt
that you already used, but they will be able to access this chat if you share it
with someone externally. What you can do here is
addit your initial prompt, Let's say here instead of explain the theory of
relativity in cipral terms. I see that the
answer is too long. I can just add and
make it short. Then I can save and submit. What will happen now is
that it will regenerate. I can stop generating if I notice that something is not
right for whatever reason, and then I can just
regenerate it again. Here I can always see
just a with my input. Same with the output, I can
see the previous versions. This is though up to the moment where I stopped the generation. But here you can see
that this version is far shorter and it usually asks
you for some feedback. From the settings point of view, once you come here,
go to settings. There's very few
features right now. In the general ones, you
can change your team. My current phone is dark. By default, it's usually light. You can clear all the chats, if you for whatever reason
want to delete them. I would suggest that if you
don't see the chat history, you definitely
need to enable it. You can manage the
share links here. So all the ones that you create, if you want for whatever reason to export all of your data, it will take some time,
but you can generate that here and then
obviously delete account. If you're upgrading
to the CPT plus, that's the feature that's
the option to go here. You can hide this
side bar as well. And, I think if you want to let's say that you're
happy with this output, you just want to
use it somewhere, you don't want to share it
with someone as a link. You can then just copy this by clicking
here or by marking, highlighting the whole
text and just to show you that It automatically works. Yeah, that would be all
just a short tutorial. I hope you found it useful
and see in the next lecture.
6. The science of Prompt engineering: Hello, everyone, and
welcome to another lecture. Here we're going to discuss
prompt engineering. Prompt engineering in
Chat GPT refers to the process of designing
and formulating effective prompts or inputs to elicit the responses
from the language model. It involves providing
specific instructions or context so that the model can be guided towards
write response. And here are a couple of reasons why propt engineering
is crucial. First of all,
control over output by carefully crafting
the prompts, users can have more control
over the generated responses. Well designed prompts can
help ensure that the model stays on topic and provides
accurate information. The next one bias mitigation. These models like CechPT
can sometimes exhibit bias, and prompt engineering
can be used to address this issue by
explicitly instructing model to avoid it or just express multiple views
so that there can be a more objective
comprehension of the topic. Output consistency. Consistency in responses
is important for user experience by using
consistent prompts, developers and people
can encourage the model to generate responses that align with the desired tone,
style, and personality. Next, clarifying user intent. Clear and specific prompts can help the model
better understand user queries and requests and adapting to
different domains. CiPT obviously has so
much information and it is such a
versatile model that can be used in multiple domains. But prompt engineering allows customization for
specific use cases by tailoring it to act as a certain role in
certain domain or context. Now, let's use some examples because this was quite
theoretical so far. Suppose you want to
ask CechPT about climate change and specifically how it affects
developing countries. A bad prompt would be tell me about climate change
because this prompt is too general and does not provide any specific guide on the aspect of climate change
you're interested in. The response from Cech PT
might be a general overview of climate change without really addressing the impact on
developing countries, which is something
you want to assess. And a good prompt. Now on the other
hand, a good prompt of the same topic
would be what are some specific ways in
which climate change disproportionately affects
developing countries. Please provide examples. Here, the prompt is much
more focused and provides clear instructions to the model. It asks for specific
information about the disappropriate impacts of climate change on
developing countries. Obviously, in some cases, you might just be looking
or in some instances, you might just be looking for a simple and
straightforward answer, but bear in mind that even just a shallow
understanding of something, might lead to
different conclusions. In such a case,
prompt formulation, when you're just
looking for something shallow doesn't play a big role, but you should still
be aware of it. When you want to follow
up on the topic or dive deeper with more context
and better understanding, then obviously, you need to prioritize the prompt
engineering topic. Now, above all, it's
pivotal to understand that the prompt engineering
is an iterative process. It involves experimenting and refining and
reiterating and doing things multiple times based on the output of the model to
achieve the optimal result. I think as you gain
experience with this, you will be able to identify
patterns in the model, and eventually
prompt engineering will become multimodal
and incorporate multiple inputs when it comes to combining text or
images or other media. But You know,
whatever the case by understanding how prompt
engineering works with text, initially, you will
have an easier entry with other media types. The science of prompt
engineering also gave birth to a job career
of prompt engineers. The people who choose to step
into this career right now are obviously in shortage
and very on demand, let's say, bear in mind that obviously such a career might be a temporary one as these tools incorporate new features
and capabilities. But I personally believe that
prompt engineering isn't, and in future won't be a career. Per se, but rather a skill. If you think of typing or
Excel or bullean search, it's a piece of your
overall skill set, not an entire career itself. And this is why I
see it crucial to get yourself informed
on prompt engineering. Just like in communication
with humans, you can get you can't get everything clear on
the first go and that the model itself cannot understand full intentions
just from one prompt. Articulation matters, and users will have specific
requirements or desired output that
they want to put into the model to
get the outcome. This is why prompt
engineering covers the gap between what the model produces
on the very first try, the second try, and
all the later tries. Thank you very much for watching this lecture and see
you in the next one.
7. How to Phrase a Prompt: All right. In the last lecture, we took a look at what
prompt engineering is. Let's now look at how
to phrase prompt. There are three steps
that I want to outline. The step one is considering
the context of your prompt. It's important to
set a specific feel or topic for the
model to focus on, and that will help it
understand the purpose of the conversation and provide
more relevant answers. You can start by telling
the chat who it is. For example, as I wrote here, you're a recruiter
or an HR manager. You can choose
either of the two. The step two is referring to giving the model a task to complete
and ask questions. Give a clear task to execute. For example, if the prompt
is about health and fitness, the task could be give the best advice when it comes to improving someone's health. Here you can ask specific
questions within the prompt. This will give the model
a better understanding of what you as a user
are looking for. The step three refers to
considering the output. After the model
provides an answer, you can take a look
at the output. If it's not what you're
expecting or lacks details or doesn't go
in the right direction, this is where you can
refine the prompt. There's many ways that you
can make it funnier, shorter, change the format, the structure of the output, and so on. We will come back to the process of optimizing the prompt, which is the last step three
that we discussed here. But before that, here's one graphic showing basically how you can phrase the prompt. First, you can start
with some context. In this case, it wasn't
assigning a role, but rather saying
that everything before this should be ignored, so basically
restarting the model. Then it started with giving
context of who the person is. So you're an experienced
content writer with high levels of expertise
and authority. There was a clear task, so
your job is to write content, instructions, asking if
the model understood, and then after the
output was provided, there was a refinement with
rewriting it more naturally or expressive language and
including examples, and so on. Here's another case where you can see how that look like
in a bit shorter way. Here, this was a
instruction to write a 50 word copy for a product
called creator growth. There was basically, this front was approached
from the different point, from the marketing view of
providing a call to action, providing the pain
points, et cetera. This is something that
feeds the model to be more specific and does give you
a more precise output. We discussed the refining the prompt or
optimizing the prompt. This, in most cases, means tailoring the further
responses to your needs. Nevertheless, in some instances, it can improve the accuracy or relevancy of
the total answer. Now I will show you
the list of some of the most useful
optimizing prompts that you can use for
different purposes, and these are the ones
that I commonly use. Here in the A column, I just gave them a short title. Here I will walk
you through just shortly what the
prompts are about. The first one I use sometimes
as a feedback partner. I ask the tool to be my feedback partner
and I provide the ideas where I would
like to receive feedback. I ask it for a
constructive response that includes the
following point. Listing the aspects
of idea that it thinks are good and
have good potential, identifying things that can be improved or
further developed, explaining me the
reason behind it. Don't want it just
to call it out, but rather to
constructively explain why it considers something
to not be so good. Then usually ask it to
formulate the response, so it's clearly distinguishable
between positive aspects, and so to say suggestive or aspects or areas of improvement. Then I basically based on the whole idea or the
context, whatever it is. Similarly can be done with
this shorter, prompt, criticize whatever
input you give it and convince you
why they are bad. Whether let's say criticize these three ideas and
convince me why they are bad. This is a bit more going on just the areas of improvement side or let's
say a negative side where you just ask the model to
criticize pros and cons is a bit more balanced way
where you can ask it to provide a list
of pros and cons. With whatever you provided, let's say you provided
a business idea or a new process that you want
to implement in your company, this is something that
you can ask it about. Once you let's say had this feedback exchange
or refined your idea, you can ask for an action list. From the blurry context or let's say the
things you defined, how that would be broken down
into the actionable list. Summarizing is one of the most useful optimizing
prompts I use because it's so powerful that you can feed the long text there and
ask it to provide a summary. Next to it, you can also ask
it to simplify the text. If it's something more scientific or something
that is outside of your common areas where
you interact, et cetera. This is something where
you can ask a tool to simplify it and
explain it to you as if you were a
13-year-old or a 60-year-old or a 12-year-old, whatever is most fitting. Confidence level is where you just want to
see how the tool is acting or where it's using or pulling
the information from, you can ask it to qualify
the confidence level 1-10. If you want to not the tool
to work in certain direction, you can feed the existing input into the output that you
want to be provided. Let's say that you want to get a great I don't know headlines or post
description for your link. A post that you plan to publish. You can already feed
the previous hooks or headlines that you had and ask it to provide
ten more of a similar kind. When it comes to the output, you can format it as a table. Again, depending on what
you need, you can just say, I want you to act
as a text bas Excel or just whatever
output it's providing, you say, create this in
a markdown table format. Usually, you can copy this table directly into Excel
or word and so on. Conversation. Sometimes
you can when you want, this still doesn't
work perfectly, but the tool can to some extent provide can switch into the
conversation mode, meaning that you can
ask or you can tell it to ask questions back when it's unsure about something
rather than guessing. This is not something that
tool does by default, even though it's called GPT. Restart means basically
starting from zero, if you had long conversations
about something, you can always open a
new chat, let's say, but you can also
continue in the same one by indicating the model that
it should restart itself. Transparency You can ask it to walk you through the
reasoning step by step. This is also something
that research has shown that improves
the model performance, especially when the topics
are more complicated, that you can say let's
work this out in a step by step way to be sure we
have the right answer. What turned out there to
be the case is that model then just in a nutshell to explain it focuses on fewer tokens and then can provide a more
accurate responses. Another problem that
worked a bit less successfully is that saying
answer the question, then critique the answer, and then based on the
critique reconsider the other answer options and
give a single final answer. This, as you can assume again, relates to that feedback point that we initially started with. If it sometimes
happened that the model refuses to execute a task. Let's say that you ask it to provide you with a contract
sample that you can use, and it says as an
AI language model, I'm not able to
do this and that. The trick sometime
that works is that you ask it to write a draft of
provide an example instead. Formatting the output.
This is very useful. These two points
refer to what we mention of making the text
funnier or formal and so on. You can really
change the format, change the length style, ending lines, starting
lines, different phrases. You can even ask
it to write it in a non AI way, which again, is challenging to
some extent for it because it's still an AI, adding mgs, emphasizing
certain parts. This is really a beauty of
the tool when it comes to copywriting and generally
generating text, different tones that
it can provide. Format styles and so on. Lastly, something that should eventually come into effect. But what you can ask it to do is to provide sources
for the answers that it provided or even to look for sources within
a certain timeline. Again, this is
still not something that works perfectly yet, but something that so
hopefully eventually be integrated into the tool just as it is in the
Bin chat, for example. And that would be all. This is just a list of my current optimizing
problems that I have, and I as I learn also
new ones and test them, and obviously the
tool changes itself, I'll try to keep
this up to date. Thank you for watching and
see you in the next lecture.
8. Business (work) use cases for ChatGPT: Everyone. Welcome back
to yet another lecture. Now let's finally start looking into some
of the use cases. In this lecture, we're
going to strictly focus on the business and work
related use cases. This is nothing field specific. You will see a wide array of
different topics and ideas. The point of this is to give you a very broad overview of how you can engage
with the model. Be ready to jump from one
thought to another Don't try to look for any pattern
within these ideas. They're around eight to ten, I think, different ut cases that we're going to go through. Let's start with the first one. Yes, something that I
have to say is that you will see on the left obviously
what the use case is. On the right, you don't see the chat itself because
for the sake of time, I didn't want to
type these prompts and then wait for
the responses and can overwhelm the tool or it would unnecessarily
make the video long. This is why I already performed these conversations and tested these prompts for you and what you see here are
just shared chats. Obviously if I wanted to continue to interact with
them, I could easily do that. But this is why you see them in a slightly different
version and we explained when we were going through the interface
of the tool, what shared chats are. Right. The first one
is very basic to give the tool to solve
a certain problem as a specific
profile or a person. In this case on the right,
you can see that I ask it to act as a CEO
for some company. I could have been more
specific here, but I didn't. I gave a bit of context for what the CEO or the tool in
this case is responsible for. Then I asked to address a potential crisis
situation where a product recall is necessary. The question was, how would
you handle the situations and what step to take to mitigate any negative
impact on the company. This is what the
tools as the CEO, I would perform these steps. This is something that can be a great starting point when you face certain issues
for the first time. You don't have to
be a CEO for this, you can be at any other role if you're experiencing
something, but obviously you don't even have to use this
in the work case. You can do very
similar if you're just training for your
sport or changing your nutritional diet or any other role that
you want to play here. The next one is generate
a business plan. Here, the prompt was generate digital start up ideas based
on the wish of the people. For example, when I
say certain need, I wish there's a big large
more in my small town. The tool should generate
a business plan. Here since it was a
digital start up idea, what the tool actually did, maybe not the
wisest thing to do, but it was very narrowed
in this case is that it generated an idea or business
plan for a digital mole. And what it did is it was asked to do this
in a markdown table, as we discussed as one
of the output points, and that's exactly what it did. I provided different sections from the selling proposition, to the sales and
marketing channels, to the cost structures, to the key activities, overall estimated cost,
potential, and so on. As you can see, it
kept the output quite limited because business plans
are sometimes super long. But again, another starting point in which technically
you could deep dive for each of these sections
and ask it to share more. The next one is
evaluate pros and cons of a different decision. Let's say that here
I use an example. I'm trying to decide
if I should implement an employee resource planning
tool in our company. I didn't have
experiences before it, so give me a list of pros
and cons that will help me decide why I should or
shouldn't make this decision. This is what it exactly did. I gave pros and it
cons of the ERP. Now, if I wanted to Now, even ask our financial situation is quite difficult
at the moment, would you advise us to proceed with the tool implementation. Further, I could have asked which tools it
recommends and so on. It's yet again another example
of how this can be done. Streamlining and
optimizing the process by pasting the written
process into it. This is one of my
favorite tools to organ the use cases when it comes to the
operations field. I ask to act as a process
optimization expert. I basically choose
a certain process that I paste into it and I
ask it to read through it. And tell me where certain
things can be cut down. It can basically
make the process cleaner and eliminate the waste. I first obviously have to give a context of what the
process is about. Here I said this is a
process of publishing job post at the start up of
45 people. Here's the task. I want you to read it,
understand each step and then help me optimize and
streamline the process. Once I said it's fine with it, I even said, don't
give me summaries, don't give me
rephrase sentences, just give me suggestions
to streamline. And I past the
whole process here, so all five steps. What it did is again, put those five steps,
but it says here, instead of gathering all
the necessary information, consider creating a
standardized template. It told me here to streamline
the drafting process, develop a library, of
pre approved job posts. Here it advised me to use an application tracking
system, and so on. The next use case is about
enhancing Excel capabilities. Here, what I just did is ask it to provide different Yeah, to to basically clean
the data for me, I gave an input of
some weird names and last names with
different symbols and signs and to put it in the markdown
table that I could just copy and paste to the
Excel or Google sheets, and that's exactly what it did. I even gave here the context of what is okay, what is not, and it gave me the note
that for one case, that the last name was
followed by the period, and then that one in this
case, I wasn't included. Talking about Excel. There
are some extensions like sheet plus at AI
and Ag lx.com at the moment that help you
further enhance ECL or Go Sheets capabilities by asking to generate certain formula or to explain you the
formulas that you passed in or to generally where you could use the
human like language, so you can just
text what you need, and it would be able to execute and provide you
with that formula. We will discuss more
about this in lectures. I just wanted to shortly
bring it up here. Ask you to write an e mail,
an announcement celebration. Here I use an example of a certain sea level executive that wants to communicate
or let's say CEO, that the company is
moving to a new office. And then I said write an e mail to the whole company
and announcing us, changing the office
location next year. I gave a short of a
contest that we're going from a co working space
to a private office, and I told you that it
shouldn't be longer than 120 words and which
tone to use, et cetera. And this is basically the e
mail that I got prepared. Bear in mind again,
that there are some extensions
like CG PT writer, where you can
incorporate this within your G mail or other
provider and then give it similar prompt to
generate input for you. The next one is about
creating a list of objects or feedback a might have
about a new product service. Talking about, let's
say business plan now that we generated
something and created it, and now we want to anticipate what the market
might think of it. This was a brilliant
prom to say, create a list of ten objections a customer might have about a new healthy soda made with plant fiber and
prebiotics called Co. This is basically the ten
different And the let's say objections or feedback
that came up that you could hypothetically
think about how to address. And Yeah. Actually, this was
more on the level of that was in a direct quote, which would be an
objection, actually. I ask it in the second prong to provide them as a quote
objection from the customer. Somebody could said,
I'm hesitant to try Ojo because I'm not
familiar with the brand. I then to stick with
the trusted sod options that I know won't disappoint. How would you get back to this objection or what
would you respond? These are things that you can a p. Generate headline ideas based on the provided keywords. I think this is
quite interesting, as we mentioned already
that you can feed the model with certain
text already to nudge it into the direction you
wanted to think and then get a desired or
more precise output. This is what I did. I act as a headline generator and provide 15 headline ideas based on
the following keywords. The next one is about reviewing and
polishing certain content, whether that's
document articles, messages, I ask it
to act as an expert. Content writer told her it
is time for a review that I will paste pieces of the article and want
to have the revision. I said that the primary focus
is clarity and readability, that it's okay to make
slight adjustments to the ph to enhance the positive
tone and flow of the text. Here I provided the first input and here's the answer that
I got, then I told it, please rewrite the
section by changing a tone to humerus one
and making it shorter. This is again playing with
the formatting of the text. Summarizing documents,
videos, and meetings. Again, as I mentioned, one of my favorite cases to really
summarize long points. Here what I says is
summarize the text below in no more than 150 words and create a list of
bullet points of the most important learnings
along with some explanation. You can see all of this is
the text that I pasted. And this is what it does. It provided me the
bullet points that I asked for for the most
important takeaways. One of the bullet points I was
referring to the metaphors used to describe
artificial intelligence that can be misleading. I ask you can you surly
explain the metaphor Mckenzie that was used in one of
these examples here. Running experiments. I just wanted to say on this
note that there are already many extensions for
summarization like perplexity, that AI chat, PDF, Merlin, and so on. Coming to the research summary in this case or basically playing with a user
research topic. I ask it to be a user
research expert. I will past the
long list of notes. I just let's say finish the interview and want to
summarize those notes. I asked a tool to create
a summary and provide the most important takeaways
within a certain length. I provided that input, and this is the
summary that I got. Then what I asked it is
to generate a list of five hypothesis that this
interview confirmed. I think this is
really remarkable that probably what you
could also do is give it to the hypothesis that you have and then ask
to which extent it thinks that those hypothesis
were validated or not. Obviously here I wrote
question generators, so don't underestimate its
power to also generate the questions prior to the user research interview
that you might be conducting. Another use case on
the technical side, let's say, you could ask the tool to build a
Chrome extension for you. This is something I
played a bit with. I started saying, act
like a programmer. Can you help me build
a chrome extension, then I can use to
automatically find duplicates in
Google spreadsheet? Obviously, I could have just pasted the whole
content I had in the Google Spreadsheet
and ask it to find duplicates on my behalf. There is already
a feature within Google Spreadsheet that
can detect duplicates, so you don't actually
need an extension for it. But it was just a case
I wanted to play with, and then it started
providing me with a step by step overview and how I should do things and which three
files are important to create. And then I ask, Okay, these three files. Where should I create
them? Is it four? Is it note pad? Is
it something else? Then it says that I should
do it in the note pad or sublime text or
a similar program. This is basically
how our discussion continued at one point. I got an error that says, Manifest file is missing or unreadable,
could not load it. Then I asked it for help, and you can see here that
this is just the start of the conversation because
I really wanted to deep dive into this use case. Okay. Finally, practicing
a business language, I think this is something
that is quite magnificent, not yet fully developed
because the tool doesn't work in a more sophisticated
conversational way. But you can train it to
act as your teacher. In this case, I told it act as a Spanish language
professor, teach me Spanish. I said I only know some phrases. You should technically
create a class for me. I explained what is my level. I said that obviously initially, we have to talk in English. And then I even said that
if it's unsure of what kind of learning material you
should provide me with to ask me what I like talking
about or prefer reading, and answering, et cetera. This is how it said okay. Let's begin with these phrases, and then the numbers and so on. Then I ask that Ok gave
me some exercise to practice Spanish for my upcoming
business trip in Madrid. It gave me a scenario that I arrive at the airport
and then asked me to, this is what wrote, I gave me the translation
that I could look into, and then how I would respond. Here it actually
indicated me how I should respond when I'm in a restaurant and ask if I have a reservation, so that I should write C, et cetera, and so on. Then as I get better, I could probably ask
it to also prepare a more difficult
questions for me. Last thing I want to show you
here is just this website, which has awesome GPT prompts. It's called you can find it by just Google in Prompts hat, and here you will find a very, very long list of different
roles that you can choose and edit these
prompts to fit your needs. Here. You can take the prompt of acting
as a travel guide, or as an advisor, storyteller, stand up comedian, novelist. Wrapper, very, very
different personal trainer, real estate agent, doctor, chef, if you're curious
into recipes and so on. There's an ocean of different
use cases and my idea here is just to show you how creative you can get
with a couple of thems. Thank you very much for watching this video and I
hope to see you in
9. HR use cases for ChatGPT: Hi there, and welcome
to yet another lecture, in which now we're going to look into more specific use cases, talking about the field
of HR and operations. Let's dive deeper into it, same structure as with
the previous video. So One of the first cases, we're looking at is solving a problem as a
certain role in HR, and this can be very
relevant depending on where the problem lies or where the challenge or the overall brainstorming
is directed to. What I use here as an
example is recruitment. I said, I want you to
act as a recruiter. I will provide some information
about job openings and the tool should come
up with strategies for sourcing qualified
applications. This was my first request. I need to find senior
project manager in Vienna for an impact software
as a service start up. This is first where it
started offering me different tips on that I
can leverage social media, utilize all my job boards, how to engage with
passive candidates, network in the industry, attend career fairs,
employee referral programs. I would say this was
something quite broad, but then I ask develop a step by step roadmap or to do
list from these things. Now that things became
far more actionable. So to say, obviously,
there's a lot of them. I could have shortened them, but I think there are also
some great ideas here also tap into alumni network and educational institutions. It's really hard
to think of all of these things on top of you, to keep them on
top of your head. This is where you can use the
tool as a sparring partner. Create a bleion search. Still talking about hiring. You can ask it to act
as a talent sorcer, create a bulion string for
the title in Paris France, and then you can get a
Bulion string as an example. You can then ask
it to modify it, or the different
way to approach it is that you can
copy as I did here, the whole job
description into it, and then ask based on that
to create the Bleion string. This is how it provided me with an overview based on the
feeded job description. The next one is to identify alternative job
titles and seniority. If you already work with
sourcing and bul in logic, you know that very often
your luck goes only so far depending on what people
chose to put into their biography, CVs, or titles. This is where you can be
creative by asking for different synonyms and alternative job titles
that in this case, I ask for a sales development
representative position, and this is everything that
I got in this example. Then later I ask beneath
the most common ones. Out of these, it should choose
the most common ones and add variations with
different seniority levels, but not including the
entry or executive level. This is then again, more
specific output that I received. Translating to any language, depending on the markets
that you're tackling. Here I ask you to act as an Austrian translator who
is also experienced in recruiting and hiring and
to translate these roles, these titles into German. Since German is quite
specific language when it comes to the
general attributions, here you could see that
it provided me with both male and female
versions of the job title. The next one is about
creating a list of companies or starts
in a particular field. I ask it to act as a
researcher looking for top five security start ups in Israel that received significant funding 2018-2020, and to put it in
the table format where I even went to further customize how I wanted to be, so to include a year, the name, the headcount, and so on. Even though it had
some limitations, I s m say that it gave
reasonable answer. But remember, if you
try these things, and if a prompt, when you say, give me a list
of companies doesn't work, or you see that it's
hallucinating by just providing
inaccurate responses, then you could
alternatively say, how would you go about finding top cybersecurity
startups in Israel? Then you might get a bit of a different answer or
let's say that could be a detour of getting to the destination that you
eventually on to to reach. Creating an outreach message. So now we're moving forward in the at least
hiring pipeline and coming to the outreach stage where you can ask it to
again act as a recruiter, provide it with a job
description, and then I think here, request that or just say that you plan
to reach out on Linked in, that message shouldn't be
longer than 500 characters, and how the tone should be. Once you get that message, like in this case, you can obviously edit it
and adjust it a bit, and then still bear in
mind that this is not a personalized
message because it's dependent on the feeded
job description, not on the person's profile. But then I ask it
now expand it to be a more formal
e mail outreach, and then I got something long. Creating candidate personas. Another very useful use case, when you want to act
as an HR expert, create a list of three personas that
could be relevant for the below job description or doesn't necessarily just have to be when it comes
to hiring, of course. But in this case, this is what it was
provided as an input. These were the three
personas that that were, and then I asked it to add
a demographic elements, and then I got a bit more clear overview of those persona. Creating or improving
job description. By now, you can see the majestic value that this tool can bring
in the hiring stage. Then creating the
job description or just providing the
existing one and ask it to refine it or to enrich it or to just
make it more engaging. In this case, what it
did is added emojis, but it was still quite long with different bullet
points and long list. I asked it to make it shorter, and I implied which sections or responsibility and
qualification section shouldn't have more than
five bullet points each. This is exactly what
I've got later on. Explain Jargon or
different expertise level. Let's say that you're
still a recruiter. You just finish your talk
with the hiring manager, and you just got the
information that you need to hire someone who is who
has a strong med of Python. But you struggle to understand
the difference in levels, or let's just say that
you got this information prior to the meeting and you just want to prepare
yourself better. Here, I ask GPT to
basically the difference between beginner
intermediate and expert in Python and what
they're capable of doing. This is exactly what I
got as an explanation. Creating interview questions. Here, act as an
interview expert, generate a list of ten interview
questions that I should ask a person who applied for C Java script
developer position. These are some ideas. Give me five more questions
that are challenging. Obviously, that can
give me questions that are more based on the
entry level and so on. It doesn't just have to be
obviously technical questions. You can also ask for a list
of behavioral questions, hypothetical questions,
even screening questions, if you just want to
give a phone call to a candidate initially. List of relevant job boards and communities to
post vacancies. I think this is a
very interesting case and helps a lot with research. As you can see, right now, the tool is mostly used
for research purposes. Whether you're researching
something for the sake of learning or
researching something for the sake of
immediate utilization, like for example, job boards. I still think there's
a high value in both. So Here, I ask it to act as a recruiter
expert in Netherlands, provide a list of five
free job boards in which I could post a vacancy for a software
engineer there. It provided some of the options. Then I asked, are there
any communities or groups where I could
discover talents, so that I could approach
more passive candidates. It did provide me with some MSA that at
this point it also hallucinated because some of these things didn't work out. But for example, for MT, I think that was
a good suggestion that I could look for
some groups there. Overall, if I went if I
dived deeper and prompted my prompted in
different ways and optimize the input that
I was following up with. I trust that I could have maybe discovered some
slack channels or certain sub dits or different communities where I
could source these talents. Write a policy or a contract. I think this was a
super useful case when you just want to deal with something that is not something on the
administrative side, let's say that should
still obviously have an supervision or
human oversight in this case or a
legal oversight. But I ask it to write an employment contract
for a role of marketing manager with a formal
tone and legal structure, and this is a draft
that I received. Then I ask it to
translate to Spanish for our employees in Madrid and it generated that
within a seconds. Finally, if you're preparing
for certain conversations, whether it's with law expert or merger and acquisition expert
or any other stakeholder. I think this was quite of a useful tool to give
you a head start. So you can say we're planning to hire an employee in France, even though we only
have an entity in and business in Austria. What are some important
questions to ask an employment lawyer
regarding this? Obviously, you don't want to spend a lot of time with them, especially if you're
paying for their services. So this is why you
want to come prepared. These are very, very
specific questions to the point that show that you did your work prior
to it to coming. I think that's most of the HR or operations use
cases, I wanted to show. Obviously, they were mainly
focused on recruiting, where I think the current
purpose mostly lies. But obviously as we mentioned the previous one with
streamlining the processes. I think that's something
very useful, obviously, when you want to just
research more about how certain companies are doing things on the
cultural perspective, on the organizational
design perspective, and so on, the tool can
be very, very helpful. Talking about
recruiting and hiring, I just want to finish up to say that when it comes
to these topics, be aware that these
tools are equally available to candidates as well. They can optimize their CVs,
cover letters, outreach, answers and interviews,
they can supplement and improve their assignments and so on. I think this is good. This tool should
be for the benefit or for the support of
both sides in this case. Instead, if you are
a recruiter or HR, instead of busting down candidates who are
using this tool. I think you should recognize their AI savviness and rewarded. Ask them, what you can do, and I saw already many companies engaging in that proactively, that they are asking
candidates to transparently walk them through the tools they used and how. And I think at this early
stage of the tool adoptions, this is particularly green flag. Yeah, this was all.
Thank you very much for watching and see
you in the next video.
10. Role playing with ChatGPT: Everyone. Let's take
a look now into a separate segment within GPT where you could still build
up on multiple use cases. But here I'm referring
to the role playing. With the advent and rise of this these AI language models, role playing can be extended to virtual interactions and
interactive experiences. The primary purpose
of role playing is to enhance learning and preparation for the real life scenarios. Again, there might
be and that might, but for sure are
infinite examples, Some of the most
common use cases when it comes to role playing are interviews and assessments, so you can utilize
role playing here to simulate job interviews, providing an
opportunity to practice answering questions and
handling different scenarios. This can be both as a
candidate or an interviewer. L angage learning
is the next one, which we already mentioned. Here, Chagp can serve as a language partner allowing
to practice conversations, vocabulary,
sentences, and so on. Customer service training
is the third one. I think this is this can be very beneficial to replicate customer
interactions, facilitate this
communication, exchanges, and development
of the skills and problem solving abilities
and building empathy. Negotiations and
conflict resolution, by simulating these scenarios, you can practice
persuasive techniques, active listening, and strategies
for conflict resolution. Finally, one a bit outside of the box about therapy
and concealing. Here it can assist you in different conversations where it can create a safe space to
express thoughts and emotions. Us, it's very
important to mention that this is still role playing. This is still an
artificial environment, especially talking
about, let's say that the therapy and last point, and it's still something that
is on a pretrained level, something that is interpreting emotions differently
than humans. Nevertheless, This
can be very powerful, but still only in
the written form. Be aware that there might
be some limitations, at least when it
comes to the scope. Let's take now look
at let's actually demo the first case with
interviews and assessment. What I prepare are basically
two ways to go about it. One way that you can do it is
to ask the tool to generate a conversation with common questions and
desired responses, and then you can
read through it and understand what strong
interview entails. What I did here is I
said, k, Number one, as a job seeker who
is being interviewed for a product manager
job, and then number two, as an interviewer
who is interviewing this job seeker for a
product manager job. Is it leads to a role play. Ask the most common questions. Then Yeah, keep this communication going until I write up, et cetera. This is how you show me.
Interviewer, good morning and welcome. Candidate,
good morning. Thank you for having me,
giving the introduction, then the first question,
what was your role? What challenges did
you face, et cetera. It goes all the way to finally, can you tell me about
the time when you face a difficult challenge
in a project and how you overcame it, et cetera So it kept the
conversation rather short. But what I could
have done now is when after reading,
asking for feedback. So what was good about it, what could have
been done better, et cetera This is more passive way of interacting
with the role play. The more active one would
be a second way and it's to ask the tool to act
only as one side, in this case, interviewer and ask you questions to respond. Then based on your response, it should give you
feedback as well, besides just asking
further questions. This is what I did.
I said, now, okay, act as an interviewer for
the product manager job. I am a candidate, and I will reply below to your
question and ask me the most common questions
and then wait for my response before
you ask the next one, and then later we can later
you can give me feedback. This is how we started,
so can you briefly introduce yourself and highlight
the relevant experience, certainly in my
previous role for a software company as a
product manager, this that. Then I got the next question. How do you approach a
process of defining and prioritizing
product features? Then I said, Okay, this is a crucial aspect in the
product management. I utilize a combination
of data driven insights, customer feedback and
business goals to make informed decisions, and so on. Then I get another question. How do you gather
and incorporate customer feedback into the
product development process. Here it kept asking me two
questions always in a row, which is not let's say by the interviewer book where you always want to ask
just one question. But what I did here
is, for example, I totally gave an unexpected
answer in the sense that, I implement features that I
think are most important. Customers usually don't
know what they want. I see my role to
anticipate what they might need and develop
the product for them. This is a very
controversial opinion, let's say, then I told it, let's hit pause now, please
provide me feedback. It gave me some kind
of like summary or impressions on what
was well, et cetera. But here, it says Yeah. To improve your response, you could emphasize
the importance of actively engaging with customers,
leveraging, et cetera. Here it also says, however, your latest response suggests a somewhat dismissive attitude
towards customer opinion. It's important to consider the customer opinions can vary, but they are but
they still offer valuable perspectives and can help guide product decisions. Again, this is referring
to this answer where I was quite
controversial and direct in of providing a different perspective
than what is expected. Again, this is a great tool where you can test
these things out. I just wanted to share
this short use case. I hope you will discover many, many more and share
them further. Thank you very much for watching this video and see
you in the next one.
11. Disclaimers and Recommendations: Everyone, and welcome
to yet another lecture, where we're going to discuss shortly some of the disclaimers
and recommendations. In the previous videos, you saw how you can use the GPD and different
use cases that you can find when interacting
with the tool. But here are some of the points that you
should be aware of. First, when it comes
to the disclaimer, you should be aware
that models can be biased because they are
trained on the data that has existed so far
and obviously that data was at the time subjective
and biased itself. Models may fabricate information or what is used as a
term is hallucinate, which would mean that they could totally invent
certain information. Models may struggle in
classes of applications. When it comes to, for example, spelling related tasks
or logic related tasks, you would get surprised
if you ask for a very simple task in some of these classes and then you
get a very wrong answer. This is something that
might have already been approved, but
just be aware. And lastly, from the
disclaimers is that models are subject
to prompt injection, jail break attacks, data
poisoning attacks, et cetera. These are different
strategies of how you can manipulate the model. If you're curious, feel free to research a bit more online. But this is something
that you should be aware when interacting with
the output of the model. When it comes to the
recommendations, something that might
be common sense, but it's still worthy of
mentioning is that you should combine these models
with the human oversight. Use it rather as an input, never as a sole decision
maker or sole evidence. Use it as a source of
inspiration and suggestions. As mentioned, it should
be a supplementary tool rather than an autonomous agent, which is mentioned
in the last point, that it should be seen
as rather a co pilot and a supportive hand than
something that is still capable of executing
task on its own. These are the
current limitations, obviously with future
models and iterations, the model should get better, but this should still be used in combination with
the human oversight. Thank you very much and see
you in the next lecture.
12. GPT4 and Plugins: Everyone, as we approach the
final lectures of the class, I would like to shortly
discuss GPT four and plug ins. Regarding GPT four,
it's currently the most recent
version available, and I want to add that despite the versatility
in various tasks, current language models
still have limitations. They solely rely on
training data which may be outdated and not tailored
to specific applications. Additionally, the
default capabilities limited to generating just text, which should change in
the upcoming period. But to enhance
these capabilities, this is where plug
ins come into play. Plug ins are third party
services that can be integrated into GPT to enhance its capabilities and provide
additional features. They allow GPT to access
the information that is in real time or personal or too specific to be included
in the training data. So not just what was available on the web but also
information from the deep web. In response to users
explicit requests, plugins can also enable language model to
perform safe and a safe actions, but
also constrained data on where the action
just needs to be performed. I think it's the
easiest if I give you an example of how
this might be used. This is an example
of, let's say, that you want to
book a table for two an Italian restaurant in New York City for tomorrow
night, as it's stated there. And you can just
go to GPT and type this input and then select
appropriate plug in. In this case, the appropriate plug in would be open table, which would then activate
and take your input and use its own services to find available restaurants
that match your criteria. It would then present you
with a list of options allowing you to go
directly to reservations, and then you can choose one of the options and
confirm your book. The plugin will also update the chat bot with the relevant information such as the name, address, and time of
your reservation. It basically enhances
the capability of not just having the tool
generate the output for you, but rather perform
the actions for you. Again, this is more on the customized node depending
on which plugins you use. This is just in a nutshell. I think the story
should be expanded with the topic of
autonomous agents, which we will address
in the next video. Thank you very much for
watching and see you there.
13. How the Future might look like: Everyone. Let's expand
on what we discussed in the previous recording
and really see a bit more into the
distant future of what might be there when it comes to the large language models. As mentioned in the
previous lecture, plug ins will play
an important role in GPTs and overall large
language model development. They are already here, so we technically shouldn't be talking much about them
in the future tense. However, eventually,
plug ins will allow GBT to access
contents of deep web. In other words, your accounts on various platforms and really make the experience
more customizable. Now, let's talk about the
upcoming two developments for large language models
and GPT like system, one that is a bit
further into the future and the second one that is
just behind the corner. One of the main trends that
we are seeing in the field of AI is the emergence
of autonomous agents, which are system that can act autonomously and intelligently
in various environments. Autonomous agents can have
different goals, capabilities, and personalities, depending on which designs and
purposes given to them. To give you an
illustrative example of how one of these
agents might work, imagine that you made an autonomous agent that could be given an
objective to, let's say, open an online business, and the agent would
come with to do list, just as CPT would. But this is where
the CPT would stop. The autonomous
agent would also go on to perform the
two dos and then add new to do based on the progress and then further
on based on the learnings. It would repeat this
iterative process until the initial objective of opening an online
business is met. It would be really
capable of accessing different platforms
and performing different actions
on your behalf, basically being
your virtual clone. In the attachment
and the resources, I will add one example of how this autonomous agent
worked on a small scale, where it had an objective
of ordering a pizza, how it went to Google, searched for the pizzas nearby, went on the website for
the inputted pizza. I found it and put it
in the card and went to the checkout and pre
filled all the information, basically just asked for the final confirmation on ordering. Talking about autonomous agents, please be aware that
they will still require some time to be capable of the things that you probably might
already be envisioning. Question is if they will ever reach the fully
autonomous stage. In the near term,
on the other side, you can expect far
more interaction between the tools and humans, something that is, let's say, still midway to the autonomous. This is where co pilot
comes into story. C pilot serves as a
personal assistant and can help you with complex
and different tasks. They understand the
natural language inputs and generate engaging responses. They also adapt to
different environments. Microsoft was the first
one to commercialize this term co pilot by
saying they plan to add or integrate the
intelligence into all of its world known programs of
Microsoft 365 solutions, Ward, Excel,
PowerPoint, and so on. Copilot will become part of
the operating system as well. Co Pilot is essentially an agent or like tool like a Cha GPT, but it is really your companion
on every step of the way. I'm using the word copilot, not just when referring to
the Microsoft solution, but rather to this
overall assistant that you would have in those
tools and applications. It's really difficult to explain the scope of its
potential abilities, even with examples
because think of everything what you've
seen from GPT doing, but now having all of that customize in the
specific software you use. Instead of having to let's
say you open PowerPoint and instead of having to design the presentation yourself, you would just be able
to describe what kind of presentation or slides you need, how many of them, with which
photos, with which text, et cetera, and that would be
generated. By the copilot. Similarly, Excel, you
wouldn't need to be aware of all the functions or
formulas or Google for them, but you would just
be able to use a plain language
explain what you need, obviously with
some iteration and tweaking and you would get
to your desired results. Microsoft is currently
pioneering in this direction, but it's also
biggest competitor. Google is catching up, and they already also announced similar AI features within their Google
Workspace solutions. So it's important to be aware that this is direction
where the tools are headed. And Lastly, I just want to say, I hope this content has
enriched you in some way. I hope you enjoy the lectures. Thank you for following them. I'm very grateful if you stick
to the end of the journey, and I hope to see you in
one of the next ones.