Transcripts
1. Introduction - What to Expect: There is this one simple rule we need to stick to when it comes to chatting with ChatGPT: the better the inputs, the better the outputs. If you don't want to stay behind, you have to be able to be visionary. I'd encourage you to think of prompt engineering as a skill of the future. We need to develop the skill of composing proper prompts to get what we really want from ChatGPT. I've been working in marketing for over a decade now, in a field where we incorporated AI into our daily processes a long time ago, and I've seen so many poor prompts and so many complaints that GPT is stupid, or that the results aren't interesting, or they aren't good enough. And that's all because of the way we prompt. I decided to create this course to help you get better results from chatting with AI-powered tools and let them become your friends, not your enemies. This course is for everyone, every niche, every business, every level. Of course, if you're already a ChatGPT pro, there might be a lot of things you already know, but I hope I will manage to surprise you anyway, because I added many secret-sauce insights to level up the process. All of it is going to be extremely practical, I promise. What are you going to get out of this course? The list is long. Breakthrough techniques to create original, high-quality content. You will learn how to set the style for how the AI writes to match your own writing style, your brand voice, and all your other personal needs. With the right prompts, you will optimize your work and your daily processes with AI. I will share my exact proven prompts, and I will also share the cheat sheet with prompts I created specially for you. You will avoid many AI mistakes and pitfalls. And by the end of this course, you will know how to get next-level quality outputs from ChatGPT or any other GPT-powered AI model, and you will discover how to format and structure your prompts for different types of results. You will learn advanced prompt craft and use advanced techniques to get the best results. Yeah, big things are coming. Are you in? Together with intuitive explanations, I will share both hands-on examples and resources to make your life so much easier. Let's go.
2. Class Project: Class project. You already know what they say: learning to work with AI and guide it with the most effective prompts is probably the highest-leverage skill you can develop this year, if not this decade. That's what we are here for today. Many experts say that in the near future, AI will become such a big part of our daily life that prompt engineering will be one of the most in-demand skills in the workforce. And honestly, I think for many people and for many industries, it seems like that near future is right now, and for others, for example, for all white-collar workers, maybe five to seven years from today, or likely even sooner. But first things first. Why is our class project, our homework, the homework for you, so important today? To become irreplaceable in this new economy, we must first grasp how AI works, what tools we should use to get good results, and, most important of all, the right prompting techniques. New knowledge and skills spread quickly, but there are also many brilliant techniques and prompting methods that almost nobody speaks about, at least not out loud. And we will put them into practice today. We will unbox a bag full of tricks. But the thing is, it's far better to learn by doing practical work, practical experiments, rather than just watching me talk and only watching the course. That's why I want you to pause the course every time you need it and practice alongside me, to try out all the new methods right away and minimize the risk that you will forget them. Effective prompt engineering requires knowledge, including knowledge of the underlying models being used, and that's also a part of the course, but I can't stress this enough: practice is the most important thing. Your journey will be much more effective if you see prompt engineering as a very useful skill, a skill set to use as a complement to all the other skills you bring to the table, and you practice it like every other skill you want to take to a higher level. That's why my top advice for this moment is: practice alongside me. Make notes if you like, and of course, note down what works for you, which ideas are the best for your industry, for your needs, for your business, for your project. Pay attention to what works for me, what works for the people I know, the people I will talk about, and take the best ideas for yourself. The skill is to be able to systematically understand the language of different AIs and how to instruct them. That's what prompt engineering is all about. So your homework, which we also call a class project, is this: test out different prompting techniques for yourself. Put it all into practice, and at the end of this course, share your favorite results with me: your favorite conversation with ChatGPT, your favorite prompts, and the results from using them. How to share it? Simply make a screenshot of this favorite conversation, or your favorite part of the conversation, and post it right here. If you have any questions, or maybe you'd want feedback from me, remember that I do love questions, and I'd love to discuss them with you, so don't hesitate to head over to the discussion section as well. Well, I really hope to see you there.
3. What Does GPT Mean and How ChatGPT Works: What does GPT mean and how ChatGPT works. Before we dive into the nuts and bolts of prompt engineering, let's talk about GPT, the main AI model we will be using in this course. In November of 2022, ChatGPT, the chatbot interface powered by GPT, introduced large language models (LLMs) into the mainstream media. Since then, numerous apps and tools have popped up, and you have probably heard about some of them even if you haven't realized they're powered by GPT. So what is GPT? GPT is a powerful AI system created by OpenAI to understand and generate human-like text. Of course, each version is becoming more and more advanced. There's a huge chance that when you watch this course, there's already the next GPT generation out there. Each one is more advanced, as I already told you, but the system and the way it works remain the same, so the information in our course won't get outdated. And of course, I will update it when needed anyway, so you don't have to worry about it. ChatGPT stands for chat-based Generative Pre-trained Transformer. I know that may not ring a bell, so here is a simple cheat sheet to understand what GPT really means. Generative means it can create new things. It can generate responses to our questions, and it needs to be prompted. Pre-trained tells us that the model has already learned a lot from different data. It was trained on a large amount of the written material available on the web and also on academic content. Transformer is the special method it uses to understand language. It processes sentences differently than other models out there. Good news: this also means that no two responses are ever the same. As it uses algorithms to generate the next word, it gives a different word each time, so the results are unique. And here comes an interesting observation. This is why, when my coworkers and I used ChatGPT to generate Facebook ads for our new app's feature, although our prompts were very similar, all eight responses were different. And of course, some of them were much better than others. And the number next to GPT shows that this is, for example, the third version, with each one getting better and smarter, as you already know. So how exactly does GPT work? I know that for so many of you, ChatGPT is actually the first time artificial intelligence in this form has landed on your radar. But is GPT something really, totally new and one of a kind? You may not realize it, but AI has been around for some time and is also present in our daily lives, and ChatGPT wasn't the first. Because look, what's the role of artificial intelligence? Artificial intelligence is designed to leverage computers to mimic the problem-solving and decision-making capabilities of the human mind. The best examples of this would be facial recognition, the way recommended videos on YouTube or TikTok work, different tools, chatbots, or self-driving cars. And we all know these, right? They've been with us for years now. So why is GPT so extraordinary? Let's start with a twist. The following response
is all written by ChatGPT, without my edits, without any edits, so listen. ChatGPT is the latest breakthrough in natural language processing technology developed by OpenAI. It's a chatbot that generates human-like responses to text input in real time. One of the most impressive aspects of ChatGPT is its ability to understand and respond to context. It has the ability to remember previous conversations and use that information to generate more relevant responses. This makes it feel more like a conversation with a real person rather than a robotic interaction. Another standout feature of ChatGPT is its ability to understand and respond to different accents and dialects. This is a major advantage for businesses looking to expand into new markets, as it allows them to communicate effectively with customers regardless of their location or language background, without any barriers. Okay, ChatGPT isn't very
people, for many people, it's hard to imagine and
understand the way GPT works. So I like to exemplify
and describe it that way. So it's easier to understand
the way it works, even if you aren't familiar with all these advanced EI terms. So listen, you can
imagine Cog PT as an extremely
ambitious student. Who spends his whole days
lock in the library and reads and learned from so many different books
available out there. But the best thing is that he is the best friend
you can imagine. He doesn't gate keep. He wants to help you
every time you can. So when you ask him a question
or give him a prompt, he uses what he has learned
to give you an answer. And I really think when you
imagine Chachi PT like this, it's much easier to
treat it as a friend, not an enemy who is
here to steal your job. And that is this
additional reason why why I love this way of
describing aG PT so much. CG PT and all its close
competitors like Varden or Bin are bringing to reality a concept that was
once for decades, only a crazy dream and existed
only in science fiction, having a real engaged
conversation with a computer. Can you generate texts
for us, write code. Explain scientific and
mathematical concepts. Explain difficult
motives from novels, give us language
lessons, write articles, or even love poems, give us film recommendation, and the list goes on and on. The most advanced
version can even as legal exams or
generate recipes from just a photo of your refrigerators
contents. It's impressive. All we need to do is ask and
give it a prompt to tell it what it can do for us and what we expect. The key to this process lies in ChatGPT's architecture, a network of interconnected layers that work together to analyze and interpret what we want. Each layer of this network contributes to understanding the context, the semantics, and the nuances of our prompt. After all, we humans are complex, dynamic beings who do not always communicate directly in an easy-to-understand way. ChatGPT, on the other hand, is a machine, a very sophisticated one, but building a bridge between the complex human brain and ChatGPT's algorithms was a challenge. Here is how OpenAI themselves illustrate ChatGPT's training. And I had to show you this, as here we have many interesting insights to understand our tool, ChatGPT, much better, so it's worth pausing and reading to get an idea of how the process looks. Today, we won't go much deeper into the technology behind GPT, because I don't want to bore those of you who aren't into technology or math so much and simply chose this course to learn how to use GPT in real life, just to make your life easier without all this theory and all this background. And that's okay as well; I completely understand this approach. I advocate using ChatGPT as your writing partner and your personal assistant, and I also use it myself that way. That's also one of the reasons why I'm such an AI enthusiast myself. But we shouldn't forget that next to all the superpowers, all the strengths, ChatGPT also has some limitations and weaknesses. As they say, there are two sides to the same coin.
4. ChatGPT Limitations and What ChatGPT is NOT?: ChatGPT's limitations and what ChatGPT is not. As you already know, my team and I have been experimenting with generative AI for such a long time now. We've incorporated AI into our daily processes. We've added features that are AI-powered and based on the OpenAI API to the tools we're creating, and we're excited about all the impact these models can have on our lives over the coming months. But I want you to realize it's not all that perfect and easy. ChatGPT, like everything, also has its limitations and disadvantages, and despite loving the model, we need to talk about them. And of course, we should always, always keep in mind that this is still a developing technology, and perhaps these weaknesses and these limitations will eventually be addressed or solved. GPT can provide wrong answers. Yeah, sometimes it hallucinates. You already know the advantage of GPT: it stands out from other AI tools and AI assistants due to its unique method of creating responses to our questions and our prompts. It accumulates an answer by piecing together likely tokens, which are determined by the GPT training data, rather than searching for whole answers from sources on the Internet. We will talk about tokens in one of the next chapters, so don't worry. We'll definitely come back to it so you can fully understand it. But the downside is that GPT can't really distinguish what is true, what is false, and what is really far from reality, so it often hallucinates. And some responses may not be just a little off. They can be factually inaccurate and, unfortunately, in some cases, completely made up and couldn't be further from the true version of events. This is an interesting
issue and ongoing dilemma, not just for ChatGPT, but for all large language models in general. That's the biggest problem. And you may laugh when I say that GPT hallucinates, but in fact, this is an official term for it. When ChatGPT and other large language models (LLMs) generate factually inaccurate information and give us false statements, we call it a hallucination. This is also one of the biggest potential dangers of AI-generated responses and the AI revolution. ChatGPT has a confusing way of blending real facts with fiction, which makes it even harder to distinguish which parts of the answers are true and which are made up. Some inaccuracies can appear completely innocent but have much broader implications when dealing with more serious or more sensitive subjects. To the untrained eye, incorrect statements will seem completely true. Needless to say, it could lead to horrible consequences when used for tasks like giving medical advice or describing historical events, for example. The results could be catastrophic. That's why it's so important to fact-check all the results and keep in mind that AI can't be fully trusted. The big red flag is that when ChatGPT answers your prompt or your question with an incorrect statement, something totally false, it answers with such authority. This confidence is really mind-blowing. Look at what confidence ChatGPT projects while sharing statements that are completely made up. You could give GPT 100% of the context needed to give you the right answer, and it will still surface the wrong answer. As Sam Altman, CEO of OpenAI, said: ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. It's a mistake to be relying on it for anything important right now. It's a preview of progress. We have lots of work to do on robustness and truthfulness. Lack of empathy and emotional intelligence. It processes electronic signals but can't feel any feelings, no sense of threat or safety. It also, of course, doesn't have childhood trauma, as Taylor Berger scrawled across her sign in the early days of the Writers Guild of America strike. I've read this very
interesting interview with Peter Garson, head of Innovation at VCCP, hosted by Rosie Copland, about AI and sentience. And this is the way he explained the issue: the phrase artificial intelligence is misleading. There is no intelligence. It's statistics and probability. The chatbots are not intelligent in the sense that they are thinking machines. They're prediction machines. That's why a lot of people in the field call this machine learning, or statistical inference, or pattern learning, and artificial intelligence sets an unfair expectation. AI doesn't have emotions and has no way to acquire them on its own. It can only learn from humans and the sources it has access to, and it ends up copying all the fair and unfair behaviors and beliefs, which is also very dangerous because it can't distinguish good examples from bad examples. I'm seeing this heated debate around the dilemma of whether AI should be treated like humans. It may surprise you, but some people believe that AI can do much more than just copy human behavior. They really think AI can become aware of itself, become superhuman, and even have real, deep feelings. Of course, AI is getting smarter and smarter and can do things that only humans could do before, but let's stick to the facts. AI is not sentient. It just has a lot of opinions borrowed from the sources it was trained on. ChatGPT is biased. Yes, as with most AI platforms
and AI-powered products, ChatGPT is biased. As you already know, it was created from collective writings and many Internet and academic sources. As we could easily predict, this has resulted in one of the biggest ChatGPT issues: it has inherited some of the same terrible biases that exist in our daily world. The data used to train GPT is biased, and of course it is. So the model itself is also biased, which potentially leads to discriminatory outcomes. And unfortunately, that's not only a potential risk. Many users say that they have seen that ChatGPT is really biased, especially on sensitive topics. There is this primary rule when it comes to AI tools: the better the data it's being trained on, the better the intelligence, and the data isn't always perfect. Doctor Joy explained it this way: data is what we are using to teach machines how to learn different types of patterns. AI is based on data. And data is a reflection of our history. So the past dwells within our algorithms. ChatGPT was trained on terabytes of text from humans, and we shouldn't forget that GPT was trained on all of society and the Internet. ChatGPT is not a search engine. Some people think that it's the next Google, but it's definitely not. As you already know, ChatGPT can give you false information, or hallucinations, as it's officially called. Why is it not a search engine? It's really simple. Look. First, understand how GPT gathers its knowledge and what its sources are. It must be trained on a dataset. To accumulate new data and new information, the underlying engine must be trained on it, and it's time-consuming, really time-consuming. It has huge potential to improve search engine functionality, but it's not likely that it would completely replace the search engines we know. And we should always, always verify the facts that ChatGPT tells us. Also, it doesn't always accurately arm you with its sources, even when you ask and prompt it to. And if it studies a source with false and misleading information, it may be 100% sure it is true and then share this information with all the ChatGPT users. However, the good news is that when inaccurate information or seriously misleading statements are caught in the feedback process, the information ChatGPT provides becomes more accurate, so it's learning thanks to our feedback.
5. The Core Strengths of ChatGPT: Okay. Now it's time for the biggest strengths of GPT. The previous chapter may sound really serious, and I don't want you to lose the enthusiasm. So in this chapter, we're going to focus on AI strengths to bring our excitement and curiosity back. I think you realize how powerful ChatGPT and other AI-powered tools are, and that's why you're here watching the course. But I'd really like to summarize the positive side, so we can refill our excitement cup and then go further, straight to prompt engineering and the exact techniques to get the best results from ChatGPT. Because the thing is, I really can't wait to introduce you to this part. But before we do that, let's briefly discuss the advantages of using ChatGPT, in a nutshell. Let me know in the comments or in the review section if there are any advantages you'd like to add to the list, or maybe some advantages that you'd like to put at the top because, in your opinion, they're the most important ones. I'm really curious, so please let me know: what's the number one advantage for you, the number one positive side of using AI-powered tools? It really can speed up many processes and the mundane work. Sometimes doing simple tasks can take hours, especially if you're lacking inspiration or writing on a topic that you really don't enjoy. With the right prompts and parameters, GPT can help you with almost every task, and the results can exceed your expectations. Enjoy cost savings and save time. When we say time is money, we actually mean that saving your time is really equal to saving money, because time is by far our most precious resource. So we should always treat it with respect and save it whenever we can. ChatGPT is not good for fact-checking, but it's great for so many different tasks. And today, I will show you how to take full advantage of that, with prompt engineering and my favorite prompting techniques. So here is where things get really, really exciting. Let's go.
6. What is Prompt Engineering?: What is prompt engineering? Prompt engineering is basically learning to talk with AI to take this communication to the next level, to make it clearer and more enjoyable for both parties: for us, to get the best results and the things we want, exactly what we want, no compromises, and for the AI, to understand what we expect from it and what exactly we want to get. Why does it matter? To get the best results and the best responses, we can simply type in whatever we want and chat with ChatGPT like it's 100% human. Yes, of course, we already know it's really smart. And the newest variation of ChatGPT even gives us suggestions for what we may want. But there are many professional tips you can implement into your daily processes to get even better results and higher-quality responses. And let me tell you, that is a game changer. Prompt engineering is all about crafting these prompts so the AI model can generate the most helpful and accurate responses and deliver exactly, exactly what you want. GPT isn't a mind reader, so we need to guide it. You can think of the model as a super-efficient assistant or your ambitious intern who takes your words very literally. Look: the clearer and more precise your questions, your instructions, your prompts, the better your assistant, your intern, can perform and help you. That's basically the essence of prompt engineering. You need to give it the best possible instructions to receive the best possible responses and high-quality help. Why prompt engineering matters. You already know why we should care. Think about being in a new country, in a totally new city, with a good, clear road map. That's exactly what prompt engineering is for AI. Good prompts help AI go further and get where it needs to get. Prompt engineering is like a guiding hand for AI, guiding it in the right direction. Without clear instructions and easy-to-interpret prompts, even the most advanced, the most sophisticated AI models may not give you the results you need. It will get lost and interpret your instructions differently, because it can't read your mind. But with the right prompts, you can guide AI accurately towards your needs and your goals, saving so much time, so much energy, nerves, and effort. Prompt engineering lets us get the specific responses we want and need. It enhances our interaction with AI, making it more effective and innovative, because we can receive the highest possible quality of response. Of course, AI models will get more and more advanced. Sure. But no matter how advanced the AI is, you still need to communicate what you want to achieve somehow. And we can't assume the AI will be perfectly aligned with our needs and that it will predict what we want. We really need to develop the skill of composing proper prompts to get what we really want. According to generative AI statistics, by 2025, 10% of data generated globally will be created by artificial intelligence. That's a lot. While it's easy to think that everyone can ask AI to create high-quality articles, images, charts, translations, summarizations, or even Python code, many experts make it their job. This is interesting: the popular website Indeed shows almost 300 jobs in the USA for so-called prompt engineers and AI whisperers. At least that's the case for today, the moment I'm recording this course. And while some results provided by generative AI, which you see on the web or on Instagram, Reddit, Twitter, wherever you log in to discover new things and new inspiration, might seem incredible, keep in mind that they are so good, so advanced, so full of details, so impressive because of the good prompts someone typed into the system. In order to make AI do great things for you, things you want it to do, you need to understand exactly what you want and how to describe it, how to communicate it in natural language, so the machine, the AI, will understand it too. This is exactly why prompt engineering is becoming so crucial. Someone who is a pro at prompt engineering can determine what data and what format are needed to train the model, and what questions to ask the model to get high-quality results. Today, our goal in prompt engineering is to create prompts that are both very precise and comprehensive for AI.
7. Understanding Prompts As Tokens: Understanding prompts as tokens. If you're new to AI, the term token may sound confusing, I know. But believe me, it only sounds complicated. It's a key idea, so I need to explain it to you. But trust me, it's really easy to grasp. A token is a representation of a word, part of a word, or a symbol. Tokens are used by AI tools as a way to conserve memory and computing power. Why, you may want to ask? AI only holds so much in its memory, so tokenizing prompts allows the AI to consider more content at once. It's sort of like how we all shorten words to stay within Twitter's character limit when creating a new tweet. Tokens are the building blocks of language for AI like GPT-4. They are the units of text that AI reads and understands. Oh, I know which real-life example may be useful, so you can imagine it more easily. Think about tokens like different ingredients in a recipe for a cake. On their own, they're just single pieces. It's hard to predict what will be made out of them. Mix them together in the right way, and they form a perfect cake, just like tokens form complete sentences the AI can understand. How exactly does this relate to prompt engineering? Well, when we provide a prompt to GPT, it doesn't see a sentence or a paragraph. It sees a sequence of tokens. Then it analyzes these tokens to understand your question and generate the response you need. It's a very quick process. You can't see it, but it's happening. Just as we humans make sense of each sentence by reading individual words, the AI breaks down our prompts into tokens to understand what we're asking. Let's look at how the OpenAI tokenizer tool provides a straightforward illustration of this process. Before the API processes our prompts, the input is broken down into tokens, just like this. As you can see, in English, tokens can be as short as a single character, for example a dot, or as long as a word, depending on the context. AI models like GPT-4 have a maximum limit of tokens they can process at once, usually in the thousands, but that limit increases with time. This limit includes the tokens in the prompts we type in and the response GPT generates. Like I said before, it's also a bit like the character limit on Twitter. Understanding prompts as tokens helps us grasp how AI models read and process our questions and the tasks we want AI to do for us. So here are the key takeaways to remember. AI tokens have nothing to do with the crypto world. It's not a crypto term. Tokens are the building blocks of language for AI like GPT-4. In the realm of AI chatbots, a token can be as short as one character or as long as one word. Tokens represent raw text. For example, the word fantastic might be split into the tokens fan, tas, and tic. Tokenization is a type of text encoding. For example, the sentence Hello. How are you? has 16 tokens. Before the GPT API processes the prompt, our input is broken down into tokens, always. Generative language models also don't write their responses word by word or letter by letter, like we humans do, but rather token by token. Models like ChatGPT generate each text response token by token too. OpenAI released a very cool tool that lets you play around with the text tokenization that they use for GPT. Take a look at it when you have a minute. You can find it right here.
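If you'd rather see tokenization in code than in the browser, here is a minimal sketch using tiktoken, OpenAI's open-source tokenizer library. The encoding name is an assumption; pick the one that matches the model you use.

```python
# pip install tiktoken
import tiktoken

# Load a GPT-4-era encoding (an assumption; match it to your model).
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Hello. How are you?"
token_ids = enc.encode(prompt)

print(token_ids)                 # the integer IDs the model actually sees
print(len(token_ids), "tokens")  # count the tokens in this prompt
print(enc.decode(token_ids))     # round-trip back to the original text
```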
Tokens are a fundamental concept in prompt engineering, and keeping that knowledge at the back of your mind will help you create prompts that get the best results from AI models like GPT-4 and all the next versions.
8. Our Interaction with AI - Inputs and Outputs: Our interaction with AI, inputs and outputs. Now that you understand the concept of tokens, at least I really hope so, let's dive a little bit deeper to explore how we can interact with GPT and other AI models. We use these terms quite often, but do you know what exactly inputs and outputs are? Just like a conversation between two friends or two co-workers, the conversation between us and GPT or any other AI model involves two elements: input and output. Two sides need to communicate. Input is our prompt. It's usually a question or a task for the AI. And the output is the response we get back from the AI. A good real-world analogy is, once again, cooking together. Or better, baking a cake together. Imagine you're baking a cake using a recipe. In this scenario, the AI is like a super smart baking assistant. Input: think of the input as the list of ingredients and instructions you provide to your baking assistant. You tell the assistant what ingredients you have: flour, eggs, sugar, you know, the whole list, and how you want them to be mixed and baked. Similarly, when you interact with an AI, you provide it with information, questions, or comments. This is the input that the AI uses to understand what we want. Output: now imagine your baking assistant takes the ingredients and instructions you've given and follows them to create a cake. The finished cake is the output of your assistant's work. Similarly, the output of an AI is the response or action it generates based on the input you provided. If you ask a question, the answer from ChatGPT is the output. If you ask ChatGPT to translate a sentence, the translated sentence is the output. Just like your baking assistant needs clear instructions to create the cake you dream of, AI needs accurate and well-formed input to generate the desired output. And just as your assistant's success depends on the quality of the ingredients and instructions you give, the accuracy and usefulness of the AI's output depends on the quality of the input, of the prompt, you provide. Oh, I love real-world analogies. They get me hungry and in the mood for baking a cheesecake. Yeah. But back to our inputs and outputs. Inputs to AI models such as GPT-4 are prompts, which are sequences of tokens, as we learned before. We can type in a simple question, a sentence to complete, or even a long paragraph for the AI to analyze. For example, I love pasting a whole blog post paragraph for the AI to improve and analyze, but we will talk about these methods a bit later. Then the AI interprets the tokens to understand what exactly we want. And the whole magic lies in the way AI generates responses for us. I find it really similar to how a human would respond. Of course, as you already know, the process is different, and the AI generates a response token by token, not word by word, but still, it's a bit similar. So, takeaway: the interaction between input and output is the most important part of prompt engineering. By gaining a deeper knowledge of this dynamic, we can create prompts more skillfully and predict the AI's reactions, and it enables us to have more seamless communication with the AI and to understand it.
9. AI Response Mechanisms and How AI Talks Back: AI response mechanisms and how AI talks back. Let's unveil the magic of AI responses. Now that we know why we need an input and an output, and what they really are, let's explore how a super smart AI, kind of like a robot brain, comes up with answers for us. We call this the AI response mechanism. It's like the AI's way of thinking and talking back to us. Let's dive into it. Imagine you're playing a word association game with a friend who's really good at understanding patterns. You say a word, and your friend responds with another word that's related. They do this by thinking about the word's meaning and connections. Now, think of the AI response mechanism, like the transformer architecture used in AI models, as a bit like your clever friend, but supercharged with technology. Input: you give the AI a sentence or a question, just like giving your friend a word. Attention and understanding: the AI uses its transformer brain to pay special attention to the words in the input. It understands how they relate to each other, similar to how your friend understands word connections. Processing: the AI thinks deeply about the input. It analyzes patterns and meanings, much like your friend does to come up with a related word. Output: just like your friend responds with a related word, the AI generates a response. This response is based on the patterns it discovered and the information it knows from its training. So the AI's response mechanism, with its transformer architecture, is like a super smart friend who can understand and process information to give you thoughtful answers based on the input you give it. Now let's think of the AI response mechanism like a language game played by a team of players, each with a specific role. This game is also a bit like the transformer architecture used in AI models, and that's why I will use it, so you can more easily imagine how the process works. So imagine you're a question master, and your friends are the
transformers. And listen: each transformer has a unique skill. Encoder transformer: this friend listens carefully to your question and breaks it down into smaller parts, like understanding the words and their meanings. Attention transformer: this friend pays special attention to important words and figures out how they are related. It's like focusing on the key parts of your question. Memory transformer: this friend remembers all the important details from previous questions and answers. It's like keeping a notebook with past conversations. Decoder transformer: finally, this friend puts all the pieces together. It takes what the encoder, attention, and memory transformers say and forms a complete answer to your question. The game goes like this. First step: the question master gives your question to the encoder transformer. Second step: the encoder transformer understands the words and their meanings. Third step: the attention transformer highlights important words for everyone to focus on. Fourth step: the memory transformer checks its notebook to see if there's anything useful from the past. Fifth step: the decoder transformer takes everything from the other transformers and crafts a well-formed response. As you can see, the whole process is like a team effort, just like in AI models with the transformer architecture. Each part does its job to understand, remember, and generate responses based on the input it gets. So, AI is so smart thanks to this process called
transformer architecture. Without diving too deep into the technical side, because I imagine you don't want to spend three years listening to this theory, this process and this architecture help ChatGPT or another AI model read and interpret text in a way that's a bit similar to humans. Okay. Next important thing you need to understand: let's enhance our real-world language game analogy with the concept of probability and the probability score, which are also very important for understanding AI models and the full concept of prompt engineering. Because I just want you to leave this class, to finish this course, with the feeling that now you really understand the way we can communicate with AI models and the way they respond. You can always skip this theory part, but I really, secretly, hope you also find it super interesting because, well, I do. And if you don't skip it and you understand this process, you will be much more confident when talking to your friend ChatGPT, or simply any other AI model, any other AI tool. So let's go back to my analogy and enhance this language game analogy with the concept of probability and a probability score. Imagine you and your friends are playing a language game using a magical board. This game will be a bit like the transformer architecture used in AI models. And now we're adding the idea of probability and a probability score so you can imagine and
understand it better. You, as the question master, start by writing your question on the magical board. Each of your friends, the friends you already know, the encoder transformer, attention transformer, memory transformer, and decoder transformer, has a different colored pen. Encoder transformer: when you write the question, the encoder transformer reads it carefully and uses its pen to underline the important words. It assigns a probability score to each word, showing how likely they are to be the key parts of the question. This friend just listens to your question and carefully breaks it down into smaller parts. For example, if you ask, what's the weather like?, it might assign higher probabilities to meanings related to weather and lower probabilities to other meanings. Attention transformer: this buddy pays attention to important words and figures out the relationships. It assigns probability scores to how connected different words are. If your question contains words like today and rain, the attention transformer might give a high probability score to the idea that you're asking about today's rainfall. Memory transformer: the memory transformer checks its magical notebook, which contains past conversations. It looks for similar questions and responses to find out what worked well before. It assigns a probability score to different response options based on their success in the past. If a similar question has been asked before and got a good response, the memory transformer might assign higher probabilities to these similar answers. Decoder transformer: this is where the probability scores really come into play. The decoder transformer takes all the information from the other transformers, including the probability scores, and crafts a response. It chooses the words and ideas that have the highest probability of being a correct and meaningful answer. The decoder transformer takes all the highlighted and remembered information. It uses its pen to draw a response on the board. The intensity of the color represents the probability score: the darker the color, the more likely the response is to be accurate and useful. As you all play this magical language game, the colors and the intensity of the marks on the board help you understand which parts of the question are most important and which responses are more likely to be correct. Just like in AI models, probability and probability scores guide the game, making the responses more reliable and meaningful. So imagine that each transformer's answer comes with a little flag that shows how confident they are in their response. The decoder transformer's answer is the one with the highest flag, the one that has the highest probability. So this AI game show involves your transformer friends working together, considering probabilities, and choosing the most likely and meaningful answer to your question, just like in real AI models using the transformer architecture, because in the transformer architecture, the final response is based on a combination of understanding, relationships between words, memory of past conversations, and the likelihood of different
answers being correct. So how does GPT pick the best response from so many, actually countless, possibilities? You can probably already tell: every potential next token, the next part of the response, is assigned a probability score. The one with the highest score gets to be the next token in the sequence. So, key takeaways: AI models predict responses based on patterns learned during training. AI models like ChatGPT understand the context of our prompts, of our questions, with the help of the transformer architecture. AI models generate responses by predicting the next token based on the highest probability score. And trust me, this is a
really important part of understanding the
mechanism behind AI models. It will help us interact
with AI more effectively. By understanding how
the AI functions, we can improve our
ability to create prompts that lead to the specific answers
we're looking for. And in the upcoming chapter, we're about to unveil the top-secret recipe for cooking up some seriously awesome prompts. But before we jump there, take a look at the little sketch below, which shows this next-token choice in miniature. So let's go.
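Here is a minimal, self-contained sketch of that idea, with invented numbers standing in for the scores a real model would compute: raw scores are turned into probabilities, and the token with the highest probability wins.

```python
import math

# Toy scores a model might assign to candidate next tokens
# (the numbers and the tiny vocabulary are invented for illustration).
logits = {"sunny": 2.1, "rainy": 3.4, "cloudy": 0.7}

# Softmax turns raw scores into probabilities that sum to 1.
total = sum(math.exp(score) for score in logits.values())
probs = {token: math.exp(score) / total for token, score in logits.items()}

for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token}: {p:.2f}")

# Greedy decoding: the highest-probability token becomes the next token.
next_token = max(probs, key=probs.get)
print("next token:", next_token)  # -> rainy
```

Real models repeat this choice over a vocabulary of tens of thousands of tokens, once per token of the response.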
10. The Anatomy of an Effective Prompt: Mastering the art of great input, the anatomy of an effective prompt. Working with AI like ChatGPT is like having a conversation. The questions you ask can actually significantly change the answers you get. So let's explore what makes a really great question. We are looking at three main things: being super clear, which we will call specificity and clarity; knowing what's going on around, contextual information; and setting the right tone and style. Specificity and clarity. Providing clear and precise prompts to GPT is like handing the AI a well-marked path to follow. I can't stress this enough: crafting your prompts with care is the key to receiving in-depth, high-quality replies from AI. Think of it this way. Imagine you're guiding a friend to find a hidden treasure in a big forest. If you say, go and find something cool in the forest, they might get lost and not know what they are looking for. They may miss the treasure and come back home with nothing. But if you say, follow the river to the big oak tree, then take ten steps to the left and look under the big rock, they will have a much better chance of finding the treasure. Specific instructions, specific prompts, work the same way with AI. Instead of asking very general, unclear, and hard-to-interpret questions like, tell me about dogs, or, give me hints on what dog could be the best one for me, without giving the AI any detail about you, you could ask it to explain the difference between a Labrador retriever and a German shepherd, and then ask it to provide information about the kind of care these dogs need, what their special needs are, and which one is the better choice for a small house. You need to specify your needs. This way, you're giving GPT a clear path to follow, just like providing your friend with a detailed map to the treasure. This helps the AI understand exactly what you are looking for, resulting in more accurate and detailed responses. Another simple example for our road map: instead of vague prompts like, can you give me some information about Barcelona, more specific questions, such as, can you provide some details on the history of the three famous houses of Gaudí in Barcelona, would generate a much better response. It's like giving the AI a better road map to the answers. Takeaway: instead of using open-ended prompts, we need to make them specific and clear. Look, what's the difference? Here are the examples.
Open-ended question: tell me a funny story my audience may enjoy. Specific and clear: can you write a short, about 20 sentences, funny story about the way a man tried to make his friend fall in love with him? Open-ended: what's the weather like? Specific and clear: can you provide the current weather conditions in Paris, France? So why aren't open-ended questions the best choice? It's always better to understand things through examples, right? Imagine you're asking an AI tool to choose a movie for your movie night. If you say, pick a movie for me, the AI might suggest anything from a comedy to a thriller. It's like spinning a roulette wheel: you're not sure where it will land and whether you will like the result. Now, think about being more precise and saying, please recommend a heartwarming animated movie suitable for a family gathering and family film night. This time, the AI knows you're looking for something that brings smiles to everyone's faces, and it will consider movies like Finding Nemo or Toy Story, so your family can have a great film night. Your specific request gives the AI a better understanding of your preferences, just like telling a friend you're in the mood for pizza with extra cheese and pepperoni. So when you interact with the AI, it's very similar. If you ask, tell me about animals, you could get a wide range of information. However, if you ask, explain the unique hunting techniques of cheetahs and how their speed helps them catch their prey, you're steering the AI towards a more detailed and focused response. This way, you're increasing the chances of getting the information you're really curious about. The key takeaway: better
prompting, better results. Contextual information. Just as we draw from what we know and what we have experienced to make our conversations with our friends or with our co-workers richer, including background information in our prompts can act as a GPS for steering GPT's responses. Imagine you're trying to find a specific shop in a big mall. If you just say, tell me about the store, you might hear about any store in the mall. But if you say, tell me about the Apple Store, where they sell the newest iPhones and MacBooks, you're pointing in the right direction. Or imagine you're quizzing a friend about someone famous. If you ask, tell me about some actor named Emma, you might get details about any Emma in the show business world. But if you say, tell me about Emma Watson, you know, the brilliant actress from the Harry Potter movies, you're giving your friend context, and they will likely talk about the right Emma. You're just on the same page. Similarly, ChatGPT and other AI models don't have personal experiences or any knowledge like humans. But of course, AI is super smart at spotting patterns. So think of it like teaching a pet parrot to mimic your words. When you add context to your prompts, it's like showing the parrot the exact phrase you want it to repeat. And by doing this, you're helping GPT find the right pattern from its training and generate the most fitting response. So by adding context, you're essentially helping it choose the most relevant pattern to follow and increasing your chances of a high-quality, accurate response. Look, here is a huge difference. Vague prompt: what's the situation in Palermo, Italy? Contextual: can you provide the latest heat and wildfire statistics and guidelines for Palermo, Italy? Vague prompt: tell me about Saturn. Contextual: can you explain the physical properties and orbital characteristics of Saturn, the planet in our solar system? So, takeaway: sprinkle your prompts with detailed, clear directions and add context. Without this, you might end up with lengthy, vague responses that wander all over the place.
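If you build prompts in code, a tiny helper can force you to supply specificity and context every time. This is a hypothetical sketch; the function and its fields are invented here, just one possible way to structure it.

```python
# A hypothetical helper that bakes specificity and context into every
# prompt before it is sent to a model; a sketch, not an official API.
def build_prompt(task: str, context: str, constraints: str) -> str:
    return (
        f"{task}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}"
    )

prompt = build_prompt(
    task="Recommend a dog breed for me.",
    context="I live in a small house with two young children.",
    constraints="Suggest 3 breeds with one sentence of reasoning for each.",
)
print(prompt)
```

The point of the structure is simply that the task, the background, and the limits are always present, which is exactly what the treasure-map analogy above asks for.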
11. Setting the Tone and Writing Style: Setting the tone and writing style. GPT models can be extremely good at picking up the tone and style of your prompt. So if you're serious in your question and your prompt, you will likely get a serious answer back. But if your style is more casual or humorous, the AI can match that too. The GPT model is like a style chameleon. It adapts to the tone you set in your prompt. Imagine it's a conversation with a friend. If you're talking seriously, they will respond in the same manner. But if you are being laid back or funny, they will mirror that vibe too. Oh, think of it as dressing up for an occasion. When you're heading to a formal event, you put on a formal suit or a beautiful gown. But for a casual hangout, you slip into your favorite comfy jeans and a white t-shirt, right? Similarly, if you ask, can you explain the process of photosynthesis?, in a formal way, you will get a detailed and serious response written in the same serious writing style. But if you ask, break down that plant magic thing for me, with a playful touch, the AI's response will match your tone. For example, consider asking about superheroes. If you ask, please provide a synopsis of Batman's origins, you will likely get a neat and formal answer. We'll check that in a minute. On the other hand, if you ask, hey, spill the beans on Batman's superhero beginnings, with a wink, you will receive a response that's just as fun and casual. Both questions are asking for similar information, but the style of the response will likely be quite different. So let's see it in practice and analyze the difference. As you can see, this response is very serious. It's very formal. It's like a post on Filmweb or any other movie-related platform where there aren't any jokes, only fact-checked, serious data, serious information about our superhero. And let's check what we will get with the second prompt. So: hey, spill the beans on Batman's superhero beginnings. And as you can see, GPT mirrors the way we asked it for help, because the style is also not so humorous. We need to specify our needs if we want to get a very humorous answer, but it's much less formal. Your style sets the stage for the AI's performance. By aligning your tone with your prompt, you're like a conductor guiding a musical piece, and the AI harmonizes its response accordingly. I've prepared some examples to highlight how the tone and style of the prompt can shape the AI's response. Look. The formal prompt: kindly explain the fundamental principles of quantum mechanics, particularly focusing on the Heisenberg uncertainty principle. Here is the response we got. Now, let's look at the informal prompt: hey, could you make quantum mechanics make sense? I'm really intrigued by that Heisenberg uncertainty deal. In the end, quantum mechanics is a crazy but proven branch of physics. Yeah. Yeah, sounds good. Okay. Now let's look at the professional prompt: please offer a comprehensive overview of the changes in the European Union's fiscal policy and their potential impact on small enterprises. As you can see, the way you frame your question sets the stage for the AI's response in a real way. Just like how you approach a friend differently based on whether you're having a formal chat or a casual hangout, the AI adapts its answer to match the style you've set. It's really important to remember that when typing in your prompt, because depending on the tone you choose for your prompt, formal, informal, professional, casual, academic, conversational, persuasive, narrative, descriptive, technical, enthusiastic, sincere, humorous, sarcastic, witty, friendly, passionate, diplomatic, assertive, colloquial, layman, expository, you will get a somewhat different response from AI. Below is a tiny sketch of how the same request can be wrapped in different tones.
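A toy sketch, with made-up tone templates: the request stays the same, and only the tone wrapper changes, which is exactly the part that steers the style of the answer.

```python
# Invented tone templates wrapped around one and the same request.
TONES = {
    "formal":  "Kindly provide a formal explanation of {topic}.",
    "casual":  "Hey, can you break down {topic} for me in plain words?",
    "playful": "Spill the beans on {topic} and make it fun!",
}

topic = "the Heisenberg uncertainty principle"

# Print each tone-wrapped prompt; send them to your model and compare styles.
for tone, template in TONES.items():
    print(f"[{tone}] {template.format(topic=topic)}")
```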
So, key takeaways: basic prompts will only get you generic answers, and that's why we should upgrade our prompts. Care about providing specific, clear prompts with context. Also, our word choice and the tone of our prompts matter a lot. By thoughtfully selecting the way you express yourself when creating prompts, you can steer GPT or other AI models toward generating responses that align with the context, audience, and purpose of your prompts. Whether you're aiming for a professional, academic, technical, or relaxed interaction, your choice of tone and style matters. Successful prompt crafting calls for specificity, adequate context, appropriate tone, and sometimes clever role-playing. And we will talk about that in one of the next chapters, along with examples and even more practical considerations. So stay with me and let's go.
12. Prompting Techniques: Role-Playing Technique: Prompting techniques. Effective prompt engineering involves various techniques to optimize the output we will get from GPT. Now we are about to dive into three big ideas in the world of crafting prompts, three prompting techniques. First, we'll focus on the role-playing technique and few-shot learning. They may sound like black magic at first, but don't worry. We will make it simple and easy for everyone. And you will be surprised by the way these techniques can change the quality and accuracy of the results we get. Actually, here is the most fun part: crafting prompts to get awesome responses and using different techniques to get different results. So imagine prompt engineering like a learning adventure. You will also learn by practice every time you chat with GPT. Whenever you have a chat, it's like gaining wisdom on how to create even better prompts, because you gain new observations, and it all comes from practice. Think of it as upgrading your AI conversation strategy. It's like gaining experience points in a video game of chatting, and now we're going to discuss the techniques to speed up the process. Role-playing technique. This one is really exciting. The role-playing technique is extremely powerful and is very useful in almost every situation, in almost every case. It is an interesting approach that involves treating the AI model as a character in your given scenario, which very effectively integrates aspects of specificity, context, and tone. Let's say our prompt sounds like this: you're the head chef teaching a novice cook how to create a gourmet meal. This role-play technique creates a tailored context, an experienced chef instructing a beginner, and establishes a fitting tone, friendly yet informative. So through this strategy, you're steering the AI toward a specific path, resulting in responses that are laser-focused on the right target. Now, let's observe how AI will handle different roles when we tell it to act, and here are the results. As you can see, we got a lot of strategies to deal with stress, and I think we could really get those from a professional psychologist. Imagine you're a technology expert simplifying the concept of blockchain for a non-technical audience, emphasizing its security features and real-world applications. You're an assistant chef explaining the technique of baking the perfect cheesecake to a novice cook; it's very similar to our first prompt. Making the perfect cheesecake is an art, and I'm here to guide you. Okay. Here comes the guidance. You're a stand-up comedian performing a hilarious routine about the quirks of modern technology, blending observational humor with witty anecdotes. Wow, it's really funny. It's like a roast of the modern world. Act as a caring parent giving advice to your teenager about making responsible decisions at a party, discussing peer pressure and personal values. Imagine you're a space scientist briefing astronauts on the preparations needed before launching into space. You're a detective in a crime novel; provide a theory about the mysterious incident that took place at the airport. Act as a high school biology teacher explaining the process of photosynthesis to your students, using diagrams and relatable examples. Pretend you're a fitness coach giving a pep talk to a client who feels demotivated about their progress. Act as a tour guide explaining the historical significance of the Roman Colosseum to a
group of tourists. So as you can see, AI really performs well
in these tasks. When you ask it to pretend it's a travel guide
describing a new city, you basically turn it into
your creative tour guide. I've seen many bloggers and influencers creating their e-books and guides with the help of AI. So it also holds huge potential for product-based AI businesses, although, as always, I don't recommend relying only on AI. I recommend using it as your writing partner, your brainstorming buddy. But I wouldn't advise copy-pasting the AI content into e-books or other digital products. So if you want to do this, edit the output and add your own storytelling. You know, that's my approach. Why will the role-playing technique give you better results than regular prompts? From my experience, this technique helps you get the best results from GPT. When you assign it a role, you get much more appropriate responses to your prompts. Asking ChatGPT a question will always get you a response of some sort, but its relevance, tone, and level of detail might not be suited to your needs, to your requirements. This can be easily changed by framing your question within a role. Thus, assigning a role to ChatGPT really changes the output. As always, let's see it in practice. Let's ask ChatGPT this question: can you explain how the moon works? Okay, and here is the result
we got, as you can see. The result is quite formal,
it's quite serious. We did it without
assigning a role. And the answer goes into some detail about gravitational interaction, orbit and rotation, and tidal effects. But what if your audience was a class full of six-year-olds? This is where assigning a role can definitely help adjust the result. So let's do this one more time, and this time assign ChatGPT a role, for example, the role of a teacher. So the prompt will be this: act as a primary school teacher; you are teaching a class of six-year-olds. Can you explain the way the moon works? As you can see, assigning this role really changed the output. Now it's much better and
you can use it right away. The role-playing technique makes the AI pretend to be a certain person or behave in a certain way, and it modifies the tone, style, and depth of the information presented based on the assigned role. When it comes to the depth of the information, let's illustrate it by asking ChatGPT to write a coffee place review for us. The difference will be huge. Wait for it. So the first prompt is this. The result sounds friendly, and I really like it, but what can we do to take it to another level and add more details to it, as we don't want our review to sound so generic? Yes, we will assign a role, and this time it will be the role of a coffee critic and blogger. So the prompt will be this: you are a professional coffee critic and blogger. Write a review of (here you insert the coffee place of your choice; I chose one from my neighborhood, which I really love). And as you can see, this review is much more advanced. The AI has added details to it, and it also sounds much more serious. Now, let's see what will happen if we ask it to act as a professional coffee critic and blogger writing an article for Vogue Italia. So our prompt will be this: you are a coffee critic and blogger writing for Vogue Italia. Write a long, emotional review of our coffee place. Okay. Now the review sounds really intriguing, and I don't know about you, but for me, it sounds much more touching and interesting than the two previous options. So, key takeaways. Use the role-playing technique to get more personalized results, style the text, and improve its accuracy. The accuracy of the result can be significantly improved with the role prompting technique. The role-playing technique makes the results much more suitable for a specific context and target audience.
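By the way, if you ever want to reuse this technique outside the chat window, the same idea carries over to code. Here is a minimal sketch, assuming the OpenAI Python SDK (v1+) with an API key set in your environment; the model name is just an example, not part of the course material. The role goes into the system message, exactly like the "act as" phrasing we used above.

```python
# A minimal sketch of the role-playing technique via the OpenAI Python SDK.
# Assumptions: the openai package (v1+) is installed, OPENAI_API_KEY is set,
# and "gpt-4o-mini" is used purely as an example model name.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # The system message assigns the role, like "Act as..." in the chat window.
        {"role": "system",
         "content": "Act as a primary school teacher teaching a class of six-year-olds."},
        # The user message is the actual question.
        {"role": "user", "content": "Can you explain the way the moon works?"},
    ],
)

print(response.choices[0].message.content)
```

Changing only the system message (to "a professional coffee critic and blogger writing for Vogue Italia", and so on) is enough to swap the persona without touching the question.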
13. What Are Zero-Shot Prompting and Few-Shot Prompting: What are zero shot prompting
and few shot prompting? Now you will learn zero-shot, one-shot, and few-shot prompting. If you talk with an AI enthusiast, you will often hear the terms zero-shot prompting and few-shot prompting. Or maybe you have already heard them. To understand these techniques, we will need to go back to how a large language model generates an output. In a moment, you will learn what zero-shot and few-shot prompting are and how to experiment with them using GPT. Zero-shot technique. We will learn one-shot and few-shot prompting too, but let's start with the zero-shot technique, as it's the most basic one. Officially, zero-shot prompting enables a model to make predictions about previously unseen data without the need for any additional training. But let's make it easier. Let's make it sound less complicated. Using zero-shot prompting is all about giving the model a simple task. You just show it a prompt without any examples and ask GPT, or any other AI-powered tool, to come up with an answer for you. And this is important: all the instructions and role-playing scenarios you've seen in the previous lessons are examples of zero-shot prompts. It works like this. We just give the large language model a task to complete without any examples, and the model will then guess what we want based on its own training and the way it interprets our prompt.
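As a rough sketch of what that looks like in code, with the same assumed SDK setup as before, a zero-shot prompt is literally just one user message with no examples attached:

```python
# Zero-shot: a single plain instruction, no examples, no template.
# Same assumptions as the earlier sketch: openai v1+, example model name.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Summarize the main idea of the following paragraph: <your text here>",
    }],
)
print(response.choices[0].message.content)
```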
Let's see how zero-shot prompting works with an example. So here is my zero-shot prompt: write me a description, with adjectives and nouns, of an ice queen walking in the winter landscape of France. When zero-shot doesn't work the way we would like it to, and the result doesn't match our expectations, it's a smart idea to provide demonstrations or examples in the prompt, which leads us to one-shot prompting. In a second, we will discuss the way we need to modify the prompt to turn our zero-shot prompt into a one-shot prompt.
One-shot technique. One-shot prompting is used to generate a more accurate response by including additional data in the input, in our prompt. This additional data can be a single example or a template. What's important: only one example. That's why it's called one-shot. We provide only one example or only one template. So do you already have an idea of what we can do, what we can add to the prompt from the previous lesson, to turn it into the one-shot prompting technique? To remind you, our zero-shot prompt was this: write me a description, with adjectives and nouns, of an ice queen living in the winter landscape of France. Yes, I will type in one example of the output structure I would like to get back from ChatGPT. The AI will then interpret what I want from it based on this one example and this one-example training. To use this one-shot technique, our prompt will look like this: write me a description, with adjectives and nouns, of an ice queen living in the winter landscape of France, like this. Here we have the example that we want ChatGPT to read and interpret; we just want ChatGPT to be trained on this example to provide us a very similar output in this template. So here's our example: appearance (long blond hair, blue eyes, an enchanting figure; her clothes are adorned with delicate snowflake motifs), the description of character, the description of superpowers, the description of weaknesses. So we want to use this template. Here's our result. As you can see, it used the structure I gave it, and now the result is much more structured, and I've got exactly what I wanted. One-shot prompting is the best way to show GPT the direction in which we want it to go. Now I have a much better, much more detailed result. Here's our key takeaway: with one-shot prompting, we show the model only one complete example, to guide it, to train it on our example or on one template.
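Here is that one-shot prompt written out as a reusable string, a sketch you could pass as the user message in the earlier code. The exact wording of the template is reconstructed from this lesson, so treat the details as illustrative:

```python
# One-shot: the prompt carries exactly one example of the output structure,
# so the model can mirror it. Sections follow the template from this lesson.
one_shot_prompt = """Write me a description, with adjectives and nouns, of an ice queen
living in the winter landscape of France, like this:

Appearance: long blond hair, blue eyes, an enchanting figure; her clothes are
adorned with delicate snowflake motifs.
Character: ...
Superpowers: ...
Weaknesses: ...
"""
```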
Few-shot learning technique. The next prompting technique is called few-shot prompting, and it is also known as in-context learning. It's very simple. It's as simple as incorporating several examples into your prompt to provide the AI tool with a very clear picture, even clearer than with one-shot prompting, of what you want to receive from it. So to put it simply, few-shot prompting is a technique where we type in a few examples, typically 2-5, so we can get better results quicker and better adapt GPT to the results we want to get. Because when we add examples to our prompt, the model will understand our requirements, what we want and what we need, much better. For example, if we say that we'd prefer the description in bullet-point format, it will mirror our template. And that's interesting: when we add a few examples, the chances of getting exactly what we want are even higher. So take a look at how this method works
with this example. Here is the beginning of the prompt: classify the sentiment of the following sentences as positive or negative. First example: sentence, I love this coffee; sentiment, positive. Second example: sentence, the ice cream I ordered was terrible; sentiment, negative. Third example: sentence, the cold brew beans were extremely tasty; sentiment, positive. Fourth example: sentence, I had a terrible experience with the bartender there; sentiment, negative. Then we give ChatGPT the sentence we want it to classify based on the previous formula we gave. Here's the sentence to classify: the Panama beans presentation was incredibly boring. And of course, ChatGPT got it right, because it knew the way we wanted to classify the sentence and it already knew the rules of this classification.
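To make the structure of that prompt fully explicit, here is the same sentiment example as a Python string, a sketch you could send as the user message in the earlier code:

```python
# Few-shot (in-context learning): several labeled examples come first,
# then the sentence we actually want classified, left unlabeled.
few_shot_prompt = """Classify the sentiment of the following sentences as positive or negative.

Sentence: I love this coffee. Sentiment: positive
Sentence: The ice cream I ordered was terrible. Sentiment: negative
Sentence: The cold brew beans were extremely tasty. Sentiment: positive
Sentence: I had a terrible experience with the bartender there. Sentiment: negative

Sentence: The Panama beans presentation was incredibly boring. Sentiment:"""
```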
In this example, the few-shot prompt provides the AI model with a clear task (it's sentiment analysis) and additional instructions that include the exact patterns of the desired result, the desired output from GPT. By using this few-shot technique in this prompt, the AI model is guided to generate a more accurate classification for the sentence we wanted to classify, so for this sentence. It's like we are teaching the model exactly what we want, and we are showing it the patterns that are important for us, the patterns it will need to use when giving us the result. In a moment, we'll discuss different situations where using this technique can be incredibly helpful, and I will show you even more practical uses for your everyday life, whether personal or maybe in your day job. Key takeaways for now: the few-shot prompting technique is also known as in-context learning. It involves giving a model a few examples or templates showing how to perform the task.
What is the difference between zero-shot, one-shot, and few-shot prompting? You already know it, but I want to summarize what we've just learned to make sure it stays in your mind for a longer time, and you know exactly what the difference is about. Let's go. Zero-shot prompting is where the AI does the task we want it to do without any additional training, without us providing any examples or templates, just like that. Prompt: translate the following English text into Japanese (our text: why can't summer last all year long?). Here is the output from ChatGPT. The task was very simple, so we didn't need to add any examples or templates to guide ChatGPT on how to perform the task we wanted it to do. We used the zero-shot prompting technique because the model didn't need any example to perform such an easy task. It can understand and execute tasks like these without having any explicit examples of the desired methods, patterns, formats, or templates. It's just a really simple task, and we don't feel the need to add any more details, examples, or templates. And what else can we use the zero-shot prompting technique for? Actually, a lot of things: easy things where the examples or templates just aren't needed. And here is another example where zero-shot prompting is the best idea. The prompt: summarize the main idea in the following paragraph. Here we give ChatGPT the text we want it to read, and we get the output. We didn't have any desired template or any special requirements we wanted to give ChatGPT. In these examples, our model is given clear instructions, very simple, clear tasks, without any examples or demonstrations. The tasks were so easy that the model could understand them and generate appropriate outputs that will most probably meet our needs. However, as you already know, zero-shot prompting may not always give you accurate or desired outputs. Then one-shot prompting will be a much more
effective approach, especially for more complicated tasks, because by providing the model with demonstrations and examples, it can better understand what you want and then perform the task more accurately. So when it comes to the previous example with summarization, it's always up to you, and it depends on a few factors, mostly whether you have specific needs or not; you can choose between the zero-shot prompting technique, one-shot prompting, or few-shot prompting. It's always up to you. A pro tip: try comparing the results and see the difference for yourself, because that's really interesting. And I think it's a really interesting experiment to notice the way the output changes based on the way we change the prompt. So do it. When it comes to text summarization, actually, few-shot prompting can also be super useful, as this method can improve your text summarization by providing examples of well-summarized content, summarizations you really liked. This will help the AI generate more informative summaries that will be very similar to your examples. So one-shot prompting involves a single example
or a single template. This means that when you add one example or one template to your prompt, that's the one-shot technique. Use one-shot prompting when you want to nudge the model, nudge ChatGPT, in the right direction without overwhelming it with many examples, like that. The prompt: translate the following English sentences into French, Italian and Japanese. Here's an example, and here we provide the template, the formatting we want to get. Here's the example: I want two cappuccinos, and in the example French is the first language, Italian is the second language, and Japanese is the third language. And knowing how we want the format to look, now translate: don't add any sugar inside, please. And here is the result we got. As you can see, the formatting is the same. As you can see in this output, GPT noticed the template, the pattern we provided it with, and the result already has the right pattern, the pattern we wanted. That way, we can save time and have the result structured the way you need it. You don't have to do it manually later.
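Spelled out as code, the prompt above might look like this sketch. The translations in the example slot are left as placeholders, since the exact worked example was shown on screen:

```python
# One-shot formatting: a single worked example pins down the output layout,
# then we ask for a new sentence in exactly the same layout.
translation_prompt = """Translate the following English sentences into French, Italian and Japanese.
Here is an example of the format I want:

English: I want two cappuccinos.
French: ...
Italian: ...
Japanese: ...

Now translate: Don't add any sugar inside, please."""
```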
With just this one example, the model can grasp the essence of the task in our prompt and then generate the desired response. And this is incredibly powerful, because it allows you to easily fine-tune the behavior of the model without, you know, extensive training. Just one example, just one example with the pattern or with the template. So when to use the one-shot prompting technique? In simple, uncomplicated tasks: for tasks that are relatively straightforward, one-shot prompting can be enough for guiding the AI model effectively. Familiar tasks for the model: if you already know that the task is within the scope of our AI model's training data, and it has demonstrated success with very similar tasks, one-shot prompting may provide adequate context for generating the high-quality responses you want.
Then few-shot prompting means using a few examples, for example two, three, four, or maybe even five. Few-shot prompting is a really effective strategy that can guide the AI to generate high-quality, highly accurate, and well-structured responses, structured just the way you want. It will be beneficial when dealing with more complex tasks, where providing a range of examples helps the model better understand the desired outcome. These examples, also called demos, if you want to know the official term, enable the model to identify and generalize the pattern from the few instances we provided it with. Just like that, look. Prompt: I give you a topic and you reply with a bullet-point list like in these examples. Topic: here we put our examples, which have a very visible structure, and we want GPT to be trained on these examples. Here is the result we get later, when GPT has already analyzed the examples we've typed in. From this example, you can see that the model somehow learned how to perform the task because we provided it with these three examples. By carefully curating these well-chosen examples, we can steer the model in the right direction. Then we won't have to modify the output or regenerate it, because we can have an amazing result right from the start by providing these examples. So here is the key takeaway: use few-shot prompts when a single example might
not be enough for guiding the model, or when you want to demonstrate a pattern or trend in a few examples. And here I've prepared a little comparison. You can take a screenshot of it, so you will always remember the difference between those prompting techniques and their biggest advantages, and you won't forget when somebody asks you: what's the few-shot prompting technique? You will know how to answer. And I think this knowledge isn't good only in theory; it's very practical knowledge. Even if you forget that it's called the few-shot prompting technique, you need to remember that you can provide a few examples to show ChatGPT what you want from it. This is the biggest power you can have. That way, you can level up the output and get the desired results so much quicker. So I just can't stress it enough. It's so powerful. Let's sum up when few-shot prompting can
be extremely useful. First, complex tasks: for tasks and prompts that require a deeper understanding of patterns, or when you are dealing with less common topics, few-shot prompting will help the model by providing a few examples to learn the structure and context much more effectively. Then, it's really helpful for tasks less familiar to the model: if the task is not well covered within the model's training data, or the model struggles to generate accurate responses with just one example, you will see, you really will see, that few-shot prompting will improve the AI's understanding of the task, and you will get much better output. Higher accuracy needs: when you need higher accuracy or more contextually relevant responses, providing two or maybe even up to five examples will improve the model's performance by emphasizing the pattern, tone, writing style, or context required for the task. Do you want some real-life examples to see it all in practice?
Sure. Here we go. So few shot prompting can be your game changer in these
cases, for these purposes. Creative writing and content generation: we can apply few-shot prompting to creative writing and content generation tasks, such as generating stories, articles, essays, or marketing copy, by providing examples of the desired writing style, tone, and structure. Next, template-based content generation: when generating content based on specific templates, such as contracts, business reports, or legal documents, few-shot prompting can help ensure that the model generates text that complies with the required format, structure, and language. Providing examples of properly formatted documents will help the model generate content that meets the established norms of the specific domain when you need it to. Here we have the
details of the formatting we need GPT to learn from the examples. Code generation: you can use few-shot prompting to enhance code generation tasks by providing demonstrations and good examples of the desired output for a given input. Yes, this will help the model generate more accurate and efficient code based on the context you provided. And then we can use this method for data extraction and formatting. You already know it: in tasks where information must be extracted from unstructured text and presented in a structured format, for example as tables, lists, or key-value pairs, you can use few-shot prompting to guide the model in generating the desired output. These examples of formatted output will help the model understand the structure it should apply to the result while extracting and organizing the relevant information from the text.
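Here is a small sketch of what such an extraction prompt could look like. The texts, names, and field labels below are made up for illustration, not taken from the course:

```python
# Few-shot extraction: the examples teach the model to pull structured
# key-value pairs out of free text. All names below are hypothetical.
extraction_prompt = """Extract name, city and profession from the text as key-value pairs.

Text: Anna, a baker from Lyon, opened her shop in 2019.
name: Anna | city: Lyon | profession: baker

Text: Marco moved to Milan, where he works as an architect.
name: Marco | city: Milan | profession: architect

Text: Sofia is a nurse living in Porto.
"""
```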
And as you can see, few-shot prompting really is a game changer. Fun fact: many people I talk with didn't expect that models like ChatGPT could give you such fantastic, high-quality results when prompted with this technique. So I'm really curious about your opinions. How do you feel about these prompting techniques? Which one do you already know you will use most often? Let me know in the comment section, in the discussion section.
14. Chain Of Thought Prompting Technique: Chain-of-thought prompting technique. Here we go with the last technique for today. Is it useful? Yes, it is, especially for more complicated tasks. It's really good to know. This method encourages the AI to reason through complex problems or more complicated tasks by asking it to list the steps it took to reach the answer. It's really powerful.
For example, instead of directly asking the AI to write a blog post on a specific topic, you can first request an outline or bullet list of key points to include in the post. Once the AI provides the list, you can then ask it to write the introduction following the provided structure. This logical step-by-step workflow will help generate more coherent and well-organized outputs, and you will be amazed by the results. This is called the chain-of-
thought prompting technique. And this prompting technique, when used in the context of writing prompts for a language model like GPT, is all about gradually building up complexity or specificity in prompts to guide the model's responses.
Okay, okay, let me explain this technique in a more easy-to-understand manner. Think of the chain-of-thought technique like building with Lego blocks. When you start building something with Lego, you don't immediately jump to the most complicated structure. You begin by connecting a few basic blocks, and then you add more and more blocks to create a complete model. In the same way, when you want to get a detailed or very specific response from a language model like ChatGPT, you don't start with a complex question right away. That's what this prompting method is all about. Instead, you can build up your question step by step, adding more details and more context with each step. Trust me, this helps the language model follow your line of thought and give you the response, the output, you're looking for. In essence, the chain-of-thought prompting technique is like constructing a staircase of information that guides the language model towards a specific type of response, just like how you build up a Lego model step by step. And the chain-of-thought prompting method is a style of few-shot prompting where the prompt contains a series of intermediate reasoning steps. But I know, I've got
to show you how this technique looks when put in practice. At its core, chain-of-thought prompting is all about guiding the AI tool, the large language model, to think step by step. Look, here is an example of this method for solving math problems. Do you see how we are guiding the AI step by step to get the response, the final response, without losing track of the steps the AI needed to take? Here is an interesting fact. Here's how they
describe this chain-of-thought prompting technique at Cornell University. I just thought it would be interesting for you, so I decided to add it here. I always like to explain everything in my own language, in my own words, but I also love discovering how very, very wise people put it into words, for example at Cornell. Here's the way they did it. Here's the exact difference between standard prompting and chain-of-thought prompting. As you can see, in many cases, for example when solving math problems, we can get the right, relevant, and accurate result only by using chain-of-thought prompting. Because look: with standard prompting, we will get the wrong answer, because ChatGPT just won't divide the task into the few steps that are needed to give you the correct answer, and that's why the answer from the chain-of-thought prompting technique is the good one. Now let's test it out
with my examples. I need to tell you that we can also use the chain-of-thought method for casual tasks too, by providing these step-by-step instructions. For example, this is how you can ask ChatGPT for a film recommendation. That's a very funny example, but yes, that way you will ensure that the model knows your taste, your preferences, and that the whole process will be really carefully processed. Now, another example. The standard prompt, without the chain-of-thought prompting technique, will sound like this: imagine you're planning a
road trip with your sisters. You want to calculate
the total cost of fuel for the trip. The distance between
your starting point and destination is 100 miles, and your car's average fuel efficiency is 50 miles per gallon. The current price of fuel is $4.50 per gallon. Calculate the estimated total cost of the fuel for the trip. That would be the standard prompt. This is how it would sound. Here is the same prompt, but with the chain-of-thought
prompting technique. Imagine you're planning a road trip with your sisters; you want to calculate the total cost of the fuel for the trip. The first part is the same, but then: give me a response following this pattern. First step: to calculate the total cost of fuel, we need to determine the total number of gallons of fuel required for the trip. First, let's calculate how many gallons of fuel are needed to cover the entire distance. We divide the total distance in miles by the car's average fuel efficiency in miles per gallon. Second step: since we can't have a fraction of a gallon, we need to round up to the nearest whole number. Therefore, the car will require approximately ... gallons of fuel for the entire trip. Third step: next, we multiply the total number of gallons by the price per gallon to find the total cost of fuel. Therefore, the estimated total cost of fuel for the road trip is ...
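Just to sanity-check the arithmetic with the numbers given above (100 miles, 50 miles per gallon, $4.50 per gallon), here are the three steps spelled out in code:

```python
# The three chain-of-thought steps from the prompt, with this lesson's numbers.
import math

distance_miles = 100
miles_per_gallon = 50
price_per_gallon = 4.50

# Steps 1-2: gallons needed, rounded up (100 / 50 = 2.0, already a whole number).
gallons = math.ceil(distance_miles / miles_per_gallon)

# Step 3: gallons times price per gallon.
total_cost = gallons * price_per_gallon
print(f"Estimated fuel cost: ${total_cost:.2f}")  # $9.00
```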
I think this is a very good example of the one-shot technique mixed with chain-of-thought. This is a very good example of guiding ChatGPT, and a very good example of using chain-of-thought for, for example, math problems, or actually casual problems, when you want to get the right answer and you want to make sure GPT understands the steps it needs to take to give you the correct answer. So as you can see,
chain-of-thought prompting is a technique that involves breaking down complex tasks into a series of interconnected prompts. Instead of relying on a single output, the model is guided through a sequence of prompts that refine and build upon each other. By doing so, the model can better understand your intent and produce more accurate and contextually relevant output. In contrast to a simple prompt, a chain-of-thought prompt instructs the model to break down complex problems into smaller steps and to produce its reasoning along with the final solution. And that's great, because that way we can follow along and see if the answer is accurate. Also, that way we can better understand how the AI calculates or understands things, and we can easily tell if the answer is right or wrong, because we understand the steps. Chain-of-thought prompting breaks down problems, and it gives you more interpretable answers. Here are our key takeaways from this chapter about the chain-of-thought technique. The chain-of-thought prompting technique is all about guiding the model to think step by step. It simply breaks down problems. This technique helps you get more interpretable answers, because by guiding the model through a sequence of prompts, you increase the chances of obtaining accurate and relevant responses.
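If you want to apply the same step-by-step idea programmatically, one option is to chain two calls, feeding the first answer into the second prompt, along the lines of the blog-post example from the beginning of this chapter. A minimal sketch, with the same SDK assumptions as before and a made-up topic:

```python
# Chain-of-thought style workflow: first ask for an outline, then ask for an
# introduction that follows it. Topic and model name are illustrative only.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

outline = ask("Give me a bullet-point outline of key points for a blog post about cold brew coffee.")
intro = ask(f"Following this outline, write the introduction of the blog post:\n\n{outline}")
print(intro)
```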
15. How Can You Always Get the Best Results?: How can you always, always get the best results? Yeah, that's the most important thing, because no matter if you use the zero-shot, one-shot, or few-shot prompting technique, or maybe chain-of-thought, there are a few things you need to remember to get absolutely the best quality responses from AI, like always. Let's discuss the key ones to level up all your prompts to get the best results. Define your needs. Yes, if you want ChatGPT or any other model to produce
some creative writing, then you will achieve far
more impressive results by giving it the relevant information and context. In this instance, you can refine the output by adding
information about the intended use
of the output and some details about
your target audience. So define and describe who your target audience is and what your business, your brand, your profile, or your project is about. Always be specific. For example, instead of just saying fashion industry, specify sustainable linen lingerie for women. Highlight your unique
selling points. If your business or your personal brand has a unique angle, you
need to mention it. For instance, write handmade
cookies baked without sugar rather than just handmade
cookies or only cookies. Include information about
your target audience. Who will read the output? You can include key
information like demographics, age,
gender, location, occupation of the reader,
psychographics, interests, behaviors, values, pain
points, and needs. Highlight what
problems your product, your service, your project, your blog, solves for
your target audience. That way, ChatGPT will be able to better understand your needs and what exactly to highlight in the output it will generate for you. And this is also very important: define your
social media platform. Look, if you want to write
content for social media, it's important to include information about your
communication channel in the prompt, as each social media platform has its own criteria, which must be met. For example, Twitter has a totally different character limit than Instagram, and posts on LinkedIn need to have a totally different tone than the posts you want to publish in Instagram Reels, for example. That's a totally
different format, totally different tone,
totally different purpose. You need to mention
it in your prompt, for example like this, at the end of your prompt. You can also add more custom instructions. I've noticed that many times, adding these custom instructions will help you achieve much better quality in the response. What kind of custom instructions? I will give you my favorites, so experiment with these. Be highly organized and use bullet points; provide detailed explanations; I'm comfortable with lots of in-depth detail, but explain it in an easy way; suggest solutions that many people wouldn't think about; discuss safety only when it's crucial and non-obvious; if the quality of your response has been substantially reduced due to my custom instructions, please explain the issues. These custom instructions will help you get so much better responses from GPT in so many cases. So experiment with these. Really.
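In the ChatGPT app these go into the Custom Instructions settings, but if you work through the API, the natural place for them is the system message. A minimal sketch reusing the instructions listed above; the user question is just a made-up example:

```python
# The custom instructions from this lesson, packed into a system message.
from openai import OpenAI

client = OpenAI()

custom_instructions = (
    "Be highly organized and use bullet points. Provide detailed explanations; "
    "I'm comfortable with lots of in-depth detail, but explain it in an easy way. "
    "Suggest solutions that many people wouldn't think about. Discuss safety only "
    "when it's crucial and non-obvious. If the quality of your response has been "
    "substantially reduced due to my custom instructions, please explain the issues."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        {"role": "system", "content": custom_instructions},
        {"role": "user", "content": "How can I promote my handmade cookie brand?"},
    ],
)
print(response.choices[0].message.content)
```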
Now, let's summarize best prompting practices. Don't be afraid to experiment: try different approaches and different techniques, and iterate, gradually correcting the model and taking small steps at a time. In case of too-short outputs, ask for multiple suggestions and edit your prompt to get better results. Keep an outcome-focused mind and ask yourself: which technique will provide me the best results in my case, with my problem? Ask yourself this question each time, so you can use the best prompting technique. Provide examples: if possible, show the model examples that represent your desired tone or desired formats. When zero-shot doesn't work, try one-shot or few-shot prompting. Always remember: good prompts result in more focused, relevant, and desirable outputs. And last but not least: provide clear instructions, always incorporate relevant context, and iterate and refine the prompts based on feedback and evaluation.
16. Resources for You: Hey, everyone. This is
Kate from the future. I know, time travel is real. Who knew? You might notice something a little different today. Yes, I'm wearing a completely different blouse than I did while recording the rest of the course. Why? Well, let's just say that my laundry room is a bit of a war zone at the moment. But I promise, what is extra organized is the extra goodies I created for you. So here's the deal. I went ahead and put together two workbooks to help you get even more out of the course. Because I know some
of you like to go above and beyond when
it comes to learning. And honestly, I'm
right there with you. I wanted to make sure
you have everything you need to really dive in and practice the skills we've
been covering because we both know that learning
happens when you do. So the first workbook
is my gift to you. A little thank you for being
part of this journey. It's packed with some
extra examples and summaries of what we've
covered in the course so far. Think of it as the companion to the course
that helps you grasp the key concepts and gives you a space to practice
because let's be real. The more you practice, the better you will get at using those prompting
techniques. So download it right there
in the resources section. But wait. There is more. I've also created
a second workbook for those of you who are like, Okay, Kate, I love the course. But I want more. I want to see how I can really use these prompting techniques
in my everyday life. Whether that's for my creative projects, my professional work, or just brainstorming that genius idea I've had in the back of my mind. So the second workbook is filled with even more practical examples and tasks. I'm talking about zero-shot, one-shot, few-shot, and chain-of-thought prompts that can really help you in both creative and professional settings. It's got everything from writing prompts to help you write the novel, the one you've been thinking about for ages, to idea generation for growing your personal brand, or managing your business like a boss. So basically, it's the let's-take-this-to-the-next-level guide in a nutshell. And while this one is paid, I've made sure to keep it super affordable because
I want you to have access to all this juicy
practical goodness without breaking the bank. So whether you're grabbing the free workbook to reinforce
what you've just learned, or you're ready to dive into the paid workbook for even more examples and hands-on practice, I've got you covered. I'm really, really excited for you to explore these workbooks, because I know how powerful it can be to apply what you've just learned, especially when it comes to something as dynamic as AI. Whether you are here to boost your creative writing, grow your online presence, or just level up your professional game, these prompts will help you get there.
17. Final Words and My Question to You: Final words. The skills of ChatGPT and other large language models are only going to expand. But the golden basics will
always remain the same, so don't hesitate to experiment with these
techniques whenever needed. And I have to say, I'm so proud of you for
finishing this course. Good job for both of us. I really hope you are going to implement the techniques
we've talked about. And thanks to them, you will level up your processes and both your personal and business life. Also, of course, don't hesitate to ask questions
if you have them. Every question is
more than welcome. So if you have any questions or comments, share your feedback, share your questions, ask me your question in the
discussion section. That's what it's here for. And if you enjoyed the course,
me extremely happy, please review the course and post what you think about the course in
the review section. If you don't have time, it
can be only one sentence. For example, I enjoyed
this and this. I think this chapter was the most interesting one. Why are reviews so
important for me? Because that way, thanks to you, I will be able to reach
more people who might need my help and who
might need my course. As the more reviews
the course has, the better visibility it gets. Also, please tell me what you would like to see more of, or maybe less of, in my next courses. Or maybe there are some topics or some techniques that you are dying to learn. Let me know; I can't
wait to hear from you. So see you there and see
you in the next one.