Transcripts
1. Introduction to Prompting Masterclass: Most people are just throwing ideas at ChatGPT and hoping
it gets it right. But here's the deal.
Prompting is a skill. And in this class,
I'll teach you how to use it like a superpower, so you get better outputs, faster results, and way
less AI frustration. You'll learn how to
structure prompts that actually guide the
AI, not confuse it. We'll cover personas,
tone, structure, and walk through my brain
framework so you can break down any task into
prompt ready steps, whether you're writing,
building systems, or just want ChatGPT to stop giving you weird answers. This class is your fix. We'll tackle common mistakes, decode what ChatGPT really
needs from you and build reusable prompt templates that
save you time every week. From beginner basics to
advanced strategies, you'll walk away with
total prompt confidence. If you've ever thought, why doesn't ChatGPT
get what I mean? This is the course
that fixes that. By the end, you won't just
use AI, you'll command it. Let's turn prompting into
your unfair advantage.
2. Prompt Essentials - Crafting Prompts: The art of prompting. A prompt is how you talk to the AI. It's the instruction that
tells it what to do. AI is highly intelligent, but it's not a mind reader. Think of prompts as directions
for a virtual assistant. Good prompts equal better,
consistent results. Bad prompts equal vague, generic results, or results that are totally off from what you expected. By the end of this lesson,
you'll know how to craft effective prompts
that work every time. You'll learn how to
identify bad prompts, write clear specific prompts, and use building blocks for
a repeatable framework. Let's get into it. The task. Start with a clear
action of what you want, keep it simple and action based. Use action verbs. These are things like
asking it to write or explain or summarize or list
or reword or translate. Be specific, using clear, direct language. This helps the AI stay on track, and it avoids it inferring what you might want, which could actually be incorrect. Then one pro tip is to guide the AI on what you do want instead of
what you don't want. By this, I mean it can sometimes fixate on negative things and include elements of them in the response. So try using positive phrasing: instead of saying, don't give me a long summary, say, give me a concise summary. The role. Assigning a role is a key part of good prompting.
What is a role, though? A role assigns a persona
to the responses. So it's able to shape the tone. It's able to give
contextual perspective, and it's able to give responses
in the appropriate voice. An example of this is
terminology that may be used for a specific
industry or topic. And this is especially
important when you're creating content for
your target audience. Think of roles as narrowing down the expertise so that you're able to add the clarity and perspective that is needed for that expertise. It also means that
the responses are given back in the
appropriate voice using the correct terminology for that industry or expert. Some example roles are things like a customer
support agent, where you would say, you are
a customer support agent, apologize for a delay, or you are a doctor, list three future health
risks based on this history, or even other roles
like a teacher or a business or life
coach things like that. There is no fixed list of
all the available roles. It is up to you to just
look at what is going to be the best expertise
that you can tap into for a particular
purpose or prompt. And we dive into roles and personas in more detail
in an upcoming lesson. Building block number
three, giving examples. AI absolutely loves
patterns and examples. It just shows it how
you want it to respond. To do this effectively, show, don't tell. So by that, provide one to two examples for a complex task to help demonstrate the pattern you want it to follow. How many examples? Well, there's no magic number, but generally speaking, one to two is enough for it to get a good idea of what you're wanting. Of course, if you have
a more complex task or there's multiple
steps involved, then it might be a good idea
to include more examples, especially if the context and the request changes
through the prompt. So if you have a multi-step process and each step needs a different example, that's where you would add in a few others. Examples are even more important when it comes to
complex tasks because they clarify the expectations
and they give the AI a very clear guide as to how
it needs to respond to you. If we have a look
at an example prompt, this would be: as a copywriter, write ten short, catchy Instagram captions
for a fitness brand. Keep the tone motivational
but friendly. And example one, this is
the tag line we would use. No shortcuts, just sweat,
strength and progress. Example two, you don't have to be extreme, just consistent. Now, write eight more
captions like this. And from that, the
AI can understand the format and the tone
that you're wanting. It's then able to do
the remaining captions with greater success. Building block number
four, the output format. This is where you
want to specify the structure and the
desired response format. Examples would be write
in three bullet points or summarize in one short paragraph or list steps in
a numbered order. Next is to set the length. Examples are make it 100
words in length or keep it to three sentences or limit the reading
time to 2 minutes. Specifying the output format is also going to save you a
lot of time with editing. This means that you can use
the responses straightaway and you don't have to spend any more time getting
it to how you want it. Building block number
five, extra info. This is where you are giving all of the contextual
information, all of the relevant
data that the AI needs to be able to give you
the best possible result. It's important to establish
and specify the audience. So who's the intended audience. This will also determine
the level of detail and the terminology used and the
tone used in the responses. You can imagine, a
very technical topic would need a lot of detailed and technical terminology
in the response versus something that's a bit more
casual and conversational. Tone. This is where you specify what sort of tone you want. Is it going to be professional
and instructional or is it going to be
casual? Should it be fun? Specifying this in the prompt is an important step to getting it to respond in
the correct tone. Then timing, this
is where you say, when will this be used
and by that I mean, is it going to be a piece of text for a
Christmas campaign? If so, it needs to be festive. Maybe you're
drafting an email to a colleague working on a project and a deadline is looming, so you want to emphasize
the urgency of that, and that needs to come
across in the response. So it just helps the
AI adjust the tone, the urgency, and the
relevance to fit the moment. It's always a good idea to
put this extra info and contextual information at
the start of the prompt. This way, the AI has all of the context that
it needs and all of the data to be able to follow the steps
according to that data. If you think of a machine, it's a very linear process
in how it operates. So it's going to take all of
that first information and then execute based on
that initial information. If you put it at the end, it just makes it more of
a challenge for it to incorporate that
into your response. This is due to the AI
giving more prominence and weight to information that appears earlier
in the prompt. Next, we're going to
be looking at what a bad prompt looks like
and how to improve it. If the prompt is unclear, the result will be, too. So it's a lot like that saying, garbage in, garbage out, and this is where
you need to give it clear concise details to be able to give you
the best response. Let's look at some bad prompts and identify what
is wrong with them. The first one, tell
me about marketing. Vague, too broad. There's no direction or goal, and just very inefficient. Make this sound better.
What's wrong with it? It's missing context, and better is a very
subjective term. One person's better might be
very different to another's. So it's important to be specific in what
better actually is. Then the next one: write a blog post for me. This is inefficient because we're going to have to provide the AI with more information, and there's going to be a lot more back and forth. Whereas if we had provided that information to begin with, we wouldn't have to do multiple passes at it. It also doesn't have any topic, there's no length specified, and of course, there isn't any tone given there. Then a pro tip is to instruct the chatbot to ask you questions to gain more
info and understanding. So by that, you can then finish off your
prompt by saying, ask me for any details or information to help
you better respond. Something as simple as that
would work really well. Next, we're going to look
at fixing bad prompts. We'll see them before and after. Here we have a few bad prompts, and the first one is
tell me about marketing. This is very vague. So a better prompt will be write five beginner marketing tips for small ecommerce stores
and use a casual tone. The topic and the context
is beginner marketing tips, and we've identified
the audience as being small ecommerce stores. So it's going to be
speaking their language, and we're also specifying what tone we want and that
it should be a casual tone. Make this sound better. That is also very vague. A better prompt would
be to say rewrite this paragraph in a persuasive
tone for business owners. Again, we are saying
what it needs to do. We're giving it some
tone directions, and we're giving it some
cues as to the audience, which is the business owners. Next prompt is write a
blog post. Very vague. This is going to result in a lot of follow ups needed
and just extra work, whereas a better prompt
is going to say, write a 300 word blog post about time management
in a friendly tone. So from there, you can
see, we're not going all out and putting
in a detailed prompt, but we are providing very specific elements, all of these building blocks, in the prompt to get the best possible response and the most efficient one as well.
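As a quick, hypothetical illustration of how these building blocks can become something reusable, here is a minimal Python sketch that simply assembles the five building blocks into one prompt string you could paste into ChatGPT. The example values are made up for illustration.

```python
def build_prompt(task, role=None, examples=None, output_format=None, extra_info=None):
    """Assemble the five building blocks into a single prompt string."""
    parts = []
    if extra_info:                      # context goes first (front-loading)
        parts.append(f"Context: {extra_info}")
    if role:
        parts.append(f"You are {role}.")
    parts.append(task)                  # the clear, action-based task
    if examples:
        parts.append("Examples of what I'm looking for:")
        parts.extend(f"- {ex}" for ex in examples)
    if output_format:
        parts.append(f"Format: {output_format}")
    return "\n".join(parts)

print(build_prompt(
    task="Write five beginner marketing tips.",
    role="a marketing coach for small ecommerce stores",
    examples=["Reply to every customer review within 24 hours."],
    output_format="A numbered list, casual tone, under 150 words.",
    extra_info="I run a small online store selling handmade candles.",
))
```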
And one thing I've noticed is that most
fixed just by being more specific by specifying the output and
providing an example. These three elements
alone can fix all of your prompts
or at the very least, make them significantly better. So I urge you to try this, and even just these
three elements is going to hugely improve
your prompting abilities. The key takeaways for the lesson is that good prompts are clear, specific and structured
for the best AI responses. Remember to use the
building blocks that we learned about
here in this lesson. Those are state the task, assign a role, provide
one or two examples, let it know what
format you want it in. Is it a list? Is it a paragraph? Is it a table, and also to
provide the extra information. So the context surrounding
the request or any relevant information
you feel that the AI would need to be
able to respond better. Of course, you are going to sometimes get undesired results, and that's where you iterate. So don't be afraid to
refine your prompts, give the AI feedback, give it updates to make, and it will learn
from those responses. And most of the chatbots, especially Gemini and ChatGPT, have persistent memory, so they're able to understand and learn from the interactions that you have with them, to be able to respond better. The more you're prompting, the more practice
you're getting and the more it's able to know
exactly what you want. And using these building blocks, you're able to get
consistently better results. So strong prompts
mean you're getting your results faster and
the results are better. That wraps it up
for this lesson. I hope you're able to see that these elements and
building blocks are crucial for getting
better results and prompting your
way to success. In an upcoming lesson,
we're going to look at assigning roles, which is quite an
important topic, and you'll be able to
see just how to do that more effectively
and what options there are for being effective with
assigning a role to your AI. I'll see you in the
next lesson. Goodbye.
3. Prompt Essentials - Roles and Personas: We're going to be talking
about roles and personas in this lesson. AI chatbots, like ChatGPT,
are generalists by default. By that, I mean, when
you ask questions, you're getting the default
version of ChatGPT. Their knowledge spans
all subject matter, but when you ask a
question, you'll get a generic cookie
cutter response. It's when you use AI personas that the magic starts to happen. To illustrate the
power of personas, here is an article on ZDNet. It shows how GPT-4.5
took the Turing test. Now, this test involves a human judge chatting to
both a human and a computer. The judge then has
to distinguish the computer from the human
based on their responses. There were two prompts used. One was a minimalist prompt. The other had
additional instructions on what kind of
persona to adopt when responding to the interrogator, specifically a young
person who is introverted, knowledgeable about Internet
culture, and uses slang. GPT-4.5 had a win rate of 73%, meaning it fooled
the human judge into thinking it was a
human 73% of the time. So this just shows the power of personas and how you
can really leverage the subtle nuances and tone and language of
that specific persona. The Turing test is not a
direct test of intelligence, but more of a test
of human likeness. And here on the screen, we can see an example of the prompt that was
used during this test for the AI to adopt the persona
needed to be successful. AI personas can shift
the tone, depth, and delivery style based on
who they're pretending to be or who they're speaking
to, i.e., the audience. When you ask the same question, but using an AI persona, the AI won't just
respond differently. It also thinks differently. Consider this. Let's say you ask a question of how does a
combustion engine work? You'll get a very
different answer coming from a mechanical
engineer versus a primary school teacher versus a Formula one race car engineer. The teacher might
respond with It makes tiny explosions to move
the car like magic. A mechanical engineer would
say something like it converts fuel into energy
through a four-stroke cycle. And lastly, a race car engineer would ask, well, it depends. Are we optimizing for torque, RPM, or thermal efficiency? So in summary, when
you want to tap into more specialized
expertise and thinking, use those specialized
personas in your prompts. Because if you're getting
very technical responses, but the intended audience is
not technically inclined, it's all going to
be lost on them. Understanding role prompting. So this is where you want
to give a clear direction. Role prompting is assigning a
specific persona to the AI, which means you want
to get the perspective and the expertise of
that specific persona. Personas are not
necessarily needed when you need simple
and generic responses. But when you're looking for very specific and very
tailored expertise, that's when you need to activate the specialized knowledge, and using roles focuses
the AI's approach to this. There's also having a consistent voice: roles maintain a consistent tone and perspective throughout a
complex conversation. So what that means is that it will keep the
level of detail, it'll keep the tone,
and it'll also use the same thinking and angle
based on the given persona. Personas you choose have an
impact on the output style. Assigning a role or persona
shapes how the AI responds. So it's able to
match the audience, it's able to set the
appropriate tone, and it's able to adjust
the level of detail and the depth of content that
it gives back to you. Let's look at an example here. Without a role, we're saying,
this new pizza place. And of course, as
you can imagine, you would get a result that
is probably quite generic, bland, and it just lacks specific expertise
and perspective. Whereas if you used a role
with that same prompt, so you are a Michelin
guide reviewer, write a review of this pizza. As you can imagine, the language would be a lot more refined. There would be technical
culinary references and details that only a professional reviewer
would be able to know. So this is where you would
get a far more detailed, structured and
appropriate response. What are some roles to try? Well, there's no fixed list
What are some roles to try? Well, there's no fixed list that you can work from. But generally speaking, if
you consider domain experts, so people that are considered
the top of their field, those are the personas
and the roles that you want to
use in your prompts. So let's go through
a few examples here. One would be a copywriter. You could say act as a copywriter or you
are a copywriter, but it's always a good idea
to tap into a specific role. So looking at an industry, saying you are a copywriter
for a B2B sales company, or you are a copywriter
for a software company. These industry
specific additions that you add there can
really make the difference. Some other roles to try, so a content strategist, a social media manager,
a web designer, SEO specialist, email marketer, data analyst, customer
service agent, or a project manager. These are just a handful of some of the roles
that you can tap into and ask the
AI to respond as. An example prompt
using one of these would be you are a
startup advisor, suggest three quick
improvements for a landing page
targeting SaaS founders. Some interesting
results, you could even try using celebrities or famous figures or any well known personality
when you are prompting. Some examples could
be Shakespeare or some old English style
responses or David Attenborough or some wildlife
documentaries type stuff. Or Elon Musk from
the perspective of a revolutionary industrialist or technologist or Stephen King, if you're looking for
some compelling writing, Gary Vaynerchuk if you're
wanting to tap into a more social media guru type of style or Neil deGrasse Tyson, if you're looking for very technical astrophysicist
type of content, you also have
Beyonce if you want a more lyric and
musical based response, and, of course, Anthony Robbins, if you're wanting to
motivate and inspire. These are just a handful of the celebrities
and famous figures that you could tap into each one of them having
their own unique style that they can bring
into your responses and provide a unique voice,
tone, and perspective. Then an example prompt would be in the style
of Stephen King, write a short story
about and then you would insert your
topic and set in, and there you would
set your location. The benefits of role prompting in professional
communication are that you can tap into the
expertise of a certain role. For example, a customer
service role would allow you to create empathetic,
solution orientated responses. So it would take
on that persona of being a helpful
customer service agent. If we look at taking on
a marketing persona, that's where it would
transform negative situations, so the challenges that your
customers are having and turn them into
positive opportunities with upbeat messaging. And this would in
turn help your landing pages or your sales pages convert to better effect. And then also taking
on a leadership role, you're able to activate an authoritative role that
is clear on communication, and that could inspire action. You're also able to
improve the responses you receive by using roles
in your prompting. Examples could be maths
equations where sometimes AI does struggle to go through all of the
complex thinking. So by using a role, you're able to avoid that. The example here is a math equation: what is 100 times 100, divided by 400, times 56? Not having a role assigned may lead to calculation
errors or it may not follow a logical and
mathematical workflow to get to the answer
that you're looking for. But where you use a role such as a mathematician to solve
that same equation, it would then break down the task and work step by
step to achieve the result. And this would lead to more
accurate results because it's taken a methodical and logical
step-by-step approach.
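Just to show why the step-by-step, mathematician-style approach matters here, note that the example equation is ambiguous as spoken. A quick worked check of the two possible readings (hypothetical Python, only the numbers from the example):

```python
# The spoken equation: "100 times 100, divided by 400, times 56"
left_to_right = 100 * 100 / 400 * 56   # (100 * 100) / 400, then * 56  -> 1400.0
grouped = (100 * 100) / (400 * 56)     # 10000 / 22400                 -> ~0.446
print(left_to_right, grouped)
```

A mathematician persona working step by step is far more likely to surface that ambiguity and state which reading it is solving.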
The difference is that you're not asking the standard, vanilla, out-of-the-box version of the AI. The role prompting is going to activate the specific expertise and the patterns and the methodical approaches that those roles would ordinarily take in their everyday work, and you get to tap into that by stating that role or persona in your prompts. Here are a few tips for effective role
prompting. Be specific. So this is where
you are stating in clear language what
you are looking for and the role that
needs to be assigned. So like we touched on that
copywriter role earlier, an example here is you are a
copywriter at a SaaS company, and here we're putting
in the audience and saying targeting founders. Test variation. So what you think could be the
same role or persona, if you vary them slightly, you might get different results. So trying different roles like
a coach, mentor or expert, they all fall under
the same umbrella, but using each of
them separately might yield different
and better results. And then, of course, you
want to pair the roles with instructions that are
relevant to that role. So combine the role
with clear tasks, and that would also
lead to better outcome. In conclusion, by using roles, you are tapping into expertise of that
given persona or role, which has very
specialized knowledge, and these responses are
going to be tailored to your needs as well as
the intended audience. There's going to be better
communication, and by that, the tone is going to be appropriate and the style
being used is going to be better suited for all of
the tasks and situations that you are using
role prompting for. And lastly, this is a
foundation technique. It's an essential skill for
effective AI interaction. So practice your role prompting, try different things and see how you can get the most out
of your AIs responses. It's an important step to take, and it's worth
getting right because the results that you get are
not only going to save time, but they're going to
be more effective and better suited
to your use cases, as well as those of
the intended audience. Wraps up this one. I'll see
you in the next lesson.
4. Prompt Essentials - B.R.A.I.N Framework: In this lesson, we're
going to be learning about the brain framework. This is a framework that I've
used with great success, and I wanted to
share it with you. It's very simple in
its application, and it has all of the
elements you need for those consistent results
that you're looking for. So what is the brain framework? Well, it's a set of building blocks that you can use to
get to the perfect response. So Brain stands for
different words. The first one is background. This is where you are setting the stage and you're
providing all of the essential and
contextual information that the AI needs to be able to
process the request correctly. In addition to providing all of the relevant details and
context putting this first, you are doing what's
called front loading. This is where you're providing all of the information up front. LLMs, such as Chat ChIP T, they use what's called
an attention mechanism, and this is where it
weighs the importance of information appearing
early on in the prompt. So this is why we're
putting it in the front. It's also good
because we're framing the entire prompt by
giving it the information, so it knows which path to
go down and it has all of the information that
it needs to process the request. We also want to have a clear
problem statement in there. So this is just stating what
you are trying to achieve, what you're struggling with, and how you want the AI
to solve that problem. Next is R for role assignment. This is where you are assigning
a role and a persona, and this is going to
influence the AI's approach. So you're giving the AI the
perspective that it needs, and it's able to tap
into those expertise and just get a whole
other level of expertise versus just the default, standard ChatGPT and the responses that that
version would give you. Think of it as activating an expert role inside
of the chat bot. Next is the action. This is where you state
what you want the AI to do, and you're wanting to
use clear action verbs. These are things like explain, summarize, compare. Inputs are next. This is where you're providing
the necessary information, things like references, as
well as one to three examples. Of course, the more
examples, the better, but as a minimum, aim for about one to three. And this really just helps augment the background
information that you've given. So the background information
is the context it needs, and the inputs are the
references and examples and any other data points
that are relevant to the prompt and
the desired outcome. Then next is N for narrowing. This is where we are
narrowing down and constraining the focus
for the response. So we're adding in parameters, such as the number of words
that we want in the response. We're looking at
audience identification. We're saying it's for
a specific audience, and maybe we're also talking
about the complexity level, the level of detail that we
want from the response based on the level that
the audience is at. Maybe they are at a lower education level or
at a higher education level. It all just depends on the
topic that is being discussed. This is a brief overview
in the next slide. We're going to look at each
of these a bit more closely. Background. Background is where you're going to be providing the essential context that helps the AI understand your
request properly. This is where you're including relevant details and all of the background
information that can help the AI get to the
response that you want. Adding in relevant context. So anything that you think might be needed and just
to give it more context. Think about if you were speaking to a friend or a colleague and you needed their help with something want to give them the information
that they need. So why do you need this done? What are some of the
things that you're looking to get out
of this request? What would be a measure of
success once it's completed? All of the relevant details
surrounding the request and surrounding the information and the data that you are providing, anything to just help the AI
know which direction to take and what sort of thinking to use when it's
providing the response. Then a clear problem statement. This is where you're stating
what the problem is and the solution and the outcome that
you're hoping to achieve. And by giving the background, this is creating a
solid foundation for more accurate and
useful responses. Without the background, you might not get the
responses you want. Or there's going to be
just more effort and time needed going
back and forth, interacting and engaging
with the chatbot, just getting it to
understand what you need. And there's no doubt
that providing well defined
background information and context is going to
lead to better outputs. Role assignment, we touched
on this in earlier lesson. This is where you're
assigning a role to influence the AI's approach. The role you assign
will guide its tone, its expertise level, and the
perspective that it takes. It even changes how the AI
thinks about your request. And as I mentioned, there is no strict or defined list of
roles that you can tap into. But just think of who you
would hire or who you would go to if you were looking to get somebody to help
you with something. So you need an expert
in a particular field. What would that role be? What would their title be? And that's essentially
who you would designate as the expert in your prompts. Some example roles
here are things like a doctor or a teacher,
programmer, financial advisor. There's also creative roles. You could tap into a poet, a storyteller, an
artist, a filmmaker. More analytical roles,
a data scientist, a researcher,
detective, historian. And then if you need more
of a programming role, tapping into software engineers, data analysts, database
administrators, or web developers is going to get the best results for you. Next, we have A for action. Want to define what
the AI needs to do by specifying clear outcomes
and using action verbs. So you would define the task. You would use verbs
such as explain this concept to me or compare
these results for me, summarize this email for me, analyze this data for me. All of these are examples of clear and unambiguous actions
that the AI needs to take. Then you want to clarify the
outcome and also specify the structure and what
is the purpose and desired result that you
are hoping to achieve. For example, Are you asking for an entire blog article
from start to finish or are you just asking
for the outline of a blog article to be
able to use further? Then another
important element in the action is to
specify the format. How do you want the response, and in what format should it be? By that, I mean, ask for things like bullet lists,
numbered lists, tables. You can even ask for
actual file formats. So requesting it in CSV comma
separated values format, or even a Word document. These work as well. But it should be noted that not all chatbots are
able to do this. ChatGPT is one that is able to handle
these file formats. Inputs, this is where
you are providing the necessary materials
and data for the task. So everything that it needs
to be able to achieve this. Think of it as
giving an assistant all of the tools, all
of the information for them to be able to
complete the task for you. Without some of them, they might have to fill the
gaps themselves, and that information or that approach might
not be correct. So giving them
everything upfront with all the references
and the tools and the data points is the
best action to take. Then references and
examples of how you want the response are an excellent way to show
what you're looking for. Aim for one to three
references or examples. The more the merrier because
AI is incredibly good at detecting patterns in the references and
examples you provide. It's then able to convert those into the format
that you're looking for. More context leads
to better outputs. Then lastly, N stands
for narrowing. This is where you are setting the constraints
around the request. That could be something like a length constraint where you
specify the word count or the response size in how many paragraphs you want and how long the bullet list
should be, things like that. You're also specifying
the format requirements. What is the structure
of the output? Is there a specific
style that you want? Are you looking for
a unique layout? These are all opportunities
to put all of this important information to
help get a better response. Then audience focus. This is very important, letting the AI know who
is this intended for? Who's going to be using
this information? What is their level
of education? What is their experience? So are they a beginner? Are they intermediate or are they
at an expert level? This would all determine the kind of terminology
and the level of detail that is included
in the responses. Again, here we're looking
at the complexity level, and setting the education
level or the technical depth means that your response is going to be much more
tailored for your audience. These are all levels
and parameters that you can specify in the narrowing
part of your prompt. Narrowing focuses
the AI's response, and it pulls out all of that
irrelevant information, and it just gives the AI
laser focused targeting, which is going to
yield better results. Putting this all together.
So what would a prompt look like with all
of these elements in place? First up is B for
the background. We would say, I'm a
marketing manager preparing a presentation on the
latest social media trends. Next is R for role, and we're asking the AI
as a social media expert. So we're asking them to
activate that expertise. Next up is action. We're asking it to summarize the top three social media
trends for businesses, and we're asking it for a specific year to make sure
that it's current and fresh. And we're asking it to
include relevant statistics. This is always good to get some nice juicy stats to be
able to use in our content. Then I for inputs, we would attach a dataset, so this could be a CSV file, which has all of the
social media trends that would be attached
there as an input. It's the data that we are providing and the
reference material. Next up is N for narrowing. So here we're asking
to keep each trend to two to three sentences to
keep it short and punchy. And of course, we could
take it a step further here and add things like the tone
what sort of tone we want? Is it professional? Is it casual? Is
it conversational? We can also specify the
education level of our reader, give it more information
on our audience. All of these things are
possible and would get you closer to the best
possible response. And that is the brain framework. You can think of
it as instructing the AI brain to give
you what you want.
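If you like working from templates, here is a minimal, hypothetical sketch of the brain framework as a small Python helper. It just concatenates the five parts in B-R-A-I-N order, with the background front-loaded, so you can fill in the blanks and paste the result into your chatbot; all the example values are made up.

```python
from dataclasses import dataclass, field

@dataclass
class BrainPrompt:
    background: str              # B: context and problem statement, front-loaded
    role: str                    # R: the persona to activate
    action: str                  # A: clear action verbs and the task
    inputs: list[str] = field(default_factory=list)  # I: references, examples, data
    narrowing: str = ""          # N: constraints, audience, length, format

    def render(self) -> str:
        lines = [self.background, f"Act as {self.role}.", self.action]
        if self.inputs:
            lines.append("Inputs and examples:")
            lines += [f"- {item}" for item in self.inputs]
        if self.narrowing:
            lines.append(f"Constraints: {self.narrowing}")
        return "\n".join(lines)

prompt = BrainPrompt(
    background="I'm a marketing manager preparing a presentation on the latest social media trends.",
    role="a social media expert",
    action="Summarize the top three social media trends for businesses this year, with relevant statistics.",
    inputs=["(attach the trends dataset, e.g. a CSV file)"],
    narrowing="Keep each trend to two or three sentences, professional tone.",
)
print(prompt.render())
```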
That wraps it up. I hope the lesson was helpful, and I will see you in
the next one. Goodbye.
5. Essential Prompting Tips & Techniques: In this lesson, I'm going
to help you improve your interactions with ChatGPT to get the best
possible responses. By the end of this lesson, you should be able
to get smarter, faster and more accurate responses from the AI.
Let's get into it. AI is incredibly smart, but it's not perfect, so it is going to
get stuck sometimes. Let's have a look at some
techniques to get around this. If the chatbot gets stuck or fixates on a pattern and
keeps repeating mistakes, best thing is to just start
fresh and start a new chat. AI models can get trapped
in loops based on previous inputs as
well as the context that they're including
in their responses. So the best thing is just to
start a new chat completely. This should fix the issue. When you encounter coding issues and you're finding
that you solve one problem only
for a new one to be created and you get
into this loop, best thing is to ask
the chatbot to create a summary and put into
key bullet points, everything that has been
discussed, the full context, copy that out, start a new chat, and begin again
with a clean slate. This helps to clear everything out and you can start fresh, and much of the time, this will solve the issue. Next, just editing
your original response is a really quick and
effective technique. So if the chatbot is providing
incorrect information, instead of debating or arguing, just edit your original
message and re run it. This removes all of
the bad context, and it's going to
improve the response and get you closer
to what you want. Simple structures
for your prompts. If you're finding that you're
still getting problems, breaking down complex tasks
into numbered instructions is a good way to give the exact requirement
that you're after. An example of that
would be step one, list the key points. Step two, expand on each point. Step three, add an introduction. Step four, summarize
with a conclusion. That has everything
that you need, and it's going to
force the AI to work step by step.
Another thing to try. I know we've discussed about providing the background
and the context first, but you can also state
the goal first, as well. This can sometimes give
you better results. So state the outcome, state what you want to
achieve very early on. This can sometimes give
much better results. And last one here
using templates. So when you find something that works and you're happy
with the results, save that, keep it
stored safely and create a library of all of the
prompts that work for you. You can store them in
an Excel spreadsheet, an Airtable, a Google Doc, a Notion document. You could even use a text
expander or auto text to have a library where if
you hit a specific hot key, it will then bring up
all of your prompts that you can search through and
select the best one needed. Any of these methods would work. The important thing is just to keep a collection of the prompts and templates and references and examples that are
working well for you.
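However you store your library, the idea is the same: keep the prompts that work in one place and pull them back out by name. Here's a tiny sketch using a plain JSON file; the file name and template text are made up for illustration.

```python
import json
from pathlib import Path

LIBRARY = Path("prompt_library.json")  # hypothetical file name

def save_prompt(name: str, template: str) -> None:
    """Add or update a named prompt template in the library file."""
    library = json.loads(LIBRARY.read_text()) if LIBRARY.exists() else {}
    library[name] = template
    LIBRARY.write_text(json.dumps(library, indent=2))

def load_prompt(name: str, **values: str) -> str:
    """Fetch a template by name and fill in its placeholders."""
    library = json.loads(LIBRARY.read_text())
    return library[name].format(**values)

save_prompt("blog_post", "Write a {words}-word blog post about {topic} in a {tone} tone.")
print(load_prompt("blog_post", words="300", topic="time management", tone="friendly"))
```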
Handling refusals effectively. Sometimes the chatbot will not follow through with your request, and that's when you need to identify the reason for that. So identify the refusal type, and you would ask: is this a policy restriction or a system limitation? From that, you can understand, is there something that it is just not going to create for me? Or do I need to rephrase
or rework the request? Be able to get around this. So a good technique is
to rephrase the request. And in that rephrase, you would be more explicit
and detailed and provide additional supporting context to really give the
AI what it wants. If you hit a roadblock, why not try reassuring it of its capabilities and reminded that you've done this before, so you can do it again. Something as simple
as saying try again, we'll get it to start
the process over again, and I've had good success with just asking it to have
another go at it. So give that a try. Getting the AI to
follow instructions. Sometimes you just
need to give the AI a nudge and a bit of guidance to take it down the right path. One technique is to
frame things positively. So you are describing what you want instead of
what you don't want. So describing what you want, not what to avoid. We touched on an example
in an earlier lesson where if you ask the AI to not
include any pink elephants. It might fixate on
the word elephant, and you could get a green
one instead or elements of an elephant somewhere in
the response or the image. This is especially
true when you're asking it to generate images. Then testing for accuracy, you would ask the AI to ensure this is
factually accurate. It's important to remember that the training for these
large language models are on datasets and training knowledge up
to a specific point. So anything after that point, the AI would not have
information about. And to fill the knowledge gaps, it might start to
make up information. So this is where
you need to verify the factual
correctness of these. Another technique is asking
it to do an online search, so a web search
or search online, and there it can gather all of the data that's currently online to be able to give
you an accurate response. Then this one I really love
challenge assumptions. So you would ask the AI to request critical
thinking and to even disagree with you and take purely objective approach
to its response. This is a great way to
ensure that the AI is not just trying to make
you happy and agree with you and give you
everything you want. It's really giving you the
information that you need. So it's being impartial,
it's being objective. And in the process,
it might even stimulate some different angles and approaches to ideas and topics that you might be
discussing with the AI. Improving results
and fixing mistakes. So when you're
getting incorrect or odd or just responses
that you don't want, the best thing is don't
argue with the AI, update the prompt
and hit generate. This is something that is
quite frequently needed in generating AI images, and especially when using ChatGPT. And it gets to a point where all of the previous context and requests are just influencing its ability to handle
things correctly. So best thing is just update
the prompt and regenerate. Or if you're really still
encountering problems, start an entirely new chat. And that is the next one. If you get persistent problems, then begin a new
chat completely. Then another way to
deal with this is to use one AI to
fact check another, so you are cross checking so you would copy
ChatGPT's response and copy it into
Claude or Gemini or perplexity and ask,
is this accurate? And how can it be improved? You'd be surprised at
some of the results you get when one model is fact
checking another's? It's quite an interesting
process. Why not give it a go? Now, moving on to optimizing AI for business
and productivity. Self improving prompts. This is where you
are asking the AI to rewrite your
prompts for clarity. This is a great way
to take what you have written as a
prompt and ask the AI to better structure
it and give it more details and more clarity for the best possible response. This is probably one of the
top tips that I can give you, and it's so
effective because the AI will rewrite your prompts in the language that it
would best understand. So it's a really
effective thing to try.
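If you want to fold this into a workflow, the same trick works over the API: ask the model to rewrite your draft prompt before you actually use it. A rough sketch only, assuming the openai Python SDK and a placeholder model name.

```python
from openai import OpenAI

client = OpenAI()

def improve_prompt(draft: str) -> str:
    """Ask the model to rewrite a draft prompt for clarity and structure."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder
        messages=[
            {"role": "system", "content": "You are a prompt engineer."},
            {"role": "user", "content": (
                "Rewrite the following prompt so it is clearer, better structured, "
                "and more specific. Return only the improved prompt.\n\n" + draft
            )},
        ],
    )
    return response.choices[0].message.content

print(improve_prompt("write something about marketing for my shop"))
```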
Then for business use cases and frameworks, as we touched on in
previous lessons, it's really good to assign
expert roles and give it the business context
and the data to get the best possible
structured outputs. And remember, you can
also ask for outputs, not just in text format. You can also get it
in structured format. So that'll be things like CSV files or tabular
data, things like that. Then using the AIs memory
to your advantage, you can have the full
context of the chat. So that is all of the responses
as well as your prompts. This is super valuable to keep the chat going and to
keep that context. So just because you've
ended a session doesn't mean that you have to start a new chat over again. So keep a record of your chat, share them and save
that URL somewhere. When you want to pick up again, you go straight back into that chat and you have
all of the context, all of the prompts everything that you need is
already in there, and you can just pick up
from where you left off. Now, we're looking at
performance issues and output issues and
how to handle them. So in an upcoming lesson, we're going to be
discussing how to humanize chat GPT responses, but a request as simple as
asking for plain language and a natural tone and also to break down the request
into clear steps. These can work wonders for your outputs. Switching models. This is where you would
use different models for different purposes
because each of them excel in different areas. For example, you
have Gemini and Perplexity, which are
really good for research. Claude is really good
at natural language. ChatGPT is great
for brainstorming, structuring and refining data. Each of them have
their strength. So make sure to use the
best language model based on your requirements. And nowadays, a lot of them have very
generous free plans, so you're able to swap
out and switch over to another platform and use it with relative ease and without
any additional cost. If you're concerned about
Cloud AI and privacy issues, then consider running
a local AI model such as Mistral or Llama. These are really
great to ensure that your internal data
processing is up to the standards and the privacy
levels that you need. What's more, it means
you can fully customize the environment that
these models are working in.
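As a hypothetical example of what running locally can look like, here is a minimal sketch using the ollama Python package with a locally pulled Mistral model. The package, model name, and prompt are all assumptions; adjust them to whatever you actually run.

```python
# Sketch only: assumes `pip install ollama`, a running Ollama server,
# and a model you've already pulled locally, e.g. `ollama pull mistral`.
import ollama

response = ollama.chat(
    model="mistral",  # local model name; swap for whatever you have pulled
    messages=[
        {"role": "system", "content": "You are a data privacy officer."},
        {"role": "user", "content": "Summarize our internal meeting notes in five bullet points."},
    ],
)
print(response["message"]["content"])
```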
Next, let's look at some more prompting strategies. Here is another simple
framework for you to use. It is the CRAFT framework. So this is where you are
providing the context, the role, the action, the format, and the target. Self evaluation, you're asking the AI to rate and
improve on its answers. An example would be
rate your response 1-10 on accuracy and clarity. And then also how could
this answer be improved? And from that, you can
iteratively improve the output and get to the
best possible response. Taking an iterative approach. This is where
instead of spending too much time crafting
the perfect prompt, you just cast a wide net, you start broad and you say, give me a general
response first, then we'll refine it together. These steps there are to
give it the initial prompt, receive the response back, modify, improve,
guide it a bit more. And then improve on that even further until you get
that optimal response. So the approach
you would take is basically give me a
general response first, then we'll refine it together. And here are a couple of
great resources: PromptBase. This is a great website
to find a huge collection of prompts for all
different AI models, whether it be language models
or image generation models. It's got something for everyone there, so it's
worth checking out. Other great prompting
resource is AIPRM. Both of these options
are really great, and they have the web's
largest collection of prompts, so
worth checking out. And that wraps it up. These were some of the more basic prompting tips
and techniques. In the upcoming lessons,
we're going to be looking at some more advanced
tips and techniques, so stick around for that, and I'll see you in the next lesson.
6. Intermediate Prompting Guide: In the previous lesson,
we looked at some of the basic and foundational
prompting techniques. In this lesson, we're
going to explore seven distinct prompt types that serve different purposes and produced different results. Each prom type has a
unique benefit and an optimal use case that can dramatically improve
your results. We're going to examine how these prompt structures
work, when to use them, and I provide practical
examples to help you implement them in
your own AI chats. This is going to help you
to achieve more precise, creative, and useful prompts. So let's get into it. What are these
seven prompt types? Well, they are listed
here as we can see. They are step by
step instructional, contextual and role based
chain of thought reasoning, also abbreviated to COT, self critique and refinement. Creative ideation and expansion, compounded prompting,
and lastly, multi modal or data
driven prompt. Let's move on and look at each
one of these step by step. First up is the step by
step instructional prompt. This is where you are
breaking down a series of tasks into logical steps
that the AI needs to follow. Reason for this is it breaks the complex task down
into logical chunks, and this means that
results are a bit more predictable and you're going to be getting consistent
results as well. An example of that would be, as we see on the screen now, step one, propose a book
title on productivity. Step two, write a one paragraph
summary of that book. Step three, list three key
lessons from the book. There we have three
very distinct tasks that it needs to follow and each one follows the next one. So once we have the results
from the first step, it then iterates and improves
for the following steps. The key benefits of this is
that there is no ambiguity, so it reduces confusion. It keeps the outputs consistent and
predictable and you have a clear and structured message that you are sending to the AI. When is the best time to
use this type of prompt? When the task follows
a clear sequence. So there's a logical set of steps that need
to be followed, and you're able to
combine those all into a single prompt, like with writing a fixed format or building something
step by step. So you're just working
through a series of steps. The next type of prompt is the contextual or role
based scenario prompts. And this is, as you learned
in previous lessons, where you're putting the
AI into a specific role, such as a teacher or a
consultant or a character, this gives the AI, the context and the depth and realism to be able to respond
in the best way possible. You might remember when we give the AI a persona or a role, it not only responds
differently, it also thinks differently. So it thinks based on
that role and persona. This brings the contextual depth and relevance from
those expertise, and responses are a lot
more well structured. They're a lot more
well thought out, and they will
contain the level of detail that is relevant
to that persona. Some examples of the
contextual or role based prompting
time is as follows, as a travel consultant
specializing in ecotourism, create a five day
sustainable itinerary for a couple visiting Costa
Rica on a $3,000 budget, highlight eco friendly
stays and activities and explain the environmental
impact of each choice. Here we've given the role
of a travel consultant, and we are giving all of the relevant background
information and context. It has all of the
details available. We're saying that it needs
to be a five day trip, sustainable itinerary, so incredibly specific there we've given a budget as well, and we're also looking for
activity as you can imagine, with all of those
contextual details and all of the parameters
that have been set, the AI knows exactly
how to respond. The key benefits,
as you can imagine, with all of this
contextual information and detailed info
that's provided, the AI is able to respond
in a very thorough way. So it has all of
that information. And it would even match
the tone and voice of a consultant and give you all of the details
that you would need. So this will also help you to boost creativity
because you're going to be getting all
of these options provided for stays
and activities. And it would just allow
you to come up with the perfect trip packed with all the activities
that you want. It's time to use it.
Well, anytime you want expert level responses or content with a particular
strong point of view. That is the view coming
from an expert or somebody very specialized
in a particular field. In this case, it was
the travel consultant. It's able to give you the relevant point of
view from that expert. Next is the chain of
thought reasoning, and this is a really powerful
one because it encourages the AI to think step by
step through its answers. This helps to improve the logical flow and
it's able to show you all of the step by step
thinking that it is doing. A lot of the language models and chatbots allow
you to actually see the step by step thought process that
the AI is going through. And here we can see an example of a chain of thought prompt. The idea is to
have a first this, then do that next, then finally do this. In the example here, analyze why the 80 20 rule is effective
in project management. First, define what
the 80 20 rule is. Next, provide a step by step example of it applied
in a project scenario, and finally, present one counter argument
about its limitations. Using this chain of
thought prompting, it's better at problem
solving and accuracy. You're asking the
AI to slow down, take a step back and look at everything
in a logical sequence. Instead of just
trying to give you the quickest answer possible, it is going to think through all of the context through
all of the steps, through all of the requirements that you've put in
there and really bring its reasoning
capabilities to ensure that it gets to the
best possible answer for you. This is the benefit of
breaking it down into steps.
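A chain-of-thought prompt is really just scaffolding around your question, so it's easy to template. Here is a small sketch; the scaffold wording is only one way to phrase it.

```python
def chain_of_thought(question: str, steps: list[str]) -> str:
    """Wrap a question in explicit first / next / finally reasoning steps."""
    lines = [question, "Think through this step by step."]
    for i, step in enumerate(steps):
        if i == 0:
            label = "First"
        elif i == len(steps) - 1:
            label = "Finally"
        else:
            label = "Next"
        lines.append(f"{label}, {step}")
    return "\n".join(lines)

print(chain_of_thought(
    "Analyze why the 80/20 rule is effective in project management.",
    [
        "define what the 80/20 rule is.",
        "give a step-by-step example of it applied in a project scenario.",
        "present one counter-argument about its limitations.",
    ],
))
```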
When is the best time to use this? Well, explaining a
concept or analyzing a problem or just making sure the AI doesn't skip
logical steps. And then one thing to note with the dedicated reasoning
models such as o1 and o3-mini and Gemini 2.0 Flash, they already have step-by-step reasoning built
into the flow. So when you submit a prompt using one of these
reasoning models, it's already built in there. So this kind of prompting is not as effective with
those, and in fact, it's actually not
needed because you will see that they take a chain of thought and step by step reasoning approach to
your prompts by default. So it's just not needed
for those models. The self critique and
refinement prompting technique. This is a great one
because the AI is self critiquing and self
evaluating its own work. This produces a cleaner and
more thoughtful response. It's especially good
for writing tasks, such as articles or social
media posts or presentations, as well as summarizing long form content or other
types of summary needs. An example of the self
critique prompting type, draft 150 words summary of the benefits of
AI in business. Then critique the summary for
clarity and persuasiveness. Based on your critique, rewrite the summary to be
more clear and convincing. So we're asking it to not
only critique its response, but also then to improve on it. So, in addition to
the self critique, it's got a second step of rewriting the summary to be
more clear and convincing.
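To make the draft-critique-rewrite pattern concrete, here is a rough two-pass sketch over the API (openai SDK assumed, placeholder model name); in the chat window you would simply send these as follow-up messages instead.

```python
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder

def ask(messages):
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    return reply.choices[0].message.content

history = [{"role": "user", "content":
            "Draft a 150-word summary of the benefits of AI in business."}]
draft = ask(history)

history += [
    {"role": "assistant", "content": draft},
    {"role": "user", "content":
     "Critique that summary for clarity and persuasiveness, then rewrite it "
     "to be clearer and more convincing. Return only the rewritten summary."},
]
print(ask(history))
```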
The key benefits, of course: this is going to lead to a very effective prompt. It's going to enhance clarity
and you're going to be sure that you're going
to get closer to the desired output
that you want. It's pushing for self
improvement on what it produces. The best use case: it's
great for editing, editing of written content, rewriting, of course,
taking existing content, rewriting it in a
different flavor or a different tone,
things like that. Polishing and improving
on first draft. If it has created a
first draft for you, you can use a prompt like
this to further improve. Though this technique might be less effective with
reasoning models. It's worth trying
it out and seeing the results for yourself and
your particular use case. Then the next prompt type is the creative ideation
and expansion prompting. This is really good
for just generating a diverse range of ideas
and brainstorming content. And it's particularly useful for brainstorming and marketing. If you're wanting to
explore fresh ideas or just expand on
existing short inputs, then this is the prompt to use. You'll be amazed at how
it can just inspire more creativity and boost the thought process
when you are using it. An example of creative ideation, list ten social media post ideas to promote a new
electric bicycle. Then take the top
two ideas and write a catchy tag line and a two
sentence caption for each. This is just a simple
way to squeeze out a bit more from the AI
model and ensure that it is taking the cream of
the crop the two best ideas that is relevant to the product and it's going to write the
catchy tagline for that. Benefits of this
is you're able to rapidly generate a
lot of diverse ideas, and for marketing,
this is essential. It's also great for
content creation, and you're able to take small ideas and concepts and
really just run with them and turn them into big ideas that can really help
you in your campaigns. Best use case, of
course, by the name, creative ideation, it's really
great for content ideas. Ad campaigns,
generating ad hooks for your social media ad campaigns or search engine ad campaigns or your print ad campaigns,
that sort of thing. It can help with the ideation
to have effective ads. If you're looking for product
names as well or listings or digital bundles of some
sort or anything like that, it's really good just to take a seed idea that you have and just explode it into a whole series of
brainstorming ideas as well as provide you with inspiration to get to the result
that you want. A top tool is ChatGPT
because it really does, at least in my experience, I find that it does excel at
ideation and brainstorming. That is a top tool to use when you are doing this
kind of work. Compound prompts: these are when you are combining several tasks into a single prompt, so the AI can deliver a complete and cohesive answer that is ticking all
the boxes at once. It's providing you
everything you need. This is obviously
a huge timesaver because you're layering in all those tasks and it's making everything more
coherent and useful. Example of that would be create a Linked in post about
focus and productivity, start with the
definition of focus, share a short personal story, illustrating its
importance and end with a call to action
inviting comments. There we are saving
time by bundling these steps into a single
prompt and we're able to get more content
out of it, and it makes it perfect for content with structure
and purpose, such as social media
posts where you would have these
structured elements, such as the hook, the body, the conclusion, and maybe a personal
anecdote, best use case. As we've been seeing, it's
great for social posts, marketing emails or any
format where you just want multiple elements, like a story, a call to action, plus
the definition, all contained in one output. Multimodal or data
driven prompting. This is where you are getting other types
of media formats, looking at charts
or spreadsheets, images, audio files,
video, that type of thing. You're able to use this data in whatever format you have it in to provide context
rich responses. Most of the chatbots
nowadays are able to use any kind of data in whatever
format that you provide. Whether that is images
or spreadsheets or charts and even audio. So whatever the format, it's able to work with it. An example of this
multimodal prompt is using the attached
sales dataset, summarize the key trends in employee performance
last quarter, then generate a bar chart, as a textual description or code, that compares the sales of each. The key benefit of this is it handles data analysis
and reporting with ease. So if you have charts
and spreadsheets, you're able to use them as well. But not just that, it supports visuals and even code snippets. So you're able to put
in any kind of image, whether that's an infographic or photos or anything like that, and it will then be able
to use that as the input. So instead of writing out a long text explanation of what it is, you're able to just add that
to the chat and it knows exactly what it's
working with to be able to give you the
best possible answer.
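Just to make that example concrete, a bar chart request like that will often come back as a short code snippet you can run yourself. Here is a minimal sketch of the kind of Python code a model might return; the employee names and figures below are purely hypothetical placeholders standing in for whatever it would pull from your attached dataset.

import matplotlib.pyplot as plt

# Hypothetical placeholder values; a real response would use the
# figures extracted from the attached sales dataset.
employees = ["Alex", "Sam", "Priya", "Jordan"]
sales = [42000, 38500, 51200, 29750]

plt.bar(employees, sales)
plt.title("Sales by Employee, Last Quarter")
plt.xlabel("Employee")
plt.ylabel("Sales")
plt.tight_layout()
plt.show()

You could paste a snippet like that into a notebook to get the visual, or simply ask the model for a written description of the chart instead. Best use case, of course, for analytics and data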
reporting, it is really good. It's also great for summarizing and extracting data from files. If you have PDFs, you're able to drop in those PDFs and it can work with them, no problem, and it can work with all kinds of
structured data as well. Whether that is something as simple as a spreadsheet
or something even more structured like a JSON file, a web development script, or a piece of code, it can handle all of these
media formats with ease. That wraps up everything. I'm sure you can see by
mastering these prompt types, you're going to get richer and more effective AI responses. On these slides, you're able to see which scenarios each of these prompting types are best suited for. I hope you can go out and try them just to see the
difference that it could make. So why not apply
these techniques and see how you're able to get clearer, more
structured responses? That wraps it up. I'll see
you in the next lesson.
7. Advanced Prompting Guide - Superprompts & Pseudocode prompts: We learned about the foundational basic
prompting techniques as well as some more
intermediate techniques. Next up in this lesson,
we're going to be looking at three advanced
prompting techniques. They are the super prompt, the compound prompt, and the pseudocode prompt. These are all really
powerful prompts, and I can't wait to
share them with you. But before we dive in, I just wanted to
touch on something. You might be thinking,
Well, James, a lot of these prompts
are looking very similar. You're not imagining things. There is a lot of overlap
between all of these prompts. Each of them has a slightly
different purpose, as well as a slightly
different technique. So it's important
for you to have a look at all of them
that are available, and then you're able to pick
and choose the elements that you feel are going to give
you the best results. There is no fixed way to get the perfect
response from an AI, but using the elements
from these prompts, you're able to get
it pretty close. The purpose of the super
prompt is to achieve a precise and high quality
output from a single task, and we're able to do this
by giving the AI all of the information and context and examples and
guidance upfront. Basically, we're giving
the model a fully loaded, well structured prompt to
get a rich output from it. Then the compound prompt, this is where we are combining multiple actions or questions
into a single prompt. We're asking it to do
multiple related tasks in a single go. This is really good for
automating workflows or tasks that have related
steps to follow. With that out of the way,
let's dive into them. First, the super prompt. With the super prompt, you're giving the AI a comprehensive structure. So you're packing in all of
the detailed instructions. You're giving the context
and the background. You're giving it some examples, as well as constraints, and this is all being packed
into a single prompt. So you're loading it with all the information that
it needs from the start. This is really effective
and it helps guide the AI toward a very
specific outcome, and it reduces any ambiguity. So there's no room for
misinterpretation, and it's also a reliable way to get exactly what you want. Specific outcomes,
we are directing the model to a precise outcome. So super prompts make it clear what you want and they leave less room for misinterpretation or any ambiguity. Speed and completeness,
as you can imagine, although it takes a
little bit of extra time upfront to put in
all the information, that time is saved by
not having to iterate on the responses and have all of the back and
forth with the AI. The likelihood is you're getting an accurate and detailed
response from that first prompt. This is an effective and
quick prompting technique. So you're getting a
thorough output by giving it all of the instructions and all of the content upfront. So instead of iterating
step by step, you get a solid first
draft in one go. The use case, this is ideal for brainstorming, rich
creative outputs. It's great when you
want the AI to generate long form content such as emails or articles
or marketing copy, and you don't want to do
all that back and forth. So you're trying to get
everything in a single shot. An example of this: a basic prompt would
look something like this. Give me marketing
copy for product X. Whereas a super prompt would look like this. You
are a marketing expert, write a 300 word launch email for product X aimed at
small business owners, start with a catchy intro about their pain point,
time management, then introduce product X as a solution with two
benefits backed by facts, with a friendly call to
action to try a free demo. Use a confident upbeat tone and include one short
customer testimonial. Then you would provide
three examples of how you would like the
output to be structured, as well as any additional data or context, things like PDFs, spreadsheets, graphics, or charts, anything that is going
to be relevant that you can just really load the AI up with and ensure that it has all of the
information it needs. Here, from this example, you can see we're using the
role prompting technique. We're giving it a persona. We are clearly stating the task and the action
it needs to perform. We're giving it constraints. We're then giving it all the relevant information
about the audience, and we're clearly
stating that we need a catchy intro and we're
targeting the pain point. Then we're introducing the
product as a solution. And we're asking it for two
benefits backed by facts, and then we're also including information about the tonality. A prompt like this is going
to get you a very thorough, very detailed and very
targeted response. It's a great one to use, and although a little bit of
extra time is needed upfront, it is well worth it to get that fully comprehensive
response from the AI.
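One small suggestion before we move on: if you find yourself reusing a super prompt like this, you can keep it as a fill-in-the-blanks template. Here is a minimal sketch in Python of what that might look like; the placeholder names, the product, and the audience are all hypothetical examples rather than anything fixed.

# A rough sketch of a reusable super prompt template.
# Every placeholder value below is just a hypothetical example.
SUPER_PROMPT = (
    "You are a marketing expert. Write a {words}-word launch email for "
    "{product} aimed at {audience}. Start with a catchy intro about their "
    "pain point, {pain_point}, then introduce {product} as a solution with "
    "two benefits backed by facts, with a friendly call to action to try a "
    "free demo. Use a confident, upbeat tone and include one short "
    "customer testimonial."
)

prompt = SUPER_PROMPT.format(
    words=300,
    product="Product X",
    audience="small business owners",
    pain_point="time management",
)
print(prompt)  # paste the result into your chat

That way the structure stays consistent and you only swap out the details that change from campaign to campaign. Next up is the compound prompt. As I mentioned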
before, this combines multiple actions or questions
into a single prompt. We're wanting it to do
multiple but related tasks in a single go. It saves a lot of time
because we're merging all of these steps
into a single request. The AI will consider all of the components together
before it makes its response. One thing to note is there is
a bit of balance required. So to avoid confusion
from too many tasks, just make sure to keep
them tightly related. Otherwise, it may send the
AI off track a little bit. And this type of prompting
is best used for related multi step
tasks that benefit from an integrated and
time saving response. Now, here we have a
compound prompt example, act as a startup advisor
for a software company. Given this idea, and here you would insert the idea for an app or a SaaS product, analyze three pros and cons using market data and
competitive benchmarks. Recommend one monetization
model that fits a solo founder with
limited capital and explain why
it's the best fit. Outline the first three
lean startup steps to validate this idea
with minimal risk. Use a friendly, no fluff tone. And then I always like
to make sure to let the AI model know if it has any questions or it
needs more context. If you need more context to give a better response, ask me follow-up questions
before answering. This is a really great
way to make sure that all the gaps are filled
and that the AI model is not making up all of the information and it
has everything it needs. The pseudo code prompting type, this is one of my
favorites because the results you get are
incredibly accurate. It's a code inspired
prompting language that helps AI like ChatGPT understand your instructions more clearly. Because these AI models are built on code platforms, understanding code is
built into their DNA. So if you provide the prompts
in a code like structure, you're going to be able to get very, very accurate results. So a code like structure provides clear instructions
in a code like format, and this, of course, reduces any misinterpretation
and ambiguity. Now, having mentioned
code like structure, I don't want you to be put off and say, James, but I don't know programming
or coding or development. It's not so much a structured, perfect syntax as you would find in
programming languages. It's more about providing very
specific logical requests. While we have a code
like structure, it's merely a simple syntax, a simple language that
we're providing to the AI to understand
it more clearly. So it has clear organization, we're defining the
data, the rules, and expected outputs
all in one place. You're going to see
from the example how simple it can be and
how easy it is to read. Don't get put off. It is just a name that's given for this type of
prompting technique. Then lastly,
reusable components. You create the logic
once and you're able to reuse it across
different proms. Once it understands what
you're looking for, you can then reuse that
for an entire chat thread, and that makes it really
efficient as well. This is best used
for complex tasks requiring structured logic, defined outputs, and
reliable execution. Now let's move on to an example. Here we have two examples, a plain language prompt. You can see what the
prompt would look like in plain language if we were to spell it out for the AI model. And then on the right is
the pseudocode prompt. As you can see from the total characters for each of them, the plain language prompt comes to 275 characters; of course, that's a lot more wordy and it's going to take longer to write out, while the pseudocode prompt is far more efficient. So looking at the plain language prompt: from the list of e-commerce items, extract the product name, the price and the URL. Filter out any items where
the price is less than $20. Format the output as a comma separated list where each item is
represented as name, price, and URL, and then we are inserting
the item list here. Then there's the pseudocode prompt version of that. Now, as I mentioned before, there is no fixed language or syntax to use for this kind of
pseudocode prompting. It's merely just a logical set of instructions that
you're providing. Almost think of them as very basic Google Sheets or Excel formulas that
you're prompting with. So that same example, we're going to extract products. The input is items. We're saying that
this is a list, and the output we want
is in CSV format. We have a filter there and we want to keep only the items with a price greater than or equal to $20. So it's going to look down that column and filter out anything below that. The fields we're working with
are name, price, and URL, and we are going to
provide the list of items.
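Just so you can picture it, a pseudocode prompt along those lines might be written out something like this. Treat it as a sketch only; the exact labels and layout are up to you, because there is no fixed syntax.

extract_products:
  input: items (list)
  output: CSV
  filter: price >= 20
  fields: name, price, URL
  items: [paste the list of items here]

You could just as easily write it on one line or rename the labels; what matters is that the data, the rules, and the expected output are all spelled out in one place. And here we're going to see a demo of what that looks like. This is an example of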
the list of products, and now we're going to run both of these
prompts so that you can see the results that
each of them will give us. And as you're going to see, the results are the
same for each of them. I'm going to run each
of these prompts independently and you'll be able to see the
results from them. So let's take a look at that. And here I'm putting
in the prompt. And also, we can see
here is the spreadsheet. It has a very simple layout. We have some headers: product name, price, and URL. And from there, we are
going to run this prompt. Here we can see it's giving
us the results back. So yes, it has given the
correct list of items. We see that they are filtered to those that are $20 or more. However, it is missing a few things. We're missing the headers, and it is
not in CSV format. So we'd have to
copy this and paste it into a blank spreadsheet, which is not ideal. I'd need to ask it just to add
in the headers and also to create it in a CSV file format. Not a big issue, but it's just an additional step
that's needed here. And there we go. We have the CSV format and
everything looks good. Now, let's start a fresh chat, and we're going to try
the same task again, but with the pseudocode prompt. And here is the pseudocode prompt. I'm going to upload the exact same reference file and run that prompt. And as you can see
here from the result, it has gotten everything
that we're asking for. It even includes
the headers there, so it saved us a step, and we also have it as
a CSV preview window, so we're able to hit
the download button and work with our CSV file. They both more or less took the same amount of time to actually run the prompt. However, the pseudocode prompt was able to get to the result that we wanted in less time overall. It got it right the first time and we were able to start working with our file and start
using it straightaway. Whereas the first
prompt where we had the natural language, the
conversational language, it needed some work after, so we needed to iterate
and there was a bit of back and forth to get
to the same results. So this just shows
the efficiency when using the pseudocode
prompting technique. The main benefits of pseudocode start with clarity: it removes confusion by using precise syntax. There's less guessing for the AI about what's expected
and what it needs to do. Consistency, this is
the beauty of it. You're going to be
able to now get consistent results
for each response. And when you're using the pseudocode prompts within
a long message thread, that becomes very important. So it's following
those specific rules, and you can expect the
same output each time. You can also define conditions and formats to help standardize these results as well. Efficiency, as we
just touched on, it's going to require
fewer words and oftentimes it's going to
get better results as well. Complexity: it can handle some pretty advanced tasks more effectively than natural language, which is especially powerful for tasks that mimic programming, automation, or structured data processing. If you're working
with data such as spreadsheets or
tables or charts, that sort of thing, this
is where it really shines.
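To show why this structured style maps so naturally onto that kind of work, here is roughly what the same product filter from earlier would look like as actual code. This is only an illustrative sketch, assuming a spreadsheet saved as products.csv with columns named name, price, and url; it's not something you need to write yourself.

import pandas as pd

# Hypothetical file and column names, for illustration only.
df = pd.read_csv("products.csv")

# Keep the items priced at $20 or more, with just the fields we want.
filtered = df[df["price"] >= 20][["name", "price", "url"]]

# Save the result as a CSV, matching the output format from the prompt.
filtered.to_csv("filtered_products.csv", index=False)

The pseudocode prompt is essentially describing that same logic in plain terms, which is a big part of why the model follows it so reliably. You might be asking, well, which prompt technique do I use? Which one should I be using? There's no one size fits all. Each of the prompting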
strategies have their use cases and
their best applications. So think about when to use each of them, but there's nothing wrong
with combining strategies. So to get optimal results, you combine the
strategies because complex tasks can require a
bit more complex prompting. So this is where you might
choose the super prompt, but you could add in elements
from another prompt. So you could have a super prompt combined with some step-by-step or compound prompting tasks. Then there's strategic selection. This is where you try to match the prompt type to
the specific needs. And as you saw from this lesson as well as
the previous lessons, where the best use cases are for each of them, that's where you think
about what prompt is going to give me the best result here and you use the
most appropriate one. Look at each of the strengths for a particular
prompting technique. As you practice,
you'll be able to see which prompt types give
you the best results. So I'd love for you to
give these a try and see which one is going to work
for your most common tasks. And just ask yourself, how can AI help me and which prompt is going to get the best results? So I encourage you, the next time you have a difficult task or a workflow, to think about which prompt
and which technique you'd like to use and which one is going to be best suited
for the task at hand. That wraps up this
advanced prompting lesson. I will see you in the next one.