Transcripts
1. Course Introduction: Will your job be eventually
taken over by an AI? Yes, most probably. Will it happen in
the near future? Probably not. Then maybe you're
asking yourself, why should I bother
learning a new skill? If everyone will use
AI in a few years, what difference could it make? To give you a straight answer: if you learn prompt engineering
a few years from now, it won't make any
difference at that point. Why? Because it's probably going to become a mandatory skill
for everyone to have. So it would simply be just
another drop in the ocean. There would be nothing special
about it in the future. Some people choose to
believe that using an AI model requires nothing
but a few simple sentences. And somehow, magically, the
future AI models will become so smart that they'll be able to read our minds and
understand what we want. I think these expectations
are a bit unrealistic. When we interact with modern AI models to solve
problems or generate ideas, we use plain natural
language in our inputs. But it's like writing: although you may
be able to sketch some simple ideas or write a certain number of
pages every day, that doesn't make you a good
writer. Think about it. Average skills cannot produce
more than average results. Have a look at some of the
free modules in my course. I think you'll understand pretty fast that
prompt engineering and artificial intelligence
in general are not just another hype. The good news is that
we're still very early. Now it's the perfect time
to sharpen your skills and be prepared for the future
by going beyond the average. Whether we like it or not, AI is bound to change the way we work and live in ways that
we cannot yet imagine. So stop missing out
on opportunities. The moment you'll begin
studying prompt engineering, you will immediately start
improving your productivity. The tools are already available. Some of them are
even free to use. The concepts that I'm
teaching in this course are not specific to
a particular AI. Although I make a lot
of references to ChatGPT or GPT-4, the skills that you'll learn in my class
are actually applicable when interacting with
any model in general. Whether you want to solve
problems, summarize data, come up with ideas
or create stories, images, or any other type
of generated content. The skills that you
are building in this course are going
to be very useful. And they are going to
be useful not just now or tomorrow or
in the near future, but in the long run. Although the models
will evolve and the technology will certainly
get better in the future, the core principles
will remain applicable. I've optimized this
course to give you the most important information in the shortest possible
amount of time. Because I know your
time is precious. There is a clear learning
structure in place. We'll start with the basics and gradually move on to
more advanced topics. There'll be lots of
examples and quizzes also. You'll be able to verify your understanding
of the subject. Don't have access
yet to ChatGPT or GPT-4? Not a problem. You get free access to ChatGPT via a digital
playground that I've built. It's like having a
dedicated laboratory where you can immediately
test the concepts that you learned. Need
more examples for real-life scenarios?
I've got you covered. There are more than
250 prompt examples that you can download
and use immediately. But compared to the rest of
the course, to be honest, these are not that
valuable anyway. I'm teaching you how
to build prompts, so you won't really need
those examples in the end. This is not a technical course. I won't be showing
you how to build an AI or fine tune
an existing model. I have created this material for any level of technical ability. So there are no
technical prerequisites. If you're looking for
more technical aspects of artificial intelligence, I'm working already
on other courses, but this one is for everyone. In case you'd like to
share anything about your learning experience or have any questions regarding
the material, you can reach me on LinkedIn
or Twitter and we can chat. Thank you for taking
part in this. I hope you will enjoy
the experience.
2. What is Prompt Engineering?: Alright, now that we've
covered the introduction, let's talk a bit about
the main subject. What is prompt engineering? Simply put, prompt
engineering is the art and science of designing effective inputs and
fine tuning parameters for AI models to get
the desired results. It's a mix of language skills, problem-solving, and
applying logical thinking. In many ways, prompting is
similar to writing code. It requires a similar
mindset and approach. You have to understand the
problem and be able to clearly define it and break
it down in individual tasks. Prompts play a crucial role
in working with AI models. They are like the
steering wheel and the gas pedal of a car
that helps you steer and control the
AI's capabilities and guide it towards
the right destination. With a well-crafted prompt, you can unlock the true
potential of AI models and make them work for
you in amazing ways. Think about it in a
matter of months. Now, most people will be
using this technology like chatbots and virtual assistance in everyday jobs and activities. But not everybody will have the same proficiency
in doing so. Just like with any tool
available out there, the average user will
get average results. So before we dive deeper
into prompt engineering, let's briefly touch on some popular AI models you
might come across today. The first example is GPT-4, which is the latest
version of OpenAI's groundbreaking generative
pre-trained transformer, which excels at understanding and generating human-like text. Midjourney is another example. This is a powerful model focused on creating images from prompts. Another one is Stable Diffusion, also a generative AI system
that is primarily used to generate detailed images based on textual descriptions. So basically, we are using
prompts to create images, but it can also be applied to
other image-related tasks. And there are many other less popular generative AI models. As artificial intelligence
research continues to evolve, we'll definitely see more
cutting-edge models emerging in the near future. These models all have their unique strengths
and use cases, but they all have
one thing in common. At least for now. We use prompts to harness their
capabilities effectively. Because there are so
many topics to cover and so much knowledge to
explore in this course, we will only focus on text-based
generative AI systems. Prompt engineering
can be applied to a wide variety of tasks, from writing emails
to creating artwork, generating code, or
even composing music. By understanding how
to craft prompts and fine-tune AI
model parameters, we can guide these
powerful AI systems to assist us in our everyday tasks and professional activities. For example, GPT-4 can be very useful in text-based tasks like
summarization of large datasets, classification of
data, text generation, language translation, conversation, or even rewriting or
spell checking content. So why should you learn
prompt engineering? As AI continues to
shape our world, knowing how to work with AI models becomes
increasingly valuable. AI's rapid development has
brought about some concerns, particularly when it
comes to jobs and skills. Automation and AI
in general have the potential to replace certain job roles
and tasks for sure, which can be both exciting
but also unsettling. On one hand, AI can free
us from repetitive tasks and allow us to focus on
creative and strategic work. But on the other hand, it also has the potential
to make some jobs obsolete or requires us to adapt our skills to
remain relevant. It's not really about
the product itself, whether that chat GPT, GPT-3 for Google Bard, or any other product. Artificial intelligence
is here to stay. The recent open letter
signed by 1,000 researchers where
they advise for a break in the
training of ai models. I think that's the best proof that AI should be
taken seriously. By mastering prompt engineering, you'll be able to enhance your productivity and believe it or not, even your creativity. Discover new ways to
tackle complex problems, adapt to the changing job market and stay ahead of the curve, and also develop a
valuable skill set that can be applied in various
professional settings. Remember, this course is all about making
prompt engineering accessible to
everyone regardless of your background or
technical expertise. I'm going to use examples
and real-life scenarios to show you how to solve real
life problems using AI. In short, learning prompt
engineering is an investment in yourself and your ability
to thrive in the age of AI. In conclusion, we've covered what prompt engineering is and why it is important
in the age of AI. Now, it's time to talk a
bit about the resources and additional tools that you'll find in the contents
of this course. See you in the next module.
3. A short talk about the additional resources of the course: Welcome back everyone. Before we start exploring the basics of
prompt engineering, I want to take a short moment
to tell you more about the additional resources and tools that I have
included in this course. Apart from the videos that I hope you will enjoy watching, that are two very
important tools in this course that you
should consider using. The first is the
testing playground, which is a free interface
where you can chat with ChatGPT and experiment
with your prompts. Here is the link
to the playground. And you can also find
it in the resources attached to the last
lecture of the course. The other item I wanted
to tell you about, it's a collection of
prompts that I've built and which can be very
useful as a starting point. This can also be found in
the resources attached to the last lecture in
the last section. A few words on that. By the way, you should
consider the prompts in that collection as
just a starting point, as I said, on which you can
build more complex ones. Solving an actual real
life problem using AI can readily be accomplished just by using a simple prompt. So you'll probably
have to tinker a bit, experiment and refine them until you get the right result. Be creative and try to
define the problem first. Okay? That's what I wanted
to share about the additional resources. In the future, I'll be adding more
material in that section. So take a look at
it once in a while. Now it's time to
dive deeper into the nuts and bolts of
crafting effective prompts. In the next module, we'll
explore the basics of prompt engineering
and I'll share some best practices to
help you get started. So let's begin our
journey and get to work.
4. Important Definitions and Key Concepts: Hello everyone.
This is module 1.4. I know I've promised to begin our talk about the basics
of prompt engineering, but there's one more
important topic to address. Until we go further to the more detailed
chapters in this course, we need to clarify
some basic vocabulary. Otherwise, you may
not understand correctly the more advanced
topics in this course. So I've prepared this lecture in which I will explain
in simple words, some of the definitions that
you should be familiar with. These are the terms that
I'm going to explain. So let's begin. Large language models. A large language model is an AI model that understands and generate natural language by learning from huge
amounts of data. They are very good at
understanding context, detecting patterns, and
making predictions, and, in general, at almost any task involving
natural language. Generative pre-trained
transformers, GPT. GPT is a type of AI model designed for natural
language processing tasks. GPT is a specific type of
a large language model. Actually, it basically creates an output by predicting the
next word in a sentence, which enables it to generate human-like responses in
a variety of contexts. And by the way, this is just the
technical classification, which is really not relevant for
the average user. Generative AI. Generative AI is a specific category
of AI tools and models, like ChatGPT, that
generate something like text, video, or images, for example. This is different
from other types of AI systems that are
designed to make decisions or create
recommendations, or categorize data. They can be very
useful in scenarios like content creation,
design and problem-solving. A token. What is a token? Well, GPT models in particular understand and think
at a token level. A token is basically
a chunk of text that goes into the AI
model to be processed. Depending on the model, this chunk can be a
single character, a word, or a
combination of words. What is AGI? Artificial general
intelligence refers to an AI system that has the
capability to understand, learn, and apply knowledge on a level comparable to
human intelligence. It's the kind of AI that we usually see in the
Hollywood movies. As far as I know, at the moment I'm
recording this video, artificial general
intelligence has not yet
been achieved anywhere in the world. But the race to get
there is faster than ever. Another important
topic is AI alignment. AI alignment is the process of making sure that the goals, values, and behavior of AI systems are aligned
with those of humans. This is crucial for creating
AI systems that are safe, trustworthy, and
beneficial to society. Because we don't want to
end up like in the movies. Reinforcement learning
from human feedback is a technique used to
train AI systems by providing them with
feedback from humans. So it's actually a way of
aligning them to our values. This feedback helps
the AI system learn to make better decisions and adapt its behavior to better align with human
values and goals. ChatGPT and GPT-4 are examples of AI models that were trained using
this technique. So we can say it's a method that's used to achieve a
better alignment of an AI. What about AI fine-tuning? AI fine-tuning is a technique where a pre-trained model is further trained on
a smaller set of data specific to a
particular domain. This tuning will improve the model's performance
on the respective topic. It's like taking a person with standard education into
a specialized course for a particular domain. ChatGPT. ChatGPT is an AI chat interface made
by a company called OpenAI. ChatGPT is basically just
a website based on an AI model called
GPT-3.5 Turbo. Although during this course
you might notice that I'm calling chat
GPT an AI model. This is inaccurate
but more convenient. So I apologize in advance for
this intentional mistake. And finally, GPT-4. This is OpenAI's
latest and most advanced large language model, available inside the same
interface called ChatGPT. That's all. Still there? I hope that wasn't too much
information to process. And don't worry, you
can make a bookmark to this module and come back anytime when you might need
to see some definitions. Again. That's it for this module. See you in the next
lecture where we will start discussing about
practical prompt engineering.
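To make the token definition from this module a bit more concrete, here is a small illustrative sketch. It deliberately uses a naive whitespace split, which is only a rough stand-in: real GPT models use a subword (byte-pair encoding) tokenizer, so actual token counts will differ.

```python
# Rough illustration of the "token" concept: a prompt is processed as a
# sequence of chunks. NOTE: this naive whitespace split is only a stand-in;
# real GPT models use a subword (BPE) tokenizer, so real counts differ.

def naive_tokens(text: str) -> list[str]:
    """Split text into word-level chunks (a crude approximation of tokens)."""
    return text.split()

prompt = "Translate the following sentence from English to French."
tokens = naive_tokens(prompt)
print(tokens)
print(len(tokens))  # a rough lower bound on the real token count
```

In practice, a single word can map to one or several real tokens, which is why length limits are stated in tokens rather than words.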
5. Understanding Prompts: Inputs, Outputs, and Parameters: Hello everyone and
welcome to Module 2.1, where we're going to discuss the building blocks of
prompt engineering, the inputs, the outputs
and the parameters. Don't worry, they
may sound technical, but you'll see that
these concepts are pretty straightforward
and easy to understand. So let's dive right in. First. Let's talk about the inputs. Inputs are the starting point of any prompt
engineering journey. They are simply
the tasks, questions, or information that
you provide to the AI, helping it understand
what you want. Think of it like a conversation:
you're asking the AI a question or giving it
a task to perform. For example, you might input the
following text. Here, the input guides the AI
in understanding that you want it to translate a specific sentence
from English to French. So it's essential to
provide the AI with well-defined inputs in order
to get the desired results. Remember as a general rule, garbage in, garbage out. A mediocre input will
probably result in a mediocre output.
Keep that in mind. Next up, we have the outputs. These are the responses generated by the AI based on the
input you've given. In our previous example, the output would be the
translated sentence in French. Obviously, the quality
of the output largely depends on the clarity and
how specific the input is. Finally, let's discuss
parameters. The prompt parameters
are the settings or knobs that you can tweak to
customize the AI's behavior. You can think of them
like the dials in a studio where you can adjust
the sound settings to get the perfect sound. There are some common terms related to prompt
parameters that you should get familiar
with, like temperature. The temperature
controls the randomness or creativity of the AIs output. Higher values will make
the AI more creative, while lower values will make it more focused
and deterministic. Another one is maximum tokens usually sets the maximum
length of the AI's response. If you want shorter responses, you can reduce the
max tokens value. Don't worry about using
these parameters, though: in most scenarios where you'll be interacting
with an AI system, there are no settings in the interface itself that
you can actually tweak. So basically, you're fine-tuning these
parameters by the way in which you're phrasing
your prompts. So it's all in the prompt. Now, let's see an example
of using parameters. Here is the input. This is going to be our prompt. As you can see, the
prompt contains the task and the
required parameters, and everything is
in plain English. The parameters: as
creative as possible, limit the length to 50 words. So by tuning the parameters, we got a creative
and concise story about Whiskers the cat. Alright, now you should have a better understanding of the basic structure of a prompt: the inputs, the outputs,
and parameters. In our next module, we will explore how to craft simple prompts using
these fundamentals. Remember: prompt engineering is all about communication
and experimentation. So don't be afraid to
try new things and keep refining your prompts until you get the results that
you're looking for. Happy prompting.
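As a small sketch of the two situations described in this module, here is how the same "parameters" can be expressed either as API-style settings or in plain English inside the prompt itself. No real API is called here; the dictionary keys simply mirror common parameter names, and the values are my own illustrative choices.

```python
# Two ways of "setting parameters", sketched. No real API is called here;
# the request dictionary just mirrors common parameter names.

# 1) API-style settings, for interfaces that expose them:
request = {
    "prompt": "Write a short story about Whiskers the cat.",
    "temperature": 0.9,  # higher = more random / creative output
    "max_tokens": 80,    # caps the length of the response
}

# 2) Plain-English parameters, for chat interfaces with no knobs to tweak:
chat_prompt = (
    "Write a short story about Whiskers the cat. "
    "Be as creative as possible and limit the length to 50 words."
)

print(request["temperature"], request["max_tokens"])
print(chat_prompt)
```

Either way, the idea is the same: you are steering randomness and length, just through different channels.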
6. Crafting Simple Prompts: Techniques and Best Practices: Hey everyone and
welcome to Module 2.2. In this section, we'll explore how to craft
simple prompts, different techniques to
increase effectiveness, and some best
practices to follow. What makes a good prompt? Well, there are three
key aspects to consider. First of all, is the clarity. The prompt should be
clear and concise. This helps the AI understand your request
more effectively. Think of it like
trying to explain a concept to a
complete stranger. As you don't know anything
about that person, you have to use clear
and concise wording to avoid any kind of confusion. The next important
aspect is context. Provide enough context to
guide the AI's response, but not too much to
make it overwhelming. Just like in a conversation, too little context may lead to confusion and
misunderstanding, but too much context may
also confuse the audience. And then we have creativity. Encourage the AI to be creative and explore different solutions. Obviously, sometimes we need creative responses,
but sometimes not. It largely depends on the
problem we are trying to solve. So let's look at some examples to get a better understanding
of these concepts. Imagine you need the AI to help you draft an email to a client. This is an example
where you will probably need a bit of creativity
in the response. Otherwise, the output might look more like a template instead of creating a
certain connection. Instead of just asking,
write an email to a client, which is a simple but too general prompt for this problem, try to be more specific. You see, this prompt
provides clarity, context, and encourages
a creative response. Now, let's say you need a catchy social media
post for your bakery. Again, in this scenario, creativity is really important. Instead of asking, write a social media post
about my bakery, which again is very generic, try something like this. You see, this prompt
sets the tone, provides context, and lets the
AI's creative juices flow. Now that we've seen
some examples, Let's talk about a
few best practices for crafting simple prompts. Start with a clear action verb. This helps the AI
understand your intent. Be specific about
your desired outcome. This helps guide
the AI's response. Experiment with
different approaches. If the AI isn't generating
the desired output, try rephrasing or
providing more context. To wrap up this module, let's do a little
practice activity. I want you to come
up with a prompt for the AI to write a haiku
about a rainy day. Remember the three key aspects: clarity, context,
and creativity. You can practice
and experiment with this little homework using
the playground interface. If you come up with a
very funny response, feel free to share it with the rest of us in the comments. That's it for Module
2.2. Keep practicing; your skills are getting better with every new
prompt you create. Next up, we'll dive into evaluating and refining
prompts. Stay tuned.
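The three aspects from this module (a clear action, some context, and the desired outcome) can be glued together mechanically. The helper below is only an illustration of that structure; the function name and field layout are my own, not part of any library.

```python
# Illustrative helper that assembles a prompt from the three ingredients
# discussed in this module: a clear action verb + task, some context, and
# the desired outcome. The function and its fields are hypothetical.

def build_prompt(action: str, context: str, outcome: str) -> str:
    return f"{action}. Context: {context}. Desired outcome: {outcome}."

prompt = build_prompt(
    action="Write a friendly follow-up email to a client",
    context="we delivered their website redesign last week",
    outcome="invite feedback and suggest a short review call",
)
print(prompt)
```

Starting from a structure like this also makes it easy to vary one ingredient at a time while you experiment.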
7. Evaluating and Refining Prompts: An Iterative Process: Hello and welcome back. This is module 2.3 about
evaluating and refining prompts. By now, you've learned about the basics of prompts and
how to create simple ones. In this lesson, we'll explore
how to evaluate and refine your prompts using
an iterative process that's easy to implement and accessible to everyone. So let's begin. Evaluating
and refining prompts is like taking
care of a plant. Imagine you have just planted a seed and you're
excited to see it grow, you'll need to water it, give it sunlight, and maybe even talk to
it in a kind voice. Similarly, when you
create the prompt, you'll need to nurture it, tweak it, and learn from it. The first step in refining a prompt is to
review its output. Let's say you're
using an AI model to generate a recipe
for your food blog. Your initial prompt
could be something like: create a vegan lasagna recipe using eggplant
and mushrooms. After inputting the
prompt into the AI model, you receive an output. Now ask yourself, is the output accurate and
relevant to the prompt? Does it cover all
aspects of the prompt? Is it creative and
engaging enough? Is this what I'm looking for? For our vigor lasagna example, let's say the output
is a short recite with just a few ingredients
and minimal instructions. It's accurate but not as detailed or engaging
as you'd like. Don't worry, this is
just the starting point. We'll make it better. Take note of the
observations you may have, as they'll be essential
for refining the prompt. Once you've reviewed the output, it's time to modify the prompt. This could involve
changing the phrasing, adding keywords, or
providing more contexts. In our example, because we are talking about a blog article, it's probably a good
idea to make it more detailed and engaging
for the audience. So let's modify our
vegan lasagna prompt. As you can see, by
providing more context, we can help guide the AI model to generate
a better output. In our example, we've
added step-by-step detailed instructions and rich tomato sauce to make the
prompt more specific. By the way, in many cases, step-by-step actually works
like a magic formula sometimes, especially
when dealing with a complicated problem. Adding step-by-step into
our prompt will make the AI more focused and
logical in its response. Now that you've
modified the prompt, test it out. That's the process: input the revised prompt and check if the output
has been improved. If it's still not quite
there, don't worry. Remember, this is an
iterative process. Continue refining the prompt by going through the
steps we discussed: review the output, modify
the prompt, and test it. With each iteration,
the output should become more accurate and
relevant for your problem. If you want better
control over the process, try changing your prompt
in small increments. So if you make changes, make small changes
with every iteration. Evaluating and refining prompts is an essential part
of prompt engineering. It's a dynamic and
iterative process that helps you fine-tune your
AI generated content. As you gain experience
in working with AI models and crafting prompts, you'll get better at
guiding the AI towards generating the desired
results in time. Once you get used to that
specific area model, you will probably
need less iterations to get to the right output. It's like developing
a new reflex. You'll constantly get
better with practice. So let's review the key
takeaways from this lesson. Evaluating and refining prompts is crucial to achieve the
outputs we are looking for. Review the output focusing
on aspects like accuracy, relevance, engagement, and whatever characteristics
you want to improve. Modify the prompt by
adjusting phrasing, adding keywords, or providing more context, depending on
what you're trying to achieve. Test the revised
prompt and iterate the process until the output
meets your expectations. Remember that prompt
engineering is an iterative and
dynamic process that requires practice
and patience. Congratulations on
completing module 2.3. You are now ready to fine-tune your prompts for
various applications. But before we get to the
more advanced concepts, in our next module, we'll talk about some
good principles to follow when interacting with
AI. See you there.
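The review, modify, test cycle from this module can be sketched as a small loop. Here `fake_model` and `looks_good` are stand-ins for the real AI model and for your own judgment of the output; they exist only to make the iteration visible.

```python
# The iterative refinement cycle, sketched. fake_model and looks_good are
# stand-ins: in practice you send the prompt to a real model and judge the
# output yourself.

def fake_model(prompt: str) -> str:
    # Pretend the model answers in more detail when asked step by step.
    return "detailed recipe" if "step-by-step" in prompt else "short recipe"

def looks_good(output: str) -> bool:
    return "detailed" in output  # our acceptance criterion for this example

prompt = "Create a vegan lasagna recipe using eggplant and mushrooms."
output = fake_model(prompt)
for _ in range(3):  # a small, bounded number of iterations
    if looks_good(output):
        break
    # Refine in a small increment: add context and the step-by-step keyword.
    prompt += " Give step-by-step detailed instructions."
    output = fake_model(prompt)

print(output)  # "detailed recipe" after one refinement
```

Note the small-increment rule from the lesson: each pass changes the prompt a little and re-tests, rather than rewriting it wholesale.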
8. Basic Principles for Interacting with A.I.: Welcome to Module 2.4. In this section, we'll explore some simple but
essential principles to follow when working with
AI to create prompts. By following these guidelines, you will be able to generate better prompts and
achieve better results. The first principle is:
focus on the topic. Understand the problem you're
trying to solve first. Otherwise, it might be difficult to clearly formulate the prompt. Stay on track and avoid
deviating from the task at hand. Sometimes the results you'll get may steer away from
the initial topic. Make sure you bring
it back in focus. Make sure you're as specific
as possible in your prompts. This will increase
the probability of getting the results
you're looking for. If you're unsure,
don't hesitate to ask the AI for suggestions. Sometimes the answer
is right in front of you because the model may actually suggest a better
approach and give you a different perspective on the problem you're
trying to solve. Remember that having
a clear goal in mind is crucial when
working with AI. Make sure you know what
you want to achieve and keep your prompts
focused on that objective. The second principle
is: assume nothing. Do not assume that the AI knows something that may
seem basic to you. Just because it has been trained
on a huge amount of data does not necessarily
mean it knows everything. Do not assume the AI
understands the context. Sometimes it helps a lot to
provide more context, hints, or simple examples in your prompt. And always double-check
the output for validity. The problem with generative AIs is that sometimes they
give false results. This phenomenon is actually
called hallucination. Just like with people, the answer might be wrong but very convincing. Sometimes this may not be so important when creating
entertainment content, but when the task requires
very accurate information, the impact of this
may be significant. There's no actual intention of lying behind it; the model really considers that
to be the right answer. It's basically our job to question and
validate the output. As a conclusion, ai can
be incredibly powerful, but it's not perfect. It can also be
incredibly stupid sometimes. Approach
your interactions with AI systems with a healthy dose of skepticism and always verify the
information it provides, especially when accuracy
and precision matters. The third principle is
start with simple prompts. Begin with straightforward
and simple prompts. Make sure you are precise and
clear in your intentions. Use simple language. Gradually add
complexity as needed. When the result begins to
resemble your intended output, you can start adding context, details to the question, or try to refine the
format of the output, like, for example, asking the AI to answer in a specific language
or using a specific style. So when working with AI models, it's best to start with simple prompts and
build upon them. This approach will help you understand the
model's capabilities and limitations better and make it easier to refine
your prompts over time. The fourth principle
is: iterate and improve. Find a simple prompt to begin with and build from there. It might take a number
of iterations to get what you want, but that's okay. Keep track of previous
prompt iterations so that you can go back and
reuse them if needed. Sometimes, when the AI interface is built like a chat,
such as with ChatGPT or GPT-4, the interface itself will take care of that and will keep the
whole history for you. But that's not always the case. If the chat history is
not an available feature, then it's basically your job to keep track of all
the previous prompts. The main takeaway from
this principle is that AI works best when you
embrace an iterative process. Test and refine your prompts. And don't be afraid
to make changes or go back to a previous iteration
if it works better. Now, the fifth principle:
practice makes better. The more you practice prompting, the better you'll become. As I said before, it's like developing
a new reflex. Strive for improvement,
not perfection. Remember, in most cases you do not need a
perfect solution. Be aware of the tradeoffs
and try to strike a balance between cost and benefit when
interacting with AI. Remember, time is probably
your most valuable resource. The whole point in
using AI in most cases is to save time in doing
a specific activity. If you end up spending
more time than it would take you without
the AI system, then it kind of defeats
the purpose of using AI, doesn't it? As with any skill, practice is key to mastering
prompt engineering. Keep experimenting. Learn from your mistakes. And remember that sometimes good enough is better than
chasing perfection. By following these principles: focusing on the topic, assuming nothing, starting
with simple prompts, iterating and improving,
and practicing regularly, you'll be well on your way to becoming a proficient
prompt engineer. Mastering this skill
will allow you to unlock the full potential of AI
and transform the way you work. Now that we've covered
these principles, let's move on to the next
module to continue expanding our understanding of prompt engineering and
its applications. Stay tuned and see you
in the next module.
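The fourth principle's advice about keeping track of previous prompt iterations can be as simple as a list, for interfaces that have no built-in chat history. The structure below is just an illustration; the names are my own.

```python
# Keeping track of prompt iterations yourself, for interfaces without a
# chat history. A plain list is enough for this illustration.

history: list[str] = []

def record(prompt: str) -> str:
    history.append(prompt)  # save every iteration before sending it
    return prompt

record("Write an email to a client.")
record("Write a friendly follow-up email to a client about their website redesign.")

# If a later iteration works worse, go back and reuse an earlier one:
previous = history[0]
print(previous)
print(len(history))
```

Whether you use a list, a text file, or the chat interface's own history, the point is the same: never lose a prompt that worked.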
9. Role Prompting and Nested Prompts: Welcome to Module 2.5, which is about role prompting
and nested prompts. Sometimes providing more
context into your prompts can be accomplished by using a method called role prompting. In the first part
of this section, I'll give you some examples
and explain how role prompting can help you achieve better results
with your prompts. First, I want to start by explaining an interesting
phenomenon in human psychology. Sometimes we tend to assign human traits
or characteristics to the AI. We basically visualize the
model as a human person. In psychology, this is
called anthropomorphism. And although it might seem
a bit childish to employ, it turns out that in some cases, it may actually be an
advantage when creating prompts when
interacting with AI. Anthropomorphism can
help users relate to the AI more easily and create
a more engaging experience. Okay, now let's
explain the concept of role prompting. Role
prompting is a technique where you assign a specific role or identity to the AI to help guide its response and achieve
more realistic results. By giving the AI a role, you can set the context and
tone for each response, making the output more
relevant for your purposes. This can be very
useful when you're dealing with problems that
require specific knowledge. Or you want the
model to generate the output in a certain style. E.g. instead of asking what were the causes of
the American Civil War, you can use role prompting like this. Choose an appropriate role for the AI based on your needs. For example, you can ask the AI to pretend it's a
historian, a scientist, or a teacher, depending on the
type of information you're seeking and the kind of response you're
trying to generate. Remember to take it slowly when adding more
context in order to maintain a focused prompt and avoid confusing the engine. Start with something simple and more generic, and then
gradually refine the context by setting
a role for the AI or adding more information
gradually to your prompts. Role prompting can also be successfully used in combination with another technique which
is called guided iteration. Guided iteration is an approach where you work together with the AI in a
back-and-forth manner to refine and improve a prompt. Let's imagine you are a
researcher looking for an AI generated summary
of a scientific paper. You can combine
guided iteration and role prompting like this. Oh, by the way, I think it's always
a good idea to be polite to the AI.
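The combination of role prompting and guided iteration described above can be sketched in code. This is only a minimal illustration: the prompt wording and the helper names (`build_role_prompt`, `refine`) are my own, not from the lecture, and you would paste the resulting strings into whatever chat interface you are using.

```python
def build_role_prompt(role: str, task: str) -> str:
    """Prefix a task with a role assignment to set context and tone."""
    return f"I want you to act as a {role}. {task}"

def refine(previous_answer: str, feedback: str) -> str:
    """Guided iteration: feed the previous answer back with a refinement."""
    return (
        f"Here is your previous answer:\n{previous_answer}\n\n"
        f"Please revise it as follows: {feedback}"
    )

# Plain question vs. role-prompted version of the Civil War example:
plain = "What were the causes of the American Civil War?"
role = build_role_prompt(
    "historian specializing in 19th-century America",
    "Please explain the causes of the American Civil War for a general audience.",
)
print(role)
print(refine("The war had several causes...", "Focus on economic factors."))
```

In practice you would alternate: send the role prompt, read the answer, then send a `refine` prompt, repeating until the output fits your needs.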
You never know. Okay, for the second
part of this section, I want to tell you
a few things about another useful technique that's
called nested prompting. So basically, nested
prompts involve embedding one or more prompts
within another prompt. I know it sounds a
bit like inception, but don't worry, I'll try
to explain how it works. It's actually a technique
that can be used to break down complex questions
into simpler parts. It allows you to get more
specific information or make the AI's response more
focused and comprehensive. For example, instead of asking the AI everything in a single sentence, you can use nested prompts like this. In this example, the nested prompt targets two separate but related topics about electric vehicles: first, their environmental benefits, and then the challenges faced by the technology and infrastructure. By combining these related
topics into a single prompt, you guide the AI
towards providing a more comprehensive
and connected response on the subject of
electric vehicles. Apparently, the two prompts are not so different,
the second version will result in a more
detailed response. This technique can also
be very useful when trying to generate more
detailed and complex responses. For example, if you're trying to create a blog article about a certain subject, such
as electric vehicles. You could use a
prompt like this one. In the first part of the prompt, you ask the AI to come up with some facts
about the subject. And in the second part, you ask it to combine these facts and create
the actual article. So use nested prompts
whenever you want to obtain more specific and detailed information
about the topic. In conclusion, understanding
and utilizing role prompting and nested prompts can greatly enhance your interactions
with AI systems. These techniques can
help you generate more specific and
useful responses, making it easier to obtain the information
that you're looking for. Don't forget to combine
these techniques with the principles that we've
covered in the previous module. By doing that, you'll
be able to handle various challenges when working with AI and get the results you're actually looking for. Stay tuned for the next
module where we'll explore more advanced prompt
engineering techniques. See you there.
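As a quick recap of this module, the nested-prompt idea can be sketched as a small template builder. The electric-vehicle sub-questions follow the lecture's example, but the exact prompt wording and the function name are my own illustration:

```python
def build_nested_prompt(topic: str, sub_questions: list[str]) -> str:
    """Embed several related sub-questions inside one structured prompt."""
    numbered = "\n".join(
        f"{i}. {q}" for i, q in enumerate(sub_questions, start=1)
    )
    return (
        f"Write about {topic}. Address the following points in order:\n"
        f"{numbered}\n"
        "Then combine your answers into a single coherent response."
    )

prompt = build_nested_prompt(
    "electric vehicles",
    [
        "What are their environmental benefits?",
        "What challenges do the technology and infrastructure face?",
    ],
)
print(prompt)
```

The point is structural: one input, several related requests, plus an instruction to merge the partial answers into a connected whole.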
10. Chain-of-Thought Prompting: Welcome to Module 3.1, where we'll explore the
concept of chain of thought prompting in
prompt engineering. Chain of thought is an advanced technique that involves breaking down complex tasks or questions into smaller and more manageable prompts. This approach enables
better control and guidance over
the AIs output, ensuring more accurate
and relevant responses. Chain of thought prompting
becomes necessary when a single straightforward prompt may not provide the
desired results. It can be particularly
useful in cases when the topic is very complex
or has multiple layers. Or the AI needs additional context to provide
the relevant response. Or when a step-by-step
approach is required to guide the AI through a
specific thought process. Let's consider a scenario where you want the AI to suggest a marketing strategy
for launching a new eco-friendly product
line for a company. So this is going
to be our example. Without chain of
thought prompting, you might input a prompt like this. However, the AI might return an overly generic or off-target
output, such as this one. Now, let's have a look at this
example and see how we can use chain of thought
prompting to guide the AI to achieve a
more useful output. So we begin by providing
context and setting the stage. Next, we identify the unique selling points
of the product line. We then propose
marketing channels and tactics tailored to the target audience
and product features. And finally, we combine
the outputs from the previous steps to form a
cohesive marketing strategy. So by breaking down the problem into smaller
prompts, we guide the AI through the process and receive a more accurate and
contextually relevant output. This chain of thought
approach can be applied to various scenarios to improve the AI's understanding
and performance. That's it for this module. Today we've learned how to solve more complex problems using
chain of thought prompting. Remember to experiment with
these techniques and don't forget to take advantage of
the playground tool for that. That concludes Module 3.1. In the next module, we'll
explore the exciting world of multilingual and multi-modal
prompt engineering. See you there.
11. Multilingual and Multimodal Prompt Engineering: Welcome to module 3.2, where we'll dive into
the exciting world of multilingual and multi-modal
prompt engineering. Today, I will guide you through the process
of working with multiple languages
and different modes of communication using AI. In this increasingly
connected world, being able to communicate
across languages is essential. AI can help bridge
language barriers by understanding and generating content in various languages. Let's explore how we can
use prompt engineering to build multilingual AI solutions. One common application of
multilingual AI is translation. Let's say we want to translate an English sentence into Spanish. Our prompt could look like this. The AI would then provide a translated output. Another interesting application
is language detection. To detect the language
of a given text, we could use a prompt like this. Multimodal prompt engineering takes AI capabilities a step further by allowing us to work with different types of data, such as images and audio. A multimodal AI system is able to process and generate
not just text, but also images or audio, or even video content. This opens up new
possibilities and gives you incredibly powerful tools to solve many different
types of problems. Imagine we want to generate
a caption for an image. We could provide the AI with
a description of an image or, depending on the system, a link to the actual image file, and use a prompt like this. Audio transcription
is another area where multimodal AI
systems can shine. To transcribe audio. We could provide a link to the audio file or the
actual audio content, and use a prompt that
instructs the AI to generate a written
transcription of the audio. Multilingual and multi-modal prompt engineering is powerful, but it comes with its own set of challenges and limitations. Some languages might
not be well supported, and AI-generated outputs
may not always be perfect. It's important to be aware
of these limitations and work iteratively until
you achieve the best results. As a side note, at the moment when this
video was created, not every AI system has
multi-modal capabilities. You will find out, e.g. the one that we are using for the playgrounds does not
have such capabilities. To succeed in multilingual and multi-modal
prompt engineering is essential to experiment and
iterate with your prompts. Be mindful of the
AI's limitations. Continuously learn and adapt to new techniques and
advancements in the field. There are literally new AI tools appearing every single day. Today, we've explored
the fascinating world of multilingual and multi-modal
prompt engineering. By combining these techniques, we can create AI based
solutions that are more versatile and useful across
a variety of applications. As you continue your journey
with prompt engineering, remember to stay
curious, keep learning, and never be afraid to push the boundaries of
what AI can do.
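The translation and language-detection prompts discussed in this module can be sketched as simple templates. The exact wording is my own illustration; adapt it to the model you are using:

```python
def translation_prompt(text: str, source: str, target: str) -> str:
    """Build a translation request for a text-only model."""
    return f"Translate the following {source} text into {target}:\n{text}"

def detection_prompt(text: str) -> str:
    """Build a language-detection request."""
    return f"Identify the language of the following text:\n{text}"

print(translation_prompt("The weather is lovely today.", "English", "Spanish"))
print(detection_prompt("¿Dónde está la biblioteca?"))
```

The same template pattern extends to multimodal systems: the `text` slot becomes a description of, or a link to, an image or audio file, while the instruction stays in plain language.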
12. Understanding the Non-Deterministic Nature of AI: Welcome to Module 3.3, where we will explore the non-deterministic
nature of AI and discuss strategies for managing
the variability and uncertainty
that comes with it. Let's dive in. AI systems, especially those
based on machine-learning, can produce different outputs, even when given the same input. So that means you will
most probably get different results each time you run the exact same prompt. The difference won't be
much, but it will exist. It might be a different
way of saying the same thing or
a different style or length of the answer. This behavior is called
non-determinism. It is important to be aware
of this aspect when working with AI as it can sometimes
lead to unexpected results. Let's take a look at
a simple example. Imagine we're using
an AI text generator to create a headline
for a news article. We provide the following prompt. Write the headline
for an article about a new
environmentally-friendly car. Given the same prompt, the AI might generate different
headlines each time. As you can see, each
headline is unique. Even though the prompt
remains the same. This variability can be both a strength and the challenge when
working with AI systems. To manage the non-deterministic
nature of AI, there are a few strategies
that can help. Test multiple prompt
variations to find the most reliable and
consistent results. Fine-tune AI models to improve their performance and
consistency for specific tasks. And set appropriate
expectations. Setting appropriate
expectations is crucial when working with
non-deterministic AI systems. Keep in mind that AI
generated outputs might not always be perfect or exactly
what you're looking for. It is essential
to be patient and flexible when reviewing
AI generated results. Sometimes is just a matter of
providing more contexts or being specific in
your prompt so that the variability of the
answer will diminish. On the other hand, the
non-deterministic nature of AI can also lead to delightful surprises
and creative solutions. By embracing the variability, we can discover new ideas and perspectives that we might not
have considered otherwise. This can be especially
valuable in fields like marketing design and
content creation, e.g. let's look at another example. Suppose we want to create a tagline for a new brand
of eco-friendly shoes. We give the AI the
following prompt. Write a catchy tagline for
a sustainable shoe brand. The AI might generate multiple creative options, such as these. Each one of these taglines showcases a different creative angle, highlighting the power of AI's non-deterministic nature
to spark innovation. While non-determinism
can bring benefits, it's essential to stay vigilant and mitigate
potential risks. Always review AI generated
content for accuracy, appropriateness, and
relevance before sharing it with others. Additionally, be prepared
to iterate and refine AI outputs to ensure they align with your
goals and values. To wrap up, understanding
and managing the non-deterministic
nature of AI is crucial when working
with AI systems. Here are the key takeaways
from this module. Ai can produce different outputs even when given the same input, which is known as
nondeterminism. Test multiple variations and fine-tune AI models to improve performance
and consistency. Set appropriate expectations
and be prepared for some variability in
AI generated outputs. Embrace the creative
potential of AI's non-deterministic
nature to discover new ideas and perspectives. Always review AI
generated content for accuracy, appropriateness
and relevance. Be ready to iterate and refine the outputs to ensure they align with your
goals and values. Thank you for joining
me in module 3.3. I hope you now have a
better understanding of the non-deterministic nature of AI and how to manage
it effectively. As we continue to explore the world of prompt engineering, keep in mind the strategies and tips shared in this module. Good luck and happy prompting.
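Sampling-based text generation picks each next token from a probability distribution, which is why identical prompts can yield different outputs. This toy sketch (a tiny headline generator, not a real language model, and entirely my own illustration) mimics that behavior and shows how a fixed seed restores reproducibility:

```python
import random

# Toy "headline generator": samples one option per slot, roughly the way a
# language model samples tokens. Identical inputs can produce different outputs.
ADJECTIVES = ["Revolutionary", "Eco-Friendly", "Groundbreaking"]
VERBS = ["Unveiled", "Hits the Road", "Redefines Driving"]

def generate_headline(rng: random.Random) -> str:
    return f"{rng.choice(ADJECTIVES)} Green Car {rng.choice(VERBS)}"

# Unseeded generators: two runs may differ (non-determinism).
print(generate_headline(random.Random()))
print(generate_headline(random.Random()))

# Seeded generators: the same seed always reproduces the same output.
a = generate_headline(random.Random(42))
b = generate_headline(random.Random(42))
assert a == b
```

Real AI systems expose related controls (often called temperature or sampling settings) that narrow or widen this variability, which is one practical way to manage non-determinism when you need consistent results.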
13. Human-AI Collaboration: Best Practices and Strategies: Hello everyone.
Today we'll explore the fascinating world of
human AI collaboration. We'll discuss how we can work
together with AI systems to create a powerful synergy that can improve our
lives and careers. To make the most out of
our collaboration with AI, it is essential to understand what AI systems are good at and where they might need our help. For example, AI can analyze vast amounts of data and
generate content quickly. However, it may struggle
with understanding context, emotions, or cultural details. By being aware of these
strengths and limitations, we can delegate
tasks effectively and know when our
human touch is needed. Working with AI isn't a
competition, it's a partnership. Think of AI as your teammate, someone with a different set of skills that complements your own. When we approach
artificial intelligence with a collaborative mindset, we can create a
win-win situation where both human expertise and AI capabilities are used
to their maximum potential. Let's explore some
strategies to help us collaborate effectively with
AI in different job roles. For example, in content creation, let AI generate ideas or draft content while
you focus on refining, editing, and adding context. For customer support, use AI
to handle common queries, allowing you to focus on more complex issues that require empathy
and understanding. Or when managing projects, use AI to track progress and predict potential
bottlenecks, so you can make
informed decisions and allocate resources
efficiently. As we work with AI, we might face some challenges
or misconceptions. For example, we might worry that AI will take our jobs, or we might be skeptical about the quality of AI-generated outputs. To overcome these concerns, remember that AI is
a tool designed to enhance our capabilities,
not replace us. By focusing on what we
can accomplish together, we can turn these challenges
into opportunities. Let me share a few
real life examples of successful human
AI e-Collaborations. An editor using AI to
generate a draft article, then revising and polishing it to ensure it meets
editorial standards. A salesperson using AI generated product recommendations to guide customers toward items
they're more likely to buy, leading to increased sales
and customer satisfaction. A project manager leveraging AI insights to
identify and address potential bottlenecks
resulting in smoother project execution and more efficient
resource allocation. And these are just
some examples. The possibilities are limitless. As we've seen, human
AI collaboration holds immense potential for enhancing our productivity and creativity. By understanding the strengths and limitations of AI, adopting a collaborative mindset, and employing different strategies, we can build successful partnerships with AI systems that help us not only thrive in our careers but also improve our personal lives. Now that we've explored human-AI collaboration, let's move on to the next module where we'll dive deeper into advanced prompt engineering
techniques. See you there.
14. Generating Ideas Using "Chaos Prompting": Hello everybody and
welcome to module 3.5. Are you an artist? Do you
work in a creative role? A few modules ago, we talked about the
non-deterministic nature of AI. Remember? I said that it might represent a challenge because it might lead to
unpredictable results. But I've also said that
sometimes randomness and unpredictability can
work in our advantage. In today's module, it's
time to embrace the unpredictable and
use the power of randomness to ignite
our creativity. Chaos prompting is
a unique approach to prompt engineering that allows you to generate unexpected and unique
outputs from AI models. It's a journey that
begins with a spark of inspiration and leads you
through a labyrinth of ideas, each more intriguing than the last. For a creative mind, this might just be the right way to trigger new ideas. Compared to what we've seen in the previous modules, this technique is a bit more unconventional because it's quite the opposite of generating precise
and correct answers. So what exactly is chaos prompting and how can we use it in our
interaction with AI? Chaos prompting is a technique that involves
crafting prompts that intentionally
introduce elements of randomness, ambiguity
or contradiction. Rather than seeking precise
and predictable responses, chaos prompting
encourages the model to produce outputs that
challenge our assumptions, spark our curiosity, and open the door to
new possibilities. The beauty of this method
lies in its ability to generate a wide range
of creative ideas. Whether you're a writer seeking inspiration for a story or an artist exploring new themes, chaos prompting can serve as a catalyst for igniting the
AI's creative potential. So, without any
further introduction, let me show you a typical
chaos prompting session. We start by generating some
random elements. Okay? I think we may
have too many. Let's reduce the list
a bit, shall we? Now it's time to build
and add complexity. Let's mix it all together now. Well, pretty cool, isn't it? At this point,
depending on the kind of creative work
we might be doing, we could use this output
as a seed for a story. Or we might even turn it
into something visual. For example, feeding the previous output into an image generation AI model like Midjourney might result in something
similar to this. So now that we've
seen an example, let's discuss some guidelines which might be useful
in this process. First of all, embrace the
unexpected. Chaos prompting is all about
welcoming the unknown. Don't be afraid to ask
open-ended questions, experiment with unusual
combinations of words, or explore abstract concepts. The goal is to create a
space where the AI model can surprise you with its responses. Play with contradictions. One of the hallmarks of chaos prompting is the use of contradictions or paradoxes. By introducing conflicting ideas or combining opposing concepts, you can encourage the
model to think outside the box and generate
novel interpretations. Iterate and refine. Like any other
prompting technique, chaos prompting is an iterative process. Feel free to build on
the AI model's responses, ask follow-up questions, or take the conversation
in a new direction. Each interaction is an
opportunity to go deeper into the creative process
and uncover hidden gems. Also, stay open-minded. When working with chaos prompting, it's important to keep
an open mind and be receptive to
unconventional ideas. Some responses may seem strange or complete
nonsense at first, but they can often lead to valuable insights or
spark creative ideas. Obviously, have fun. Above all, chaos prompting is about having fun and
enjoying the creative journey. It's an opportunity
to experiment, play, and collaborate
with the model in a spirit of curiosity
and wonder. In conclusion, chaos prompting is a
powerful technique that can enrich our interactions with AI systems and inspire
us to think creatively. By following the
guidelines I've mentioned, you can unlock a world of imagination and innovation
that's going to surprise you. I bet you won't believe
how amazed you'll be by the creative potential of an AI once you start
using this technique. As you continue to experiment
with chaos prompting, I encourage you to
share your experiences, insights, and discoveries
with the community. I'm really curious
to see the kind of results your
prompts might produce. I hope you enjoyed this lecture. See you in the next
module where we will talk about using AI in
software development.
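A chaos prompting session like the one in this module can be kicked off with a small random-element combiner. All the word lists below are my own examples, not the ones from the lecture:

```python
import random

# Combine random, unrelated elements into a deliberately ambiguous prompt.
CHARACTERS = ["a lighthouse keeper", "a sentient cloud", "a retired robot"]
SETTINGS = ["an underwater library", "a city made of glass", "the last train"]
PARADOXES = [
    "silence that everyone can hear",
    "a map of places that do not exist",
    "a memory of the future",
]

def chaos_prompt(rng: random.Random) -> str:
    """Mix one element from each list into a single chaotic prompt."""
    return (
        f"Write a short story about {rng.choice(CHARACTERS)} "
        f"in {rng.choice(SETTINGS)}, built around {rng.choice(PARADOXES)}."
    )

print(chaos_prompt(random.Random()))
```

You can grow or shrink the lists, add new slots (emotions, time periods, art styles), and feed the resulting sentence to a text model as a story seed or to an image model as a visual prompt.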
15. Writing Code with the help of AI Part 1: Hi everybody, and
welcome to module 3.6. As I mentioned in the
introduction of this course, I have designed
this learning tool for just about anybody. And up until now, I've tried to keep
our conversation as non-technical as possible. Well, in this module though, it's time to touch a bit of
a more technical subject. In an area where prompt
engineering might have a very good use
case, writing code. Whether you have a technical
background or not. I think you've heard
already about people using AI models to write code and
create simple applications, web pages or automation scripts. Actually, the developer
community right now is pretty much divided on the capabilities of an AI model
to assist in writing code. There are a lot of YouTube
videos and all kinds of demos proving that you can start from a simple prompt and build an
app in just a few minutes. Let me tell you
the truth, though. 80 per cent of the demos
you see on the internet are a bit exaggerated. Not that they are fake, but they usually don't show you the whole, sometimes
painful process of iterating back and forth, refining your prompt and
correcting bugs in the code. It's true that AI can be an incredible tool
for developers, but I want you to have
clear expectations about what it can and what
it cannot do for you. You should also know that using an AI model directly to
assist you in writing code is not the only
path available. There are also more specialized
tools for this purpose, which rely on AI to assist
you in software development. Some examples include good hubs, co-pilot, Amazon code
whisper, or quadriga. Anyway, let's assume you're
currently using ChatGPT and you don't have too much experience in software development. Be smart in your approach. Let's say you've written some code in your career, but you're not qualified as a business analyst or a software architect. Not a problem. The model can assist you with that also. But instead of using a basic prompt like this, you should use the full power of AI with
a prompt like this one. This prompt will result in a more structured and comprehensive
assistance from the AI. In my experience writing
code together with an AI, there are a few
observations that I've noted and I would like
to share them with you. Remember the golden rule: garbage in, garbage out. If you haven't defined the problem properly and try to get the AI to write code for an ambiguous task, well, you probably won't get the results
you're looking for. It also feels like a superpower. AI allows you to tackle
problems you've never solved before and approach topics that you barely studied in advance. For example, I was able to translate
a simple app from Python to PHP in a
matter of minutes, having only very limited
knowledge of PHP. But it still needs supervision and control. Having at least some minimum knowledge of the technology you'll be using is a must. The more knowledge you have, the more useful the AI becomes. It acts like an amplifier for your productivity. Sometimes, though, it makes childish mistakes. Because they are very unexpected, these mistakes in the code are also very hard to identify. They might be in places where you wouldn't
even think to look. Trying to debug these kinds of mistakes can be very
time-consuming. Take that into account. The max token limit
can also be an issue. Don't forget that all
current AI models have limitations in terms of the number of tokens
they can process. If you're engaged in a longer conversation
with ChatGPT, for example, at some point the AI might simply start to forget the
beginning of the conversation. This happens because the maximum token limit
has been reached. So in order to continue
the conversation, the model needs to delete some
of the previous messages. This can be very
inconvenient when writing code because
suddenly the model may start outputting lines
of code that do not apply to your app or repeat the same mistakes that
were previously corrected. The best approach to avoid this: divide your code into smaller modules and functions. From time to time during the process, submit the entire code of a module
or even the entire file, repeating the initial task. That way you regenerate the
context for the AI model, helping you to stay on track
and avoid hallucinations. Sometimes, especially with
ChatGPT and GPT-4 models, the answer you receive might be truncated. So you may ask for a Python function that solves a particular task, but receive only half of the
function in the output. When that happens, you can
simply ask the model in your next prompt to continue the code and give you the
rest of the function. Okay, because I want to keep each module within a reasonable length, I have divided this lecture into two parts. So see you in the next module, where we will discuss the typical process of using an AI model as a coding assistant.
16. Writing Code with the help of AI Part 2: Hello and welcome to module 3.7, which is the second part of the lecture describing the way developers can use an AI
model as a coding assistant. If you haven't yet completed
the previous module, I strongly recommend you
finish part one first in order to get a better understanding of this process and the
reasoning behind it. Those of you who
are experienced in writing applications have probably already used tried-and-tested software development methodologies. I'm not going to
address the topic of a software development
lifecycle in this course, but I would rather present you
a simple approach that you can customize and adapt to
your own development process. Remember that writing
the code itself is usually just the tip of the iceberg. In most cases, it's the process behind creating the
code that matters most. So here's a typical
flow that we might use to write code with
ChatGPT or GPT-4. First, define the
app requirements. Begin by clearly outlining
the app's purpose, target audience, desired
features, and any constraints. I also recommend including at this point any non-functional requirements, such as security, performance, or
compliance requirements. This information will
provide context to the AI, enabling it to provide more relevant suggestions
and code snippets. Second, choose the
technology stack. Determine the appropriate
technology stack for your app, including the programming language, development frameworks, libraries, and tools. When discussing these topics with ChatGPT, provide the context of your chosen stack to get more accurate assistance; it may come up with
additional suggestions. Third, break down the
project into smaller tasks. Divide the app
development process into smaller, manageable tasks. This will make it easier
to work with ChatGPT and get specific help on individual
components of the app. Fourth, initiate the conversation with ChatGPT, providing context about
the app, the technology stack, and the specific tasks
you are working on. Be as clear and concise as
possible when asking questions or requesting code snippets to ensure the most
useful responses. The fifth point is about seeking guidance on architecture
and design patterns. You might already have a pretty good idea on
your architecture, but it's always good to
have a second opinion. Ask ChatGPT for advice on application architecture
and design patterns that are suitable
for your project. This will help you create
a well-structured, maintainable and
scalable application. Sixth, request code
snippets and examples. When working on
individual tasks, ask ChatGPT for code
snippets or examples that demonstrate how to implement specific features or solve
particular problems. Then replace any placeholder
values that the model might provide with the correct values that your application
is supposed to use. Be sure to customize and test the code snippets in your app to ensure they meet
your requirements. Do not assume that everything
is working perfectly. Seventh, troubleshoot and debug. If you encounter any issues or errors during development, describe the problem to ChatGPT and seek guidance on potential solutions, debugging techniques, or best practices to resolve the issue. Eighth, optimize and refactor the code. Ask the model for
suggestions on how to optimize and refactor
your code for better performance, readability,
and maintainability. It's really great at that. Ninth, review and test the app. Once the app is complete, review and test it to ensure it meets the initial requirements and functions as intended. If necessary, ask ChatGPT for advice
on testing strategies, testing tools, and
best practices. Don't forget to also test and ask for suggestions regarding the
non-functional requirements. The model may help you e.g. identifying security
vulnerabilities or misconfiguration in
your application. And finally, deploy
and maintain. Deploy your app and seek guidance from ChatGPT on deployment
strategies, server configurations, or
maintenance tasks as needed. Don't forget to break down your work into smaller
pieces of code. Otherwise you may receive less useful outputs
from the AI model. Remember to be patient
in this process, as it may not always provide the perfect solution
on the first trial. It's also important to test
any AI suggestions and make adjustments as needed to fit your specific
application requirements. But wait a minute, AI can be very useful not just for writing code. There are a lot of other use
cases where we might use an AI model to improve our productivity as
software developers. Some examples: In code review, AI models can be used to review code and provide suggestions
for improvements. The model can analyze code for potential issues such as bugs, security vulnerabilities,
or styling inconsistencies, and even provide recommendations
for fixing them. Generating documentation. Everybody loves writing
documentation, right? AI can be used to automatically generate
documentation for your code. Given a code base, the model can generate human
readable explanations, comments, and documentation
for your APIs, for functions,
classes, or modules. Another interesting use case is natural language interfaces. A new class of software is about to emerge. With the current capabilities of AI, we can already build more intuitive and
natural interfaces where users rely just on natural language to
interact with our code. The learning curve
becomes easier and the overall user experience
can be improved. Instead of using
buttons and menus, you're just talking
to the application. Another use case is
requirement analysis. As you have seen in my example, AI can also act as
a business analyst. It can identify,
classify, summarize, and validate the requirements, and that can save a lot of time and effort for developers. The model could help
identify ambiguities, inconsistencies, and missing information
in requirement documents. Code translation. You need to translate from Python to JavaScript or C++? AI can help
you with that too. Test case generation is another interesting use
case for an AI model. It can be used to automatically generate
test cases for code. Given a code snippet, the
model can generate a set of test cases that cover different
scenarios and edge cases. Again, saving you a lot of time. Okay, that's it
for this lecture. See you in the next
module, where we will discuss some other
cool prompting techniques.
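The developer use cases from this module (code review, documentation, test generation) all reduce to filling a task-specific template with your code. The wording of each template below is my own illustration:

```python
# Prompt templates for a few of the developer use cases discussed above.
REVIEW_TEMPLATE = (
    "Review the following {language} code for bugs, security "
    "vulnerabilities, and style inconsistencies. Suggest fixes:\n{code}"
)
DOCS_TEMPLATE = (
    "Generate docstrings and a short usage explanation for this "
    "{language} code:\n{code}"
)
TESTS_TEMPLATE = (
    "Write unit tests covering normal cases and edge cases for this "
    "{language} function:\n{code}"
)

def build_prompt(template: str, language: str, code: str) -> str:
    """Fill a task template with the target language and code snippet."""
    return template.format(language=language, code=code)

snippet = "def add(a, b):\n    return a + b"
print(build_prompt(TESTS_TEMPLATE, "Python", snippet))
```

Keeping the templates in one place makes it easy to reuse them across projects and to refine the wording over time as you learn which phrasings produce the best results.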
17. Using Prompt Compression Techniques: Hello everybody and
welcome to module 3.8. In this lecture, we'll talk about a very
interesting technique that we can use with the
newer AI models like GPT-4. Now, I know some of you
may not have access to GPT-4 yet and are currently using
ChatGPT or other models. But I believe it's
just a matter of time until this new model or other AI models with similar performance will
become widely available. So I think you should know how to use this
method already. Okay? Remember when
we talked in one of the previous lectures about the limitations of an AI model? One of the problems with the current models is
the maximum token limit. Language models often have a maximum token limit due to memory and
computational constraints. Well, it turns out there is a way to overcome
this limitation. The method we will
be discussing in this module is called
prompt compression. Some people have also nicknamed this technique "Shogtongue". And yes, it is very
similar to the way we compress files in our computers so that they take less
space on the disk. As mentioned before, the prompt example that I will
be using here is built and tested on GPT-4. You can achieve similar results in ChatGPT, but the compression level will be much lower. Enough talking, let's have a look
at this example. Okay? So looking at the output, it's obvious that no human
can actually read that. We ask the AI to
compress the text using a language of its choice and any characters it may prefer. However, the length of
the resulting text in this case is 30 percent of the original text, which is already
quite remarkable. But let's see what the same AI model can
understand from that? Will it be able to
decompress it and maintain the meaning of
the text? Let's find out. Although the text is slightly different
from the original, the meaning is still there. We might improve the accuracy
further if we adjust the prompt and ask it to
provide a lossless compression. One thing to consider
though, is compatibility. Any compression
technique arbitrarily chosen by an AI model will result in text that can only be successfully decompressed
by the same model. Which means you
cannot compress with GPT-4 and decompress
it with another AI. Now that we've seen
this capability, think about its use cases, e.g. you could summarize a
very long conversation with the model and
later continue that conversation
in another session and still retain the
original context. Or you could use compression
to gradually create a really long prompt that
impossible to use. And there are many
other possibilities. Keep this in your toolbox
and use it when required. Well, it's time to
wrap up this module and prepare for the following
topic. See you there.
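Since the on-screen prompt itself isn't reproduced in this transcript, here is a hypothetical sketch of a compression instruction in the spirit of what's described above; the exact wording is illustrative, not the one used in the demo.

```
Compress the following text so that it uses as few tokens as possible,
using any language, symbols, or abbreviations you prefer. The only
requirement is that you, the same model, must be able to reconstruct
the original meaning from the compressed text in a later session.

[text to compress]
```

For decompression, a later session would paste the compressed output back and ask the model to reconstruct the original text.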
18. Problem solving and generation of visual outputs: Although a GPT
model is different from an image generating
model and does not usually have
the capability to create images or any visual responses, I'm going to show you in this lecture a simple way
in which you can produce diagrams or other
useful visualizations for your data using a
model like ChatGPT. But first, let's
talk a bit about how we can use an AI model
for problem-solving. There are several methods
that we can use in order to identify the
root cause of a problem. In this lecture, I'm going
to emphasize the usage of a GPT model for two very popular
problem-solving techniques, the fishbone analysis or Ishikawa diagrams and
the SWOT analysis. These two methods also happen to rely on visual representations. So I'm going to take
advantage of that to show you how
to solve problems and also how to create simple but effective diagrams using a model like ChatGPT. Why use an AI for
problem-solving? Think about it. In most problem-solving
techniques, we are usually encouraged to ask for different opinions on the subject from different
subject matter experts that we might have available. Problem-solving is best when approached as a
collaborative exercise. The problem is, if you're just an individual
or a small company, you may not have the luxury
of being surrounded with subject matter experts for all the problems you're
trying to solve. You may also lack the
necessary expertise in some fields in order to play the role of the expert yourself. Your only option might be to
hire expensive consultants, which in many cases may not be a feasible option
for obvious reasons. Well, here comes
artificial intelligence. A GPT model which
has been properly trained on an extensive
set of curated knowledge, can act as your personal
subject matter expert and help you solve some of
your most complex problems. It may not be the most
qualified expert out there, but multiple studies and benchmarks have already
established that it's probably going to perform a bit better than an
average consultant. Combine that with some proper
prompt engineering skills. Yeah, that's you. And you will have a
team of experts ready to help you solve even the
most complex problems. So let's begin our
discussion with an interesting problem-solving
technique which is called the fishbone analysis, aka Ishikawa diagram, bearing
the name of its creator. This method is very simple, yet very effective in
helping you identify potential root causes for a
complex problem or event. The reason they
call it a fishbone analysis is that
the method relies on a visual representation of the outputs which resembled
the skeleton of a fish. Let's watch together how we
can use a model like GPT-4 to apply this method for a specific problem. So in this example, I'm asking the model
to help me identify the root causes for the
fall of the Roman Empire. I know this is a fairly
simple example because we are analyzing a known event for which we have lots
of data already. But don't worry, you can apply the same process
for any problem. The only additional step that you'd need to
include is to provide specific data to the
model describing the event in sufficient detail
to provide more context. In this particular case, I'm going to use a very
direct and simple prompt because I'm not looking for a precise or very
specific answer. As you can see,
the model returns the most probable causes, divided into several categories. We can then further explore
these root causes in greater depth
if we want to get a better understanding
of the issue, e.g. Okay, now let's see how we can better
visualize the answer. The easiest approach to create a more visual response would be to specify that we want the output to be
returned as a table. Right? We've got a table, but still, this doesn't
really look like a fishbone. So is there a way to create a more visual representation
of the results? Yes, there is. The method I'm going to demonstrate next relies on an open-source technology called the DOT language. This is a very simple but
powerful way to create graphs. And it turns out that
GPT models like ChatGPT and GPT-4 can actually speak this language. So you see where I'm going? Let's go back to our example. And there you go. All we have to do now is
copy the code and use a free visualization tool such as this one to generate
our diagram. From here, we can
also modify the looks of the graph or export the
result as an image file. Going back to our problem, now that we've identified
the root causes, the next step is to
ask for solutions. And that's it. We just saved the Roman Empire. Would this be a good
idea? I don't know. Great. Now that we've discussed
the fishbone analysis, let's move on to our next popular method, which is SWOT. While the SWOT analysis is not a problem-solving
technique, in a traditional sense, it aids in problem-solving
by providing a structured approach to
evaluate the current situation. The name comes from the four elements
that it can identify: strengths, weaknesses,
opportunities, and threats. Let's see how we can apply the SWOT method to our problem. As you can see, this time I'm using a more
detailed prompt. That is because a SWOT analysis
has to be very specific on the context and
current objectives in order to produce
meaningful results. Let's see what we've got. Normally, a SWOT
analysis results in a visual representation of the items divided
into four quadrants. Can we achieve
something similar? Obviously, now that we have a current understanding
of the situation, the next step would be to
ask for an improvement plan. But in this case,
I would like to do something a bit
more interesting. Apart from the fishbone
analysis and SWOT, there are several other
problem-solving techniques that you should be aware of. I encourage you to explore the ways in which these
other techniques, such as the five whys, the Six Thinking
Hats, mind-mapping, decision analysis or PESTLE analysis can help you
solve a complex problem. And also how ChatGPT or GPT-4 can be used to assist you
in applying these methods. Actually, this would make a pretty good practical
assignment for you to apply the
concepts that I've explained in this lecture and familiarize yourself with some of the most popular problem-solving
techniques available. So your assignment is to
choose any problem you might be facing right now or
one that you faced recently. Describe it in a prompt
and work together with the AI model to identify
potential causes. And why not, maybe even
potential solutions? But please choose
a method different from the ones that we've
discussed in this lesson. If you think the
method you've chosen would benefit from a
visual representation, that would be a
great opportunity to exercise generating visual
outputs with the dot language. That's it for this module. See you in the next
lecture where we will discuss other advanced prompt
engineering techniques.
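To give a concrete picture of the DOT language used in this lecture, here is a minimal hypothetical graph for a fishbone-style breakdown. The category and cause labels below are invented for illustration; any free DOT viewer or Graphviz can render it.

```dot
digraph fishbone {
    rankdir=RL;  // draw right-to-left so causes point toward the effect
    "Fall of the Roman Empire" [shape=box];

    // Category "bones" feeding into the main problem
    "Economy"  -> "Fall of the Roman Empire";
    "Military" -> "Fall of the Roman Empire";
    "Politics" -> "Fall of the Roman Empire";

    // Individual causes attached to each category
    "Heavy taxation"          -> "Economy";
    "Reliance on mercenaries" -> "Military";
    "Government corruption"   -> "Politics";
}
```

Because the model "speaks" DOT, you can simply ask it to return its root-cause analysis in this format and paste the result into a viewer.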
19. AI-Assisted Questioning: Building on our exploration of problem-solving and
generating visual outputs in the previous module. Let's have a look at
another important aspect of problem-solving, AI
assisted questioning. This technique allows you and the AI to work together
more effectively in solving complex problems by
encouraging the AI to ask the necessary questions
to better understand the problem and provide
more accurate responses. As we saw in the
previous module, AI models like ChatGPT can be used to identify
root causes and potential solutions
using techniques such as fishbone analysis
and SWOT analysis. In this module, we
will learn how to use the AI assisted
questioning to complement those
problem-solving techniques and further refine our understanding
of the issue at hand. The key to effective AI-assisted questioning is to ensure the AI asks
the right questions. This not only helps
clarify the problem, but it also ensures that it can provide the most relevant
and accurate solutions. Here's how you can
use ChatGPT to perform AI-assisted questioning and solve
complex problems. Step number one,
identify the problem. Start by describing
the problem you want to solve or the topic
you want to explore, be as specific as possible to give the AI a clear
context of the issue. Step number two, start
asking for questions. Prompt the AI to
generate questions that will help you better
understand the problem. Step number three,
answer the questions. Now it's time to give it the context that it needs. Once the AI provides a list of questions, respond to them
in detail to give the model more context
and information. Step number four,
iterate and refine. As you provide answers, the AI might generate additional questions or dive
deeper into specific areas. Keep engaging in this
back-and-forth process to refine the understanding
of the problem. And finally, step number
five, request solutions. Once the AI has gathered sufficient information, ask it to provide potential solutions or recommendations based on the data provided. You will definitely get
using this method. In the same way we utilized ChatGPT for fishbone and SWOT analysis, AI-assisted questioning
can be combined with those techniques to create a more comprehensive
problem-solving process. So let's see this process
in action with an example. Suppose you are experiencing a decline in sales for
your online store and want to identify the root causes and potential solutions. You could start by describing the problem to the AI and asking it to generate questions to
better understand the issue. The model might respond with questions like these. You would then respond to these questions, providing the AI with more context and information. And it may follow up with
additional questions, diving deeper into
specific areas. The thing is you'll
have to continue this iterative process until the AI has a solid
understanding of your problem. And finally, ask the
model to provide potential solutions
or recommendations based on the
information gathered. It might suggest actions like improving your
marketing strategy, addressing common
customer complaints, or offering new products to meet changing consumer demands. In summary, AI-assisted questioning allows you to collaborate with the model to solve complex problems. By encouraging the AI to ask relevant questions, you can ensure it has a comprehensive understanding
more accurate solutions. Sometimes in this process, by asking the right questions, you will also gain new perspectives
on the problem and be able to come up with
out-of-the-box solutions by applying some
lateral thinking. Okay, I think it's time for some practical activity to exercise what we've
just discussed. Choose a problem or
topic relevant to your personal or
professional life. Use the AI-assisted questioning process that I've described to gather information, refine your understanding, and request solutions from the model. Don't forget to
iterate and refine your questions to get the most valuable
insights from the model. In the next lecture,
we will explore more advanced prompt
engineering techniques, enhancing your
ability to use AI for real-life scenarios.
See you there.
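To make the five steps above concrete, a single opening prompt can set up the whole loop. This is one possible hypothetical wording (the scenario details are invented placeholders):

```
I want to solve the following problem: sales in my online store have
been declining for the past three months. Before suggesting any
solutions, ask me the questions you need answered to fully understand
the situation, one at a time. Only propose solutions once you have
enough context.
```

Asking for questions one at a time keeps the back-and-forth manageable and makes it easier to iterate and refine as described in step four.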
20. Automating Emails and Social Media Posts: Hello everyone. In this lesson we'll dive
into how prompt engineering can help automate emails
and social media posts, making our digital lives
more efficient and engaging. We live in a connected world and communication is more
important than ever. AI and prompt engineering can help us craft more
effective messages, saving us time and ensuring our content is
engaging and relevant. When it comes to crafting
prompts for emails, it's all about the context. Here are a few key points to consider: the recipient's
relationship to you. The purpose of the email,
and the desired tone. Let's say I want to schedule
a meeting with a colleague. I could create a prompt
like compose an email to schedule a meeting with a colleague discussing
a new project idea. The AI will then generate a suitable email that I
can use or edit as needed. Using templates is another
great way to automate emails. We can create a prompt that includes the template
structure and then customize it with specific details for
each unique email. I'll let you experiment with this idea on the playground
and test it yourself. See what you can come up with. Social media is a powerful
platform to share our ideas and engage with others. To create prompts for social media posts, consider the following aspects: the platform, which
might be Twitter, Instagram or Facebook or any other platform,
the target audience. And the purpose of the post. Let's say I want to promote
an upcoming workshop. I could create a prompt like this, which will probably generate
a tweet that captures the essence of the event while appealing to
my target audience. In this lesson, we've explored
how prompt engineering can be applied to automate emails
and social media posts. By understanding the context,
purpose, and audience, we can create engaging content that saves us time and
keeps us connected. In the next lesson, we'll learn how prompt
engineering can be used for content generation in
blogs, articles, and reports. See you there.
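As a sketch of the template idea mentioned in this lesson, an email prompt might look like the example below. Every bracketed field is a placeholder to customize per message; the wording is illustrative, not taken from the lecture's demo.

```
Compose an email to [recipient], who is [your relationship, e.g. a
colleague]. Purpose: [schedule a meeting about a new project idea].
Tone: [friendly but professional]. Keep it under 120 words and end
with a clear call to action.
```

Keeping the recipient, purpose, and tone as explicit fields is what turns a one-off prompt into a reusable template.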
21. Content Generation: Blogs, Articles, and Reports: In this lesson, we'll explore how prompt engineering can help create engaging and well-crafted content in various formats. So let's get started. When it comes to
content generation, prompt engineering can
be an invaluable tool. We'll focus on three main types of content: blogs, articles, and reports. First, let's talk about blogs. Blogs often require a more casual and
conversational tone. With prompt engineering,
you can guide the AI to generate blog posts tailored
to your target audience. To craft a blog prompt, start with a clear topic and
a few relevant keywords. E.g. remember, the purpose is not to replace
human creativity. Adding that human
touch on top of the content generated by AI
will make a huge difference. Next, let's discuss
articles. Articles are typically more formal and structured
than blogs. With prompt engineering,
you can create well-researched and
informative articles on various topics. To create a prompt for
an article provides a specific topic and requests for evidence-based
information. E.g. remember to always check the data for
accuracy and make sure it doesn't provide
outdated information. Most of the AI
technologies out there, including ChatGPT and GPT-4, have been pre-trained
on a specific set of data and can only answer
based on that data alone. So if the training has included only data available
until a certain year, the AI will have
absolutely no clue about newer events
happening in the world. On top of that, most AI models do not possess the ability
to access the internet. That means their
outputs may not be up-to-date and the model may
not even mention that. Finally, let's look
at the reports. Reports are more detailed and comprehensive than articles, often presenting
data and analysis. Prompt engineering
can help generate well-organized and
insightful reports. To generate a report, your prompt should
include the topic, scope, and any data or sources that you'd like
to be incorporated. E.g. you may then adjust the tone and style to match the format that
you need for the report. In this lesson, we explored
how prompt engineering can be applied to content
generation for blogs, articles, and reports. By leveraging AI, you
can create engaging and informative content for
a wide range of purposes. In the next module, we'll discuss using prompt
engineering for task delegation and project
management. Stay tuned.
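To tie the three content types together, here is one possible shape for a report prompt; the bracketed fields are placeholders to fill in, and the exact wording is an invented example rather than the one shown in the lesson.

```
Write a structured report on [topic]. Scope: [time period, region, or
product line]. Use the following data: [paste data or key figures].
Include an executive summary, a findings section that cites the
supporting numbers, and a short recommendations section. Formal tone.
```

Because the data is supplied inside the prompt, the model doesn't have to rely on its possibly outdated training data for the figures themselves.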
22. Task Delegation and Project Management: Hello everyone and welcome
to module 4.3 of our course, prompt engineering
for everybody. In this lesson, we'll explore how prompt engineering
can help with task delegation and project
management. Are you ready? As we begin, let's
talk about how AI can assist us in
delegating tasks effectively. By leveraging
prompt engineering, we can create intelligent
ways to analyze workload, prioritize tasks,
and assign them to team members based on their
skills and availability. Now, let's discuss how prompt engineering can help
in task prioritization. By inputting the
right information, AI models can sort
tasks based on urgency, importance, and deadlines, enabling us to focus
on what matters most. Here's an example
of a prompt that can help in task delegation. By providing relevant
information, the AI model can suggest an efficient task distribution
among team members. Next, let's explore how AI can
aid in project management. Prompt engineering can help
us track project progress, identify potential bottlenecks, and generate status reports, making project management
smoother and more efficient. Consider this
example of a prompt for generating project status reports. By inputting relevant data, the AI model can generate a concise and informative project status report to keep everyone in the loop. AI can also be a valuable tool for enhancing
team collaboration. Through prompt
engineering, we can create AI models that help facilitate communication, schedule meetings, and
manage shared resources, ensuring that teams work
together seamlessly. Here's an example of a prompt
for scheduling meetings. The AI model can analyze everyone's schedules
and recommend a suitable meeting time,
coordinate and collaborate. While AI can be incredibly helpful in task delegation
and project management, it is essential to
recognize its limitations. As humans, we must still apply
our judgment, intuition, and empathy to make informed decisions and guide
our teams effectively. As we wrap up this module, let's recap what we've learned. AI can assist in task delegation
by analyzing workload, prioritizing tasks,
and assigning them based on team members' skills and availability. Prompt engineering can help in project management by
tracking progress, identifying potential
bottlenecks, and generating status reports. And these are just
some examples. AI models can enhance team collaboration by
facilitating communication, scheduling meetings, and
managing shared resources. It's also essential to recognize AIs limitation and
apply human judgment, intuition, and empathy
in decision-making. Now that we've explored
how prompt engineering can be used for tasks delegation
and project management. You are ready to apply these
principles in your own work. In the next module,
we'll discuss how prompt engineering
can be utilized in job roles that are
currently affected by AI or that might be
affected in the future. Stay tuned and see you
in the next module.
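The prioritization logic described in this module (deadline first, then importance) can be sketched in a few lines of code. The task list below is entirely made up for illustration; in practice you would put this information into the prompt and let the model do the ordering and explain its reasoning.

```python
from datetime import date

# Hypothetical task list; in a real workflow this data would be
# described in the prompt rather than in code.
tasks = [
    {"name": "Fix login bug",   "deadline": date(2024, 5, 3),  "importance": 3},
    {"name": "Write Q2 report", "deadline": date(2024, 5, 10), "importance": 2},
    {"name": "Update website",  "deadline": date(2024, 5, 3),  "importance": 1},
]

# Sort by deadline first, then by importance (higher first) -- the same
# ordering we would ask the AI model to apply.
ordered = sorted(tasks, key=lambda t: (t["deadline"], -t["importance"]))
for t in ordered:
    print(t["name"])
```

The advantage of asking the model instead of writing code like this is that it can also weigh fuzzier factors, such as estimated effort or dependencies between tasks, that are hard to reduce to a sort key.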
23. Customer Support: Enhancing Human-Agent Collaboration: Hello everyone. Today we will explore how prompt
engineering can help enhance human agent collaboration
in customer support. We'll discuss how AI can complement human support agents, handle repetitive tasks and inquiries, and improve response times and customer satisfaction. AI tools can be a valuable addition to
customer support teams. By using prompt engineering, we can design virtual
assistants that work alongside human agents to provide faster and
more accurate support. Let me give you an example. Suppose we have a
virtual assistant that helps answer
frequently asked questions. We can craft a prompt
like in this case. The AI assistant will then provide a concise and
informative response, allowing human agents to focus
on more complex inquiries. Ai driven virtual assistants can efficiently handle
repetitive tasks, reducing the workload
of human agents. For instance, consider a prompt like "How can I reset my password?" The AI system can respond with step-by-step instructions, saving human agents time and effort. This way, customer
support teams can devote their energy to more
challenging and engaging tasks. One of the main
advantages of AI-assisted customer support is the ability
to reduce response times. Prompt engineering
allows us to design a system that can answer multiple inquiries
simultaneously. Imagine a prompt
like in this case. The AI assistant can quickly
provide a detailed response, ensuring customers receive
the information they need without having to wait for a human agent to
become available. It's also important
to strike a balance between AI systems
and human expertise. While AI can handle
many tasks efficiently, there are situations where human intervention is necessary. Complex issues, empathetic understanding, and nuanced communication are areas where humans are very good. A well-integrated customer
support system should allow for seamless handoffs between AI
systems and human agents, making sure that customers receive the best
possible support. So by embracing collaboration between AI tools
and human agents, customer support teams can
achieve optimal results. Prompt engineering
enables us to create AI systems that complement the skills and expertise of human agents, leading to enhanced efficiency and customer satisfaction. In summary, prompt engineering
can greatly benefit customer support teams by complementing human
agents with AI tools, handling repetitive
tasks and inquiries, improving response times
and customer satisfaction, and overall balancing AI systems and human expertise. That's all for this module. In the next section, we'll explore how AI-driven
and efficiency can improve the retail
and e-commerce industry. Stay tuned.
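As a hypothetical sketch of the FAQ assistant described in this module (the wording and fields are invented, not the on-screen prompt), the setup could look like this:

```
You are a customer support assistant for [company]. Answer the
customer's question below using only the FAQ entries provided. If the
answer is not covered by the FAQ, say so and offer to hand the
conversation over to a human agent.

FAQ: [paste FAQ entries]
Customer question: [paste question]
```

Restricting answers to the supplied FAQ and building in an explicit handoff is one way to implement the AI-to-human balance discussed above.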
24. Retail and E-commerce: AI-driven Personalization and Efficiency: Welcome everyone to Module 4.5. In this module, we'll
explore how AI and prompt engineering
can assist people working in retail
and e-commerce, making their jobs
more efficient and helping them deliver better
customer experiences. One of the ways AI can help sales teams is by providing
product recommendations. Using prompt engineering,
we can create personalized suggestions
for customers based on their preferences, shopping habits, and
other relevant factors. Here's an example prompt. AI can also play
a crucial role in inventory management
and demand forecasting. By analyzing historical sales
data and external factors, AI can help retail employees make informed decisions
about stock levels, reducing the risk
of overstocking or running out of popular items. Here's an example.
Obviously, for this prompt to return
the correct results, we should also provide relevant sales and event data in the prompt. Customers appreciate
personalized experiences and AI can help retailers tailor their marketing and communication
to individual needs. By analyzing customer
data, prompt engineering can generate
targeted promotions, discounts, or product
suggestions that are relevant to specific
customers, e.g. In this module, we've seen how AI and prompt
engineering can assist retail and
e-commerce professionals in various ways, including providing AI-assisted product recommendations
for sales teams, utilizing AI for inventory management
and demand forecasting, enhancing customer
experience with personalized offers
and communication. So by embracing AI's potential, people in retail and e-commerce can create better
customer experiences, improve efficiency, and
ultimately thrive in their roles.
25. Creative Writing and Brainstorming: Using AI to Generate Ideas and Drafts: Hello and welcome to Module 5.1. In this module, we'll explore
how AI can help us generate ideas and drafts for creative
writing and brainstorming. We'll look at practical examples and discuss strategies
to make the most out of AI systems without compromising
our creativity. One of the biggest challenges
in creative writing is coming up with ideas
or story outlines. With AI, we can generate various ideas by providing
a simple prompt, e.g. by using this prompt, the AI engine will
provide us with story ideas that we can
use as a starting point. We've all faced writer's
block at some point. AI can help us overcome this by suggesting new directions
or perspectives. E.g. this prompt will encourage the AI to
generate unexpected twists, giving us fresh ideas to
break through the block. AI can also help us refine and polish our work. By providing a piece of our writing, we can ask the AI to suggest improvements or
alternative phrasings. Here's an example prompt. In this case, the engine will
return a revised version, which we can use
as inspiration to improve our writing. When using AI to assist
with creative writing, it is essential to
maintain a balance between AI generated ideas and
our own creativity. Remember, AI is a
tool to enhance and complement our creative
process, not replace it. In this module, we've
learned how AI can be a valuable partner in our
creative writing process, from generating initial
ideas to refining our work. In the next module,
we'll explore how AI can help us with research and
information curation. Stay tuned, and let's
continue discovering the fascinating world of
AI and prompt engineering.
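Since the on-screen prompts aren't reproduced in this transcript, here is a hypothetical idea-generation prompt in the spirit of this module; the bracketed field is a placeholder:

```
Give me five story ideas about [theme]. For each one, write a
one-sentence premise and an unexpected twist, and vary the genre and
setting across the five ideas.
```

Asking explicitly for variation and for a twist is a simple way to push the model past its most obvious first suggestions.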
26. Efficient Research and Information Curation: AI-Powered Summarization & Analysis: Welcome to Module 5.2. In this module,
we'll explore how AI-powered tools can help you save time, find relevant information, and analyze content effectively. We'll also look at some example prompts that you can use with AI engines like GPT-4 to make your research process more efficient. One of the first steps in
conducting research is finding relevant sources. By using AI-powered search queries, you can quickly discover the
information that you need. E.g. this prompt will help the AI engine locate articles that
match your criteria, saving you time and effort. Keep in mind though the
possible time limitations that we've discussed in
the previous modules, the model may not have the
latest data available. Once you have your sources, AI can help you extract key information and
summarize the content. Here's an example. With this prompt, the
AI engine will generate a concise summary of the article, allowing you to quickly understand its main points. AI can also help you analyze data and derive insights from it. E.g. by using this prompt, the engine can analyze data and present you
with clear trends, making it easier for you to understand and interpret
the information. To recap, here are the benefits of using AI-powered research and information curation tools. Time-saving: AI can quickly identify relevant sources and summarize content, allowing you to spend more time on analysis and decision-making. Enhanced accuracy: AI can process large amounts of data and minimize human errors in information gathering and interpretation. Customized insights: AI-generated summaries and analysis can be tailored to your specific needs and interests. Let's imagine you're
working on a project to improve your company's
sustainability initiatives. Here's how AI can help you streamline
the research process. First, use AI to identify relevant articles and sources on sustainability
best practices. Second, use AI
generated summaries to quickly understand key
points and save time. And third, analyze data
trends and insights using AI to identify areas where your company can improve
its sustainability efforts. In this module, we've learned
how AI-powered tools can make research and information curation more efficient and effective. By using AI engines like GPT-4, you can save time, enhance accuracy, and gain customized insights that will help you excel in your
professional endeavors. As you move forward. Remember to experiment with different prompts and
strategies to find the best ways to
harness the power of AI for your research and
information curation needs. And always keep in mind
that AI is here to support your creativity,
not to replace it. Happy researching.
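As a hedged illustration of the summarization step (the wording is invented, not the lecture's on-screen example), a summarization prompt could look like this:

```
Summarize the following article in five bullet points, keeping any
figures and dates exact. Then list three questions the article leaves
unanswered.

[paste article text]
```

Asking for open questions alongside the summary turns a passive summary into a starting point for the next round of research.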
27. Enhancing Communication Skills: AI-Assisted Proofreading and Writing: Hello everyone and
welcome to Module 5.3. Today we're going to
explore how AI engines like GPT-4 can help us enhance
our communication skills. We'll focus on proofreading, writing, and adapting our style and
tone to different audiences. First, let's talk about how AI can improve grammar,
spelling, and style. AI-powered tools can quickly analyze your text and provide
suggestions for grammar, spelling, and style
improvements. This can save you a lot
of time and help you feel more confident about
the quality of your writing. E.g. you might input a sentence with a few
mistakes like this one. The AI could then offer
the corrected version. AI can also help you craft more persuasive and
concise messages. Let's say you're
writing an email to convince your colleagues to
adopt a new software tool. You can provide the AI with your initial draft and some key points you'd
like to emphasize. The AI might then generate
a suggestion for you. Lastly, AI can help you adapt your communication style and tone for different audiences. Imagine you need to rewrite a formal report for a
more casual audience. You can provide the AI
with a sample paragraph from the report and ask
for a more informal version. By leveraging AI
engines like GPT-4, you can improve
your communication skills, save time, and adapt your writing to suit
various situations. So use these tools as a helpful guide and allow your own unique voice
to shine through. That's it for this module, we'll talk about
time management and prioritization in our
next topic. See you soon.
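Bringing the three skills of this module together, a single hypothetical prompt (the wording is an invented sketch) can handle both proofreading and tone adaptation:

```
Proofread the following text, correcting grammar, spelling, and
punctuation. Then rewrite it once more in a casual tone for a
non-technical audience. Show the corrected version and the casual
version separately.

[paste text]
```

Requesting the two versions separately makes it easy to compare the pure corrections against the tone-adapted rewrite.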
28. AI-Driven Task Management and Decision Making: Hello everyone. In this
section we'll discuss how AI can help us manage our time and prioritize tasks
more effectively. We'll also explore how
AI can assist in making data-driven decisions and
optimizing resource allocation. So let's dive in. AI can be a very powerful tool for prioritizing and
scheduling tasks. By providing a list of
tasks and their attributes, we can use AI to generate an optimized task order based
on factors like deadlines, importance, and
estimated effort. Here is an example. AI can help us spot inefficiencies in our workflows. By analyzing our work processes and habits, AI can identify bottlenecks and areas that need improvement. For example, as you can see, it's all about the way we formulate the prompt. AI can also assist in making informed decisions based
on available data. It can analyze complex
datasets and provide insights to help allocate
resources more effectively. Here's an example. Okay, to sum up the contents of this module, AI can be a very
valuable partner in time management
and prioritization. By leveraging AI
driven task management and decision-making tools, we can enhance our productivity and make better-informed
decisions. Remember that AI
is here to support and complement our skills, not replace them. So let's make the
most of it and create a more efficient and
fulfilling work environment. In the next section, we'll discuss how
AI can help us with professional development and lifelong learning. Stay tuned.
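Before moving on, here's a rough illustration of the task-ordering idea from this module. When the AI ranks tasks by deadline, importance, and estimated effort, it is effectively computing a weighted score for each task. The sketch below is a hypothetical, hand-rolled version of that logic (the task data, weights, and scoring formula are all made-up examples, not anything ChatGPT actually runs):

```python
from datetime import date

# Hypothetical task list with the attributes mentioned in the module:
# deadline, importance (1-5), and estimated effort in hours.
tasks = [
    {"name": "Quarterly report", "deadline": date(2023, 10, 20), "importance": 5, "effort_hours": 6},
    {"name": "Inbox cleanup",    "deadline": date(2023, 11, 15), "importance": 2, "effort_hours": 1},
    {"name": "Client proposal",  "deadline": date(2023, 10, 18), "importance": 4, "effort_hours": 3},
]

def priority_score(task, today=date(2023, 10, 16)):
    """Higher score = do it sooner. Urgency and importance raise the
    score; large effort lowers it slightly, so quick wins float up."""
    days_left = max((task["deadline"] - today).days, 1)
    urgency = 10 / days_left
    return urgency + 2 * task["importance"] - 0.5 * task["effort_hours"]

ordered = sorted(tasks, key=priority_score, reverse=True)
for t in ordered:
    print(t["name"])
# Client proposal, then Quarterly report, then Inbox cleanup
```

A prompt that supplies the same attributes lets the model do this ranking for you; the point is only that an "optimized task order" rests on simple, inspectable factors.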
29. AI-Powered Professional Development and Lifelong Learning: Welcome to Module 5.5. This time we will explore how AI-powered personalized
learning paths can help you continually grow your
professional skills and stay updated with
industry trends. We'll dive into how AI can assist you in
identifying skill gaps, crafting personalized
learning plans, and discovering
relevant content. Let's start by discussing how AI driven
assessments can help you identify your skill gaps
and learning opportunities. AI engines like GPT-4 can analyze your professional background, accomplishments, and goals to provide a tailored evaluation of your strengths and weaknesses. For example, you can input a prompt like this. Then, the engine will provide you with a list of areas to work on, helping you focus your learning efforts. Once you've identified
your skill gaps, AI can assist you in creating personalized
learning plans. It can create a list of relevant resources
such as articles, courses, or webinars to
help you learn and grow. For instance, you could
provide the following prompt. The AI can generate a tailored learning plan, which includes step-by-step resources, timelines, and milestones
to track your progress. It's all about using
your imagination. Be creative. In today's fast-paced world, staying updated with
industry trends and insights is crucial. AI-powered content
discovery tools can help you stay informed by
filtering through vast amounts of
information and delivering content that's relevant to your interests and
professional goals. You could input a prompt like and the system will provide you with
a list of articles or resources that are highly
relevant for your query. Keep in mind though, as
we discussed earlier, you may not get the
most up-to-date data depending on the training
input for the model. So take that into account. In conclusion, AI-powered personalized learning paths can be an invaluable tool for professional development and lifelong learning. By using AI engines like GPT-4, you can identify your skill gaps, create tailored learning plans,
and stay informed about the latest industry
trends as you continue to grow and
evolve in your career. Remember that AI is here to support you in
achieving your goals, enhancing your productivity,
nurturing your creativity. To recap, we've covered
how AI can assist you in identifying skill gaps and learning opportunities through
AI driven assessments. Crafting personalized
learning plans with AI curated resources. And also staying updated with industry trends and insights through AI powered
content discovery. In a nutshell, embrace
this superpower to support your professional development and lifelong learning journey. And watch as your skills and knowledge
continue to flourish. See you in the next module.
30. Ensuring Fairness and Reducing Bias: Hello and welcome to
module 6.1 where we'll discuss ensuring fairness and reducing bias in
prompt engineering. Today we'll learn how to create inclusive prompts and consider the impact of our language
models on a diverse audience. First, let's talk about the sources of bias
in AI systems. Artificial intelligence
models like the ones we use in
prompt engineering, are trained on vast amounts
of data from the Internet. As a result, they
might learn and reproduce the biases present in this data. It is crucial for us to recognize and address these biases to ensure that our AI
generated outputs are fair and inclusive. There are various types of
biases that can emerge in AI systems such as gender bias, racial or ethnic bias, socioeconomic bias,
or geographical bias. We should be mindful of
these biases when crafting prompts and evaluating
AI generated outputs. Here are some strategies that we can use to reduce
bias in our prompts. Use inclusive language. Avoid gender-specific
pronouns and other exclusionary terms. For example, instead of asking the AI to generate a list of famous
male scientists, you could ask it to generate
a list of famous scientists. Balanced examples. That's another rule. When providing examples
in your prompts, make sure they represent a diverse range of
individuals and perspectives. Also, test with diverse Inputs. Test your prompts with
a variety of inputs to evaluate their fairness
and inclusivity. After crafting your prompts, it's important to
evaluate the AI generated outputs for
potential biases. Here are some steps
that you can follow. Review the content,
go through the AI generated outputs and check for any bias or
exclusionary language. If you find any issues,
revise your prompt, or adjust the AI
model's parameters to obtain more
inclusive outputs. Ask for feedback. Share the AI generated outputs with a diverse
group of people and ask for their opinions on potential biases or
problematic content. Their perspectives can help you identify and address issues
that you might have missed. And lastly, iterate and improve, continuously refine
your prompts and AI model parameters based on the feedback and
insights together, remember that prompt engineering is an iterative process and it's essential to learn from your mistakes and make
improvements over time. As we work with
prompt engineering, we must also consider the ethical implications of
our AI generated outputs. Here are a few
questions to ponder. Are we unintentionally
reinforcing stereotypes or harmful biases? Are we being fair and respectful to all individuals
and communities? Are we creating
content that could be harmful or offensive
or divisive? Asking these questions
can help us create more responsible and
ethical AI applications. In module 6.1, we discussed the importance of ensuring fairness and reducing bias in prompt engineering. We explored strategies for crafting inclusive prompts, evaluating AI generated outputs, and incorporating ethical considerations in our work. By being mindful of these factors and continuously
refining our prompts, we can create AI
applications that empower and serve
diverse audiences.
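To close this module with something concrete: the "use inclusive language" and "review the content" steps described above can even be partially automated before a prompt is sent. Here's a deliberately simplistic toy sketch (a plain word-list check, not a real bias-detection tool; the term list is a made-up example), assuming you just want to flag obvious gender-specific wording:

```python
# Toy check: flag gender-specific terms so a prompt can be rephrased
# more inclusively (e.g. "famous male scientists" -> "famous scientists").
# A real review needs human judgment; this only catches obvious cases.
EXCLUSIONARY_TERMS = {"male", "female", "he", "she", "his", "her", "mankind"}

def flag_terms(prompt):
    # Normalize each word (strip punctuation, lowercase) and collect hits.
    words = [w.strip(".,!?").lower() for w in prompt.split()]
    return sorted({w for w in words if w in EXCLUSIONARY_TERMS})

print(flag_terms("Generate a list of famous male scientists"))  # ['male']
print(flag_terms("Generate a list of famous scientists"))       # []
```

Anything the check flags is simply a cue to pause and rephrase; the iterative review loop described in the module still does the real work.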
31. Responsible AI and the Future of Work: Welcome to module 6.3, where we will discuss
responsible AI and its implications
for the future of work. As we embrace AI technologies, it's crucial to
ensure that they are used ethically and responsibly. Let's explore how we
can achieve this. The first step towards
responsible AI is to establish guiding principles that shape
AI development and use. Here are some key principles. Human-centered, ensuring AI serves
human needs and values. Transparency, making AI systems and their decision-making
understandable. Accountability, assigning
responsibility for AI systems' behavior and outcomes. Fairness, reducing biases and ensuring equal treatment of users. And, not least, security: protecting AI systems and user data from unauthorized
access and use. AI should be used to complement human work rather
than replace it. For instance, in
prompt engineering, AI can help you generate
creative ideas while you as a human can refine
and finalize the output. This way, AI can
enhance our skills and help us become more efficient
and effective in our jobs. Let's look at an example prompt that demonstrates
this collaboration. The AI system can provide various suggestions and you can select the most relevant
and appealing options, customize them to
your team's needs and plan the activities
accordingly. As AI technologies evolve, it is essential to
stay up to date with the latest advancements
and best practices. This will help you harness
the full potential of AI and ensure responsible use in your personal and
professional life. It can be a very powerful
tool for collaboration, enabling us to
work together with our digital partners in
a variety of domains. By fostering a
co-operative relationship between humans and AI systems, we can tackle complex problems
and drive innovation. In conclusion, responsible AI is about using AI technologies ethically, ensuring that they complement human work, and promoting collaboration. By adhering to guiding principles and continuously learning and adapting, we can shape the
future of work in a way that is beneficial
for everyone. This brings us to the
end of module six. Thank you for joining me today and see you on the next section where we will
conclude this course and summarize key takeaways.
32. An Introduction to the ChatGPT Plugins: Hello everybody and welcome to a brand new section
of this course. In this chapter, we are going to address the most
recent functionalities introduced in ChatGPT and how those can be helpful in different
scenarios and use cases. If you've been using the ChatGPT interface in the
last few months, you've probably noticed
a new feature that was recently made
available: the plugins. What exactly is a ChatGPT plugin? Think of it as an extension to the core functionality of the AI model. To make a very broad analogy, it's like running an application on top of an operating system. These plugins enhance the functionality and user experience by allowing ChatGPT to interact with third
party applications. This gives the AI model the ability to
access the Internet, to retrieve real
time information such as the latest
news or stock data. And also to interact
with live systems in order to execute tasks
or initiate transactions. Okay, but how do they work? There's a lot of technical
aspects to discuss regarding the inner workings and
architecture of these extensions, but since this course was
never meant to be technical, I'm going to skip that part. If you're interested in the topic, you can find plenty of technical documentation on OpenAI's website. Instead, I will guide you on how to activate these plugins, explain what they can and cannot do, and how to use them
in real life scenarios. Oh, by the way, a
short disclaimer. First, this section is not necessarily related to
prompt engineering, but it's rather meant to
be an introduction to the ChatGPT plugins and the other advanced features. That being said, every prompt engineering technique that we've discussed in the previous sections still applies, because even when using these plugins, you'll still be interacting with the model by using prompts. Also, you should know that the plugins are only available to paying OpenAI customers with a ChatGPT Plus subscription. And as of September 2023, they show up as a beta feature, which means they are not yet available as an official
production release. The reason they are already open to the public
has to do with OpenAI's incremental
approach in launching new features and their efforts to ensure AI safety
and alignment. Keep in mind that most
of these plugins are developed by third parties
and not by OpenAI. They are, however, strictly verified by OpenAI through a review process to make sure that they are
safe to be published. Even so, you should be mindful of what kind of information
you share with them, because that information
may end up being processed and potentially stored outside of OpenAI's environment. You shouldn't miss out on using these plugins, because they can substantially enhance the core functionality of ChatGPT, as you'll see
in the next modules. But first, let's see how we can activate and
use these plugins. The process is pretty much straightforward. Once you've accessed the ChatGPT interface, you get the option to choose between GPT-3.5, which is the older model with a bit more limited capabilities but also very fast, and GPT-4, which is the latest and the most capable model, although a bit slower than GPT-3.5. You'll notice that the previous model has no additional option for enabling any plugins. That is because the plugins are only supported by the latest model, which is GPT-4. All you have to do now is select the GPT-4 model and then click on the Plugins option. Scroll down to the plugin store and install the plugins you want. After the installation completes, the new plugins will show up
whenever you need. Just to make it clear, the
plugins are installed on your open AI account and not
locally on your computer. Once you've installed
some plugins, you'll be able to
enable a number of maximum three from the list
of all available ones. The plugins you enable will then be active for this
particular session and the AI model will
be able to interact with them when required
by your prompts. There are currently almost
1,000 available plugins, with many of them having
overlapping functionality, and some even requiring that you authenticate on other websites to access that functionality. Some of them are free to use, while others may
require a subscription and all that stuff can make
things a bit confusing. It's not my goal to
provide you with a detailed description of
every available plug in. Instead, in the next modules, I will give you some examples of useful plug ins in
real life scenarios. You'll get an understanding of how they work and you'll be able to explore and discover
new ideas along the way. Therefore, see you
in the next module.
33. Deep Dive into the ChatGPT Plugins: Ready to find out more about the ChatGPT plugins and how you can use them to enhance your productivity in
day-to-day activities. In this module, I'm going to teach you how to
use some popular, but also very practical
plugins available right now. Using them will definitely
make your work easier and improve the already
great experience provided by Chat GPT. As a reminder, there are currently almost 1,000
plugins available. So I'm not going to discuss about all of
them in this course. I'll show you a few plug ins
explaining how to use them. And from there you can explore on your own and
experiment with others. Because once you understand how they work and
how to use them, it becomes a very
natural process. Let's begin with Wolfram. Wolfram Alpha is a
computational knowledge engine developed by Wolfram Research. It's basically an online
service which answers factual queries by computing the answer from structured data, like a big database. In general, Wolfram is optimized to give
concise factual answers rather than long articles
or web search links. The goal is to directly
answer the user's query. The Wolfram plugin is just a clever way of integrating GPT-4 with Wolfram's knowledge. To use this particular plugin in ChatGPT, all you have to do is first install it and then activate it. Also, to make sure that the AI model actually uses the Wolfram plugin, just add that in your
prompt like this. Let me show you some key
features of this plugin. It can provide real-time
data based on your queries. That is possible because
unlike ChatGPT, some of Wolfram's knowledge is actually updated in real time. Well, that used to be the case, at least before October 2023. At the moment, OpenAI is gradually launching a new feature in ChatGPT, which will enable the engine
to browse the web for the latest information
without any kind of plugin. But still, you could already do that using the Wolfram plugin, for example. You could type something like this, and it will tell you the
current weather in London. It has a vast collection
of curated datasets and can do computations on
them to generate answers. This includes data
on weather, finance, geography, science,
nutrition, and others. It can generate
visualizations like graphs, charts, and maps to present answers in a visual way. For example, you
could ask GPT to. It also has verified knowledge of real
world data and facts. That makes it a very
useful tool whenever you need to fact check an
answer provided by ChatGPT or any other
large language model or any kind of
information whatsoever. Because, remember hallucinations: you can never completely trust the answer provided by an AI model, especially when logic, math, or factual events or entities are involved. But as it turns out, you can use Wolfram to verify any questionable answer. For example, probably the most important
feature is that it can perform complex mathematical
and scientific calculations. You can type in a
function or query like this, and it will show the graph and compute the result. As you can see, sometimes it may return a
very long answer, but you can obviously tweak
that with the right prompt. In summary, the Wolfram plugin can help you validate any factual information whenever precision is important. It can also compute answers to factual queries using real-time, up-to-date information. It's using its own curated data rather than crawling the web or predicting the
most probable answer like a large language model. Okay, let's talk about another very useful plugin, which is WebPilot. This plugin gracefully solves one of the main problems of a pre-trained model like ChatGPT: the inability to browse the Internet, and the lack of up-to-date, real-time information
about the world. I know OpenAI introduced this as a core feature in September, but still, in my opinion, WebPilot provides
better results. WebPilot can be used to extract data from any link
provided by the user and feed this data as
an input to ChatGPT. For example, you could use it like this, although with some limitations. Sometimes WebPilot is able to extract data from links even if the data isn't stored in classic HTML format; it might be able to read most PDFs, docs, or Excel documents. You just have to provide a proper link to the document. Here's a pro tip: many of the currently
available ChatGPT plugins don't really have technical documentation or a description of their
potential use cases. Their name is also not always representative for their
actual functionality. This can create confusion
for the user, not knowing exactly what the plugin does or can be used for. There is an easy solution for that, actually. You just have to activate a specific plugin in your GPT-4 chat session and use the following prompt to get a more detailed description. The third most useful plugin, in my opinion, is Wikipedia. It can answer general
knowledge questions or supplement the
knowledge of ChatGPT. For example, if you
have a question about a historical event, a
famous personality, a scientific concept, or any other general
knowledge topic, the plug in can search Wikipedia for relevant information. It can also provide
information on current events or offer
insights on breaking news. If something
significant has just happened and it's
documented on Wikipedia, the plugin can be used to
retrieve that information. When using the
Wikipedia plug in, it passes your prompt
as the query to search Wikipedia and then shares
the results with you. Apart from these three plug ins, there are many others you can explore on the plug in store. I'm just going to give
you some suggestions. For example, Diagrams: Show Me. This is one of the best ways to visualize any kind of data. It can create a lot of diagram
types including graphs, sequence diagrams, entity
relationships diagrams, user journey maps, Gantt
charts, and many more. It's also interactive, which means that once it
generates a diagram, you can get a link
to edit the diagram online if you wish to
make any modifications. AI PDF is another interesting one. This plugin is designed to help you extract and
understand information from PDF documents
without having to manually sift through
the entire document. It can either summarize
the contents or allow you to ask questions
about your PDF. The Earth Plug In offers several functionalities
related to generating map images and obtaining coordinates
of locations. Need help with a
foreign language? You can use the Speak plugin. If you need to know if there's an AI-based solution for a particular problem, there's even a plugin for that. It's actually called "There's an AI for That." You explain your scenario
and the plugin will recommend potential AI
tools that you could use. That's just the tip
of the iceberg. Have fun exploring
other plugins. You know the drill. Now,
see you in the next module.
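Before we move on, a peek behind the curtain of retrieval plugins like the Wikipedia one described above: essentially, they turn your prompt into a search query against a web API and feed the results back to the model. The sketch below only builds such a request URL with the standard library; the MediaWiki search endpoint it targets is real, but the wrapper function is a hypothetical illustration, and no actual plugin or network call is involved:

```python
from urllib.parse import urlencode

def wikipedia_search_url(query, limit=3):
    """Build a MediaWiki full-text search request: the kind of call a
    retrieval plugin issues before handing the results to the model."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srlimit": limit,
        "format": "json",
    }
    return "https://en.wikipedia.org/w/api.php?" + urlencode(params)

print(wikipedia_search_url("prompt engineering"))
# https://en.wikipedia.org/w/api.php?action=query&list=search&srsearch=prompt+engineering&srlimit=3&format=json
```

The plugin then summarizes whatever that request returns, which is why its answers are only as fresh and as accurate as the page it found.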
34. The Code Interpreter Plugin: Hi everyone. This entire
section is dedicated to exploring some of the
most useful plugins available to use in ChatGPT. This module in particular focuses on advanced
data analysis, which is a very
special kind of plugin, one that's been developed by OpenAI themselves, and has a different and much
more complex behavior than all the other
plug ins out there. You may already be
familiar with it under a different name,
the Code Interpreter. It's been recently rebranded by OpenAI to provide more
clarity on its purpose. As you're already well aware, a large language
model is nothing more than just a very complex
prediction machine. It basically starts from the user's prompt and builds
a response by establishing the most probable combination of words based on the
patterns it has seen during its training
on vast amounts of text. Sometimes that works like a charm, magically producing very
convincing and accurate outputs. However, as we discussed
in our previous lectures, in some cases the AI model may struggle in returning
accurate results. I'm talking about the so-called hallucinations and troubles with
mathematical computation or logical reasoning. It turns out that word
or token prediction alone is not enough to solve every real life
problem out there. This is where the special
plugin comes into play. If I may use an
oversimplified analogy, think about the two hemispheres
of the human brain. One hemisphere is logical
while the other is creative. You can think of the
code interpreter as the more logical part of
ChatGPT's architecture. As suggested by its name, this plugin has the ability to run code and return very precise algorithmic results whenever the model determines that such a method is more appropriate. With this plugin activated, ChatGPT will analyze your
prompt and decide whether the task that has
to be solved can be solved creatively by
the AI engine alone. Or whether it should be
solved by writing and executing a program in
an algorithmic fashion. Sometimes it may
employ both methods. For example, it may run
a simple program to get very specific results
and then combine these results with some
creative AI ingredients. Probably the best part, it enhances the way you interact with the A model
by allowing you to upload files and download the resulting output.
Which is great. You can use it for analyzing
specific documents, uploading or generating
Excel or Word files, images, and many more. Okay, activating the plug in is very much straightforward. After initiating a
new chat session, in the chat GPT interface, you have to select the GPT
four model and then you'll have the option to enable the advanced data
analysis plug in. Keep in mind that
the plugin will only be active for the
current session and obviously it will
have no impact on other chat windows
you may have opened. Once you provide a prompt, the AI will determine
whether it needs to call the plugin depending on the
task that you have provided. If the engine determines that the plugin is required, it will automatically launch it and use its output in solving the task. You can wait for the task to be completed in the background, or you can click on the Show Work button and see the actual code it builds in real time. Here's a pro tip: if you just need a
task to be completed without ChatGPT adding
any additional comments, you can append this at
the end of your prompt. Sometimes the
plugin will provide a visual answer that will be displayed on the
ChatGPT interface. Sometimes it may
provide you with a link to download
the resulting file. If you prefer to download
the resulting output, you can also specify
that in your prompt, together with a file
format you prefer. Now it's time to also learn about the limitations
that unfortunately come with using this plug in because there are a few
limitations to consider. First of all, it only generates code written
in the Python language. And while it has several hundred Python packages pre installed, it cannot by default install any external
Python packages. Any limitations of this language and the existing
packages which are extensions to the core
Python functionality will also limit the
capabilities of the plugin. It does not have Internet access. Unfortunately, you cannot use it in combination with Internet-capable plugins such as WebPilot, for example. It also builds a
temporary environment that is deleted at the
end of the chat session. Any files that you may have
generated will be forever lost if you haven't yet downloaded them to your computer. And yes, while it allows you to upload documents, you can only upload files of a certain maximum size. Okay, this module was meant to be just a quick
introduction into the purpose and capabilities of the advanced data
analysis plugin. See you in the next module for some real-life use cases
for this technology.
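To make the "logical hemisphere" idea from this module concrete before the practical examples: when a prompt involves precise arithmetic, the code the plugin writes and runs is ordinary Python, which computes the answer rather than predicting it token by token. Here's a trivial sketch of the kind of program it might generate for a compound-interest question (the figures are made up purely for illustration):

```python
from decimal import Decimal

# Exact computation, the kind of thing pure token prediction gets wrong:
# 1,000 at 5% annual interest, compounded yearly for 10 years.
principal = Decimal("1000")
rate = Decimal("0.05")
years = 10

amount = principal * (1 + rate) ** years
print(round(amount, 2))  # prints 1628.89
```

Using Decimal instead of floats keeps the arithmetic exact, which is precisely the kind of guarantee a language model alone cannot give you.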
35. The Code Interpreter Plugin Part 2: Hello and welcome back. As promised, it's time to explore
it's time to explore some practical scenarios where the advanced data analysis
plugin may be useful. In this module we'll
cover topics such as optical character recognition,
image manipulation, conversions from
different formats, generating presentations,
data extractions, and other interesting scenarios. Do you wonder what kind of magic this plug
in is capable of? Let me show you some
of the best use cases. First, optical character
recognition using Tesseract. The Tesseract library is a great tool that can recognize and convert typed or handwritten text in images into machine-readable text. In other words, if
you have a picture of a page from a book or
a handwritten note, tesseract can read the text in that picture and give it to you in a format
that you can edit. This process is also known as optical character
recognition, or OCR. Here's an example: upload your image file and use this prompt. If you want, you can also ask the advanced
data analysis plug in to reformat the
extracted text in a more convenient way. Extracting tabular data from PDFs using Camelot. Have you ever tried
to copy-paste the data from a table in a PDF to an Excel file? You've probably noticed that it's a hit-and-miss thing. Most of the time it doesn't work as it should. It turns out there's a
Python library for that, which can be used in the
advanced data analysis plugin. Here's the prompt you should use after you've uploaded the PDF. And here's the result. Converting PDFs to editable Word documents. Talking about PDFs, sometimes we might want to edit the contents of a PDF document, but that's impossible unless you purchase a license for the Adobe Acrobat
software or use some conversion services
available on the Internet. Those services, though, are either limited in functionality or require a fee for the unrestricted version. That's not a problem
anymore with the advanced data
analysis plugin. If the PDF already
contains selectable text, you could use the
following prompt. Or use a prompt like this one in case your PDF is just
a scanned document. As you have already noticed, sometimes the plugin may encounter different errors returned by the code it generates. The nice part is that it's able to understand the errors and make the necessary corrections to make it work. It's persistent: if the code it generated the first time does not work, it's going to keep trying different approaches until
it gets the right result. How to create PowerPoint presentations? Here's another interesting example. Are you still using the good old PowerPoint for your presentations? I do. You can use the advanced data analysis plugin to create PowerPoint presentations. Check out this prompt. That's just a basic
scenario with a bit of creativity in your prompts and maybe uploading your own data, you can create great
presentations in no time. I think that's enough
for one module. If you want to see more, check out the next video.
See you there.
36. The Code Interpreter Part 3: Welcome back. In this module, we'll explore even more cool things you can do with the advanced data analysis plugin. Let's jump right in. Face extraction from photos. The Open Source Computer Vision library, OpenCV in short, is one handy Python
library which enables real time image
and video processing, object detection,
facial recognition, and other computer
vision applications. Let's see what it can do in the advanced data
analysis plugin. Have you ever wanted to extract individual faces
from a group photo? Simply upload your group
photo and use this prompt. The plugin will automatically detect the faces and crop them out, and you could also have the plugin save them for you into individual files if you want. Creating animated GIFs from videos. We live in the age of memes. Ever wondered how you can take
a memorable sequence of your favorite movie
and turn it into an animated GIF that you
can easily post online. Here's an idea to
accomplish that. Upload your video sequence
and use this prompt. The plugin handles the video processing and conversion for you. If you want, you could try asking the plugin to add text or any additional effects. Just be creative with your prompt. Advanced data visualizations. The Bokeh library in Python allows you to
create interactive and visually appealing
data visualizations for modern web browsers. I know we talked
already about creating diagrams and different
data visualizations, but what if you could actually interact
with your diagrams? Just upload some data and
try a prompt like this one. It will generate
the plot code and even host it for you
to view it online. But you can also download
the diagram using the link and share it or
host it wherever you prefer. XKCD-style diagrams. Want to convey a concept with a touch of humor and simplicity? Standard diagrams are no fun, but if you want something more freehand, you can have it draw diagrams in the stylistic hand-drawn look of the XKCD comics. Yeah, there's a library for that too. Here's an example prompt. Visualizing GPX courses. How about planning your
next outdoor adventure? If you're passionate
about nature and outdoor
activities like I am, you probably already know how useful a GPS track can be. Sometimes it doesn't only
prevent you from getting lost, but it also allows
you to monitor your progress during
the activity. Unfortunately, GPS data is hard to read, for a human at least, but not for an AI model. Coupled with a very smart plugin, here's a way you can visualize courses and elevation profiles. The plugin will crunch the numbers and generate the visualization automatically. Creating YouTube thumbnails. You can even create
Youtube video thumbnails. Well, probably not the
best looking thumbnails, but it's a quick alternative. In place of more advanced
tools, here's the prompt. It will mix together
the image and text into a properly sized and
formatted thumbnail for you. Obviously, you could tweak the prompt to achieve better results. EPUB to DOCX conversions: finally, converting
books is a breeze. Simply use a prompt
like this one. The plug in handles each step automatically to give you a
clean Word document output. As you can see,
the possibilities are endless with this plugin. I hope these examples have
inspired you to explore and get more creative in using
Chat GPT and its plug ins. The Advanced Data Analysis feature uses Python under the hood. If you want to have
more control on its outputs or achieve
even better results, I'd suggest you learn a
few things about Python, thus making your first steps into the world of
computer programming. Trust me, no matter what your current job is right now, it's definitely
worth the effort. Okay, up next we'll talk
about another new feature which can be really
useful in Chat GPT, the custom instructions. See you in the next module.
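Before moving on, one concrete illustration of the "Python under the hood" point: the code a tool like Advanced Data Analysis writes for the GPX example is ordinary Python. Here's a minimal sketch using only the standard library; the inline GPX track is an invented example, not data from the course:

```python
# Parse GPX track points with the standard library and compute an
# elevation profile, the same kind of number-crunching the plug in does.
import xml.etree.ElementTree as ET

# A tiny inline GPX document standing in for a real recorded track.
GPX = """<?xml version="1.0"?>
<gpx xmlns="http://www.topografix.com/GPX/1/1" version="1.1">
  <trk><trkseg>
    <trkpt lat="45.0" lon="7.0"><ele>320.0</ele></trkpt>
    <trkpt lat="45.001" lon="7.001"><ele>335.5</ele></trkpt>
    <trkpt lat="45.002" lon="7.002"><ele>330.0</ele></trkpt>
    <trkpt lat="45.003" lon="7.003"><ele>352.0</ele></trkpt>
  </trkseg></trk>
</gpx>"""

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

def elevation_profile(gpx_text):
    """Return the list of elevations (metres) along the track."""
    root = ET.fromstring(gpx_text)
    return [float(pt.findtext("gpx:ele", namespaces=NS))
            for pt in root.iterfind(".//gpx:trkpt", NS)]

def total_ascent(elevations):
    """Sum only the uphill differences between consecutive points."""
    return sum(max(b - a, 0.0) for a, b in zip(elevations, elevations[1:]))

profile = elevation_profile(GPX)
print(profile)                # [320.0, 335.5, 330.0, 352.0]
print(total_ascent(profile))  # 37.5 (15.5 m + 22.0 m of climbing)
```

From there, plotting the profile is one more step with any charting library, which is exactly what the plug in automates for you.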
37. The Custom Instructions Feature in ChatGPT: Hello again and welcome to
another exciting module. Let's talk about another
brilliant feature of Chat GPT, the custom
instructions. Although introduced
only recently, this feature is already proving to be really useful for those who want tailored
responses to their prompts. In this module, we'll uncover
ways to use it in order to command GPT to produce
content in a specific manner, create structure, guide its thinking process
and much more. First of all, let's have
a quick look at how we can enable this feature
in the Chat GPT interface. The option is actually
located in the main menu. In the first box, you can
provide the model with some details about you
such as your location, line of work, age, or any personal preferences. By doing that,
you're encouraging Chat GPT to customize
its answers for you. But I personally
wouldn't really use that one because in practice I would need
to change it too often. The thing is, you can only have one saved profile at a time, which can be pretty
inconvenient if you're moving back and forth
between different personas. The second box, though,
has a lot of potential. It basically allows us to fine tune the responses
we get from the model. I'm going to show you
some clever ways in which this feature can
be used to improve our experience when
using Chat GPT. For example, you can limit the length of the response
to a certain size. Sometimes the answers
produced by Chat GPT are extremely long and contain a lot of
redundant information. There's an easy way
to correct that in a semi-permanent fashion by adding a custom instruction like
this one and making sure you enable the instruction
for all the future chats. From now on, you'll receive
more concise and to-the-point answers. Using a certain answer style: let's say we work on
a bigger project that requires a longer
conversation with chat GPT, maybe even using
multiple chat sessions. But we also want to maintain the same consistent
response style throughout our entire
interaction with the model. There's an easy
solution for that. Avoiding certain words: generating text with an AI model is quick and very convenient. But I think you've
noticed already that all AI models tend to repeat certain words like they're
obsessed with them. Unfortunately, that's
a side effect of the training data used in
their learning process. The good news is that we can ask the model to
avoid certain words, thus making the output sound
more natural and less AI-generated. Encouraging interactivity by asking clarifying questions: we talked about AI-assisted questioning in one
of the previous modules. Sometimes this method
can be used to explore different
perspectives of an issue and enable the model to gather the necessary information to better understand
the problem at hand. And we can add custom
instructions to accomplish that in every conversation where Chat GPT considers that a clarifying question might be useful. Obtaining a certain number of suggestions instead of a single answer: another useful instruction that we could use can make Chat GPT provide multiple suggestions
for certain queries. Here's an example; of course, you can customize it as you like. Obtaining code-only answers, avoiding lengthy observations: everyone who's used GPT four as a coding assistant knows
already that the answer it provides is not always
to the point and sometimes includes a lot of useless explanations
or too many comments. A quick and permanent solution would be a custom instruction like this one. How to avoid hallucinations: as we discussed in the previous modules, sometimes the outputs
produced by AI models like GPT four, may
be unreliable. This has to do with the way large language models
work by design, and the quality of
their learning process. There are a number of techniques
that we can use in order to increase the reliability
of a model's response. But if you're looking
for a quick improvement, here's the way you could
use custom instructions. This one is my favorite; I'm actually using this instruction all the time: define shortcuts in the prompt that trigger specific functions. To be honest, this is by far the most practical way of
using custom instructions. Actually, all the
customization methods that I've shown you
so far are great, but they all have a
major inconvenience. Once enabled, they will affect all future responses and they will limit your entire
user experience, and that's not ideal. A much better approach,
in my opinion, is to define shortcuts
that you can actually use when and only when you really need to
customize the response. Here's an example of how you can accomplish that. From now on, in my
future chat sessions, I'll be able to
activate a shortcut just by adding a certain
parameter in my prompt. Of course, you could also use shortcuts in the
first box to define different personas
that you can later activate with a simple
parameter in your prompt. One thing, though, about using custom instructions is that you need to remember you're actually using them; otherwise, especially if you customize too much, you'll end up with a completely different user experience. That's why I strongly advise you to keep your customizations to a minimum and always get a feel for your preferences first, so that you know exactly what to change and how. Keep in mind that the length of the text you enter as
a custom instruction, or the total amount of custom instructions you have, will decrease your
actual context window. The more instructions you add, the less memory will
be available for the entire conversation.
Try to keep it short. Always add customizations in small increments and make
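To put a rough number on that overhead, here's a quick back-of-the-envelope sketch. The four-characters-per-token figure is only a common rule of thumb for English text, not an exact value; the real count depends on the model's tokenizer:

```python
# Rough rule of thumb: about 4 characters per token for English text.
# (This is an approximation; the real figure depends on the tokenizer.)
CHARS_PER_TOKEN = 4

def approx_tokens(text):
    """Very rough estimate of how many tokens a custom instruction costs."""
    return len(text) // CHARS_PER_TOKEN

# A short hypothetical custom instruction, invented for illustration.
instruction = "Keep every answer under 150 words and avoid filler phrases."
overhead = approx_tokens(instruction)
print(overhead)  # 14 -- tokens spent on every single turn of the chat
```

Multiply that by every exchange in a long session and it's easy to see why shorter instructions leave more of the context window for the conversation itself.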
sure they do not overlap, as this may lead to unexpected behavior. If you're in a chat session and want to have a quick view of the custom instructions without using the menus, you can hover
your mouse over the information sign at
the top of the screen. It's incredible to think of the many applications
for this feature. Remember, the more specific and clear your instruction is, the better tailored the
response from Chat GPT will be. There you have it,
a brief overview of the custom instruction
feature of Chat GPT, but trust me, we've only
just scratched the surface. Experiment with
your own ideas and discover the vast number
of possibilities.
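To make the shortcut idea from this module concrete, here's one hypothetical custom instruction of that kind. The shortcut names and wording are my own invention, not taken from the course, so adapt them to your taste:

```
When my prompt ends with --short, keep your answer under three sentences.
When my prompt ends with --code, reply with code only, no explanations.
When my prompt ends with --5, give me five alternative suggestions instead of one.
Otherwise, answer normally.
```

The `--` prefix is just a convention borrowed from command-line flags; any unambiguous marker works as a trigger.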
38. Course Conclusion and Key Takeaways: Hello everyone. As we've now reached the final
module of our course, I'd like to take a
moment to recap some of the most important
things we've learned together and share
some key takeaways. Throughout our journey, we've explored the world of
prompt engineering, starting with the
basics and moving on to more advanced techniques
and best practices. We've seen how
prompt engineering can help us in our
everyday tasks, in job roles impacted by AI, and even in our personal and
professional development. We've also gone through some simple but
useful case studies that show how regular, non technical users can harness the power of AI
engines like GPT four, to elevate their
professional skills and increase productivity while
still preserving creativity. We talked about Chat GPT
and GPT four in particular. But remember, these
concepts can be applied when interacting
with any AI model. Let's have a look at some of the key takeaways
from our course. Prompt engineering is a
powerful tool that allows us to interact with AI
models more effectively. Simple prompts can
be transformed into more sophisticated ones to
achieve better results. We can use prompt engineering for better productivity
and inspiration. Ethical considerations,
though, are crucial when working with AI power
tools including fairness, reducing bias, and ensuring
privacy and data security. As we wrap up our course, I want you to know that this
is just the beginning of your journey in the world of AI and prompt
engineering in general. I hope the concepts
and techniques we've discussed will inspire
you to continue learning, experimenting, and
discovering new ways to leverage AI in your personal
and professional life. Thank you for joining me in this exploration of
prompt engineering. I'm truly grateful to have shared this experience with you and I wish you all the
best in your future endeavors. Never stop learning.
And always remember that the power of AI
when used wisely, can be an incredible asset to help you unlock your
full potential.