Transcripts
1. ChatGPT for Beginners | What to Expect from this ChatGPT Tutorial: Getting started with ChatGPT through trial and error can take a lot of time. And with all these new updates and tools coming from all over the place, you need a solid foundation to stay sharp, and a lot of patience. Hi, I'm Leo, and welcome to my course on getting started with ChatGPT for practical everyday use. With over a decade of experience in marketing, and now being a full-time product manager for AI R&D products, I'm here to make complex concepts simple and actionable, without having you spend 40 hours watching repetitive videos. In about two to three hours (yes, just about the length of your Netflix night), you'll go from beginner to a confident ChatGPT user. Now, how do we do it this fast? Well, we don't spend 10 minutes talking about creating an account for each tool mentioned. Instead, we go over a quick, structured introduction and move on to building the skills that you can actually use with confidence day to day. Here is a list of what we'll cover. Discovering ChatGPT, its limitations, alternatives, and core features. We'll discover actionable prompting techniques for everyday use. We'll learn to handle inconsistencies, hallucinations, and plagiarism. We'll learn to create and use custom GPTs. We'll also learn about modalities and tools beyond ChatGPT. This course is perfect for beginners who want to integrate ChatGPT into their daily work without feeling overwhelmed. It's suited for all ages. Whether you're a marketer, a student, or just a professional looking to expand your skills and get more confident and fluent with ChatGPT, this course has got you covered. So thank you for considering my course. I invite you to have a look at the course contents and preview the lectures that are available for free preview. If you're ready, enroll now, don't wait. Anyway, I can't wait to see you there.
2. Choosing a Large Language Model: ChatGPT vs. Gemini vs. Claude: Competition in the AI world is super aggressive. ChatGPT by OpenAI made a huge breakthrough, making generative AI available to the masses. But now there are some decent alternatives, and the choice isn't that obvious. You will see a few lectures that use GPT-3.5, as this was the best option at the moment of recording. But after the release of GPT-4, I re-recorded all of the lectures using the most recent model. I know that many of you still use the free version of ChatGPT, the 3.5 model. And now, since the competition grew so much, I would actually recommend something else, something much better than the free version of ChatGPT. So let's first talk about the free versions. The free version of ChatGPT has only one advantage for me, and this is its ability to apply custom instructions. Basically, this enables you to enter a prompt that will describe the context about you and about how you want ChatGPT to respond. My recommendation so far is to stop using the free ChatGPT, and to either use a ChatGPT Plus account with all of its most recent benefits and features, or try one of the following alternatives.

Let's start with the first one: Google Gemini. Google Gemini is often overlooked by many professionals. And in my opinion, it actually has a few amazing advantages up its sleeve. Its number one advantage is its integration with the Google ecosystem. Gmail, YouTube, and Google Pixel phones have tons of features and benefits that you can use even in the free version. This ability has been rolled out and shut down a couple of times here and there, but in the end, I believe it's going to be the long-term advantage of Google. The other advantage of Google Gemini is that you can check its responses with Google Search. Gemini will break down the response into pieces and separate searches, and find search results that can confirm the chat response. I also kind of like the user interface more than ChatGPT's. In the free version, you can also upload an image and ask it to create a text based on this image, which is kind of cool as well. The quality of responses in the free version, in my opinion, is also a bit better than in GPT-3.5.

Let's talk about the next one: Anthropic's Claude. I love Anthropic's Claude for many, many reasons. The free version sounds much more natural than GPT-3.5. It offers the largest context window on the market at the moment, which means the best memory of all the three models. You can use it to run longer prompts, process larger documents, and have a longer conversation throughout which Claude will remember the context. So if you summarize a two-hour-long video from YouTube using a third-party tool like HARPA AI, for example, Claude is often the best model to choose for this kind of task. The other interesting advantage of Claude is that it has incredible OCR, which by far is the best out there. A quick dictionary note here: OCR stands for optical character recognition. If you want to ask questions about an image, create captions for an image, or decipher handwritten text, having great OCR is a huge advantage. If I had to choose one free large language model, I'd definitely go for Claude at this moment. In fact, I still use it from time to time despite having a ChatGPT Plus account. By the way, the recent family of models called Claude 3 claims to outperform GPT-4 in some use cases. And amazingly, the second most powerful model they have, which is currently Claude 3 Sonnet, is available for free. Yes, you heard it right. It's free.

But talking about the paid versions, GPT-4 still offers great things. GPT-4 offers custom GPTs and a few more nice features that make me stay with ChatGPT so far. Right, let's wrap up. If you use a free large language model, try Claude or Google Gemini, and ditch the free ChatGPT. If you're looking for a paid model, consider GPT-4, or Claude if context size is the most important thing for you. All of these options are fine, and the approaches from this course will work for you, unless they require a specific functionality like custom GPTs, for example. That's it. See you in a few seconds, and cheers.
3. How ChatGPT Works: Strengths and Limitations: In this lecture, we are going to discuss ChatGPT. What are its strong and weak sides? ChatGPT is a natural language processing chatbot driven by generative AI technology. It allows you to have human-like conversations, and actually much more. ChatGPT can answer questions and assist you with tasks such as composing emails, essays, and even writing code. And if you dig really deep, it can be integrated with a lot of other tools to assist you in more tasks.

Here are some of the downsides. The first limitation is that ChatGPT is not a reliable source of information. As of now, ChatGPT doesn't provide accurate sources for the information that it uses. It was trained on billions of information entries, but it doesn't know what these entries really mean, which of them are more accurate and which are less accurate. Moreover, you don't have much control over the generated data. Each regeneration would result in a slightly or even significantly different answer, which is actually a blessing and a curse. That also means that the response that you'll get might be different from the one that you see in the demos of this course, even if you enter exactly the same prompt. You might think, Leo, they already have this browsing feature. You can browse with ChatGPT. Well, in 2024, this feature is still unstable. Although ChatGPT now has this browsing functionality, it's not currently very good and accurate at searching and processing information. It often produces mistakes and blends them with hallucinations so confidently that it's hard to notice. I expect some improvements in the future, but that's what we have right now. For the time being, I would recommend using research-specific tools like Bing, Perplexity, or You.com for any tasks that involve browsing.

The second downside is a continuation of the first one. The confident mistakes that ChatGPT makes are called hallucinations. You see, ChatGPT is trained to write using a language, many languages, actually. It can produce faulty information very confidently, without you even noticing it. It can generate non-existing sources of information that look very, very believable. It can even pretend it knows the content of a link, when in fact ChatGPT is just guessing the topics from the URL slug. Here's a high-level but very practical tip: good use case, few hallucinations; bad use case, lots of hallucinations. And a fancy prompt won't fix a bad use case.

The next downside is its lack of emotional intelligence. ChatGPT can simulate natural conversation, but it lacks the emotional and real-world experience, the intelligence of a human conversation partner. ChatGPT can have difficulty understanding and responding appropriately to subtle nuances in communication. But hey, we humans are sometimes even worse at this. So let's move on. The next downside is that ChatGPT itself is not very good at maths. At first, this was a huge problem. But now, ChatGPT Plus users can access some of the improvements, for example, by triggering ChatGPT's advanced data analysis functionality, or by using the Wolfram Alpha custom GPT.

The next drawback is about privacy and security concerns. The use of ChatGPT requires the exchange of data and information with the system. This poses potential risks in terms of data protection and security. It's important to take appropriate security measures to ensure that sensitive data is protected and doesn't fall into the wrong hands, which has actually already happened.

Number six is the computational cost. ChatGPT is a highly complex and sophisticated AI model. It requires substantial computational resources to run. So organizations should carefully consider their computational resources and capabilities before using ChatGPT, let's say, on premise. I believe this problem is temporary, and with time we might even be able to run large language models offline on our smartphones. It's actually something that Apple is working on right now, but at the moment of recording this lecture, it's far from being in production. By the way, Sam Altman, the founder and CEO of OpenAI, is actually looking for energy resources to scale the product to new levels.

So, like most other AI technologies, ChatGPT is great at finding patterns and analyzing data that you provide. ChatGPT does way more than just writing social media and blog posts. It can provide an almost infinite amount of ideas and points of view in a matter of seconds. You just need to prompt ChatGPT right and be aware of its limitations. So the advantages actually outweigh the disadvantages, by far. One other advantage that ChatGPT users don't utilize enough is that it knows professional frameworks. So whatever copywriting formula you have, whatever format of business or marketing analysis you want, it can handle it. The primary advantage of ChatGPT over other large language models is its simplicity in customization and personalization. And it's very easy to integrate into your workflow. You can set custom instructions, build custom GPTs, and incorporate ChatGPT with other tools. For example, HARPA AI, You.com, and Descript all utilize ChatGPT to enhance their multimodality. And by the way, multimodality is an important term in AI. It essentially refers to processing various types of media, such as text, audio, video, 2D images, 3D images, and so on. The list can go on. But it's important to remember that ChatGPT is just one tool in your huge professional toolkit. And the most important tool is your critical thinking and your experience. Yeah, let's move on to the next lesson, where we will look at this tool in more detail. See you in a few seconds.
4. Getting Started with ChatGPT: Feature Overview: Hey, friends. Welcome back. Today, we'll cover the core features of ChatGPT to help you navigate its capabilities without getting lost in the documentation, which is sometimes kind of tricky to navigate, to be honest. We'll briefly go over what each feature does and when you might want to use it. By the end of this lecture, you'll have a solid understanding of the product and what it can do for you. Fortunately for us users, and maybe unfortunately for individual instructors like me who also work full time, ChatGPT is constantly changing and improving. Some features might differ by the time you're watching this, so to keep track of updates, add the OpenAI release notes to your bookmarks and follow them on LinkedIn. On LinkedIn, they actually post the most meaningful updates, so it's really effective to do so.

All right, let's break down the core features. And speaking of core features, we have to start with models and model updates. In our context, we talk about large language models, also called LLMs. They are the core of chatbots like ChatGPT, Claude, and Gemini. These are deep learning models trained on huge datasets to generate content from human language prompts. OpenAI offers diverse models with varying capabilities. GPT-3.5 actually made ChatGPT a star. It's fast and available without an account. However, the newest flagship, GPT-4o, is much more advanced. It provides quicker and more accurate responses across text, voice, and vision. Yes, this "o" stands for omni-modality.

Our next core feature is image generation with DALL·E. DALL·E creates images from text prompts. The latest version, DALL·E 3, allows you to generate images and edit them using a selection brush and text. While it's a handy tool to have, it's not the best image generator out there. For professional needs, consider other options like Adobe Firefly, Midjourney, or Stable Diffusion. One more visual feature is OCR, optical character recognition. When you upload an image, ChatGPT will be able to read and analyze it.

Voice mode. Voice mode actually takes a couple of different features and puts them together. You can convert text to speech and vice versa. This feature is powered by the Whisper model, which excels in multilingual speech recognition, translation, and language identification. On mobile, you can also enable background conversations, which lets you keep your phone locked but still talk with ChatGPT. It's cool if you want to do something and have someone to talk with.

The next feature is crucial, and it's data controls. Data controls in ChatGPT let you manage your chat history and decide whether your conversations should train the models. You can export your ChatGPT data or even delete your account if you want to. Additionally, you can share chat links with others in your workspace, or just create a link to share the whole conversation.

The next feature is about integrations with Microsoft OneDrive and Google Drive. You can now upload files directly from Google Drive or Microsoft OneDrive, making it easier for ChatGPT to access and understand your documents, spreadsheets, and presentations. Although I wouldn't really recommend presentations for now.

The next feature is memory. ChatGPT can now remember details across chats, which enhances its ability to provide relevant responses. To make ChatGPT remember something about you, simply type: "Remember that I like", for example, "short and concise responses." This feature can be turned on or off in the settings.

Let's go to the next one: custom instructions. Custom instructions allow you to tailor ChatGPT's responses to your needs. If custom instructions are turned on, the settings will apply to any new chat that you create. However, you can't change them in the middle of a conversation. You'll need to start a new chat for any changes to take effect. There's also a set of limitations in terms of what kind of instructions you can provide, and what volume of instructions you can provide. You also can't upload any files as an instruction, and you can't easily manage multiple custom instructions. But don't worry. For these needs, we have a much better feature.

And this feature is custom GPTs. Custom GPTs are specialized versions of ChatGPT that can be tailored for specific purposes, just like custom instructions, but on a larger scale and with more variety. In my opinion, it's one of the most important features now. They allow you to provide extended instructions, keep a constant knowledge base, and even make API connections. There's also a huge collection of custom GPTs built by other people and companies. And of course, you can build your own. One of the cool features of custom GPTs is that you can switch between them within a single conversation by simply tagging that custom GPT. It significantly optimizes workflows, especially considering that you can use your files as the knowledge base. And if you're technical, you can connect third-party tools via API.

Okay. Now, speaking of API, there are two types of API features. Normally, when we speak about API, we speak about the API that ChatGPT has to enable developers to embed ChatGPT into their applications. But there's also the API that can be used within custom GPTs, and in this case, you integrate third-party tools into ChatGPT.

The next feature is browsing. Models like GPT-4, GPT-4o, and possibly all future models are able to browse the Internet. Although browsing capabilities are still yet to be improved, and it's still not the best search tool in the middle of 2024, it's still cool to have.

Our next feature is advanced data analysis. Advanced data analysis, formerly known as ChatGPT Code Interpreter (by the way, it's still called Code Interpreter in certain parts of the documentation and interface), runs Python code in a sandboxed environment. What that means for us is that we can upload data and generate insights from it, or visualize data. It's best suited for TXT, DOCX, or CSV files, but it also comes with privacy concerns. You can access the feature by selecting the GPT-4 model and triggering it by saying "use advanced data analysis", and then describing the action that you want to perform.

Last but not least, as of 2024, ChatGPT is available practically for everyone. It's available on macOS desktop as an app, in the web interface, and on iOS and Android smartphones. And that's a wrap on the core features of ChatGPT for now. But of course, we have to stay updated, because there's always something new. To do that, check out their release notes. That's it. We've gone through the core features this fast. I'll leave the text version of this lecture in the resource section so that you can easily reference it if you forgot something. That's it for now, and see you in the next lecture.
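Since the lecture mentions the developer API that lets you embed ChatGPT into your own applications, here is a minimal sketch of what that looks like from Python. This is an illustrative example, not part of the course: it assumes the official `openai` SDK and an API key in your environment, and the model name and prompts are placeholders.

```python
# Minimal sketch of preparing a request for the ChatGPT developer API.
# The chat API expects a list of role/content message dicts.

def build_messages(system_prompt: str, user_prompt: str) -> list:
    """Assemble the payload shape the chat API expects."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a concise marketing assistant.",
    "Draft a one-sentence tagline for a note-taking app.",
)

# To actually send the request (requires `pip install openai` and an API key):
#   from openai import OpenAI
#   client = OpenAI()                      # reads OPENAI_API_KEY from the env
#   reply = client.chat.completions.create(model="gpt-4o", messages=messages)
#   print(reply.choices[0].message.content)
```

The point is that the same "context plus instruction" structure you type into the chat window is what developers send programmatically, just split into explicit system and user messages.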
5. ChatGPT Data Privacy Settings: Hi, everyone. In this lecture, I want to draw your attention to an aspect that I'm sure many often overlook. Something that even a few employees at Samsung have done. By the way, check out the story on TechCrunch. Just Google "TechCrunch Samsung ChatGPT ban", something like that, and you'll see the first link. All right. So if you're like me, working in an enterprise setting and handling corporate information, there is a crucial setting that we should remember to utilize, particularly when you work with data that is confidential, or at least close to confidential. So here's how to ensure at least partial confidentiality within ChatGPT. First, you'll need a ChatGPT Plus account while working with confidential data. Then, within the ChatGPT window, open Settings, and in the Data Controls tab, disable chat history and training. The reason I say partial confidentiality is that the data is stored on OpenAI servers for 30 days anyway. So I'd suggest removing any brand mentions, sensitive data, or any data that can identify your business, and you'll probably get away with it. If you're using a free version of ChatGPT, I would recommend not entering any confidential data at all. This also applies to Bing Chat, Google Gemini, and Claude, as these large language models use your input to improve their models. I know you came here for other things, but safety comes first. So please be safe, and never underestimate privacy, as it starts with you. Alright, cheers, and see you in the next lecture for stuff that's a bit more fun than data and privacy.
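The advice above about stripping brand mentions and sensitive details before prompting can even be partly automated. Here is a rough sketch of that idea in Python; the regex patterns, placeholder tokens, and the "AcmeCorp" brand name are my own illustrative assumptions, and real redaction would need much more careful review.

```python
import re

# Very rough sketch: mask obvious identifiers before pasting text into a chatbot.
# The patterns below are illustrative assumptions, not an exhaustive filter.
PATTERNS = {
    r"[\w.+-]+@[\w-]+\.[\w.]+": "<EMAIL>",       # email addresses
    r"\+?\d[\d\s().-]{7,}\d": "<PHONE>",         # phone-like number runs
    r"\bAcmeCorp\b": "<BRAND>",                  # example brand mention
}

def redact(text: str) -> str:
    """Replace each matched pattern with its placeholder token."""
    for pattern, token in PATTERNS.items():
        text = re.sub(pattern, token, text)
    return text

print(redact("Contact jane@acme.com about the AcmeCorp launch."))
# A careful human pass is still needed; regexes miss context-dependent details.
```

This kind of pre-filter pairs well with the Data Controls setting: one reduces what you send, the other reduces how it is used.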
6. Optimize your ChatGPT Productivity with Memory Feature: At first glance, the memory feature in ChatGPT might seem a bit unnecessary, especially if you're just starting out with it. And in fact, I was a bit skeptical myself. But when I played around with it, I discovered its potential to significantly boost productivity. So today, I'm going to walk you through how I've turned this seemingly simple feature into a powerful productivity tool, and how you can do the same. The most significant advantage of the memory feature is its ability to be customized and edited to fit your specific needs. This flexibility is what transforms it from a basic tool into a huge productivity booster. Now, let me explain how I use it and why I would recommend it to anyone looking to speed up their workflow.

As usual, first, I define the goal and the context that I use frequently. This could be anything specific, like the product you manage, a description of a social media account that you oversee, or a campaign that you're working on. Essentially, think of something you often refer to in your prompts. By setting this context in the memory, you save time and ensure consistency in your interactions with ChatGPT.

Another smart use of the memory feature is storing some of the prompts that you use most repeatedly. For instance, I often need to rewrite texts into PowerPoint slides with a specific format. Instead of typing this request every single time, I create a memory for it. You can also do this for short descriptions of styles or formats that you frequently work with. This way, ChatGPT remembers and recalls exactly what you need, saving you from retyping or re-explaining the same things all over again.

The next use case is particularly useful if you collaborate with different stakeholders over a long period of time. Let's say you frequently work with someone named John, who is very goal-oriented and focused on timelines and budgets. Let's say a CMO. You can create a memory that captures John's priorities. Later, when you are preparing a report or a presentation where John will be present, you can prompt ChatGPT to review the content that you prepared from his perspective. This allows you to anticipate his questions and be prepared to address them.

Let's walk through the process with a practical example. All you have to do is just start a new chat in ChatGPT and type in: "Remember this. Every time I type /shortcut, you will trigger the following." Here, "shortcut" is the term that you create for the memory, and after "the following", you enter the specific context. For example: Remember this. Every time I type /PPT, you will reformat the latest response into presentation slides using the best frameworks from McKinsey. Use bullet points instead of long paragraphs and ensure each slide headline is self-explanatory. Remove all excessive language.

Let's look at one more example. Say your main project is an AI speech recognition platform. You can create a memory like this: Remember this. Every time I type /ASR, you will trigger the context of my speech recognition product. Here are the features: one, two, three, four, five. Here's the target audience: one, two, three. Here are the problems that this product solves: one, two, three. And here's the tech stack: one, two, three.

So by setting up these simple but effective memories, you significantly reduce the time spent prompting. Not only does it make your interactions with ChatGPT faster, but it also keeps your prompts consistent and easy to customize. Even if you're using custom GPTs, these memories will still provide the same productivity benefits. Guys, take the time to set up your memory feature. It's an investment that pays off quickly by cutting out the repetitive parts of prompts. Whether you're managing complex projects or just looking to optimize how you accomplish your everyday tasks, ChatGPT's memory feature can easily transform your productivity and interaction with ChatGPT. Hope you found this useful, and that it saves you a lot of typing. See you in a few seconds.
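The slash-shortcut pattern from this lecture is really just a lookup table: a short trigger expands into a long, reusable prompt. Here is a minimal local sketch of the same idea in Python; this is not a ChatGPT feature or API, just an illustration, and the shortcut names mirror the lecture's examples.

```python
# Local sketch of the lecture's slash-shortcut idea: map short triggers
# to the full, reusable prompt text they should expand into.
SHORTCUTS = {
    "/ppt": (
        "Reformat the latest response into presentation slides. "
        "Use bullet points instead of long paragraphs and ensure "
        "each slide headline is self-explanatory."
    ),
    "/asr": (
        "Use the context of my speech recognition product: "
        "its features, target audience, and the problems it solves."
    ),
}

def expand(prompt: str) -> str:
    """Replace any known /shortcut tokens in a prompt with their full text."""
    for trigger, full_text in SHORTCUTS.items():
        prompt = prompt.replace(trigger, full_text)
    return prompt

print(expand("/ppt"))  # prints the full reformatting instruction
```

Whether the table lives in ChatGPT's memory or in a snippet tool on your machine, the productivity win is the same: type the trigger once, reuse the full instruction everywhere.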
7. MacOS app Overview: Hi, everyone. In this video, we are going to discuss the macOS app for ChatGPT. So let's start at the top, from left to right, just as we read. First of all, here you have the toggle that opens and closes your sidebar. In the sidebar, you have the search within chats. You can search for keywords within conversations or for your chats. Then you see your custom GPTs, and you see Explore GPTs. When you press Explore GPTs, you're basically directed to a new window where you see the featured GPTs, where you can search for more, or access your own GPTs. Right here, you can search public GPTs.

Then you see Projects. Projects is a new feature at the time of recording. Basically, this is a combination of custom GPTs and file management. In this case, it's chat management, something that we've been waiting for for quite a long time. Right now, you can have this on paid versions, and I'm sure that pretty soon everyone is going to have access to Projects in ChatGPT. For some reason, at this stage, you cannot create new projects in the macOS desktop app. You can only do so in the web version. I'll provide a separate video on ChatGPT Projects so that we don't get distracted within this video, but you can use your projects here. And the button to create a new chat is over here on the left, at the top.

Next, you see the model choice. You see these models here. You can create a temporary chat or look for more models. Typically, the hidden models are the ones that people stop using gradually.

Now let's move on to the chat bar. In the chat bar, what you see is the plus button, and you see these features. It really reminds me of Apple Notes. You can upload a file or a photo, or take a screenshot. You can take a screenshot of any screen that you have at the moment, and it will be automatically attached to the chat. You can take a photo with a camera; in this case, for example, I'm using a web camera. Or you can toggle web search. Web search, by the way, has also received a lot of improvements. I wasn't happy with web search at all in the previous iterations. This one is actually much better. This button duplicates the "Search the web" option. So if you want your prompt to trigger web search, just press this part, and web search will be enabled for any query that you make.

This is the "Work with" button, and basically it's most useful for coding. You can choose apps that ChatGPT can work with, so ChatGPT will integrate into that app. So far, the support for apps is not very big, but I'm sure that we're going to get more and more, and most apps are going to be supported. So this button enables you to work with other apps. So far, the choice is not very wide. It's primarily a feature targeted at coding. So far, you can use Notes or a few of the other text editors.

On the right of the chat box, you have the two options for voice mode. The first voice mode is this microphone. I use it most of the time, because I just like telling the context and speaking with ChatGPT instead of typing long prompts. And with the most recent models, long prompts actually work very well. So I decide to just tell my context as a prompt. Then you press this checkmark, and you can see that this transcription is actually not bad. It's actually very good. The voice mode is slightly different. It's very similar to what you have in your smartphone app. Right now, let's talk with ChatGPT within voice mode. "How do you feel today?" "I'm here and ready to chat. How about you? How's your day going?" "My day is going fine." Let's have a look at what's available within this voice mode. Right here, you can share a clip of your conversation. You can access settings. You can mute your microphone to not interrupt the voice mode. You can work with other apps, or close this voice mode. Once you finish your voice conversation, you have it documented within a chat, which is pretty handy, because you might want to get back to it.

One more important place within this app is actually this one over here. You can check for updates, and you can also go to Settings. Within Settings, you have all of your typical settings. Plus, you have some additional settings, like correcting spelling, automatically launching at login, and the map provider. I'm using Google Maps. There is no particular feature to work with maps at the moment, but I assume it's coming soon. And of course, you can check for updates, and you have additional settings to work with apps, and you can manage the apps, so which ones are enabled and which ones are not. Just as with the web option, you can customize your ChatGPT with memory or custom instructions, and you can set up the data controls and turn them off, archive all of the chats, delete all of the chats, or export your data.

One more important feature of the ChatGPT app that I actually don't use that much, but maybe you will, is the shortcut. With this shortcut, you can start a conversation wherever you are. So let's press this shortcut. In my case, it's Option+Space, and you get this mini chat box. This is pretty useful when you don't want to have a lot of interruptions. For example, you're browsing and you want to ask something, and you can immediately switch on the voice mode. One more way to switch it on anytime is to press this ChatGPT icon at the very top, and it will start the chat anytime while you're browsing, with less disruption. All right, that's pretty much it for the desktop app of ChatGPT. I hope you find it useful. Please expect more and more features coming to ChatGPT, and deeper and deeper integration into the macOS system.
8. ChatGPT Prompt Engineering: Learn to Communicate With Generative AI: Keeping up with AI advancements is a daunting task. One day, you'd find out that being polite with AI gives you better answers, and you go spend time testing it. The next day, you'd see some sort of magical symbol that supposedly makes ChatGPT open a Pandora's box of intelligence. In reality, that's a huge waste of time. Instead of focusing on random posts, tips, tricks, and hacks, dedicate your attention to mastering the long-term principles of prompt engineering. In the following lessons, we'll dynamically go through the principles that won't change, regardless of the latest half-an-hour-ago release. This will be especially useful to someone working on marketing tasks, but all of these approaches can be applied to other professional fields. Many of these approaches have a much more complex nature and come from machine learning. But what we casual business workers and marketers need is an approach that works with AI on a daily basis, without writing any code. So let's think of this as learning to communicate with generative AI tools like ChatGPT. By the way, one exciting side effect that I noticed in myself and many other people is that prompt engineering is a transferable skill. If you create or communicate tasks to other people, you'll find yourself being more structured and effective in your communication. But pay attention: this is not a technical course for developers. It's rather for business-related professionals like marketers, managers, and content creators, who need to learn to write prompts to solve daily tasks effectively. If that's you, you're in the right place. Let's get started.
9. Understanding Generative AI Prompts: Ever wonder how AI models like ChatGPT produce text that seems almost human? Well, the key is in the prompts, the instructions and the context that you provide. This video covers the crucial role of prompts in guiding AI. So what is a prompt? A prompt is the input: the data, instructions, context, or examples that you give to an AI model like ChatGPT. This input is the foundation that guides the AI toward relevant and meaningful responses. This applies to text, images, and beyond. In other words: garbage in, garbage out. So what is a good prompt built of? Start with instructions,
clear task guidelines. For example: write a news article summarizing the latest SpaceX launch. Second, context: the background information that frames the output. Then the input data: specific examples or data points to integrate. Then output indicators, like the desired format, tone, length, or other output characteristics. We'll discuss all of
that in more detail. But first of all, why does prompt quality matter? Well, a poorly crafted prompt doesn't activate the AI's capabilities. It sort of feels like driving a powerful car in a traffic jam. And vice versa: a well
crafted prompt ensures that you direct the use
of AI capabilities. You integrate the precise
information and context. You have control over tone,
style, and structure. And you also enhance
the accuracy, nuance, and creativity in
the results that you get. So with the growing
role of prompts, the niche of prompt
engineering has developed, focusing on enhancing
prompting techniques. Various tools are also
available to help craft, optimize, and evaluate
prompts effectively. But ultimately, prompt
engineering isn't really engineering in the broad sense of the word. It's rather a skill
of setting up the task the right way
with the right words. Let's sum up. Prompts form the essential bridge between
human intent and AI output. Therefore, mastering prompt
writing empowers you to fully use AI capabilities
across applications, helping you win in the human-AI collaboration. Let's discover this in the next videos. See
in a few seconds.
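Before moving on, the four prompt components from this lecture can be sketched in a few lines of Python. This is a minimal illustration, not from the course itself, and every example string in it (the SpaceX task, the blog context, and so on) is a hypothetical placeholder.

```python
# A minimal sketch of assembling the four prompt components from this lecture:
# instructions, context, input data, and output indicators.

def build_prompt(instructions, context="", input_data="", output_indicators=""):
    """Join the non-empty components into a single prompt string."""
    parts = [instructions, context, input_data, output_indicators]
    return "\n\n".join(p for p in parts if p)

prompt = build_prompt(
    instructions="Write a news article summarizing the latest SpaceX launch.",
    context="The article is for a general-audience tech blog.",
    input_data="Launch facts: Falcon 9, Starlink satellites, Cape Canaveral.",
    output_indicators="Format: three short paragraphs, neutral tone, under 200 words.",
)
print(prompt)
```

The point is only that each component is a separate, swappable block; leaving one out (say, the context) still yields a valid, if weaker, prompt.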
10. The Ultimate Prompt Structure for Versatile Use Cases: What makes a good prompt? What are the components of a good prompt, and how do you build yours? The prompt is the main instrument that we use when we interact with a large language model like ChatGPT as users. And it's what generative AI needs to, well, generate something. That could be our question, a step-by-step instruction, or a task. Depending on how much information and instruction you give to ChatGPT, it is going to come up with either a high-level generic answer or a more profound, more detailed response. So what are the components
of a good prompt? We already discussed
this on a high level, but let's go into a
bit more detail. First, it's an exact
and detailed task. Think of the large language model a bit like SEO: you need to put in the right keywords to activate the right parts of the model's training data. If you struggle with
it for your prompt and you don't have
the inspiration or time to optimize your prompt, try the tool called
PromptPerfect. The link is over here. And it's also in the
resource section. It automatically optimizes
your prompt by either removing excessive words or adding more details depending on
what you put into the system. Okay, let's go back to
talking about the prompt. So it's actually the
reason why assigning a role in the beginning of
a prompt is a good idea. It's the reason why
asking to follow specific frameworks related to your tasks is also
an amazing approach. Second, a good prompt should
have a context and a task. A good prompt needs to
provide information or context needed to
complete the task. Think of this as if
you're explaining your task to a new
employee or a freelancer. Third, your prompt needs to balance clarity and
precision with details. Avoid contradictions
and unnecessary words. Try to structure and
format your prompts. The fourth component of a
great prompt is singularity. It's one prompt,
one goal, one task. Buddha would definitely like that one. All right. Try to use simple language and avoid confusing the
large language model. The next step is to include negative instructions
where necessary, meaning setting limitations on what not to do or what to avoid. Just as we would add negative keywords in Google Search ads, this also helps us avoid
any irrelevant answers. Tip number six, if you see yourself repeating
the same use cases, and the same type of prompt, invest in testing and iterating on developing
your prompt. Refine it. Add what you
think needs to be added. Once you get impressive
results multiple times in a row, save that prompt in your
notes or a template library. Now, let's put it all
together into one prompt. We'll start with
assigning a role, mentioning the industry, the
company, what it's doing. Then we'll continue saying
what we will provide, what kind of output we expect from ChatGPT. Then we set rules on what it can and cannot do. Then, finally, we provide any relevant input, like market data, for example. That universal, all-round prompting technique will save you hours of experimentation and will get you thinking in the right categories
while prompting. All of the other prompt
engineering techniques that we will discuss here will relate to these components and principles in
one way or another. So see you in a few seconds in the next lecture, where we discuss those techniques.
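The universal structure described above (role, company context, what we'll provide, expected output, rules, negative instructions, input data) can be sketched as a reusable template. This is a hedged illustration: the template wording and every field value (Acme Fitness, the market-data line, the rules) are invented examples, not the course's exact prompt.

```python
# A sketch of the universal prompt structure from this lecture, as one
# reusable template. All field values below are made-up placeholders.

TEMPLATE = (
    "You are a {role} for {company}, a company in the {industry} industry.\n"
    "I will provide: {provided}.\n"
    "Task: {task}\n"
    "Rules: {rules}\n"
    "Do not: {negative}\n"
    "Input data:\n{input_data}"
)

prompt = TEMPLATE.format(
    role="senior marketing manager",
    company="Acme Fitness",
    industry="fitness",
    provided="recent market data",
    task="Write one social media post announcing our new studio.",
    rules="Keep it under 120 words; use an upbeat tone.",
    negative="use hashtags or invent statistics",
    input_data="Local searches for 'gym near me' are up 30% year over year.",
)
print(prompt)
```

Saving a template like this in your notes is the code analogue of the "template library" tip from earlier in the lecture: fill in the fields instead of rewriting the whole prompt each time.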
11. Mastering N-Shot Prompting: In prompt engineering, the term "shot" basically means an example. Shot prompting is a technique for guiding ChatGPT in generating specific responses by providing examples, also known as shots. This approach is particularly useful for marketing and business professionals who need to create structured content, such as blog posts, social media posts, or product descriptions that align with their brand tone and style. So why use shot prompting in ChatGPT? Reason number one is improved content relevance: shot prompting ensures that the generated content meets specific requirements and is tailored to the brand's voice and style. Reason number two is
increased efficiency. By providing clear
examples and context, shot prompting
reduces the amount of editing and rewrites, allowing us to save time
and focus on other tasks. Reason number three is
enhanced creativity. Well, ChatGPT's ability to generate content based on a few examples encourages creative thinking and experimentation, and it can lead to innovative ideas that might not have been considered otherwise. For example, why not
search for inspiration from more creative industries
than the one of your brand? If you work in a tech company, why not use inspiration
from a fashion brand? To use shot prompting in ChatGPT, first identify the specific goal or task that you want to accomplish, such as creating a social media post, writing a product description, or a series of SEO meta descriptions. Then find relevant examples of content that align with your
goal or your inspiration. That could be a successful
social media post of your brand or an example of the best selling
product description. In other words, use
successful examples to guide ChatGPT in creating
similar content. Number three, construct
a prompt that includes context and examples, and then use Chad GPT to
generate the content. Let's suppose you want to create a social media post for
a new product launch. You find a successful post from a similar brand
or competitor. Use it as an example to guide
T GPT. Here's an example. You're a world class
marketer for a tech company. Your task is to write a social media post using the specific context that I give you. Then you enter your context. This can be as detailed as you want, and it could actually have multiple sections or even file attachments. And then you provide a post example. So if you don't
provide an example, that would be a zero-shot prompt. If you provide one example, that would be a one-shot prompt, and if you provide more examples, that would be N-shot prompting. By the way, five-shot prompting is one of the standards for benchmarking large language models. So, yeah, let's add
a post example. By providing this
context and example, ChatGPT can generate a
social media post that is similar in
style and tone, ensuring that your
brand's message is effectively and
consistently communicated to your target audience. By the way, not many people
are talking about it, but shot prompting is actually a technique that is also
used in other modalities. In image generation,
you can provide a reference and generate
a similar image. This way, you get so much more control over
the colors, the composition, and the overall style, and
you actually have to prompt less because the model
would take a lot of parameters from
the image itself. While it won't keep the
reference face the same, the images are still going
to be quite similar. Let's have a look at an
example in Adobe Firefly. Let me open my Adobe Firefly and go to text to
image functionality. Then I'll applaud this image
of myself as a reference. And enter a basic prompt. In this case, the
reference image is a shot. Therefore, we get
a one-shot prompt. In consumer image
generation tools, one shot is typically
all that you get, but it's already helping a lot. Now, let's compare an image generated without an example, which would be zero-shot, with an image generated with an example, one-shot. Not to say that one is better than the other. But to me, it is so obvious
how much more control you get and how you increase your chance of getting
closer to your reference. Now, is that it? Well, almost. Let's look
at how it works in audio. In audio, your shot
is an audio sample. Let's call to Stable audio and follow the same logic
we did with the images. By the way, stable audio is a
tool that allows you to use NAI to create music
without lyrics. It's actually ideal
for background audio. And by the way, it's a
licensed stock music, so you can use the audio commercially if you
have a subscription. So here we are in St audio. Let's upload this audio sample. And now let's give a prompt. Let's listen. It's a new track, but you can already hear the impact of the example
that you provided. And you can actually
adjust it as well. Now, let's discuss a couple more best practices for shot prompting in ChatGPT. Number one is use clear
and concise language. Ensure that your
prompt is easy to understand and includes all necessary context
and examples. But try to make the prompt
itself relatively simple. Different tools and modalities may have their prompt guides, so please check out as a
rule of for text models, start with a verb like Search, do perform for images. Don't start with the verb. Start with describing
the subject. We'll discuss this
in more details. For audio, don't
use verbs as well. Instead, describe the style, temple, instruments, et cetera. Then next, provide
relevant examples. Use examples that are
relevant to your goal and align with your brand's tone. It's a great way to get closer to the intended brand tone of voice. You'll just have to make fewer edits. Isn't that beautiful? Well, depending on what
you're trying to achieve, you can either provide
reference examples of great campaigns as an inspiration or use your
past best-performing work. For example, you could use posts with the highest engagement rate or reach. Or, if you're doing Instagram, you can choose the posts that were shared or sent to someone else the most, as Adam Mosseri recently said that they actually consider the sends-per-reach metric. By the way, one more
great source for examples is a website that I follow and love: Marketing Examples. Really a great resource, I highly recommend it. So once you get the output, don't forget to actually read it and refine the
generated content. Just to make sure that it meets your objectives and
quality standards. Unlike many instructors online, I highly encourage you not to let the AI do unsupervised work for you, especially if it goes public. But with the help
of these methods, you'll have significantly
less work at this stage. So remember this idea, create content with
AI, not by AI. So by incorporating shot prompting into your ChatGPT workflow, everyone can create more consistent content, ultimately driving better results for their campaigns, presentations, emails, communications, and more. All right. I hope that now, at the
end of this lecture, you're ready to go and
use this technique. We'll discuss more
ways to communicate with JAI and I'm excited
to see you there.
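The zero-, one-, and N-shot idea from this lecture can be sketched as a tiny prompt builder. This is an illustrative sketch only; the role, task, and example post are hypothetical, and real prompts would carry much richer context.

```python
# A small sketch of zero-, one-, and N-shot prompt construction, as
# described in this lecture. Role, task, and example texts are made up.

def n_shot_prompt(role, task, examples):
    """Zero-shot if `examples` is empty; otherwise one- or N-shot."""
    lines = [f"You are {role}.", task]
    for i, example in enumerate(examples, start=1):
        lines.append(f"Example {i}:\n{example}")
    return "\n\n".join(lines)

zero_shot = n_shot_prompt("a world-class marketer", "Write a social media post.", [])
one_shot = n_shot_prompt(
    "a world-class marketer",
    "Write a social media post.",
    ["Big news! Our spring collection just dropped."],
)
print(one_shot)
```

Passing a longer list of examples turns the same call into N-shot prompting; the structure of the prompt doesn't change, only the number of shots appended to it.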
12. Chain of Thought Prompting Technique: Welcome back, everybody. Many of our tasks in
life are multi-step, meaning we need to take a few logical steps
to solve a problem. Well, for more complex
and multistep tasks, we have chain of
thought prompting. So what is chain of
thought prompting? Well, chain of thought is a prompt engineering technique designed to enhance the
reasoning capabilities of large language models by
breaking down the problem solving process into
a series of intermediate, simple, and manageable steps. Unlike other prompting methods that typically expect
a direct answer, chain of thought prompting
guides the AI to generate the sequence of reasoning steps leading to the final answer. Sherlock Holmes would love that. But if you are not
Sherlock Holmes, why would you use
chain of thought? Reason number one is clarity. By dividing complex problems
into smaller steps, chain of thought clarifies
the reasoning process, making it easier to
understand and follow. Reason number two
is verification. It allows you to trace how
your solutions are derived, which builds trust and helps you identify where
things went wrong. Reason number three is accuracy. The systematic approach
of chain of thought prompting improves the
accuracy of AI responses, especially in complex
reasoning tasks, such as arithmetic,
common sense reasoning, logical reasoning, or
symbolic reasoning. Reason number four to use chain of thought is in-depth analysis. For example, in marketing, chain of thought can be particularly effective for developing a multi-step strategy or for comprehensively analyzing customer feedback, or basically any data. Let's actually discuss the use cases. Number one is, of course, marketing
strategy development. Creating a detailed multi
step marketing strategy would actually require a chain
of thought of some sort, even if you're doing this without AI. Then, data analysis. For example, you could break down customer feedback into actionable insights. By the way, the idea of analyzing data and then drawing conclusions from that data makes a huge difference when working with AI models, and chain of thought is actually a great application of that approach. The next use case would
be competitive analysis. You can systematically evaluate competitors' strengths and weaknesses, especially if you pull up all of that information and provide it to ChatGPT. Then, performance analysis. For example, you could analyze your marketing campaign performance step by step to identify areas for improvement in the next campaign. So let's go step by step on how to actually apply chain of thought in ChatGPT, and you will not require any code at this stage. So step number one, as always, is to identify the task. Start by clearly
understanding the problem or question you want
the AI to solve. Step number two would be to start with the initial prompt as you normally would: begin with a standard prompt, such as a question or
a problem statement. For example, what are the potential competitive
advantages of an AI-driven video editing app? Step number three would be to add a chain-of-thought directive. Include a directive like "let's think step by step" to guide the AI through
the reasoning process. And even if you don't
have the steps, adding this phrase already
makes a difference. If needed, outline the
logical sequence of steps the AI should follow. For example: first, consider the modalities involved in video content creation. Then outline the
potential must-have, should-have, and could-have features of an AI video editing desktop app. Third, identify user segments and combine the features into sets of features per user segment's needs. Sounds complicated, but it's
easier when you see this. So let's have a look
at the complete prompt. In the context of increasing adoption of AI, what are the potential competitive advantages of an AI-driven video editing app? Let's think step by step. First, consider the modalities involved in video editing and content creation. Second, outline the potential must-have, should-have, and could-have features of an AI video editing app. Third, identify user segments and combine features into sets of features per user segment's needs. Cool, but you learn
everything in comparison. So now let's see what
a response looks like in comparison to a
plain zero shot prompt. I'll put the responses
side by side. Chain of thought on the left. And zero shot on the right. Feel free to pause the video, to review, compare,
and see for yourself. If you're listening to this
lecture in audio only format, come back later and
just have a look. Because reading it all out loud would just take
a lot of your time. So it's probably content that's easier to see rather than hear. Okay, so the zero-shot
one-sentence prompt produced a somewhat superficial, high-level answer. It doesn't really help envision anything. Moreover, some of the suggested items are not features at all. And while the chain-of-thought example, of course, needs revision, look at how much more detail and precision you get
out of this result. By the way, to enhance this
technique even further, you can combine a chain-of-thought prompt with one-, two-, or even N-shot prompting. In this case, your shots will be part of the context provided, and chain of thought will
be your instruction. I hope you already see
the pattern and see the potential use cases and
applications in your life. Please note that while I provide product management examples
and marketing examples, because that's my background and that's what I do for a living, these approaches can also be applied to project management, business analysis, and overall problem solving in life situations. Okay, let's summarize
the key takeaways. By using chain of
thought prompting, business professionals
can significantly enhance the speed, effectiveness, and clarity of AI-driven analysis
and strategies. It leads to more informed decision-making and better overall outcomes. Use chain of thought for more complex, multi-step requests that require breaking the problem down into multiple sub-requests. And for even more control over the response,
provide those steps. Don't forget to provide your
context if it's needed, and for the best precision
and steerability, combine chain of thought with one-shot or N-shot or even RAG-inspired prompting. That's it for this video. I hope that you can see the
use cases for yourself, and you're willing to
go and try it out.
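The chain-of-thought recipe from this lecture, a base question plus the "let's think step by step" directive plus an optional outline of steps, can be sketched in a few lines. The question and steps below paraphrase the lecture's video-editing example; treat them as placeholders.

```python
# A minimal sketch of adding a chain-of-thought directive, and optionally
# a numbered outline of reasoning steps, to a plain prompt.

def chain_of_thought(question, steps=None):
    """Append the step-by-step directive, plus numbered steps if given."""
    prompt = f"{question}\nLet's think step by step."
    if steps:
        numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, start=1))
        prompt += "\n" + numbered
    return prompt

prompt = chain_of_thought(
    "What are the potential competitive advantages of an AI-driven video editing app?",
    steps=[
        "Consider the modalities involved in video editing.",
        "Outline must-have, should-have, and could-have features.",
        "Identify user segments and map feature sets to their needs.",
    ],
)
print(prompt)
```

Calling it without `steps` gives the bare directive version, which, as the lecture notes, already makes a difference on its own.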
13. Iterative Feedback Loop Prompting Technique [Self-Consistency Prompting]: Imagine you're working
on a complex problem and ChatGPT isn't quite
getting it right. Just something seems off. Well, this is where feedback-based prompting
comes in handy. Without further ado, let's
go for a step-by-step guide. Step one is initial prompting: start by giving ChatGPT your usual prompt and receive its first response. As step number two, ask ChatGPT to critique its own response and suggest improvements. This step can significantly enhance the quality of the output. Step three, use perspectives. Think about what your stakeholders might say or ask, and who they are. When prompting ChatGPT, specify the perspective, such as, let's say, provide feedback from the copywriter's point of view, or from the CMO's, the Chief Marketing Officer's. That could be asking to provide feedback from the copywriter's point of view, or the CMO's point of view, or the CEO's point of view, right? And this would actually add a valuable feedback layer to your process. Once done, identify the suggestions you find most valuable and ask ChatGPT to elaborate on those points, just as you normally would. To make it a bit more practical, use the prompt templates from
the Prompt library to get started or to revise the
materials from this course. As I'm signing off, I'm getting ready to record
the next lecture. I wanted to check out
this conversation with CGPT as a practical example, the real world example. If you're ready, try this
technique on your own and see how it can improve
your interaction with ChagpT. Any way? See in
the next lecture. Cheers.
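The three-step loop from this lecture (draft, critique from a stakeholder's perspective, revise) can be sketched as a simple function. The `ask` callable here is a stand-in, in practice it would call your chat model of choice; the toy `fake_ask` below only exists to show the call sequence.

```python
# A sketch of the feedback loop from this lecture: draft, then for each
# stakeholder perspective, critique and revise. `ask` is a stand-in for
# a call to a chat model.

def feedback_loop(ask, task, perspectives):
    draft = ask(task)  # step one: initial prompting
    for who in perspectives:
        # step two/three: critique from a specific point of view
        critique = ask(f"Critique this from the {who}'s point of view:\n{draft}")
        # fold the feedback back into the draft
        draft = ask(f"Revise the text below using this feedback.\n"
                    f"Feedback: {critique}\nText: {draft}")
    return draft

# Toy stand-in model that just counts calls, to show the sequence.
calls = []
def fake_ask(prompt):
    calls.append(prompt)
    return f"response#{len(calls)}"

result = feedback_loop(fake_ask, "Write a product blurb.", ["copywriter", "CMO"])
print(len(calls))  # 1 draft + 2 calls per perspective = 5
```

Each perspective costs two extra model calls (critique plus revision), which is worth keeping in mind when you add many stakeholders.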
14. Self-Asking Prompts: 5 Why Approach: Hi, everyone. In this lecture, I wanted to show you a very cool twist on the ask-before-answer prompt engineering technique. And this one is going to be a bit different in its logic. So we are going to use the five whys technique in order to get to more in-depth and more relevant solutions to our problems. This is a great way to go
towards your desired output through a little bit of a problem-solving approach. So let's go to my screen, and I'll show you how it's done. Typically, when we ask ChatGPT to ask us questions, we ask it to do so right away, then we provide the response, and then ChatGPT comes up with the desired output. In this case, we go a bit more sequentially. The five whys technique is a problem-solving framework that allows you to get to the heart of the problem by asking five "why" questions. Let's start with "why" and do it step by step. So here we are in ChatGPT, and let's start with
the first example. The context is I'm preparing a local media press release
for a fitness studio. The prompt would be: ask me questions using the five whys technique, one question at a time. Once done, suggest a list of
five topics and headlines. So here's the first question: why is the fitness studio issuing a press release at this time? I would say: to improve website authority and awareness. Then: why does the fitness studio believe a press release will effectively improve website authority and awareness? I would say they have good domain authority and website traffic. So that was why number two. Then why number three directly responds to my previous answer, asking: why does having good domain authority and website traffic make a press release a suitable tool for further improving website authority and awareness? I would say that it
helps me rank higher on my target keywords and get more website
visitors and leads. Then I get the same question, but about my last answer: why is ranking higher on target keywords crucial for your fitness studio's online success? I would say: leads and customers with high LTV. Then the fifth why actually strikes the real reason. It really happens like this in real life: when you interview customers for your products in product interviews, you really get to the truth on the fifth why. The response is that competitors in my city don't invest enough in SEO and getting good search traffic. And this approach gives
me a marketing advantage. Now, finally, based on my input, here are five potential topics and headlines for your press release that could leverage your fitness studio's SEO and competitive advantages. Of course, I did not describe the advantages of
the studio itself, but let's have a look
at the examples. The reason why we have a
list of five topics and headlines is that some of those are just not
going to be good, and let's accept this fact. So let's have a look
at these five options. Topic number one says unique fitness approaches. The headline is: innovative training techniques, how the Studio Name stands out in the city. I'd say it's too blunt. Let's move on. The next one: transforming lives, real success stories from Studio Name. A bit generic, but better. And the third one is the
one that I actually like: building a healthier community, the fitness studio's impact in the city. And that is actually the one that I would probably choose. Let's have a look at the
two others. Number four: meet the experts behind Premier Fitness Studio. I would play around with it, but I like "the experts behind". I would even change it to "the people behind". So, you see, you don't rely on ChatGPT for the final, final, final everything. You are submitting this, so it's our responsibility to come up with the final result. And don't be afraid to tweak a few words here and there. And the last one is actually very GPT-style, with "revolutionizing fitness in the city". Not realistic at all. Sounds a lot like ChatGPT. I would skip this one. But at least I have numbers three and four here, which are actually workable results, and something that I would
need to think of on my own, maybe for a bit longer. All right, so let's summarize. Using five whys with the self-asking, or ask-before-answer, prompt engineering technique is a great way to arrive at new ideas
through problem solving. I hope you have a chance
to give it a try, and you like what
you get out of it. And see in the next lecture.
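The drill-down logic of this lecture, where each new "why" question is built from the previous answer, can be sketched as a small loop. The `answer` callable stands in for you (or ChatGPT) responding, and the canned fitness-studio replies are shortened versions of the example above, supplied here only so the sketch runs.

```python
# A sketch of the five whys loop from this lecture: each "why" question
# chains off the previous answer. `answer` is a stand-in for a responder.

def five_whys(answer, problem):
    """Ask five chained 'why' questions and return all answers."""
    answers = []
    subject = problem
    for _ in range(5):
        reply = answer(f"Why: {subject}?")
        answers.append(reply)
        subject = reply  # the next "why" digs into this answer
    return answers  # answers[-1] approximates the root cause

replies = iter([
    "to improve website authority and awareness",
    "they have good domain authority and traffic",
    "it helps rank higher on target keywords",
    "higher rankings bring leads with high LTV",
    "competitors do not invest enough in SEO",
])
result = five_whys(lambda question: next(replies), "issuing a press release now")
print(result[-1])  # the root cause tends to surface on the fifth why
```

In the ChatGPT version, the model plays the questioner and you play `answer`; the chaining structure is the same either way.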
15. RAG-Inspired Prompt Engineering Technique: So what is retrieval augmented generation? Introduced by Meta, retrieval augmented generation, also known as RAG, is a technique that combines information retrieval with language models to generate more accurate and contextual responses. RAG models pick relevant information from a knowledge base and use it to guide the generation process. This results in a more relevant output that is grounded in facts and better aligned with a given context. However, building RAG is a technical process, quite an advanced one, and it requires a specific set of programming skills and resources to build a RAG application or API integration. With the techniques in this lesson, and with some limitations, of course, we can take advantage of a RAG-inspired approach in ChatGPT. That's why it's more correct to call it RAG-inspired prompting rather than a pure RAG application. Over the next couple of minutes, we'll discuss why to use RAG, the best RAG use cases, how to build a no-code RAG-inspired custom GPT in ChatGPT, and how to minimize hallucination in the custom GPT RAG. So without further
ado, let's go. First of all, why use RAG in prompt engineering? RAG offers several benefits for prompt engineering. Number one is improved accuracy. By retrieving relevant information, RAG can generate responses that are factually correct and less prone to hallucinations. Though, speaking of hallucinations, there is a nuance that we'll discuss a bit later in this video. Number two is enhanced context. RAG uses contextual information from the provided documents, enabling the model to generate responses that are more relevant and reliable. One of the most important benefits is expanded knowledge. RAG allows language models like ChatGPT to access information beyond their training data. It enables them to handle prompts about current events, user-specific data, or company documents, or, let's say, product information, pricing, et cetera. Now, let's talk a bit about
the use cases in business. How can you use RAG? Well, some of the most promising RAG use cases for a business environment include customer support chatbots. Of course, by accessing up-to-date product information, RAG empowers chatbots to provide more accurate and contextually appropriate responses. Number two is business
intelligence and analysis. Well, you can generate market analysis
reports or insights by retrieving and incorporating the latest market
data and trends. You can also use your brand guidelines, strategies, tactics, and, let's say, product or service information to, for example, generate ideas for content, create content, and maybe some presentation decks. That can really speed up the process and improve the relevance of the content that you create. Speaking of content, content creation is a very important use case. RAG can improve the quality and relevance of the content by pulling in accurate, current information from various sources. And it would make your content more informationally saturated, so to speak. The RAG approach can be used to maintain your tone
of voice as well. Let's talk about how to build a RAG with no code in ChatGPT. Well, as I said before, RAG is a technical term. Normally, you'd need a technical person or even a team to build a RAG app, but non-technical people can also take advantage of this approach. For example, by creating custom GPTs, which are basically RAGs in their nature, or simply by prompting an LLM that supports text file attachments. Let's go through step-by-step instructions on creating
your RAG with no code by creating a custom GPT. Step number one, as always, is to define the goal. This is a crucial step that will impact how you prepare your knowledge base and prompts. And the best way to start is to choose one primary use case and work your way from there. Then prepare your
knowledge base. Gather the relevant documents, articles, and data you want your system to have access to and refer to. Just copying and pasting and throwing tons of information into one document wouldn't really work at its best, only to some extent. With your intended use case in mind, make sure that all titles and file names are written so that it's easy for the LLM to scan. Make sure you use the same keywords in the knowledge base and your prompts to maximize the scanning accuracy. This way, the system will trigger the right parts of the document. The best formats for GPTs are DOCX and CSV, and of course TXT. Images are ignored. It's critical that you know
your knowledge base quite well and can access or update it when you need to. From my experience, I'd recommend sticking to one document, as it works better than multiple ones. Now let's go and create the custom GPT. So go to ChatGPT, click Explore GPTs, create a new GPT, and click Configure
to manually enter the instructions and
upload the knowledge base. Use a combination of prompt engineering techniques: give a role, context, task, and step-by-step instructions. Now, this prompt part will be important, because otherwise GPT will blend the training data with data from your knowledge base and produce a hallucination. For example, it could pick a product description from the knowledge base and hallucinate its price and availability by triggering the general training data. To avoid this, in your custom GPT configuration, use the phrase "search the knowledge base". This will trigger the data analysis function in ChatGPT. Now, step number four is to
test and refine your prompts. Save the custom GPT and test how it retrieves information
from your knowledge base. Challenge it to
provide responses with factual information from
your knowledge base. Review your
instructions, prompts, conversation structures,
and knowledge base. Make sure to optimize
the vocabulary and keywords to ensure
the perfect match between Rag and prompt. Now let's go over some examples of Rag prompts for
different use cases. Number one is customer support. So you are a customer
support agent for let's say your company name. Use the provided
product information to answer the
following question, and then customer
query. Market analysis. You are a market
analyst for industry, provide a summary of the current market
trends based on the latest industry
reports and data. Then you would attach your industry reports
and data, of course. Now let's have a look at
the content creation prot. You are a content writer
for brand or type of brand, if it's not very popular. Use the provided brand
guidelines and target audience information to write type of content about topic. And then you can even
provide a couple of notes. Here's an extra layer of protection from hallucinations. Once you have an answer that you like, prompt this: double-check the details in this response for alignment with my document, or knowledge base if that's your case; find out if there are any discrepancies between this text and the document provided. Let's summarize. Using RAG techniques
and prompt engineering, marketing and business
professionals can generate more accurate, contextual,
and effective content to support their
business objectives. And while some of
the examples that I provided are from marketing (well, most of them, because that's my experience), this can be applied to anything. For example, if you're learning, you can take notes from a course that you are taking, summarize them neatly, add the particular information of your project, and then create a RAG just like this, and you will communicate with the knowledge that you've gained. And in my experience, that creates a great way to implement the knowledge that you gained and to practice it, because we often take a course and forget most of the stuff in the span of a couple of months. So this helps us actually apply the knowledge that we gained, not forget it. All right, that's it for this RAG-inspired prompt
generation technique, see in the next lecture.
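By the way, the RAG prompt structure from this lecture can be sketched in code. This is a minimal illustrative sketch in Python, not anything official: the helper name, the company, the product fact, and the question are all made up for the example.

```python
def build_rag_prompt(role: str, knowledge: str, query: str) -> str:
    """Assemble a RAG-style prompt: role first, grounding text, then the question."""
    return (
        f"You are {role}. "
        "Use ONLY the provided information below to answer, "
        "and say so if the answer is not in it.\n\n"
        "--- PROVIDED INFORMATION ---\n"
        f"{knowledge}\n\n"
        "--- QUESTION ---\n"
        f"{query}"
    )

# Hypothetical example values, not from the course:
prompt = build_rag_prompt(
    role="a customer support agent for Acme Co",
    knowledge="The Acme X1 router ships with a two-year warranty.",
    query="How long is the warranty on the X1?",
)
print(prompt)
```

You would paste the resulting text straight into ChatGPT; the point is simply that role, grounding data, and query each get a clearly marked place.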
16. Prompt Writing for AI Image Generation: Like it or not, AI image generation has taken its niche. Interestingly, it doesn't really substitute real photography, but it has an interesting function if used correctly. If you have ever tried to generate an AI image, you probably noticed that writing a prompt for image generation can be significantly different from writing a prompt for text generation. That's why this video will cover the specific techniques for writing AI image generation prompts. With slight adjustments, these techniques can be used in practically any AI image generator, such as Midjourney, Adobe Firefly, Stable Diffusion, DALL·E, or whatever other image generator comes out. We won't go into the specifics of each particular tool in this video. Instead, we'll focus on writing those prompts and getting more creative ideas flowing. Okay, let's start. So the main difference between writing prompts for text models and writing prompts for image generation models is the language and the goal. Prompting a text model, you would often guide the model using action verbs like search, describe, analyze, write, rewrite. Prompting an image generation model requires a slightly more descriptive approach, so you would want to stay away from guiding the model with action verbs. Let's take a look at the components of a prompt for image generation. These components are subject, action, environment, composition, style, and visual effects. The main component, of course, is the subject. The rest are optional and can be added for more control over the generation. So if you put in one word, let's say a cat, you'll
already get some result. Let's go through each component one by one. The subject: name and describe your subject with attributes. For example, if your subject is a cat, you can describe it as a fluffy orange cat with green eyes. The more specific you are, the more control you'll have over the generated image. The next component is action. What is your subject doing? What's going on in the frame? What's happening around the subject? For example: a fluffy orange cat with green eyes, lazily stretching in warm sunlight. The next component is environment. Here, we specify where all of this is happening. So that would be: a fluffy orange cat with green eyes, lazily stretching on a cozy plush pillow in front of a roaring fireplace. Well, these are more words, right? It's building up. The next component is composition and angle. This one requires understanding some basics of photography. Is the cat in the center? Is it a close-up shot or a wide-angle shot? Is it shot from a low angle, from below, or from above? Let's add this part: a fluffy orange cat with green eyes, lazily stretching in the warm sunlight on a cozy plush pillow in front of a roaring fireplace, shot from a low angle, with the fireplace softly blurred in the background. Next comes style and visual effects. Describe the visual style or specific effects. Is the image black and white or color? Is there a specific color palette that you're looking for? Is the image supposed to look like a painting, art, or a photograph? Is there a specific lighting effect that you want? For example: harsh daylight, warm sunlight, golden hour, blue hour, counter light. Are the shadows soft or hard? Do shadows create leading lines? Or maybe you want to use
Rembrandt lighting on a portrait? Well, of course, only if you're generating a portrait. You can see that even if you have experience in visual arts, photography, or videography, there's still plenty of room to expand your vision and creativity, and just go and learn. So let's expand our example even further: a fluffy orange cat with green eyes lazily stretching in the warm sunlight on a cozy plush pillow in front of a roaring fireplace, shot from a low angle. Hard shadows create leading lines. Okay, last but not
least is gear. This one is quite optional, but advanced users who know the specifics of gear can gain even more creative control. By naming photo parameters and gear, you can control the focal length, the blur, the bokeh. However, some tools like Adobe Firefly don't allow brand naming, so you need to be a bit careful with this. Sometimes it works, sometimes not really. But let me give you a couple of ideas that work for me. So, I've never actually shot on film, but I like the looks of Portra 400, Kodak Gold 200, and Fuji Astia. And yeah, even if you remove the brand name, it can still pick up the relevant training data. So let's expand our example even further: a fluffy orange cat with green eyes lazily stretching in the warm sunlight on a cozy plush pillow in front of a roaring fireplace, shot from a low angle. Hard shadows create leading lines, shot on 85 mm f/1.4, Portra 400. And let's have a look. Beautiful. Last but not least, don't forget that you can
use images as a reference. In this case, the model takes care of a big part of what we've just discussed, so your prompt can be much shorter. Here are a few more practical ways to improve your image generation. Learn the limitations and current biases. You'll be surprised at how biased image generation models can be, especially when it comes to races, genders, and nationalities. So if you're trying to generate something like this, be really cautious and pay special attention to whether you're actually reinforcing a stereotype. Make sure that you have human quality control and a diversity, equity, and inclusion vision built into your mind, so that you are the best bias filter. The next tip is that, sometimes, less is more. A very simple prompt can really work well. From my experience, long, detailed prompts like the one we created can produce unique and controllable results, but on the other hand, they may result in more unexpected artifacts and glitches in the image. Learn what's adjustable and fixable. Browse community images and pay attention to their prompts. Get inspired, learn photography terms, styles, and gear. Avoid negative prompting in the actual prompt. But if the tool produces ugly artifacts like weird fingers or hands or something else that you don't like, and the tool that you're using allows you to add a separate negative prompt, then use that negative prompt and input everything that you don't want, for example, weird fingers, crossed fingers, and so on. Now, at the end of this lecture, I want to show you the image results from prompts of different complexity, with examples of the techniques that we applied. So yeah, let's turn on some music and watch.
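The component build-up from this lecture (subject, action, environment, composition, style, gear) can be sketched as a tiny helper. This is an illustrative sketch only; the function name and the example values are invented for the demo, and real image generators just take the final string.

```python
def build_image_prompt(subject, action=None, environment=None,
                       composition=None, style=None, gear=None):
    """Join the filled-in components into one comma-separated prompt.
    Only the subject is required; empty components are simply skipped."""
    parts = [subject, action, environment, composition, style, gear]
    return ", ".join(p for p in parts if p)

# One-word prompt vs. the fully built-up version:
short = build_image_prompt("a cat")
full = build_image_prompt(
    subject="a fluffy orange cat with green eyes",
    action="lazily stretching in warm sunlight",
    environment="on a cozy plush pillow in front of a roaring fireplace",
    composition="shot from a low angle, fireplace softly blurred",
    style="hard shadows creating leading lines",
    gear="85mm f/1.4, Portra 400 film look",
)
print(full)
```

Dropping any keyword argument just removes that layer of control, which mirrors how the components were described as optional.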
17. Incorporate your Data Into ChatGPT: An integral part of ChatGPT and large language model prompt engineering involves providing the right context for your prompts. How is this achieved? How do you actually provide the context so that it counts? It's important to note that ChatGPT isn't a Wikipedia or a search engine. It does have that browsing feature, which you can trigger by saying "search for something," but it's not as transparent as we'd love it to be. In this video, we'll explore the options for collecting context more efficiently and how to provide this context to ChatGPT. This video consists of two parts. In the first part, we discuss options for quickly collecting the information needed for the context of a prompt. The second part covers how to import your data into ChatGPT. Let's start with the first part. Option number one. This is the simplest and fastest, though not the most precise, method. In the mid-2024 interface, browsing isn't visible as a feature. However, if your prompt starts with the verb "search," the ChatGPT browsing feature activates automatically. After the Spring 2024 update, this feature has become more usable, which is why it's actually included in this lecture. Option number two: use AI search to gather information and then import it into ChatGPT as a document or as part of the prompt. The three most popular AI search engines currently are Bing AI (also known as Microsoft Copilot), Perplexity AI, and You.com. While Microsoft Copilot is a broader system, the Bing AI search and Microsoft Copilot chatbots have almost identical interfaces and functionality, and definitely the same model and algorithm under the hood. Perplexity and You.com are my favorite search engines. I prefer Perplexity for more precise and focused searches and You.com for broader searches. Option number three. This method is the most time-consuming, but it offers the greatest control over the data you collect. It involves using Google for manual data collection, then structuring the results into a document and feeding it into ChatGPT via a custom GPT or an attached document. Method number four is ideal when most of the information you need is in long blog posts or specific videos like lectures, webinars, or interviews. In this case, I recommend using the Harpa AI Google Chrome extension and the most recent and capable GPT model with the largest context window, also known as token size. You will use all of that to summarize the necessary information into a single document. Once you've collected the information for your context, you'll want to import it into ChatGPT for further processing and prompting. You'll have three no-code methods for this. Yes, we are in the second part of the video already. The first method: if your data is condensed and not extensive, simply paste it at the end of your prompt as context. Second, if you have more data and it's only needed for this particular conversation, create a document in TXT or DOC format, or a spreadsheet in CSV format, then attach it to your prompt if your LLM allows it, as ChatGPT does. And the last method for this video is to create a custom GPT with your data. This option is best when you plan to use this data frequently and have a specific case that you will repeat again and again. In this situation, ensure that the data is verified and structured cleanly. Then create a mini RAG-inspired prompt in your custom GPT. Remember to format your document so that all titles and texts are relevant to your prompts and cases. That wraps up our quick overview of how to collect data for ChatGPT from scratch and how to import it and feed it into ChatGPT, depending on your use case. I hope this format gives you more information in less time and that it's quite practical. If you enjoyed this course, please don't forget to leave a review, in the top right corner or wherever it's placed. It really helps me find more time to record more lectures and updates. And if you have questions about the content or the topic in general, feel free to reach out; I'm available to respond to everyone. Not AI-generated, it's really me. I'll be happy to chat with you and support you. That's it for this video, and see you in a couple of seconds.
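The second import method from this lecture, packaging context as a small CSV spreadsheet you can attach to a prompt, can be sketched like this. A minimal sketch using Python's standard csv module; the file name, columns, and product rows are made-up placeholders, not real data.

```python
import csv

# Hypothetical product data -- swap in your own rows and columns.
rows = [
    {"name": "Acme X1", "price": 99, "warranty_years": 2},
    {"name": "Acme X2", "price": 149, "warranty_years": 3},
]

with open("context.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price", "warranty_years"])
    writer.writeheader()   # the header row becomes the column labels ChatGPT sees
    writer.writerows(rows)
```

A clean header row and one record per line is exactly the kind of simple, consistent structure the lecture recommends for attached data.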
18. Refining ChatGPT Responses: Let's talk about how we can significantly improve the results using prompt refinements. Tip number one is to simply regenerate the response. You'll be surprised at how different the responses can get from attempt to attempt. Tip number two is to rate the response. When you rate a response in ChatGPT, it asks you exactly what you didn't like. After your feedback, it suggests an alternative response, and for me, that works quite well. Tip number three is to just ask for what you need. Refinements can be things like "make it concise," or "write in simple English," or "sort this alphabetically or by value," or whatever criteria you may find effective. You can also ask it to summarize this into a particular amount of text. I'll provide more of these templates in the resources of this lecture, so be sure to check them out. And by the way, they're also in the prompt library. All right, let's move on. Number four is my favorite, and it's about the output format. You can either put it in your prompt right away or ask for a rewrite in the refinement. I like to ask for a table format, because for me, it's a comfortable way to quickly digest information. Also, I notice that it improves precision in the output. By the way, if you need to paste your table into ChatGPT, just type "put this data into a table," and then paste your table. It will look messy in your prompt, but most of the time, it figures it out correctly. Alternatively, you might want to ask for the output in Markdown or HTML format. All right, let's summarize. Use refinements to significantly improve the results that you get from ChatGPT: regenerate the responses, ask for the refinements that you need, and customize the formats. I hope that these four simple tips will help you improve the results that you're getting with ChatGPT, and I hope that it's going to be fast and effortless. See you in a few seconds in the next lecture.
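The table output format recommended in this lecture is easy to picture: a Markdown table is just plain text with pipes. Here's a small illustrative sketch of that format built by hand; the function name and sample rows are invented for the demo, and in practice you'd simply ask ChatGPT to output Markdown.

```python
def to_markdown_table(headers, rows):
    """Render rows of cells as a pipe-delimited Markdown table."""
    lines = ["| " + " | ".join(headers) + " |",
             "| " + " | ".join("---" for _ in headers) + " |"]  # separator row
    for row in rows:
        lines.append("| " + " | ".join(str(c) for c in row) + " |")
    return "\n".join(lines)

table = to_markdown_table(
    ["Tip", "Effect"],
    [["Regenerate", "varied responses"],
     ["Table format", "easier to digest"]],
)
print(table)
```

Because the format is plain text, it pastes cleanly back into prompts, which is part of why asking for Markdown output is so convenient.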
19. Prompting Practice Activity: We've learned how to use the most important prompt engineering approaches for marketers using ChatGPT. But passive learning without practice and feedback isn't as effective. I want you to make the most of the time you spend with the course, so here's an activity for us. Define the goal of your prompt. Then use the prompt engineering techniques that we've discussed in this section. Create a prompt for your typical task. Try to combine different prompt engineering techniques if needed. Then submit it in the Q&A section, or just send me a message if you want to be more private. I will provide you with my detailed feedback on your prompt. Sure, this will take a bit of your time, but it will definitely improve the way you use these prompt engineering techniques and how flexibly you can think about them. That's it. See you there.
21. Prompt Engineering Summary: Congrats on completing this training on prompt engineering. You've learned the foundations of writing effective prompts for large language models. Throughout these lessons, we covered prompt design principles and advanced techniques like few-shot and chain-of-thought prompting, and we learned how to apply these principles across multiple practical tasks. So stay curious, keep experimenting, and find new ways to save your time with prompting. But also recognize the current limitations and risks. These models require your guidance, critical thinking, and, most importantly, real-life experience. After all, you'll be the one responsible for real-world decision-making and its consequences. You now have a powerful tool to augment your intelligence and experience. Use it responsibly. The future of prompt engineering is bright, and I'm excited to see what you'll build. So share it with me if you want to. Thank you for joining this journey. Happy prompting.
22. Customize ChatGPT: Build Your Own Custom GPT: Hi everyone. In this tutorial, we're talking about an update that, in my opinion, is a significant breakthrough in how we customize and collaborate in ChatGPT. I use it all the time, and I can't imagine going back to ChatGPT versions without it. So we'll talk about custom GPTs: what custom GPTs are, who can use them, what OpenAI's plans and vision for this feature are, how to access, use, and create your own custom GPTs, and how to make your custom GPTs more effective and accurate over time. You will learn how to use custom GPTs, how to find a good use case, and how to create your very own custom GPT. You'll also come across some of the most useful custom GPTs that I've tried so far. So I hope you are as excited as I am. Let's go.
23. What are Custom GPTs: Features and Interface Overview: All right, let's talk about custom GPTs. OpenAI introduced GPTs, a new form of ChatGPT that users can customize for specific tasks, both professional and personal. We, of course, are going to use GPTs for professional reasons. At the moment, other LLMs like Google Bard, now Gemini, and Anthropic's Claude don't offer this feature, which makes it a significant competitive advantage of ChatGPT. GPTs are created by providing instructions and extra knowledge, and by choosing capabilities like web search, image creation, data analysis, and API integration. Custom instructions launched earlier and laid the groundwork for GPTs by allowing basic preference settings. Custom instructions are more limited, but they are available in the free version of ChatGPT. ChatGPT plugins are now gone and have become GPTs. OpenAI claims that GPTs will evolve and become more sophisticated, potentially acting as real-world agents. So let's be ready for those updates. It's really easy to create a custom GPT, as well as to access the community-built GPTs, which are a huge resource. So why would you, first of all, want
to use a custom GPT? Well, many creators expected that GPTs would monetize per usage, but that's not the case yet. Maybe by the time you're watching this, that has changed, and I've become Scrooge McDuck rich building these GPTs. But the good news is that building a custom GPT can benefit you in several ways. First, you can create a virtual assistant personalized for you. You can automate repetitive tasks like proofreading copy. You can personalize answers and suggestions based on your data. You can go through repetitive workflows. Or, my favorite one, you can create a personal mentor with relevant knowledge. In this tutorial, we'll learn how to create custom GPTs with zero coding skills. It's quick, easy, and accessible for everyone with a ChatGPT Plus account. So let's switch to my screen and navigate. Right now, you see the side
panel of the ChatGPT interface; let's click on Explore GPTs. You're going to see this search page with the categories of GPTs, the search bar, and the featured and trending GPTs in different categories. Don't forget to check out what's out there from time to time. I actually like testing them. Maybe it's just me, I don't know. In the top right corner, you're going to see "My GPTs," "Create," and your account logo. Well, this is such a quickly developing product that so many things are changing very fast. So don't be surprised if something in your interface is different when you're watching this. The idea should stay more or less the same; I don't think they're going to make super critical redesigns. But if some icons are located in different spots or have different colors, just be ready for it. When we click My GPTs, we're going to see this: a list of your GPTs that you can access and edit. You can also see which GPTs you use most frequently. If some GPTs are the most frequently used ones, that means you or your team members (if you shared the GPT with them, of course) use certain GPTs more often, and that could lead you to a decision to pay more attention to improving them, or to collect more feedback if that's applicable in your case. For now, let's just create a GPT. Ditch the Create mode, we don't need it for now. In the Create mode, you would just have a dialogue with the GPT, and then it would create the instructions. But I suggest getting more control over what you're going to get and going straight to the Configure mode. By the way, you're going to see a place to upload your image, which is the circle with the plus. You can use DALL·E to generate that image, but for my GPTs, I often use different photos or icons that I can recognize in the crowd. Then name and description. These fields are mandatory, but what you put there is up to you. I have a system of naming conventions, just in case I create so many GPTs that I can't navigate them. This helps me sort them out. But these fields don't really affect the way your custom GPT behaves. So let's move forward. Now, these next fields are some
of the most important ones. So, Instructions. This is your prompt. And unlike custom instructions, this fits a huge volume of text; I wasn't actually able to write a prompt so long that it didn't fit within the limits. Then there's the conversation starters field. I typically use just "Start," and then in the instructions, I type "When the user presses Start, do this and that." That is my typical scenario. But think through your scenarios and let them guide your conversation starters. In the knowledge part, you can upload different files and then enable the code interpreter to work with them and process them. The knowledge base is actually the most important thing for me in custom GPTs, because I can upload files with my brand tone of voice, with information about the products, the company, the department, right? So this is actually what helps you customize your GPT the most. Then capabilities. I typically turn on all of them; especially if you have a knowledge base, do enable the code interpreter and data analysis, otherwise it wouldn't work. Now, let's talk
about the actions. Actions allow the GPT to retrieve information and take actions outside of ChatGPT. Basically, this is an API integration, and if you are confident and technical, you can set that up here. There are a couple of workarounds for non-technical people to generate those API calls, but I haven't found one that works reliably at scale with different tools. These solutions are just not that stable. What you need to know is that you can import the schema from a URL. There is actually an Actions GPT that can help you write this OpenAPI schema. By the way, don't confuse OpenAPI with OpenAI; these are two completely different and unrelated things. But this Actions GPT is a custom GPT that OpenAI created to help create these API schemas. If you enter the schema, you'll be able to test how the API calls are retrieved, and for that, you'll see this screen on the right. It's almost exactly the same as when you just write instructions: there, you can test your custom GPTs, debug them, see whether your instructions actually work, and iterate. This is really convenient. However, what I recommend is to save your custom GPT after you make any change, because I made a couple of changes and then debugged them this way, and there were cases when I just lost them. So from now on, as soon as I make a change that I want to debug, I save it first. For saving, you're going to see this Update button; it's going to be green. When you press Create, you're going to see this "Share GPT" dialog. There are three options for sharing it. You can keep it only for yourself, especially if there's some data that you don't want to share. You can share it via link, which is great when you work in a team and use the same GPT together; for example, you have your tone of voice, or company descriptions, or product descriptions. For teamwork, this one is great. And if you're open to sharing your GPT with the public, you can publish it in the GPT store. By the way, it's a cool place to get a backlink from, because in your settings, when you enter your profile details, you can enter your site and get a backlink. I'm pretty sure it's a nofollow backlink, but still, it's a backlink from OpenAI. All right, that's pretty much it for the walkthrough. In the next video, we're going to look at tips on how to prompt a custom GPT. See you in a few seconds.
24. Custom GPTs: Crafting Effective Instructions: In this video, we talk about custom GPT instructions. As you begin working with custom GPTs, this is the most important part of building your own GPT. This video contains some of the official recommendations by OpenAI, and also some of the knowledge that I gained from building my own custom GPTs and using community-built custom GPTs. By the end of this video, you will be able to write instructions that make your custom GPTs perform predictably, reliably, and accurately. Without further ado, let's start enhancing your instructions for custom GPTs. Start off your custom GPT instructions by assigning a role and a goal for the chatbot, as well as some of the basic context, but don't overload it too much for now. Next, divide multi-step instructions into simpler, manageable steps. This helps the model follow the steps more accurately. However, in GPT instructions, you'll need to break down the steps a bit differently than you would in a normal prompt. Normally, you'd say something like "when the user enters this, first do this, then that," but this can become confusing and messy for the algorithm within the custom GPT context. So instead, use trigger
and instruction pairs. Let's break down a quick example. Type the trigger, "user enters Start," then the instruction, "ask the following questions." Then the trigger, "user provides answers to the questions," and the instruction would be "analyze the context and search the knowledge base for the most appropriate framework for the user's situation." Next up, use few-shot prompting, with clearly marked instruction sets and callouts for the few-shot examples, just to make sure nothing gets confused. By providing examples, you can get even more consistent responses, if that's what your use case demands. Now, if you're using knowledge files, which I encourage you to do, provide explicit instructions on using them, including specifying the file names, and at which stage and in which case they should be searched by the GPT. The same thing applies to browsing: if browsing is needed, mention at which stage with a phrase like "use browsing to do something." By following these guidelines, you can optimize the performance of your custom GPTs and ensure they produce reliable and accurate results. Now, I'll turn on some music, and let's look at an example of my copy editor GPT instructions. I use it quite often in real life, and you can as well. I'll share the link in the resources, or you can just type the name that you see on the screen in the GPT store. All right, that's it for now. See you in the next video.
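The trigger-and-instruction pairs from this lecture can be written out as data to see the pattern clearly. This is purely an illustrative sketch; custom GPT instructions are just plain text, and the pair wording below is adapted from the example in the video.

```python
# Illustrative only: trigger -> instruction pairs written as data,
# mirroring how you'd phrase them in a custom GPT's Instructions field.
TRIGGERS = {
    "the user enters 'start'":
        "Ask the onboarding questions one by one.",
    "the user has answered the questions":
        "Analyze the answers and search the knowledge base for "
        "the most appropriate framework for the user's situation.",
}

def instructions_text(pairs):
    """Format the pairs the way they'd appear in the Instructions field."""
    return "\n".join(f"TRIGGER: {t}\nINSTRUCTION: {i}\n"
                     for t, i in pairs.items())

text = instructions_text(TRIGGERS)
print(text)
```

Keeping each trigger paired with exactly one instruction is what makes the steps unambiguous for the model, compared with one long "first do this, then that" sentence.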
25. Custom GPTs: Optimizing Your Knowledge Base: Hey, welcome. Today, we will explore the knowledge feature in GPTs. The feature is designed to enhance the performance of these language models and make them so much more personalized. It allows us to upload files containing additional context, which GPTs can then access when responding to our queries. So how does GPT knowledge work? Let's break down how this feature operates. You can use the GPT editor to attach up to 20 files. Each file can be up to 512 megabytes and contain about 2 million tokens. While you can upload files with images, only text is processed, so images are ignored. Don't bother; it's just a waste of tokens. Actually, from my experience, I would recommend uploading the smallest number of files possible to avoid any confusion. If you absolutely need multiple files, you'll need to provide instructions on when to parse which file. Let's now talk about how custom GPTs process knowledge files. The GPT takes the text and divides it into smaller parts. Then it converts these parts into a form that the GPT algorithm can understand; these are called chunks. These parts of text are stored for later reference so that the GPT can access them more consistently. Then, when a user interacts with your GPT, it can access these stored files to provide context for the user's query. However, users cannot download the files; they can only get and request information from within those files. So please consider this a privacy feature. Now, while processing your documents, GPT selects either semantic search or document review, depending on the situation and the prompt. I like semantic search more, because I believe that
it works a bit more accurately in terms
of retrieving the accurate information and reducing the amount
of hallucinations. But let's talk about both. So Semantic search retrieves
relevant text parts. It's ideal for Q&A style prompts where you ask for specific
information from the document. It's good for product information reference, like features, pricing, and so on; corporate training data; or just your courses or notes that you upload as guides for your future reference. Now, let's speak a bit about the document review
way of retrieving data, because I think that in future iterations of ChatGPT it's going to be massively improved. But that's just my bet. Anyway, document review reviews the complete document. It's best for summarization, translation prompts, or anything requiring the entire context of the document. In this case, for now, it's best to keep a relatively short single TXT or DOCX document that's neatly organized, with special attention to keywords and headlines, and no fancy formatting or anything like that. Now, for a minute, let's talk about when to use the custom GPT knowledge base. Well, it's best to use
the knowledge base for context that doesn't change all that frequently; otherwise, you'll spend a lot of time managing it. Think in these directions: employee handbooks, policy documents, marketing strategies, product information, tone of voice guidelines, or any guideline or documentation that has to be accessed and referenced frequently, especially by multiple team members. For example, it's cool to have this tone of voice document accessed by four of your team members who work on the same channels and style of copy. Great. Now, let's
proceed with tips for maximizing GPT knowledge base accuracy. Step number one is formatting files. The parser works best with simple, single-context formats, so avoid any overloaded docs and PDFs. By the way, PDFs: yes, they can be uploaded and are compatible, but avoid multi-column PDFs and complex layouts. The same is true of PowerPoint slides, basically. Principle number two would be to guide the GPT's behavior in the instructions. You'll want to use the instructions section in the GPT editor to trigger a search through the uploaded files before the GPT generates from the base model or searches the Internet. All right. That's
pretty much it. Thank you so much for joining this session on GPT knowledge files. By understanding these principles of building custom GPTs, you can effectively use this feature to enhance the capabilities of your GPTs. So don't worry if it sounds like a bit too much at once. We'll look into examples, and once you see it in practice, it's going to be much easier. And then once you create a GPT or two, you will know exactly what you have to write and do to create a new GPT in the future. Thank you so much for watching till the end, and see you in the next video.
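The chunk, encode, store, and retrieve flow described in this lecture is the standard retrieval pattern behind knowledge files. Here is a minimal conceptual sketch in Python, using toy bag-of-words vectors in place of real embeddings; the document and query are made up, and this illustrates the idea rather than OpenAI's actual implementation.

```python
from collections import Counter
from math import sqrt

def chunk(text, size=8):
    """Step 1: split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Step 2 (toy version): a bag-of-words count vector.
    Real systems use learned dense embedding vectors instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Similarity between two count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

doc = ("Our product costs 49 dollars per month . "
       "Support is available by email around the clock . "
       "Refunds are processed within 14 days of purchase .")

# Steps 3-4: store chunk vectors, then retrieve the best match for a query.
chunks = chunk(doc)
index = [(c, embed(c)) for c in chunks]

query = "how much does it cost per month"
best = max(index, key=lambda item: cosine(embed(query), item[1]))[0]
print(best)
```

Real systems swap the Counter "embedding" for a learned dense vector model, which is what makes the search semantic rather than purely keyword-based.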
26. Custom GPTs: Best Practices and Pitfalls: So, guys, in this video, we are going to discuss a couple of best practices that didn't fit the previous lectures, but I still think they're really critical to creating great custom GPTs, and they make a lot of difference. Without further ado, let's start. Tip number one would be to avoid over-reliance on the GPT Builder. The GPT Builder is probably useful to someone who has never used custom GPTs, but it has a lot of limitations. It tends to overwrite and adapt your instructions, making it unreliable for defining exact behavior. So instead, use the Configure tab to craft precise instructions, especially if you already know something about prompt engineering, and if you watched the previous videos as well. The next tip is more of a
mental slash emotional thing, and I see a lot of people struggling with it. Creating a functional custom GPT involves numerous iterations, so be prepared for cycles of testing, analyzing the results, and refining the custom instructions and knowledge base. Start off with a simple use case and then gradually build up the complexity. Don't try to make something genius right away. This iterative process ensures that your custom GPT provides huge value consistently over time. Okay, and if you actually ignore this step, I have the next tip for you, and that is: do not overdo it. So avoid trying to make one
GPT handle too many tasks. Focus on one task and define clear acceptance criteria. If you need multiple use cases, consider creating a pipeline of GPTs. This is possible by using multiple GPTs within one conversation. You just add a custom GPT to your side panel and tag it just as you would tag someone on social media. Let's have a look at an example. Say you want to use a custom copywriting GPT to create social media copy in your tone of voice on your topic. And then you might want to create a prompt for an image generator, because DALL·E is not a great image generator. So once you have this copy, what you want to do is just tag the Midjourney prompt writer and ask it to create a prompt for your post. It will read the context that you have right now, and it's done. Then you can go to your image generator and use this prompt to generate an image. One more example would
come from user research. You can use a user research GPT to generate interview questions, then summarize the responses once you have them, and then analyze the responses using a different GPT to visualize them and get better analysis, for example, using the Wolfram Alpha GPT. What I like about this approach is that it requires planning your process and identifying how multiple GPTs can stack together to solve your repetitive tasks and workflows, therefore increasing your productivity. Well, you see, productivity is producing more volume in less time and with less effort. And maybe with a bit more fun. If your use case is common, search for an existing
community GPT. There are great
GPTs built both by individuals and by companies. But of course, for specific needs, where specific data, guidelines, or a tone of voice is needed, creating your own GPT makes a lot of sense. For general tasks, though, using existing solutions can save a lot of time and effort at the sacrifice of some personalization and individuality. So the next one is also
a bit mindset related. You need to somehow decide whether your custom GPT is good or not. To evaluate your custom GPT's effectiveness, if you work with a team, perfect: ask your team to share the conversations they had and provide feedback. That way, on the one hand, you will see what kinds of steps they were trying to make and maybe improve your instructions accordingly, or just use their feedback to improve it further. But if you don't have a team, there are also a couple of metrics that you can use as guidelines for improving your custom GPTs. In my opinion, these are the following. One, your prompts become shorter. Two, you have to use fewer prompts in general. And three, your answers become more detailed, accurate, and relevant than in your conversations with the base model. That's pretty much it for now. Let's quickly summarize. Avoid relying on the GPT Builder; use high-quality, well-formatted knowledge files; test and iterate continuously. Define clear acceptance criteria and a goal for your custom GPT. For general needs, use existing solutions, or at
least try looking for them. And of course, combine different prompt
engineering techniques. For example, you can combine role playing, chain of thought, and few-shot prompting; all of these work perfectly together. And speaking of prompts, think of this: when creating instructions for a custom GPT, your prompts will have to match and complement the instructions of your custom GPT. So you may want to think through this logic so it happens naturally. Right. That's pretty much
it for this lecture. Thank you so much for staying till the end. I look forward to seeing you innovate with your GPTs and come up with creative use cases. If you want to share your custom GPT, please send a link to it and paste your instructions, and I will be glad to have a look. Also, if you have a conversation with your custom GPT and you want someone to provide feedback, I gladly will. Anyway, thanks again, and see you in the next lecture.
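The "pipeline of GPTs" idea from this lecture, where each GPT handles one task and you tag the next one to continue from the conversation's context, is essentially function composition. A minimal Python sketch with each stage stubbed out as a plain function; the stub outputs are made up, and in practice each stage would be a tagged custom GPT in the chat.

```python
def copywriting_gpt(topic):
    # Stage 1: draft social media copy in your tone of voice (stubbed).
    return f"Big news about {topic}! Here's why it matters to you."

def image_prompt_gpt(copy):
    # Stage 2: turn the finished copy into an image-generator prompt (stubbed).
    return f"Vibrant illustration matching the post: '{copy}'"

def pipeline(start, stages):
    # Feed each stage's output into the next, like tagging GPTs in one chat.
    result = start
    for stage in stages:
        result = stage(result)
    return result

prompt = pipeline("our spring launch", [copywriting_gpt, image_prompt_gpt])
print(prompt)
```

The payoff is the same as in the chat version: each stage stays simple and testable on its own, and the workflow is just the ordered list of stages.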
27. Beyond ChatGPT: Powerful Generative AI Tools in Each Modality: If you still only associate artificial intelligence with ChatGPT, it's time to explore the world beyond that. With thousands of tools at your disposal, it would be a sin to use just one. So how do you figure out and find the tool that will best handle your task? This video is full of spoilers, but it also gives an actionable overview of the most important tools out there. Let's start with the basics. The artificial intelligence we mostly talk about recently is generative AI. It generates new information based on the data used to train the models and the information entered by the user. This information is also called a prompt. Generative AI has certain modalities, for example: text, images, audio, video, and code. In this tutorial, we'll discuss the specifics of these modalities and consider the best tools and their applications.
28. Text Modality: Exploring Text-Based Generative AI: Let's start with the most basic and most common one, which is text generation. Text AI models, such as ChatGPT, are trained on huge amounts of data to understand and generate content. Their value lies in their ability to understand the provided content, draw conclusions, and create clear, human-like text. Naturally, text is also one of the simplest communication media. It's a great starting point for prompts to create content in other modalities like audio, image, and video. Therefore, you get tools in the following categories: text-to-speech, text-to-image, text-to-audio, and text-to-video models, and even text-to-code. However, these models have their limitations. They can hallucinate, which means they can present incorrect or false information, and do so very convincingly. Therefore, they need to be controlled and
properly conditioned. At the moment of recording, according to user benchmarks, the best text model is Claude 3 Opus from Anthropic, and it's consistently competing for first place with GPT-4. It's a paid model, but in my opinion, even the free version of Claude is powerful enough and sometimes definitely not inferior in quality to the paid version of ChatGPT. Compared to the free ChatGPT, Anthropic's Claude definitely has the advantage in terms of argumentation and naturalness of language. Its text just sounds simpler and more natural, at least to me. Additionally, it has good optical character recognition, OCR, meaning it can understand handwritten text, PDFs, and images: logos in photos and details in photos. So you can upload an image and ask it to generate captions about something, and it will be super relevant. By the way, if you work in Notion and use its AI text generation, you're already using Claude. The second most
popular and one of the best text models is the paid version of ChatGPT, which is GPT-4. Its huge advantage is the ability to personalize through custom GPTs that will generate content according to your unique requirements. Alternatively, you can use ready-made GPTs from the GPT store. I'll share the GPTs I built myself, and also some of my favorite GPTs from the GPT store, in this tutorial. On top of that, the paid version can be enhanced through third-party services and automation, such as Zapier, Make.com, or ElevenLabs. Another advantage of GPT-4 is the code interpreter. It's a powerful application for working with code and data analysis. For example, if you have a survey of 1,000 respondents, you can upload the file, and GPT will group the results and analyze them, and then you can even prompt it to make data-driven assumptions. Compared to the alternatives, the free version of GPT, which is GPT-3.5, is inferior, as this version often ignores instructions, has limited functionality, and sometimes generates very superficial answers. One more tool to
consider in text generative AI is Google Gemini. Its advantage is the integration into Google services. With its help, you can easily search for information and documents, navigate email, analyze text, video, and more. Moreover, it has a great feature: it checks for hallucinations by verifying the results using Google Search, and it also provides several response options for your comparison. But talking about OCR, remember what that is? Yeah, optical character recognition. Gemini does not work with photos of people, unlike Claude. The paid version of Google Gemini is also tempting, as it provides a smarter model, a huge token size, also known as the context window, and an additional 5 terabytes of cloud storage as part of the Google One subscription. I need a lot of storage, so I'm considering this one on a permanent basis. And all of that comes at around $30, depending on your country. You can also go to this
website over here and see the current leaderboard among text large language models. Here, you can also test the models side by side and choose the one you like more. Please do not treat this resource as your free ChatGPT, don't tell everyone about it, and don't abuse it, because it gives our community a chance to choose the right model without paying for all of them. Running all of these models implies costs that may get quite significant, so let's be respectful of what this organization does.
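As a concrete illustration of the survey example from this lecture: the grouping-and-summarizing code that Code Interpreter writes for you behind the scenes looks roughly like this. A stdlib-only Python sketch; the answer labels and the shortened response list are made up for illustration.

```python
from collections import Counter

# Hypothetical answers from a 1,000-person survey (shortened to 9 here).
responses = ["satisfied", "neutral", "satisfied", "unsatisfied",
             "satisfied", "neutral", "satisfied", "unsatisfied", "satisfied"]

counts = Counter(responses)                                      # group the results
total = len(responses)
shares = {k: round(v / total * 100) for k, v in counts.items()}  # percentages

# Print a simple breakdown, most common answer first.
for answer, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{answer}: {pct}%")
```

In ChatGPT you never write this yourself; you upload the file and prompt in plain language, and the model generates and runs code of this kind for you.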
29. Image Modality: Generative AI for Image Creation: Let's move on and talk about image processing and generation. Currently, we're talking about generating images from text, images from images, and images from sketches, which is actually part of images from images. These tools can be divided into inpainting and outpainting functions. Inpainting functionality allows you to change a certain area of an image using text commands or selection tools. Outpainting generates content around the image. For example, if you need to turn a vertical poster into a horizontal one, or vice versa, AI will add the appropriate background for the larger format. Another popular use case is background replacement. It's available in Photoshop now with Adobe Firefly 3, but there's also a tool called PhotoRoom that allows you to turn one product photo into a product photo set. Very handy for e-commerce and retail. Additionally, you can transfer structure or style from a reference image to the target image. Unfortunately, all existing models currently struggle with text. It's actually one of the reasons why my team and I in R&D built an enterprise tool that aligns with the brand book and target audience and generates the image and text separately, and then puts them together seamlessly so that the subject of the image and the text don't overlap. So what are the tools
for image generation? Most of the tools have similar functionality but differ in result quality and licensing. One of the most popular models for generating images is DALL·E from OpenAI. But honestly, there are many newer and more advanced systems, which I will talk about later. DALL·E, in my opinion, has relatively lower quality at the moment, and it often generates less realistic images with a very recognizable cartoon-like style. But it's easy to access and use: it's available in ChatGPT, Bing AI, and Microsoft Copilot. Currently, the leader among AI tools in this modality is Midjourney, at least from the point of view of image quality. It operates on a Discord server, which is not very convenient from a user perspective. The company is already testing a convenient web interface and the ability to customize various parameters of generated images, and it's also accessible via API. Thanks to its powerful algorithms, Midjourney provides high-quality results, although it is quite expensive compared to some of the alternative tools. And let me tell you a bit
more about my favorite tool, which is Adobe Firefly. It's currently available for free as part of the Creative Cloud license. It's also nicely integrated into Adobe products and has an intuitive interface with a wide range of tools for editing images and generating them. It includes effects, composition, focal length, lighting, styles, and reference images. It's just perfect; I love how Adobe does it. Thanks to such an interface, in my opinion, it's the best way to learn to generate images. On top of that, their models are trained on Adobe Stock data, so you can generate images for commercial use. Once again, it's getting integrated across all Adobe products like Photoshop, Express, and Premiere. If you're into all of this, you'll benefit from Adobe Firefly the most. The next one is Stable Diffusion
from Stability AI. Its interface is not as convenient as Firefly's, but any company or developer can download the source code for free and adapt it for their needs. Also, there's an interface called DreamStudio, and it's available in Leonardo AI. The fact that it's open source opens up wide opportunities for integrating image generation technology into various applications and services. And thanks to an active community, Stable Diffusion regularly receives new features, optimizations, and bug fixes. By the way, there is a custom GPT that integrates this image generation model, and you can integrate it yourself if you want to as well.
30. Audio Modality: AI for Music and Speech Generation: Okay. The next modality is audio. The audio modality has two directions: speech generation and music generation. There is speech-to-text, which means transcribing speech into text, and then speech-to-speech, converting speech into another language or another voice. The Descript app that I mentioned before, in addition to transcribing, also allows cloning your voice and generating a voiceover through text-to-speech technology. Additionally, I use a tool called ElevenLabs, which allows you to work nicely with your own voice or with a proposed voice from the community. You can type the text or read it out loud and generate audio with another voice, choosing the tone, accent, mood, and more. This is very convenient when there is no opportunity to record a voiceover, there is just no good microphone around, or you just need to improve the sound that you have. For music generation,
you can use Suno, which creates songs with vocals based on text descriptions and instructions. But in practice, I use Stable Audio 2.0 at the moment; that's the latest version. I use it for generating instrumental music like the one you can hear right now. So you have text-to-audio and audio-to-audio in Stable Audio 2.0. It means that you can write a prompt, or hum or sing a melody or a beat, to get music from it. It allows you to create a track up to 3 minutes long, choosing the actual precise length, style, instruments, and mood you need, or edit a track that you upload. It's ideal for background music for videos. It's especially convenient that you can manually determine the length of the track. It means that the audio track for your short video, up to 3 minutes long, will have the right dynamics and will not stop in the middle. By the way, Stable Audio also has a separate page with a prompting guide. It's quite simple, even if you're not an audiophile. But of course, if you are, you'll benefit from it.
31. Video Modality: AI-Powered Video Creation: Let's move on with video and animation. The main way to work with video remains text-to-video, but there's also video generation from video, or video from pictures. Runway and Pika Labs are currently the two leading solutions on the market specializing in video generation using artificial intelligence. They have similar basic functionality, although Runway stands out for its greater flexibility and capabilities. In Runway, the user can not only enter text prompts for video generation but also directly draw or upload images in the interface, defining what exactly should move in the video and how. This allows for more precise control of the movement of objects, characters, cameras, et cetera. Pika Labs, on the other hand, is more focused on ease of use and automation of the video generation process based on text instructions, without additional settings. I assume that's going to change quite soon. Both tools also support
existing video content. The user uploads
the original video and AI aplos visual effects, objects, or animations, according to the text
description in the prompt. I'd also keep an
eye on Firefly from Adobe doesn't yet have a wide
range of video features, but it's promising to become capable of removing some
objects from videos, add defects, create storyboards based on script and lots
of other cool stuff. The development of AI tools is a priority for
Adobe right now. I'm confident that
their functionality will expand and explode, especially based on their
collaboration with open AI, pickle ups, and Runway. But, you know, video is a
unique modality in terms of content generation as
it contains music text, voice, images, subtitles,
and descriptions. So lots, lots of stuff
from other modalities. Therefore, many tools integrate video processing
technologies instead of full fledged video content
generation from text. In this perspective, one of my favorite tools is descript, which uses speech recognition, which is like speech to text. Model. To transcribe audio from video like this
one into text format. The transcript synchronizes
with video track, allowing for easy video editing. Right in the transcript, you can cut out
unnecessary pauses, repeats, or failed takes. Actually, just run one click. All edits are
automatically reflected in the video without the need of tedious manual cutting
in hours of boring work. Similar functionality is
also available in premiere, but descript is much simpler for those who are not
professional video editors.
32. HuggingFace: In this video, I want to show you a cool alternative to the paid versions of ChatGPT, Google Gemini, or Anthropic's Claude. It's called Hugging Face Chat. Hugging Face, if you don't know, is a huge community dedicated to AI, particularly to open source. They have this Hugging Face Chat that uses open-source models. To use it, you'll need to create a Hugging Face account and go to huggingface.co/chat. Once there, you'll see a familiar interface with features similar to ChatGPT's. Let's look at a couple of the components. When you start a new chat, you can choose a model. Let's check out the settings and select a model. Because this is free, sometimes the biggest and best models might be busy. If that happens, feel free to switch to a different model based on the current leaderboard. Meta's Llama is one of the coolest open-source models. You can also create your own assistant or use the ones created by the community. Once you select a model, you'll see a button called Tools. Among the tools, there's
image generation, an image editor, a URL fetch feature that you can use to parse URLs, document parsing, a calculator, and web search. Keep in mind that these are different tools from different sources. Most of them are open source, but they are of really great quality these days, especially considering that you can use them for free. I'm not sure how long it's going to stay free, because it obviously costs them millions or even billions to run, at least in terms of instances and computation. Let's paste our chain-of-thought prompt as an example to compare. Now let's take a moment and compare how GPT-4 responded to exactly the same prompt. So please pause the course if you need to, just to read and have a look at the comparison. In my opinion, there isn't much difference in the response quality. It's not that drastic and can be a matter of taste. I think they both followed my prompt pretty accurately. You can look at the
assistants based on certain models or create your own. When creating your assistant, you can set up features like name, description, and start messages, like conversation starters. The functionality is pretty similar to ChatGPT's. However, you can choose the model and add settings like temperature. By the way, the temperature setting is very important in large language models. It defines how predictable or creative your response should be, with a maximum value of one, which is going to be very creative, and zero, which is going to be very predictable. So depending on the task that you're trying to accomplish, you might want to adjust this setting. By the way, really quick update. Hugging Face has
just added a ton of useful tools to custom assistants. They added a tool search because there are so many of them now. You can just start typing the tool that you want to see, for example, fetch URL, which would allow you to use any link as context. It's really worth checking out, even if you're using a paid version of ChatGPT. Now, let's go back to reality and talk about one of the downsides. There aren't many privacy settings, obviously, and you can't make assistants private, which means everyone can use them once you create one. If that's not critical to you, I'd say this will be a decent alternative to paying for ChatGPT for many, many users. And if that's you, well, we just saved $20 a month, which is roughly $240 a year. Pretty cool, right? Hope this was helpful for you, and see you in a couple of seconds in the next video.
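Under the hood, the temperature setting described in this lecture works by scaling the model's raw scores (logits) before they are turned into probabilities: a low temperature sharpens the distribution toward the most likely token, a high temperature flattens it. A small illustrative Python sketch; the logits are made up, and exact zero is avoided since it would mean dividing by zero.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide the raw scores by the temperature, then apply softmax.
    scaled = [x / temperature for x in logits]
    exps = [math.exp(x) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.2)  # near-deterministic, "predictable"
hot = softmax_with_temperature(logits, 1.0)   # more spread out, "creative"

print([round(p, 3) for p in cold])
print([round(p, 3) for p in hot])
```

At 0.2 almost all probability lands on the top token; at 1.0 the other candidates get a real chance of being sampled, which is exactly the predictable-versus-creative trade-off the slider exposes.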
33. Integrate ChatGPT Into Google Sheets or Excel: Welcome back, everyone. By the end of this video, you will learn how to add ChatGPT to your spreadsheets. Adding ChatGPT to your spreadsheets can help you augment the already vast capabilities of Sheets or Excel. There are multiple ways and apps to do this, but I will share the most reliable one that I've tested. First, let's talk about why you might want to integrate GPT into your spreadsheets. Well, first, it's extremely powerful at working with text. You can do translating, extracting entities, tagging, categorizing, correcting grammar and spelling, and cleaning up data and formatting. You can create taglines, headlines, ad copy, product descriptions, subject lines for emails, and drafts for blog posts. Using the GPT integration for Sheets increases your productivity by allowing you to get many variations quickly, save answers in Sheets for easier retrieval (avoiding the constant back and forth of copying and pasting in ChatGPT), and benefit from all the Google Sheets features, including real-time collaboration. Basically, this is where you get rid of repetitive, monotonous tasks. If you have the same prompt but multiple variables, that would be your perfect use case for using ChatGPT in spreadsheets. Let's discuss some
of the use cases. Say you have a prompt for writing SEO titles and meta descriptions. You can paste your prompt, put variables in certain cells of your spreadsheet, and scale it as much as you want down the column. Okay, don't worry if that sounds a bit unclear right now; we'll walk through it step by step in this video. Let's discuss a few more use cases, because it's really important for learning. So, categorization. If you have social media comments parsed into a spreadsheet, you can use GPT to categorize them into, say, three categories: positive, negative, and neutral. Or if you have a restaurant and collect reviews, you can categorize those reviews into service,
need to translate certain pieces of text
into multiple languages, you can automate this process, even up to 80
languages if needed. Data formatting and cleanup. Let's say you have names, e mails or links parsed into spreadsheet in random
inconsistent formats. You can use this integration to keep the formatting
consistent. You can also use GPT and spreadsheets to
interact with images. This can be useful if
you need to generate multiple product descriptions or SCO old image descriptions? Yes, you can also use images
here. And it's awesome. Web browsing is also a feature available
in this integration. So if you have many browsing requests and
you need to parse information from many
browsing requests with different variables, you can scale it
in spreadsheets. So is it worth to
learn about it? I think yes. Does it
help your productivity? Absolutely. With this
motivation in mind, let's go ahead with a
step by step guide. Step one is to install
your extension. You'll find the
link at gptfow.com. We're in the resource
section of this course. Next, go to your
Google Spreadsheets and find it in the
extensions list. Find GPT f work Extension, and click Enable GPT functions. You'll see a pop
up on the right. You can create your API
key or pay for API usage to use all of the most
recent paid models by GPT. But in my experience,
DPT Mini was more than enough for
these spreadsheet tasks. So let's go ahead and
choose DPT Mini model. Now, let me quickly show
you what the extension has. It basically has GPT functions and bulk tools. In the GPT functions tab, you'll see tips, best practices, and a list of functions. Click on the list of functions to see a huge list. Click on any function to see an example, documentation, and even a video guide on how to use it. These guides are anywhere from a couple of seconds up to a minute long. They're very short and super useful, great for referencing whenever you need them. What I suggest: whenever you need a certain function, just click on an example, copy it, and replace it with your prompt, or watch that video guide if you need to. You'll also see settings here: the safe mode, auto-replace formulas, model choice, and even custom instructions. Yes, you can use custom instructions to define how creative the answers should be by adjusting the temperature. There's also an assistant that will help you generate formulas based on your text description. But if you're confident in Excel or Sheets, you probably don't need that. However, you can also get a formula explained, which is pretty handy if you're working with someone else's spreadsheet or in a document that you haven't touched in a while. Let me show you how to
use the GPT function so that you have the logic for most cases. Once you've enabled the GPT functions, chosen the GPT model, and edited any custom instructions, let's do a simple example. We'll take a short list of cities and create a fun fact for a stand-up comedian to open the show with. In column A, I'll make a list of cities. Of course, I'll start with the city where I was born and raised, where I live at the moment, and where I'm actually recording now, and then I'll add Paris, Vienna, Amsterdam, Madrid, Barcelona, and Milan. Column B is where the magic happens. Let's create a formula. We type equals GPT, open bracket, quotation mark, and I'll paste my prompt: Act as a stand-up comedian. Create a punch line for a stand-up comedy opening. Make sure the punch line emphasizes the most well-known fact about the city. The punch line should be under seven words. Let's close the quotation mark, add a comma, and then I'll pick A3, which is the first city, and close the bracket, so the formula looks like =GPT("Act as a stand-up comedian...", A3). Great. Now we have that first one worked out. Then I'll press Option on Mac or Control on Windows, go to the corner of the cell, and drag it all the way down to the last city. Now we should wait a bit, and we'll see all of these rows with punch lines that follow my instructions. Of course, in a real-world example, I'd probably need to add an example of a joke in my prompt, but let's keep it simple for the purpose of this video. Now, let's go to Bulk
tools and have a look. You have Translate, extract entities,
classify and search. Let's click, Translate. Here's how to set
up the translation. Pick, Translate, sell in column. Let's pick B. Detect language. Okay. Then to Spanish
and put result in column C. Let's add some
translation instructions, for example, act as a
professional translator here and translate from
culture to culture. You can also add
any source words like glossaries and set
up custom instructions. There's a lot of
flexibility here. Unfortunately, I don't
know Spanish yet. So let me know how good this is. I will double check it by
using a reverse translation. So I'll do C to English to D. I'll choose all nine raws
and run the action. Now, let's compare
the translations. I think it's pretty
accurate here. Of course, we can have some variability, but it's quite natural, even compared to the service of a professional translator. And of course, you can translate to another language. This tool offers extensive functionality across various platforms. It can be integrated with Google Docs, Microsoft Excel and Word, and even your Gmail. Yes, in Gmail, it can be embedded to help craft and
manage e mail responses. However, for me, the
most valuable use case is its integration
with spreadsheets, where it significantly enhances productivity and
data management. For more information, visit
their website to explore additional functionality or to find specific features
that you need. The tool offers
instant guidance, making it quick and easy to use once you're set up and familiar with it. Let's wind up with
a quick recap. ChatGPT integration in spreadsheets enhances productivity by automating repetitive tasks and enabling text-based operations. Key use cases include
text processing, content creation,
data categorization, translation, and formatting. The integration allows
for scalable operations, handling multiple variables
with a single prompt. The tool is versatile and compatible with multiple
platforms like Google Doc, micros Excel and
Word, and Gmail, but spreadsheets are still the
most valuable application. Okay. Hope this was practical and productive
lecture for you. Not too boring, I hope. And I really hope that it will increase
your productivity. See you in the next lecture.
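To recap the formula pattern from this lecture: the prompt goes in as the first argument and the cell reference as the second. This sketch assumes a GPT-for-Sheets-style =GPT() function; the exact function name and argument order depend on the add-on you installed, so check its documentation:

```
=GPT("Act as a stand-up comedian. Create a punch line for a stand-up comedy opening. Make sure the punch line emphasizes the most well-known fact about the city. The punch line should be under seven words.", A3)
```

When you drag the fill handle down, the relative reference A3 becomes A4, A5, and so on, so each city row gets its own punch line.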
34. AI Research Tool: Perplexity.ai Overview: Research is integral
to our daily lives. We rely on it to make
informed decisions, solve problems,
innovate, and learn. Traditional research methods
can be time-consuming and require extensive information gathering and analysis. ChatGPT browsing has also been a frustrating
feature for a long time. They launched it,
then it didn't work. Then they unpublished it, and then they published
it again. Now, it works. It's not that bad,
but it's also not as accurate and advanced as
research specific AI tools. Anyway, perplexity is an
innovative research tool that packs impressive features for in depth searches
or data analysis. It has an advanced multi step reasoning process
under the hood. If you're familiar with ChatGPT and Google Search, using Perplexity AI will be super easy for you. The core feature it offers
is the pro search mode. It multiplies your
searches and then dissects the most relevant
information into a response. And the cool thing is that you also get the sources. The pro search will analyze the search results, take intelligent actions, and initiate follow-up searches that build on previous findings. Perplexity offers a free
version and a paid version with additional features for
around 20 bucks a month, which seems to be industry
standard at the moment. With the paid version,
you can integrate almost any of the
high end models currently available
like CID GPDD, for latest version
of Cloud model, Gemini, without the need to buy separate subscription
for each service. You can also attach
files such as PDFs, images or text files
directly into your queries. Perplexity also features
a quick search, and it's ideal for fast, accurate answers, backed
by reliable sources. However, for in-depth research requiring comprehensive analysis, the upgraded pro search is way, way better. The free version limits pro searches, but it's enough to decide whether it's worth paying for or the free version is sufficient. The free version of
perplexity also allows you to enter detailed custom
instructions, like ChatGPT. But here's one more thing that you can't do in ChatGPT: you can organize your searches into collections by topic or project. This way, you can easily find your past searches by topic, and you can even share a
collection with someone. One of my favorite features
of Perplexity AI is Focus. It allows you to narrow down your search to specific areas like academic papers, Reddit, or Wolfram, which is ideal for precision and reasoning. For example, say you need to make an activities, interests, and opinions marketing analysis. Just type in your request, choose pro search, click Focus, and let's choose Reddit. And you'll get the results
from Reddit discussions. What I like about Reddit is that these are just real people talking. And yeah, it's cool to get that insight, but it takes a lot of time to go through all of these threads and links. But what if you need to search
elsewhere? No problem. Add "site" and a colon, and then type the site that you need to search. Preferably, just copy and paste it. As the next step, Perplexity AI clearly
cites its sources, making it easy to see where
the information comes from, and it's even easier to verify
this information this way. You don't have to
search it manually. I also encourage you
to pay attention to the follow up questions
because they often are examples of a good prompt
and a good way to follow up. So you can choose how
you develop your thread without typing a new prompt every single time. If, for some reason, Perplexity
doesn't work for you and you're looking for
a similar alternative, there's a direct competitor,
and with it, the approach is absolutely the same. Super easy to get used to and almost impossible to resist using. Feel free to put the
video on pause and read the conversations
or even better, try them yourself
in your context. Meanwhile, bye-bye, and see you in the next lecture.
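For reference, the source-narrowing trick from this lecture looks like this in the search box (the topic and domain here are just examples):

```
What are financial literacy tips for Black Friday? site:reddit.com
```

Swap in any domain after site: to restrict the results to that site.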
35. AI Research Tool: This Perplexity AI Feature Fixes Faulty Sources: Hi, everyone. AI searches
are actually awesome. However, they have
a tiny problem. If you want to control sources, you mainly have to focus
on choosing one source. But what if you want to
control multiple sources? Well, if that's the case, we are going to troubleshoot
this right now. I'm going to show you one troubleshooting
feature of perplexity AI search that most people
have never heard about. And the reason is that
it's kind of hidden a bit. Over the next minute, you're going to learn how to remove the sources
that you don't want in your final answers.
Let's have a look. So we have this query here. What are financial literacy
tips for Black Friday? Because I'm actually
recording this a couple of weeks before that. So let's show the sources, because let's imagine that I
don't like some of those. Right here on the screen,
you see all of the sources. And what if I don't want
one of the sources? For example, if I have Harvard, I might want to remove this Good Housekeeping one. But you don't see any place to remove it. So you have to select this tiny box here, and suddenly you get this remove sources button over here in the bottom right-hand corner. Let's click on that one,
and you're going to see the search result, the compilation,
regenerated again. And you can do this either
to one source that you want to remove or to multiple sources that
you want to remove. Let's try multiple sources. One, two, three, let's
say, let's go this way. Perfect. Now, we have only three sources
that we approved. This will also help
you make sure that the sources are actually the ones that you
trust and like. Also, perplexity makes
mistakes with dates. For example, if you
want to look for something from 2024 or 2025, you may still get occasional
results from 2022, 2023, and so on. So that is where I
would definitely remove the older sources
because whatever you do, the sources that you're
using are very important. And I wish that in
this and other tools, there was a lot more
attention to this. Anyway, I hope you found this useful and see you
in a few seconds.
36. AI for Research: Consensus.app for Scientific Insights: Welcome to today's session
where we will talk about using AI for
research purposes, using the Consensus AI app. Whether you're a student, a product manager, a business analyst, or a marketer, Consensus AI is a helpful tool that simplifies the
literature search process, helping you access, filter, and synthesize research
findings effectively. But what if you want to
write a blog post with decent citations and references
to reliable sources? You can also do it in Consensus. Let's have a look at
what makes it special. To begin, let's head over
to Consensus homepage. The interface is simple
and user friendly. You can start by typing in any research question
that you want to explore. For example, if you're
investigating a topic like, how does technology
impact customer service, Consensus will provide you with a summary of
relevant research papers. This summary serves as a key takeaway from papers
that it has analyzed. And for the most
part, it is accurate. However, from my experience, it's necessary to look
into the references. They often contain
more information on the study and the source, which might be super valuable. You might also find
that the source covers slightly
different context from the one that you need. So being a bit extra cautious here is also important, because all AI models tend to make mistakes. Next, let's talk about
the copilot feature. It enhances your research by synthesizing
information in real time, making the process more
efficient and thorough. I'd say great for summarizing the responses and navigation, but often needs a bit of editing and digging into the
sources to actually use it, especially if you want
to use it commercially. When you click on
a study source, Consensus AI generates AI driven summaries
for individual papers, and this is where I noticed hallucinations noticeably decreased. These summaries provide a quick overview of
study relevance, helping you quickly determine if a paper is actually worth looking into. What I love about Consensus is that it also
helps you understand the significance of a research
that you're coming across. Consensus categorizes papers by their influence and relevance, such as whether a
study is highly cited or involves non-randomized controlled trials. This categorization aids in assessing the importance
of the research and deciding which studies to prioritize and where to
get more information from, and whether you can make any
conclusions based on it. One other unique benefit
of Consensus AI is its advanced filtering capability. You can refine your search based on criteria like study type, for example, controlled human studies or observational ones, or sample size
or year of the study. This level of filtering
allows you to obtain more relevant results tailored to your specific research needs. I researched AI-related topics
and studies after 2023, and the results differed significantly from the
ones from 2020 and before. Guess why. Interesting.
Why would that be? One more trick becomes visible if you ask
yes or no question. For example, let's ask is
online learning effective? Consensus will aggregate
research findings to show overall consensus
within the field, helping you identify areas
of agreement quicker. I guess that's the root
of the tool's naming. And last but not least, Consensus AI lets you
manage your research list. You can save your searches, create lists of specific studies and export citations
in various formats. By the way, it also offers related searches allowing you to explore your topic more
deeply and comprehensively. And I think they help you understand how to ask the right questions, so Consensus gives you the right papers and search results. Now, let's talk about how to write a blog piece in Consensus. Well, since the beginning of ChatGPT, I kept thinking that writing blogs with ChatGPT without any additional input doesn't make sense at all, because it's not really informative or interesting. You have to compete somehow with what people can ask ChatGPT on their own. But when you have access
to this amount of research, you suddenly get
an opportunity to generate interesting
informative content. Let me show you
one more example. Let's go to Consensus and ask it to write a blog on the ways sleep deprivation
impacts cognitive performance. But instead of just
going into the results, I'll go ahead and use the filters and choose
only the last two years. This way, we create content that is based on
the recent studies. So we would be able to claim that based
on recent studies, we have this, this,
this and that. You know, the
summary I see right now looks pretty interesting. Now we can move on to checking the sources, looking for any additional insights and additional verification. And once done, you can put it all together in ChatGPT using your favorite prompts, then edit it yourself, add some visuals, possibly even AI-generated ones, maybe a few keywords here and there, and you're done. Alright, let's wrap
up. Consensus AI is a great tool for researchers. It's now available
as a custom GPT, it integrates seamlessly with ChatGPT, and it offers the copilot feature in the custom GPT. You can give it a try for free using the Consensus custom GPT. However, the full
functionality with filters, collections, et cetera comes
at around $9 per month. So if you do a lot
of research and you want to dive deeper
into your existing project, I think this will significantly expand
your capabilities for research or optimize
the time that you need to spend to find
the right research. That's it for this lecture, and have a wonderful rest of the day or night or
evening or whenever it is that you're watching this video, and see you in a couple of seconds
in the next one.
37. AI For Presentations and AI Content Generation: Visualize any Text with Napkin AI: In this lecture, we're going to
talk about an amazing tool that helps us visualize
any text or part of text. Whether you're making
a social media post, creating a presentation
or need to make a visualization
for your website, you're about to
learn exactly how to visualize any text in
just a couple of minutes. The tool I'm going
to share about today is called Napkin AI. Currently, it's free, including the beta version with full functionality. However, I suppose that it might become a paid service by the time you're watching this lecture. So here's how it works. Let's go to the Napkin AI website and sign in
or create an account. Next thing that you're going
to see is an empty sheet. In this app, it's called a napkin. You can create a new napkin
and you can start with a blank napkin like I do
right now or draft with AI. Basically, you give a
prompt and you get a text. For this purpose, I'm
going to paste a bit of this lecture that I've
scripted for myself here. As the next step, we want to select a part
of the text and hit this blue electric icon. Great. Now, I can see a list of different options on
how to visualize those. And there are actually
a lot of them, and you can generate even more. And even more. So there's really a lot you can do. Let's select one option. For example, this step-by-step looks good, but I don't like the curvy thing. Let me choose something more linear. Let's stick with this one. Now, once I've
chosen the template, I can go and choose
its variants. I'm going to see different
icons, different colors. Now, once we have
colors and the layout, you can adjust the details. For example, you can
choose a different font. Not too many fonts at the moment, but it's a very new tool in beta, so no complaints here. Let me choose Montserrat here. One great thing that
I like about it is that you can customize
just about anything here. So we can change the text. We can change the colors. And you can do that for
practically any element. Cool. Now, let's
talk about saving. I think they made
it really nice. To export your visuals, click on this arrow
pointing down. You have three formats to
save: PNG, SVG, and PDF. What I also like is that you can switch the color mode to
light mode or dark mode. Depending on the place that you're going to paste
to, for example, sometimes my presentation
is in dark mode and sometimes in light
mode and sometimes both. In PNG and SVG, you can make a
transparent background, which is also awesome. And you can pick the resolution. For example, you can
make a bigger file. I would suggest making
the biggest one. And sometimes when you don't want to make a separate file, you can just copy it
to the clipboard and then paste it anywhere where you're creating. So it's just Control C, Control V, or Command if you're on a Mac. Let's try one more. I'll
just do one sentence here. I also like that it
suggests the first option, and it's usually something
that I tend to like and find appropriate. You can select
multiple elements at the same time and
edit them right away. For example, in
these three lines, I'm going to change it to white. Or we can change it to yellow. Pretty straightforward. The desktop experience
looks much better. It's much easier to hit the elements, because there's a lot going on here. So desktop is a much more comfortable platform. I believe a tablet
would also work well. You can't generate
just images like you would in Adobe Firefly
because the purpose of this tool is to visualize your text and give
you some flexibility. Overall, for me, this tool
is saving hours of time. I paste it into my social
media posts when I don't have a photo or video to put in. You can even add it into
your video if you like. My primary use case is to paste
it into my presentations. I just wish they'd add a few more fonts, or let me add my own font, and possibly save styles. But for a new product, this is pretty amazing. This tool can save
you hours of time. And it's becoming
increasingly popular. Lots of my colleagues have also started using it, and
for a good reason. It's highly recommended.
Give it a try.
38. Building Your Generative AI Toolset: That's pretty much it.
Let's go over a few more tips around generative AI tools to summarize. There's no universal tool that would work equally
in all modalities. It's best to choose different tools for
different purposes. How do you do that? Here are a few more tips to help you
choose the optimal tools. First, formulate the tasks
that you want to assign to AI. Think about which repetitive work or other processes can be automated or improved with AI and then just approved by you. After that, determine
the necessary modality that suits your task. It's better to choose a
tool that is strongest in a certain modality or incorporates multiple tools to achieve one goal. Focus on personalization
and automation. These are the two main benefits of AI. Look for opportunities to customize the tool
to your needs, style, and data, and of course, automate repetitive routine
tasks and processes with AI so that you can in the end just approve
or edit it slightly. But while automation
is good, please, please stay away from unsupervised use of generative AI, especially for content creation. Just stay away from the idea of creating 500 posts in one minute. Please just trust me. You don't want that. Be critical of universal
multifunctional tools that promise to do
it all for you. In most cases I've seen, these are just containers
and interfaces for ChatGPT. However, there are solutions like the ones I mentioned before, and they integrate
several different tools to save time for a
specific use case. This one is counterintuitive, but invest in real-life experience. Explore cases, communicate
with colleagues with people who have more
experience than you do or who have
different experience. Attend various conferences,
master classes, travel, learn to
distinguish good from bad. I bet you know how to do
it without this training, so it's not the focus
of this course. Last but not least, you don't
need hundreds of tools. Instead, learn the ones
you like really in depth, and look for ways to
creatively chain them. You can even automate
some of the tools using Make.com or Relevance AI. I regularly use around
20 applications. And in addition to those
I mentioned previously, I use perplexity ai.com
for information search. Sometimes I use Gama app for creating presentations
or sel posts, and there's also one of
my favorites Harp AI. Which can be added
as an extension to your browser to analyze
pages, YouTube videos. For example, it's
great for learning, content repurposing,
summarizing information. You can analyze
keywords from articles, especially when it
comes to LSI keywords, or sort your Gmail. Interestingly, you can even monitor updates of a certain component on competitors' pages, say, the price of a product. Or you can do it for
yourself, not just work. Remember this video on
the eve of Black Friday. Anyway, AI is just an assistant. Do not blindly trust it and rely on generated results 100% without any supervision. On the contrary, try to be more interesting than the default ChatGPT response. Use AI consciously and always critically evaluate its work, as at the end of the day, you are responsible for the final result, not ChatGPT. All right. See you in a few seconds.
39. AI Mindset: Keeping a Healthy Relationship Between Human and Artificial Intelligence: In this series of videos, I want to talk with you about
keeping a healthy, productive relationship between your human intelligence and
artificial intelligence. Without further
ado, let's jump in.
40. Why AI Mindset is Important: In this video, we'll talk about building a habit. Think of it this way: rewire your brain to consider, can AI solve my task? Identify which tasks can be automated using AI and what you'll still have to handle manually. Ask yourself: Can I chain a few tools together to get this done, or to get a part of this done? What would I outsource if I had an assistant, or two assistants, or ten assistants? Is this something
that I often repeat? Well, by answering
these questions, you'll pinpoint areas
where it's worth investing your time to create
and document your workflow. Let's take text formatting and proofreading, for example. These tasks are repetitive across many professions. But if you're a project manager, it's not what you're paid for. So prepare your prompts or set up a custom GPT for this. Alternatively, if
you use Notion, you can save a prompt with formatting and proofreading guides in your favorites. And then speak your text into Notion and just
press that prompt. Let me show you an example. So this is a text that I
randomly spoke into Notion. Sometimes these are
just my thoughts. I think that I need to create
a series of lectures for my students to keep a healthy
relationship with AI. Let's see what it gives us back. Now, if I want to create
an email from this. Cool. As you build a habit of treating your daily tasks this way, you'll frequently find more efficient solutions and get more done in less time. What's most important, you'll get more time for important, meaningful decisions and tasks.
41. AI Mindset: Use Generative AI for Learning: In this video, we'll talk about developing our
cognitive skills and keeping our learning process
despite the AI expansion. So if you watched an AI
presentation by Open AI or Apple, you might think
with these tools, Will we ever have
to think again? And you've got a
very good point. Technological
advancements can either make us smarter or well, not that much, and basically
forget how to think. Let's say smartphones. With smartphones, we no longer have to
memorize things like phone numbers,
maps, or birthdays. Notifications and
social media dynamics reduce our ability to focus. And therefore, our
memory that we don't use becomes
weaker over the years, and this may even lead to various mental diseases. So here's my point.
So here's my point. Use this technology
to become smarter and not to outsource
your thinking. And to do that, let me give you a couple of practical
activities to help. Use a I to assist you in learning and practicing
what you've learned. We're not purely visual or auditory learners. We use all senses to learn, and different types of information are perceived better through different media. For example, you can use Speechify or ElevenLabs to listen to any text on the go. So basically, any text-to-speech software can help you do that. Then you can turn YouTube
videos into summaries, chat with those summaries or
chat with those documents. You can test your knowledge by asking ChatGPT to quiz you with questions based on a document that you upload. The next approach would be to develop emotional
intelligence. Invite a friend
for that carbonara that you prepared a
couple of steps before. Socialize, communicate
with real people. Try to understand
their feelings, why they say what they say, and how you can positively
impact their state. It's a skill that can be
developed throughout your life. I have so much to learn in
this direction as well. I think it's something that
we should develop lifelong. Another great cognitive exercise is to master new skills: learn new sports, languages, or musical instruments, if you can. Not only is it useful for your brain, but it's also kind of fun. Last but not least, do an AI detox from time to time. Take a break from AI. It's going to be a period where you don't use any AI tools. So how do you know when you need to focus more on
some of these approaches? If you feel uncomfortable writing without ChatGPT, that's a sign that some of these activities are needed. Of course, you won't do all of it at once, but you can choose based on how you feel which one you need
more at this moment.
42. AI Mindset: Reverse Engineer Your Thinking: The next technique to keep
our relationship between human intelligence and
artificial intelligence is to reverse engineer
our thinking. This approach makes you
a better communicator, not just with ChatGPT, but with other human beings in general. To reverse engineer your thinking, deconstruct a problem or a project. Break down a complex project into smaller, manageable parts. Identify which parts
can be tackled with AI. For example, you want to cook a carbonara. What do you need to do? Well, you'll find a recipe, make a list of products, go buy them, and then maybe you'll even invite someone for dinner and do the cooking. So which of these tasks is best to outsource to AI? The recipe, right? Wrong. Well, unless you want to test your luck. Pasting a recipe and getting a shopping list? Oh,
that's much better. Sort the shopping list by shelves that are
typically close to each other in a shop like Walmart to minimize
the time in the store? Absolutely. It's an easy
way to not forget anything, and save a couple of minutes. Well, what if the recipe
is a YouTube video? No problem. You can still get a summary in a few clicks. HARPA AI, a Google Chrome extension, or Google Gemini would
be the optimal tools. But there are others
as well, of course. The next approach to keep a
healthy relationship with AI is to think about how you think and reflect on
your thought process. Do you use any logical
techniques or frameworks? Understanding this will help you replicate and enhance
these processes with AI. Then study how experts in your field think
and solve problems. Replicate those processes
as well, using AI tools. The next approach will help you become a better communicator. And it's about goal setting
and task assignment. Analyze how you set
goals and assign tasks. Can you improve this process using prompt
engineering techniques in real life: by creating prompts with examples and references, explaining how tasks can be delegated, and providing the necessary context? All of these things are important when delegating tasks to other people or teammates. Thank you for attending
this lecture. Remember, productivity is
a very personal thing. And with AI tools
and techniques, we can only enhance what we
have built in ourselves. And when you improve
what you have within how you distinguish
good from bad, how you understand
your thought process. This is naturally
going to improve how you use AI tools like ChatGPT.
43. AI Mindset: Avoid Overreliance: In this video, we'll talk about overreliance and loss of critical thinking. So, overreliance on AI is a rising problem. If you know the ChatGPT vocabulary markers, and overall you've played around with it for quite a while, you see it everywhere: articles at universities,
CEOs, tech giants. You'll notice how everyone
suddenly started delving into, fostering, harnessing, unleashing, unlocking, and revolutionizing everything. So you see, relying too
much on AI can lead to a decline in critical thinking and
decision making skills, especially among young workers. But even more experienced
professionals can fall into this trap, because it's so easy to take the path of least resistance. There is a good reason for that. When you see a perfectly formatted, error-free text, it subconsciously feels like
high-quality, detailed work, and it's tempting to accept it as it is. And this psychological trap can hit even the most qualified people. So here's how to deal with it. Number one, seek real-life
experiences and examples. Two, learn critical
thinking techniques and understand common traps. Three, when using ChatGPT-generated text, ask yourself: Have I ever spoken like this before? Does it sound like me? When did I last use this word? Four, verify any data by searching for the source on Google or cross-referencing provided sources. By following these guidelines, you'll avoid oversights and potentially a lot of
accountability issues. And just to be clear,
using GPT at work is fine, and it can boost your
productivity significantly. However, not supervising
its outputs can lead to uncomfortable and
even dangerous situations. ChatGPT can't be held responsible for misinformation, but you easily can.
44. AI Mindset: Summary: Don't worry, each of these
activities will help you blend human and artificial
intelligence in the healthiest way possible. That's it for this video and see you in a couple of
seconds in the next one.