Transcripts
1. Introduction: Hey there. I'm Victor Cuevas. I'm the founder of The Knowledge Star, and I'm the creator behind My Digital Dad, a YouTube channel where I help parents and kids learn technology together in fun and simple ways. I've taught thousands of learners how to use AI and digital tools to solve real-world problems, whether it's building automation, streamlining work, or even learning as a family. Now I'm here to help you master prompt engineering. This class, Prompt Engineering Basics, is designed for beginners, whether you're a content creator, teacher, student, business professional, or just AI curious. There is no prior experience required, and all you need is access to a conversational AI tool like ChatGPT. You'll learn the essential building blocks of a great prompt. You're going to craft clear and detailed instructions to get consistent, high-quality AI responses. You'll experiment, refine, and perfect your prompts through iteration. The value of this class is simple: you're going to walk away with a powerful, reusable skill that saves you time, boosts your creativity, and helps you get better results from AI, whether you're writing emails, brainstorming, creating lesson plans, or generating content. Your class project will be to create a personal prompt library starter kit. You're going to choose your own use case, write three to five strong, well-structured prompts using the methods I teach, and test and refine them. Share your kit with your friends and family so they can learn from your creativity. I'm so excited to have you here. This is going to be fun and practical. Let's get started. See you in the first lesson.
2. AI Explained: In this lesson, we're going to cover artificial intelligence, also known as AI. Now, what the heck is AI? My simple explanation: AI is a system performing tasks typically done by people. The first point in this lesson: AI is an umbrella term. Machine learning, generative AI, deep learning, and many more all fall under the AI umbrella. I'm going to leverage Aspen Digital's AI graphic to help break down the components. I will also leave a link to the PDF for you to download. Let's start with the first zone. The first zone of AI is known as automated reasoning. Automated reasoning is a process where computers use logic and algorithms to draw conclusions and solve problems without human intervention. It's almost like a lot of if-then statements. Now, let's use Pac-Man as an example. There are AI ghosts, and they use their superior reasoning skills to deduce the optimal path to ruin our Pac-Man's day. They don't need a map or some type of GPS. They just use simple rules and a serious grudge against yellow circles.
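If you're curious what "a lot of if-then statements" looks like in code, here's a minimal sketch in Python. The grid logic and names are my own toy invention for illustration, not the actual Pac-Man ghost code.

```python
# A hypothetical, simplified rule-based "AI ghost": pure if-then logic,
# no learning involved. Positions are (x, y) grid coordinates.

def ghost_next_move(ghost_pos, pacman_pos):
    gx, gy = ghost_pos
    px, py = pacman_pos
    # Rule 1: close the horizontal gap first.
    if px > gx:
        return "move right"
    if px < gx:
        return "move left"
    # Rule 2: then close the vertical gap.
    if py > gy:
        return "move down"
    if py < gy:
        return "move up"
    # Rule 3: same square means Pac-Man's day is ruined.
    return "caught!"

print(ghost_next_move((3, 4), (7, 4)))  # -> "move right"
```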
The next zone of AI is known as machine learning. Machine learning is a data-driven system that learns and improves from experience. Now, if you use Amazon or Netflix, then you've interacted with machine learning tools. These platforms analyze your browsing and purchasing history to recommend products, movies, or shows you're likely to be interested in. Machine learning has sublevels. The first I want to point out is supervised learning. In this instance, the AI model is trained on labeled datasets to answer the question "what is that?" based on characteristics or features. There are various techniques within supervised learning. One technique is classification. Now, if you have a credit card, you may have experienced a transaction that is not approved. You call the credit card company and find out the system identified the transaction as potentially fraudulent. Although annoying at times, it's intended to protect you. This is an example of how AI classifies transactions as fraudulent or legitimate based on learned patterns.
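Here's a rough sketch of that kind of classifier using scikit-learn. The transaction features and labels are made up for the example; a real fraud system would use far richer data and a more sophisticated model.

```python
# Toy fraud classifier: supervised learning on labeled transactions.
# Features: [amount_in_dollars, hours_since_last_purchase]
from sklearn.linear_model import LogisticRegression

X = [[12, 5], [25, 48], [900, 1], [15, 24], [1200, 2], [40, 72]]
y = [0, 0, 1, 0, 1, 0]  # 0 = legitimate, 1 = fraudulent (the labels!)

model = LogisticRegression()
model.fit(X, y)  # learn patterns from the labeled examples

# Ask "what is that?" about a new transaction.
print(model.predict([[1000, 1]]))  # likely [1]: flagged as fraudulent
```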
We also have unsupervised learning. Here, the AI model is trained on data that is not labeled, answering the question "what's similar?" by finding patterns, structures, or relationships within the data on its own. Clustering is one technique used in unsupervised learning. Think of Nike. Nike is an example of using clustering to segment product offerings based on athletes, fitness buffs, and fashionistas. The next area is reinforcement learning. Reinforcement learning is based on the model taking actions and receiving feedback, good and bad, taking a trial-and-error approach. Now, if you use Siri, then you're using an example of AI that uses reinforcement learning. Virtual assistants such as Siri use reinforcement learning to improve their conversations. The last area is generative AI, which is quite simply creating new content such as text, images, videos, or music. When using tools such as ChatGPT, you're using generative AI. ChatGPT can generate text and images based on your text input. One additional key term in the generative AI space to be aware of is multimodal AI. This is the ability of the AI model to generate multiple types of data, such as text, images, video, et cetera, in one platform. I personally think this is a crown jewel for any company in the AI space. As you hear conversations about AI, or if you're thinking about implementing an AI solution, first figure out which area of the AI map we're talking about. I'll see you in the next lesson.
3. What Are LLMs: In this lesson, we're going to cover large language models, also known as LLMs. Before we kick off the LLM topic, we need to start with foundation models. In the early years, AI was trained for specific tasks, limiting its range of functionality. However, foundation models changed that. Foundation models are also known as base models. These models are trained on a vast quantity of data at scale to be used for a variety of tasks. Now, how can these models perform these magical tasks? They go through two phases: pre-training and fine-tuning. In the pre-training phase, the model is generally trained on any data that can be found online: web scrapes, books, transcripts, or anything else that is text based. In the fine-tuning phase, the models become better at specific tasks through techniques such as reinforcement learning. Some use cases: a chatbot that can help you with your homework or answer questions about a topic you're interested in; a self-driving car that can take you safely from one place to another; a system that can recognize pictures of animals and tell you what kind of animal it is.
This brings us to large language models, otherwise known as LLMs. Although foundation models and large language models are both AI models, they differ in scope. An LLM is a type of model focused on language-related tasks. Now, there are two key concepts that we need to discuss regarding LLMs: language models and natural language processing, also known as NLP. So first, what is a language model? A language model refers to a type of model specifically designed to generate human-like text or predict the probability of a sequence of words. Language models learn patterns and statistics from large amounts of text data, enabling them to generate responses that make sense to us. As an example, we can start with the word "cat." The model will predict the next sequence of words. Here, we get "the cat sat on the floor." You see, it's not magic. It's just math.
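Here's a minimal sketch of that "just math" idea: a toy next-word model that stores a probability for each candidate word and picks the most likely one. The probabilities are invented for illustration; real LLMs compute them over tens of thousands of tokens.

```python
# Toy language model: given the text so far, look up the probability
# of each candidate next word and choose the most likely one.
next_word_probs = {
    "the cat": {"sat": 0.62, "ran": 0.25, "flew": 0.03},
    "cat sat": {"on": 0.80, "quietly": 0.15, "down": 0.05},
}

def predict_next(context):
    candidates = next_word_probs[context]
    return max(candidates, key=candidates.get)

print(predict_next("the cat"))   # -> "sat"
print(predict_next("cat sat"))   # -> "on"
```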
The next concept is natural language processing. This is how LLMs are able to perform language-related tasks at scale. Natural language processing is defined as the branch of AI that provides computers with the capability of understanding text and spoken words in the same way a human being can. Some examples of how LLMs are used today: ChatGPT by OpenAI, a conversational AI chatbot that can answer questions and generate code, all from text. Another use case is document translation and summarization. Today, you can upload a PDF to ChatGPT or Claude AI, for example, to get key summaries and even translate it into different languages. Well, that covers it for LLMs. I'll see you in the next lesson.
4. LLM Creativity: Now in this lesson, we're going to learn how to control creativity by understanding temperature. LLMs are designed to predict the next word in a sequence, and guess what? So do you and I. How would you finish this sentence? "The sky is..." Which word would you choose, given the following options: blue, the limit, or bananas? Well, I'm pretty sure you would have chosen blue or the limit, but not bananas. Guess what? For LLMs, bananas is a feasible option. I know. Isn't that just bananas? LLMs store a probability for every word that could possibly follow "the sky is." We can adjust the probability of the predicted word with a setting called temperature. Now, temperature controls how random the output of a large language model is. ChatGPT, for example, has a range of 0 to 2. The lower the temperature, closer to zero, the more focused and predictable the output. The higher the temperature, closer to two, the more creative and random the output. This is just a control knob to be aware of when providing prompts in a chat, since it'll influence the output. Let's test out an example in OpenAI's playground. First, let's set the temperature near two. Now I can ask: who's Tony Robbins? It starts off strong, then completely shanks the rest of the response. At maximum temperature, most models are incomprehensible. Now let's try the other extreme and set the temperature near zero. Bingo. The model chose the most likely output every time. When you set the temperature to zero, the model eliminates a lot of the randomness from the predictions. We call this type of model deterministic. A deterministic model produces the same output for each given input.
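If you ever move from the playground to code, temperature is just a parameter on the API call. Here's a minimal sketch using OpenAI's Python library; the model name is a placeholder, so swap in whatever model you have access to.

```python
# Minimal sketch: the same prompt at two temperatures.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

for temp in (0.0, 2.0):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Who is Tony Robbins?"}],
        temperature=temp,  # 0 = focused and repeatable, 2 = wild
    )
    print(f"--- temperature={temp} ---")
    print(response.choices[0].message.content)
```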
Now let's go through some scenarios. You're drafting a resume for a job you're applying for. What temperature would you set the model to? Low temperature. Now, if you wanted to write poetry, what temperature would you choose? Medium or high, either one. The key here: depending on the desired output, your temperature is a control knob that can influence results. I'll see you in the next lesson.
5. Improving Models: In this lesson, we're going to cover how LLMs get good at what they do. Now, often you hear "we've trained the model on a ton of data." Here's a question for you: do you think it's sufficient for the model to be trained on the data just once? Well, it actually requires multiple iterations, and this brings us to a term called epoch. Each pass through the training data is called an epoch. Why is this required? Recall, LLMs are neural networks. Before they're trained, the predictions stored in the model are random. So even after one epoch, the LLM cannot predict words correctly. So how can the neural network of an LLM improve over each training session? During each epoch, the neural network compares its prediction with the original data. The prediction is usually off by some amount, which is called the loss. The loss is the difference between the predicted and actual values.
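Here's a minimal sketch of epochs and loss in plain Python: a toy model with one parameter, nudged a little closer to the data on every pass. The numbers are invented; real LLM training does this across billions of parameters.

```python
# Toy training loop: one epoch = one pass through the training data.
# The "model" is a single weight; the loss is how far off it is.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs and targets (y = 2x)
weight = 0.0  # before training, the predictions are effectively random

for epoch in range(1, 6):
    total_loss = 0.0
    for x, target in data:
        prediction = weight * x
        error = prediction - target       # difference: predicted vs actual
        total_loss += error ** 2          # squared loss for this example
        weight -= 0.05 * error * x        # nudge the weight to reduce loss
    print(f"epoch {epoch}: loss = {total_loss:.3f}, weight = {weight:.3f}")
# The loss shrinks each epoch as the weight approaches 2.0.
```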
Now, like a Rocky movie, after each epoch, our model becomes bigger, better, faster, and stronger. Now, you might think we should continue to train like Arnold Schwarzenegger, to the point where there is zero fat on the body, or in this case, zero loss. The answer is no. If the loss is zero, the LLM's predictions fit the training data exactly, which means it can't generate new and exciting things. This is known as overfitting. The LLM is overfit when it replicates patterns in the training data to the point that it cannot generate new data or new patterns. The key to preventing overfitting is to monitor the model's performance. It's not a one-size-fits-all solution. At the end of the day, the training epoch is just one factor in LLM improvement. More to come in the next lesson. See you soon.
6. Preprocessing and Tokenization: In this lesson, we're going to cover preprocessing and tokenization. Now, can the English language be messy? Absolutely. There are typos, abbreviations, inconsistent capitalization, multiple spellings; the list goes on. So think of this sentence: "I ate Grandma." One comma changes the entire context of the sentence: "I ate, Grandma." Computers depend on consistency. So to make text readable, we need to clean it up. The process of turning raw text into a clean dataset has two stages: preprocessing and tokenization. Both of these steps help standardize text in a corpus before a model is trained on it. What's a corpus? Well, a corpus is a collection of written texts that a language model is trained on, such as all of Elon Musk's tweets. The first part of the process is preprocessing. Preprocessing is the process of cleaning, transforming, and organizing raw data to make it suitable for AI models. This involves tasks such as cleaning up missing values in the data or removing outliers. This is extremely important to ensure AI models learn meaningful patterns and make accurate predictions. As I say, bad data in is bad data out. The next step is tokenization. Tokenization is the process of breaking down text into smaller units called tokens, such as words or characters, to make it easier for computers to understand and analyze language. Here's an example: "I played basketball yesterday." How many tokens do you see? Now, considering stems, affixes, and punctuation, we have six tokens.
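If you want to see how a real LLM tokenizer splits text, OpenAI's tiktoken library is a quick way to try; here's a small sketch. (The exact token count varies by tokenizer, so it may not match our hand count of six.)

```python
# Count tokens the way an OpenAI model would (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by many GPT models
tokens = enc.encode("I played basketball yesterday.")
print(tokens)                              # the integer token IDs
print([enc.decode([t]) for t in tokens])   # the text piece behind each ID
print(len(tokens), "tokens")
```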
Another method is using algorithms to build tokens from characters. An example of this is byte pair encoding. Byte pair encoding is a tokenization algorithm that builds tokens from characters. Here's a quick example. Imagine you have a big block of Legos and you want to build a castle. Now, you notice that you keep using the same pairs of Legos together over and over again. Instead of picking up each piece one at a time, you decide to create special blocks that combine the frequently used pairs to make building faster and easier. By creating these special blocks, you make the building process more efficient. Just like that, byte pair encoding makes text processing more efficient by combining frequently used pairs of characters or words.
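Here's a minimal sketch of the core BPE step in plain Python: count adjacent pairs and merge the most frequent one into a new "special block." Real tokenizers repeat this thousands of times over a huge corpus.

```python
# One round of byte pair encoding: find the most common adjacent pair
# of symbols and merge it into a single new token.
from collections import Counter

symbols = list("banana banana")  # start from individual characters

pair_counts = Counter(zip(symbols, symbols[1:]))
best_pair = max(pair_counts, key=pair_counts.get)
print("most frequent pair:", best_pair)  # ('a', 'n') appears 4 times

# Merge every occurrence of the best pair into one combined symbol.
merged, i = [], 0
while i < len(symbols):
    if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best_pair:
        merged.append(symbols[i] + symbols[i + 1])
        i += 2
    else:
        merged.append(symbols[i])
        i += 1
print(merged)  # ['b', 'an', 'an', 'a', ' ', 'b', 'an', 'an', 'a']
```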
So tokenization is essentially about building a vocabulary that can be used to train an LLM. In practice, LLMs are trained on hundreds of billions of tokens. See you in the next lesson.
7. What Is Prompt Engineering: In this lesson, we're going to cover the following: what is prompt engineering, and why is it important? What is prompt engineering? In general, it's just an input to an AI model to guide an output. The prompt could be a string of text, as you've seen in ChatGPT. The prompt can also be other forms of media, such as images or audio. Let's take a look at an example using Anthropic's chatbot, Claude AI. You can use the free version or the pro version, which is about $20 per month as of today. Just know, the paid version lets you use the premium features more often than the free tier. Now, in the prompt window of Claude AI, I can ask the following question: explain what generative AI is in a funny tone in 400 characters. We get a funny response. Now, if I wanted to take this idea one step further and create a post on Twitter, which is limited to 280 characters, I can. I just need to tell Claude to rewrite it in 280 characters. Now, let's take it one more level higher. Say I wanted the output framed in the following format: stages of whatever the topic is, with beginner and beginner vibe, intermediate and intermediate vibe, advanced and advanced vibe, and master and master vibe. I can just ask Claude to rewrite and use this format. You can see here, the process of prompt engineering, as of today, is taking a prompt, testing it out, and refining it until we get an effective output.
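The same test-and-refine loop works in code. Here's a minimal sketch using Anthropic's Python SDK; the model name is a placeholder, and the prompts mirror the refinements we just made in the chat window.

```python
# Iterating on a prompt programmatically (pip install anthropic).
# Assumes ANTHROPIC_API_KEY is set; the model name is a placeholder.
import anthropic

client = anthropic.Anthropic()

def ask(prompt):
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# Refinement 1: the original prompt.
print(ask("Explain what generative AI is in a funny tone in 400 characters."))
# Refinement 2: tightened for Twitter's 280-character limit.
print(ask("Explain what generative AI is in a funny tone in 280 characters."))
```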
Do I see prompt engineering changing over time? Absolutely. In the near future, I think it'll be conversational speech. But for now, we're sticking to the basics. Now, there are many ways to improve a prompt, such as applying web development techniques like Markdown markup. Now, why is prompt engineering important today? By writing good prompts, AI models can be guided to create relevant information. Recently, a gentleman told me about a software application and small business idea that was created based on code provided by ChatGPT. To me, that is simply amazing. In the next video, we're going to cover what makes a good prompt. I'll see you soon.
8. What Makes a Good Prompt: Now in this lesson, we're going to cover what makes a good prompt for generative AI. Here's an acronym for you to remember: "I CRAFT Examples." This encapsulates the seven key components of a good prompt. And what are these seven key components? Instruction, context, role, arrangement, formatting, tone, and finally, examples. Let's walk through each of these individually. First, let's cover instruction. Here, we clearly state what we want the model to do. Here's one example. I'm going to instruct ChatGPT to tell me about the benefits of being a Jedi Master. The second component is context. We need to provide background information or details to help the model understand the situation. Now, using our framework, let's try to create a video script for Mr. Beast. A quick note: I mentioned Markdown earlier. I keep my framework organized by using Markdown markup such as the number symbol. Here's an example. Instruction: generate a video script idea for Mr. Beast's YouTube video that involves a large-scale charity event. The context: create a video idea that combines Mr. Beast's signature style of grand gestures and heartwarming moments with a charitable cause addressing current social issues or needs. I think I have a chance with Mr. Beast. The third component is role. Here, we give the model an identity to assume while responding. The response can be completely different if we tell our model to act as a teacher rather than as Mike Tyson.
Now, normally, we want to start with "act as" whatever. As one example, we can ask our chatbot, in this case ChatGPT, to help us lose weight by creating a morning routine based on a role played by Tony Robbins. So our instruction will be: write a morning routine. The context: I'm 30 pounds overweight and I need to lose weight. And the role: act as Tony Robbins. The next component is arrangement. This is how we want the information presented. We may want our information structured with an introduction, three main points, and a conclusion. Here's an example. Instruction: write a story about how Humpty Dumpty beat up the Big Bad Wolf from the Three Little Pigs story. The context: Humpty Dumpty has been working out with Mr. Arnold Schwarzenegger, and he's built, six foot five and strong. The role: act as Morgan Freeman for narration. The arrangement: three paragraphs, with an introduction, middle, and end. I know this is a little bit random. Let's see what we get. We now have story time for the kids. Now we can add formatting. This is the desired format for the output. Here we can change the structure of the text. Let's take the previous example and ask ChatGPT to add a table of the characters in the story. Using the same instructions, I'm going to add the format: make a table of the characters in the story. And as you see, we get our table of characters. We're now ready for tone. Here we have the opportunity to determine the tone or style of the response, such as being happy, silly, angry, whatever. I'm going to make this really brief. So the instruction: give me a brief history of Michael Jordan in two paragraphs. But for my tone, I want my story sarcastic and humorous. The style of the output has changed to sarcastic and humorous. We're ready for our last component, which is examples. This is a technique to guide the model on the type of response you're looking for. What we're going to do is use another example, bringing in all the components, including examples. So first, the instruction: write a joke about the rise of AI in everyday life. Context: in 2024, AI has become an integral part of daily routines, from smart home devices to AI-driven customer service. People are increasingly relying on AI for various tasks, leading to humorous situations. The role: we're going to act as a stand-up comedian. The arrangement: present the joke with a clear setup and punch line. Format: it's going to be brief and concise, ensuring the joke is easy to read and understand. Tone: lighthearted and humorous. And we're going to give it an example, such as: why did AI go to therapy? Because it had too many unresolved loops. I hit Enter, and we get our headliner joke. Just kidding.
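To make this framework reusable, here's a small sketch of an "I CRAFT Examples" prompt builder in Python. The Markdown headings mirror the number-symbol organization from earlier; the function and field names are my own invention for illustration.

```python
# Build an "I CRAFT Examples" prompt as a Markdown-organized string.
def build_prompt(instruction, context, role, arrangement,
                 formatting, tone, examples):
    return f"""# Instruction
{instruction}

# Context
{context}

# Role
{role}

# Arrangement
{arrangement}

# Formatting
{formatting}

# Tone
{tone}

# Examples
{examples}"""

print(build_prompt(
    instruction="Write a joke about the rise of AI in everyday life.",
    context="In 2024, AI is part of daily routines, from smart homes "
            "to customer service.",
    role="Act as a stand-up comedian.",
    arrangement="Present the joke with a clear setup and punch line.",
    formatting="Brief and concise, easy to read.",
    tone="Lighthearted and humorous.",
    examples="Why did AI go to therapy? Because it had too many "
             "unresolved loops.",
))
```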
What you now have is a framework for understanding good prompt structures. On to the next lesson. I'll talk to you soon.
9. Prompting Strategies: In this lesson, we're going to cover a few AI prompting strategies that will help AI do harder jobs and give smarter answers. The three techniques that we're going to cover: zero-shot chain-of-thought prompting, few-shot prompting, and few-shot chain-of-thought prompting. First, let's cover zero-shot chain-of-thought prompting. Zero-shot chain-of-thought prompting involves asking a model to solve a problem by explicitly instructing it to think through the steps without providing any examples. This technique is useful for complex reasoning tasks where you want the model to explain its thought process. Let's check out an example. Now imagine you want to solve a problem like: what is 15 plus 27? Instead of just asking for the answer, you prompt the model to think step by step. In ChatGPT, I'm going to use this prompt: "Let's think step by step: what is 15 plus 27?" You can see the model walks us through step by step to get the final answer. Now, let's discuss few-shot prompting. Few-shot prompting involves providing the model with a few examples of the task you want it to perform. This helps the model understand the pattern and generate the desired output. Let's cover another example. Suppose you want the model to translate English sentences to French. You provide a few examples to guide it. So in my prompt, I'm going to enter the following: translate the following sentences to French. First is "Hello, how are you?" with its French equivalent; then "I love to read books" with its French equivalent; then "The weather is nice today" with its French equivalent. Finally, I'm going to enter: translate this sentence to French: "I am learning to code." We get our answer: "I am learning to code" in French.
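Here's a minimal sketch of how you might assemble that few-shot prompt in Python. The example pairs are from above; the French translations are standard renderings, included for illustration.

```python
# Assemble a few-shot prompt: examples first, then the new task.
examples = [
    ("Hello, how are you?", "Bonjour, comment allez-vous ?"),
    ("I love to read books", "J'adore lire des livres"),
    ("The weather is nice today", "Il fait beau aujourd'hui"),
]

prompt = "Translate the following sentences to French.\n\n"
for english, french in examples:
    prompt += f"English: {english}\nFrench: {french}\n\n"
prompt += "English: I am learning to code\nFrench:"

print(prompt)  # send this to the chatbot; it should answer
               # something like "J'apprends à coder"
```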
Now, the third technique is few-shot chain-of-thought prompting. Few-shot chain-of-thought prompting combines the principles of few-shot prompting and chain-of-thought prompting. You provide a few examples where each example includes a detailed reasoning process. This helps the model understand not just the task, but also the reasoning steps involved. Let's walk through an example. Let's say you want the model to determine if the sum of the odd numbers in a list is even or odd. You provide the examples with reasoning. So here, I entered: "The odd numbers in this group add up to an even number," followed by a list of numbers, along with the reasoning that adding the odd numbers gives 25, so the answer is false. And I do the same thing with a few other examples. Finally, my question is: do the odd numbers in this group add up to an even number? And then we get our answer, reasoning included.
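Here's a sketch of what that few-shot chain-of-thought prompt could look like as a single string. The specific numbers are illustrative, in the spirit of the classic odd-numbers example.

```python
# Few-shot chain-of-thought: each example shows its reasoning,
# so the model imitates the reasoning, not just the answer format.
prompt = """Q: The odd numbers in this group add up to an even number:
4, 8, 9, 15, 12, 2, 1.
A: The odd numbers are 9, 15, and 1. 9 + 15 + 1 = 25. 25 is odd.
The answer is False.

Q: The odd numbers in this group add up to an even number:
17, 10, 19, 4, 8, 12, 24.
A: The odd numbers are 17 and 19. 17 + 19 = 36. 36 is even.
The answer is True.

Q: The odd numbers in this group add up to an even number:
15, 32, 5, 13, 82, 7, 1.
A:"""

print(prompt)  # the model should reply with the reasoning steps,
               # then "The answer is ..."
```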
So in summary: zero-shot chain-of-thought prompting instructs the model to think step by step without examples. Few-shot prompting provides a few examples to guide the model on the task. And few-shot chain-of-thought prompting combines few-shot examples with detailed reasoning steps to guide the model. By using these techniques, you can improve the model's ability to perform complex tasks and generate more accurate and reasonable responses. See you in the next lesson.
10. Advanced Prompting Strategies: Now in this lesson, we're going to cover a few additional AI prompting techniques used in prompt engineering to improve interactions with LLMs. The three techniques we're covering are generated knowledge prompting, least-to-most prompting, and emotional prompting. So first, we're going to cover generated knowledge prompting. Generated knowledge prompting involves using AI to generate relevant knowledge statements that can help solve a specific task. This is useful when we want to get a thoughtful response, since we're asking the model to create potentially useful information about a given question before generating a final response. Let's check out an example. Imagine you want to create an article about climate change. Use generated knowledge prompting to gather key inputs. In ChatGPT, I'm going to use this prompt: generate some key points about the impact of climate change. Now we've got some key points. I can add a follow-up question: using the generated key points, write an article about the impact of climate change. The AI uses the generated points to create a comprehensive article.
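Here's a minimal sketch of that two-step flow in Python, reusing the same OpenAI-style client from earlier. The helper name and model are placeholders.

```python
# Generated knowledge prompting: ask for knowledge first,
# then feed that knowledge into the real task.
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: generate the knowledge.
key_points = ask("Generate some key points about the impact of climate change.")

# Step 2: use the generated knowledge as context for the final task.
article = ask(
    f"Using these key points:\n{key_points}\n\n"
    "Write an article about the impact of climate change."
)
print(article)
```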
Now let's discuss least-to-most prompting. Least-to-most prompting is a hierarchical approach where prompts are given with an increasing level of assistance. The idea is to start with the least intrusive prompt and gradually increase the level of help until the desired response is achieved. This is useful because it allows AI to break down a problem into subproblems and then solve each one individually. Let's cover an example. Suppose you want the model to break down the steps to teach a child to use a spoon. In my prompt, I'm going to enter the following: I need to teach my child how to use a spoon. Don't solve the problem, but break it down into subproblems. The model breaks down the task into smaller subproblems and provides guidance on how to tackle each one.
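Here's a sketch of least-to-most prompting as a two-stage loop in code; the ask() helper and model name are my own placeholders.

```python
# Least-to-most prompting: decompose first, then solve each piece.
from openai import OpenAI

client = OpenAI()

def ask(prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

task = "I need to teach my child how to use a spoon."

# Stage 1: least help -- ask only for the breakdown.
breakdown = ask(
    f"{task} Don't solve the problem, but break it down "
    "into subproblems, one per line."
)

# Stage 2: more help -- tackle the subproblems one at a time.
for subproblem in breakdown.splitlines():
    if subproblem.strip():
        print(ask(f"{task} Now give guidance for this subproblem: {subproblem}"))
```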
Now, the third technique is emotional prompting. Emotional prompting involves adding emotional cues to prompts to enhance the performance of AI models. These cues can make the task seem more important or urgent, potentially leading to better responses. Let's walk through an example. Imagine we need help with our resume. A non-emotional prompt: can you help me write a resume? Now let's add some emotion to it: this is very important to my career; can you help me write a resume? We get a more polished response. So in summary: generated knowledge prompting generates relevant knowledge to inform task completion, making it easier for users to gather and use information. Least-to-most prompting uses a hierarchy of prompts, from least to most intrusive, to guide behavior, promoting gradual learning and independence. Emotional prompting adds emotional cues to prompts to enhance the quality of responses, making interactions more engaging and effective. By understanding and applying these strategies, users can effectively guide AI models to produce more accurate and relevant outputs, simplifying their interactions with the tech. See you in the next lesson.
11. AI Hallucinations: In this lesson, we're going to cover one of the challenges in AI, which you may have heard about in the news: AI hallucinations. So what are AI hallucinations? These are incorrect outputs generated by AI models. The AI model can sometimes say things that sound true but are in fact inaccurate. This happens because models don't really understand the world like humans do. They just combine information in ways that often work, but can lead to mistakes. What are the causes of AI hallucinations? There are many factors that cause AI hallucinations, but here are a few. Training data limitations: although huge models such as ChatGPT have been trained on the Internet, the information they were trained on may have been incomplete or inaccurate, leading them to generate incorrect information, as with Google's AI Overviews. AI model biases: if the training data contains biases, the model might reflect these biases in its output, such as Google's Gemini generating racially diverse Nazis. Complexity of language: human language is highly context dependent. Sometimes a model misinterprets the context or fails to understand the subtleties, resulting in hallucinations, as in some cases in 2023, when Microsoft's Bing AI produced some creepy conversations with its users. How do we combat hallucinations? One method is using self-consistency prompting. Self-consistency prompting involves generating multiple responses to the same prompt, analyzing them for consistency, and selecting the most coherent answer as the final output. Let's check out an example. In ChatGPT, we can ask: when I was six, my sister was half my age. Now I'm 70. How old is my sister? Let's think step by step. Let's generate another answer for the same question. Now, let's do it one more time. All three lead to the same conclusion, which builds trust in the answer.
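Here's a minimal sketch of self-consistency in code: sample several answers to the same question at a nonzero temperature, then take the majority answer. The answer-extraction logic here is deliberately naive, and the model name is a placeholder.

```python
# Self-consistency: sample several reasoned answers, keep the majority.
from collections import Counter
from openai import OpenAI

client = OpenAI()
question = ("When I was six, my sister was half my age. "
            "Now I'm 70. How old is my sister? Let's think step by step.")

answers = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{"role": "user", "content": question}],
        temperature=0.7,       # nonzero, so the reasoning paths vary
    )
    text = response.choices[0].message.content
    # Naive extraction: take the last number mentioned as the answer.
    numbers = [w.strip(".,") for w in text.split() if w.strip(".,").isdigit()]
    if numbers:
        answers.append(numbers[-1])

majority, count = Counter(answers).most_common(1)[0]
print(f"majority answer: {majority} ({count} of {len(answers)} runs)")
# Most runs should agree on 67: the sister is three years younger.
```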
Now, another method is role prompting. Role prompting is a technique used in prompt engineering to guide an AI model's response by assigning it a specific role or character to embody. Let's check out an example. We're going to use the prompt: you are a friendly kindergarten teacher. Explain what photosynthesis is to a 5-year-old child. Here, the role is kindergarten teacher. The context is friendly and explaining to a young child. And the task is to explain photosynthesis. This not only leads to a more accurate response, but also to a more tailored and appropriate response based on the audience. Now, there's another technique called retrieval-augmented generation, also known as RAG. This allows the AI model to access information from external knowledge sources as context for the prompt. Normally, to implement RAG into a workflow, there are technical requirements. We can initiate a simpler version of this concept by uploading documentation into the chatbot. Let's take an example. On Scribd.com, I can get a PDF version of books. Say I want to use The Four Agreements as the example. Download the document. You can head over to Perplexity AI and upload the document. Now I can ask for key insights related to The Four Agreements book. One more technique is self-evaluation prompting. Self-evaluation prompting involves asking an individual or an AI system to reflect on and evaluate its own work, responses, or capabilities. Let's check out an example. Starting in ChatGPT, I provide the following prompt: explain the concept of photosynthesis. Next, you add a self-evaluation prompt: evaluate the accuracy and completeness of the previous explanation of photosynthesis. Now we can add a follow-up prompt: based on your evaluation, revise the explanation of photosynthesis. Now we have a complete response.
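Here's a sketch of that three-turn self-evaluation flow in code. The key detail is that each turn re-sends the conversation history, so the model can evaluate and revise its own earlier answer; the model name is a placeholder.

```python
# Self-evaluation prompting as a three-turn conversation.
from openai import OpenAI

client = OpenAI()
history = []

def send(user_text):
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=history,     # full history, so it sees its own answers
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

send("Explain the concept of photosynthesis.")
send("Evaluate the accuracy and completeness of the previous explanation.")
print(send("Based on your evaluation, revise the explanation of photosynthesis."))
```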
Now you can see the challenges with hallucinations and how to combat them. It's always important to fact-check AI outputs, since hallucinations may arise. I'll see you in the next lesson.
12. AI Risks and Ethical Concerns: In this lesson, we're going to highlight some of the risks and ethical concerns with generative AI. Generative AI, while powerful and innovative, comes with significant risks, particularly in amplifying biases and spreading misinformation through AI hallucinations. Now, let's start with amplified biases. Generative AI models are trained on vast datasets from the Internet, which inherently contain biases. These biases can be related to race, gender, ethnicity, and more. As an example, in 2023, the Stable Diffusion model depicted professionals such as doctors and engineers predominantly as white males, while nurses were depicted as white females. This misrepresentation does not align with the actual diversity in these professions. Next is AI hallucinations. AI hallucinations occur when generative models produce information that appears plausible but is entirely fabricated. These hallucinations can lead to the spread of misinformation, which can have serious consequences. In the case of Mata v. Avianca, a lawyer used ChatGPT for legal research, and the AI generated false citations and quotes, leading to significant legal repercussions. Some additional dangers include copyright issues and sensitive data. On the copyright side, there's significant debate over who owns the copyright for AI-generated content: the AI itself, the developers, or the users. As recently as June 2024, some of the world's biggest music labels, Sony, Universal, and Warner Records, filed a lawsuit over AI copyright infringement against Suno and Udio. While AI companies have previously argued that their use of copyrighted material falls under fair use, the record labels contend that Suno and Udio are profiting from replicating songs without transformative purpose. This sets the stage for a critical legal debate over the boundaries of fair use in the context of AI-generated content. There is also the risk of sensitive data leaks and privacy violations. Users inputting sensitive information into public AI models pose a significant risk of exposing confidential data. Let's take it one step further. AI models may inadvertently generate or reveal personal information, potentially violating privacy laws like GDPR. In a 2023 example, ChatGPT users reported instances of data leakage, including exposure of personal data, conversations with the chatbot, and login credentials. Some users found they could access details of other users' proposals, presentations, and conversations. There's not a simple technical solution to mitigate all these risks, but I like a phrase coined by The AI Exchange founder, Rachel Woods: if you wouldn't put it on Reddit, don't put it in an AI chatbot. As an individual, you just need to know when and where to use AI tools. Although there are tons of advantages to using these tools, it's important to understand some of the risks. See you in the next lesson.
13. Conclusion: Wow. You made it. I just want to say a huge thank you for spending your time with me in this Prompt Engineering Basics class. Now, I'm Victor Cuevas, and I'm so happy I was your guide on this journey into the world of prompt crafting. You've now got the tools to design clear, structured prompts; experiment and refine your results; and build your own prompt library to save time and spark creativity in your work or personal projects. So whether you're using prompts for brainstorming, content creation, lesson planning, or productivity, you now have a skill that will help you grow in value as AI continues to evolve. If you enjoyed this class, I'd be grateful if you leave a good review and follow me here on Skillshare for future classes. I've got more exciting content coming your way to help you and your family thrive in this digital world. Thanks again for learning with me. You crushed it. Now go make some AI magic, and I'll see you in the next class.