Transcripts
1. 1 Introduction: Hello and welcome to our
course on Generative AI. Thank
you for enrolling and taking the time to learn about this exciting
and rapidly evolving field. In this course, we will explore the different
aspects of Generative AI, including the OpenAI
company and its research, Prompt Engineering, GPT and
diffusion models and how they can help you
with everyday tasks and managing your time
more efficiently. Whether you're a full-time
working professional, a student, or just someone
who is passionate about learning or technology, this course is for you. We will also cover
different tools and applications such as ChatGPT, DALL-E 2, Midjourney, Bing
Chat and Microsoft Designer. To begin with, we will
first start by covering the theoretical components
so that you can build a good fundamental
understanding of the subject. We will then provide
practical examples and hands-on exercises so you can apply the concepts
you've learned and gain valuable
skills and knowledge. By the end of the course, you will have a
comprehensive understanding of Generative AI and how to use it to enhance your creativity
and productivity. We're excited to have you
as part of our class. So without further ado,
let's get started. Here's a quick look at
the course curriculum. We'll start by introducing the
concepts of Generative AI. We'll talk a little bit about
the OpenAI company and its research, and we'll introduce
a couple of AI models. Next, we will cover Prompt Engineering and why
it's important to provide good prompts and
how they can affect the quality of your
output and data. In addition, we will cover the benefits of
leveraging Generative AI and how they can help you with design, creativity
and productivity. After we develop our
understanding of Generative AI, we will dig in and start using applications that
are built on top of the AI models and APIs through hands-on exercises and
practical examples. These include
ChatGPT, Midjourney, Bing Chat, Microsoft
Designer, and Adobe Firefly. Near the end of the course, we will cover an important
topic which is how not to use Generative
AI and best practices. And we will wrap up the
course with some final words.
2. 2 Generative AI: Okay. So what is generative AI? Generative AI is a type of artificial intelligence that can generate new content or data, and this new content or
data is going to be very similar to the training data it has been provided previously. It has the potential
to revolutionize various industries and create new forms of creativity
and productivity. There are many applications
of generative AI, but we'll cover the top three
that are most popular. The first one is natural
language processing, also known as NLP. Generative AI can be used in NLP to generate human-like
text or speech. GPT-3 and GPT-4 are language models
developed by OpenAI. They can generate articles, poetry, computer
code, and so on. You can even build chatbots or conversational bots
on top of them, and you can have them translate text from one
language to another. Next, we have computer vision. Generative AI can be
used in computer vision. It can generate realistic
images and videos. DALL-E 2 is a generative model developed by
OpenAI. It can generate unique images
based on text prompts. You can even use it to edit or manipulate images
through its features. Lastly, we have robotics. Generative AI can be
used in robotics to generate new behaviors
or movements for robots. Its reinforcement
learning algorithms can generate new actions
based on feedback. Just like with anything else, there are challenges
and limitations associated with generative AI. One is data bias. These models are only as good as the data
they're trained with. If the data they're being
trained with is biased, then the generated content
will also be biased. Then we have
computational resources. These models require significant
computational resources for training and
generating new data. This makes them
less accessible to smaller organizations
and individuals. So what happens is that a few of the larger organizations
that have the power and resources to build these models
will build them and then offer them to smaller
organizations through paid subscriptions. Lastly, we have overfitting. Sometimes these models can become too specialized and unable to
generalize to new data.
3. 3 OpenAI company and Research: Alright, now let's
take a quick look at the OpenAI company and its research. OpenAI is an artificial
intelligence research laboratory. It was founded in 2015. It first started
as a non-profit, but later became for-profit. And it has received
massive amounts of funding from giants
such as Microsoft. It offers subscription-based
services to those who want access to their
technology and platform. The company's goals
and vision are developing safe and
beneficial AI technologies, advancing the state
of the art in AI research, and promoting
AI education and policy. OpenAI has some notable achievements that are
worth mentioning. In late 2022, the company announced
ChatGPT and made it available to
the general public for free. In early 2023, it released GPT-4 under
a paid subscription model, which also comes with new
features such as plugins. The development of GPT-5
is currently underway and may become available
for use by the end of 2023. They have made contributions
to the field of robotics. They have also made some of their code open
source, such as Evals, which is a framework for evaluating large
language models, and you can find it on GitHub. In addition, they have
many publications and papers to help the AI
community and the public.
4. 4 AI Models: Alright, now let's start
introducing some AI models. In this section, we're
going to cover two models, GPT-4 and DALL-E 2. GPT-4 stands for Generative
Pre-trained Transformer 4, and DALL-E 2 is the second generation of
OpenAI's DALL-E image model. These are both advanced
Generative AI models, and they each have
different capabilities, applications, and limitations,
which we'll cover next. GPT-4 is a deep learning-based
Generative AI model developed by OpenAI. It uses transformer architecture and it is trained on a
massive amount of text data, which makes it
capable of generating high-quality texts
in various formats, such as articles, stories, code, blogs, and much more. It has the ability to
understand natural language. And it can generate
human-like responses, which makes it great for
human interaction and for applications such as
conversational chatbots, similar to ChatGPT. DALL-E 2 is also a Generative AI model
developed by OpenAI. It uses the diffusion model. It can generate images
based on textual input. It also uses a
transformer architecture. It is trained on a diverse
range of image and text data, and it is capable of generating highly detailed and
realistic images. Here's a quick overview of the capabilities of
these two AI models. They are capable of generating high-quality text or images
based on textual input. They can understand
natural language and generate
human-like responses. They're good at
answering questions and providing
relevant information. They can translate text from
one language to another. They're good at summarizing text, conversations, or chats. And they are good at generating personalized recommendations for users based on the prompts. Here's an overview of
the applications of these two models and how end
users can leverage them. They're good at generating high-quality content for blogs, websites, and social media. They can enhance Chat
Bot interactions for purposes such as customer
support and customer service. They're great for automating translations and
summarization tasks. They can generate
creative writing prompts and assist with writing tasks. They're good at creating personalized
recommendations for users. And they can generate highly detailed and
realistic images. Of course, there are
several limitations associated with
these two AI models. The first one is bias. As we covered earlier, like all AI models, GPT-4 and DALL-E 2 can be biased based on the
data they are trained on. Next, we have output quality. While GPT-4 and DALL-E 2 are capable of generating
high-quality output, the quality of their
output can still be affected by the quality
of the input Prompt. So if you're not
clear and concise about your intent, and
you don't provide enough context, that will affect the content that
it generates for you. And lastly, we have
generalization. While GPT-4 and DALL-E 2 can generate highly diverse
and creative outputs, they may still struggle to
generate outputs that are significantly different
from their training data.
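By the way, both of these models are also exposed through OpenAI's API, so you can call them from your own code instead of the web apps. Below is a minimal sketch of generating an image from a text prompt with the official openai Python package (v1.x). This is just an illustration, not part of the course exercises: it assumes you have an OPENAI_API_KEY environment variable set, and the prompt text and image size are only examples.

```python
# A minimal sketch of text-to-image generation with the DALL-E API
# (openai Python package, v1.x). Assumes OPENAI_API_KEY is set in the
# environment; the prompt text and image size are only examples.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

result = client.images.generate(
    model="dall-e-2",
    prompt="A watercolor illustration of a lighthouse at sunset",
    n=1,                # number of images to generate
    size="1024x1024",   # a supported square size for DALL-E 2
)

# The API returns a temporary URL for each generated image.
print(result.data[0].url)
```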
5. 5 Prompt Engineering: In this section we're
going to cover the basics of Prompt Engineering
and how it plays an important role when using these AI models and how you can leverage them to create prompts that will give you the
best quality output. When using Generative AI models such as GPT, prompts are needed. And these are usually in
the form of textual input. There are typically
two types of prompts. One is a question and the
other one is an instruction. Prompt Engineering is
the process of designing Prompts that can be used to
guide Generative AI models. It helps them generate specific content and
complete specific tasks. The goal around
Prompt Engineering is to create a
Prompt that provides enough information to
the model to generate the desired output without being too specific
or too general. Well-designed prompts
can save time and increase productivity by guiding the model to generate the desired output with
minimal input from the user. There are different
types of prompts, but we'll cover three
in this lesson. First, we have text prompts. Text prompts are the
most common type of Prompt and can be used to generate text-based
content such as articles, stories, or code. Next, we have image prompts. Image prompts can be
used to generate images based on a specific
visual concept or idea. Lastly, we have
dialogue prompts. Dialog prompts can
be used to generate conversational responses
or chatbot interactions. Here are some best practices to consider when creating prompts. Be specific: a
specific prompt will help the model generate
more precise output. Be concise: a concise prompt will
help the model focus on the most relevant
information. And be diverse: a diverse set of prompts will
help the model generalize
generate more creative outputs. So in summary, here's why
you should care about Prompt Engineering and
well-crafted Prompts. Prompt Engineering is
a critical part of using Generative AI
models effectively. By designing
well-crafted prompts, users can guide these models to generate the desired output
quickly and efficiently. And with the increasing
sophistication of the Generative AI models, Prompt Engineering is
becoming an essential skill. In this lesson, I'd like to
cover three examples for Prompt Engineering and
well-crafted prompts that you could use in real life. So the first one is generating attention-grabbing headlines. So here's an example: generate a social media post about the benefits of
using our product. Now here we've specified
that we want the model to generate a post of type social media. So based on its
previous training data, it knows what a social
media posts looks like. We're also being
very specific of telling get the context
and the intent, which is the benefits
of using our product. And it can generate us
a high-quality output, assuming we have given it enough information previously
about our product. Next, we have generating personalized social
media content. So here in the second example, our prompt is: generate a
social media post about the latest tech
news that would be relevant to a user
interested in AI. So again, this is a well-crafted
prompt because we've specified the type here as
a social media post, and the post is going to be talking
about the latest tech news. But it also narrows down the context to not
just any tech news, but tech news that
is relevant to somebody who was
interested in AI. So it's being very clear, it's being very concise and
it's being very specific. And this will lead to
a very good output. And the last one here, we have generating social
media posts for businesses. So here our
prompt is: generate a social media post about
a new product launch that highlights its key features. Again, the
intent is clear. The context is clear,
and obviously, having provided it context and information
about the product and its features previously, this will generate a
high-quality output.
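The same prompt-engineering principles apply whether you type the prompt into ChatGPT or send it through the API. As a rough sketch only (openai Python package v1.x, OPENAI_API_KEY in the environment, model name just an example), the third example prompt could be sent like this:

```python
# A rough sketch of sending a well-crafted prompt through the chat completions
# API (openai Python package, v1.x). Assumes OPENAI_API_KEY is set; the model
# name is only an example.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Generate a social media post about a new product launch "
    "that highlights its key features."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```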
6. 6 Design Creativity Productivity: In this section, we're
gonna cover how using Generative AI Tools and Applications can
help us with design, creativity, and productivity
in our daily lives. Generative AI can
enhance creativity and streamline design tasks leading
to increase productivity. It can help us by
generating new ideas, exploring different
design possibilities, and automating repetitive tasks. It can also help us by handling
some of the tedious work, allowing us to focus on more high-level tasks such
as concept development, leading to more efficient
use of our time. Prompt Engineering can
be used to generate specific design elements
such as logos and patterns based on
certain specifications. It will save designers
a lot of time, and it also helps lead to more consistent and
cohesive designs. Here's some tools you
can use to utilize Generative AI for
your daily tasks. ChatGPT, which is a conversational
chatbot built on top of the OpenAI APIs and the GPT
model to generate content. Next we have Bing Chat, which is built by Microsoft and is another conversational
AI tool that can be used for generating
contents such as social media posts
and blog articles. And lastly, we
have Google Bard. This is very similar to ChatGPT, but powered by
Google's most advanced large language model, PaLM 2. The last three tools we covered
generate textual output. And in this lesson we'll cover three tools that will
generate images. First, we have Midjourney. So Midjourney is a
Generative Design tool that uses AI to generate unique visual designs such as posters or book covers
based on the user input. Next we have Microsoft Designer, and this one is a
design tool again, that uses AI to assist
designers in creating accessible designs by providing suggestions and recommendations. And lastly, we have
Adobe Firefly. This one is an experimental AI-powered tool that can be used for generating unique
visual designs such as abstract graphics
and illustrations. Here are some examples of
what you could potentially accomplish with the help of
Generative AI in design. So Generative AI can
be used to generate unique logos for brands based
on certain specifications. It can be used to generate
unique patterns for textiles, such as clothing or wallpaper. It can be used to generate unique musical compositions based on certain specifications, such as genre or mood. And lastly, it can be used to generate unique visual pieces, such as abstract paintings
or digital illustrations.
7. 7 ChatGPT Account Setup: Now that we have walked
through the theory behind Generative
AI and the Models, it's time to go through
some hands-on exercises and gain practical experience
to bring it all together. We'll start with ChatGPT. You don't need a
whole lot to set up and get started with ChatGPT. All you need is a device with a browser and an
Internet connection. OpenAI has also released
a ChatGPT app for iOS so you can use it natively
on your phone if you like. Okay, before using ChatGPT, we first need to create
an OpenAI Account. In order to do
that, just launch the browser and
go to openai.com. Once you're here, on
the right-hand side, you'll see login and
you'll see sign-up. If you already have an account, you can click on Login, put in your username
and password, and log in. If you don't have an
account, click on Sign up. And once you click on Sign up, you can put in
your email address and go through the
registration process. Once that's all done and
you verify your email, you can log in with your newly created account
and start using ChatGPT.
8. 8 ChatGPT Questions and Instructions: Alright, once you've gone through the registration process
and you login to ChatGPT, this is the user interface
that you're presented with. Now let's take a
quick tour together. On the left-hand side, this is where you have
all the historical data. So one of the nice
features about ChatGPT is that every time
you start a new Chat, it will retain that chat and all of the information. So if you ever wanted to go back and take a look at
a specific prompt that you generated or a
specific output as a result of a prompt that
you put into ChatGPT. This is the perfect way
to actually do that. And you can even continue that conversation with ChatGPT if you want to. So this is one of
the nice features. It keeps a record of all the Chats you've
had in the past. There are also some actions
that you can take here. For example, you can clean up your history and the chats by
just simply deleting them. Now on the left-hand side here, because this is my
personal account and I use this for personal projects, I've blurred it; that's why
you can't see anything. But once you start chatting
with ChatGPT, your conversations
will start showing up here in the left-hand menu bar. And on the top here you
have the plus New Chat button. So whenever you want to start a new chat with ChatGPT, all you have to do is
come here and click this and it will load a
brand new chat and you can continue. In the center
here we simply have some examples and
some informational facts. So here are some example
prompts that you can use to get
started with ChatGPT. You can either just type these in here in the send-a-message
area, or you can simply
just click these prompts and it will
auto-fill them for you. And in these two columns here, there's just some general
informational facts about the capabilities and limitations of ChatGPT that you
need to be aware of. Now we're ready to run through some exercises and go
through some examples. But before doing so, one
thing I would highly recommend is that whenever
we go through an exercise, first try to go
through it yourself, then get
ChatGPT to
do it for you, and then compare the two to see how accurate
the results are. For our first example, we're going to start with
something very simple. When it comes to learning, ChatGPT makes a great teacher
and a great instructor. You can literally
ask it to explain anything to you in simple terms. You can even give it a persona: you can make it an expert
in a specific subject and ask it to teach you things in very simple terms, so it's easy for you to learn
a specific topic or subject. And some things that would take you days to research, you could learn through ChatGPT
in a matter of minutes. Let's say you're a
psychology student attending university and you're about to take a course in neuroscience. Now you have no idea
what neuroscience is, but you'd like to find out, and this is exactly how you
can leverage ChatGPT so that it can explain to
you on a high level what the subject is all about. So let's go to the
prompt section here and type: explain neuroscience. Now one thing we
can do is we can stop here and run the prompt. And ChatGPT will try to
gather as much information so that it can explain on a high level what
this subject is about. Now one thing I would like
to bring to your attention is the topic of
Prompt Engineering, which is something we touched
on in the earlier lessons. And how it can play a
pretty crucial role in the quality of the output that ChatGPT will
generate for you. Here we simply talk
about the what. So we're trying
to get ChatGPT to explain to us what
neuroscience is. One thing we can do
is we can provide it a little bit more
context so we can, for example, talk
about the how and who. So the audience this
is actually for and how ChatGPT should go
about explaining this. And that
makes our prompt clear. It makes it concise. And also it gives
ChatGPT more context, so it can go and find more relevant information from the web and its training data. So let's go ahead and I'll
add a little bit more to make this a more
well-crafted prompt. So what we can say is: explain neuroscience in simple terms, suitable for a student who is a psychology major in their second
year of university. So here we talked
about the what, and here we talked
about the how. So in simple terms. And then here we talked
about the audience. Who is this for? So this
is for a student who is a psychology major
in the second year. So we're being very, very specific with our prompt. So let's go ahead and run this. Now ChatGPT will do its best to try to come up with as much
information for us. But one thing I would like you to remember is that this is just going to be
a very high-level and simple overview
of this subject. But it will help you grasp
the basic idea behind it. So as you can see here, ChatGPT has finished
explaining this. So you can read through
this and get a basic idea
of what this subject, neuroscience, is all about. In our last example, we instructed ChatGPT to do something for
us and that was to explain a specific subject. In the next example, we're going to ask
ChatGPT questions. So, similar
to how you would go and type a question in a
search engine such as Google. You can also do the exact
same thing in ChatGPT. So you can ask a question. So for example, let's say you want to enroll in
an MBA program. And you would like
to know what type of things you would learn about finance when you
attend that program. So we can ask ChatGPT that
question. So let's go ahead and do that. What will I learn about the principles of finance
in an MBA program? Now one thing we can do
again with our Prompt, we can just stop here
and run the prompt. And ChatGPT will
give us an answer. But a nice feature about
ChatGPT is that you can control the format of your outputs, so you
can tell it how you want the output
to be formatted. So for example, there are a couple of
options: you can have it format the output in a
bullet point format. You can have it in a table
format and so on, right? So there's lots of options
that you can choose from. So let's go ahead and say bullet point format. So I'm just going
to instruct it to format the output
in bullet points. So let's go ahead and
run this command. And this is exactly what ChatGPT is going to
present back to us. So you can see that
it's highlighting the key features about
the principles of finance and what
you would actually learn about them
in an MBA program. And here it
all is. As you can see, everything is nicely formatted in a
bullet point format. You can also do the same
thing in a table format, if you'd like to
specify columns, rows, and things like that. And here it says these are some of the
key principles and concepts covered in an MBA
that focuses on finance. We can even go one step further. Let's say one of the bullet points
here caught your interest and you wanted to get a
little bit more detail. What we can do is take this one
step further and get ChatGPT to dig in a little bit deeper on
some of these topics. So let's say, for example, let's choose this one here. Financial statement analysis, evaluating and interpreting
financial statements, balance sheet, income
statement and cash flow statements to assess the
company's financial health. So this is a good one. So what we can do is
go ahead and copy this. And let's create a prompt here. So what we want to say is, I'm going to make ChatGPT
an expert in finance by saying: as a finance instructor. So I've told ChatGPT
who I want it to be. The intent is for it to
be an expert in finance. So as a finance instructor, elaborate and provide
more details on. And now what I can
do is I can put what I copied and I can paste
it here in quotes. And now I can actually
run this command. So let's go ahead and do that. And this is where ChatGPT
will start providing us with details of exactly
what those things are. So you can see Balance
Sheet analysis: it gives you more details about just that area. Income
Statement Analysis. It tells you what that
subject is about. Cashflow statements, same thing, you can see the
bullet points here. And then financial
ratio analysis, a comparative analysis and
benchmarking and so on. So now you can
even use ChatGPT to go more in-depth on some of the
outputs that you get. You can even go one
level deeper by doing the exact same thing,
digging deeper into one of the topics
it provided from one to five here
on the specific topics. So as you can see, ChatGPT can be a very powerful tool from a learning
perspective.
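If you prefer the API over the web UI, the two tricks from this lesson, giving the model a persona and controlling the output format, map onto the system and user messages. Here's a hedged sketch only, with the persona and topic borrowed from the finance example above; the openai v1.x package, an OPENAI_API_KEY environment variable, and the model name are all assumptions on my part.

```python
# A hedged sketch of the two tricks from this lesson over the API: a persona
# via the system message and an output-format instruction in the user message.
# Assumes openai v1.x and OPENAI_API_KEY; the model name and wording are examples.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        # The persona: "as a finance instructor".
        {"role": "system", "content": "You are a finance instructor."},
        # The question plus the output-format instruction.
        {
            "role": "user",
            "content": (
                "What will I learn about the principles of finance in an MBA program? "
                "Format the output in bullet points."
            ),
        },
    ],
)

print(response.choices[0].message.content)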
9. 9 ChatGPT Blog Post: In this exercise, we're
going to focus on how ChatGPT can be leveraged
by content creators. For this example,
we're going to write a blog post on a
specific topic using ChatGPT to generate an outline and suggestions for the posts. We will then compare the results and you'll see the ways in which ChatGPT can assist
in Content Creation. We will provide two prompts. The first one is to get ChatGPT to help with outlines and ideas. And the second prompt
is to get ChatGPT to do more work for us in terms
of writing the actual blog. And you can decide which works best for your
style of writing. The structure of a
typical blog post consists of four main parts: the title, introduction,
body, and conclusion. Now for our first Prompt, the goal here is
to get help from ChatGPT to create an outline and to give us suggestions and not to necessarily do
all the work for us. For our first Prompt,
what I'm going to do is I'm actually
going to copy paste it here so you don't
have to watch me type word by word so that
I don't waste your time. So I picked just a random
topic here for this blog post. So let's go through it
together so you can see the different
components of this Prompt. As a Writing Assistant, I need your guidance to create an outline
and suggestions for a blog post on the topic of artificial intelligence
in health care. Please provide some
key points and sub-topics to cover
in the blog post. So here, I'm not
necessarily asking it to write the blog post for
me entirely, I'm just asking it to
give me some suggestions. So let's go ahead and
input this Prompt. Okay, ChatGPT has finished creating the output
for our prompts. So let's go ahead and
take a look together. So as you can see
here, first thing, this is the first component of our Blog, which is the title. So you can see
ChatGPT has created a nice style rule
for our blog posts, revolutionizing healthcare, the power of artificial
Intelligence. It's a very good title actually. I like this one; it's
catchy and it sort of conveys the point about what the blog post
is going to be. And you can see that
the next section that ChatGPT has provided
for us is Introduction. So this is great
because ChatGPT is not necessarily writing the blog
post for us entirely. It's creating an outline, so it's giving us a title, and it actually gave us
an example title here. So this is the outline, this is the introduction. Then
after the introduction, you have basically all the subtopics that you could talk about in this field. And you can see that
we have several here. So this is basically the body. This will make up the
body of your blog posts. And if you scroll all
the way to the bottom, you'll see that there's
a conclusion section. Now note that ChatGPT hasn't really created the
content for you. It's giving you an outline
along with some suggestions. So for the introduction, it hasn't actually written
you a paragraph. It hasn't created
the content for you, but it's telling you how you
could go about doing so. So here it says briefly introduce the concept of
artificial intelligence and its growing significance
in various industries, particularly in health care. So this is where
you can actually go and research in this area and then cure rate
your own content and add it to your
Introduction Part. Another one here is highlight the potential of AI to
transform health care. So this is again, another thing you can go
and research on the side. And then once you
are able to write your own paragraph
from the curated data, case studies,
whatever it is that you're researching
in that topic, you can actually
include it there. Then we have the subtopics
here in the body. So, applications in diagnostics and
disease detection. So here, ChatGPT is making a recommendation that in
our blog posts we can discuss the role of AI in early disease detection
and diagnosis. Here is patient
care and treatment. Again, it's giving you tons
of ideas and suggestions of some other things
that you can actually discuss in the
body of your blog. So this is actually
pretty fantastic and amazing because this is exactly how you should
be using Generative AI. You want it to spark
your creativity. So as a writer or
whenever you hit a wall, you can actually use
Generative AI and ChatGPT to help
you get past that. And as you can see here, we have the conclusion section. Again, it gives you a suggestion
to recap the transformative potential
of AI in healthcare and its ability to improve
patient outcomes and so on. Here, there is a note which I find very
helpful as well. It says this outline provides a comprehensive structure
for your blog posts. But feel free to
modify and expand each section according
to your preference and the desired length
of the blog post, because we haven't
really told it how long we want the blog post to be. So that's something
we can consider, which you'll see in
our second Prompt. And one other thing I
really wanted to show you here is that, now that you have this outline, you can use ChatGPT to dig a little bit deeper into
any of these topics. Now one example here I wanted
to illustrate is the title. So I actually really
liked this title, but let's say you
didn't, and you're having
a difficult time coming up with other titles. So this is again where
you can leverage ChatGPT. And if you want to keep
things very simple, you can simply follow up
because ChatGPT now has context in terms of what you've asked it and what output
it has given you. So now you can say, please generate five
more interesting and eye-catching titles for this blog post. And ChatGPT will generate
five more for you. So as you can see here, we got five more. And if you'd like some of
these, you can use them. If not, you can use a piece of each title and create
your own and curate your own. But this is the
power of ChatGPT
when it comes to content creation. For our second prompt, I'm going to try
and get ChatGPT to write more of the
blog post for us. So I'm going to try and leverage it to fill in some of the blanks. In the first prompt, we just asked it for
suggestions and an outline, but in the second prompt, we want it to do a little bit more work for
us and fill in some of the blanks and curate
some of the content for us wherever it's possible. So again, I'm not going
to type everything. I just picked a random topic
in sustainable energy. So let me go ahead
and paste that here and we'll go
through that together. So as a writer, create a compelling and
informative blog post on sustainable energy solutions. Please provide a
brief introduction to the topic along with the key points and
suggestions on statistics, case studies, or
actionable steps to make the Blog Post engaging
and persuasive. This blog post should present a thought-provoking piece that promotes sustainable
energy practices. The blog post should be
approximately 1,500 words. So note that I'm not just asking for an outline
and suggestions. I'm also setting the tone for this blog post and
for the audience. And I'm also telling it how long the blog post should be. So I'm setting some
requirements length-wise. So let's go ahead
and run this Prompt. Ok, ChatGPT has finished
the output for our Prompt. Now actually what
happened was that the first output that it generated, I wasn't really happy with
it because it was very similar to the output
of the first Prompt. So all I did was I
clicked this Regenerate response and it created
a second output for me, and I'm a lot happier with that. So let's go through it together. So you can see here, again, just
like the first round, it created a title
for us, which is nice. The one difference between the first Prompt and
the second prompt is that you can see that it actually created
the introduction for us. So if we scroll up again
to the first Prompt, it didn't really write
a paragraph for us. It didn't curate the content. It didn't take care of the
content creation for us. It just gave us ideas
on what we could include in the introduction. So these are suggestions or recommendations of what to
include in our introduction. So for example, briefly
introduce the concept of artificial intelligence and its significance in
various industries. Here. In the second Prompt, it actually wrote the
introduction for us. So that's one less
thing for me to do as a writer or
a content creator. And it's actually a
pretty good one: imagine a world where our energy needs are met
without harming the planet, where clean and
renewable sources power our homes, businesses,
and transportation. So this is actually pretty good. The body looks very similar, so it's just an outline with recommendations and suggestions
of what you need to do. So this is actually very
similar to the first prompts, so it's just giving
you suggestions: present
compelling statistics on the current state of
energy consumption and its impact on the
environment and so on. So this is actually very great. This is where you can
actually go and research in this area and included in
your body of the blog posts. And one other thing is
that you can see that it also created a
conclusion for us, again, just like the introduction,
which is really nice. Because again, as writers, this is one less
thing for us to do. So as individuals, communities, and nations, we hold the power to shape our energy future. It's very powerful. I liked this paragraph; it engages the readers. So again, this is sort
of like how content creators and writers can
leverage the power of ChatGPT. Now, before ending this
exercise and to wrap up, I just wanted to
highlight a key point. And that's not to blindly copy paste content
generated by ChatGPT. This is very, very
important and you need to pay very close
attention to this. You need to ensure you do your own due diligence and
check the facts if you're using generated content
from ChatGPT and other Generative AI
applications and tools. Make sure you're
providing credit where it's due, and you need to make sure
that you're not infringing any copyright laws.
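One technical detail worth knowing if you ever move this blog workflow from the ChatGPT UI to the API: the UI remembers your conversation automatically, but the API is stateless, so a follow-up like "please generate five more titles" only works if you resend the earlier messages yourself. A rough sketch of the two-step workflow is below; the openai v1.x package, the OPENAI_API_KEY environment variable, and the model name are assumptions, not part of the course.

```python
# Sketch: a two-turn content-creation conversation over the API.
# Unlike the ChatGPT UI, the API keeps no history, so prior messages are
# resent on each turn. Assumes openai v1.x and OPENAI_API_KEY; model name is illustrative.
from openai import OpenAI

client = OpenAI()
model = "gpt-4"

messages = [
    {
        "role": "user",
        "content": (
            "As a writing assistant, I need your guidance to create an outline and "
            "suggestions for a blog post on the topic of artificial intelligence in "
            "health care. Please provide some key points and sub-topics to cover."
        ),
    }
]

# First turn: ask for the outline and suggestions.
first = client.chat.completions.create(model=model, messages=messages)
outline = first.choices[0].message.content
print(outline)

# Second turn: append the assistant's reply, then the follow-up request.
messages.append({"role": "assistant", "content": outline})
messages.append(
    {
        "role": "user",
        "content": "Please generate five more interesting and eye-catching titles for this blog post.",
    }
)

second = client.chat.completions.create(model=model, messages=messages)
print(second.choices[0].message.content)
```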
10. 10 ChatGPT Emails: For our next exercise, we're going to use ChatGPT to draft e-mails for a few
different scenarios. This is very powerful because ChatGPT will save you
a tremendous amount of time by helping you
use your time more efficiently and increase
your productivity. Which is the whole point behind harnessing the power
of Generative AI. For our first scenario, we're going to draft an email to follow up with
a job application. So pretend that you just applied
for a job that you like. And now you want
to follow up with the interviewer to express your gratitude and let them know that you're very interested
in the position. What we wanna do is
to get ChatGPT to do most of the work for us and create the template
for the email. And all we have to do is fill in
the blanks and the gaps. So here's the prompt that's going to help with this e-mail. I'm just going to paste that in here and we'll go
through it together. So: draft a follow-up e-mail to express gratitude and
reiterate your interest in a job position you recently
applied for. You can make it more specific by putting in the company's name, and you
can put in the job title. But we're not going to do that. We're going to keep it a
little bit more high-level and generic,
and I'll show you why. So let's go ahead
and run this Prompt. ChatGPT has finished generating the output for our email. So now let's walk through it
and take a look together. So here you can see this is
the start of the email. So, Dear Hiring Manager's Name. So as you can see, ChatGPT has created the majority of
the e-mail for us, maybe pretty much 90 to 95% of the email,
which is great. So all we have to do
is fill in the blanks. Obviously, you want to read over the email first to make sure everything is
relevant and correct. But you can see that
here we can just replace that with our
interviewer's name or the hiring managers
name or whoever this e-mail is
actually meant for. Over here you can
see that it says, I wanted to take a
moment to express my sincere gratitude for
considering my application. Here again, it's a placeholder so you can put in the job
position you applied for. The company name that
you applied for. Express your gratitude
and appreciation. And then it just
basically follows along. And you can see there are gonna be placeholders
with the company name, which you'll want to fill in. Now, something you want to be careful about, something I noticed about
ChatGPT and Generative AI in general is that sometimes things can get very repetitive. You don't want to say
the company's name 10, 20, 30 times in an e-mail. So this is something you
should keep an eye out for. So go through this and once you actually finish
filling out all the blanks, go ahead and edit this. Also, one thing you
can notice is that this email is a little
bit long for someone who's going to be quite busy with their schedule
as a hiring manager. So one other thing you
can do: this is about five or six paragraphs, and you can actually go ahead
and make this shorter. And because ChatGPT already has context of this email from
your previous Prompt. You don't have to
be very fancy here. So all you can say is follow
up with another prompt and say: make this email shorter. Now, this is going to
put ChatGPT to work. It already knows what email you're actually
talking about. And it's just gonna
make it shorter. So now you can see that the
email is a lot shorter. It still has the same points that you are trying
to make in terms of expressing your gratitude and just letting the interviewer
know that you're very interested in the position and it's still very similar
to the previous Prompt. It has the placeholders
that we can replace. And at the end here
you can put in your name and
contact information. So this is how ChatGPT
can help us craft Emails. Next, we're going to
ask ChatGPT to craft us an email for a type of event invitation. So imagine you
want to invite an industry expert to speak at maybe a conference, or maybe attend a
discussion or debate. And this is where
you can actually get ChatGPT to help you
with creating that email if you don't know
where to start. For this, we're going to use the following prompt
as an example. So draft an email inviting
industry experts or influencers to speak at a conference or participate
in a panel discussion. So go ahead and run that Prompt. Let's walk through the
email together here. So as you can see,
ChatGPT has done a pretty decent job in
crafting this e-mail for us. So here you
can put the person's name. And then here's an introduction
in this paragraph. There are placeholders
that, again, you can fill just like
the previous e-mail and you can talk about
what the event is, the date of the event, and then the location. And then here this
paragraph talks about sort of like the value that person brings to the event and some of the opportunities available
at the event and so on. And again, as you can see, this email looks a
little bit longer. This is why you shouldn't
just copy paste this straight out of
ChatGPT. You should read through it, fill in
the gaps, and then also try to make it shorter
and more to the point. I'm sure there's some
repetitive things here that you can take out. And then at the end,
it thanks the person. And then here
again, placeholders that you can replace
with your name, your organization,
or your title, and then the contact
information. For our next scenario, we're going to ask it to
generate an email around, say, a product or
service Introduction. So imagine you are a company providing service to
your customers and you just released a new
feature or new product that you're very excited
to share with them. And just sort of
like as awareness, you wanted to send
this e-mail out so customers know that this is
available to them for use. So for this, we're
going to use
the following prompt. So: compose an email introducing a new product or service
to potential customers, highlighting its key
features and benefits. So let's go ahead
and run this Prompt. Alright, let's go through
this e-mail together so you can see it has
generated the e-mail. But one nice thing is
that it has summarized
the features in a nice
or benefits of product X. And basically it just goes
through the features here: feature one, feature two,
feature three, feature four. And obviously you
would modify this to match your criteria, whatever they may be. And then following up. Also important part
of the e-mail. It doesn't just talk
about the features, but it talks about the
benefits to you as a customer. So again, very important and really liked the formatting
here in bullet format. You can, as you witnessed
earlier, ChatGPT, you can control the type
of output degenerates, and it has a lot of different ways you
can generate the output. I can do bullet points,
it can do graphs, they can do CSV file, and it can do tables. So you can control
that in your prompt. In here we kept the very simple as we discussed it previously. But here we have the benefits and the rest of the email with at the
end you got your name, your title, and your
contact information. So again, very
well-crafted e-mail for a few seconds is not
bad. This also works. This application
works really well. If you also have
your brand or you have your clientele
or customer base, or just the audience in general, and you'd like to send them
updates or newsletters. This works very
similarly, and we can leverage
ChatGPT for that as well. As the last example
for this lesson, we're gonna get ChatGPT to
help us in getting feedback. So for example, let's say you
just worked on a project. And now you want to get feedback from either
your colleague, co-worker, or if you're in school from your
instructor or teacher. For that, we're going to
use the following Prompt. Write an email to request feedback from a mentor, teacher, or colleague on a
project or piece of work you have done or you
have recently completed. So let's go ahead and run this. As you can see,
ChatGPT has finished putting together the emails, so we'll skip all of this
again there Introduction. But one nice thing
is that you can see that ChatGPT actually again, formatted things nicely
in a list format. So because this is a
feedback requests e-mail, you can see that
I kindly request your feedback on the
following aspects. And the nice thing is ChatGPT. Not only it's actually
filled this out for you and is
written the content. But you could just
use this as sort of like inspiration of what are the things that
you can ask for. So you can treat
this as a template, or just
suggestions or ideas. ChatGPT is telling us what we can ask someone for in
terms of feedback. We can ask them for
overall assessments. So how did we do overall? We can ask about strengths. So we can ask that
person to give us the feedback in terms of what things we did
well on that project, we can ask for areas
of improvements and then any additional
insight or thoughts or comments that that
person may have. So this is very nicely
formatted by ChatGPT. So again, the rest of the
email and the conclusion. Now again, just like
any other prompt, you can control the tone, you can control the length. You can even tell it in your prompt: make sure this
email is no longer than 200 words or 500 words or whatever
you want it to be, however long you want it to be. But hopefully, these
scenarios have demonstrated that it's really easy for you to get help
from Generative AI to craft some of these emails and
how much easier it's going to make your
life and how much time it's going to save. And hopefully those prompts
and scenarios we covered, you can note them down and use them in
your daily life.
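Since the four email prompts in this lesson all follow the same shape, you could also wrap them in a small helper if you ever script this instead of using the web UI. This is only a sketch under my own assumptions: the openai v1.x package, an OPENAI_API_KEY environment variable, and the model name are illustrative, and the scenario names are made up; the prompt wording mirrors the prompts from this lesson.

```python
# Sketch: reusing the email prompts from this lesson as simple templates.
# Assumes openai v1.x and OPENAI_API_KEY; model name and scenario keys are illustrative.
from openai import OpenAI

client = OpenAI()

EMAIL_PROMPTS = {
    "job_follow_up": (
        "Draft a follow-up e-mail to express gratitude and reiterate your interest "
        "in a job position you recently applied for."
    ),
    "event_invitation": (
        "Draft an email inviting an industry expert or influencer to speak at a "
        "conference or participate in a panel discussion."
    ),
    "product_intro": (
        "Compose an email introducing a new product or service to potential "
        "customers, highlighting its key features and benefits."
    ),
    "feedback_request": (
        "Write an email to request feedback from a mentor, teacher, or colleague "
        "on a project or piece of work you have recently completed."
    ),
}


def draft_email(scenario: str, extra_instructions: str = "") -> str:
    """Send one of the canned prompts, optionally adding constraints such as
    'Make sure this email is no longer than 200 words.'"""
    prompt = EMAIL_PROMPTS[scenario]
    if extra_instructions:
        prompt += " " + extra_instructions
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(draft_email("job_follow_up", "Make sure this email is no longer than 200 words."))
```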
11. 11 Summarize Books: In this lesson, we're
gonna learn how to use ChatGPT to Summarize any book. And this is gonna
be a huge help to save you a tremendous
amount of time again, to become more efficient. So we're going to cover
two different prompts. The first one is going
to be very simple, and the second one is going to be a lot more tailored
than detailed. And then you'll be able
to see the difference. Now, let's walk through
the output together. So as you can see here, ChatGPT did a pretty decent job in terms of
summarizing this book. Given that the Prompt was very, very simple and not specifically
tailored to anything. So you can see that ChatGPT has the Introduction
and it talks about what type of book this is. So it's a self-help book that provides comprehensive guide to personal or professional
development. So bit of an introduction. And you can see here, obviously the book was seven Habits of Highly
Effective People. And interestingly enough,
ChatGPT actually created seven bullet points and covered those seven habits in
the seven bullet points. So this is actually
very readable, very clear, very concise,
and you can see at the end, it also has the
conclusion, and it just again highlights what
the book is about. So provides practical
strategies and insights to help individuals
cultivate positive habits. So this is great. And you can see again, ChatGPT was able
to determine the best output, which was the bullet
point format. So this is not bad. And considering a book
that could be 300, 400, 500 pages, now, you can just take a
couple of minutes, read through this and basically
get the gist of it and be familiar with the book and the concepts it's trying
to present to you. For our next exercise, we're going to ask ChatGPT to summarize a book that's
of type fantasy. But for this one, we're going to
craft the prompt in a way that is a lot more
customized to our needs. So let me go ahead
and paste that here. So here I got this Prompt. So please provide a brief
overview of the book, harry Potter and the Sorcerer's
Stone, by this author. So we're giving it the title of the book and the author's name. And then it here
where your prompt is actually becoming more customized compared to
the previous Prompt. That was very simple. So
your summary should be kept. Your summary should capture
the main characters, central themes, and
key plot points in three to four
paragraph summary. So we're telling it
what we want to see in the output and what
the length should be. So here we said don't go over
three or four paragraphs. You can even exchange
this or replace this with something like 500 words or 1,000 words or
something like that, if you wanted to. Then: written in concise
bullet points. So we're specifying the output. Again, you can change this to something else,
whatever you need, and then feel free to highlight the most impactful moments, major conflicts, thought-provoking ideas
presented in the book. So this is additional
information that can be included in the output of what ChatGPT is
going to give us. And then again: ensure that
the summary provides readers with a clear understanding of the book's essence
and significance. Remember to maintain the
desired length and format specified above while
generating the summary. Again, it's enforcing the
length and is telling it: don't go over what
the length is. So let's go ahead
and run this Prompt. Let's walk through to see
what ChatGPT came up with. So as you can see
here, it's finished summarizing the
book, Harry Potter. So this book is the first
one in this series. The main character, Harry
Potter, is an orphan. And then it explains that
the central theme revolves around the battle
between good and evil. And then here it talks about
some key plots in the book. And as you can see here, it's not too lengthy, so it basically obeyed our length rule where
we told it don't go over three to
four paragraphs. Everything is nicely readable and broken
down into bullet points. And you can see that we
asked it to tell us what the book signifies and
that's exactly what it did: the book explores
themes of friendship, loyalty, courage, and
the power of love. And then at the end it tells us
that the book is basically setting the stage for subsequent books
to come in the series. So not bad, ChatGPT did
a pretty decent job. So hopefully these two
examples showed you how you can actually use ChatGPT to Summarize any book
in any category.
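Because the tailored book-summary prompt is really just a fill-in-the-blanks template, it's easy to reuse for any title. Here's a small sketch that only builds the prompt text (you could paste the result into ChatGPT, or send it through the API as in earlier lessons); the function and parameter names are my own, not part of the course.

```python
# Sketch: building the tailored book-summary prompt from this lesson as a template.
# Nothing here calls an API; it just produces prompt text to paste into ChatGPT.
def book_summary_prompt(title: str, author: str, paragraphs: int = 4) -> str:
    """Build the tailored book-summary prompt for any book."""
    return (
        f'Please provide a brief overview of the book "{title}" by {author}. '
        f"Your summary should capture the main characters, central themes, and key "
        f"plot points in a {paragraphs}-paragraph summary, written in concise bullet "
        f"points. Feel free to highlight the most impactful moments, major conflicts, "
        f"and thought-provoking ideas presented in the book, and ensure the summary "
        f"provides readers with a clear understanding of the book's essence and "
        f"significance. Remember to maintain the desired length and format specified above."
    )


print(book_summary_prompt("Harry Potter and the Sorcerer's Stone", "J.K. Rowling", paragraphs=3))
```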
12. 12 ChatGPT Summarize Text and Articles: In this lesson, we're going
to learn how to use ChatGPT to Summarize any text
or any Articles. And there are a couple
of ways that you can do that and I'll show
you both in this lesson. So first of all, I came
across this article. Again, it doesn't really
matter what article that is. You can use any article
that you're interested in. I just happened to
come across this one. So this one is from Stripe improving instant
payouts by this author. So you can see that here
it talks about the service and how they've
improved the service. So one thing we
can do in ChatGPT, instead of reading through
this entire article, we can just get ChatGPT
to summarize this for us. The first way we're
going to do this is just simply through texts. Again, this could be
really, any texts doesn't have to be this article, it could really be anything. So let's just go ahead and say, Summarize the following text. Let's give it a length
in less than 200 words. Okay? And on the new line here now we can just go and copy
paste this text. So let's go ahead and copy this. I will copy some more here. And you get the point.
You just keep copying
any text that you want summarized. So this is what we
can do, very simple Prompt, as you can see, Summarize
the following text in less than 200 words. So let's go ahead and
run this Prompt and see what ChatGPT comes up with. Okay, let's walk through
the output of ChatGPT so you can see that
it has summarized the article
that we're interested in, in less than 200 words, which is great, very
readable, in three or four paragraphs. And here you can see
that it talks about Stripe and when
it introduced the service. It talks about some of
the improvements and enhancements it's
made to the service. It talks about the expansion
in the US and so forth. So then at the end
it talks about the
wraps up the article. So this is one way you can Summarize any
article or any Text. And next we'll take a look at doing something
similar with any article, but in a much simpler way. So another way we can actually Summarize any article that
we see on the Internet through ChatGPT is
just asking it or providing it with the URL of the article, which
is very simple. All you have to do is give ChatGPT a similar prompt. So we can say: summarize
the following article. And now what we
can do is go back to that article, and from
the address bar we can grab the URL or the link, and then we can paste that here. And that's all we have to do. You can simply just run this prompt and ChatGPT will
summarize this for you. Okay, as you can see, ChatGPT was actually
successfully able to find the article
because you can see here is output outlines that the article improving instant
payout on this stripe Blog, Blog discusses the recent enhancements
made to their service so it was able to successfully
find the article, interpret the article, and then Summarize it for
us, which is great. Here you can just
see it talks about
the first improvement, talks about another
update or enhancement, and then talks
about monetization. So as you can see, it has done its job in
terms of summarizing it. So again, very easy. In a few words, you can
summarize any articles that you read on the
Internet using ChatGPT.
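If you want to script the "summarize this text" workflow instead of pasting into the UI, a hedged sketch is below; it passes the text itself rather than a URL. One practical caveat: models have a finite context window, so a very long article may need to be split into chunks and summarized piece by piece, and this sketch only handles text that fits in one request. The openai v1.x package, OPENAI_API_KEY, the model name, and the file name are all assumptions.

```python
# Sketch: summarizing pasted text with a word limit via the API.
# Assumes openai v1.x and OPENAI_API_KEY; handles only text short enough
# to fit in the model's context window (long articles would need chunking).
from openai import OpenAI

client = OpenAI()


def summarize(text: str, max_words: int = 200) -> str:
    """Summarize a block of text with a word limit, using one API call."""
    prompt = f"Summarize the following text in less than {max_words} words:\n\n{text}"
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# "article.txt" is a made-up file name: paste any article text you copied into it.
with open("article.txt", encoding="utf-8") as f:
    article_text = f.read()

print(summarize(article_text))
```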
13. 13 ChatGPT Summarize Chats and Conversations: In this next exercise, we're going to learn
how to use ChatGPT for Chat and Conversations
summarization, whether you have Chat logs or conversational
transcripts from a meeting, for example. You can use
of the Conversations. As a real-life example, let's say you're
off on vacation for a week and you come
back to a ton of messages or
notifications and have very little time to catch up. So this is something you
can use to your advantage. Another example is
let's say there was a meeting that was recorded. You weren't able to
attend that meeting, but there was a transcript
of that meeting. So instead of going
through and watching the entire an hour-long meeting that was recorded by
Zoom for example, you can just use the
transcript and get ChatGPT to summarize all
the conversation for you. So these are just some
real life examples. However, if you're a
working professional, definitely first check with your company's security and compliance department
just to make sure that it is okay for you to input this information
into ChatGPT. So what I've done for
this exercise is I've actually created
a simulated Chat. So this is all made up. I actually got help from
ChatGPT to generate this Chat. So it's just a conversation between three people
in a meeting. John is the head of sales, Sarah is the head
of marketing and MIs head-up Engineering
at a company. And they've come
together in a meeting to determine the budget
allocation for next year. So again, this is all made up. I got ChatGPT
to generate this chat. And now what we're going
to do is get ChatGPT to actually
Summarize this Chat for us. And let's first start
with the Prompt. Let's walk through this prompt together so you can see it is very explicit and has a lot of detail within
the prompt itself. So please provide
a concise summary of the following Chat, capturing the main
points, key insights, and any significant decisions or conclusions reached
during the conversation. So again, this is something
you have to decide. What are you trying to
get out of this summary? What are the things
that are important to you from the summary
of this conversation? These are the things
that you can actually tweak in your prompts to
match your expectations. Aim for a summary
length of 150 words. Here we're setting the length and the size of the summary. Additionally highlight any contrasting perspectives
or notable arguments, and then identify any
unresolved issues or areas that require
further exploration. So this is very important
if you need to know whether anybody
was blocked in a certain area and
you need to help resolve any issues or
remove any blockers. This is exactly the
type of thing you need to include in your Prompt. And then again, a little
bit of reinforcement. Ensure that the summary is
well-structured, coherent, and accurately represents the essence of the conversation. So this is a very well-crafted, detailed and explicit Prompt. Now all we have to do, I have colon here and
then we're gonna go to the new line and I'm
going to paste the Chat. So the Chat is going to be
quite lengthy and long. So I'm not going to
we're not going to walk through that because
there's no point. But I simply just
paste that here. And you can see this
is just a conversation between those three
people in the meeting. So let's go ahead
and run this Prompt. Here is the summary that ChatGPT came up with by analyzing
that conversation. So you can see that in the meeting, John, Sarah, and the head of Engineering discussed the budget allocation
for the upcoming year. John emphasized the
need to prioritize sales and suggested
increasing the sales force and investing in marketing campaigns. Sarah argued for a substantial budget
increase for marketing to create brand awareness, and the head of Engineering stressed the importance
of investing in Engineering and R&D to enhance the product's functionality and
stay competitive. So here you can see
that each person is fighting for budget for
their own department. It says that they debated
the priority among sales, marketing and engineering
with each person really just emphasizing budget
for their own department, which is usually how
these meetings go. And here's the nice thing: because of our prompt and the way we structured and crafted it, the output has a section on contrasting perspectives. It tells us John prioritized immediate sales growth, Sarah emphasized long-term brand building, and the head of Engineering stressed innovation and customer satisfaction through engineering investment. So again, this is very good. And under unresolved issues, ChatGPT basically analyzed the conversation and figured out that the meeting concluded
without a decision, the team needs to gather more
data and metrics to support their arguments and find
a middle ground that considers the needs
of all departments. The discussion will continue
in the next meeting. So as you can see here, this is very powerful and
very amazing, because if this meeting took an hour for those three people to talk and not reach a conclusion, you are able to read the summary within a couple of minutes and basically find out how the meeting went, what issues came up, which issues did not get resolved, and which issues did get resolved. So it's very powerful: within a couple of minutes, you can go through the summary and figure out what the next steps are.
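If you wanted to automate this kind of summary outside the ChatGPT interface, here is a minimal sketch using the official OpenAI Node SDK. The chat completions call is real, but the model name and the transcript file path are assumptions for illustration; as mentioned above, check your company's policy before sending any internal transcript to the API.

// summarize-transcript.ts (illustrative sketch, not part of the course materials)
import fs from "node:fs";
import OpenAI from "openai"; // npm install openai

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function summarizeTranscript(path: string): Promise<string> {
  const transcript = fs.readFileSync(path, "utf8");
  // Reuse the same explicit prompt structure described in this lesson.
  const prompt =
    "Please provide a concise summary of the following chat, capturing the main points, " +
    "key insights, and any significant decisions or conclusions reached during the conversation. " +
    "Aim for a summary length of 150 words. Additionally, highlight any contrasting perspectives " +
    "or notable arguments, and identify any unresolved issues or areas that require further exploration. " +
    "Ensure that the summary is well-structured, coherent, and accurately represents the essence of the conversation:\n\n" +
    transcript;

  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumption: use whichever chat model you have access to
    messages: [{ role: "user", content: prompt }],
  });
  return response.choices[0].message.content ?? "";
}

summarizeTranscript("./meeting-transcript.txt").then(console.log); // hypothetical file path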
14. 14 Prepare for Interviews: In the next exercise, we're
going to learn how to use ChatGPT to help us
prepare for interviews. So let's say you've applied for a specific job or role
at a specific company, and now you get a call to
come in for an interview. Ideally, it would be great to practice interview questions
with another person, but now you can actually
accomplish this with ChatGPT with a few
tips and tricks. So let's go ahead and
get started here. For this exercise,
we're going to go through three
different prompts. So the first prompt is going to be very simple and generic. And I'll show you
that in a second. The second prompt
is going to be a little bit more
detailed and tailored. And the third prompt is going to help us make the experience more interactive as if
you're there in the room with a person and that person is asking
you a question, waiting for you to respond, providing feedback
to your responses, and then moving on to ask
you the next question. For our first prompt, this
is what we're going to use. So again, very simple and
high level and generic. I would like your help to
prepare for an interview. Simulate a job interview scenario where you are the interviewer
and I'm the interviewee. Ask me ten questions
throughout this interview, but do not provide a response. So you can obviously adjust
some of these to your liking. You can make the number of questions fewer or more, as you like, and you can ask it to provide a response if you wanted to see what another person, if it wasn't you, would say in this simulation. But because you're trying to practice the interview, you would want to be the one who provides the responses, and that's why I have the prompt tailored this way. So let's go ahead and run this. Okay, let's go through the result of what ChatGPT has produced for us. You can see that this is the simulated interview. The interviewer, being ChatGPT in this case, is basically saying good evening or good morning or good afternoon, thank you for coming in today.
to ask questions. And as you can see, they're
starting nice and slow. Can you tell me a little
bit about yourself? And then there's really no responses here
provided because we asked not to give us
a simulated response. So it's just purely
asking us questions. So, what interests you about the particular role and the company? How would you describe your strengths and weaknesses? Can you give me an example of a challenging situation you encountered at work? So these are more tailored toward behavioral questions. How do you prioritize tasks
and manage time effectively? This is definitely a crucial
skill to have in any job. Tell me about a time you had to deal with a difficult
co-worker or customer. This is a good one because it happens a lot, and it's again a crucial skill to have. So, as you can see, ChatGPT has curated a list of
ten questions for us, and this is perfect
because this is generic enough that regardless of what role or
company you apply for, some of these questions
will come up. So I would say, take your time, go through these questions, and think through your previous experiences,
either in school, on projects, other works, working with other
teams and come up with answers to help you better prepare
for the interview. Okay, for our second prompt, we're going to build it's
still going to be static. It's not going to
be interactive, but we're going to
build up on top of our first prompt and make it more detailed
and more tailored. So let's go ahead and
I'll paste that in. So let's go through
this together. I would like your help to
prepare for an interview. Simulate a job interview scenario. So up to this point, it's all the same, and this
is the additional part here. So the title of the job I have applied for is program manager. Again, I just made this up.
The company is Microsoft. The industry is technology. I would be working in the Office 365 team as part of this role. So I'm giving it what role I'm applying
for, what company it is, what sector it is, and
which team I'm actually going to be working on if I were to get accepted for this role. So let's go ahead
and run this prompt. Okay, let's walk through
the results here. And as you can see,
now the questions that ChatGPT is asking us are a lot more specific and tailored to the role we specified in the prompt. Let's go through a few of these
questions together. So can you please start
by telling me about your experience and background in program management? Right, because it's a program manager role. What specific skills and qualifications do you possess that make you suitable for a program manager role at Microsoft? How familiar are you with the Office 365 suite? So again, very specific to the role that we defined. And there are some really good questions here. How do you approach prioritizing tasks and managing resources? That's great, because that's something that a program manager has
to do on a daily basis. And here, this is a good one. Program managers often
have to work with multiple stakeholders such as developers, designers, and execs. How do you navigate conflicting priorities and communicate effectively? This is one of the crucial duties and responsibilities of the role. So again, good questions are being presented to us by ChatGPT to help us prepare
for the interview. Obviously you can
go ahead and tailor this or edit or modify this prompt to your liking or to your needs based on
whatever role it is you're applying
for or the company, and you can just modify
it and it'll give you the more specific
you make it, then the better results you're going to get
from the output. But here, as you can see,
it was fairly simple, but we did we did give it some key
attributes that help us tailor this specific
to our needs as a program manager candidate or the role we are
actually applying for. So again, take some time, go through these
questions and work on the answers because when
you go to an interview, they will definitely
be asking you, not all of these, but at least
some of these questions. So this will be great practice. For our third prompt, we want to make things a
little bit interesting, and we want to make the
experience more interactive. So imagine you're
actually sitting in a conference room or in an online Zoom meeting and you're actually talking
to the interviewer. So the interviewer is sitting right in front of you, asking you questions, and you're providing responses. It's very close to a human-like interaction. It makes things more fun and more interesting, and it makes you think on your feet while you're being asked questions, as opposed to having practiced beforehand. Now, in order to do this, I found it a little bit challenging to get ChatGPT to wait for my responses or inputs. But I did a little bit of research and found a solution online. So let me go ahead and paste
the third prompt here. And let's walk
through it together. So here I'm asking
ChatGPT to be an expert in conducting
job interviews. So you are an expert in
conducting job interviews for companies in the
technology sector. I'm being very specific. Simulate a comprehensive job
interview for a candidate applying for the product
manager role at Google. Again, I'm defining the role and I'm defining the company, and I've already
defined the sector. You are going to follow the following steps. So now I'm giving it a numbered list of steps to follow: introduce yourself,
ask me a question, wait for my response. After each of my responses
provide constructive feedback. Do not ask me more
than ten questions. Again, you can tailor this to your liking or your needs. But I found that you also need this line here, because apparently the "wait for my response" instruction doesn't just work on its own. So I've added this one last thing: now introduce yourself, ask me a question, and wait for my response. So let's go ahead and run this prompt. As you can see, ChatGPT is basically
introducing themselves. Obviously, there's just some placeholders
that you can ignore. But you can see that now, unlike the last two prompts, it didn't just ask
all the questions at once and give us a list of questions to prepare from on our own, which is what it did previously. Here, it's actually not doing that. It asks us the first question, but now it's actually
waiting for a response. So what we can do is now we can provide a response
and see what happens. And what it's supposed to do
is provide some feedback, and it's supposed to ask us the next question or follow up questions based
on what we said. So it's asking us, can you please tell me about
your experience as a product manager and
how it aligns with the responsibilities of the role at Google. So let's go ahead; I'm just going to make something up here: as a product manager, I have worked very closely with developers and stakeholders to build new features for our SaaS platform, and I believe I would be
a good fit at Google. Okay, so let's go ahead and run this and see what ChatGPT does. Now, you can see that ChatGPT
is providing us feedback. It's saying thank
you for sharing your experience as
product manager. And then over here, it's saying that it's
great that you believe. It's saying that your
experience demonstrates your ability to work cross functionally and manage
multiple stakeholders, which are essential skills
of a product manager. It's great that you believe you would be a good
fit at Google, as it's important
to have alignment with the company's
values and culture. So now it's saying now let's move on to
the next question. So we can just say something
simple like "sounds good." And now ChatGPT is going to
ask us the next question. So in your experience
as a product manager, have you identified
and prioritized user needs and requirements
to drive product development? Can you please provide an
example of a situation? So now it's asking us
the next question, and this is where you can
actually go ahead and think. You know, it's
exactly like being in an interview and the person
asking you questions, and you have very limited time to actually answer. So you definitely want to note these questions down and practice prior to your actual interview. But this should demonstrate that you can use ChatGPT in very creative ways to help you prepare for job interviews. And this is one very important and fun application of ChatGPT. So this exercise is
just going to go on, so you can continue
to answer questions, and because we told ChatGPT not to ask more than ten questions, it'll probably ask you ten and then it'll stop, and that will finish the interactive interview here.
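If you ever wanted to run this kind of interactive mock interview from a script instead of the ChatGPT website, here is a minimal sketch of the same idea using the OpenAI Node SDK and a terminal prompt loop. It assumes a Node.js environment; the model name, the role, and the ten-question limit are illustrative, and the conversation history array is what gives the model its "memory" between turns.

// mock-interview.ts (illustrative sketch only)
import OpenAI from "openai";                         // npm install openai
import * as readline from "node:readline/promises";

const client = new OpenAI();                         // uses OPENAI_API_KEY from the environment
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });

// The system message mirrors the third prompt from this lesson (role and company are assumptions).
const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
  {
    role: "system",
    content:
      "You are an expert in conducting job interviews for companies in the technology sector. " +
      "Simulate a comprehensive job interview for a candidate applying for the product manager role at Google. " +
      "Ask one question at a time, wait for my response, give constructive feedback after each response, " +
      "and do not ask more than ten questions.",
  },
];

async function main() {
  for (let turn = 0; turn < 10; turn++) {
    // Ask the model for the next interviewer message, given the history so far.
    const completion = await client.chat.completions.create({
      model: "gpt-4o-mini", // assumption: use any chat model you have access to
      messages,
    });
    const interviewer = completion.choices[0].message.content ?? "";
    console.log(`\nInterviewer: ${interviewer}\n`);
    messages.push({ role: "assistant", content: interviewer });

    // Read the candidate's answer from the terminal and add it to the history.
    const answer = await rl.question("You: ");
    messages.push({ role: "user", content: answer });
  }
  rl.close();
}

main();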
15. 15 ChatGPT Translation: Another powerful
application of ChatGPT is using it to translate texts
from one language to another. Now, there's many different
scenarios that could happen. Let's say you're at work and you need to translate something, and that's not a language
that you speak of, then that's very
simple to use ChatGPT to actually get that
Text, translate it. And let's say you're traveling, you are somewhere that you don't know how to say a specific
phrase or speak the language, we can quickly pull out
your cell phone and on ChatGPT you can
put it into Text, have a translate to
another language, and then show it to one
of the locals there to help you with that
specific need. For this. It's actually very simple to get this
accomplished in ChatGPT, super-simple Prompt, nothing
complicated needed so far. First Prompt, while
we going to do is just provide a very
simple phrase here. So let's say translate. How are you? To Spanish? As you can see, ChatGPT was able to successfully
translate the text. So the translation of
"How are you?" to Spanish is "¿Cómo estás?", and that is correct. Very simple. For our second prompt, what we're going to do now is give it something a little bit longer: a paragraph that I basically just got from the article we worked with in the previous lessons. This was again from the Stripe article about improving payouts. So I just grabbed the first paragraph. So let's go ahead
back into ChatGPT. And I'm going to
paste in the Prompt. And it's gonna be a
little bit different, but a very similar idea here: translate the following text to French, and then I just
paste that in here. So let's go ahead
and run this Prompt. Okay, ChatGPT has finished
translating this to French, and this looks correct. And one interesting
thing is that one of the reasons I know ChatGPT has translated this successfully (I've worked with translations in the past) is that for some of these symbols, such as the percentage sign, certain languages put a space in between when they're translated. And you can see exactly that in this case: for the French translation, there is a space between the number and the percentage sign. Same thing here. So this is just one more indication that ChatGPT was able to successfully translate this into the language we wanted. For our third prompt, what we're going to do is something similar to what we did in a previous lesson. We're just going to ask it
to translate an article. So let me go ahead and
paste the prompt here: translate the following article to Italian. And I'm just pasting in the link to the article. This is the same article we
used in previous lessons. So very simple. So let's go
ahead and run this Prompt. Okay, it looks like
ChatGPT was able to successfully translate
the article to Italian. You can see the Introduction
and then you can see the conclusion if you
want it to be sure. We can always
between transitions, you can go from one
language to another. So just to test things out, we could do is we can
just grab this paragraph, which is the conclusion.
We can just say. Basically translate the
following text to English. Paste that in, run the command. You can see that it was
able to successfully translate that to English. So this is how you
can use ChatGPT to translate text from one
language to another
16. 16 ChatGPT Write Code: In this lesson, we're going
to harness the power of ChatGPT to help us write code. Now, this section is meant for software developers and
software engineers. So it's going to be
quite technical. If you're not a
technical person or you just have no interest
in this area, feel free to skip this lecture. We're going to cover a couple
of different scenarios. We'll start simple,
and then we get into a more complex example. For our first prompt, we're going to do something
very simple. We're going to ask
ChatGPT to write us a function to reverse a linked list, and this is a very common interview question as well. A linked list is simply a linear data structure, and we're going to ask ChatGPT to do this in Python. You can really choose any language, but for the sake of this example, we're going to use Python. So my prompt, again, is going to be very simple: write a function to reverse a linked list in Python. So let's go ahead and
run this prompt. Let's go ahead and take a
look at the result together. So if we scroll up
here, this is amazing. You can see that ChatGPT has actually written the function for us, so we haven't written a single line of code. We got ChatGPT to do the work for us, which is the whole point of using this application of generative AI: to become more productive and use our time more efficiently. You can see that ChatGPT has created the function for us in Python with this ListNode class, and this is where things get very interesting. Not only has ChatGPT created what we asked it to do, the function to reverse the linked list, it also provides us a description. It says this function takes the head node of a linked list as input and returns the new head node of the reversed list. So it tells us exactly what the code is going to do. That's one of the great things about ChatGPT writing code for us. And here you can see
that it even provided us an example of how we could
use it with populated values. So here you can see that it actually went ahead and provided a linked list; it starts with the head node of 1 and then 2, 3, 4, 5. Then it runs through the code that it wrote, and as it's running through that and reversing the list, you can see that it's printing the results, and this is exactly the output with those sample values. You can see that the original linked list was ordered this way: 1, 2, 3, 4, 5, and the result, which is the reversed linked list, is now 5, 4, 3, 2, 1. So this is pretty cool. Again, with a one-line prompt, we were able to get ChatGPT to write all this code for us, explain what the code does, and then show us how it can be used.
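To give a concrete picture of the kind of code ChatGPT produces here, below is a small sketch of a singly linked list reversal with example usage. The lesson's output was in Python; this sketch is written in TypeScript purely to stay consistent with the other code examples in this course, and the names ListNode and reverseList are illustrative rather than a transcript of ChatGPT's output.

// Illustrative sketch of a singly linked list reversal (not ChatGPT's exact output).
class ListNode {
  constructor(public value: number, public next: ListNode | null = null) {}
}

// Reverses the list iteratively by re-pointing each node's `next` to the previous node.
function reverseList(head: ListNode | null): ListNode | null {
  let prev: ListNode | null = null;
  let current = head;
  while (current !== null) {
    const next = current.next; // remember the rest of the list
    current.next = prev;       // reverse the pointer
    prev = current;            // advance prev
    current = next;            // advance current
  }
  return prev; // prev is the new head of the reversed list
}

// Example usage: build 1 -> 2 -> 3 -> 4 -> 5, reverse it, and print 5 4 3 2 1.
const head = new ListNode(1, new ListNode(2, new ListNode(3, new ListNode(4, new ListNode(5)))));
for (let node = reverseList(head); node !== null; node = node.next) {
  console.log(node.value);
}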
For the next scenario, we're going to do something a little bit more complex. So imagine you are a
back end developer, and this is something you
would do on a daily basis, but imagine that you need to create a function
that makes an API call. So in this scenario,
let's say we have a users endpoint and you want to make an API call that would create users, and the method for that would be POST. So let me go ahead and paste in the prompt that
I initially wrote here. So let's walk through
this together. We want ChatGPT to write a function to make an API call in TypeScript, so I'm telling it what I want it to do: write code to make an API call, and in which language, which is TypeScript. The request URL is this (I just made this up), and the endpoint here is /users. The payload should include the properties name and email, and they should be passed as arguments to this function. This API call should be asynchronous, so I'm hinting that it should use async and await, and it should use the fetch library to make the API call. There are tons of libraries out there for making API calls, and there's a reason why I'm using fetch here. So let's go ahead and run this prompt first. Alright, it looks like ChatGPT was able to create the function for us to make this API call. So let's quickly go through it. It's an async function, and the function name happened to be makeApiCall. Again, not a great name, but you could easily change this on your own, or we'll get ChatGPT to change it for us in a second, and I'll show you how. So it created the function
with this function name. It basically did exactly
what we told it to. So it provided the
two properties as arguments to this
function. So name and email. Here's the request URL, so it's creating a variable for that and we have the
endpoint slash users. It's creating the payload, so the payload will
be the name and email that we're passing
in as arguments. And then here it
basically does a try/catch to make the call to the API. So it is using the fetch library to make the API call, and we've got an await here, which is nice. And I guess it happened to detect that this should be a POST, which is what I wanted this function to be, because I wanted to create a function that makes an API call to the users endpoint to create a new user, and it just happened to pick that up. But you could have other methods, such as GET and DELETE, for example. So in this case, it's POST, which is what I wanted, but I could easily change that if it's not what I'm looking for. So, yeah, it did a pretty decent job putting this function together. It tries to make the API call, and if the response is not okay, it'll throw an error. And then if it happens to catch an error, it'll log the issue so that we can troubleshoot later. So this is great. Also, I really like this
feature, copy code. You can just click this
and copy the code, and you can just paste
it into your code, in whatever IDE or code editor you like to use, and then move
on with your task. Now, one other thing
is here: again, I really like this feature of ChatGPT. It explains exactly what the code is. It says in this example, this function takes two arguments, it constructs the payload, and then it uses fetch to make a POST. It's not exactly formatted as proper technical documentation, but it's pretty much documentation of what this function does. And then it says
this function awaits the response and checks
if it's successful. If the response is not
okay, an error is thrown. Otherwise, it logs
a success message. And here it tells you just an example of how you
can call the function, so with some sample values here, and it says here, remember to replace the URL with the actual endpoint
that you want to call. Now, I think ChatGPT did a pretty decent job of creating this function. But one thing I'd like to illustrate here is that you can still customize this to your preference. So let's say I don't really like the name makeApiCall; I want to change that to something that's more reflective of what I want the function
to be and what it does. Also, let's say I
now changed my mind, I don't want to use
the Fetch library, I want to use Axios. So let's go ahead and basically alter that
by the following prompt. So what I'm going
to say is: rename the function to createUser. And I'm also going to say: use the Axios library to make the API call. So let's go ahead and run this command now. Right, as you can see, ChatGPT has made the adjustments that we asked it to. We are now using the Axios library; it imported axios from 'axios', and you can see that it has renamed the function from makeApiCall to createUser, which is great. And obviously, it has
rewritten the function, which is shorter and simpler, which is, again, great and
very efficient and readable. And not much has changed, so we have the request URL, payload, and then the tryCatch. The only difference here is
that it now actually uses Axios to make that POST call to that URL with that payload. And then over here, if you look, it even updated the documentation for us. It says in this updated version, we import the Axios library and use the axios.post method to make the POST call, which is exactly what
we wanted to do. After our adjustments,
the function awaits the response and logs a success message. And here, again, there's just a sample code snippet on how you can go about calling this function: createUser with some fake name and fake email. This is also something I really like: because we changed libraries from fetch to Axios, it says, remember to have Axios installed in your project. So it's telling you exactly what command to run to install that dependency before using the code. So again, this is great. This is how you would use ChatGPT to produce code snippets for you throughout your everyday work.
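For reference, here is a small sketch of roughly the kind of createUser function described in this walkthrough. It is a reconstruction for illustration, not ChatGPT's exact output; the base URL is a made-up placeholder, just as it was in the lesson.

// create-user.ts (illustrative reconstruction, not ChatGPT's exact output)
import axios from "axios"; // npm install axios

// Sends a POST request to the /users endpoint with a name and email payload.
export async function createUser(name: string, email: string): Promise<void> {
  const requestUrl = "https://api.example.com/users"; // placeholder URL; replace with your real endpoint
  const payload = { name, email };

  try {
    const response = await axios.post(requestUrl, payload);
    console.log("API call successful:", response.data);
  } catch (error) {
    // Log the issue so we can troubleshoot later.
    console.error("API call failed:", error);
  }
}

// Example usage with fake values.
createUser("Jane Doe", "jane.doe@example.com");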
17. 17 ChatGPT for Documentation: In this lesson, we're going
to learn how to use ChatGPT to create documentation
for our code. Now, there's several
ways of doing this. For example, in our last lesson, we used ChatGPT to create a function that would help us create a user by making an API POST call to a specific endpoint with a specific payload. Now, because ChatGPT remembers the context of the chat
and interaction with you, one thing you could do
is you could simply continue that conversation
here from the previous lesson. So in the last lesson, we got ChatGPT to create this create user
function for us. Now, because it has context as to what that function is, all we have to do is go ahead and put in a prompt and say something like: create technical documentation
for this function. Now for the purposes
of this example, I'm just going to start a new chat just so that I
can sort of illustrate. You can do this with any piece
of code or code snippet. Now, before doing so, I'm just going to
reuse this example. So I'm gonna go ahead and copy the code for this function. Now I'm going to
start a new Chat, and this is what my
prompt is going to be. So create technical documentation
for the function below. Okay, then I'm going to paste. Then let's run this Prompt and see what ChatGPT comes up with. Right, let's walk
through the results here so you can see that ChatGPT has created the
technical documentation for us. So it starts by the
function name, createUser, and then it basically tells us what it is: the createUser function is an async function that sends a POST request to a specific endpoint. Here's the syntax or function
definition, which is great. Here's the parameters, so the arguments of the
function name and email, and it tells us what it returns, which is really
nothing in this case. And then here's
an example usage, so it tells us how
to make that call. So create user function with
some sample fake data here. And then the try-catch. So it talks about
dependencies; in this case, Axios was one of the dependencies we need to install if we want to use this code to make API calls. And then here's a fully detailed, concise, clear, and readable functionality section. It says the function constructs a URL, creates a payload with those arguments, attempts to send a POST request, and so on. Then at the end there's a note that says response data can be handled and processed by uncommenting the relevant lines of code. So yeah, that's fine; I guess we didn't really even need the copy-pasted example, and we could put our own code in here according to our needs. But that's exactly what the note is saying: uncomment the relevant lines in the code and modify it according to your specific needs. So this is great, this is pretty good. We got ChatGPT to create technical documentation from our code by simply providing
much more information. Obviously, we can make
things a little bit better by providing
more context, but this is pretty good
for minimum effort. As you witnessed in
the previous scenario, ChatGPT did a pretty
decent job in terms of creating technical
documentation given the minimal amount of
information we provided. Now in the second scenario, I want to go one step further and make things a little bit more technical so that the documentation ChatGPT produces adheres to the correct industry standard. So let's go ahead and do that. And what I'm going
to do is I'm going to reuse the same
function create user, but I'm going to start
a new Chat here, and this is what my
prompt is going to be: create technical documentation for the following function. Now, I'm going to give it some industry standards here, so I'm going to say: using the OpenAPI specification and Swagger, which are very well known in the tech industry at the moment. So let's go ahead; I'm going to again paste in the function, and I'm just going to get rid of these comments here because we don't need them.
command and then see what ChatGPT comes up with. Well, it's pretty impressive. So if we scroll up here, you can see that
ChatGPT was able to create exactly what I asked it to: the API documentation according to those specs. So, the createUser function, and here's the OpenAPI specification and Swagger doc. You can see that this is the current version of OpenAPI, 3.0, and here's the title. Basically, what you can do is just copy this, save it to a file, and then host it wherever you host your documentation files. You can even include this step in your CI/CD pipeline if you really wanted to, since you have access to the GPT APIs, so there are a lot of applications that you can
leverage here with this. But I just wanted to
show you how amazing ChatGPT is in terms of accomplishing some of
these daily tasks. You can see that the documentation is produced and created, and you can see the path here. So this is the path for that; here's the URL, and here's the summary: create new user. This is interesting because I never told ChatGPT anything about this function or its purpose of creating a new user. All I said was I need a function to make an API call. Interestingly enough, ChatGPT was able to figure out what this function is going to be used for, creating a new user, which is what my initial intent was. So this shows you how powerful ChatGPT is and its capability in interpreting what you want it to do. This is just very impressive. And then it talks about all these things; if you're familiar with Swagger and the OpenAPI spec, this sort of format, this is a YAML file, so it should be very familiar to you, and you can see the types here. For the property name, the type is string and the description is the name of the user. I did not mention any of these; ChatGPT was able to interpret and populate them for me. And then same thing
again for email. It's of type string, the format is email and then
this is the description. These are both required fields. And you can see some pre-populated responses here: 200, 400, 500, depending on whether your request is OK, a bad request, or an internal server error, or if the API is down. So this is how you can get ChatGPT to create technical documentation for your code according to specific standards. One last thing I wanted to demonstrate here is that you can alter this if you want to. For example, because I want to create a new resource with this function, one of the responses I'm interested in is actually 201; 201 is the industry-standard HTTP response for successful entity creation, or resource creation, I should say. So here, we don't have
that in the list. So what I can do is I can provide a prompt to ChatGPT
to see if we can alter this. So let's go ahead and say, add one more response to
the list of responses. This should be 201, and the description should be "Created." So let's go ahead and run this prompt. There you have it: ChatGPT was able to understand what we're saying and did exactly what we asked it to. Everything is the same here if you look through it, but the only thing we're really interested in is the responses. We asked it to add the new response in this section, and it was able to do that successfully. The response status code is going to be 201 and the description is going to be "Created." This is exactly what we asked it to do, and it did it successfully. It even has a small description explaining what it did: it has added the new response with code 201 and a description of "Created" to the responses section. So this is amazing.
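To make the shape of that output a bit more concrete, here is a rough TypeScript object-literal approximation of the kind of OpenAPI 3.0 document described above (the real output in the lesson was YAML). The title, URL, and response descriptions are illustrative assumptions, not a copy of ChatGPT's output.

// Rough approximation of the OpenAPI 3.0 document described in this lesson,
// expressed as a TypeScript object instead of YAML. All values are illustrative.
const openApiDoc = {
  openapi: "3.0.0",
  info: { title: "User API", version: "1.0.0" },
  paths: {
    "/users": {
      post: {
        summary: "Create a new user",
        requestBody: {
          required: true,
          content: {
            "application/json": {
              schema: {
                type: "object",
                required: ["name", "email"],
                properties: {
                  name: { type: "string", description: "The name of the user" },
                  email: { type: "string", format: "email", description: "The email of the user" },
                },
              },
            },
          },
        },
        responses: {
          "200": { description: "OK" },
          "201": { description: "Created" }, // the response we asked ChatGPT to add
          "400": { description: "Bad request" },
          "500": { description: "Internal server error" },
        },
      },
    },
  },
};

// The object could be serialized to JSON (or to YAML with a library) and served by Swagger UI.
console.log(JSON.stringify(openApiDoc, null, 2));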
18. 18 ChatGPT for Tests: In this exercise, we're
going to learn how to use ChatGPT to write
different types of tests. Now, in order to
speed things up, I'm just going to reuse
some of the code that we previously generated
in our last exercises. So the function create user
that makes an API call. And we're going to run through two different scenarios here, one unit test and one
integration test. So let's get started. For the first prompt,
we're going to tell ChatGPT to write unit tests for the function. Now, I've already copied the function to the clipboard, so I'm going to write my prompt first: write unit tests for the following function. Then we want to add some requirements here, because these are unit tests and we don't want the tests to actually make the call to the API; we have to make sure it's being mocked. So we want to specify that the API call should be mocked in the tests. I also want to specify which testing framework ChatGPT should use in order to write these tests, so I'm going to choose Jest for these tests. Then I'm just going to go to a new line and paste in the function. So this is our prompt, and then this is the function. Let's run this prompt and see what ChatGPT comes up with. Once again, ChatGPT has done a fantastic job in
writing these tests here. So let's quickly walk through the output here to make sure everything looks
and seems correct. You can see that ChatGPT provided an example of how you can write unit tests using Jest. Now, let's quickly look at the code. ChatGPT has written a code snippet here, and this is basically your test file in JavaScript, so you can just create a new file in your project using whatever IDE or code editor you're using, simply copy-paste this into that file, and then run the tests. But let's quickly run through the tests here. It's mocking Axios, and then here's the describe, your typical describe/it syntax. So describe is the test suite and each it statement is a test case. Here we have createUser. Obviously, if we wanted to, we could rename this when we copy it over, or we could instruct ChatGPT to go ahead and edit it to whatever we like, but let's just leave it for now. And then over here, there's an afterEach hook that tells Jest to clear all mocks after each test has run. So now let's actually
look at the test itself. So it says it should make
a successful API call. So here, it's trying
to write a test, or it has written a test that basically verifies
the happy path. It is creating a mock response here with the status 200 and a message, and here it's mocking the post call with some payload. Now we call createUser, and that call is going to get intercepted; it's actually going to be a mock, so it's not going to make the actual call. And here you can see the assertions. First, it's saying that the post call that was made was called once, which is correct because we called it once, and then it verifies the request URL. So it verifies that the API call was made to the right request URL with the right payload; that's what this test is verifying. And then it also says expect console.log to have been called, and console.error should not be called, because this is the happy path and it's going to pass. And so this is all correct
and everything is great. It has written one other test here, another it statement, and this one is actually testing the failure case. You can see that here again we are setting up our mocks; we call the function, and we are purposely going to make this test fail when the call is intercepted. So we make the call and simulate a failure using this test setup, and then you can see that it says the post call has been called one time, which is correct, and it has been called with this request URL and payload, same as the first test. But note here that it's saying expect console.log not to have been called, because if we go back to the original code, we have the try/catch, and the console.log of "API call successful" is only called if the call is successful. In the catch, where we catch the error, is where we call console.error. So this is why it's saying that console.log should not be called, and if you go here, the last assertion says expect console.error to have been called with this message, which is exactly what should happen in this function in the case of a failure: it hits the catch block and then does a console.error with that message. So these tests are actually great from a coverage perspective. It wrote one to test the happy path and one to test the failure. And if you're a software developer, an automation engineer, or a QA engineer, these are exactly the type of tests that you would write when it comes to
the unit test category. For our next prompt, we're actually going
to get ChatGPT to write integration tests for the same function. Now, as you may know, integration tests actually make the API calls. In unit tests, we don't actually make the API calls; we mock them. But in integration tests, we actually want the API calls to be made, so they're real API calls being made to a server and a database that we stand up in our CI/CD tool. So let's go ahead and provide this prompt so that ChatGPT can get started with writing integration tests for us. My prompt is going to be, again, somewhat simple: write integration tests for the following function. The API call should not be mocked; I'm providing this requirement to make sure it's a real API call. And again, I'm going to use Jest for this: use the Jest framework for writing these tests. Okay, let's go ahead and paste in the function. So again, this is our prompt. Let's go ahead and run this and see what ChatGPT comes up with. If you take a look at the
results of this test, you can see something I think is really important. It says that to write integration tests for the given function without mocking the API call, you will need to set up a test server. This is where ChatGPT actually isn't able to recognize that we want real integration tests. What we'll be doing is generating the code here in ChatGPT, then copy-pasting it into our project, and then checking that code into the repository, wherever that may be, GitHub or Bitbucket. Based on that, it's going to kick off the CI tool, we set up our test environment, and then we run the tests, which make calls to the server and database that we stand up in whatever container the test uses. Again, it depends on what your setup is and how you have things set up for your project. But here, ChatGPT wasn't able to recognize that, so it's still using a tool to intercept and mock the actual HTTP request. It's using Nock, as you can see, another popular tool that's used for testing. And this is going to be very similar to before, because this mock response is not really a real response, so it's still going to be somewhat of a unit test, not an integration test. The tests themselves are going to be very similar; the name here says it's an integration test, but it's not actually making the call, so it's not a real integration test. But if you look at the tests here, it's still testing one for the happy path and
to do now to see if ChachPT can fix this is we're going to give it a little
bit of an instruction here. So in the follow up
prompt, let's say, we're going to rephrase this
a little bit and say re write the integration
tests or actually here, it says that to write
integration test for a given function without Mk, you will need to set
up a test server. So let's say we have a test
server already set up, re write the integration test without Mo without mocking. Let's just see what JAGPT does. All right. Now, if we take
a look at the revised code, or in this case, the tests, you can see that actually
HAGPT was able to successfully accomplish
exactly what we asked it to
first time around. So it says, If you already
have a test server setup, you can write integration
tests without mocking by using actual HGTV request
to the test server. So that's exactly
what we wanted to do the first time we
provided the prompt, but it chose to actually
mocking using knock. And now with a follow up prompt, we
were able to fix that. So there's one lesson here. You don't have to get everything perfectly right in
your first prompt. You start with a simple prompt, and then you can always
build up on top of that, or you can customize things further with
follow up prompt. So that's really
the lesson here. And you don't have to be sort of too concerned or
too worried about getting everything perfect
with the first prom. You just want to
provide enough context so Cha GPT can get started. And that's exactly
what happened here. So let's quickly take a
look at the revised test. So you can see that it's
importing Axios and now here, there's no more library
Nock or Just Mok that actually is trying to intercept the HTDB request
and mock the response. So here, it says, describe, so this is the test suite name, create user integration test. This is actually pretty
good, not a bad name. And here's building
the base URL. A let server. So as you can see in
the before all hook, it's actually creating
a client using Axios, which is great. This
is pretty good. So it does that once,
and then after all, any cleanup, there's
no code here, but if you wanted to
clean up resources, this is where you would do that. And now we get to
the actual test. So again, very similar
to the previous lesson. I wrote two tests, one for
successful happy path, successful call and
one for failure. Let's take a look at each. So you can see that it's
constructing the payload with some fake sample
values over here, and then over here, it's actually making the call. So this is a real API call. It's using Axios post
to make a call to that endpoint with this payload and here are the assertions. So it's actually asserting on status quo 200 in the happy
path, which makes sense. And then here it's asserting on the response data be this case, there's really only one
message being returned, so it's a starting on that. So these are great
verifications, and this is exactly
what you would expect from test to test the
functionality of the happy path. So as you can see,
Chat GPT is very powerful when it comes
to creating test for a specific class or function
or really any type of code snippet that you wanted to provide as long as it's
reasonable and it makes sense. And basically, now that
you have the code, it is really simple because it has done majority of the work for so all you really have to do is you can just grab
all of this, right? You can just grab this file.
This is a JavaScript file, so you can just simply
copy this code or just copy paste the portion
that you want here. For example, we just care about
the Happy Path test here. It's already including
all the setup. So all you have to do
is really copy this. Go into your project. If let's say it's a
JavaScript project, for example, go to your project,
create a new test file, paste this code in and make the necessary adjustments or customizations due
to your preference, and you're ready to
check in the code and have the test actually
run in the CICD pipeline.
19. 19 DALLE Introduction: In this section we're
going to learn how to use DALLE-2 and how to unleash the power of DALLE-2 to create realistic images and any type
of artwork that we desire. We're also going to explore different features in
DALLE-2 and how to use them. Now when it comes to
the Setup for DALLE-2, you don't really need much. In fact, if from the
previous section on ChatGPT, you already created
an OpenAI account, then you can use the same account to log in and use DALLE-2. So there's really nothing else for you to do. But if you haven't already created an account with OpenAI, you can just go ahead and
navigate to openai.com. Once you're here, you
can select on product. You can select DALLE-2. And this will take
you to this page. And then if you already
have an account, you can just click login. And if you don't
have an account, you can click on sign up register using your
email address. And then you can come
back to login and login into use DALLE-2. When you log into
your OpenAI Account, you're taken to this page
and as you can see here, you have several options. The first one is ChatGPT. So if you want to use ChatGPT, you would click this and
that will take you through the ChatGPT application page. The second one is DALLE. So if you want to use DALLE-2
to create realistic images, then you would click
on this and this would take you through
the DALLE application. And then if you want
to use the OpenAI API to integrate with your application or business, then this is where you would go. Now, we know we want to use DALLE, so go ahead and click this. And now you're taken to the user interface for the DALLE-2 application. Now let's take a quick tour
together and navigate through the different areas of
the DALLE-2 application. As you can see, the interface is actually quite simple. There's really not much going on here, which is actually great, especially for beginners. In the center here we have the place where you would
where you can put in any type of Prompt you want, depending on what type of images or artwork you want to generate. On the upper left-hand corner, we have a couple of
navigation areas here, so we've got history
and collection. If you click on history, history will sort of keep track of all the images that
you've created in the past. These are just some other
ones that I've generated for fun while playing around with the application and learning it. And we also have Collections. This is where you can actually create public or private collections. And you can save your work
and then share them. And if you go back over here
on the right-hand side, on the upper right-hand corner, you also have options. If you want to do things
such as buy credits, you can just go here and click Buy credits. As you can see, right now 115 credits is $15, and you can just click Continue, follow the steps, pay for that, and then gain more credits to be able to generate more images. We'll talk about credits
in a few minutes, but just wanted to quickly
introduce the interface. So that's pretty much it. And over here, we'll talk about this again
in a little bit. But these are just some example artworks that were generated by users and submitted. It's a great reference to use when you're thinking about ideas, or if you want to spark your creativity in terms of what types of images and which style or class you want to generate. Before we start to generate
images with DALLE-2, I would like to spend a quick second talking about credits. In order to use DALLE, to input prompts, get images, and use those images for your purposes, you need to have credits. Now, the way it works currently, as of the time of this recording, is that when you sign up for an OpenAI account, you'll get some free credits for DALLE-2 to play around with the application. But if you want to continuously use it day after day and generate
hundreds of images, you'll need to purchase credits. And I found this article
which is very helpful here. So this is article from OpenAI and it's at
help.openai.com. So this is the
article link address. And here it explains how the credits actually work. So, what is a DALLE credit? It says you can use a DALLE credit for a single request, generating images through a text prompt. So whenever you actually input a prompt and generate an image, it's going to use a credit. And it says that credits are deducted only for requests that return a successful generation. So if there's a system error or something goes wrong and you don't get an image back, or a set of images back, it won't deduct credits. In this section, it talks
about what free credits are and when you get them
and when they expire. Here it talks a
little bit about how to buy DALLE credits, but it's pretty straightforward. As I showed earlier, you just click on the actions menu on your profile, and there's a button called Buy credits; it's a pretty straightforward process. And over here, I think this section is pretty good because it talks about the difference between free and paid credits. It basically says free credits expire one month after they were granted and paid credits expire 12 months after they were purchased. And this is a very important note here as well: it says you currently get the same set of rights, including commercial use, regardless of whether an image was generated through a free or paid credit. So this is actually pretty good: if you wanted to, you could use free credits and they would have the same set of rights for commercial use as paid credits. So this is very important. Now, back in DALLE, I wanted to spend a little bit of time
good and very logical. Now, regardless of what your
background or education is, there should be nothing
preventing you from generating and creating
very high-quality and realistic images based on whatever your imagination or Creativity is or for whatever use case you'd
like to him to be. And I myself don't have
I'm background in Art. I don't know the terminologies, and I find this section
to help me a lot. So for example, when
you hover over the, these samples, you
can see the Prompt. And for myself,
because I don't know the terminologies and I don't come from an
Art background. I don't really know what this style is called,
for example, right? But if I wanted to generate an image that's similar
in style as this one, all I have to hover is
to do is hover over. And now I can see
the Prompt user used to generate this image. Here. You can see that it says an expressive oil painting of a basketball player dunking. So this, this type, this type of painting
is actually, sorry, this type of painting
is called oil painting. So this is actually
what I wanted to get from this, right? If I scroll down, down here again, I don't
know the style for this one. And it says it's a van
Gogh's style painting of an American football. They're going down here. I don't know what this one is. So if I wanted to do this, it just says a cartoon
of a monkey and space. So if I wanted to generate
something along lines of that, I would use the term
cartoon, right? And you can see,
you can see all of this going down here.
This is another one. This one, I don't know, but it says if I hover over, you can easily see that it
says an oil pastel drawings. So that's kinda like
the type of destroying. So this is, you can go
through this collection. I find it very helpful and
very useful when I want to figure out what type of
Art I want to generate. So this is sort of like
what I've referred to. If I am looking for
a specific style. One thing I wanted to
highlight before we start our exercises
with DALLE is the key components
to consider in your prompts to get the
best output and results. There are four main ones
that I always consider. What, the where, or
the environment, this style and the type. Let's walk through a Example together to illustrate
what I mean. So actually this
one is a good one. So the fish, the 3D fish, so as you can see, the image is pretty good. And if we hover over the image, we can see that Prompt that
was used to generate it. Now let's see if we can
identify the components. So here it says it's 3D render
of acute tropical fish. So this is the what, right? In an awkward, um, so this is the where. And then it says on a
dark blue background. So here it's
specifying the style. And then the last one
it says Digital Art. So this is the type. So this is a pretty good Prompt. It includes the main
components and as you can see, the results are
quite impressive. Now, it doesn't mean that
for every image you want to generate or every Prompt
that you're going to input, you're going to have, you
have to provide those things. You can absolutely just ignore them or not include
them in your prompts. And here's an example here, so this is a good one here. So as you can see this
cat on the moral cycle, if I hover over it, you can see the prompt for it. So it says a cat
riding a motorcycle. So this one is pretty simple, but you can see that D, you just have to know
that the less you provide DALLE in
terms of your Prompt, they're more you rely
on it to figure out the rest when you're
rendering the output. And this is a good
example of that. So we didn't really give
it any style or type. We just said a cat
riding a motorcycle. But as you can see,
DALLE chose to do this as the type cartoon. So this is exactly what I wanted to illustrate what this example, so it really depends
on how creative you want to get and how detailed you want
your prompts we, and whether you want to have control over the environment, the colors, the style, and the type of the image you're going to
generate with Dolly
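If you ever wanted to compose these four components programmatically and send the result to the image API instead of the web interface, here is a minimal sketch using the OpenAI Node SDK. The images.generate call is real, but the model name, size, and example values are assumptions for illustration, and API requests consume credits just like the web interface does.

// generate-image.ts (illustrative sketch only)
import OpenAI from "openai"; // npm install openai

const client = new OpenAI(); // uses OPENAI_API_KEY from the environment

// Compose a prompt from the four components discussed above: what, where, style, and type.
function buildPrompt(what: string, where: string, style: string, type: string): string {
  return `${what} ${where} ${style}, ${type}`;
}

async function main() {
  const prompt = buildPrompt(
    "3D render of a cute tropical fish",
    "in an aquarium",
    "on a dark blue background",
    "digital art"
  );

  const result = await client.images.generate({
    model: "dall-e-3", // assumption: pick whichever image model you have access to
    prompt,
    n: 1,              // number of images to generate (each request uses credits)
    size: "1024x1024",
  });

  console.log(result.data?.[0]?.url); // URL of the generated image
}

main();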
20. 20 DALLE Examples and Exercises: It is now time to start inputting some prompts and see what type of images we can generate with DALLE-2. So let's have some fun. Alright, for our first prompt, we're going to try to get DALLE-2 to generate a realistic image. So this is what my
prompt is going to be. Generate a realistic image of a sunset over a beach
with palm trees. So with this Prompt, we're going to explore
the capabilities of DALLE-2 in generating realistic images of
specific scenes. So let's go ahead
and run this Prompt. Okay, Now that we
have our images, let's quickly walk through some features available in the DALLE application. Let's just say that out of these four images, I like this third one the best; again, it really depends on your preference, but this is just an exercise and an example. So let's say I like this one best. Now, if you hover over this,
upper right-hand corner. So if you click it,
you'll be able to see some options here
in the action menu. The first one is Open in new tab, which we'll do in a second. Then there's Edit image, which we'll cover in a different video. Next is Generate variations, which is great. What this is going to do is generate four more variations based on the style of the image you selected. So instead of putting in another prompt, you can just click Generate variations, and DALLE will automatically generate four more images that are closer to this style; not to the other three pictures, but closer to the style of the one we're selecting at the moment. And lastly, you can just
download the image. So this is exactly what you want once you're happy
with the image you generated in DALLE and you want to use it for whatever your use case is, whether you're in school or at work or on a personal project, you can just download it and use it. And that's one of the DALLE features: the image is very high resolution and high quality, so when you're downloading it, the file is going to be a little bit larger in terms of size. Now, these are the options available, and what we can do is go ahead and click this one, Open in new tab. That should open a new tab in the browser, and we have this one picture here. It's very similar to this area, except it's just focusing on the one picture you selected, so it's a bit more isolated, if that's how you want to think about it. So over here again,
see the picture is also larger so you
can see it better. And if there's areas
you don't like here, you can choose to edit
this or not use it. But over here you have
the Download button. So if you click
this, it's going to download to your device. Download the image
to your device. Again, you can edit
the image here, which we'll cover later. If you click Variations, it will create four
more variations of this type of image
and this style. You can click Share and
you can click Save. So if I click Save here, this is the collections feature we talked
about earlier. So this is where you
can either create a new collection and save
it to that new collection. Or if you have Favorites
already set up, you can save it to
your collections. And if you click Share,
you can publish it and share it publicly
if you want. So these are just
some other features available in this view when you open the
image in a new tab. So at the end of the day, it really comes
down to what your preference is and which
view you like to work in. One is the image in its
isolated view in a new tab. Or you can just simply
close this and go back and do the exact same things in
terms of functionality here. Now, while we're here, I wanted to show you this option here,
Generate variations. So one thing to note is that when we input this Prompt here initially, we used
a credit, right? So as we're generating
more images, we're going to use credits. So our Prompt used a credit. And now if I go ahead
and click this, this is also going
to use a credit because it's generating
four new images. But I want to do that
because I wanted to illustrate this functionality,
which is pretty cool. So just notice the styles
of each of these images. And what we're going to do
is select this one because, let's say,
we like this one the most, for example, for the
sake of this exercise. And what we want
to do is see what DALLE can give us in terms of the same style but different
variations of images. So let's go ahead
and click that Generate variations button and see
what DALLE comes up with. As we can see, DALLE has finished generating
the variations for us. So we actually have four
more, which looks great. So on the left here
we have the original. So this was our base and what we started with
from the previous round. And these four here to
the right are the ones that DALLE created for us
in terms of variation. Now if we look at these, some of these are good,
some of them are not. So for example, this one, I don't really like because the water just
looks weird there. No waves here, it's too 2D, and
the sun's too white here. Not a hint of orange. This one, the water looks a
little bit better, but again, the sun doesn't
have any hint of orange. And this one is
actually pretty good. I liked the water here. It's the color blue. It looks really
nice and there are some waves which makes
it more realistic. The sun has a hint of orange and stands out compared to the
background here and the sky. The trees are okay. I actually like the trees
better than the original one, but it is what it is. This one, not so much. So I would say,
from the new four, probably this one, at least for me, right? It's a matter of preference. So for me, I like this one; it's pretty good. Now all I have to do is simply go ahead and download it. And I can use it for whatever my use case
or my application is. And again, let's say if this
is the one that you like. But in my scenario, let's say I didn't
like the trees. I can continue the same process over and over again
until I'm happy, right? So I can just go ahead and click Generate variation and then it will generate for more
for me based on this image. So now the starting point and
the base becomes this one. And no longer, this is
no longer the original. Just one thing to keep in mind is that every time you generate, every time you input a
Prompt or every time you generate variations
from a generated image, you are using a credit. So keep that in mind, please.
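One more side note on variations while we're here: if you ever want to script the Generate variations button instead of clicking it, OpenAI's Images API has a variations endpoint that does the same thing. Again, this is just an illustrative sketch; the openai package (v1 or later), the API key in your environment, and the file name sunset_beach.png are all assumptions on my part.

from openai import OpenAI

client = OpenAI()

# "sunset_beach.png" is a placeholder for an image you downloaded earlier.
with open("sunset_beach.png", "rb") as f:
    result = client.images.create_variation(
        image=f,           # the base image, like the one we selected in the UI
        n=4,               # four variations, just like the web app
        size="1024x1024",
    )

for image in result.data:
    print(image.url)

And just like in the web app, a call like this counts against your usage.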
For our second exercise, I'm going to use the following Prompt: Create a series of cartoon
in a lively conversation. So this Prompt focuses on the creative potential
of DALLE-2 by asking it to
generate a series of cartoon characters in a
conversational setting. So let's go ahead
and run this Prompt. We now have our results for
this Prompt and this one, the first one, not so great. This one here also
looks a bit weird. Again, remember this is
Generative AI, right? So over time it's going to get better based on
feedback and training data. And it's just going to
and better over time. So not a huge deal. And if you're wondering why the text on
this looks weird, just something to understand
is that right now, an image model like DALLE-2 does
not really have a concept of written text or language inside the image, so it can't
reliably render readable words. So that's why things
look weird here. The only text that it
really understands is the one that you
put in the Prompt. But beyond that, it doesn't have any concept of language or text inside the picture. And over here, this one
is actually the best one. The third one here based on
the Prompt that we gave them. So this is something that
we could use, for example, if this was for a
children's book or some artwork for kids, I
think this would be good. This one also looks
a little bit weird. So I would say probably
the only usable one
here is this one. And if you wanted
to, we can just make more variations starting
with this base. So that's sort of the outputs and the images
that it generated for us. But again, the one point
here is that by instructing DALLE-2 to
create lively conversations, it gives us the opportunity
to experiment with the generation of diverse
characters, body language, and expressions, enhancing
our understanding of generating dynamic and
engaging visual content. For our next exercise, I want to be a little
bit more creative and I'm going to use
the following Prompt. Generate a surreal landscape where gravity seems
to be reversed. So with this Prompt, it helps us to explore the
imaginative aspects of DALLE-2 by asking it for a surreal landscape
with reverse gravity, we can dive into the realm of creative and fantastical
visualizations. So let's go ahead
and run this Prompt. So here are the results of our Prompt and they're
quite interesting. So as you can see, you can experiment with generating unconventional
landscapes, floating objects, and you can even defy the
laws of physics. And this is showcasing
the flexibility of DALLE-2 in producing imaginative and
unconventional outputs. For this exercise,
we're going to try to do something more artistic and we'll see if we can design a cover for
a music album. So this is going
to be my Prompt. Design an album cover for a futuristic electronic
music album. So with this Prompt, it helps us to explore the intersection of DALLE-2
and graphic design. By requesting an album cover for futuristic
electronic music album, we can experiment with
generating visually striking and technologically
influenced imagery. So let's go ahead
and run this Prompt. So here are the results
for our Prompt. And as you can see,
all of these actually have text in them and,
as we covered earlier, DALLE-2 doesn't have a
concept of language or Text, so these are just sort of nonsense texts
that you see here. But the important thing
is that this should, there may be parts of somebody's images that you
can use or they could be used as inspiration for you
to start your own design. So for example, here, if you like this one, you could just
download this image here by going through the
download Functionality, maybe a cutout this
text part here, and replace it
with your own tax, which would be quite
easy to do in a program such as Photoshop or are
they designed tools? This one here, I, it's all, again, nonsense texts, but it's actually pretty
cool in terms of reference. So this has a nice font, like, let's say this
was your album name. And you can use
this type of like, you can use a really
nice modern font. It says using sort of
like a neon light style. It also has a shadow. So while you can do is
you can just use this as a reference and design
yours from scratch, but following this same style. Again with these ones, if there's a parts
that you like, for example, this one, the
bottom part can be used. This one, the top
part can be used. And then you can just put
your own texts on the bottom using Photoshop or any other
program that you like. So again, you can use
parts of the image, or if you like, you can just use
these as inspiration and follow the style and come
up with your own design. And from the results shown here, we can explore the
futuristic aesthetics, neon colors, abstract shapes, and other elements
commonly associated with electronic music
album covers with DALLE-2. And this exercise, I'm going
to use the following Prompt. Create a realistic depiction of a classic painting re-imagined
in a modern urban setting. This Prompt teaches us the concept of
reinterpretation using DALLE-2 by asking for a classic painting green imagine in a modern
urban settings, we can experiment with blending traditional and contemporary
artistic styles. Now let's go ahead
and run the Prompt. The output is quite impressive. Looking at the results, we can see how DALLE-2 can generate realistic renditions of classic paintings
while incorporating elements of modern
urban environments, allowing us to explore
the potential of DALLE-2 in Art and
cultural reinterpretation.
21. 21 DALLE Features: In this lesson, I want
to cover a couple of features that Dalley provides
to us and the end-user. So one is the upload Functionality
and what is the edit? So DALLE, he has
recently introduced the editor that we can
use to edit our images, whether they're
through upload or through the regenerated,
through prompts. And currently the Editor
is in better mode, but there's still some cool
things you can do with it. So let's get started. To start with, we're going
to try and upload an image. And all you really
have to do here is click upload that image
and that's pretty much it. That will ask you for a
prompt to select the image. And let's say I want
to select this one. So this was one that we created in one of
the earlier lessons. And this is just the beach with the sunset and palm trees. So here it asks you whether
you want to crop or skip it. If you wanted to crop,
you can just choose the portion you want
and click Crop. In this scenario, I'm
just going to leave it as is and say skip cropping. And now this is actually taking me to the editor, Dalley editor. And as you can see, it's quite simple, but there's still some
things you can do here. So first of all, if you're happy
with the way this is right now, it's right now. 1024 by 1024 pixels. If you're happy with this, just go ahead and click
the check mark. And as soon as you
click the check mark, these things and the Prompt
will become available. So note that this
is an edit Prompt, Ok, it's not
degeneration Prompt. It will generate things
for you in the edit, but it's the edit
Prompt for Generation. And over here, you have
some options here. So you have the select cursor, you have the pan, you have eraser, you
have a Generation frame, and then you can
upload more pictures. So this is a very Basic Editor, something that we
could do as we can even add more images to
this if you want it to. So let's go ahead
and click this. Let's select this
one for example. And if you want it to, we could combine images, use parts of images
if you want it to. So for example, let's say I'm
going to make this really small and bring this down
here and then hit Okay, and now we have this image
incorporated into this image. And one other thing we can do is we can add a
Generation frame. So Generation frame is something that we can place here on
the existing pictures. And once we placed
the Generation frame, we can go ahead and
put in a Prompt. And when we put in the Prompt, it's going to generate
over or it's going to fill in where our
Generation frame is. So for example, if I just put some description or Prompt
degenerate an ocean, it's going to fill in this
entire generation frame. So that's another capability
that DALLE provides to us. But yeah, that's pretty
much all there is to it, to the output Functionality,
it's super simple. You can use the Upload
File Functionality to just upload a single
image and edited. Or you can upload multiple pictures and use
certain parts of each picture. Or just simply use Generation
frame if you want it to. There's even an eraser
tool here so you can erase parts of an image and just keep certain
parts of the image. And then again, this also becomes sort of like
you're Generation frame. So when you go to do the putting your
prompts in the edit, input, a will also fill
in this portion as well. But we'll cover this part
in the edit functionality, but that's pretty much
it for uploading files. In this lesson, I would
like to demonstrate how you can edit certain parts of an image and how you can use a Prompt to regenerate
those parts are the image. So let's start with
uploading an image here. I'm going to use
our sunset image that we created in
the earlier lessons. So I'm just going to leave
it as is skip cropping. And there's obviously
an endless amount of things you can
edit in this photo. But to keep things simple, what I think I'm
going to do is I'm just going to say Okay to this. I'm happy with this. And I'm gonna grab the eraser, and I'm just going to
erase this sun here. So let's get rid of that and maybe let's just create
a blank canvas here. And one thing I would
like to say is, now I want, now that I've
erase this part of the image, I wanted to replace it
with a different son. So now this is where I
can use the edit Prompt to generate or regenerate
that part of the image. So let's see how
DALLE handles this. For my Prompt, I'm going to try something
more creative and interesting. I'm going to say 3D render of the sun and then
comma, digital art. So let's go ahead and run
this, with an uppercase D just in case. Let's go
ahead and run this Prompt.
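By the way, this erase-and-describe workflow is the same idea as inpainting with OpenAI's Images edit endpoint, in case you ever want to do it from code. Treat the sketch below as illustrative only: it assumes the openai package (v1 or later), an API key in your environment, and two placeholder files, the original image and a mask PNG of the same size whose transparent pixels mark the erased region.

from openai import OpenAI

client = OpenAI()

with open("sunset_beach.png", "rb") as image, open("sun_mask.png", "rb") as mask:
    result = client.images.edit(
        image=image,
        mask=mask,  # transparent pixels = the area DALLE should regenerate
        prompt="3D render of the sun, digital art",
        n=4,
        size="1024x1024",
    )

for item in result.data:
    print(item.url)

In the web editor, the eraser tool is effectively building that mask for you.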
Okay, so we have our results. So let's take a look. So the first image, not much is visible, so this is not great. You can click on the right and left arrows to go back and forth. This one, not so bad, but you can't really tell
if it's 3D or not. This one is still too
2D and not that great. And this one is just a
bit further away. So again, this is still in beta and the results
aren't too bad. I would say probably, maybe this one is the best. So let's say this
is what we want it. And then we can go
ahead and click Accept. And once we click Accept,
we have access to this and we can go ahead
and save it. There's also some additional functionality
there. And again, we can put in
more prompts or you can just come over here
and click on Download. And this will download
the image for you. One last note on the edit functionality and the DALLE editor
is that you don't necessarily have to
start with uploading an image before you can
actually edit something. You can simply just use the Prompt input here
to generate an image. And once you have your output, you can just choose
one of those images. And from the Actions menu, you can enter the editor
mode and edit that image. So very user-friendly. You can even look at your history over here
on the right-hand side and select something that you have already created
in the past. So let's do another couple of examples together, just for fun. So let's go in here again, these are the ones
that we created and let's say we
really liked this one. So we want to make
one more edit. So let's go ahead
and click that. And you can see from here, you can just go to
Edit from here. Or you can simply, while you're back in this view, you can select and
go Edit Image. So multiple entry points to
the same functionality here. So let's go ahead
and edit this one. Okay, now that we're
in the edit mode, we already tried
something with the sun. Let's do something a
little bit different. So let's grab the eraser tool. And what I'm going to do is I'm just going
to erase this part. Okay? And then as
soon as I do that, you can see the edit
Prompt becoming visible. And it says describe the
entire desired image, not just the erased area. So this is actually
a very good tip. So here what I would
like to say is, let's edit this a bit. So let's say man, surfing in the ocean near a beach with palm
trees and a sunset. Okay, so this is a good tip. So let's go ahead
and generate this. So here's the new output by DALLE with the edit
and regeneration, which actually
looks pretty good. So let's start looking, going through this and start
looking through the outputs. So this one is not that bad. Not so great either because the human is not
looking very real. Let's look at the next image. This one is not bad. It's a bit cartoonish
with some green. The green water
of the ocean mixed with blue doesn't go too well. This one is not bad, but it's really, really
small and hard to see. And this one is actually it's not that
bad, It's a bit small, but you can see that
the person is trying to surf in the ocean. So very interesting. And the cool thing
is you can see DALLE was able to maintain all the aspects of
the original image. So nothing's really changed with the palm trees,
the sun, or the ocean, just the area that we
cleared and erased. And it was able to fill it in, to the best of its
abilities, with the surfer, which is what we described. Okay, let's just do
one more for fun. So let's say we like
this one better. So let's go ahead here
and then we can click, we can either create
variations or edit this or share it or do whatever
we want with this, download it and save
it to a collection. But let's go ahead
and do one more here. So what I'm going to do, I'm going to make more
of a drastic change. So what I'll do is I'll
just erase all the palm trees here. So let me get rid
of all of this. I'm gonna get rid of this. So basically no trees are visible and I'm also
going to get rid of the sun. We'll try this one more
time to see if it can produce a better result. So really we have the ocean
and we have the surfer. And let's see what we
can come up with here. Now I'm going to paste in
this Prompt for the edit. So just again, getting a
little bit more creative. Man surfing in the ocean
near a beach with, instead of palm trees, I changed the type of
trees to pine trees to see if we can generate that. And a sunset with a 3D sun. So I've also changed that. Just curious to see how DALLE
is going to handle this. So let's go ahead
and run this Prompt. Alright, so these are
the final results and as you can see, it did its best to
interpret our Prompt. And some of these actually
look quite funny. So this one, it tried to put in
some 3D-type pine trees here; the ocean and the surfer
remain the same, the background remains
the same, which is good. It was able to maintain
those aspects of the image. Here again, we have
some palm trees. These trees are not really, we can't really tell what
type of trees they are. And these, yeah, these are close to pine trees. But you can see the sun; the interpretation DALLE had of the 3D sun wasn't that great. Mind you, I can probably do a little bit better
in terms of my Prompt, providing a better Prompt
to help it understand. And I can also
provide feedback in terms of whether this was what
I was looking for or not. So yep, overall, this was just to
illustrate the ability and functionality of the DALLE
edit and editor features. So I hope this was helpful and you learned
something in this lesson.
22. 22 Midjourney Overview: In this section we're
going to cover and learn about Midjourney,
what it is. While you can accomplish
with this tool, we'll go through the
sign-up process, will go through different
subscription plans that are available to you. Different types of prompts
that you can use to create different types
of images and artwork. And then we'll cover the basic and advanced features through hands-on exercises. Midjourney is a Generative
AI tool that can generate incredible and
realistic images and artwork. Any use case that
you can possibly imagine Midjourney can help
you generate images for. It is built on top of the
stable diffusion model and it uses discord as the
user interface to the tool. The Midjourney bot is available
in discord and you can use prompts to help
provide description of the type of images
you want to create. Prompts can be very simple
or they can be very complex depending on how creative or
specific you want to get. Let's take a look at some
examples to give you an idea in terms of what you can
accomplish with Midjourney. Okay, first thing we wanna do is just launched the browser. And what we wanna do is head
over to midjourney.com. So www.midjourney.com
here. Once you're here, just click on this button here called showcase on the
bottom left-hand corner. So go ahead and click that. So here's the
community showcase. This is where people have posted the examples here in terms
of what they've generated. Use the prompts and
generated in Midjourney. And these are just posted
here as examples for you to go through and get
some inspiration from. So as you can see here, these are amazing pictures
and just like DALLE, this is a great
point of reference if you want to
come here and take a look at specific
photos and get some creative ideas or find the style that you
really looking for. And once you have an
image that's kinda close it in terms of what
you have in your mind. This is where you can just hover over the picture or the image. And then you will be able
to see the Prompt user actually wrote to use and generate this
image in Midjourney. Okay, so let's just quickly
go through a couple of these just to see what the
prompts are looked like. This one is quite
basic and interesting. So as you can see, the images sort of like
2D posts or types. So if I hover over it, you can see the Prompt Here, minimalist flat design poster. So it's actually talking about the type of image,
in this case A1. The person wanted this to be
a poster type of artwork. And over here it's talking
about the location. It's talking about sort of
like the type of photography. So in this case
landscape, right? And also interesting enough, the specified the time
of the year, summer. So that would
influence the image. As you can see,
there's lots of light, sunlight on the building here. And that would definitely like the environment is
something that would influence or even the weather, for example, would
influence the output. Over here, this is a good one. So here we have a puppy. So the prompt for this, it says a cute puppy
playing parks sunlight 3D. So as you can see here
to all those have been incorporated into the
final image here. And then interesting, it says
that Disney Pixar studio. So over here you can actually
specify in the Prompts, you can specify the type of movie or TV show or
even name the actor. And it will add the AI will try to match the expectation in the Prompt as closely
as possible by going back and using that data
to influence the output, again as closely as
possible to what you've specified in your output. If scroll down, this
one is a good one. So as you can see, this is a more sci-fi and
space sort of image. And if you hover over it, you can see it says
aliens that film, movie. So actually this user
named exactly the movie they want the image to match in terms of
type and style. So this is interesting
fluid photography and then close up. Close up again is another
type of photography. So close-up, macro, side view, front view, bird's eye. So these are definitely
do affect the final. Those are the options
and parameters that will affect
the final output. And you can easily incorporate
those into prompts. If we go a little
bit lower here, these are all great,
great images, lots of examples and you
lots of inspirations from, let's see here we got
a sushi one here. This is a close-up magazine
quality shot off sushi rolls. So again, this is interesting. So close up again is the type of photography shot and
then magazine quality. So kinda like what you
would see in a magazine. This is trying to match that. So Midjourney, try to match
that in the final result. Scroll down here. Here's
another food one. So this is cheesecake. If you hover over, it
says a close-up magazine quality shot of a luscious
New York style cheesecake. So and then over here you
can see food photography. So again, the type of photography is
incorporated in here. Scroll down a little bit more. This is a good one, so
you can see it says inch photoshoot style close-up. Again, the angle
shot and rain drops. So this is talking about
sort of like the weather, which you can see in the
illustrated for you here. And in the city at night. So the environment and
also the time of day can definitely help influence the image that
you're looking for. And this is a good one as well. So this one boy is walking with his pet raptor in the
leash city locations. So the environment
and this Prompt actually illustrates
one point that you can actually include the lens type in terms of the type of
photoshoot you're looking for. So in this case, it is Canon EOS R7. So if you're familiar with photography and different
types of lenses, you can also include those in the Prompt because the AI model has contexts and it
has training data. So it knows what. This actually clarifies
what type of photoshoot or photography lens the output
should closely match. So again, a great example of what you can include
in your prompts.
23. 23 Midjourney Account Setup Subscription Documentation: Alright, now that you
know what type of images and outputs you can
generate with Midjourney, it's time to get started. And in order to start, you don't really need much, you just need an email address. And all you have to do is register registration
is for free. And you can just head over
to midjourney.com again. And while you're here,
all you have to do is click the Join
the Beta button. So go ahead and do that and
just follow the instructions. If you can. If you already have an
account with Discord, you can just go ahead and login. If you don't, then
just create one. You will receive an e-mail, you accept the invite, and I already have an account, so I'm just going to login and all you have to do is
follow this process. And I'll see you back
here in a second. Once you sign up and
login to discord, you have to allow
authorization for the Midjourney bot to
your Discord account. So here all you have to
do is click Authorize. And once that's done, we're now logged in and we
can start using Midjourney. Once you're here, all
you have to do is click on Join the Discord
to start creating. And this should take
you to Discord. Here you just accept the invite
open the discord app. And then we should
be able to go ahead and start using the Midjourney
and interact with them. Midjourney bought. Alright, we have our Account
Setup and we're logged in. Now one thing to note
is that there is this section called
newcomer rooms. And over here you will see rooms that start with
the label newbies. This is where you
want to go when you first register an account. Over here, you can actually use Midjourney and
provide your prompts to create images and artwork. And you can also see what
people are actually generating. What other people out
there like myself are, you are going to be
generating using Midjourney. And again, it's a pretty
cool place to just look at other people's artwork and
look at their prompts and just get different
types of inspiration. So it doesn't matter
which one you click Add. You can just select
a random one. I'm just going to click
on this one here. And as you can see here now, we have entered the room and you can immediately see the pictures that people have generated. So this motorcycle
is an example. Over here. We've got some more Generation, we've got some more Generation. And you can also see the
Prompt that people use. So for example, this one, it says motorcycle
has Ant-Man Logo. So this is actually pretty
cool, 8K resolution. This dash, dash s is a parameter we'll
talk about later. But just wanted to let you
know that this is where you need to come in in the newbies
room to start using this. Now before we start generating some examples and go through
some hands-on exercises, I wanted to go over the
Subscription plans. And this is straight from the
Midjourney Documentation. And they have several Subscription plans that
I would like to cover. So you know exactly how much time you have with
the free trial and then what are the other what are the other available accounts
and what they provide? Now, here, basically, there
are three Subscription plans. We have the Basic,
we have the standard and we have pro as the
time of this recording. Of course, these could
change in the future. Also, if you just creating
a brand new account, you have free trial. And as you can see here, you have a 0.4 h of lifetime. So again, this doesn't
renew every month. This is just a onetime
trial that you can use Midjourney and see
if it fit your needs. And this is something
you want to subscribe to you
in terms of plan. But you can see
0.4 hours is equivalent
to approximately 20 to 25 minutes of generation time. And just to give
you a perspective, it approximately takes
about somewhere close to a minute to generate each image from the time you provide Midjourney
the prompts. So about 60 seconds or so, sometimes less, sometimes more. But that means that
this should give you roughly about 20 to 25 Images plus or minus a
few, give or take.
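If you want to sanity-check that math yourself, it's just minutes of GPU time divided by roughly a minute per generation. The numbers below are only the rough estimates from this page, not exact figures from Midjourney.

trial_hours = 0.4
minutes_per_image = 1.0                      # roughly a minute per generation
images = trial_hours * 60 / minutes_per_image
print(images)                                # 24.0, so roughly 20 to 25 images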
Alright, next we have the Basic plan. So the Basic plan
is $10 a month. And this one will give you 3.3 h of Generation
time per month. Now with the paid plans, you also have the
add-on features. So let's say you
run out of this, but you only needed a few
more images you can use. You can use the top of features, so you can pay $4 for
an additional hour. Let's say you didn't need as
much as the standard plan. You just need a little bit more on top of your basic plan. You can also use the
add-on feature to top up. And one thing I'd
like to bring to your attention is that you can pay in two
different methods. So you can pay monthly
or you can pay annually. If you pay annually
for the Subscription, you actually get a discount. So if we scroll up here, you can see that there's actually a description
about that. So Midjourney has three
Subscription tiers, pay month to month, or for the entire year for
a 20% discount. So if you look over here, you can see that the basic
plan is $10 a month if you pay month-to-month or if you want
to pay for the entire year, then you get it a
little bit cheaper, you get it for $8 a month. So something to keep in mind when you're
purchasing your plan? The next plan is
the standard plan. So this one is $30 a month or if you purchase
annual subscription, you get it for a
little bit cheaper, so $24 a month. And this one's pretty good. This one gives you 15 h
of Generation per month, which is quite a lot of images. So if you're a frequent user of the Midjourney for
whatever your use cases or your purposes where
there's for school, work projects, personal
projects, whatever it is, then this this is a recommended
plan here because you get a lot of Generation
time here with this. And of course you have the
Pro Plan which is $60, again, 48 a month before
with the annual discount. And this one gives
you 30 h. And again, if you need anything above that, then you can use the add-on
feature to top it up. And there's also
one other feature with the pea plants here. You can also rate images. So there's a feature too, so that you can read other
images in the platform. And that should give you, that should earn, use
some free GPU time. So these are sort of what the Midjourney plans are as of the time of
this recording. Just a couple of more notes on the subscription plan
before we move on. So if you scroll up here, you can see it's very easy. The Instructions to subscribe
to a plan, it's very easy. So it says how to subscribe. And all you have to do is use a slash subscribe command to
generate a personal link. And it'll take you to
the Subscription page, or you can simply go to
midjourney.com account. So if I click on this, this will take me
to this page here. And over here. This is where you can actually select whichever plan you want. So again, if you want
to pay you early, you get it out a little
bit of this council, $8 a month build yearly. If you want to go one-to-one, then just click this
button here and then it will be a little
bit more expensive, so $10 a month here. Now, one other thing is
if you scroll down here, you will see that there's some frequently asked questions here and there's an
important one here. So these are just a
typical questions. If we go down here, this is very important. So it says how does
commercial use work? So in terms of copyrights and
privileges and ownership. So it says, if you have
subscribed at any point, you are free to use the images in just
about any way you want. So basically what this is saying is that as long as you have a Midjourney Subscription
and then you generated the images
using that Subscription. You are free to use it in
any way that you'd like. So this is pretty good in terms of ownership and copyright. So very important thing to know. And this is one way to
sort of get to this page. Another way, another
way that was instructed is we can
just simply go here. We can type in subscribe and
we just have to click Enter. This will generate
a link for us, personal link, and
then you click it. And then it will take us
to this page as well. Again, just two
different ways of going through the same page to pay purchase a subscription. One last note before we
get started is that I wanted to bring your
attention to this page here. So this is the Documentation
page four, Midjourney. And this is very, very helpful, especially if you're
getting started. And throughout the course and this section we will refer to this different parts
of the documentation. Or you have to do is navigate
to docs.midjourney.com. And it will take
you to this page. So there's a QuickStart guide. There's lots of very, very helpful and useful
information and they should answer some of the
questions that you might have. And there's different sections here using this code User Guide. For example, on their
User Guide, under Version. So if you go here, you can
see that it tells you that the default model version
is set to version 5.1, which was released on May 4, 2023. And this will keep
getting updated with the new new releases and newsletters and release
notes and so on. And what we just covered
earlier was this in the Subscription that can be found under the
subscription plan. So if you go on there
Subscription section and you click
Subscription plans, that's exactly how you can
actually find this on. Again, all the instructions
are here on how to subscribe. So very helpful
Documentation page and there's lots of
useful information. And just like a
subscription plan, we'll come back and we'll refer to this from time to time.
24. 24 Midjourney Example 1 Image Generation and Basic Functionality: It's time for us to start having some fund with Midjourney and see what sort of
images we can produce. So let's do something funny and entertainment to
start the process. So let's say, first
thing you wanna do is you want to click on the
Discord message part here. Now, you can just
start with the Prompt. You have to say imagine this is how Midjourney works in terms
of inputting the Prompt. So you can see when
you go slash imagine, then you input your Prompt
and it says that description says creates images
with Midjourney, which is exactly
what want to do. So let's go ahead and do that. So imagine and then our Prompt, Let's say something
along the lines of cute hotdog with sunglasses. Very curious to see how Midjourney is going
to process this one. Now, the type let's
say I want is 3D like and let's
give it a location. So let's say Hawaii. And then I want this to be
somewhere on the beach. And then time of the year, let's say this is gonna
be summertime and the weather condition,
let's say sunny. So let's go ahead and run this. Now, as I mentioned, Midjourney is going
to take about oh, it says that it's not
accepted if you have to have to accept the
terms and conditions. So let's go ahead and do that
except terms of conditions. Please note that the terms and condition Prompt only
shows up once when you first create your account and once you agree to
wet, it won't show up. Again. Andy can proceed with creating your images
with Midjourney. So let's go ahead and
input our Prompt. Alright, let's go ahead and
put it in our Prompt here. So imagine acute heart with sunglasses 3D. We want this to be in Hawaii, somewhere on the beach. Time of your summer and then we wonder whether
it'd be sunny. Now, I'm going to add in one more argument
at the end here, and that's dash,
dash as space 800. And you can leave this out, but let me just explain what this dash dash stands
for and that's stylized. So if we head over to
Midjourney Documentation, if we go on their user
guide parameters, over here you can find stylized. So go ahead and click that and here it explains what it is. So it says, this Midjourney
board has been trained to produce images that favor artistic color
composition and form. The dash dash stylized
or dash, dash S. So either one you can
put in your Prompt, I just use the
shorter form here. It says it influences how strongly this
training is applied. A low value produces images that closely
match the Prompt, but are less artistic. And high values create
images that are very artistic but less
connected to the Prompt. In my case, because I'm looking for something
more creative that's not too close to the
reality and a bit unorthodox. I want this to be less
connected to the Prompt, and I want it to
be more artistic. That's why I tagged dash, dash slash space
1800, excuse me. That over here you can see
that the default value is 100 and the integer it accepts the integer of anything
in this range. So 0-1 thousand.
So as you can see, 100 is sort of on the
low-end of the range, meaning by default, it will stay closer to your Prompt
and be less artistic. And as I mentioned in my case, because I'm looking for
more creativity and I want it to be more artistic and less connected to my Prompt. That's why I put a
higher number there. So this is what
it is and you can just try this out without
the stylize 800. So let's go ahead and
run this Prompt now. Okay, now this is going to take about a minute to generate.
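While that runs, one small aside: since Midjourney has no public API and a prompt is just the text you type after /imagine in Discord, it can help to think of it as a template, the subject first, then style details, then the double-dash parameters at the end. The little helper below is purely my own illustration, not anything Midjourney provides; it just assembles the string you would paste into Discord.

def build_prompt(subject, details=(), stylize=None, aspect_ratio=None, seed=None):
    prompt = ", ".join([subject, *details])
    if stylize is not None:
        prompt += f" --s {stylize}"          # stylize parameter
    if aspect_ratio is not None:
        prompt += f" --ar {aspect_ratio}"    # aspect ratio parameter
    if seed is not None:
        prompt += f" --seed {seed}"          # seed parameter, covered later
    return prompt

print(build_prompt(
    "cute hotdog with sunglasses",
    details=["3D", "Hawaii", "on the beach", "summer", "sunny"],
    stylize=800,
))
# cute hotdog with sunglasses, 3D, Hawaii, on the beach, summer, sunny --s 800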
So what I'm going to do is pause the video here and come back when
don't waste your time. One thing I'd like to
mention is when you generate prompts and while you're
waiting for your image, because it does take
awhile to generate. You want to keep an eye out on your image generation process because there's so much noise in this Newbie discord channels
that you might lose track. So just keep an eye out. And because this is actually
how a lot of people go in these channels and
they generate a lot of prompts and Images
and it just stacks up. And then it just a lot of noise. So you might have to keep
an eye out on your image, your particular image
that you try to generate. Now let's go scroll
up and be found hour. So this is actually the prompts. So you can see this was our
Prompt and we were able to, It was able to
generate this Prompt. So we have a couple
of options here. Now, let's actually analyze this a little bit so you
can click on this. And if you click on this, it shows you the image. But then there's another button here called open-end browsers. So if you click that, It's going to open it in a bigger and better
resolution so that you can actually see
what the output is. Taking a quick look
at the result. First thing we notice is that Midjourney generates for
images for every Prompt. So this is to give
you some sort of variation and see which one
you want to start off with. And by default, as you can see, these are all square shape, but we'll talk about that in the next exercise and how we can change some of these
aspects ratios and make them non-square. For example, we can have them in rectangle or other
aspects ratios. Now looking at this,
given our Prompt, it wasn't our focus was
more on the hot dog, but I guess the interpretation
of the model and the application was more
around the animal dogs. So looking at this, it's actually not bad. So he got most of the
things correctly. So for example, in
this one there's a dog with the sunglasses
on. It's on a beach. You can see the ocean and
sky and the background. And you can see that
it's pretty sunny. Same thing with this one here on the lower right-hand corner. This left one here. This is actually not bad because it has the
picture of the dog. It's pretty much
everything with a bit of motion blur at the end there. And there's also a hot dog
sandwich in front of the dogs. So this is actually not bad. But you can see
that this is sort of like the interpretation of the AI model of our prompts. So you could do, You could actually go ahead
and retry this again. And while we can do is
you can regenerate. So if we go back to, let's go ahead and close this. And if you go back to discord, one thing you can do here is you can click
this button here. So this is basically saying regenerate based on the
exact same prompts. So let's go ahead
and click that. And now this is
going to regenerate for more options for is, it's very similar to us
putting the prompt again, but it's just easier
and a clickable button. So let's see what it gives us. Okay, we have the result
of our regeneration. So if we take a look here, these are actually not bad. These are pretty cute.
So these are cute dogs with on the beach
during summertime. And this one, again, they're all wearing sunglasses, so this one has a hot
dog in front of it. So yeah, this is
actually not bad. And this is basically like our first exercise just
to get things started. And again, just represents the interpretation
of the AI model. And if we go back here
in our next exercise, we're going to try
some different prompts and then it will cover what these things over here
mean and go through some of the features
that Midjourney provides
25. 25 Midjourney Example 2 Aspect Ratio: Before proceeding
to our next Prompt, one thing I wanted to
mention is that one of the nice features about having a paid subscription plan
compared to a free trial is that you can actually chat with the Midjourney bought directly. In fact, there's another
option where you can create your own discord
server and you can invite them Midjourney
BAD to that server. But for now, we're keeping
things simple because there's so much noise and
so many prompts by so many other people in
the newbie channels. And it's sort of gets cluttered. You have the option to just chat directly or privately
with the journey. But, and that way,
things remain a lot more organized and you can easily find things when you
scroll up and down. In order to do that,
all you have to do is just any Prompt in the channel. All you have to do is
right-click Midjourney BAD here, and then just go message. And now you can actually
just interact directly with the Midjourney by
for this exercise, what we're gonna do
is we're going to generate car or a
picture of a car using our Prompt
and then go through some other features that are
available in Midjourney. So let's start by
saying imagine. And over here let's
create a Tesla car. So let's say Tesla Model Y. You can again put anything. Now, one thing I wanna do is let's try some different
settings or tell them Models, some specific things
we're looking for. So let's say cityscape. So sort of like your
city landscape here is the environment
I'm looking for. And then for atmosphere, Let's give it sunset. And then one thing I'm
going to tag along here is the aspect ratio. So dash, dash, dash
ar, space, 16 colon 9. And if we head over to
the Documentation here, you can see that on
their parameters. You can see Aspect Ratio. So if you click on
that over here, it's sort of gives
you the definition of what this parameter
is and what it does. So the aspect or AR, so you can use any of
this and you're Prompt. Obviously this one is shorter, changes to Aspect ratio
of the generated image. So this actually
help us not produced just square images we can put
in really anything we want. And aspect ratio is the width to height
ratio of an image. It is typically
expressed as two numbers separated by colons
such as 7:4 or 4:3. And then over here it
talks about some common ones, like 4:3 and 16:9; these are all
common aspect ratios. And over here it talks about how it
translates to pixels.
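If the ratio-to-pixels idea feels abstract, here is a tiny worked example. The 1024 base width is just a round number for illustration, not Midjourney's exact output size.

def height_for(aspect_ratio: str, width: int) -> int:
    w, h = (int(x) for x in aspect_ratio.split(":"))
    return round(width * h / w)

print(height_for("1:1", 1024))    # 1024, a square image
print(height_for("16:9", 1024))   # 576, a wide rectangular image
print(height_for("4:3", 1024))    # 768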
So now let's go back to our bot here and generate this. Let's run this Prompt
and see what kind of outputs we can get. The results are now ready. And as you can see here, these are pretty
amazing pictures. It did a really great job
in terms of matching or the outputs here
generating the output that closely match my Prompt. So one thing to quickly
notice right off the bat is that these pictures
are no longer squares, as you can see,
these are actually more rectangle and
that's because of the Aspect Ratio parameter be provided as part
of our Prompt. And you can see that these
are all pretty great. They're all Tesla Model Ys. They all have that city
landscape in the picture. And then you can see some great, very nice sunsets here and some other reflections
of the light. You can see on the car and on
the road and on the street. So I think Midjourney did a pretty decent job generating
these four outputs. Now that we have a good
output to work with, let's cover some other features
available in Midjourney. This one we already covered
earlier in the lesson, and it's just a quick way or a shortcut of
saying generate for more images that from the initial Prompt
that'd be provided. So that's already covered. Now, let's talk about U and V. U stands for upscale and
V stands for variation. Now, let's look at the talk
about these numbers here. For both U and V, you will
see one-two-three-four. And these are just
corresponding to the grid here. So this one here, top left, this is
number one, top right, this is number two, bottom-left, this
is number three, and bottom-right,
this is number four. So when you go, when you want to click u1 or
upscale picture number one, you're referring to this one. When you click on V4, you're actually referring
to this picture here. Now, let's talk about what
U and V actually means. So when you click
on you or upscale, you are telling Midjourney that you want that exact image. And Midjourney will produce
a very high-quality, high-resolution version of that image with refined details. You can then open it
in the browser and download the image and
use it for your use case, whatever that may be. And you can even upscaled
further if you'd like. Now, when you click on V, Midjourney will create for new variations of
that picture using The original picture as a base. So if you recall in one
of our earlier lessons, this was very similar
dysfunctionalities, very similar to the feature
in DALLE-2 that we generate four new variations based on the base image which we
covered earlier in the course. Now let's try out
these features. So looking at the
original output out of these four pictures, let's say I like this one
the best. So number four. So but I still don't want
to use this one just yet. I want to see what Midjourney can generate, offer this one. So one thing I can
do is generate four more variations
based on this image. So because this is
number four in the grid, what I can do is I can click V4. And what that's going
to do is going to take this image as a base and generate four more variations
based off of that. So let's go ahead and do that. The results are now ready. So if we quickly go
and inspect this, they actually look pretty good. And one thing we can do is this is actually higher resolution. So if you click on
opening browser, you can see a bigger image
and you can even click. It's not fully zoomed in. So if you click and you can see the fully high resolution
actual Image Size. So this is actually
really, really nice. This one. So this
is the first one. Very nice. Car has already
parked in the parking. This one is also great. This one is number four, so this is okay. Not really liking this
part here on the top. And this is the number three. So this one actually
looks very nice. I liked this one. I liked the bridge on the top. And I like to buildings
in the background. I like the color or the
clouds and the sunset, the light reflecting
into the clouds. I also like the sunset and the light
reflecting off the car. Very nice. And then also
the light on the street. So this is, I would say, probably my favorite one
out of these four again, is just a matter of preference, and this is just for
learning purposes anyways. So let's say out of these four, I like number three better. And this is now what I want
to use for my use case. And now one thing we can do is let's go ahead
and close this. And if you go back
to this chord, I want to use number three here. So again, remember this was 123.4 and I want number three. So what I wanna do is I want to tell Midjourney to
upscale number three, which is going to generate it, generate a very high
resolution picture of number three so
that I can actually save the image for my use case. So let's go ahead and do that. Click on U3. Okay, so Midjourney finished
upskilling number three. And as you can see the results, It's the same image is just
bigger and better resolution. So if you click on it, now you can see the image. And if you click
open in browser, this will give you
the full size. And now all you have
to do is simply, you can just right-click and go copy image or save image
as whatever you like. And now you have
the image and you can use it for your use case. And you can use it
however you like. Because I mean,
in this scenario, I have a paid subscription and I generated this with
a paid subscription. So according to the
Midjourney Documentation, I can use this image
however, I like
26. 26 Midjourney Example 3 Seed Functionality: Alright, now that
we know how to use Midjourney and we're
familiar with some of the Basic Functionality
is I think it's time to start talking about somebody
advanced functionalities. So to start with,
we're going to take a look at the Seed
Functionality. In order to get started, let's go ahead and generate an image so that we
can work with it. So let's just create
something very simple. Imagine let's say I
want a baby ocean. Ocean with waves, blue and
green water each summer. And let's just go ahead
and generate this. Okay, so we have the results. And as you can see, this is what the four
images that'll give us. I like the first one here. So what I'm gonna do is I'm
just going to upscale one, so U1. Okay, now we have the
final image here, and I'm pretty happy with this. Now, let's say I just want to change something
with this image. For instance, let's say I'm happy with the
ocean and the waves, but now I just want to add
a person to this picture. So for example, I
want to add a surfer that's surfing in these
waves and the ocean. But I want the output
to still be very similar to this picture
that we just generated. And this is where
we can actually leverage the Seed Functionality. So let's head over to the
Midjourney Documentation. And if you look on
their parameters, there is a parameter
called seeds. So let's go ahead and click that and see what
that one's about. So here there's a description. So it says Midjourney uses a seat number to create
a field of visual noise like television
static as starting point to generate the
initial Image grids. Seed numbers are generated
randomly for each image. And I think this is the
important part here using the same seat number and Prompt will produce similar
ending images. So this is exactly the type
of thing I'm looking for, for the purpose of this
exercise and what I'm trying to accomplish with my final image. And one, there's some
more information here about the seat numbers. This is also pretty important here and it says seat
numbers are not static. I should not be relied
upon between sessions.
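If you want an intuition for why the seed matters, here is a toy analogy in Python. This is not Midjourney's actual code; it just shows that fixing the seed fixes the pseudo-random starting point, which is why the same prompt plus the same seed gives you very similar images.

import random

def toy_generate(prompt: str, seed: int):
    rng = random.Random(seed)                 # the seed fixes the "noise"
    return [round(rng.random(), 3) for _ in range(4)]

print(toy_generate("ocean with waves", seed=1234))
print(toy_generate("ocean with waves", seed=1234))   # identical result
print(toy_generate("ocean with waves", seed=9999))   # different result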
Now, if we scroll down a little bit in the Documentation page, you can see that there's
some examples here. So they're trying to create
an owl picture here. And over here they do that. And then for the second want, they are using the Seed and we'll experiment
with that in a second. If you scroll down here now, it tells you how to find
the jobs seat number. And it says using the
Discord emoji reaction. So you can just see here there's a quick video that shows
you how to do that. But basically it's saying
that all you have to do is react with an envelope
emoji, to a job, and then that should give you the Seed number that we're looking for so that we can use it for our next Prompt. So let's go ahead back
into discord and do that. Let's go ahead and react with an envelope to this image
to see if we can get that. So all you have to do is click this Add reaction button here. Now let's search for envelope, and then it's this
first one here. So let's go ahead and do that. And you can see that we actually got we actually
got two things back. We got job id and
then we got the Seed. I just care about this Seed
one for this exercise. So let's go ahead and copy that. I'm just going to copy
that to the clipboard. Okay, Now let's
see if we can use this seed number to
generate our final image. So remember, I want to keep
this as a base image and, or at least something very
similar to this image. And just add a
person surfing here. So while we're gonna do
is we're going to provide a very similar Prompt but with
slight modification here. So imagine. And then I'll say
ocean with waves. And this is the part
I'm just going to add one person surfing, just trying to keep
it to one person and now multiple people, and then everything else
will remain the same. Then at the end, I'm
just going to do dash, dash Seed, and then I'm just
going to paste that number. So this is very, pretty much the same as the first Prompt that be provided to get
this image. Above. All I've added here is this part here and one person surfing. And then over here I tagged
this parameter dashed I Seed, and then this is the seed
of our original base image. So let's go ahead
and run this Prompt. Ok, so the results are in, and this is not too bad. So we have four pictures
with ocean and waves. And there's one surfer
in each of them. Now I believe the
bottom two here look closer to the image
that we started with these, this one and this one, not
so much, but these two, especially this
number three here, it looks very close to this one. And I think maybe it might have gotten
confused when I provided the word beach here
because in this one you can see a little bit of the
beach here and the sand. But in none of these we
can actually see that and logically it makes sense
because when you're surfing, you're further out
and you can't really, you're not close to the beat. So maybe what we can do one last try is we
can actually take that out and maybe simplify it a little bit and rely
more on the Models. So Let's go ahead and do that. Let's produce a new image. So let's say imagine. And then we'll just provide
the exact same Prompt here. So we'll say ocean with waves. Then blue and green water will just say summer and
we'll leave it at that. So no beach in this one. So sorry, I spelled width here. And then Ocean with waves, blue and green water
and then summer. So let's go ahead
and run this Prompt. Okay, here's the
result of the new one. And let's say I like
number four here, so nice clear waves. And then let's just see if
we can actually go ahead and add surfer to this one. So first let's do, let's apprise number
four. So we'll click U4. They should create
a new image and then we can grab the Seed
from that new image. Now, let's find the
seat number for this. So go ahead and react
with an envelope to this one. Okay, great. So now we have the Seed number. Let's go ahead and copy that. And then for my next Prompt, I'm going to keep
things very similar and I'm just going
to add the surfer. So let's say imagine. Then we'll say ocean
with waves. Here. I'm just going to say one surfer summer. And then we'll do Seed. And then this is where we'll
paste the seat number. So everything is exactly the
same except I just added the one server here and then Use the Seed number
of the new MHC. So let's go ahead and run this. We have our results and if
he quickly inspect this, we can see that probably
out of all of this, I would say probably
this one here. So number two is the one
that best matches the direction of the wave, the coloring, and the texture of the waves in the base image. So if you go back here, I would say number two in this result matches
the Seed Functionality. So hopefully this has
helped you with that. If you recall, back to
the earlier lessons, this seed feature is very similar to DALL-E's erase-and-fill and generate-and-fill features, because in reality you generate an image, then you use an eraser to erase parts of the image, and then you provide a prompt that's similar but slightly different to make a new image off of that base image. Conceptually this is very similar, and you can accomplish something like that with Midjourney using the dash dash seed functionality
27. 27 Midjourney Example 4 Combining Images: In this lesson, we're
going to explore a couple of different
ways of Combining Images. So let's say you have a few pictures on your
computer or your device, and you want to
blend them together. And we can actually
accomplish this through several different
ways in Midjourney. So let's get started. Now, in my case, what I've done is I've created
two images in Midjourney, and they're both images of cars. What I thought would be interesting is to try and blend two different cars together to see if we can create a new concept car, which is a pretty cool idea. So what I've done here is I've created and saved an image of an Audi Q4 e-tron, which is an electric SUV. That's going to be our image one, and for our image two, I've created a Tesla Model 3, which is an electric sedan.
to try to combine the two to create a new concept car. In order for us to
accomplish this, we first need to upload our images from our
computer or device into Midjourney, and it's actually very easy to do. All you have to do is click this Plus button and
upload the file. And here I'm going to
select both of these. So Audi Q4, e-tron, and Tesla Model three. So I'll go ahead
and select both. I'll upload them, and then
just simply click Enter. Now that we've finished
uploading the files to Discord, we just have to grab the image links so that we can combine them using Midjourney. It's actually very easy: by default, when you upload an image, Discord puts it on a CDN, so we just have to go and figure out what that CDN link is. All you have to do is click on this image here,
simply right-click and go copy Image link. Alternatively, you can just
grab it from the address bar, whichever you prefer. So I've gone ahead and grabbed the link for that and copied it to the clipboard. So all I have to do is go
ahead and type in imagine. Then I'm just going to paste
the link and then space. And now I want to put in the
link for the second one. So again, just go
ahead and click on the Tesla Model 3 here, open it in the browser, right-click, and Copy Image Link. Now we can close this, and then,
after the space here, we can go ahead and paste that. And now what we're trying to
do is we're trying to blend the two cars together to
create a concept car. And as you can see, this is actually a pretty simple prompt. There's not much you have to do: just provide the two images via their links and let Midjourney take over.
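As a rough sketch, the prompt is just the two links separated by a space (the links below are placeholders for the Discord CDN URLs you copied):
/imagine prompt: <link-to-audi-image> <link-to-tesla-image>
No text is needed at all; the two image links on their own are enough for Midjourney to blend the concepts of the two pictures.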
So let's go ahead and run this prompt. Alright, we have the results,
and if we take a look, they actually look pretty good. The final car is
actually still a Tesla, but we can definitely notice the modifications resulting from the blend. If you take a look here and go back, you can see that on the original Tesla, for example, the rims are not the same as in the output; the rims on the new concept car are actually closer to the ones from the Audi. It's really a combination of the Tesla rim and the Audi rim, but the weighting was heavier toward the Audi in the final result. If you take a look here, the front lights are also different. If you look at the lights on the Tesla and then go back here, you can see that this is more representative of a hybrid of the two, because the lights on the Audi are quite thin and the ones on the Tesla are quite wide. So the final product, or the final blend, is really a hybrid of both. You can definitely notice traits and characteristics from both cars in the final concept car, which is actually pretty cool. There is another method for combining two images in Midjourney, and it's a feature they've created for users called Blend. Now, let's head over to the
the commands section. And then over here
you can find blend. So go ahead and click that. And here it tells you what
blend is and how to use it. So it says the blend
command allows you to upload two to five images, so we can basically combine up to five images. Whether you're using the blend command or the image URL method we covered previously, you can use anywhere from two to five images with either approach, at least as of the time of this recording; they may increase that in the future. So it takes two to five images, looks at the concepts and aesthetics of each image, and merges them into a novel new image. It also tells you how to use it. Very simple: blend is the same as using multiple image prompts with imagine, but the interface is optimized for easy use on mobile devices. So whichever you prefer, you can choose either method to combine or blend images. They also list some options here and tell you how
you actually do that, and it's very easy: you just type in the blend command and start uploading your images.
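For comparison, here is a minimal sketch of the two ways of combining images covered so far (file names and links are placeholders):
/imagine prompt: <link-to-image-1> <link-to-image-2>
/blend with image1.png and image2.png attached in the image1 and image2 upload slots
Both approaches merge the images; the blend command just skips the step of copying the CDN links yourself.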
So let's head back to Discord and see what we can accomplish with this feature. For this lesson, I've actually created two images and I'd like
to blend them. So the first one is
just a character, a man in formal dress wearing a suit and tie. And for the second one, I have
created Iron Man. And I want to use the blend Functionality from Midjourney to combine
these two images. So let's go ahead and do that. Now in Midjourney,
all we have to do is type in slash blend and select it. As soon as you select it, it automatically gives you two placeholders, for image 1 and image 2. If you want to do more, then once you upload the first two it will allow you to upload more images,
as you can see here. So let's go ahead and
do that for Image one, I'm going to pass in
this man here. And for image two,
Man and I'm interested to see what the results would look like when Combining
these two images. And let's go ahead
and run this Prompt. As soon as I click here though, I just wanted to show that if you wanted to
add more options, as soon as you click
in the Prompt, it opens this dialogue
box and then it shows you if you wanted to upload
images 3, 4, and 5, you can. All you have to
do is click here. It'll give you
another place holder. And then you can go ahead and upload more
images if you'd like. So let's go ahead
and run this prompt. Alright, the results are now fully rendered out, and as you can see, they look quite interesting. You can see that parts of the Iron Man armor are now incorporated into this man's suit. You can also see that his face has become more cartoonish or comic-like, and his facial expression has changed and is more serious, which matches the seriousness of the facial expression of the Iron Man suit here. So this is quite interesting in terms of results, and that should basically cover the blend functionality and how you have two different ways of combining images. One note I would like to
make here when it comes to blend, or the image-combining functionality in Midjourney, is that you don't necessarily have to blend two objects together into one final hybrid of the two; you can use the combine functionality in Midjourney to simply place an object in another picture.
to do in this lesson. Now. I've created a
picture of a desert here with some hills and lots
of sand and a lot of sun. And what we're
going to try to aim to do as part of this exercise, is we're going to try and create this Tesla that we created
earlier and combine it with the desert picture
to see if we can place the Tesla car in the desert
and see how that turns out. Okay, let's go ahead and grab the address link for
this picture here. So just go ahead and copy
image link of the desert. And then we'll close this. We'll say imagine, paste
that in and then space. And now let's do the same
thing for the Tesla car here. So let's grab this one and then paste that
in after this space. We won't add any prompt text this time; let's leave it simple, just two URLs separated by a space. Then let's see how Midjourney is going to interpret that and what the final output is going to look like.
Okay, let's look at the final render results here. And as you can see, it actually looks pretty good. So it was able to accomplish exactly what we
wanted with minimal effort. So you can see how powerful Midjourney can be here. All we did is say imagine and pass in two URLs: one was the environment and location, the other was an object, a car. With a very simple prompt and no modifications or configurations, it was able to interpret exactly what we wanted. So here it places the Tesla in the desert, and in this one it's driving. From here you can generate more variations, use the seed functionality to make slight modifications while keeping the base the same, or use the blend functionality to bring even more objects into this picture, whichever you like based on your preference. But the point of the lesson is that you can actually place one object into another environment or location using the combination of images
28. 28 Midjourney Example 5 Image Weight: In this lesson,
we're going to learn about a feature
called Image Weight. So let's head over to the
Midjourney Documentation here. And under user guides we want to expand this section
called advanced prompts. And over here click
on Image prompts. And if you scroll down here, here there's a section that covers Image Weight parameter. And basically what
this says is to use the image weight parameter, dash dash IW, to adjust the importance of the image versus the text portion
the available versions are. I'm currently on version
5.1, so this is, I can set this, the default is actually one, but I can set it
anywhere from 0.5 to 2. And this is sort of like
the range from min to max. And this is where
we want to signify the importance of the image with the rest of
the tax prompts. So let's go back to
Midjourney and Discord and see this in action
through an example. We're going to use a
similar image here that we generated previously
in an earlier lesson. So over here I have a picture
of a desert with hills. And what I would like to
do is place a car in here and then play
Weight parameters. So let's go ahead and do that. So I'm going to use the
picture of this desert here. And let's type in
imagine. In our Prompt. I previously showed you how
you can find the image link. But there's another quick way of actually doing that and that's simply by dragging and dropping. So all I have to do is drag and drop that
into the Prompt. And as you can see, it has that. It's able to populate the
URL Image URL for me. So now let's go ahead and proceed with the
rest of the Prompt. Now I just want to say
Tesla Model 3 sport. Then I'm going to tack on the IW, or image
Weight parameter. And then let's start saying 0.5. So we're going to assign a low value for
the Image Weight. So this is going to
actually put more. It's going to signify
the car tesla, and then it's going to reduce the significance of the desert. And what we're gonna do is we're going to try
with this first, and then we're going to
try with a higher number and see if we can
notice the difference. So let's go ahead and run this. Alright, looking at
the results here, I'd say this looks pretty good. So not so much. There's one there's
not much represent, representation of a desert here, but number 12.3 actually
look pretty good. So you can see that desert in the background and you can see the full car in the output
or the final image. So now let's go ahead and
try with a higher number. So signifying the desert over
the actual car Tesla here. So let's go ahead
and do imagine. Now let's go find our picture. Okay, so same picture, just going to populate the URL by drag-and-drop
functionality. And then we'll say
the same thing here: space, Tesla Model 3 sport. And then for the image weight we're going to say 1.5. Last time we tried 0.5; this time we're doing 1.5. So we're saying that more weight is going to be on the picture, the desert, than on the car, the Tesla. So let's see what
that comes up with. Looking at the result here, you can see that their
friends for sure. So you can see that
actually number four doesn't even
have a car in it. I mean, that's how strong
the emphasis was on desert, that there's not
even a car here. Number three, I think
Midjourney screwed up a little bit in terms of
rendering the final output. So the car isn't that great? But the point here is you can see the emphasis
on the desert. So a lot of this
picture is just sort of dedicated to the desert
part than the car part. Same thing with number one. You can see that The car is further and a lot smaller
in the final picture. And you can see there is a heavy emphasis on
the desert sand, the hills, and the background,
mountains and everything. So this is how you can
actually use the Image way, property and
parameter to specify more significance or
less significance in your mixture of your
image and text prompts
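To recap the syntax from this lesson in one place (the link is a placeholder for the desert image URL), the two prompts looked roughly like this:
/imagine prompt: <link-to-desert-image> Tesla Model 3 sport --iw 0.5
/imagine prompt: <link-to-desert-image> Tesla Model 3 sport --iw 1.5
A lower --iw value leans the result toward the text, and a higher value leans it toward the image.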
29. 29 Midjourney Example 6 Prompt Weight: In this lesson, we're
going to learn about a feature called Prompt Weight. Just like image weight lets you change the significance of your image compared to the rest of the text prompt, with prompt weight you can change the significance of one portion of your text prompt versus another portion of it. So if we head over to the Midjourney Documentation
multi prompts here. And if we scroll, scroll down over here, it tells you what the multipronged basics are and how you can
actually use them. But if you go down here, there's a section called Prompt weights and this is what
we're interested in. And simply the
description is when a double colon is used to separate a Prompt
into different parts, you can add a number immediately
after the double colon to assign the
relative importance to that part of that Prompt. So let's go ahead and
put this into practice. For this exercise, I've
gone ahead and generated this image that we saw
earlier in our lessons. And what I would like to do
is I want to use this as a base and play
around with waiting. So what I want to do at first, I want to start by placing a heavy emphasis in our
Prompt on the car tesla, and then less emphasis
on the desert, which is the environment. And then we'll do that again, but reverse the numbers. So we'll place a lot of
emphasis on the desert and less emphasis on the car Tesla, by using the Prompt Weight. So first thing first, let's go ahead and get
the seat number for this. So here's our seat number. Let's go ahead and copy that. And now we can go into
our prompts here. Imagine. And this is where
we want to say Tesla. And immediately after tesla, this is where we actually
want to assign the way. And this is how we want. We can accomplish this by
using the double colon here. So I'm gonna give it a higher weight. So
I'm going to say 1.5. And then I'm going to say
driving in the desert. And for the desert
immediately I'm gonna do a double colon and
I'm gonna give it a very small weight, so 0.5. And then at the end, I'm gonna
give it the seat number. Okay, Let's go ahead and run this to see what our
results look like. Alright, we have our results. So if you take a look here, some of these actually
look pretty good. So this one not so much it as a person and some
weird stuff going on. So this one's out. This one I don't really
like the coloring here doesn't really
signify the desert. These two TO number
two and number four on the quadrant
look pretty good. So you can see that there's a heavy emphasis on the
car and you can see a big chunk or the
picture is taken up by the car which is dashed line
here is actually driving. So we can see sort
of like the sand and everything that's on the dust that's left behind
by the car driving. So this is actually pretty good. Now, let's go back
to Midjourney, and let's do this same thing, but now let's
reverse the numbers. So let's just say imagine. And then we'll say Tesla. Now, immediately here we'll say, we'll give it a double Prompt, double colon for the Prompt, and we'll say 0.5
instead of 1.5, driving in the desert. And then to the Desert, we're going to actually bump
it up to 1.5 instead of 0.5. And let's see how that's
going to turn out. And then I'll give
it the same Seed. Now let's render this and see
what the results look like. Okay, let's inspect
the results here for this last latest Prompt. And as you can see, this is kinda what
we were expecting because you can see
that Midjourney hasn't really put
a lot of weight or significance on the car
portion or Tesla portion, as you can see it try to fulfill the other side which was
driving on the road. And it's got that
pretty perfect. So but in picture number one, number two, and number four, the car is any men, Tesla, the focus is really
the desert and driving a car driving on the desert, not necessarily a Tesla. Number three though
is this is okay. This is a Tesla driving in
the desert on the road. Why do you can see
that still there's, the car itself is very smaller in proportion to
the entire image. But because we
define Tesla as a, in terms of less significance
in our texts Prompt, the AI model focus more
around driving on the desert, which you can see is illustrated
perfectly on Image 12.4. So this is obviously, this is just one example, but this is how you can
apply the Prompt Weight in your prompts to put
specific emphasis, putting heavier or lighter emphasis on specific parts of your text prompt.
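As a rough sketch of the syntax used in this lesson (the seed value is a placeholder for the one you copied), the two weighted prompts looked roughly like this:
/imagine prompt: Tesla::1.5 driving in the desert::0.5 --seed 1234567890
/imagine prompt: Tesla::0.5 driving in the desert::1.5 --seed 1234567890
The number immediately after each double colon sets the relative importance of that part of the text prompt.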
30. 30 Midjourney Prompt Consideration: In this lesson, I want to
spend a little bit time covering some of the
important aspects of what you should be thinking about and
what you could potentially include in your prompts to
achieve the best results. So if we head over to the Midjourney
Documentation here under the Getting
Started section, there's a section
called next step. So go ahead and expand that, and then click on prompts. Now, over here there's some basic descriptions
of what a prompt is and what did somebody different components of a
Prompt and the structure. But we've seen a lot
of this already. So I would like to bring
your attention to this part here at the very
end of this page. And it's, it's, there's a
paragraph here that says, think about what details matter. So there's a lot of
things that you could potentially include
in your Prompt. Obviously, the less the
simpler your prompt is, and the less you include your
relying more of the model, figuring things out for
you, and more creativity. But if you want to achieve the best results
given your use case, you might want to consider including somebody's
aspects in your prompts. So for example, you need
to specify the subject. So it could be a person, animal, character, location,
object, the medium. So it could be a photo, painting, illustration,
sculpture, doodle. It could be anything that would distinguish the type
of output you're going to create and also
the type of output. It's the illustration, right? And what it's going to look like the environment is
gonna be indoors, outdoors on the moon, on a different planet, right? The lighting, There's
different aspects to lighting. So soft, ambient or
gas, neon, right? The color, you can define
things that black and white. You can define them as bright, you can define them as vibrate. Then there's the mood. So you can select the
mood of the picture or the portrait you're
trying to create, right? And then composition, um, and then you have the different types of
photography compositions here. So we've got portrait headshot, which is gonna be very up-close, close up, bird's eye view. I've even some other things that are not mentioned
here though, and I've seen in a lot of
prompts are things like movies. You can put the name of the
movies in your prompts. If you want this style
to be very similar, you can name an actor, right? To make things more closer
to that characteristic, you can use an actual actor
or the person's name to create and modify certain things in that picture for that actor, for example, their clothes, their smile, and so on. And you can even
put the sort of, the, one thing I've seen
is the camera lens. So there's a lot of popularity
camera lenses out there. And if you're looking for a
specific shot using a CAM, specific camera lens, for
example, a cannon one. Then you can even
put in the lens and the modal number
in your Prompt and AI will try to go and figure, figure it out based on
all the data it has. So there's a lot to think about. So one thing is go
through this list and always refer back to this as you're constructing
your prompts. And now next, next thing
we're gonna do is just do a few more pictures considering all these things so that we
can tie it all back together. Now that we have an
idea of what things we can play around with in
terms of the details. Let's go ahead and just
do a few more prompts and experiment with somebody's
dimensions and attributes. So first thing I'm going to
do is let's do, imagine that. Let's say extremely
close up portrait of a woman with beautiful ice. So let's run this and see
what it comes out with. Okay, We have the result
and as you can see, they look pretty good. So here we got four very
close-up portrait of then I. And you can see they
all look very nice. And this is exactly the type
of thing we're looking for. Something very close up, right? In our prompts. So
extremely close up. And if you want it to now you can optimize one of these
and use it for your case. Or you can generate more
variations based on one or just completely regenerate
for new options. Now, for our next one, let's do something a
little bit different. So let's say, imagine portrait of a woman in New
York City, fashion show. Let's say it's indoors. Ambient lighting,
black and white color. And let's say calm mood. Okay, so let's go ahead and
run this and see what we get. Okay, let's take a look at the results here, and they look
extremely good. So you can see it actually was able to match
our Prompt pretty closely. So we have a woman
here in the portrait. There's blurring
the background in all four pictures,
which I really like. There's ambient lighting and then there's the person
is actually staring Staring into the camera, in this case looking at us. And it's also black and white, which is the color
we determined. So the results look pretty good and pretty close to
our Prompt here. For our next problem, Let's
try a different view here. So let's say imagine, I'll say bird's eye view of a horse running in a field. And then I'll say Disney. Pixar Animation
should be outdoors, and then vibrant colors. So let's go ahead and run this. Right if you have the
final render result here. So these look pretty good. I would say maybe this one, number two is the closest to bird's eye
view, definitely not. Number four. I would
say potentially number 12.3 are kinda closer
to the bird's eye view, but most of all number two. But as you can see, it has
done its best to try to make this into animation. Horse running in a field
that's closely matches the any movies or shows
that were created, or animation that were created by their Pixar
Animation by these needs. So again, results
look pretty good. Okay, Let's go ahead
and do one more here. So imagine, then let's say astronaut lady, and here we want neon colors. And I'm going to include a camera lens, so let's do a popular one, a Canon EF 50 millimeter lens. And then let's play around with some conditions here. I'm going to say raindrops on her space suit, which is really interesting. So let's run this prompt and see what it's going to give us. Alright, let's take a look here. Again, the results look pretty good. I would say this
one is really good. Number two, number three
is also very good and matches the prompt here. And these are just some other ways of playing around with some of the parameters and arguments that you could include in your prompts. Now, feel free to make a list for yourself of some of the things that you could include in your prompts to be more specific and
achieve the best results. If you ever need help. Or a point of reference, I would say your best bet is just referring to the
Documentation here. This is a really, really
good list here that you can always refer back to as a reference and use
in your prompts. And it should help you
achieve the best results
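Putting several of those details together, a fully specified prompt might look something like the sketch below; every detail here is just an illustration of the categories above, not a prompt from the video:
/imagine prompt: close-up portrait of an astronaut, photo, outdoors, neon lighting, vibrant colors, calm mood, Canon EF 50mm lens
Subject, medium, environment, lighting, color, mood, and composition or lens each get a short phrase, separated by commas.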
31. 31 Midjourney Settings: Alright, let's spend
some time together and go through the
Midjourney Settings because I think there's some important ones that
you need to be aware of. So the way to access Midjourney
Settings is very simple. All you have to do
is type in slash settings, and you can see it already autocompletes for you. So go ahead and press Enter. And now you can see
all the settings here. Now, there's a lot here and we won't go through all of them, but we'll go through
the most popular ones. So first of all,
here you can see all the Midjourney versions
available currently. Midjourney version
5.1 is available, and that's what I selected. If you want, you can select older versions if
that's what you desire. But for me personally, I always want to have mine set to the latest, because the latest has more training data and also new features which could be beneficial to use. We also have Remix mode over here, and Remix mode is quite interesting. Let's go ahead to the Midjourney
Documentation here. And let's go to User Guide. Let's open advanced prompts. And then here we have re-mix. So let's go through the
description together. It says: use Remix mode to change prompts, parameters, model versions, or aspect ratios between variations. Remix will take the general composition of your starting image and use it as part of the new job. So this is very important, and this is how you can actually use Remix. And you can see here it's showing you an example. It starts with the first starting point, which says Line Art stack of pumpkins, and this is what it generates. When you have Remix mode on, you get this button available. So you click that, and then over here for
the remixed Prompt, you can just say pile
of cartoon owls. And then this is
the result here. You can see that if you look at this pile of cartoon owls, it's very similar to this one here, which was our starting image, the stack of pumpkins. This is how you can actually alter your prompts
using re-mix to achieve the same structure with a different object
in this example. And then here's some
other examples that you can go through and play
around with in Remix mode. And again, it's very easy: all you have to do is type in slash settings and then turn on Remix. Okay, now let's discuss this setting here
called fast mode. So what fast mode is is
that with my subscription, I get this option. And what this means is that every time I
provide a prompt in Midjourney and I ask it to generate a picture for
me or an image for me. It's always going
to use GPU time, but it's going to actually
make it really fast. So it's going to do as
fast as possible to generate that image for me and use as much GPU
power to do that. Now, one thing you can do is you can simply turn this off. And if you turn this off, it's going to turn on something
called the relaxed mode. So let's head over to the
Midjourney Documentation. Over here there's a section called Subscription,
so click that. And then here you can, there's a section called
fast and relaxed mode. So click on that. And over here there's
a description. It says Midjourney uses powerful graphic
processing units or GPUs to interpret and
process each Prompt. When you purchase a
subscription to Midjourney, you are purchasing
time on these GPUs. So this is very important to understand: essentially, hardware time is what you're paying for in your plan. And it says different
subscription plans have different amounts
of monthly GPUs. Obviously, the more you pay, the more you're going to get. And over here, it says, how many GPU minutes do
my generation's cost? And it goes into some details. But I want to bring
your attention to this section called fast
versus relaxed mode. Here it says: subscribers to the Standard and Pro plans can create an unlimited number of images each month in relaxed mode. Relaxed mode will not cost any of your GPU time, but jobs will be placed in a queue based on how much you've used the system. So this is very
important because what relaxed mode
allows you to do is it allows you to experiment
with concepts without getting time deducted from your plan. So it won't charge you anything. It does take longer to give you the image, but you can, in theory, create an unlimited number of images for free. So you want to use the actual fast mode or fast setting for when you have your final result and you want to upscale it or make variations of it, when you already have something
you're happy with. But if you just
have some ideas and you don't really know, or if you're experimenting with different parameters or settings and you're not sure what you're looking for to begin with, and you just have thoughts you want to experiment with, relaxed mode is the better way to go, because you can provide as many prompts as you want and it won't charge you anything. And once you're happy with something specific, with your prompt, its structure, and its attributes, then you can switch to fast mode and use it to generate the final result, upscale it, and use it for your use case. Also, one other neat feature, or command, in Midjourney: whenever you want to figure out what
your usage is, depending on your plan. So whether you are
in a free trial or one of the other
three paid plans, it's really easy
to figure out how much of that you've
already used and how much you have remaining
all you have to do in Discord is type in slash info. If you just press Enter, you can see how much you've actually used and how much you have remaining. So here you can see that I have the Basic plan, it's on fast mode, it's public, and for fast time remaining I've got about 194 minutes out of the 200, so I've used roughly 3% of my plan for this month. So far I've created 92 images and consumed about 1.06 hours, and for this particular plan I have about 3.33 hours in total. So yeah, this is just a neat little feature or trick that shows
you what you have. And then also there's
some information here about jobs that are
currently queued or running. It just gives you a quick snapshot of what your usage is.
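As a quick recap of the commands covered in this lesson (a rough sketch; the exact fields shown can vary by plan):
/settings opens the toggles for model version, Remix mode, and fast versus relaxed mode.
/info shows your subscription, job mode, fast time remaining, lifetime usage, and any queued or running jobs.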
32. 32 Midjourney Recap Exercise: Alright, earlier in this lesson, we tried a prompt that said a cute hot dog wearing sunglasses on the beach somewhere in Hawaii. And as you can see, Midjourney wasn't able to interpret what I wanted to do. So over here we generated two different variations. This is the first one, as you can see, and this is the second one. And as you can see, in all eight pictures the dog is the main focus. That's not what I was looking for. I actually wanted a hot dog character, like a cute cartoon hot dog character or something like that, wearing sunglasses and sitting on the beach. But Midjourney just focused on the dog part, the animal part. And of course in this one, for example, it captured both the dog and the hot dog. But it's not what
I was looking for. So what does this mean? Does this mean we
should give up? No, it just means that we
should try different things. So we just need to think outside
of the box a little bit. And we should be able to see if we can use some
other techniques we learned in this
section to go back and tackle that problem
and see if we can get the results that we desire. Okay, let me explain my thought process in terms of how I think we could potentially tackle this problem. Again, it's just a matter of trial and error; we can really achieve anything, we just have to take the right approach. So what I'm thinking is that first we have a prompt where we generate a character in the form of a sausage, a cartoon or animated character that is a sausage and
wearing sunglasses, then we can generate
a separate Prompt. So in our second Prompt, we're going to generate just
the beach in terms of again, cartoon beach or animation beach or something
that's animated. And we can use the combined Functionality in the image prompts to
bring the two together. So we're going to take
the sausage character and we're going to place it onto the beach and then see what the final result
actually looks like. Okay, for this prompt, I'm going to generate the sausage character first. So this is what my prompt is going to be: a close-up portrait of a sausage wearing sunglasses. Now I'm going to throw in Pixar Animation Studios so that the AI model knows what I'm looking for. And then I'm also going to throw in Sausage Party, which is a movie that came out in 2016. It's an animated movie that includes some adult humor, and there are a lot of different types of sausages and different characters in the movie, which I think fits this case perfectly. So let's go ahead and run this prompt. I forgot to include the imagine, sorry, so let's do that again. Okay, so we have
the results here, and these actually
look pretty good. The first picture is out; this is again a picture of a dog wearing sunglasses, so it didn't really understand there either. But numbers 2, 3, and 4 look pretty good. These are sort of exactly what I'm looking for. I'm thinking number three is closest to what I would like to have, so that's the one I'm going to upscale. So let's go ahead and grab number three and upscale it. Now that we have the picture
of our sausage or hot dog, now let's go ahead and create
the picture of our beach. So this is what I'm
going to put in. So imagine and then I'll
put in this prompt. I'm simply saying beach, summer, sunny, and then I'm going to say Pixar Animation Studios again to give it that cartoonish or animation-like feel. So let's go ahead and run this. Okay, let's quickly inspect the results, and these look pretty good. I think number two is a little bit crowded for us to place the sausage in. So number one and
number four are ideal in terms of the
beach being close. This one, the number
three, it looks great. You see more of the water, but it's just a little
bit further away. Either one, it doesn't really
matter which one we choose, but I think the ideal
candidates will be either number
one or number four. So in this, in our case, let's go ahead and
choose number one here. So I'm going to
upscale number one. Okay, now we have both of
our images stand alone. So we have the picture of our hotdog and we have
the picture of our beach. And now simply we can use the
technique of Image Prompts. That basic technique
we saw on how to place an object in one
image into another to try and incorporate the
hot dog here into the beach. So let's go ahead and do that. Let's start with imagine. What we can do is simply drag and drop, so we get the address of the image link for the sausage. Then press Space, and now we want to drag and drop the address of the beach here. So we've got the first image, a space, and then the second image. So let's go ahead and run this prompt and see what
the results look like. Now, if you look at
the results here, these are pretty great. So this is exactly what I envisioned when we first
started this exercise. So as you can see, this matches my prompt a lot more closely now than the earlier one where I just got the dog wearing sunglasses. This is actually what I wanted. The other two variations that were generated were not what I was looking for, but we were able to use the techniques we learned to bring things together and achieve the results we wanted. So the whole point of this
exercise was to show you that you won't always get the result you expect given
your input or Prompt. But that doesn't
mean that you can't achieve the results
that you desire. You just have to learn the tool, the application, and its different techniques and features. And you need to be able to think outside of the box a little bit to bring some of these designs and creative ideas together.
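To make that workflow concrete, here is a condensed sketch of the three prompts used in this exercise (wording condensed from the lesson, and the links are placeholders for the two upscaled images):
/imagine prompt: close-up portrait of a sausage wearing sunglasses, Pixar Animation Studios, Sausage Party
/imagine prompt: beach, summer, sunny, Pixar Animation Studios
/imagine prompt: <link-to-sausage-image> <link-to-beach-image>
Generate and upscale the first two, grab their image links, and feed both links into the third prompt to place the character on the beach.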
33. 33 Bing Chat Overview: In this section,
we're going to learn about Microsoft Bing Chat. Now, Microsoft Bing Chat is
very similar to ChatGPT, except it's built by Microsoft. Now, in order to get set up, it's really quite easy. You really need two things. One is the Microsoft
Edge browser, and you also need
an email address to sign up for the Bing Chat. In order to get started with interacting with Microsoft
Bing Chat, first, just open the Edge browser,
navigate to www.bing.com. And there are several ways of actually accessing
Bing Chat here. And there are different
entry points. So one is up top here, it says Chat, so you can
click that over here. It's as try it. And this is where
you can actually, it will take you to the Bing
Chat page if you click this. And also over here, there is a button, blue button with a B on it. And if you hover over
it or if you click it, you can see that it will Open
a side panel where you can start using the Bing
Chat here it says ask me anything and you can
put in anything you'd like. Again, it's very
similar to ChatGPT. You can put in prompts in terms of Instructions are Questions, and it will give you answers. So that's another entry point. And there are three
different ways of getting there with this page. Now, you need an account. So right now I'm already I already have an account
and I'm already logged in. If you're not logged in, it will say sign in here. So go ahead and click Sign-in. If you haven't Outlook
account, you're already good, you can just use that to sign in and start using Bing Chat. If not, then you'll just have to create one and then
use that to login. And after that, we're
ready to Use Bing Chat. I have my account,
I'm logged in. So let's go ahead and click
this Chat button here. And this should take us to the Bing Chat interface and you can see
everything is new here. It says Welcome to the new Bing. Let's quickly walk through the user interface. It's actually quite straightforward, just like ChatGPT, where you have an input where you can put in your prompt or questions, and then
it will give you answers. So here are some sort of Example prompts that you
could potentially provide, such as: what are some of the meals I can make for my picky toddler, or what are the pros and cons of the top three selling vacuums, and so on. So you could ask it questions, or you can get more creative; inspiration is like
this one here. And Bing Chat currently
runs on GPT-4. And on top of that, one of the advantages of Bing Chat over ChatGPT at this point in time is that it can also search the web for results, for the most accurate and up-to-date information. ChatGPT currently cannot do that, at least not the free version. Bing Chat is not only running on the latest version of GPT, which is GPT-4 at this time, but it can also search the web and give you more accurate results. So that's one of the good
things about using Bing Chat. And over here you really have three different
conversational styles. And you need to
adjust this based on what you're trying to
accomplish using Bing Chat. So we have creative, so this is when you
want to test out concepts or you want
to get something. If you want to get
inspiration for creativity, or if you're trying to explore
design concepts or ideas, for example, balance is sort of in-between
creative and precise. And if you want something
precise than you would want to choose
the precise option. For example, if you're
looking for a food recipe, you probably don't
want to be creative. You want precise results. You want precise portions of a specific ingredient
to put together a meal. So these are sort of
what these things mean. And over here, this is basically the most important
part, which is the Prompt. And you can see you can type up to 2000 characters in here. And you can also
click the microphone if you wanted to sort of speak and it would
turn that into tax. But this is where you
can actually put in your questions and Prompts. Also, whenever you are done with a specific topic and you go your results from Bing Chat and you want to
start something new, all you have to do is hover over this button here it says
new topic and you can start a brand new Chat and on a different topic or
a different thought that you're trying to explore. Now that we're familiar with the user interface of Bing Chat, let's walk through
several examples. Now because Bing
Chat and ChatGPT are very similar, as in they're both prompt based, we're not going to go through a lot of details, because we did go through very comprehensive coverage with ChatGPT, and you can pretty much accomplish the same things with Bing Chat. We won't go through everything again with Bing Chat. One thing I do want us to do is run through
some examples just so that you can see how you can use Bing Chat in your daily lives and how you can take
advantage of this?
34. 34 Bing Chat Examples and Exercices: For our first example, let's start with something
simple in finance. So let's go ahead and ask it. By the way, I'm going to leave this on Balanced for the duration of this course. First, I want it to look up
some information for me, so I wanted to actually do some research and
then I'm going to ask it my question or
give it the instruction. So I'm going to say look
up the last earnings call. So these are the quarterly
calls for companies that are publicly traded on the stock
market and on the exchange. So look up the last earnings
call for Microsoft. And then I'm going to say, Tell me about the revenue
growth in simple terms. Okay? So it's realized what
we're actually trying to do and now it's going
to give us the answer. Okay, so now we have our
results from Bing Chat. And immediately you can notice some differences with
this compared to ChatGPT. So we do have our results here. But one thing is, as I
hover, first of all, actually on the bottom
here you can see that we, Bing Chat was able to search the web for the
latest information. And it included all the sources that it got its
information from. So there are five here. And the nice thing is, if I click on any of these, it's going to take me to that article, the news article or website that it got
its information from. Also, if you hover
over some of these, you can see that
there's a hyperlink. So in here it says the middle of the range, at 55.35 billion, implies 6.7% growth. Now, if I click this, it's going to open a new tab and it's
going to take me to that article from CNBC that
included this information. So this is very, very helpful. And if you wanted to dig in
more and go more in depth, you could actually go
through the original article and even research further if
that's what you want to do. So this is one of the
differences from ChatGPT: because Bing Chat can surf the Internet, the nice thing is
it automatically includes the references for you. Now, another difference
that you might want to consider is that Bing Chat also
presents some follow up ideas or suggestions
and recommendations. So here it says what, where the earnings per share. So in our original Prompt, we asked about revenue growth. Now it's asking us, do you want to know
about the earnings or the profits in this case for that specific
company, right? Then it says, how does this compare to other tech companies, or what are some of Microsoft
biggest revenue streams? So this is a good
one, for example, again, depending on what
you're looking for, if I click this, it's just going to automatically populate that Prompt
and send it to ChatGPT. Oops, excuse me, I
meant to say Bing Chat. And now Bing Chat
is going to again research more and then
give me the results here. So you can see here Microsoft, largest product-based
revenue streams or Office products and services, azure Cloud Services,
Windows Server products. And it gives me it gives
you the breakdown as well from their revenue
perspective in dollar amount and in
percentage amount. So this is very good. And again, it provides the sources where it
got information from. Okay, we have our results here, and these are basically
Bing Chat got the results from Time Magazine and it was also able to
recognize a year here, even though I didn't
say anything, I just referenced this
year and it was able to change my Prompt to talk
influential people 2023. So that was its search
criteria when it's looking, browsing the web for
results and information. And it gave me this list
here, which is great. And I have 100 people. This is basically
a list of hundred, top hundred influential
people love 2023. And over here again, nice thing is it has
the source, right? And then some
follow-up suggestions. So for example,
let's say you are doing an essay or topic, or you're writing a book on a specific thing or
a person, right? You can actually
use this method. And if you're doing an article or news article
on a specific person, for example, and that
person is in deltas list. This is sort of the follow-up Prompt can help with Bing Chat. So you might not necessarily get the best result if you have
a very long Prompt asking, get tons of questions about specific things in a
specific sequence. But if you have shorter prompts, but you follow up with other shorter Prompts able
achieve the best results. So over here, we asked for this. Now we want to sort of narrow
things down a little bit. And it already gives me
a suggestion that says, can you tell me more about Jennifer and this other person? You could either
just click on this or you can type in any of the people that you see
in the top 100 lists, right? So you can really type
the Prompt yourself, or you can simply
just click on this. And it will actually give
you more information on that specific person or people or whoever that you
are actually looking for. And right now, it was able to recognize the two
people that we asked. And now it's giving us more
information and we can even ask it to further,
expand even further. And over here, you have
some emotions, right? So this is to help the AI
model and the algorithm. So here if you say like, then it means that
you like the answer. If you didn't like
you click this. And then you can also copy this To Clipboard if
you want to paste that in a document format or
a text editor or whatever. And over here you can
export and share. For our next exercise, let's go ahead and
start a new topic here. And the point of
this exercise is we wanna do something
very similar where we can actually ask Bing Chat to give us the output
in a specific format. So in this exercise, we're going to try to get it
to give us the results and then put it in a table format with some specific criteria. So for the Prompt, I'm going to ask it. Let's say Look up the top three electric vehicles. And I'm going to say format, the output in table format. Or I can just reward
this a little bit. The output should
be in table format. The table should
include four columns. The column headers
should include. So I'm going to
say the car model, the price, the pros and cons. So let's see what Bing
Chat comes up with. Alright, as you can see, Bing Chat was able to
do this pretty well. So it created a table here. And of course I can copy this table or
downloaded or export it and use it in different places such as
Microsoft Excel for example, if or put it in a presentation. But you can see that
a creator the table. So it got the information, it recognized what
we're asking for. It outputted the information
inward into a table format. They table includes
four columns, which is exactly what we did. And then it labeled it
according to our instruction. And the three top cars, electric vehicles it
found was the Tesla Model S, Ford Mustang Mach-E, and Audi e-tron. And it has the
price for each. And it looks like here, the Ford Mustang is actually the cheapest option
out of all the three. And then it goes into
describing what the pros and cons of each car is. So this should hopefully
demonstrate how you can specifically format your
outputs with Bing Chat. For our next exercise, we're going to get Bing Chat to help us with
Content Creation. So in this example, let's do a blog post
on a mobile phone. So let's go ahead and tell Bing Chat to help us with
creating a blog post here. So for Google Pixel seven, so let's say
generate an idea for a blog post on Google. Pixel seven, detailing the new features
and the tech specifications. Textbox. So let's go ahead and run this. Okay, so Bing Chat gave us
some ideas here, three ideas. So it says a
comprehensive review of the latest
features and textbox, so we could include
that in our blog posts. So we just have to do
a better research. And here's some
pointers here equal to include details
on phones Design, display, camera, battery
life and performance It says five exciting new
features of the Google Pixel. So there's some things here like the new camera,
the new processor, built-in VPN and
other Features again, you could include these
details in our blog posts. And then it says we could
also do a comparison between this new phone and other smartphones and
see how it stands out. So this is pretty good. Now, I would like to
do a follow up Prompt. So we got the
information that we need in order to help start
writing our blog post. But now let's say I want to
create some images for those. Now previously we've
covered Midjourney, and this is where now you
can leverage programs or applications such as ChatGPT and Bing Chat to help you
create those images. So here, now that
Bing Chat has context as to what I'm trying
to do in this case, create a blog post with on
the phone Google Pixel seven. Now it has that
context built-in, so I can continue with
my follow-up Prompt. Now what I can say is generate
three mid journey Image prompts that I could
include in this blog post. So let's see what
it comes up with. Okay, So these are
the three Prompts that it came up with
four Midjourney. So the first one is not bad. A picture of a person
using Google Pixel 17 as camera to take a photo of a beautiful sunset.
So this is not bad. Number two and number
three are very specific to the phones feature and I don't think Midjourney will be
able to interpret that. So what I'm going to do is I'm
going to generate another, I'm gonna give it another Prompt to Generative three
more pictures, but I'm going to keep it a
little bit more specific. So I'm going to say generate three Image prompts
for mid journey, highlighting Google Pixel seven. Design. So let's go ahead and run this. So it gave us three more Rock prompts
here for Midjourney. So the first one, a picture of Google's seven and
modern design. So this is a good
one at a picture of the Google Pixel seven
camera, lens and Flash. Okay, this is not bad. And then a picture
of Google Pixel seven is high refresh
rate display. Not too sure about this one. We could try it in
Midjourney and see what it comes up with. Now, what I'm going to do
is I'll pause the video. I'll go in Midjourney. I'll put in these Prompts,
generate the pictures, and then I'll share
them back with you. Alright, I've gone ahead
and put it all three prompts that we discussed
earlier into Midjourney. And I have to say the results
are quite impressive. So let's take a look
at the first one. The first one, the Prompt was
a picture of a person using the Google Pixel seven camera to take a photo of
a beautiful sunset. So as you can see here, this actually looks pretty nice. So if you want it
to in your blog, if you're specifically
talking about the camera and the quality
and the resolution, this is definitely one of the pictures that you
could include in there. Now, for our second Prompt, it was a picture of
Google Pixels seven, slick and modern design. So this is a good one. If we enlarge this, you can see that these
are all pretty good here, depending on which one you like. This one here, number four,
it looks pretty good. And one and number three
also looks pretty good. Again, based on your preference, you can use any of these in your article Post or blog post or whatever it
is you're doing. And again, if you're
not happy with this, we covered different
techniques in Midjourney section
where you can create new variations and play
around with other Settings. And for our last one, the last Prompt was a picture
of the Google Pixels, camera, lens and flash. So this one's actually
really, really nice. So over here, I would say
number two and number four, specialty number
four, they look very beautiful in terms of the
image and the concept. And it is really truly highlighting the flash
and the lens of the camera. So as you can see,
Bing Chat is very powerful and it
can help you with creating Image
Prompts to be used in applications such as
dolly or Midjourney. Bing Chat is a very powerful
tool and just like ChatGPT, you can use it to
accomplish many things. We could use it for
Content Creation, we can use it for inspiration. You can use it for
creativity and design ideas. You can use it at work to
increase your productivity. You can use it to get
ideas for graphic design, our even prompts for generating images using other
AI applications. It's basically endless
possibilities and you can use your imagination for really anything you'd
like to accomplish. It can even, you
can even use it to solve math problems
or math formulas. Now, just like ChatGPT, one thing I would
like to ask is please fact check the results
you get from Bing Chat because sometimes
they're results it gives you in the output format
is not quite accurate. So you don't want to necessarily just copy paste
everything you get from Bing Chat and use that directly on your project or whatever
it is you're doing. Please do your own
due diligence, make sure the
information is accurate. So you may have to
do a little bit of more research on your part. And you just have to use
this as a starting point. So this is just like
very similar to ChatGPT. You want to be very careful
because sometimes, again, the output is not exactly correct or precise or even true. So you want to avoid things
that are being made up. You don't want to use
facts that are made up. And this is why you want to
make sure you double-check all the facts and you do
your own due diligence
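To recap the prompt patterns from this lesson in one condensed sketch (the wording here is illustrative rather than the exact prompts from the video):
Look up the last earnings call for Microsoft. Tell me about the revenue growth in simple terms.
Look up the top three electric vehicles. The output should be in table format with four columns: car model, price, pros, cons.
Generate three Midjourney image prompts highlighting the Google Pixel 7 design.
The common structure is a short research or generation instruction plus an explicit format constraint, refined over a few short follow-up prompts rather than one long one.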
35. 35 Microsoft Designer Introduction: In this section of the course, we're going to learn about a tool called
Microsoft Designer. Microsoft Designer is another
AI tool to help us with our creativity and design when it comes to graphic design. So first things first, you just have to
launch the browser. And I have the Edge
browser open here. And we just want to navigate
to designer.microsoft.com. And again, you just need an account; that's really all you need. It's currently free to use, and if you don't have one, you can create one. I already have an account
and I'm logged in. As you can see here, this is the user interface. But before we get into it, I just wanted to quickly explain the tool. Microsoft Designer is a graphic design AI tool, just like DALL-E 2 and Midjourney, that helps you with your ideas and creativity and helps you produce creative designs. It's also a fantastic tool because it's a complete suite. It's not just a prompt-based graphic design tool: you can use it first to generate your images and graphics, and then you can use the built-in editor to further edit and modify specific components in your design, taking you all the way from the starting point to the finished output. In that sense it becomes very similar to a tool like Canva, if you have used those
applications in the past. When you first navigate to
the Microsoft Designer page, this is the user interface
that you're presented with. So over here we have
the prompt input. This is where you actually type in the prompt for the graphics you're trying to generate. It says describe the design
you'd like to create. And over here there's
a placeholder texts that's giving you an example. So an Instagram post about my cosmetic product
launch on July 1st. On the bottom here you've
got a couple of buttons. One says Add Image. So once you create your graphics
with Microsoft Designer, you can use the Add
Image functionality to incorporate your own pictures as part of your design, so it doesn't have to be all AI generated. You could bring in other images that you created yourself or from other sources and include those as part of the final design. Then there's the Generate button, which will generate the image
based on the Prompt. And on the right-hand side, again, we got Examples. And as always, you can
use these as inspiration. So just some other ones
that have been used by previous users and
posts that he has Example. So over here you can
see it says a picnic club monthly social
meetup event, April 2023, Central Park. So it's sort of like
they've designed a poster for this specific
event for advertising. Over here we've got some food. If we scroll down here, because one for ramen, so it says delicious
ramen recipe, and check out my cooking blog. The prompt says a Facebook post showcasing a ramen recipe, so it's specialized for Facebook posts, just like this one here is an
Instagram post. So it's important to
include what type of social media posts you're trying to make and
which platform. And then it will try to, the AI model will try
to adapt and give you that result that will
best suit your needs. If you scroll down here, there's one for graduation, and obviously you
can customize it to the name you like for
the purpose you want. Over here we have a few more examples, and here we have some animals, and so on. These are very good examples. And just like any other application we've seen so far, when you hover over an example, it will tell you exactly what text prompt the user used to
generate this output. Or you can simply just
click on it and it will populate this area and it
will generate the image. Obviously, the results may be different than what's
displayed here.
36. 36 Microsoft Designer Examples and Exercises: To get started, let's start with a very simple prompt. So over here, I've put something
together, very simple. So let's go ahead
and try this out. It says a picture of a person holding a
healthy snack with a caption that
includes the recipe and nutritional
information. So put that in and then let's
go ahead and generate this. Okay, so here's
the result of what Microsoft Designer
generated for us, and they all look pretty good. The first one is, it says
a healthy snack recipe, and then there's a picture
of a granola bar here. This one, someone
eating a healthy salad, with text on the bottom that says healthy snack recipe. Here we have a woman again eating something healthy, with some healthy snack recipes, and so on. So the results look pretty good. Now, if you don't like any of these, what you can do is simply press Generate again, and the model will
try to generate some more new
variations for you. So let's just say
for the sake of this example and the exercise, we like this one here. So go ahead and click on that. And then once you do that, you get two options
on the bottom, as you can see, you
can just download this and it will give
you the picture. Or you can click on
Customize design. And when you click
customize, customize design, this is where it
will take you to the edit mode and you
can use the editor. So let's go ahead and
click on Customize design. Okay, now we have
been directed to the editor mode in
Microsoft Designer. And as you can see here, you start to have, you got your main
image in the center and you have access
to some tools here. On the left you got this
menu bar templates, My Media visuals,
Text, and brand kit. On the top here you've got
the preview of your Prompt. Here you got your Zoom, undo, redo some Basic Functionality. If you just wanted to totally scrap this and
start from scratch, it would just click
on new design. Or if you're happy with this, exactly as it is, you can just click Download and then follow the instructions to get the download file and the format that
you're looking for. Now on the right-hand side, one thing I'd like to
bring your attention to is the, some ideas here. So for example, let's say you are not happy
with any of this. These are some other things, some of the other templates that it can present
to you that you can potentially choose from to replace this one if
you're not happy with the structure of how things are placed
on the graphic. On top of that, you can make edits of your own to this one. For example, if you don't
like this text here because parts of
the parts of this is actually going
on the granola bar. And the word here can
really be seen that well, you can modify it however you like. So if you click on it, as soon as you click, you'd
see the edit options here. So you can change the font. You can change the size, you can make it bold,
italic or underlying. You can do other styles
such as strikethrough. You can align it left, center, or right, and what have you. And then you can change the
position and the opacity. And on top of that, you can choose to
rotate it or resize it. So in this case, let's say
I wanted to resize that. I can even grab it and move it a little bit
to the bottom here. And you can see the
automatic aligned tool letting me know when it's perfectly aligned
with the top one. So over here, I can see that
it's letting me reread, basically reformat this so that the whole
thing can be seen. And if I wanted
everything in one line, what I could simply do is again, click on this and change the size to something
smaller, right? And then everything will
fit into one line here. So just a very simple example while you could accomplish here. Now when you click on
the whole picture here, you can see you will get, you're presented with
this menu on the top. And this is where this
is the background. So this is where you can choose the color of your
background. You can replace it with a
different image if you'd like. You can crop it. You can
apply effects to it. So for example, if
you click on Effects, Let's go ahead and do that. Here's some filters, your
typical normal warm punch. So let's do warm here. And as soon as I apply that, you can see that over here, the picture changes
to apply that filter. Obviously, I can adjust
the intensity here. The less filter you apply, the closer it stays to the original picture, and the more you apply, the further it moves away from the original. So this is set to around the 50% mark. Obviously you can go and
brightness, contrast, saturation and all
of these things. And you can even apply blur. Here. We can do the blur. And you can also, if there's your
picture that you're trying to bring in has a background that you
don't want to have the background because
it's not the main portion. You can, there is removed background Functionality that
you can leverage over here. So now let's go back here. Let's undo all the
changes we just made. Now, let's experiment
with somebody's idea. So let's say this
was the original, suggests that image that you're starting off as, as a base. And you're not
really happy about how things are organized. Of course, you can choose
to move this around because you have
access to all of these components on the page. And you can move this
around however you like. So for example, right now
I can move this text, I can move this text, I can resize them, I
can change the content, and I can add more text. And over here, if I click on this background and click Detach background, I can completely remove this background and replace it with a different image; that's what this Replace option is for. So let's go ahead and undo that. And if you want to, you can explore some other ideas here with different structures. So for example, let's say we
want to try this one here. So go ahead and click that, and now you get something that's completely different. I
actually liked this one. This one is pretty good and
it's perfect for sort of like a poster or a cover or
as social media ad. If you're trying to advertise something or promote something, healthy snack recipes,
the colors are very nice. They compliment each other. They're not too
harsh on the eyes. So this one is
actually pretty good. I liked this one better
than our original one. Obviously there's
other styles here. So for example, this
is another one. This seems very close to the posts that you see
on Pinterest platform. If you've ever Use the
Platform Pinterest, this looks very similar to that. But overall, I think this
is a really good one here. There are a lot of different edits you can make
to this graphic, including adding your
own text if you want it. So for example, here
you have the design, the final design
looks pretty good. But in addition, you want to
include your own website or the website or the URL to your blog posts so that as
part of this ad or promotion, people can actually
see it whatever you post this on social
media, for example. So it's very easy to accomplish. You can see again this menu, all of these are very
self-explanatory, but over here you've
got the Text tool. So you can just click on it. And you can put
something in here and let the AI generate
the texts for you. Or you can simply
do this over here. So you're presented with some pre templated Text
styles that you can choose. So for example, this one's
nice, This one's cool. Grand opening, this one's good. And then you can just
explore somebody's right. But whatever it
is that you like, you can choose that and
then you can use it. We'll add it to your final image and then you can customize it. So let's say I like
this whole for example. So let's go ahead
and click that. And as soon as you click that, it will add this placeholder
to your final image. So let's go ahead and
actually customize that. So first thing, I want
to make this smaller. So go ahead and
select the text here. And from 60, we want
to reduce this down to something. 14 is good. So let's do 14 here and
then let's bring it down. And now I just want to replace
the text with my website. So let's say in this
case www.example.com, Let's say this is my website. And now I have it
in the picture. I can sort of make this size a little
bit smaller or bigger. And what I can do as I can even change the
position of this and potentially bring it bottom
or to wherever you feel like is the right place in
your graphic to include it. So this is how you can add text. This is how you can edit text. And again, adds a lot of different content or information to your ad or to your graphic. Once you're in edit mode, it doesn't mean
that you cannot use the AI anymore to generate it. In fact, you can
definitely enhance your graphic and Image with further modifications
that are AI generated. For example, let's, let's, for the purpose
of this exercise, Let's say that this granola bar, I am not happy with this. So instead I want to have
like a protein bar instead, and I want to add that to my
poster here for my purposes. So all you have to do on the left menu bar
here, click on visual. Now, when you click on visual, it opens a panel, it opens a side panel
and by default, it gives you some recommendations
that you can use. So because it knows you're
looking for healthy snacks. So it tries to find
those things for you. And by default, it
presents a list to you, so somebody's actually
look pretty good. So if I liked this bowl of
trail mix, for example, or fruit, I can just click on
it and add it to my poster. If you scroll down,
there is other things. For example, there's
filled shapes, so you could add those two to your image to cover
something or to create a background for a text or whatever it is
you'd like to do. Here you have photos that are
spraying photo and so on. You can just use any others. And of course, you can search for other things to see
what are the other pre, pre available images are
there for you to use. But one thing I'd
like you to focus on is this Generate button. This is for generating new images based on a prompt; the other images already exist and you can simply choose them for your purposes, but when you click Generate, you can actually put in a prompt to generate something that's more closely relevant to your poster. So let's go ahead and produce the image we were looking for. In this case, we want to generate a picture or graphic of a protein bar and place it on our poster. So let's go ahead and do that, and then click Generate here. Okay, it took some time, but Microsoft
Designer was able to generate these images and these actually look pretty good. So if we were pretty, we're now presented by three graphics for
the protein bar. And you can choose
whichever you like. Let's say, I like this
one better for example. So let's go ahead
and click that. Obviously this is too
big for our image, so I want to reduce
this by a bit. And one cool thing here, one cool feature is that
because if I place this here, it doesn't look very, it looks very abnormal because the background of the
protein bar doesn't really match the background
of the granola bar. So while we can do is there's
a functional feature or functionality available here
called Remove Background. So once you select the
image of the protein bar, you can go ahead and select
Remove background and it'll try to do its best to
remove the background. So now as you can see, I can just bring that down
and it's sort of nicely blends into the background
that I had previously, which is more of a
pinkish color here. And it fits the background a little bit better
compared to before, and it nicely blend
so it creates that seamless
illusion that they're all part of this same picture. So this is how you
can remove that. You can remove the background
via that Functionality. Now if you notice, when we added this protein bar to our design
on the right-hand side, all of the ideas are now
updated to include that. So if you take a look here, these will have all been updated to include that as
part of the new design, which is pretty cool because
it just happens automatic. There's really nothing
for you to do. It's effortless. So you can explore
these ideas further. And for example,
let's say this one, you can, this one is
actually not bad. It rearranges things on the
final graphic or the poster. Now, over here you can see that you see some residue here, over here and here,
and maybe here. This just shows that the Remove background feature did its best, but it wasn't able to do it perfectly and left some residue on the picture. That's not a problem, because you can download the picture to your computer, modify it yourself in an image editor, and then upload it back in. So it's not a big problem at all. I think it did a pretty decent job here, maybe 98 or 99 percent perfect in terms of
removing the background. Anyways, going back
to the ideas here, you can explore these and
see which ones you like. This one's pretty good. And then there's all
these ones that you can potentially explore and just choose the one that you think
best illustrates the point. Now, the Generate tab was where you could have the AI generate an image for you based on a prompt, but there are other tabs available here, for example Photos. These are existing photos on the platform that you are free to use, and you can search for whatever you'd like. So for example, protein bar; let's see what it comes up with. These are the ones available for you to use: here's one of a person eating a protein bar, here's another one, and here's a hand holding one, which is exactly what we have here in our poster with the granola bar. So there are lots of
available options here. Then there's graphics. So these are more sort
of 2D or sketches that we can choose from. Again, you can search
for what you're looking for and
there's some videos available for you
to use and then you just have to search
for a specific Thank. Now, this is everything you have access to in the platform. But also what you can
do is you can upload images from your
computer or your device. So you're not
limited to in terms of what's available to
you in just the platform, you can literally
import any picture. And the way to do
that is very easy. All you have to do
is click My Media. And here you can choose
the device so you could do it from your laptop
or your computer, your phone, google Drive, Dropbox, Google Photos,
whatever you like. And in this case, let's just use an example here. So I'm just going to
select this picture we created earlier in
the course in DALL-E. It adds it here to the collection of what
we previously had. And once the upload is done, you can simply just
click it and it will add it to our image. Obviously, in this
case, this is, it doesn't make any
logical sense for us to add this picture
to this poster. I just wanted to demonstrate the functionality that you can upload any picture that
you'd like from your device. Now, let's say you're pretty happy with the current picture, and this is your final design. And now you want to
export it out of the Microsoft Designer
platform and use it in other places such
as social media. So very easy to do again, all you have to do is click
this Download button. And once you click this, you have to select the type. So you can do PNG, which is an image format. You can do JPEG, which
is the Image format. Or you could do PDF. You can select any of this. You can. There's some other options here. So for example, and
make the background transparent if you're
taking this image and putting it or overlaying it on top of another image
somewhere else, that would be a useful option. You can select,
remove watermark, so there's really
nothing on there, right? So I would keep that option. And then you can
simply click Download, or you can choose Copy as image, which puts it on the clipboard so you can paste it somewhere else. But if you want to have the actual file, you can just click Download, and that will download it to whatever your device is. And over here it finished downloading, and we're good. You can also send it to your phone if you'd like: if you click Download, there's another option here called Send to phone. So this is how you can simply export your image out of the platform and then use it in other places.
37. 37 Adobe Firefly Introduction: In this section we're
going to learn about an experimental tool
called Adobe Firefly, which is in beta at the moment. The way you can access it is to simply open your browser and navigate to firefly.adobe.com. Now because this is
new and experimental, you currently have to add yourself to the
waitlist and wait for them to add you to the platform so that
you can start using. I had to wait for a few
weeks and now I have access. And I have to say
I'm pretty impressed with what this tool
can accomplish. And in addition, this is a standalone tool like
Midjourney and DALL-E, but Adobe is working on incorporating it into other applications. It says here that Firefly is coming to your favorite apps, and it's already integrated into Photoshop. So if you have Photoshop, you will see a version of this and you can do some pretty cool stuff with it, like Generative Fill, where you erase a part of an image and then run a prompt to replace it with something different. Once you've created your
account and you gain access to the platform, you simply navigate to firefly.adobe.com, sign in with your account, and you're presented with this interface. And again, we're just going
to do a quick tour here. And there are some guides to
help you get started here. This one here, for
example, text to image. This is very similar to what we've learned so
far with DALLE-2, Midjourney and Microsoft
Designer, simply, you provide a tax
Prompt and try to do its best to give
you an image output. There's another
feature here called Generative filled. So
this is pretty cool. This is says that you can
use a brush to remove objects or paint in the new
ones from text descriptions. So if you recall back
to earlier lessons, this is exactly what we could accomplish in the
DALLE-2 edit mode. Over here, you can apply styles or textures to
Text with texts prompts. So you can do some really
cool things there. Generative recolor. So you can actually
change color variations. Here. You can turn it 3D
objects or models into images. Again, very cool. Here you can extend
Images and change the aspect ratio of your images. And here is, again, just some sample prompts
that you can use. If you hover over this, it will tell you
exactly the picture. It will tell you exactly
what the prompt is. And you can just simply click this button and
try Prompt and it will input it and run it
and you'll see the results. Here's another one,
Here's another one. So you can go through
all of these and explore different prompts
here and pictures. Again, this is really helpful. And you should definitely
go through these and use it as inspiration. And also, you'll learn about
the things you can actually include in your Prompt and the structure of your prompts
to get the best results. And if we scroll a little
bit further down here, these are some other things that the Adobe team is
currently working on in terms of introducing
new features into the Adobe
Firefly application. Very, very cool stuff here. So if you scroll down
here, there's this one. Personalized results
are generated images based on your
own objects or styles. Text two vectors. So this is pretty cool. Tx2 pattern. Here we got Text to brush, sketch to me. So
this is pretty cool. Use case something on a piece of paper and it can turn
it into an image. And then we've got
text to template. So there are lots of new features the team is currently working on to implement, and I can't wait to see some of these in action.
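One more aside before we move on, and not something the course itself requires: the same erase-and-fill idea behind Generative Fill is also exposed programmatically by the OpenAI Images edit endpoint for DALL-E 2, which takes the original image plus a mask whose transparent area marks the region to repaint. Here is a minimal sketch, assuming the openai Python package (v1 or later), an OPENAI_API_KEY environment variable, and two placeholder local files.

# Minimal sketch of erase-and-fill via the OpenAI Images edit endpoint (DALL-E 2).
# planet.png and planet_mask.png are placeholder file names; the mask's
# transparent pixels mark where the new content should be painted in.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="dall-e-2",
    image=open("planet.png", "rb"),
    mask=open("planet_mask.png", "rb"),
    prompt="a space station in orbit above the planet",
    n=1,
    size="1024x1024",
)

print(result.data[0].url)  # URL of the edited image

This is only to show that the concept is not tied to one tool; inside Firefly or Photoshop you simply brush over the area instead of supplying a mask file.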
38. 38 Adobe Firefly Examples: Okay, so let's go through some hands-on exercises here and provide some prompts to see what this tool can generate for us. So let's start simple. We'll start with
the text to image. So go ahead and
click Generate over here and there's a description
and you just have to, actually, I'm just going
to skip all of this. And yeah, so here we are taking to this page and you
have your big input here, which is where you would
put in your Prompt. But the one thing I wanted
to demonstrate, again, there's tons of endless
examples here in terms of what you could
potentially generate. And again, if there's
something that you like that matches your style or
what you're looking for. All you have to do
is hover over it and you'll be able
to see that Prompt. So this all here
looks pretty cool. So yeah, I urge you to go through this and
explore some of these. And they should serve as a good place to start and
get inspiration from. For our next three Prompts, we're going to try
something different just so that we keep things
interesting and move, we move away from some of
the previous examples I'm prompts that we covered
earlier in the, in the course. And we'll try to enter
the world of fantasy. We'll try to create some
science fictional type stuff. And we'll cover the
realm of space and see what Adobe Firefly can generate in terms
of output for us. So there should be
super interesting. So for our first Prompt, we're gonna do something simple. So we'll say a planet with
three suns in the sky. So let's go ahead and run this prompt and see
what it comes up with. Okay, we're now
looking at the results that Adobe Firefly
generated for us. And I have to say, I'm
pretty impressed here. So again, just like other tools, Image Generative AI
tools that we saw, it gives us four variations here that we can start off with. And these all look pretty
good and they match the, match the text Prompt. I gave it very closely. And I have to say, coming into Adobe
Firefly after having experimented with DALLE-2 and Midjourney and
Microsoft Designer, the user interface for Adobe
Firefly is very intuitive, which is quite impressive. It goes to speak to
their efforts into making the user experience very, very easy and very
helpful in order to, for us to get started. And It's, it's quite impressive in terms of
what it can generate. And if you hover over one of these images, it's
pretty self-explanatory. So for example, it
says rate this result. So I'm quite happy
with this image. So I'm gonna give
it a thumbs up. And obviously that's
going to help the AI model in
the future, right? So that's one, if
I don't like it, I'm gonna give it a thumbs down. So for example, this
third picture is not as great as the first picture or
even the fourth picture. So I'm not happy with this one. I'm going to give it thumbs down because that's not
what I'm looking for. Again, this is just giving
feedback and they're going to use the feedback to
train the AI model better, which is something I
encourage everyone to do. Now, looking at all of these
four, these are all great. I really like number one and number four here. If you hover over one of these, we already covered these features here. Now, if you go here, it says Show similar, so this will give you or show some variations. Here we have Generative Fill.
erase part of the picture and then sort of fill it with
something different. Here we have the options menu. So if you click this, it says
submit to Firefly gallery. And that's kinda
like this showcase we saw earlier with Midjourney. Here you're choosing to open to share your Prompt
and the results of your Prompt with the
rest of the users and Adobe Firefly to help them
get inspired by your design. And then some other
options here like Copy to Clipboard and clipboard. And if you're happy here, you can just go
ahead and download the picture and use
it straight away. Okay. If there's one picture that you like the best, what you can do, which is what I did in this lesson, is the following. Let's say you like this picture better than the other three. If you click Show similar here, it's going to change the other three pictures: it's going to use this one as a base and replace the others with variations of it. So that's exactly
what I did here. I click this, I'm going
to click it again. And then it's going
to change these three to something similar, just a different
variations of this image. So what, that's what this
shows similar does, again, it's very close to the
generate variations in other Image
Generation tools that we've explored up to
this point so far. So that's what
that's going to do. Now, let's Explore the Generative
fill feature. So go ahead and click that. Once you clicked on
Generative fill, it's going to take you to this page and it's
a very simple, as you can see, very simple
and intuitive editor. And this is where you can actually manipulate
parts of the image. If you're happy
with certain parts, you're not happy
with certain parts. You can actually use the
generator field to remove or add certain components to the image to make it match your
desired results. So in this case,
let's say I'm pretty happy with here we
have the planet, we got the sun's here. We got some mountains
below the atmosphere. So let's say the only thing, one thing I really
would like to do is add a space station in the
planet or in the orbit. So we can easily achieved that. Now over here, I
got the add tool, select that, so go
ahead and select that. And let's say this area. So this is what we wanna do. So go ahead and erase
at this part here. Okay, so I erase this part. And as soon as I
erase this part, note that on the bottom, we, the user now can
see this Prompt input. And you can see
there's a note here that says if you leave this prompt blank, it will fill the area in by itself based
on the surroundings. But in this case, I know
I want the space station. So let's go ahead
and type that in and see what Adobe Firefly
can come up with. So let's say Space Station. Because I erase this part, it knows where to place it. So let's go ahead and click Generate and see what it can do. Alright, so Adobe
Firefly was able to come up with some
variations for me. So let's quickly
walk through them. This one actually placed something here, which is cool, even though it's not exactly what I was looking for; it's more of a rocket, or a spacecraft, rather than a space station. But you can see it placed it
exactly where I asked it to. And also the
surrounding look pretty good because it blends
into the picture. There's no weird
board, weird borders, or any inconsistencies
that would stand out. This one here, this
one is another option, so this is another rocket, here's another one, and
here's another one. So lots of different options. Now if you want, you can click, if you're happy with these, you can click on keep. Or if you're not
happy with this, you can just select Cancel. And I'm going to select Cancel and I'm gonna do something maybe a
little bit different. So I'm going to say inter
National Space Station ISS. And then let's see
what it comes up with. Let's see if we can
do something better. Okay, So we have four
more results here. Again, these are closer to rockets and spacecraft, not so much a station, but I think that's okay. The tool is still fairly new and experimental; they're making it better and it's still learning. Here you can even click the More button, and More just means generate four more variations for me. So that's what it's
currently doing. So this one is not too bad. This one is more of a satellite, so you can experiment with
this and see what you like. Let's say, I'm happy with
this satellite here. I'm just going to keep it as is. And if you're happy
with the final results, you can simply, you can
even use the pen tool here. And you can move, move the image around
if you like, and so on. But if you're happy, happy with the results, you can just click on Keep. And now it's going to
keep that satellite in there in that location
that you want it to. I wanted to insert the fill in. And you can go ahead and
click on Download on the top right-hand corner to
download the final image. Alright, for our
next Prompt, we're going to try
something different. So let's say a space station orbiting a black hole. So let's go ahead
and generate this. We have the render results now, and they all look pretty good. So this one is kinda nice. This is great. This one is cool. This one is really nice. I like this one, and
this one's also not bad. Yeah, let's say, I like
this one the best year, so I'm gonna give
it a thumbs up. And if you want it to, we can even share that with submitted their Firefly Gallery, which is I think it's good to help other
people out as well. If you want it to make
more variations of this, we will click this shows
similar and it would change the other three images
to something that style. But let's say we're
happy with this. If you wanted to,
we can go ahead. That's just download this
image and start using it. But in this exercise, I kinda wanted us to explore
some of the options here. So again, these are very, Adobe has done a great
job in terms of making these self-explanatory
and intuitive. So here it's very easy. It says Aspect Ratio. So you saw how we
were able to change Aspect Ratio in Midjourney
using the parameters. It's a very similar idea, except here everything is already ready for
you to click on. So all you have to do is
choose a preset here: you've got landscape 4:3, portrait 3:4, square, which is 1:1, and 16:9, which is something we saw in Midjourney, and so on. So you can change these aspect ratios to
whatever you like. So as soon as you click it, it will automatically
change that. And now it's no longer a square, but it's going to
be a rectangle. So it's doing all the redoing
their rendering for you. And it's also creating some new variations for you
with the new Aspect Ratio. Just note that when you select the aspect ratio is going to redo the design as well here, as you saw with this exercise. Also note at the bottom here, there is this filter applied called ART and that's referring to
the content-type. So you could do none. You could do photo, you could do graphic, or you could do Art by default, is currently set to Art. And this is the
type of images it creates based on that setting. If you want, you can simply
change that to photo. So let's go ahead
and click that. And that's going to generate some variations with
the content type of Photo. As we go, I'm actually noticing that these variations are
getting better and better. I really liked this one like
this is really nice and very close to kinda what I envisioned when I first
put in the prompts. So I'm gonna go ahead and
give that at thumbs up there. It's very nice. This
one's kinda cool. These are, these are
all very nice and perfect for exploring the tool
and design concepts here. And over here we have this
styles: we've got Popular, we've got Movements, we've got Themes, we've got Techniques,
we got effects. So you can see popular
right now is selected. And you got these ones. You got Digital Art, you got synth wave, you got palette knife, you got chaotic, you got neon. And just one thing
to note is that you could also input these
types into Midjourney. The AI model will be
able to interpret that. So you will see a lot of people up if you're
looking at examples, you'll see people mentioning
Digital Art in DALLE, for example, or synth wave
or even in Midjourney. And that's kinda like where
somebody's are coming from. So you're not restricted
to include these in just as part of your
Prompt In Adobe Firefly, you're going to include these in other Generative Image
AI models as well, or tools or applications. So let's go ahead and
select one of these just to see what it looks like. So let's do synth wave. So let's go ahead and as
soon as you select that, you have to rerun
your prompt again, so the Generate button becomes available. Go ahead and generate that, and now it should create new variations based on this synthwave style. As you can see from the results, this style is a lot different from what we initially had on the screen, and it's actually pretty cool. That's what synthwave produces when that style is selected. Let's do one more. Let's do neon here; I'm going to unselect synth
one more setting here under the popular section to see
what the results look like. Looks, results look pretty good. And yeah, this is why, this is probably one
of my favorite ones. I really liked this one. This one is also great. And yeah, actually
they're all pretty good. So that's kinda
like demonstrating the whole style of neon when
it comes to Generation. Now, there's some other
settings here, color and tone. So you can go ahead and
select different things here. For example, you can
do vibrant colors. If you select vibrant colors, you're going to see
different things. So let's do a combination here. Because it just takes
forever to go through every single one of these one by one. For lighting, let's do something like dramatic lighting; that should be interesting. And for composition, these are some photography compositions like close-up, wide angle, bird's eye view, macro
photography, shot from above. Let's do, let's do close-up. And then let's, as you can see, as we adding or filters, they are all getting added here. And if you want it to
remove one of these, all you have to do is
click the X button beside that filter and it will remove it
from your prompts. So let's go ahead
and run generate. Yeah, we can definitely
tell the difference now. These two are definitely
cool close-ups in terms of the view and the
results look great. And if you're not happy with this and you would
just want it to generate four more variations
over and over again, you just have to keep clicking the Refresh button, and it'll be able to do that for you. For our last prompt, let's do something a little more creative and fantasy-like: we're going to say a dragon flying through an asteroid field. And then let's see
what it comes up with. Looking at the results. They look okay. I really
don't like any of them. So what I'm gonna do is I'm
just going to click Refresh to do for it to generate
four more for me. These ones look a
little bit better. I particularly like this one. This is actually pretty
cool and very close to what I was looking for when
I first put in the Prompt. And again, you can explore
what these options and apply different type of filters to these to get the results
you're looking for. So as you can see,
you can literally try anything that you
can imagine via these tax prompts here
and all the filters available or other features
such as Generative Fill.
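As a final aside for this section, and not an official feature of any of these tools: because the styles, lighting, and composition options are really just extra keywords, you can compose them into a plain text prompt yourself and reuse it in Midjourney, DALL-E, or Firefly. Here is a small illustrative Python helper; the keyword choices and the Midjourney-style --ar aspect-ratio suffix are examples, not a documented API.

# Illustrative helper for composing a prompt from a base idea plus
# style, lighting, and composition keywords, as discussed above.
def compose_prompt(base, styles=(), lighting=None, composition=None, aspect_ratio=None):
    parts = [base]
    parts.extend(styles)
    if lighting:
        parts.append(lighting)
    if composition:
        parts.append(composition)
    prompt = ", ".join(parts)
    if aspect_ratio:
        prompt += f" --ar {aspect_ratio}"  # Midjourney-style parameter
    return prompt

print(compose_prompt(
    "a dragon flying through an asteroid field",
    styles=("digital art", "neon"),
    lighting="dramatic lighting",
    composition="close-up",
    aspect_ratio="16:9",
))
# -> a dragon flying through an asteroid field, digital art, neon, dramatic lighting, close-up --ar 16:9

The printed prompt can be pasted into whichever tool you prefer; only the --ar suffix is Midjourney-specific.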
39. 39 Adobe Firefly Text Effects: We learned about the
text to image prompt in the Adobe Firefly application, and we just explored Generative Fill, so those should be familiar. There are some other really cool features here, for example Generative Recolor. Some of these are still in development and design, so they're not even available at this time, for example 3D to image or extend image. But I think this one, Text Effects, is particularly nice, just because some of the other applications can't really handle text very well, as we covered in the earlier lessons, so it's worth going over and getting familiar with. This is where it can apply styles or textures to text with a text prompt. So let's go ahead and run
through some examples. Once you're on the
Text Effects page, it's really a two-part prompt. On the left-hand side, you just tell it what the text is going to be in the final product. The second part is you describing the effect you want to generate on that text, so what kind of texture or effect to apply; you don't include the letters themselves there, you only talk about how you want the output to look. So let's keep things simple. For the text, I'm just going to say Firefly, and then let's go through several examples for the effect. First, let's say on the beach there are a lot of seashells, so that should be an interesting one; let's type seashells. And now let's see what
it comes up with. And we have our results here. So this looks quite interesting. And if you're happy with it, you can give it a thumbs up. But I think this is pretty good. You can also try different
variations here. So if you're not happy with
this one, click another one, and it will re-render based on that tile. And you can see it's a little
bit different and you can experiment with the
other two as well. On the right-hand side, you got some sampled prompts. You can try flowers. So your texts with flowers, we did seashells, but
you could do flowers, you could do steak, you could do driftwood, wires, balloon, a lot of things. Anything you can, any object
you can possibly imagine, you can click on View
All to see more samples. Over here we got the
text effect fit so you got tight, medium and loose. You can choose your font. Again, there's only a
couple of mentioned here, but you can click on View
all and change the font. So here I can select poplar, and it's going to re-render
that to match that font. You can see that it
can, it will change. And again on the bottom you can select the
color if you want. Right now the
background and sort of the color is transparent because they're assuming that
you're going to take this and put it over
layered on top of another image so it not having background makes that
process a lot easier. And again, this is pretty straightforward and
self-explanatory, very cool feature. And in order to experiment with different textures and style of Text Effects if you want
it for your posters, for example, or your fliers. And very intuitive. And again, if you wanted to, if you're
happy with this result, Let's say this is
your final effect and you're happy with all
the filters you've applied. All you can really have
to do is just click on the Download button and you
can download the final image. And again, there's some options here that you can submit to Firefly gallery if
you're happy or just copy to the clipboard and
paste it somewhere else.
40. 40 Best Practices: Let's go through
some best practices. In particular, how not
to use Generative AI. Now, one of the most important things I would like you to take away from this course, probably the most important, is please do not copy and paste everything you see from the output of applications such as ChatGPT or Bing Chat. There have been cases where I've seen ChatGPT just make up facts that are not true. So it's very important that whenever you get an output, you fact-check everything to ensure that it's correct, precise, and accurate. Also, if you're trying to take the easy way
out, for example by getting ChatGPT to write an essay or create content for you, there are a lot of things that are easily detectable. As you've seen, ChatGPT gets really repetitive when it's creating content; the longer the content, the more it repeats certain words. There are also a lot of AI detection tools out there. The market is going to get flooded with content generated by tools like ChatGPT and Bing Chat, so a lot of these platforms and applications are going to put AI detection tools in place so that they can filter out the low-quality content and focus on the higher-quality content. So again, please do not copy and paste everything you see here. What you should do
is use it as a starting point to get ideas and inspiration: for how you want to structure your work, whether it's a blog post or a social post on Instagram or Facebook, what the composition is going to look like for your blog, and so on. And as we discussed earlier, you really need to be
aware of data bias because the output is going to be
based on the training data. And this is something that the providers and the builders
of these applications also need to be fully aware of, because if the training data is biased, the output is also going to be biased. It's something to be mindful of: whenever these models are trained, they need to be trained on diverse data, not biased data. Also, do not use these tools to create misleading or false information, and always be aware of
potential legal issues. For example, do not infringe someone else's
intellectual property or rights using these tools.
41. 41 Thank You: We have now reached
the end of the course. So I just wanted to say
congratulations for finishing the course and
thank you for enrolling. It's been an amazing journey. And thanks again for taking the time to go
through the lectures. By now, you should have the
necessary skill set to use the Generative AI tools to help inspire you and help increase
your productivity in life. I really hope that you have enjoyed the educational content in this course and can take the learnings and apply them to your everyday life. I wish you all the best in the future.