Transcripts
1. 1.1 Introduction: Welcome to the smartest choice you could make right now. Just like you're missing out if you're not using ChatGPT or another LLM, you're missing out if you're creating content like text, photos, video, or audio without using AI. Let's have a look at some AI content. To start off, I want to play some music that I made in a matter of seconds. [AI-generated song plays] This literally took me seconds to create, and it's all fake. It's all AI. Let's have a look
at some images. Once again, it's all AI, and you can do this in seconds. Whatever you need, whether it's realistic-looking people such as these, logos for your company like this, vintage photos, or album covers. Whatever you want to make,
you can make it with AI, and you can make it fast. And of course, AI video, which is slowly just getting better and better
and more accessible. But it's not only about
generating video. You can also automate
the entire process of creating a video with
clips that are real. In fact, the voice that you've been listening to for the past 10 seconds was made by AI too. By the way, the background music for this video, I made it with AI. But don't worry, I'm going to do this whole course myself. I'm not going to use AI to make this course. According to Forbes,
33% of companies that produce content right now get help from AI. If you make any type of content without letting AI help you, you are behind those 33%. I'm here to fill that gap. Just to be clear, by content, I mean any kind of content: music, voiceover,
images, video, blogs, or any other text or
any content that you're creating and putting out
somewhere in the universe. After watching this course, you will have knowledge about AI automation in content
creation that few have. You will know how to produce
high quality and realistic audio and image
content in seconds, and of course, save tons of time in your content creation. You will be ahead of
all the people who are not leveraging
AI for content. We're going to learn how
to make content with AI, and we're also going
to learn how to automate the process
of creating content. We're going to go
through all the leading AI platforms for creating image, video, music,
voiceover, and text. Then in the end, we
will look at how to automate the entire process
of creating content, creating blogs, music,
or other content, and then publishing it without you having to do a thing. After taking this course, you will know what
platforms to use for what purpose and
how to use them.
2. 1.2 Concepts to understand: Before we get into the course, there are some concepts that are good to understand: creating is now faster than buying; repurposing content; text controls every other medium; AI enhancement is recreation, not refinement; and, of course, AI usage rights. To start off, creating
is faster than buying. Before, people paid for stock footage, stock photos, logos, administrative help, royalty-free music, sound effects, and voice recordings. I promise you that it's
currently cheaper and faster to create these for yourself rather than
paying for them. This is not a prediction
for the future. It's here now. It exists. People are using it, and
if you're not using it, I'm sorry, but you're behind. To demonstrate this,
I made this photo. This is the death of stock photos and paying
for royalty-free music. By the way, I made
this photo with AI. Perhaps you need a
logo for your company, like this one, or like this one, or this one or this one. I mean, really, you can
choose between all of them because it took me about
3 minutes to make them. Or maybe you want some
music for your next video. Let's listen to a few
music tracks that took me about 10 seconds
per song to make. [AI-generated songs play] If I don't really
like these songs, then it's not a big deal. I can just spend a few minutes and make a few more of them. Or if I want, I can remove the vocals or the drums with AI. Let's talk about
repurposing content. Text controls every other medium. ChatGPT and other LLMs don't just make your life ten times faster and easier. They also help you in producing
photos, video, music, and voice because all
the other mediums can be converted to text. Once they are text, then you can edit
them, and then you can turn them back into
image or audio. The best thing
about all of this, of course, is that
it can be automated. That's why this course includes two full sections on ChatGPT and text, because your usage of LLMs like ChatGPT will improve all your content creation. It's crucial that you understand ChatGPT and that you use it. What does it mean to
repurpose content? It means you take
content that already exists and remake it
as your own version. Here are some examples. You take other blog
posts and make your own blog post from
the information in them. You take a video, turn
it into a text script, and then rewrite it
into your own version, and then you turn it back into
a video. You take a photo, turn it into a text prompt for an image generator, and generate a new image that looks the same. You take a song, analyze its style and lyrics into text, and then turn it back into your own song with
the same style. Or you take a voice sample, make it into an AI voice, and then you make your own
voice over with that voice. Text can help you with all
types of content creation. How about this one?
AI enhancement is recreation, not refinement. What I mean by that is that when you enhance something with AI, it is creating a new
thing from scratch. It's not editing
the current thing. When you edit a part of a
photo with generative fill, it creates new pixels. It does not edit
the current pixels. When you create a song with AI, it creates new
sound from scratch, sound that did not exist before. When you generate an image, it creates new pixels from
scratch, brand new pixels. It is recreation,
not refinement. AI usage rights. Commercial usage is allowed for content made by these platforms: Midjourney, Stable Diffusion, ElevenLabs, Musicfy, Udio, Suno, InVideo, Synthesia, and the other image, video, and audio platforms that we will be discussing in this course. These are the major platforms for AI content creation right now, and they allow you to use whatever you create commercially. So there's not a whole
lot of boundaries. Of course, AI usage rights will vary by platform
and country. This is not legal advice,
but most platforms, when you create AI music, voice, photo or video, you have full free usage of
whatever you create. As we said, while the AI models were trained on current content, they are not recreating
things that exist. They are creating brand new
images, brand new voices, brand new music, that is not the same as anything that
currently exists. To finish it off, here are some AI statistics from a Forbes Advisor survey. According to this survey, businesses are using AI tools in the following ways: 56% are using AI to improve and perfect
business operations. 51% are using it for cybersecurity
and fraud management. 47% of businesses are using
AI as a personal assistant, which I guess is the most
obvious use case for AI. Then there's this
one, of course, 35% are leveraging AI
for content production. That's what we want to see. If we go ahead and just remove everything else, and we
just look at this one. 35% of businesses are leveraging AI for content production
according to Forbes. All you got to do
is search for it, and you'll find out for
yourself that this is true. So one out of three companies that produce content use AI. Now, will this number go
down? No, it will not. It will go up. That's why
I've made this course because I think when we look
at the statistics in a year, it's going to look
more like this. It's going to be
more like 66% or two out of three companies that produce content will use AI. That's it for this video. I
will see you in the next one.
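The repurposing loop described in this section (media to text, edit the text, text back to media) is simple enough to sketch in code. The three step functions below are hypothetical placeholders of my own naming, not any real API: in practice each one would call a transcription model, an LLM, and a text-to-speech or image platform respectively.

```python
# A minimal sketch of the "everything through text" repurposing loop.
# The step functions passed in are placeholders; real implementations
# would call a transcription service, an LLM, and a generation platform.

def repurpose(source, transcribe, rewrite, synthesize):
    """Convert media to text, rewrite the text, render it back to media."""
    text = transcribe(source)       # media -> text
    new_text = rewrite(text)        # edit/remix in the text domain
    return synthesize(new_text)     # text -> new media

if __name__ == "__main__":
    # Stand-in functions so the end-to-end flow is visible:
    fake_transcribe = lambda clip: f"transcript of {clip}"
    fake_rewrite = lambda t: t.replace("transcript", "rewritten script")
    fake_tts = lambda t: f"audio rendered from: {t}"
    print(repurpose("video.mp4", fake_transcribe, fake_rewrite, fake_tts))
```

Because every medium passes through the same text bottleneck, the same three-step function covers blogs, songs, voiceovers, and images; only the plugged-in steps change.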
3. 2.1 AI Audio Introduction: Hey, guys, in this section, we're going to talk
about AI audio. How to create music
and voices with AI. I'm really excited about this because compared to
let's say AI video, AI audio has really come
a long way already. And there's a ton of
use cases where you can already use these
AI tools for AI audio. Some of the things that
you can do is you can create voices that
speak for you. You can give the
AI a text script, and it will read it out in
whatever voice you choose. You can change your own
voice into another voice, just like I'm doing right now. So you can basically record an audio sample yourself, and then the AI will turn that into another voice of your choosing. You can use voice samples, or you can train and clone your own voice. So maybe you have a cold, but you still want
to make a podcast. That's not a problem. You can create music out of thin air, music from text prompts, just like you do with AI photos. You can describe very briefly the type of song that you want, and the AI will
create it for you. Then there's other specific
use cases for AI audio. You can remove
vocals from a song, or you can remove only
the guitar from a song. Let's say you only want the
guitar from a rock song. Without the drums
and the vocals. AI can make that happen. Okay, now I'm going to play you a couple of songs that I made in about 30 seconds. [AI-generated songs play] That was AI audio. Quite exciting: create
voices that speak, change your own voice
into another voice, create music, remove
vocals from a song. In this section, we're
going to be looking at some AI audio platforms that you can use for
different things, creating voices, creating music. There's already a lot
of AI audio platforms, and I will not be able
to cover all of them. I want you to know
that it's very easy to find these platforms. You just have to search
for them, test them. Now, these platforms
that I have listed here, they're some of my favorites. They're some of the
most popular platforms, but more platforms are
going to show up and there's already a lot of
other platforms as well. You don't have to
feel that you have to stick to these platforms. You can find other
AI audio platforms that might be just as good. However, these
platforms, they're good. Just like with video and photo, we're going to be going
through them one by one. ElevenLabs is the leader and has been the leader for a long time for just generating normal voices with AI. I've got to say they sound pretty realistic. Musicfy is a platform where you can change
your own voice, and you can also make music. But this one is
better for converting your own voice like I just
did into another voice. And you can use these
voices however you want. Even if they sound like
Trump or Spongebob, it's not actually a sample
from Trump or Spongebob, so you will have
the rights to use them freely and commercially. Suno is a new platform that is really good for music. It's like an image generator. You put in some text and it creates a song for you. The music is pretty good. Udio does the same thing as Suno, creates music from a text prompt, and this one is
really good as well. We have Adobe Podcast
for Adobe users, really good for noise removal
or voice enhancement. This is more if you
want to improve the quality of an actual voice
recording that you have. And then we have
platforms like lalal.ai, where you can extract vocals
or instrumentals from a song. Let's say there's a song
from the 80s that you like, and you want the vocals, well, then you can strip out
all the instrumentals, or maybe you want the guitar solo from a song
without the vocals. That's something that
AI can do for you. This is very exciting
because AI audio has already come a long way. Let's get into it. I'll
see you in the next video.
4. 2.2 (Audio) Elevenlabs: Okay, we're going to start off with elevenlabs.io, the best and most widely used AI audio generator. Here you can create voices and download them as audio files. It does not create music. These voices will read your text. ElevenLabs uses their own model. It can be freely used for commercial use if you have one of their paid plans. You get 10,000 characters for free, which is quite a lot; that's about 10 minutes of audio. Here's the website, elevenlabs.io. Once you make an account, this is the page that you're
going to land on. To try it out, you want to go up here to the menu and
click on Speech. Enter a text into
this box right here, Text to Speech, and click
on Generate Speech. Only about 2 seconds later, we get our generated
voice. Let's play it. With a map in one hand
and hope in his heart, the man stepped
into the unknown, ready for the adventure
that awaited. As you can hear, it
sounds realistic. It's actually pretty hard to tell this apart
from a real voice. On this button right here,
you can change the voice. As you can see, if
we scroll down here, they have a bunch of voices. You can also create
your own voice. Let's test out a couple
of other voices. Bill, generate speech. With a map in one hand
and hope in his heart, the man stepped
into the unknown, ready for the adventure
that awaited. Okay, Bill sounds pretty good. Let's test Nicole. With a map in one hand
and hope in his heart. The man stepped
into the unknown, ready for the adventure
that awaited. Okay, that was kind of creepy. Obviously, they have a lot of different voices and a
lot of different styles. But I mean, this voice, even if it's a little creepy, it can certainly be used for
different creative purposes. Here, if you click
on speech to speech, you can record your
own audio file upload it here and turn it
into one of these voices. In the menu, if we
click on Voices, we're going to land
in the voice lab. Here you can create
your own voice. Maybe you have a podcast
or a YouTube channel, or you want to make an
instructional video. You could create a voice that sounds just like your own voice, and then you can
make scripts in ChatGPT or something like that, give them to ElevenLabs, and it will read them for you, and that can certainly
be a faster workflow than recording
videos by yourself. Here's an interesting little
article page on ElevenLabs. It's a feature that they
have not released yet, but they have a
wait list for it. So basically, it's generating
sound effects with AI. Here they have generated some sound effects for different video clips that Sora made, because all of the Sora video clips that OpenAI released did not include any audio. So ElevenLabs used their new, not-yet-released feature to create sound effects for Sora's videos. Let's have a look at those. [sound effects play] That's really impressive. That sounds really good. A lot of the time when you're creating a video or a movie
or something like that, you'll have a video clip, you'll have to add in the sound effects and
find them online, but now you're going to be
able to just generate them out of thin air, save
a bunch of time. What about commercial
usage for ElevenLabs? Can I publish the content I
generate on the platform? The free plan does not
include a commercial license, but the paid plans all
include a commercial license. Of course, this is
also going to vary by each country's laws on AI audio. But basically, you can use their voices however you want, and you own all the audio samples that you create with ElevenLabs. There's not a whole lot
of restrictions here. That's because these voices are trained on a bunch
of different voices. They're not representative
of any one person's voice. They are actually a new voice
that ElevenLabs has created. In the menu, they have a
section called dubbing, translate your content across
29 languages in seconds. If you have content
for social media, you can dub that content into a bunch of
different languages and post the same video in different languages on your
social media platforms. [dubbed audio plays] Once you've generated
your audio file, you can click right
here to download it. And just like that, we have our audio file on our computer. We can make a video out of it or whatever
you want to do with it. Creating audio literally has
never been easier than this. There's not that much
more to it than this. The platform is
super easy to use. It's very easy to sign up. It's easy to test it for free. If you like it, you want
to start using it more, then you can pay, get
one of their plans. If we have a look
at their pricing, there's a free plan and that
one even has API access. That one gives you
10,000 characters, which is about 10
minutes of audio. If you want to jump
up to the $5 per month plan, you'll get
30,000 characters, which is about 30
minutes of audio, and then it goes
on and on and on, the more you pay, the more
audio you get, of course. Really, really good platform. I would highly recommend
everybody try this out. Even if you're not
interested in making audio, even if you're not going
to use it for anything, just go in here and test it. This is a revolutionary tech, and I think it's important
that everybody understands how easy it now is to create
audio that sounds realistic, not only for the purpose of creating content, but also because, if you hear audio somewhere or maybe you receive a call where somebody you know is asking you to do something, it's good to know that
it might not be real. It might not be created
by a real person. It might be AI. Okay, that's it for this video. I'll
see you in the next one.
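The quota math above (10,000 characters is roughly 10 minutes, 30,000 is roughly 30 minutes) is easy to script, and ElevenLabs also exposes a REST API on the plans mentioned. The sketch below only builds the request pieces without sending anything; the endpoint path, header name, and body field are my assumptions from ElevenLabs' public API documentation, not something shown in the video, so verify them before relying on this.

```python
# Rough planning helpers for ElevenLabs character quotas, using the ratio
# stated above (~1,000 characters per minute of audio). The URL, header,
# and JSON field names are assumptions from the public API docs.

API_BASE = "https://api.elevenlabs.io/v1"
CHARS_PER_MINUTE = 1000  # ~10,000 characters ~= 10 minutes of speech

def estimated_minutes(characters: int) -> float:
    """Estimate how many minutes of audio a character quota buys."""
    return characters / CHARS_PER_MINUTE

def build_tts_request(voice_id: str, text: str, api_key: str) -> dict:
    """Assemble the pieces of a text-to-speech POST without sending it."""
    return {
        "url": f"{API_BASE}/text-to-speech/{voice_id}",
        "headers": {"xi-api-key": api_key, "Content-Type": "application/json"},
        "json": {"text": text},
    }

if __name__ == "__main__":
    print(estimated_minutes(30_000))  # the $5 tier: about 30 minutes
    req = build_tts_request("Bill", "With a map in one hand...", "YOUR_KEY")
    print(req["url"])
```

Keeping the request construction separate from the network call makes the quota arithmetic and payload shape testable without spending any characters.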
5. 2.3 (Audio) Musicfy: Okay. Next up is a platform
called musicfy.lol. Here you can change the voice of an audio sample that
you already have. You can create your own voice. You can create music,
mainly instrumental music. They have a voice library
of famous voices, such as SpongeBob, Obama, Eminem, Shrek, and Taylor Swift. Their voices are not direct copies, but are made by mixing different real voices. According to Musicfy themselves, this gives you the ability to use their voices commercially. Here's the website, musicfy.lol. You can sign up for free and make 15 audio generations for free. They have an affiliate program and an API as well. Here are some of their voices: Juice WRLD, Billie Eilish, SpongeBob, Eminem, Donald Trump, Patrick Star, Peter Griffin, Shrek, Travis Scott, The Weeknd, Mickey Mouse, Joe Biden, Obama, Rihanna, Homer Simpson, Drake, Adele, Katy Perry, Stewie Griffin, Ariana Grande, Luigi, Ted, Bart Simpson, Candace Flynn, Phineas, and a bunch
of other voices. Once you've made a free account, this is where you're
going to land. There's a few options here
on the left in the menu. We're going to start off by going to create and
then convert voice. Here you can upload
an audio file, or you can record
audio yourself. You just want to try it out.
You can record something, say some words, and then
convert it into another voice. Here if you click these buttons, you can remove
instrumentals from a song or you can
remove reverb or echo, but we're not going
to do that yet. To test it out, we can try
out this audio sample that we got from ElevenLabs with
the voice named Bill. It sounds like this. With a map in one hand
and hope in his heart, the man stepped
into the unknown, ready for the adventure
that awaited. Okay, so let's drag that
one into musicfy.lol. We have now uploaded
an audio file. Then we're going
to come up here, click on Select A Voice. Let's do Donald Trump. We have the voice, Donald Trump, we have our audio file. Now you just have to
click on Generate. Your generated samples are going to show up right
here on the right, and up here you will see how many generations
you have left. It also shows up down here
so we can play this sample. With a map in one hand
and hope in his heart, the man stepped
into the unknown, ready for the adventure
that awaited. So now that we know that it's Donald Trump's voice,
it makes sense. It sounds like Donald Trump. But if you would just hear this, you probably wouldn't think
that it's Donald Trump. It's not just enough to have an audio file and then convert
it into somebody's voice. The tone and the pitch
of your audio sample should also be matching
your speaker's voice. The voice of your speaker
that you have selected, if you want a more realistic
pinpointed result. Let's click here
on Switch Voice. Let's test out Peter Griffin. Peter Griffin, here's
the audio file again. Click on Generate. Map in one hand and
hope in his heart, the man stepped
into the unknown, ready for the adventure
that awaited. Okay, so Peter Griffin
from Family Guy, I must say this one
did sound like him. If I just heard this sample, I would certainly be fooled to probably think that this
was from Family Guy. Okay, let's try something else. Here is a song that
I generated with AI. Let's play it to see
what it sounds like. Death Glens field of
Fido five Colors. Let's go back to
create Convert Voice. Drag this song into Music F, and select the voice of Let's do Travis
Scott, rap artist. Click on Generates.
Let's play this one. Tf call and Sun Feel the Fs pom calls make
com Soft sway in. Okay, I wouldn't say
that sounds very good, but it certainly did something. Of course, you can't just expect to get a good result every time. You're gonna have to put at least an ounce of creativity
on your own into this. But let's try another voice
for this song. Switch voice. Let's do Billy Ilis this time. Click on Generate.
Let's play this one. Icons make the soft sway in. Okay, that sounds pretty good. And because this was a song, and we did not check
this box right here where it says
Remove instrumentals, it removed the instrumentals
for the song and recreated the vocals with
the sound of Billie Eilish. So if we play this original song track again, [original song plays] of course, we can hear that there's some guitar and other instruments in there. But when we play this new version with Billie Eilish, [new version plays] it's only the vocals. At this point, I think
you get the idea. This is very powerful, you can
do a lot of things with it, and it actually sounds good. Let's test out one more
feature at Musicfy. Right here under Create, we
can click on Text to Music. Here's the box where
you can enter in a description of the kind of
music you want to generate. Here are some suggestions. We
can just try one of these. Create a low fi hip hop beat, relaxed and mellow,
incorporating vinyl crackles and
soft piano chords. Okay, generate that one. Let's play that one. It's not the best music I've ever heard, but it sounds good. Now if you're making a
video instead of paying for music or looking for
royalty free music online, it's probably actually
faster to just come here and generate music yourself if you need background
music for a video. Musicfy, really good platform. Same as ElevenLabs. It's
easy to sign up. It's free to test it out. I would highly recommend
that you come here and play around with their tools.
See in the next video.
6. 2.4 (Audio) Suno: Next up is a platform
called suno.com. Here you can create
music from text. The music that comes
out of this platform actually sounds
surprisingly good. You can create
songs in any style. You can create
instrumental songs. If you need background
music for a movie or a video or something like
that, this can be really good. You can use your own lyrics, and then you can extend
your songs once created. Once you've made the songs, you can make them longer if you like. It's free to create an
account and try this out. There's really no reason why you shouldn't because this
platform is awesome. This is Suno's dashboard once you've created an
account and logged in. This is where you're
going to land. The website is super
easy to navigate here. You can listen to some songs
that other people have made. You can go through some genres, categories, listen to songs, and you can even
copy the prompts. If I click on these three dots, you can click on this button
here that says reuse prompt. If you find something
that you like and you want to make something
similar yourself, that's a really
good way to do it. We're just going to go up here on the left and click on Create. Over here are some songs
that we have already made. And this box right here is
where you create your music. There's not a whole lot of settings that you
need to be aware of. You can basically just type
in a description here and it will make a song for you.
But there's a custom mode. We're going to look
at that in a second. If we type out here
a country song about trucks, country music. All we got to do is
click on Create, and that's going to start
creating two versions for us. It's also going to
give it a name, this one is called
the Open Road. As you can see up here, I've
generated two other songs, but it's given me
four songs in total, and it's also going to generate a little image for each song. Let's listen to both of these. [first song plays] That was the first one. Let's listen to the second one. [second song plays] I got to say that
sounds surprisingly good, and that took me about
10 seconds to make. It's giving us two versions and they're both very similar. The lyrics are similar as well. Over here on the right,
once you click on a song, you can scroll down and look
through the lyrics here. Here you can click on Extend
if you want to extend it. This one that it made for us is 1 minute and 37 seconds long. Of course, if you
want to download it, you just click on these three
dots and click on Download, and that will give
you the audio file. [song plays] It's so simple. I love it. If you go to download here, you can even click
on Download video. And that will give you
the song as a video file, [video plays] with the image that was generated for you, the title, and the lyrics. Alright, you can also
click on Instrumental. So if we check this box
instrumental and we keep the same prompt,
click on Create. Let's listen to the instrumental ones that it just gave us. [first instrumental plays] That's the first one.
Here's the second one. So I think it's pretty fair
to say that the days of Royalty Free Music
platforms are over. In my opinion, right now, it's currently faster to generate your own song for a video rather than
to look for it online. Then we have this one up here, which is called Custom Mode. If you click on Custom mode, that's going to enable you
to enter in your own lyrics. You can also make your
own title for the song, and then you'll enter in
the style of the music. What we can do here
is we can move over to ChatGPT and tell ChatGPT to write a two-minute country song about a deer in the headlights. Here's some lyrics from ChatGPT. Let's copy them. Paste them
into Suno. For the style of music, we're just going to
type out country music. For the title, we'll name
it Deer in the Headlights. Click on Create. Let's
listen to these. [song plays] I think you get the idea at this point, it's super simple. Of course, it's not quite
the same quality as Drake or Rihanna or Taylor Swift
or any top artist. But if you need music
or background music for a video or for anything, this is the new way to do it. This platform hasn't even
been out for that long, so it's only going to
get better from here. That's it for this video, I
will see you in the next one.
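Suno's Custom Mode, as used above, is really just three text fields: lyrics (which we pasted from ChatGPT), a style, and a title. A tiny helper can package and sanity-check those fields before you paste them in; the field names here are my own, since the video drives the website by hand and Suno exposes no API in this walkthrough.

```python
# The Custom Mode workflow above boils down to three text inputs.
# This sketch only bundles and validates them; it does not call Suno,
# and the dictionary keys are my own naming, not a real API schema.

def custom_mode_fields(lyrics: str, style: str, title: str) -> dict:
    """Bundle the three inputs Suno's Custom Mode asks for."""
    if not lyrics.strip():
        raise ValueError("Custom Mode needs lyrics")
    return {"lyrics": lyrics.strip(), "style": style, "title": title}

if __name__ == "__main__":
    # Stand-in for lyrics you would paste from ChatGPT:
    lyrics = "Frozen in the glow of the midnight lights..."
    print(custom_mode_fields(lyrics, "country music", "Deer in the Headlights"))
```

Checking for empty lyrics up front mirrors what the site itself requires: Custom Mode will not generate a song from a blank lyrics box.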
7. 2.5 (Audio) Udio: Next up is Udio. This one is very similar to Suno. I would say that it's equally good to Suno or maybe even better, but Suno is a little faster at generating the songs. The platform is very similar to Suno, basically the same. At udio.com, you can test
it out for free, create an account for free, log in, and this is where
you're going to land. Same thing here you can listen to some songs that
other people have made. Scroll down to look
through some genres. If you click on these three dots on other people's songs, on Udio you can actually download other people's songs. The prompt box for Udio is
right at the top up here. You don't have to click
anywhere to access it. If we click on the prompt box, here they have some settings. If you want to make
your own lyrics, you can click on
the Custom box here and write your
lyrics in this box. Here's the button for
instrumental music, and Auto generated is
just a normal song that's not instrumental
and without custom lyrics. Here they also have some genres. For example, we can
click on Hip hop. That's just going to enter in hip hop as the prompt. Here we can add in hip hop, a rap song about Los Angeles. Click on Create. That's going to start generating your music. If you go over here on the left, click on My Creations. Here you're going
to see the music that's currently generating. Usually, it takes a few minutes. Here I have a song
that I made with the exact same prompt.
Let's listen to that one. [AI-generated rap song plays] Same thing here, you get two
alternatives per generation. Let's listen to the other one. [second version plays] I got to say I'm so amazed by these platforms and
how good they sound. I don't want you to forget that generating music is just like
generating text or photo. It takes a few tries before
you get a really good one. Same thing with ChatGPT: you might not get the best
answer right away. So, you know, don't
knock it if you try it and you don't like
the result that you get. Keep trying, and I promise you you're gonna
get something good. Let's have a listen at
another song that I made. [AI-generated song plays] It sounds really
good. One thing that you quickly think
about is the lyrics. They might sound a
little bit weird. This is AI generated. So, you know, if you just type in a simple prompt like this, you might not get
lyrics that are comparable to a real top artist. But how amazing is it that you can make this in a few seconds. If I just heard this in a
car or somebody played it, I probably would think
that it was a song from Spotify, a real artist. And there's really not that
much more to it than this. There's not much more for me
to cover on these platforms. They're so simple to use. So that's it for this video. I will see you in the next one.
8. 2.6 (Audio) lalal: Next up is a website called lalal.ai. Here you can separate
vocals from instrumentals, and it works really well. You can download
them separately, so you can download the
instrumentals or the vocals. All you got to do
is search for lalal.ai. Click on this website. You're going to land on
this website and here you don't even need to create
an account to try it out. All you have to do is click on this yellow button where
it says select files. The default setting is going to be vocal
and instrumental, but if you click on
this gray button. Here for example, if you
have an audio sample where it's voice and noise, you can remove the noise. But we are going
to try it out with this country song that
we made earlier in Suno. Just going to play that one quickly for you so you can hear it. [song plays] That's what it sounds like.
Click on select files. Drag in the audio file
for this country song and click upload. It's going to upload to the site; you don't have to do anything else, it just takes care of it for you. It says Generating previews, and that took about 15 seconds. Now we can listen to the vocals and the instrumentals separately. Let's play it.

[vocals play]

Those are the vocals. So over here on the
right, we can mute the vocals and unmute the
instrumentals, play it. Then we can test out
separating only the drums. So over here on the left, we click on drums, and then we click on Create New Previews. That's going to generate
some new previews for us. And that's going to give
us the same setup here. So right now if we just play it, we're going to only
hear the drums. Then if we unmute this one, where it says without drums, mute the drums, and play it:

[song plays without drums]

That's our song without drums. This is simply incredible. If you want to
download the samples, you can pay for one
of their packages. Or, of course, you can just
use an audio recorder, like the QuickTime Player on Mac, record the internal audio on your computer, and play the music while recording. That way you
can get it for free. That's it for this video.
I'll see you in the next one.
9. 3.1 AI Video Introduction: Welcome to the
section on AI video. This is going to be very
exciting for a lot of people. The value is in
understanding what's on the market so that
when you have a need, you can choose the right tool. When it comes to AI video, it's not just AI
generated videos like these on the right here. AI can help us in
different ways with video. There are platforms that
actually generate video, similarly to how Midjourney generates a photo. There are platforms that
put together video for you, so they leverage a
bunch of stock footage, and then with the help of AI, they can put together an
entire video for you. Then we have platforms
that can edit your video for you or
edit parts of your video, a bit like Photoshop's Generative Fill. Instead of having a green screen, you can use AI to change different parts of a video. What different AI video
platforms are out there? If you know anything
about AI video, you certainly know about
Sora at this point. Sora, just like ChatGPT, is made by OpenAI. At this current moment, I
would like to say that it's by far the best AI video
generator on the market. But it's not available
for public usage. These clips on the right
that we're looking at here were made by Sora. Then we have a platform
called Runway. Similarly to Sora, Runway can generate video, but its video is not as good as Sora's. Runway is more of an overall AI video platform. There are a lot of features within Runway. For example, you can
remove the background. You have a motion brush. You can give Runway images
and make those images move. Then we have a platform
called Synthesia. They focus on making AI avatars. On the right here is
Synthesia's website, and this here is what an
AI avatar looks like. Synthesia is more for
instructional videos. And with Synthesia, you can
make your own AI avatar. You can basically
clone yourself. They currently offer
160 different avatars with different looks
like this one, and they can speak 160
different languages. Then we have a platform called InVideo. This one puts together a video for you. On the right here is InVideo's website. Here you can enter a text prompt and it will create
a video for you. But it's not AI-generated video; it's real video: stock footage clips that they put together into a full video for you. They choose from 16 million stock photos and videos. InVideo can also generate audio and a script for your video. On the right we have Descript's website. Their features include making
into the camera. If you film yourself and
you look off camera, they can edit your
eyes to make it look like you're always
looking into the camera. This can be useful if you're going to
read a script or something. You can edit your
video by editing text. They have AI voice cloning, and they have a
green screen effect. If you're filming yourself
in your living room, but you would rather make it look like you're
sitting out in space, Descript can do that for you. Then we have some AI features in regular video platforms
like Premiere Pro. For those who are
not video creators, Premiere Pro is Adobe's
video editing software. It's been a leading platform on the video editing
market for a long time. They have added in a
few small AI features, like extending an audio clip. If your video is 5 minutes long but the soundtrack you're using for that video is only 4 minutes, you can use their AI feature to extend that audio instead of
finding a new song. They have an auto
captions feature, and I'm also going to show you another trick for automatically cutting up a
video in Premiere Pro. How should you take
advantage of this knowledge? Well, most of these platforms honestly are not super good yet. They're in an early stage. But as they become
better and better, they will become more useful. If you are an early adopter, you will be in a very good
position for the next year. But there are already tons of useful tools for using AI when creating video. Of course, if you are aware of
on the market, then as they evolve,
you will be in a good position to start
automating video content. Automating video content
is already a thing, and it's only going to
become more of a thing. We're going to cover some of these platforms one by one: Sora, the best AI video generator; Runway and all its AI video features; Synthesia and making AI avatars; and InVideo, which creates full videos for you. Of course, there are already a lot of different AI video platforms on the market, like Descript, and they all specialize in something. For those of you
who work with video editing
on a regular basis, if you're not already aware of the AI video features
in Premiere Pro, for example, this
can be very useful. Real quick, if we
head over to Google, search for AI video, scroll down past the ads. The first results that
we're seeing are Synthesia, InVideo, and Sora, which are the platforms that I've already mentioned. But then, of course, the list just goes on and on: an endless number of AI video platforms already. Lastly, I just want to say: watch what you're interested in. You don't have to watch what
you're not interested in. But it's still
good knowledge for anyone in the professional
world at this point. If you know that you're not
interested in AI avatars, you don't have to watch the video on Synthesia. But you might still gain a lot of helpful knowledge about AI avatars that can help you in your work, or maybe you'll recommend that a friend start using Synthesia. I'll see you in the next video.
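Before moving on, here's the platform landscape from this section summarized as a small lookup table. This is just a sketch in Python; the one-line descriptions are my own shorthand for what was said above, not official product descriptions:

```python
# AI video platforms covered in this section, mapped to what each
# one specializes in (shorthand for the descriptions given above).
AI_VIDEO_PLATFORMS = {
    "Sora": "text-to-video generation (OpenAI, not publicly available)",
    "Runway": "video generation plus editing: green screen, motion brush",
    "Synthesia": "AI avatars for instructional videos",
    "InVideo": "assembles full videos from stock footage, script, voiceover",
    "Descript": "edit video by editing text, eye contact, green screen effect",
    "Premiere Pro": "traditional editor with small AI features (audio extend, captions)",
}

def platforms_for(need: str) -> list[str]:
    """Return the platforms whose specialty mentions the given keyword."""
    return [name for name, niche in AI_VIDEO_PLATFORMS.items()
            if need.lower() in niche.lower()]

print(platforms_for("avatars"))   # -> ['Synthesia']
```

A quick way to answer "which tool for X?": for example, `platforms_for("stock")` picks out InVideo, and `platforms_for("green screen")` picks out both Runway and Descript.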
10. 3.2 (Video) Sora: Okay. Let's talk about OpenAI's Sora. First of all, you can just search for Sora on Google and go to OpenAI's website. The first thing that you will
see if you scroll down is some different videos
that have been made with Sora, and then you can flip through them and watch them. Sora is created by OpenAI. It's not available for public usage at this point. I think we can all agree that it's currently the best video generator at this moment. The quality is just way, way better than any other AI video generation platform on the market. Now, Sora's training: Sora is an AI model that was trained just like Midjourney or ChatGPT. It was trained on an
unspecified amount of data. OpenAI is rather secretive about this. They don't want to really give out what they trained Sora on,
video and photo content that they either
own or don't own. A GPU called the NVIDIA H100 is popular hardware right now for training AI models. One of those GPUs sells for about $40,000. It's estimated that for Sora's training, they needed 4,000 to 10,000 of these GPUs to train Sora for one month. So let's say they used 6,000 of them; that hardware alone would have cost $240 million. Yeah, OpenAI is secretive
about the details. At least at this current moment, they're not really giving out
too many details about how it was trained or if
it will be released, when it will be released. Of course, there's a lot of safety concerns
surrounding this. Here's an interesting fact.
According to this study, Sora can generate about 5 minutes of video per hour of NVIDIA H100 GPU power. Basically, what that means is that to generate 5 minutes of video like this one, you would need to run one of these GPUs for an hour. Here is an interesting
thought for the future. In the past, a production company would buy an expensive camera; camera equipment could cost $40,000 or even more. Perhaps in the future, a production company will instead buy GPUs, or spend that money on renting GPU power, in order to generate video instead of producing it themselves. The main point, of course, is that if Sora were
released to the public, it would make a huge shift. Not only would it change movie
production, but of course, it also creates this
whole thought process of, can you now trust videos
that you see online? If you see a video of a
person doing something, how do you know that it's real? How do you know
that it's not AI? Well, that's an
issue that hasn't exactly been solved yet. There's really not that
much more to say about Sora. You can find Sora videos on OpenAI's website. If you haven't already, I would really recommend going to openai.com/sora and just scrolling
through this page, looking at the different videos. There are really a
lot of astonishing videos here that will make you question what is real
and what is not real? What are the possibilities
for video in the future? If you can automate
something like this, all you're going to need is your imagination if you
want to make a movie. Simple as that. I'll see you in the next video.
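As a quick footnote to the numbers in this section: the hardware-cost and generation-rate estimates quoted above can be checked with a few lines of arithmetic. All inputs here are the rough, unofficial estimates from the narration; treat them as assumptions, not OpenAI figures:

```python
# Rough Sora economics, using the estimates quoted in this section.
H100_PRICE_USD = 40_000        # approximate price of one NVIDIA H100 GPU
GPU_COUNT = 6_000              # assumed count, within the 4,000-10,000 estimate
MINUTES_PER_GPU_HOUR = 5       # estimated Sora output per H100-hour

# Hardware cost if you bought the GPUs outright.
training_hardware_cost = H100_PRICE_USD * GPU_COUNT   # 240,000,000 USD

# GPU-hours needed to generate a feature-length (90-minute) film.
movie_minutes = 90
gpu_hours_per_movie = movie_minutes / MINUTES_PER_GPU_HOUR   # 18 H100-hours
```

The second number is what makes the "buy GPUs instead of cameras" thought experiment interesting: under these assumptions, a film's worth of footage is hours of compute, not months.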
11. 3.3 (Video) Runway: Let's talk about Runway. A quick pitch for Runway: it's like a user-friendly Hollywood studio in your computer. The website is runwayml.com. While this platform might not have everything
you need right now, it's certainly a platform
to keep your eyes on for the future as AI
evolves in video. What can you do in Runway? Well, they have a lot of features, but the main one is the green screen feature: removing or separating the background from your subject. Here we can see a
little preview of that. You can generate
short AI video clips. Quality-wise, it's not the best; it's nowhere close to Sora, for example. Think of Runway's AI-generated video like Adobe Firefly or DALL·E, but slightly better. You can change the
style of videos. You can make videos
look like a painting. You can turn a
photo into a video. Basically, you can make a
photo move. Image to image. You can transform the
style of an image. Think of this like
a photoshop edit, but you're doing it
with a text prompt. Runway is free to start: you can sign up, create an account, and test the AI video features for free. Creating AI videos costs credits, and you run out of credits rather fast. This is the website, app.runwayml.com. Once you create an
account for free, you'll land on this page. If we go over here to the
left in the menu and click on Generate Videos, we get three options. But we're going to click on this one, Text to Video. That's going to bring you
into this user interface. Up here on the right,
you can see that we get 105 seconds of video for free. This is your text prompt box. If we type in here, a guy running down the
streets of New York, you can click down here on Generate 4s. That means generate 4 seconds of video, and then this box is going
to pop up on the right. Your video is generating and will be done
in a few minutes. Now, if you ask me, I will say
that the main use case for Runway right now is their
green screen feature, separating background
from subjects. This is where a lot of
video creators could actually find this
platform very useful. Our video is finished,
let's have a look at it. Here we can see that
they're not even moving. We have some flaws going
on here with the cars. I would say that it looks okay, but obviously this video
is not really working. Videos of people, not the best. But if we try a video
without people in it, let's type out a drone shot
of the Alps in Switzerland. Generate that. It's going to open up a second
box down here. I got to say this
looks pretty good. There's a little drone up here, so it didn't really understand that we meant that
it was a drone shot. It actually put a
drone in the shot. But besides that, I can see
this clip being used in a YouTube video or a
corporate video or whatever. It looks good. Let's test out
the photo to video feature. You can do that in
the same user interface. Up over here, you
can drop an image. Let's test out
this image that we got earlier, drag that in there. Remove the prompt,
click on generate. That's going to open up a third video generation down here. Let's have a look at
this one. Once again, we can see that it's not so good at generating videos of people. It's certainly making
them move a little bit, but it's also making them
look unrealistic and warped. It's not good for making
videos of people. But what if we try
a simpler video where there's no face and
it's a simpler background, there's only one subject. Let's drag that one in
there and click on generate. I got to say this one
does look better. There's a little glitch
down here on the foot. But besides that, I
don't really see any flaws. The conclusion here is
that people don't work well, but landscapes work, and images with a simple background and a single, simpler subject work rather okay as well. Let's go back to the
main page and click on the remove background green
screen feature down here. Up here, we can see our
previous generations. Let's try it with the Spaceman. Over here on the right, we
have some instructions, click on an area
to start masking. So I'm going to click on three different places like this. As you can see, that puts down some green dots. Just like that, we have separated the background
from this guy. Now there's still a little space up here that's not selected. I'm going to press on that. Now the whole guy is selected. Now we can go down here and
click on Replace background. Throw on a little
effect, perhaps. Now once you've done
this, you're able to export only this spaceman without a background as a
video or as a PNG sequence. Then you can put it in Premiere and lay it on top
of another video, or maybe you want there to be some text in between your
background and your subject. That's another good
use case for this. To summarize Runway: it's not good for creating
videos of people, but it is pretty
good for generating videos of landscapes
or simpler subjects, and it's good for
removing backgrounds. That's it. I'll see
you in the next video.
12. 3.4 (Video) Pika: Okay. Onto the next AI
video generation platform. This one is called Pika. You can find it at the URL pika.art. It's a very simple to
use AI video generator. Pika has developed its own model; they're basically using their own system to generate video. The quality is not super good, but it has good potential
for the future, and it's open for anyone to use. In Pika, you can generate
three second video clips. You can make about
25 videos for free. Their platform is only
for generating video. They don't really have
any other features like editing parts of your video
or something like that. Pika can be used in Discord, and it can be used on their website, pika.art. This is Discord, and this is what Pika looks like in Discord. They have a bunch of
chats or servers here. Here you can see
all the videos that other people are
currently generating. We can click on one of these and this is what it looks like. Here's another one. But
we're going to be looking at it on their website instead
because that's easier. On pika.art, you can just click on Try Pika. Once you've made an account, you will land on this page. Here you can hover your mouse over some different videos to look at videos that other people have created with Pika. As you can see, there's not that much
movement in these videos. It's almost like taking a photo and making it move in Runway. Down here, we can see that
they're 3 seconds long. As we can see here, Pika can
generate videos of people, but they're not going
to be super realistic, and most of them have the same framing. It looks like a
close up shot from a movie when you generate
a video of a person. Right now we're in
the Explore page. If you click on my library, it will take you to the place where you see the videos
that you've made. And down here is
also the prompt box where you can
generate new videos. Here I wrote a guy running
down the streets of New York. This was the video that we got. Doesn't look very good. And here I wrote a drone shot
of the Alps in Switzerland. This is what we got, and I must say it looks a
little bit better. This one I can almost see being used in some
sort of video. Let's test out some
other prompts. Cinematic shot of a
cat in the desert. Click this button to
generate the video. Let's try another prompt,
the Sahara Desert. Generate that one.
You can see here that you're able to generate
several videos at a time. Here's a cinematic shot
of a cat in the desert. As we said earlier, it just looks like a photo that
they gave movement to. Here's the prompt,
the Sahara Desert, and of course, that's
a shot of the desert. It doesn't look great,
but it looks okay. To conclude Pika, I
can't see many use cases for it right now because the
quality is not good enough. But if they improve their model, Pika has great potential. It's easy to use and open for anyone. And if you just want to
try to make some AI video, this is a good place to go. That's it. I'll see
you in the next video.
13. 3.5 (Video) Genmo: Here's another AI video
generation platform. This one is called Genmo AI. It's pretty similar to Pika. They use their own model. It doesn't have the best quality, but it's very simple to use. It's free to use,
and this one is also good if you just want to
try to make some AI video. You just have to search for Genmo, create an account or sign in with Google, and you'll
land on this page. Here I wrote a man running
down the streets of New York. This video is not terrible. It's not good enough to
use in an actual video, but you can see that
it's getting there. I can definitely see
how in six months, this could be way
higher quality. Here we have the prompt of
the Alps in Switzerland. Okay quality, but really
doesn't look that good. All you have to do is type
in your prompt in this box, click Submit, a cat
catching a mouse. This is the video we
got from the prompt, a cat catching a mouse. Even if you don't end up using
these platforms right now, it's good to know about them. It's good to know
what's on the market for AI video generation. Because in a few
months or a year, if these models
have become better, you will be in a good position if you are an early adopter.
14. 3.6 (Video) Synthesia: Okay. Next up is a platform
called Synthesia. This platform focuses
only on AI avatars. You can create tutorials
or instructional videos. The quality is good, but
it's not quite at the point where you would
want to use it for your business in my opinion. You can test synthesia and
create free test videos, but to actually use
synthesia, you have to pay. So let's have a quick
look at a couple of videos that I
made with Synthesia. Hello. Today, we will
talk about apples. Apples taste good, a fruit that many humans like, but I'm a robot, so I don't
know what they taste like. And here's another one. Hi, Anna. Jake wanted to say
that you are a rock star. Your contributions
are a big part of the team's success.
Thanks for all you do. So there's a couple of things
to take away from this. First of all, obviously, this video doesn't
look 100% realistic. You can tell that
something is wrong. He feels and sounds a bit
robotic at the current moment. But what are they
doing in this video? They're actually adding in some text animations in
the background. They added in a
little animation of Steve Carell popping
up in the background. They have a pretty nice
template here with the guy and the blue
background and it's moving. I got to say if you watch
it with the sound turned off, it looks pretty good. Same thing here: if we watch this video with the sound off, I could probably be
fooled that this was a real video if I didn't
look closely at it. The point here is that these platforms are only getting better. Soon enough, you're
not going to be able to tell that this is fake. I think Synthesia is one of the platforms that has a lot of potential. Here we can make an entire
instructional video without editing, without
recording anything. ChatGPT could basically write the script for us, and then you throw that script into Synthesia. In a few minutes,
you have a video that is actually useful that would have
taken a long time to make the regular way. I'm very excited
for this platform to up their quality
a little bit. As it becomes just a little bit better, I think we're going to see a lot of businesses starting to use these videos, especially when they can make videos like this with pop-ups, celebrity cameos, and animated text. These videos can actually
be quite engaging. Here's the pricing for Synthesia. In the starter pack, you get
120 video minutes per year, and that's going to
cost you $264 per year. Of course, right now, that's
a little bit expensive. But if these videos just
become a little bit better so that you can actually use them
in your business, then I think this
is pretty cheap. Let's say you make a video
like this for your business. It's a minute long. If
you made it the old way, it would have cost
you maybe $100. Now you can make it for $260 and you can make 50
of these videos. And it's way faster than
making a regular video. The website for Synthesia is synthesia.io. And if you go to
their landing page, you'll find this
button right here, create a free AI video. I would recommend you
click on that and play around with these
different layouts. Test the avatars, see
what you think of it, make your own
little script here. Okay, that's it for this video. I will see you in the next one.
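As a footnote on the pricing math above, it helps to put the numbers side by side. The $100-per-minute "old way" figure is the rough assumption used in the narration, not an industry-standard number:

```python
# Synthesia starter plan vs. a rough traditional production cost,
# using the figures mentioned above.
PLAN_COST_PER_YEAR_USD = 264.0       # starter plan price per year
PLAN_MINUTES_PER_YEAR = 120          # video minutes included per year
TRADITIONAL_COST_PER_MINUTE = 100.0  # rough assumption from the narration

cost_per_minute = PLAN_COST_PER_YEAR_USD / PLAN_MINUTES_PER_YEAR
# about 2.20 USD per generated minute

traditional_cost_same_volume = TRADITIONAL_COST_PER_MINUTE * PLAN_MINUTES_PER_YEAR
# 12,000 USD for the same 120 minutes made the old way
```

Under these assumptions, the generated minute comes out at roughly one fiftieth of the traditional cost, which is the "pretty cheap" argument made above in numeric form.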
15. 3.7 (Video) Invideo: Next up is invideo.io. This platform does not generate video for you; it puts together a video for you using stock photos and stock videos. It generates a script and audio for you, and you can also use your own script. This is the website invideo.io. You can create videos
with text prompts: enter any topic, and InVideo's AI gets to work. It generates a script, creates scenes, adds voiceovers, and tweaks the video at your command. You can sign up for free,
once you've made a video. Let's play this video
and have a look at it. Ever wondered why the Tyrannosaurus rex, fondly known as the T. rex, has held our fascination for over 100 years? This prehistoric predator,
equally massive reputation, continues to captivate us. But what makes the
T rex so special? We'll stick around
as we delve into the fascinating world of
the Tyrannosaurus rex. First up, let's talk about size. The T rex was one of the largest
carnivorous dinosaurs that ever roamed the earth. These giants stood at
a staggering 20 feet tall with a length of up to
40 feet from head to tail. That's roughly the
size of a school bus. Imagine seeing that in
your rear view mirror. Next, we turn to the T. rex's most distinctive feature: its teeth. The T. rex had teeth that were up to 9 inches long, the size of a large banana. Their teeth were not just long, they were also
incredibly strong. In fact, they had the
most powerful bite of any terrestrial animal
that has ever lived. It's believed that
a T. rex could bite down... This video took me
maybe 30 seconds to make, and I can already see this
being posted to YouTube. There are some flaws
in it, of course. For example, here he's comparing the T. rex's size to a school bus, and InVideo has chosen a clip of a school bus, which isn't really correct. But what you can do is
click down here and edit, and it will open up
this editor that will show you all the
different stock clips and stock photos that InVideo has used to create your video. Then you can go in and replace different clips or upload your own media. But on the whole, I think it's doing a pretty good job. Maybe you noticed that there are watermarks on every single clip. Here it says Storyblocks. Storyblocks is a platform for stock video
and stock photos. Here you can barely see it, but it says stock by Getty Images. InVideo has collaborations with these stock photo and stock video platforms, and they have access to over 16 million different stock photos and videos. Of course, when you pay for the platform, the watermarks go away. Once you've created an account on InVideo, this is the dashboard,
the Landing page. Here you can click
on Create AI Video. All I did to create that
video was to write out a fun instructional
video about T. rexes. Click on Generate video. It's going to give you
some options here. It's going to suggest a target audience; right now, it's suggesting dinosaur enthusiasts. You can choose the look and feel of the video: bright, inspiring, professional. You can choose what platform it's for: YouTube, Facebook, LinkedIn. Then you just click on Continue, and it's going to start
generating the video for you. Now, as it says here, this might take a few minutes, but you really don't have to
spend a lot of time on it. I spent maybe 30 seconds
creating that other video. The other thing that you can do here, of course, when you click on Create AI Video: instead of typing out a short prompt like a fun instructional video on T. rexes and letting InVideo create the entire script for you, we could go over to ChatGPT and give it a prompt: write me a three-minute YouTube video script for a fun instructional video on T. rexes. Here's the script that ChatGPT made. We can copy that one, come over to our notes, paste that script
into our notes, and then we can remove
the things that we don't like and
edit the script. Then we can copy
the whole script again, come back to in video. Paste it into the prompt box. Down here, you can
see that you have 3,600 characters to
fill into this box. You're able to make a
video of several minutes. Then you can
generate that video. I must say even
though this platform is also in the early stages, I'm pretty impressed with
the results that you get. ...king was a colossal creature that roamed North America in the late Cretaceous period. Despite its puny arms, it was a powerful beast with a hearty appetite for meat
and a bite force to match. Its top speed remains a mystery, but it certainly wasn't the slowpoke of the dinosaur world. InVideo is a great
example of a platform that's already at a point
where you can actually use it. Maybe not for
professional videos, but certainly for
social media content. This is a good
example of how you could automate a
YouTube channel. It will be very exciting to see this platform become even
better and more useful. But if I tried to manually put together one
of these videos, it would take me at least one or two hours. Here I could do it
in a few minutes. That's it. I'll see
you in the next video.
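A useful rule of thumb for the 3,600-character prompt box mentioned in this section: converting a character limit into approximate minutes of voiceover. The characters-per-word and words-per-minute figures below are generic assumptions, not InVideo specifications:

```python
# How much video a full 3,600-character script roughly works out to.
CHAR_LIMIT = 3_600        # InVideo prompt box limit mentioned above
CHARS_PER_WORD = 6        # average English word plus trailing space (assumption)
WORDS_PER_MINUTE = 150    # typical voiceover speaking pace (assumption)

max_words = CHAR_LIMIT // CHARS_PER_WORD        # 600 words
max_minutes = max_words / WORDS_PER_MINUTE      # 4.0 minutes of video
```

So a full prompt box lines up with the "video of several minutes" mentioned above, and the same arithmetic works in reverse when asking ChatGPT for a script of a target length.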
16. 4.1 AI Photo Introduction: All right. Let's talk about
how to generate AI photos. First of all, I just
want to say that there's a platform for all skill levels. No matter how much time you
want to put into AI photos, there will be a suitable
platform and solution for you. Before we get into
it, let's go to Midjourney's page and have a look
real, they never existed. They are all just
generated by AI. Considering the
very high quality, I think this is
absolutely amazing. Of course, this has
already transformed the entire business
of photography. As a former photographer, this is my favorite use case of AI. You can really see that
it's starting to look realistic and the
quality is very good. Here are the platforms
that we're going to be discussing in this whole
section on AI photos: Midjourney, DALL·E, Stable Diffusion, Adobe Firefly, and Photoshop Generative Fill. Perhaps you already
know about all of them. Perhaps you've never
heard of any of them. You don't have to remember all
these names right now. If this information is a
little bit overwhelming, just be assured that
we're going to cover all the info that you
need to understand this. Let's quickly go
through the platforms. Midjourney is the number one platform for generating AI photos; the company is also called Midjourney. DALL·E is a very simple AI photo model created by OpenAI. ChatGPT is also created by OpenAI, so you're conveniently able to use DALL·E within the ChatGPT chat. Stable Diffusion is number
two when it comes to quality. Stable Diffusion is for more advanced, heavy users, and the company that created Stable Diffusion is called Stability AI. Then we have Adobe Firefly,
model created by Adobe. This one is still up here, but we're not going to
be talking about it that much because
it's not that good. Then we have photoshop
generative fill. Also created by Adobe. This one does not
generate photos, but it's really, really good
at editing your photos. What platform should you use and what should
you use it for? If you want quality, you should use Midjourney. Midjourney produces the best quality; that's it. If you want to edit photos, or edit something within a photo, then you should use Photoshop Generative Fill. If you just want to try
to generate a photo, but you don't care
about the quality, you just want it
to be easy to use, then you should use
DALL·E within ChatGPT. Or if you're building a platform and you want easy API access, DALL·E is also a good option. But we're going to
talk about that later. If you want a lot
of customization, maybe you want to build an
app that generates AI photos. You want to build
a specific website that generates
YouTube thumbnails, or you want to make AI photos that look like yourself
or your friend, or you want something
very specific, and you also want high quality. Then Stable Diffusion might be the right option for you, but it's going to take
a little more effort. It's a little bit harder
to learn how to use it. If you're a content
creator or you just want to create
high-quality photos, or you create any sort of content, then Midjourney and Photoshop Generative Fill are going to be the best options for you. Midjourney combined with Photoshop Generative Fill is a very powerful combination. Both of them are also
pretty easy to use. I would say that this combo at this moment gives
the most value. If you're a beginner,
you don't really care about generating photos, you just want to try it, and you don't care about the quality, then DALL·E is for you. The only exception,
as I said before, is if you want to build an
automation or a platform or a website or an app that generates a lot of
photos quickly, then you can use OpenAI's API, and then DALL·E would be a good option. If you're a heavy user: you want customization, you want quality.
an AI photo platform. Maybe you want to build an app, maybe you just want a lot of control over the photo
that you generate. Then Stable Diffusion is a good option. Midjourney clearly wins the entire competition for generating high-quality photos. Quality is going to be the
main factor most of the time. Midjourney takes a little bit of learning, but it is easy to use once you get to know it. For heavy users, however, Stable Diffusion combined with a platform called AUTOMATIC1111 wins. Basically, if you combine Stable Diffusion with this pretty technical platform, AUTOMATIC1111, you can really customize your AI photos. Stable Diffusion within AUTOMATIC1111 can create high-quality photos, has a lot of customization options, and it's open source. What about Midjourney versus Stable Diffusion? Midjourney has slightly better quality; it's slightly more realistic. Stable Diffusion still looks pretty realistic as well, and it certainly has more customization than Midjourney. Before we get into it,
different models. Because now we are
in the visual world, explaining won't do much. Seeing the quality
with your own eyes, that's going to make
you understand. Right now we are on
Midjourney's website. We're on the Explore page, and as you can see, the photos just look astonishing. What I really love about Midjourney's site is that you can come up here and search for prompts. If I search for Einstein, I'm going to see a bunch of photos that other people have generated. The best thing is that this is free, and you can download these photos and use them yourself. The person who created, for example, this image doesn't hold any rights to it. Let's search for man on horse. Instantly we get a
bunch of results. The good thing about Midjourney is that a lot of the time, you don't even have to generate your own image. You can just come here and search for it, or take another great image that somebody else created. If we search for portraits, we can see here that the quality is extremely good in Midjourney. The faces that Midjourney generates look very realistic. Let's have a look at Stable Diffusion. All right. These photos are generated by Stable Diffusion. As you can see here, they look pretty realistic,
pretty good as well. Here's one that's
called Epic Realism. Another photo that looks pretty realistic. Usually you can tell that there's something that's just slightly off about the photo, or they look similar; they have a similar look in their eyes. This one, for example, just looks fake. Anyway, Stable Diffusion, as you can see, also has very high quality. Let's head over to OpenAI, which has created DALL-E 3. Let's have a look at some DALL-E 3 photos. These ones are mainly concepts or art. They're not good at all at generating realistic photos. But if you need something like this, then DALL-E could generate that pretty quickly for you. A lot of DALL-E photos have this weird cartoony vibe about them. Lastly, let's have a
look at Adobe Firefly. Similarly to DALL-E, a lot of these ones are going to look a little bit cartoony. Here's a photo of a woman, and it looks pretty good. It looks almost a little bit realistic, but there's something weird about it. Here's Adobe Firefly creating a leopard. It certainly looks really realistic, but still there's something a little bit off about it. Same thing here: a photo of a burger. It looks really good, but I could almost tell that it's made by AI. I don't know how I can tell, but I can just tell. Then a lot of the photos from Adobe Firefly are just going to look like this. It's more like concept art. They'll have some common flaws like this; the eyes will be looking in different directions. Just to quickly compare them, here's me using
DALL-E to generate a photo of a fish eating a carrot. Looks okay. Could be used as a cover or something like that. Here's the same thing in Midjourney; I asked Midjourney to create a photo of a fish eating a carrot. As you can see, the main difference is just that it looks more realistic. You're definitely going to have to make more photos to get the one that you want, but there's a clear distinction in quality. Let's go through the pricing
for the different platforms. Midjourney has a basic plan of $10 per month. Here you can try out creating some photos. This is going to be enough if you just want to generate a few photos. If you want to start using it more, you can pay $30 a month. They also have more expensive plans than this. But unless you're a business looking to generate a lot of photos, the $10 basic plan should be enough, or perhaps you want to upgrade to the $30 plan. DALL-E is included within
the ChatGPT subscription, which is $20 per month. Right now there's not really a separate user interface for DALL-E. You just use it within the ChatGPT chat. If you're paying for ChatGPT, then you can use DALL-E for free. If you're not paying for ChatGPT, I would absolutely not recommend that you buy it just for DALL-E, because DALL-E is not good. Stable Diffusion is free, but if you use Stable Diffusion's API to integrate it into your platform or something like that, then you're going to pay about $10 per 5,000 images. Regular people don't really use this a lot. They make their money mainly off of businesses integrating their model into their own websites or their own apps. If you've ever seen an app that generates photos of people's faces, it's using Stable Diffusion. They are not using Midjourney, they are not using DALL-E, they're not using Adobe Firefly. All the apps and websites that are not from any of these companies are using Stable Diffusion's API to power their AI photos. Adobe Firefly is free to use. I think you can generate
up to 25 photos for free. Then you can pay $5 a month
if you want some more photos. Or if you have Adobe's
All Apps plan, which a lot of content
creators would have, then you get a lot of photos
included in that plan. Photoshop generative fill, the cheapest option for
getting it is $20 a month, and that's if you buy the
photography plan through Adobe.
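As a side note on the API route mentioned above: generating DALL-E images programmatically through OpenAI's API is just a small JSON payload sent to their images endpoint. Here's a minimal sketch; the endpoint URL and field names follow OpenAI's documented schema at the time of writing, so double-check the current API reference before building on it.

```python
import json

# Endpoint documented by OpenAI for image generation (verify against
# the current API reference; this may change over time).
API_URL = "https://api.openai.com/v1/images/generations"

def build_image_request(prompt: str, size: str = "1024x1024") -> dict:
    """Build the JSON body for a DALL-E 3 image generation request."""
    return {
        "model": "dall-e-3",  # the model this chapter calls "DALL-E"
        "prompt": prompt,      # what you want a photo of
        "n": 1,                # number of images per request
        "size": size,          # e.g. "1024x1024" or "1792x1024"
    }

payload = build_image_request("a cat in a bathtub")
print(json.dumps(payload, indent=2))
# To actually send it, POST this JSON to API_URL with an
# "Authorization: Bearer <your API key>" header.
```

This is the kind of request an app or website would send behind the scenes each time a user asks for a photo.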
17. 4.2 (Photo) Platforms to use: Okay: Midjourney, DALL-E, Stable Diffusion, Adobe Firefly, Photoshop Generative Fill. We are going to have a look at where you can use and interact with these tools. Of course, if you already know how to use these tools, then you can skip this video. Or maybe you just want to jump directly to the more advanced stuff, where we talk about the features of Midjourney and photo creation. Anyway, let's start off with
where to use Midjourney. We're going to start with Midjourney because that's going to be the major one that you're going to want to learn. So the place where you use Midjourney is Discord. Unless you're already familiar with Discord, which a lot of people are, it's not super user friendly. Discord is a social platform for chatting with other people, and Midjourney has a collaboration with Discord. So this is what Midjourney looks like in Discord. It's important to make the distinction that this is not Midjourney; this is Discord. In order to open Midjourney on my computer, I'm going to search for Discord and open Discord. They are completely separate. But within Discord, here you can see the Midjourney bot. This is a chat within Discord that we use in order to access Midjourney. There's an exception to this. If you've already generated more than 1,000 images on your Midjourney account, then they currently grant you access to use their image generator on their website. This is currently like a beta feature. Right now, as I'm making this video, you're not going to have access unless you have generated 1,000 images already. But if you're watching this course one month, two months, or a few months after I made it, then I'm hoping they will have integrated their image generation feature on their website, because that one is a lot more user friendly. This is what it looks like using Midjourney on their website. As you can see, right
now, we're just on midjourney.com/explore. Up here on the left, if I go over to Create, I'm going to see the images that I have recently created. They look cool, but I promise you it requires no skill to create them. If you have access to their website feature, right up here where it says Imagine, this is where you enter your prompt, so here I can just type out a cat in a bathtub. Send that off, and that's going to start generating within this Create tab. Here we have it: a cat in a bathtub, 33% complete. There we go. Finished. Cat in a bathtub. Easy as that. Using Midjourney within Discord looks like this. You go to the Midjourney
chat within Discord. Go down here to type a message. Then you're going to type /imagine, and this little box is going to pop up, and you want to choose prompt. To choose it, you can click on Enter, and then you can write your prompt. A cat in a bathtub; send that off to Midjourney. All Midjourney is doing is using Discord to communicate with you. When you send off a prompt like this to Midjourney, all you're doing is communicating with their AI model. In Discord, you can click on this image. You get four images. If you want to save one of them, you're going to have to click on one of these buttons in order to upscale it to make it bigger. Then you can right click on it and save the image. Then that's going to go to your computer. Pretty simple. What
about Stable Diffusion? Where do you use Stable Diffusion? Well, what you have to learn here is that it's open source, so it's different. There is no one place to use Stable Diffusion. Midjourney is the company, and they've created their own AI model, and they've been using Discord for you to interact with their model, Midjourney. Stable Diffusion is the model, and Stability AI, who made Stable Diffusion, have not made any collaboration with any apps like Discord. Instead, they're making it open source, and open source just means that their model is available for anyone to build on. Anybody can build their own company using the Stable Diffusion model, which means that we can use the Stable Diffusion model on different online sites. We can use it in different apps. Then there's a platform called AUTOMATIC1111, which is like the main platform and user interface for advanced Stable Diffusion usage. If you really want to customize your images, AUTOMATIC1111 is the place you want to go. When you find Stable Diffusion within different online sites or apps, usually they don't have a lot of customization, because the companies that are building these sites are just taking the model and making it very user friendly. But if it's user friendly, it's not super advanced. For example, if we Google
stable diffusion use free, scroll down a little bit here, and pick one of these websites. Here's a website where they have built an integration for the Stable Diffusion model. Click on Try now. What to draw? A cat in a bathtub; create that one. Just like that, we have a photo of a cat in a bathtub. Of course, this is a lot more confusing than just going over to the Midjourney site. That's where I want to show you AUTOMATIC1111. This is the interface for AUTOMATIC1111. It's a program that you can install on your computer, or you can run it with cloud hosting. Now, this is just a photo of it, but you can still see that there are a lot of different settings here. Here's where you enter your prompt. You can use a negative prompt. There are a lot of settings like width, height, scales, and sampling steps. You can do image-to-image. There are a lot of different settings. I hope that clears things up a little bit. I'm not going to be getting into how to install AUTOMATIC1111, because in my opinion, that's a bit too technical for this course, and you're probably better off using Midjourney anyway. But it's still good for you to know that Stable Diffusion plays a significant role in the market of AI photo generation. If you want to become a more advanced user, you should definitely look into Stable Diffusion and AUTOMATIC1111. Where can you use DALL-E? This one is obviously
super simple. You can use it in the ChatGPT chat. All you do is tell ChatGPT to generate an image of something. So we're going to do that right now: generate a photo of a cat in a bathtub. It starts creating the image right away. All you have to do is prompt ChatGPT so it understands that you want it to generate a photo. Here we go. A cat in a bathtub. It obviously doesn't look as good as Midjourney's, but it's still pretty good. What about Adobe Firefly? You can use Adobe Firefly on their website,
firefly.adobe.com. The entire user interface is on their website. This is the website, firefly.adobe.com. If we scroll down here, we can see some different images that have been made using this model. Here we can type in "generate a cat in a bathtub" and click on Generate. That's going to bring us into the user interface for Adobe Firefly, and these images look pretty good. Now, I don't really see the point in using Firefly, because if you're going to use an external party to generate your images, and you want to take the extra step above DALL-E but you don't want to go all the way to Stable Diffusion, then you might as well use Midjourney instead of Firefly, because Midjourney has higher quality. Where do you use Photoshop
Generative Fill? In Photoshop. Generative Fill is just a tool within Photoshop. It's not an app, and it's good to have some Photoshop knowledge if you want to use it, of course. So if we open up this image of the cat in the
bathtub in Photoshop, basically the way to use Generative Fill is we can
highlight an area like this. Then this little box
here is going to pop up where it says
generative fill. If we click on it, we can
type in remove the plants. It's going to start
generating and regenerating and
editing the image. It's going to start creating new pixels for this
selected area. Bam, just like that,
the plants are gone. This just takes Photoshop to a whole new level. This is what people have been doing in Photoshop for years, but look at how easy it is now. If we do it over
here, remove and make the background
empty, send that off. You can see that it just removes
that whole water faucet. The best part about it
is that we had actually selected a part of the
bathtub here as well. But the bathtub remained intact. That's because the
Generative Fill model doesn't just remove things; it actually creates new pixels. We didn't actually
remove anything. We actually created something. Down here, we have the
layers in Photoshop. If we remove the original layer, we can see that we did in
fact not remove anything. We created two new images that are laid on
top of the old one. I think this feature
is just fantastic. Let's give this cat a hat. Bam, just like that,
we gave the cat a hat. Over here to the right,
you can choose because you always get three
different variations. I'm happy with it like this. We removed some
unnecessary items in the background, and we gave the cat a little party hat. Beautiful. That's it for this video, and that's it for the cats in the bathtubs. See you in the next video.
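One more note on AUTOMATIC1111, since it came up in this video: besides its web interface, it can also expose a local REST API when you launch it with the --api flag, which is how apps and scripts drive Stable Diffusion. Here's a small sketch; the /sdapi/v1/txt2img endpoint and field names match the commonly documented API, but treat them as assumptions and check the /docs page of your own install.

```python
import json
from urllib import request

# Default local address for an AUTOMATIC1111 instance started with --api.
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_txt2img_payload(prompt, negative_prompt="", width=512, height=512, steps=20):
    """Assemble the JSON body for a text-to-image job."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,  # things you do NOT want in the image
        "width": width,
        "height": height,
        "steps": steps,                      # sampling steps
    }

payload = build_txt2img_payload("a cat in a bathtub", negative_prompt="plants")
print(json.dumps(payload))

# Uncomment to send the job to a running local instance; the response
# contains the generated images as base64-encoded strings.
# req = request.Request(API_URL, data=json.dumps(payload).encode(),
#                       headers={"Content-Type": "application/json"})
# result = json.load(request.urlopen(req))
```

Notice the negative_prompt field: this is the same idea as Midjourney's negative prompting, just exposed as a separate setting.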
18. 4.3 (Photo) Midjourney Basics: Let's get into and go through Midjourney: how to use it, the basics, the settings that you can use, and we're also going to go through more advanced settings and strategies for creating the photo you want. Similarly to ChatGPT, Midjourney can create almost anything if you have the right prompt. If you're not getting the result you want, you might not have the right prompt. These might be the most important videos if you want to create high quality images with AI. So we're going to have several videos on Midjourney. If you're not that interested in Midjourney, then you can just skip these videos. Really quickly, how do you start using Midjourney in Discord? Basically, you log into Discord, subscribe to a Midjourney plan, and join the Midjourney server on Discord. Then you go to one of Midjourney's channels within Discord. Then you use the imagine command within Discord, within the Midjourney chat. Like we can see right here, they have the /imagine prompt, and then they're prompting the thing that they want a photo of. I'm not going to be guiding you through the process of downloading Discord, but if you want a guide for that, you can just come to this page. It's on midjourney.com, and it's the Quick Start article. An easy way to find it is to just go to Google and search "how to use Midjourney in Discord." The first article is going to be Midjourney's website, the Midjourney Quick Start guide. You will have to go to discord.com. This is Discord's website. If you just search for Discord on Google, it's going to be the first link. Download it, and then you can just click on Download Discord,
create an account. Even though it's easier for me to go to Midjourney's website and create images right there on their website, in these videos I'm going to be showing you how to use Midjourney within Discord, because one, most of you are not going to have access at this current moment to generating pictures on their website, and two, you currently have a little bit more customization when you're using Midjourney in Discord. But just to clarify: on the website, you don't have to type out the /imagine prompt. Here you can just directly type out what you want. A dog running on the beach. Click Enter and it's sent off. You can see here that four different images are starting to generate. Here on the right, we can see the prompt that we used, a dog running on the beach, and when they are finished, we can click on them and look at them one by one. I also want to point out that on the website, they are already upscaled. But as you can see here within Discord, when you generate images, you get all four images within one image. And then you have to click on one of these buttons that says U1, U2, U3, U4, and that means upscale image one, upscale image two, and so on. If we want to generate
an image in Discord, we're going to type out /imagine and click Enter, then this little box here is going to pop up; type out your prompt, a dog running on the beach, and send that off. Now we've sent off our prompt into the chat within Discord, and we can see that it's starting to generate: 17% finished, and it's finished. Now we can click on it and look at the images. Let's say we like the second one here. Now we're just going to click on U2, upscale image number two. It's going to start a new process. It's going to be a little bit faster than generating four images. After a few seconds, that one is finished, and we can click on it. You can also double click on it and then save the image; save it to your downloads folder, for example. Here's the image. If we check out the statistics on this image, you can see that it's 1024 pixels by 1024 pixels. So it's not super high resolution, but it's still a pretty high resolution. This quality is going to be good enough for most use cases. We can access our settings page within Discord by typing out /settings in the chat and then pressing Enter. That's going to open
up these settings. The first setting that I want you to be aware of is the speed mode. We have fast mode, we have relaxed mode, and we have turbo mode. Now, you don't have to care that much about turbo mode, because we're not really going to use that one. It's not super necessary. Let's compare the fast and relaxed modes and how long it takes to generate an image with both of them. Currently, I'm on the fast mode. We type out /imagine dog on the beach, and then I'm going to change the setup a little bit here. I'm going to open up a timer on the left. Start the timer, head over to Midjourney, and send off this prompt. Let's see how long it takes for this image to generate on the fast mode. The image just finished, and we're at about 1 minute. Now we know it takes about a minute to generate one photo in fast mode. I'm not going to show you the slow one because it's pretty slow, but Midjourney also has a separate article on the different modes: fast, relax, and turbo. Here you can see that they're telling you that wait times for relax mode are dynamic, meaning they vary, they differ, but generally they are 0-10 minutes per image per job. I've created over 6,000 images, so I can definitely confirm that it's sometimes slow; sometimes it takes about 10 minutes for one of these to generate on the relax mode. So if you start using Midjourney, what you'll notice is that your workflow will be something like this. You'll send off an image to be generated, and then you'll switch to another tab. Maybe you'll go to ChatGPT, ask ChatGPT something, chat a little bit with ChatGPT, and then after a minute, you'll come back to Midjourney and get your image. In the next video, we're going to have a look at all the settings, or all the parameters, for Midjourney. I'll see you in the next video.
19. 4.4 (Photo) Midjourney Parameters: In this video, we're going to be going through some of the parameters and settings for Midjourney. First off, I just want to show you some random photos that I've made in the last few months with Midjourney, just to show the quality they can really reach. Obviously, some of these look a little bit AI. When I say they look AI, I mean there's something about them that looks a little bit off. You can tell that maybe they're not real. But still, I think the quality is just great, and you can really do so much with this stuff. Getting a photo of a concept that you think about in a few seconds is a whole game changer for this industry. I feel like this photo is something you would see on a bank's website. They probably had to hire some models before. Now you can just create it in 30 seconds, 1 minute. What are parameters in Midjourney? They're basically additional instructions, like technical instructions. These technical instructions are going to affect how your image looks. In the last video, we made a spaceman in Picasso style, and at first it comes out as a square. But then I made it in a different aspect ratio, 16 by 9. Now all of a sudden, it's a wider image. That's the most obvious setting, but there are also a lot of settings that have to do with the style. There's a chaos setting that changes how varied the results will be. There's a stylize setting, influencing how strongly Midjourney's default aesthetic style is applied. So I'm going to put the link to this parameter page, the settings page, in the text section of the video that you're currently watching. The way to use these settings when you're using Midjourney is that first you type out /imagine and press Enter to enter into the prompt box. Then you type out your prompt; I'm going to type out spaceman. Then you type out two dashes and the name of the setting, and then a number to adjust that setting. So let's start off with aspect ratio. Simply described by ChatGPT, aspect ratio is the proportional
relationship between the width and the height of an image or screen. So for everybody who doesn't know aspect ratio, here are some examples: this is a one by one aspect ratio, right? It's a square. This is the default for Midjourney. So one by one could be 500 pixels high and 500 pixels wide. This one on the right, for example, is taller than it is wide. That's because the second number is bigger than the first number. Here are the two most common aspect ratios. These are the aspect ratios for standard YouTube videos, standard movies, and standard TikTok formats. If you're making a YouTube thumbnail, you want to make it 16 by 9. Then of course, if we just flip it around, we have 9 by 16; that's going to be the opposite: social media stories or videos or TikTok. To demonstrate this further, I'm going to type out a prompt saying spaceman, and then two dashes and then ar, and then 5:20. Then I'm going to flip it around and make a spaceman with a 20 by 5 aspect ratio: --ar 20:5. Once again, here is the format for the aspect ratio setting: two dashes, then ar, then a little space, then the first number, then a colon, and then the second number. Let's have a look at these. It actually turned my 5:20 into 1:4, but that is, of course, the same ratio. That's the same aspect ratio. Here's aspect ratio 1:4; you can see it's very, very vertical. Here's aspect ratio 4:1, very, very horizontal. Let's have a look at the next
setting, which is chaos. This one I actually like to use quite a lot. With the chaos setting, the higher the value, the more unusual and random the results will be. If you have a lower value, you'll get exactly what you asked for. The value goes from 1 to 100, and you can type out either --c or --chaos, and then the number. Here's an example: we wrote spaceman --c 100, chaos 100, the highest chaos value. If we have a look at these four images, you can see that they're completely random. Sure, they all have something to do with a spaceman, but they're not quite what you would expect. They're all different variations of the word spaceman in different scenarios. This is a great tool to use if you want to be creative. Maybe you want to come up with a concept. You want to base it on a spaceman, but you don't really know what you're after. This can be a great way to find visual ideas. On the other hand, here's the same prompt, spaceman, with --c 1, chaos value one, the lowest value that you can have. And you can see that it generates photos that all look the same. They all have a similar style. They're all of a spaceman. Here, I want to show
you a practical example of a good use of
the chaos value. So here we have a prompt
that is pretty specific. There's a lot of
different instructions. It generates a
photo of a woman in a specific dress in
a specific setting. The setting in the
background is pretty specifically explained
within this prompt. The style of the photo is
also specifically instructed. If you have a lot of specific
instructions like this, then you're better off
using a low chaos value like one to get exactly
what you asked for. Because here you can see
we have the same prompt, but the chaos value is --c 100, the highest value. If we have a look at the
results, they vary a lot. Some of them still look good. They might still be
interesting, but it's not really
what I asked for. It doesn't have that
specific style. It's not in the same
specific environment. Here's the same prompt
again with --c 50. Right now we're at half of the chaos parameter, because it goes from 1 to 100. These results, they're
starting to look more like the style that was
described in the prompt. But still they're
not quite there. They vary a little
bit in the style and the look and the type
of photo that we get. But here we have --c 1. We're getting exactly what we asked for. I just want to show you really quickly within Discord that you don't have to type out imagine and then press Enter
every single time. If you're making a lot of images, it's more time efficient to just write a forward slash, and then this menu is going to pop up. The first option will be the option that you used last. A lot of the time, that's going to be imagine. A lot of the time in Discord, it's enough to just type a forward slash and then press Enter. That's going to open up the prompt box for you. If imagine is not popping up immediately, you can just type out "i"; then it's definitely going to be the first option, you can just press Enter, and then you're taken into the prompt box. What you can also do is copy your whole previous prompt. Say you have a long prompt, like the last one that we just talked about with a lot of descriptions, but you want to test out the different chaos values. Instead of sending this off and then going here with your mouse, highlighting it, and copying it, what you can do when it's written out is this: your text editor is going to be at the bottom here. Either you can just click it three times, and it's going to highlight the whole prompt along with /imagine, or when your text editor is at the bottom, you can hold Command, Shift, and then press the up arrow. That's going to highlight the entire section, including /imagine. Then if you copy that, you can send that prompt off and then paste in what you copied, then you can just come in here, edit the chaos value to, let's say, one, and then send that off again. If you're generating
a lot of images, this is going to save
you a lot of time. The next setting that we're going to look at is the no setting. This is basically negative prompting. All you have to do here is type out --no and then the thing that you don't want. Here's an example of that. Here's a prompt saying beach in the Philippines. If we just type out beach in the Philippines, we get a beach in the Philippines, and what do beaches in the Philippines have? They have palm trees. As you can see here, all four of these images actually include some palm trees. Then down here, we have the same prompt, beach in the Philippines --no palm trees. We're doing a negative prompt here within this prompt: negative prompt, no palm trees. If you have a look at these images, you can see that it's still a beach in the Philippines, but there are no palm trees. The next setting is stylize. As you can see on Midjourney's website here, you can type out --stylize or --s, and then a number from 1 to 1,000. This parameter influences how strongly Midjourney's default aesthetic style is applied to jobs. The other thing to note
here is that there's a default setting for stylize. If you go to your settings via /settings, your setting is probably going to be set to Stylize med, or stylize medium. There are four stylize default settings here: low, medium, high, and very high. Now, if I remember correctly, they go something like this: 50, 100, 250, and 1,000 for low, medium, high, and very high. Here's an example. Here's the prompt:
Here's the prompt. Kodak disposable camera photo
of a man in the office. This is the photo that
I think was the best out of all these results that we got from
different styles. Here I didn't even include
any style parameter. What's going to happen
when I'm not including a style parameter is
that it's going to put the stylize at around 100
or something like that because my stylized default
setting is set to medium. I would recommend
keeping it at medium. Here we have another one where I put the stylized setting to one, S one, and these photos
still look good, but you can see that it's
taken all the style out of it. This photo, which is
the default medium, it has a few more colors. There's at least a
little bit of style. But here it's almost
like there's no style. It's just a gray photo. Now here's the next example, which is style 500. Here we can really
see that there's more things that are starting
to happen in these images. There's a lot more colors, there's a lot more items. There's a lot more
things going on, more little things to
look at in these photos, which could be really good
if that's what you want. Personally, I still
think this one, without the stylize setting, represents my prompt the best. Here's the last example, with stylize 1,000. Here there's really a
lot of stuff going on. You can see that the colors
are becoming more vibrant. It looks less realistic. It's starting to look a
little more like concept art. This is really good
to know about. If you're going for concept art, if you want a lot
of things going on, if it doesn't have to
look super realistic, then you can bump up this --stylize setting to 500 or 1,000. If you want it to be very bland, if you don't want any style at all, then put it at --s 1. But for most of your photos, I would probably recommend keeping it at the default and not using this setting. That's going to make sure that the photo looks the most like the prompt that you give Midjourney. Kodak disposable camera photo
of a man in the office. I think these photos perfectly
represent that prompt. Right. Let's finish
this video and go back to the page with
all the parameters. As you can see here,
as I'm scrolling down, there's a lot of
different parameters. If you're interested
in Md journey, I encourage you to go read
every single one of them. Test them, see what they do. You're going to learn by trying, by testing, by doing it yourself. After creating about 7,000 images in Midjourney myself, these are the settings
that I care about. The ones that I just talked
about, aspect ratio, chaos setting, negative
prompting and stylize. These are the
settings that I would start testing out first. Once again, I'm going to put the link for this article with all the parameters in the text section of the video that you're
currently watching. If you're interested in Midjourney, I encourage you to go have a look at all these parameters. That's it for this video.
I'll see you in the next one.
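As a quick recap of this lesson, here is how the four parameters I care about might look appended to a prompt. This is a hedged sketch: the values are just examples, and the exact syntax should be double-checked against Midjourney's own parameter list linked above.

```
/imagine a 30-year-old man in an office --ar 2:3      (aspect ratio)
/imagine a 30-year-old man in an office --chaos 50    (more varied grids)
/imagine a 30-year-old man in an office --no glasses  (negative prompting)
/imagine a 30-year-old man in an office --s 500       (stylize; default is 100)
```

Only add the parameters you actually need; a prompt without any of them just uses the defaults.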
20. 4.5 (Photo) Midjourney Adjusting Style: In this video,
we're going to talk about how to make
realistic people and how to prompt to adjust
the style of a photo. Over here on the left, we have some notes of
things that you can reference to change
the style of a photo. This will also affect
how realistic the person in the photo looks. Here we are in discord. I'm going to be
using the example of prompting for a 30-year-old man. And then down here,
I'm going to add these different sentences to that prompt and show
you the difference. Before we get started,
here's a list of ideas for things that you can reference in your prompt in Midjourney that will change
the style of your photo. The first thing and my favorite is to reference a photographer, like a literal photographer. Off the top of my head, I don't know any professional
photographers, but you can easily look
some of them up and choose one whose style you like and then just
reference their name, which is what we're doing down here: Patrick Demarchelier. That's a photographer that
I found. I like his style. We are referencing him
in these photos here. You can reference
a movie director. You can reference
a specific camera, a type of lighting, a type of photo, and of course, the style itself. To start off, we have
a prompt here where we have simply just
said a 30-year-old man. If we have a look
at these photos, these people, they look good. They look pretty realistic, but it's very boring. This one down here does not look realistic, actually;
I think it looks fake. You can tell that their skin
texture is a little bit off. They don't have
that deep quality, that deep skin texture
that we're looking for, and they don't look
hyper realistic. Next up, down here, we've
put in the same prompt, a 30-year-old man in the
style of Patrick Demacarer. I'm probably saying
his last name wrong. Real quick, if we copy this photographer's name and go to Google and search for it, you can see that all
of his photos are in this black and white style. When we ask Mid
Journey for a photo of a 30-year-old man in his style, that's precisely what we get. I mean, compare these
photos with these photos. It's a massive difference. Next up, we are referencing
a movie director, Guy Ritchie, a famous movie
director from England. If we search for Guy
Ritchie movie shots, this is the kind of picture
that you would expect. This is the kind of picture
that Mid Journey will give us when we
reference Guy Ritchie. Here are the photos that we got. Now, these guys are
not holding shotguns. It's not 100% comparable, but the style is
definitely different. They look a little bit shady, a little mysterious,
just like his movies. This one was a
30-year-old man in the style of a Guy
Ritchie movie. Next up, we are
referencing a camera. The camera that we are
referencing is a Canon R5, and then we're also
adding in aperture 1.2, which is a camera setting. Here's the result that's not quite getting the
aperture right. But it does look like a photo
taken on a Canon R5. The Canon R5 is a pretty standard high quality digital camera. I think Midjourney is doing a good job of referencing that camera here. This prompt was a 30-year-old man shot on Canon R5, aperture 1.2. You might notice that it
says style raw at the end. That's because the
default setting is set to style raw,
which it should be. Next up, we are referencing
type of lighting. We are referencing
Golden Hour lighting. The prompt goes a 30-year-old
man Golden Hour lighting. Of course, we get a man with Golden hour
lighting. Perfect. Now, the lighting is, of course, a really big part of any photo. I just want to show you a couple more examples of lighting here. Here we have a 30-year-old
man soft Studio lighting, and it's really nailing
that soft studio lighting. This will create a
great portrait photo, just prompting soft
studio lighting. Same thing down here, a 30-year-old man
soft natural light. I think these photos
they look really good. The only difference
between these photos and these photos up here, which I don't like
at all, is that down here we asked for
soft studio lighting. Next up, we're going to
reference the type of photo. For that, we have two examples, we have Kodak disposable
camera photo, and we just have analog photo. Here's a 30-year-old man Kodak
disposable camera photo. I think this prompt, the Kodak disposable camera photo, always makes the
person look really realistic because it takes
away that sharp AI look. It gives you that
soft film look. If you want to make
realistic people, I would really recommend using this prompt down here, Kodak disposable camera photo of, and then just describe whatever
person you want a photo of. Here we have analog photo: a 30-year-old man, analog photo. I think these also
look pretty realistic, like the people, the humans, they look realistic
when you prompt it to be in the style
of a film photo. Because as I said, the
film look isn't so sharp, it doesn't have
that AI look to it. Just referencing the
analog photo style changes also the look of the human or the
person in that photo. Last but not least, you can
reference different styles. Here we're going to put Retro music video album cover style. 30-year-old man Retro Music
Video album cover style. Wow, these look so hip, so cool. They're not super realistic, but they are cool. I don't think anyone
can deny that. The main takeaway here is that every single word in your
prompt affects everything. Every single word in your prompt affects the look of the person. This is what Midjourney thinks a 30-year-old
man looks like, a guy with a beard
and a T shirt. But when it's a 30-year-old
man in a Guy Ritchie movie, he's wearing a suit or a tie, glasses. He looks a bit more cool, a bit crueler. When it's a Kodak disposable camera photo of a
30-year-old man, he looks more chill, more flowy. Maybe he's a tourist
like this guy. He looks a little
bit more down to Earth because I guess
that's what Midjourney thinks a guy that would be in a Kodak disposable camera
photo would look like. The setting, environment, clothing, and look of the person are all affected by each word in your prompt. That's it for this video, I
will see you in the next one.
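To recap this lesson, here are the six kinds of style references combined into prompts, all drawn from the examples in the video:

```
a 30-year-old man in the style of Patrick Demarchelier   (a photographer)
a 30-year-old man in the style of a Guy Ritchie movie    (a movie director)
a 30-year-old man shot on Canon R5, aperture 1.2         (a specific camera)
a 30-year-old man, soft studio lighting                  (a type of lighting)
Kodak disposable camera photo of a 30-year-old man       (a type of photo)
a 30-year-old man, retro music video album cover style   (a style)
```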
21. 4.6 (Photo) Midjourney Upscale Options: Okay, let's quickly
cover these settings, the options that you get once you have generated an image. What you'll see here is
U one, two, three, four, and then this reroll button, and then V one, V
two, V three, V four. The ones that
you're going to use all the time are these
upscale buttons, U one, U two, U three, U four. That just means upscale one, two, three, four. So U one, upscale one, that's going to
be this image and then the second one is
the one on the right. This bottom left one is third image, and
this is your fourth. What happens when
we press upscale? Well, as we know, for starters, we have all four images
within one image. If we click Open
browser on this image, we can see that it's
just one photo. It's four photos in one photo. But once we have clicked
on this upscale button, at the bottom of the chat,
we'll get the full image. What about these V buttons? If we click on it, you'll see
that it says remix prompt. That's why I'm calling
it the remix button, even though it says V. Here you can change the prompt. But you're not rerolling the entire prompt; that you can do by clicking
on this button. If you press the V
button or the remix, you're remixing this
specific photo. Let's do that without
changing the prompt at all, and then let's do
it one more time, but we'll type out in nature. Add that to the prompt. Now it's a 30-year-old man soft studio lighting in nature. Send that off. Here's the
first upscale of this photo. Down here is the
remix of that photo. You can see that these guys, they don't really look
like the same person. They're very
similar, but they do have some different
facial features. They're looking in slightly slightly different directions. It's not the same photo. They're just making new
versions of this photo, which can be really
good if there's a little tiny detail in this
photo that you don't like, but you like the photo itself. Then you can just remix it, click on the V one button. You'll get four more options
that look just like it. Out of those four
options, there's probably going to
be one that's good. What about down here,
we remixed it and we added in nature
to the prompt. Now they've definitely
changed the photo more. Obviously, we can see that
it's now in nature or outside, still has the same lighting. The guy still looks
pretty similar, but obviously now he looks more different
compared to this guy. When you upscale a photo, this is an upscaled photo, then you get all these
other options here. If you remix a photo, then you just get the
same options again. This one here, the circular arrow button, that's just a reroll. Then you're just remaking
the whole prompt. But what about all these
settings for an upscaled image? Let's start off with
this one, Vary Region. What that allows you to do is to edit a specific
part of the photo. If we click on it, it's going to open up
this editing box. It says, drag and select
the area to be changed. I'm going to select his eyes. Then down here
within the prompt, after it says a 30-year-old man. I'm going to add with glasses. Now it's a 30-year-old man with glasses, soft studio lighting. Send that off. We can see that it's generating and
boom just like that, now he has glasses. You can see here
that we didn't get the full glasses on all of them. That's because I
actually did not select the entire area all the
way back to his ears. What about the Zoom Out options here? We have three of them. We have Zoom Out two x, 1.5 x, and Custom Zoom. Click on two x, we
don't get any options, it's just going to
do that for us. If we click on 1.5 x, same thing, no options,
it just does it for us. It starts a new
generation down here. But if we click on Custom Zoom, the prompt box is going
to open up again, and you can see here at
the end of the prompt, it's added zoom two. Now, here's where you can edit the zoom. You can edit it between one and two. I'm just going to put it at zoom 1.2. Submit that. If you click on the
Custom Zoom button, of course, you can also change other things within the prompt.
at the results. This is the original image. Here are the options where
it's zoomed out by two x. All it's doing here
is that it's taking the original image and then it's adding more to the edges. Obviously, this one is pretty simple because it's just black. But if it's like in nature, then it's going
to add a bunch of stuff on the sides as well. Here's the one where
it's zoomed out by 1.5 x. What about these arrows?
What do they do? They basically change
the aspect ratio of your photo, and if I press
on this down button, it's going to add space
underneath this photo. Let's test that out. Let's
press the down button. Submit that. Let's also try the one where we press
the left button. The left arrow, submit that one. Here are the results. As you can see in the prompt, it's also changing
the aspect ratio. It's actually adding an AR two to three aspect
ratio setting. Then it's also filling in like the rest of his shirt
and his arms down here. This one, obviously, just adds some black space on the left. What about these Vary buttons? Vary Subtle and Vary Strong. Let's test them out. Click on Vary Subtle, submit that. Vary Strong, submit that one. What they're going
to do is change this image and give you
some more alternatives. This one is subtle, and this one strong changes
the image a lot. Here are the results
for Vary Subtle. It looks pretty much the same. In this one, the guy
has a weird eye. In this one, his face
looks slightly different. They're very similar,
but once again, if you have a flaw in
your initial image, you can do this and
potentially fix it. Here are the variations
Vary Strong results, and they still
look very similar, but they are a little
bit more different. Here you know their
pose and their eyes have changed. The pose has changed a little bit more than
in the other one. I want to show you another
example quickly of the zoom out feature that demonstrates
this feature better. Here I've generated
a photo of fish in the ocean in a coral reef. This is the image that we got, and then we zoomed out by two x and upscaled one of the results. Here's the next
image that we got. Zoomed that one out by two x again, and then here is the third result. Now we've zoomed out by two x three times. The cool part about this
is when we download these photos and we look at
them next to each other. You can really do some
cool things like this, just zooming out, starting off
really close on something, and then you just back out. I'm sure you can come up with your own creative ideas
of how to use this. That's it for these
settings. If you want to look at your
photo on the web, just press on this web
button and it's going to take you to mid Journey's
website. See you in the next video.
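As a cheat sheet for this lesson, here is how the buttons map to the image grid. The exact labels may differ slightly between Midjourney versions, so treat this as a rough guide:

```
Grid positions:        Buttons under a grid:
+----+----+            U1 U2 U3 U4   upscale that image
| 1  | 2  |            V1 V2 V3 V4   remix/vary that image
+----+----+            reroll        regenerate the whole grid
| 3  | 4  |
+----+----+            Buttons under an upscaled image:
                       Vary Region, Vary Subtle, Vary Strong,
                       Zoom Out 2x / 1.5x / Custom Zoom, pan arrows, Web
```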
22. 4.7 (Photo) Midjourney Describe Feature: Okay, let's have a look at another feature, the
describe feature. This one is super useful. This feature does not
generate a photo. It generates text from a photo. It looks at an image and
describes it with text. So here I gave Midjourney this really iconic
photo that a lot of you will probably have seen of a bunch of
construction workers sitting really high up in the air. It's giving me
four variations of text describing
this photo that you can copy and use within your forward slash imagine
prompt to generate a photo. That's just what I did,
and this is what I got down here. How
does this work? You can give it either an
image or a link to an image. Here's something that's
good to know about. You can copy the link from
any mid journey photo. Let's go back to this photo and if you double click on it, you're going to get this menu and down here at the bottom, you'll see copy link. You can also click on Open Link. That's going to open up
this photo in the browser, then you can double click on its copy image address, that's going to be
the same thing. Then of course, you
can also double click on it and save this image. Once you have an image saved on your computer or
you have the link, you want to type out
forward slash D. The first option should be
Forward Slash Describe. If it's not, you can just
type out forward slash describe and press Enter.
here are going to pop up. You're going to get
one option that says image and another
option that says Link. We're going to start off by
pressing on the link option. Now this little box
down here opens up. This is where you want to paste in the link for this photo. Now we can double click
on the photo again. Copy the link, go
back to the box, paste it in there, send
off that as a prompt. Let's do it again,
but with an image. Forward Slash D, describe up
here is the first option. Press Enter, click on Image. Now we open up a folder on our computer.
Here's the photo. Drag that into this box, go back to mid Journey and
press enter to send that off. Now it's given us two
descriptions for the same image, and each time you do this, you get four different
descriptions. The nice part about
this is that you usually get them in
different lengths. As you can see here,
this one is shorter. It's about two rows. This one is really long. This one is long, but
not quite as long. So you have some
different options. What I recommend you
do is you copy them, go to the imagined prompt,
paste it in there. Send that off to
generate that image, Copy the second one, send
that off as a prompt as well. Copy the third one, send
that one off as a prompt. We've now sent off all
four of these descriptions that we got as prompts
to generate images. While they're generating,
let's have a look at these descriptions because
these descriptions give us a really
good idea of how Mid Journey interprets
text into a photo. What are the components that Midjourney itself thinks of
when describing a photo? First off, portrait
of a man with a beard, that's pretty obvious, describing the subject,
the person in profile, looking away from camera. Describing the person even more. What is the person
doing in the photo? Wearing dark gray sweater. Pretty obvious. If
you don't do that, then this sweater could have had any color, dark background. Yeah, it's pretty
dark background. Moody soft light shot
on Canon EOS R5. Lastly, it's describing
the lighting and the camera,
just like we did. If we have a look here,
we can see that it's describing the camera
in a lot of them here. It's also using the R5 camera. On the third one as well, it's also using the R5. Here on the other example, also Canon EOS R5. There's something about this image that Midjourney interprets as being taken on a Canon R5 camera. Interesting. Let's have a look at the four different
results that we got. This is the first one.
Here's the second one, here's the third, and
here's the fourth one. Now what you can do
is you can go through these, pick out your favorite. Let's say the first
one is my favorite. Now you have a prompt that
you know is going to work well. We can use it again. We're going to get a
very similar result. You can also have a look at this prompt and see what is it about this prompt that makes me like the photos
that it's producing. Portrait of a man with
beard, dark gray sweater, profile shot, soft box lighting, 50 millimeter lens, F 2.8. It's describing the lens,
and some camera settings, the type of lighting,
man with a beard. Let's try it again
with this photo. This is an iconic photo. It's been famous for
many, many years. So we're going to type out
forward slash D, press Enter. Click on Image, and then
drag in this image. Click on Enter
again to send that off as a described prompt. Let's quickly read
some of these. A group of construction
workers sitting on the edge of a
skyscraper eating lunch, Black and White
photo in the style of Latex dripping photography. Then it's also going to describe the exact aspect
ratio of this photo. In this case, that's
going to be 54 by 29. Let's try this
prompt as imagine. What about this one? A group of construction workers
sitting on the edge of an iconic skyscraper
eating lunch and laughing in the style of vintage
Black and White photography. The scene captures
their relaxed yet brave demeanor as they
rest during work hours, surrounded by city scape. Let's try that one as well. Let's try this long
prompt up here as well. Forward Slash, imagine, send
that off to be generated. You get the point. If you
have a photo of a dog like this and you
want to recreate it, instead of testing different prompts, writing them yourself, just send this photo into the describe feature a couple of times and then test
all the variations, and that's going to
be a much faster way of getting what you're after. Okay, here's the result
from the first one. Now, as you can see here, it's not really realistic. Some things are just
off about these photos. That's just AI. That's
just how it's going to be. But if you generate a lot of
them, a lot of variations, there's always going to
be one that stands out, one that's good, and that also doesn't have any flaws
that looks realistic. I mean, this one down here
looks a bit more realistic, or this one here could
be a real photo. If you just saw this
on a random website, you probably wouldn't
think about it, you would just think
that it's a real photo. Same here, this one maybe
looks a little bit off, this one also not
super realistic. But this one here, for example, if I saw this on a website, I probably would think
that it's a real photo. That's it for this
video. Hope you liked the describe feature. I'll see you in the next video.
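The describe workflow from this lesson boils down to two invocations. Each run returns four text descriptions that you can paste back into a forward slash imagine prompt:

```
/describe image   then drag a photo into the upload box
/describe link    then paste the URL of an image
```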
23. 4.8 (Photo) Stable Diffusion: Stable Diffusion, DALL-E, and Adobe Firefly. We're not going to
go through all of them in detail like we did with Midjourney, because you're better off just using Midjourney. But let's cover the
basics for them quickly so you know
what's on the market. Starting off with
stable diffusion. Stable diffusion is
made by stability AI, and this is their website. What is stability AI
and what do they do? Here we can see that
they have image models, they have video models, they have language models, and three D models. The main thing that people
care about Stability AI for is their image models and perhaps their video models. The first thing to understand
is that there's not one place to use
Stability AI's models. They are used wherever people build
applications with them. Here we can see that
they have different image models, video models. If we go back to
their main page, the first button
that you see here is get started with API. We're going to talk
more about API later, but API is basically what
companies use to build their own platform with
Stability AI's different models. This is their main way of making money, not through
subscriptions. But if we click here and
get your membership, we can still see that
they have a pricing plan. This plan applies to
people that want to use their models for commercial use. Here we can see it
says non commercial. If you don't intend to make
money off of their models or the things that you create with their models, it's all free. If you do intend to make
money with their models, then you should get
this professional plan for $20 a month. Now, they release new
models all the time. For image generation, there's a lot of different
models available. These models can be good
at different things. Here's a little example. Here's a website
called Hugging Face. Hugging Face is like an
AI community providing different models to anyone who wants to download
them or use them. Here's a page that I'm going
to link in this video, to a model that somebody built a little user interface for. Back on Stability AI's website, here's somebody that has taken Stability AI's API and used it on this little
site on Hugging Face. What this model is
specifically good at is that you can upload a
photo of somebody's face, and then you can upload
a photo of a body pose. Then the AI will create
a new photo with that face and that body pose
within a new environment. That's what I did. I
uploaded a photo of Obama and then I uploaded the photo of this guy standing with
his arms out like this. This here is the photo that
the AI model created for me. The prompt was just
Obama with his arms out. Obviously, there's a Chinese setting going on here as well. Now, even if this
photo is not great, the quality is not so good, and the photo is weird. You can see here that
we have a use case for stable diffusion that we
could not do in mid journey. Let's say you have a YouTube
channel and you want to make thumbnails for
your videos like this. You want your thumbnails
to be of you, and you want to have a specific pose, and you want the photo to be somewhere specific,
maybe on a mountain. This model could
do that for you. Now Mid Journey could create a really high quality photo
of somebody on a mountain, but it can't get your face
in there, at least not yet. Here's another site
on Hugging Face another stability AI model. This one is called stable video
diffusion image to video. What you do here
is that you upload a photo and it turns
that photo into a video. They're not showing the
original photos here, but basically these
little videos here were just photos
to begin with. Here's a photo of
Automatic1111, which is the main platform for running Stable Diffusion models in. Basically, if you want to use Automatic1111, you would be able to download a bunch of models from Hugging Face or another site and then choose
between them up here. Then you have an
interface here for generating images with a
bunch of specific settings. You could take this model
where you're able to recreate a face and use it
within Automatic1111. Here you have image to image. Now, Automatic1111 runs
locally on your computer. That means it's like
its own program that you have to download
on your computer. Automatic1111 itself is not made by Stability AI. It's a separate platform
where you can run stability AI models
inside of it. What you can also do
is you can run it on borrowed CPU power,
borrowed computer power. The most common way to do
that is to use Google Colab. Most people are not going to want to use Stable Diffusion, since it's for very
specific use cases. In my opinion, it's very
tech heavy and it's a complicated
installation process unnecessary for most people, which is why we will not
cover it in this course. Besides that, reasons to use
Stable Diffusion would be: you can do image to image. It's free, but you do need a
Google Colab subscription, which is about $10 or $20 a month, or you need a really
good computer. Let's say you have an
old MacBook Air, you're probably not
going to be able to run it on that computer. There's a lot more customization at least within automatic 11 11, and there are more models for different use cases
to choose between. That's it for this video, I
will see you in the next one.
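Since this lesson is about Stability AI's API being used inside other applications, here is a minimal Python sketch of what such a call could look like. This is a hedged sketch, not their documented client: the endpoint shape, field names, and engine id below are assumptions based on Stability AI's REST API, so check their current API reference before relying on them. The request itself only runs if an API key is present.

```python
import json
import os
import urllib.request

# Assumed endpoint shape for Stability AI's REST API; verify the current
# route and engine id against their official API reference.
ENGINE_ID = "stable-diffusion-xl-1024-v1-0"  # placeholder engine id
API_URL = f"https://api.stability.ai/v1/generation/{ENGINE_ID}/text-to-image"

def build_payload(prompt: str, steps: int = 30, cfg_scale: float = 7.0) -> dict:
    """Build the JSON body for a text-to-image request."""
    return {
        "text_prompts": [{"text": prompt}],
        "steps": steps,            # number of diffusion steps
        "cfg_scale": cfg_scale,    # how strongly the image follows the prompt
        "samples": 1,              # number of images to generate
    }

payload = build_payload("a 30-year-old man, soft studio lighting")

# The network call only runs if an API key is configured in the environment.
api_key = os.environ.get("STABILITY_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)  # generated images come back base64-encoded
```

This is the kind of plumbing a site like the Hugging Face demo above wraps in a user interface for you.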
24. 4.9 (Photo) Dall E: Let's have a quick look at generating photos with DALL-E. As we said before, DALL-E is created by OpenAI, and you can only make photos with DALL-E within ChatGPT, unless you're using OpenAI's API for DALL-E. That would be if you're
building your own application. The first thing I'm going
to do is to come over to Mid Journey and
have a look at one of these photos that
we made earlier. I'm going to copy this prompt: a 30-year-old man, soft studio lighting. Then in ChatGPT, I'm going to say: generate a photo of a 30-year-old man, soft studio lighting. Send that off into ChatGPT, and it's going to start
creating an image for me. Here's the image. It
looks pretty good. It doesn't look like
a real person really, but it's still a good photo. Compared to this one on Midjourney, I would say that obviously this quality is way better. This one actually looks realistic. And DALL-E only gives us one photo, but Midjourney, Firefly, and the other ones, they give
us four different photos. We could, for example, say make his hair blonde and send
that off into ChatGPT. That's going to start
creating an image again. And here's the new image. It says the image has been
updated with blonde hair. We can see that the
guy in the photo, he looks pretty similar, but this one has blonde hair. Obviously, that's
working out rather well. I'd like to remind you
that you can only use DALL-E on the paid version of ChatGPT. If you're not paying for Chat
GPT, you cannot do this. Let's test out one other image. We're back on mid journey and here's the black
and white photo of some construction workers having lunch amongst the sky scrapers. I'm going to go ahead
and copy this prompt. In ChatGPT, I'm going to say generate a photo of, and then paste in that prompt. Send that off to ChatGPT. And here's the image, and this
one doesn't look that bad. I think this photo is
actually quite nice. However, it looks
a bit more like concept art. Back here at Midjourney, this photo looks like it could actually be a real photo. It looks realistic.
This one looks more like a painting or something
that somebody put together. Basically, I would say
that the conclusion of this is that DALL-E is good for really quick, short usage if you just want a quick
photo of something. Before ending this
video on DALL-E, we're going to go back to OpenAI's website. Click on Log in. Then once you log in, you'll
get the option to either go to ChatGPT or go
to their API page. This is their API page. We're going to talk
more about this later. Basically, what you can
do here in the left menu, you can go to API keys, generate what they
call a secret API key, and this is what you can
use to integrate DALL-E into your own automation
or application. Let's say you want to automate making ten blog posts every day. You want each blog post to have a photo that
looks like this. It's very simple with DALL-E, with the use
of these API keys to make an automation
like that that can create photos for
you automatically. Now, this is the
only point where DALL-E wins over all the other platforms: OpenAI's API access, their API keys, they're extremely simple to use. That's it for this video, I
will see you in the next one.
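To make the API-key point concrete, here is a minimal Python sketch of a call to OpenAI's image generation endpoint, the kind you might drop into an automation that illustrates blog posts. It's a hedged sketch: the model id below is an assumption, so check OpenAI's API reference for the image models currently offered. The network call only runs if a secret key is present.

```python
import json
import os
import urllib.request

# OpenAI's image generation endpoint. The model id below is an assumption;
# consult OpenAI's API reference for the models currently available.
API_URL = "https://api.openai.com/v1/images/generations"

def build_payload(prompt: str, size: str = "1024x1024") -> dict:
    """Build the JSON body for an image generation request."""
    return {
        "model": "dall-e-3",  # assumed model id
        "prompt": prompt,
        "n": 1,               # one image per request
        "size": size,
    }

payload = build_payload("a 30-year-old man, soft studio lighting")

# The request itself only runs if a secret API key is configured.
api_key = os.environ.get("OPENAI_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)  # the image URL is in result["data"][0]["url"]
```

An automation that makes ten blog posts a day would just call this once per post and download the returned image.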
25. 4.10 (Photo) Firefly: Okay. Let's test
out Adobe Firefly. To use Adobe Firefly, just go to firefly.adobe.com, or you can just search for
Firefly and you'll find it. If you don't have an account
and it says sign in up here, when you try to
generate a prompt, it's going to tell you
to create an account. But you can make an Adobe
account for free and then you'll be able to test out
Adobe Firefly for free. We're back in mid
Journey, and we're going to try the
same prompt again. Copy this one, a 30-year-old
man, soft studio lighting. Go back to Adobe Firefly, paste that in there,
send off that prompt. Here are the results for this prompt: straight-on shots of some guys, and they
don't look good at all. I guess you could probably
use them for something, but, if you have the option
of choosing between this and this quality, that's a pretty
easy choice for me. The thing that I do like
about Adobe Firefly is their platform's
user interface. Up here, we can quickly
change aspect ratio. Right now it's 16 by nine. Click on that one and make it into portrait three by four. Generate a new photo. If you're not familiar
with aspect ratios, this could be a good
place to just come to learn what different
aspect ratios look like. Here are the new results, and I've got to say, these people do not look good. Back in Midjourney,
let's try out this prompt again of some
construction workers. This prompt is longer
and more detailed. Let's see if we get some
higher quality results. Clearly, it does not want to give us high quality results. These people's faces are warped. It's not even what we asked for. At this point, it should
be very clear to you why Midjourney is the obvious
choice for generating photos. That's it for this video.
I'll see you in the next one.
26. 4.11 (Photo) GenFill: Adobe Photoshop generative fill, for editing a part of a photo. Anyone who uses Photoshop
already knows about this. If you use photoshop and you already know about
generative fill, then you can probably go
ahead and skip this video. If you don't know
about generative fill, it will be good
education for how easy it is to now edit photos. For example, this is
how you can use it. This is photoshop and
here's a photo of a hamburger that I made in
mid Journey. This is AI made. If I want to remove the tomato
and this glass of drink, I can highlight them
and type into this box: remove and fill with plain black background. Send that off as a prompt
and it's going to generate some new photos to put on
top of this original photo. Boom, just like that,
they're gone and we get three new alternatives
for how we want this photo to look in
the specific places. Now, this is the layer
for the original photo. If I remove it, you
can see that it doesn't actually edit
this original photo. It creates two separate
new photos that are then placed on top of the original one that
match the background, and it works really well. Here's another photo that I
generated in mid journey. Let's say I want to turn
this wall into a window. I'm going to highlight
this wall and say, make this wall into a window
with a beautiful view, send that off as a prompt. Just like that, it's
created some new windows, some new alternatives for how this whole photo could look. As you can see, this little
part here on the right, we did not highlight that part, so that one stays the same
for all three alternatives. It's only this part of the
wall that we highlighted, that's going to differ
in the results. This is what the new image looks like without
the old background. I want to show you
a little clip here. Here's a video of a woman running on a golf course. What this guy is going to do is remove the background and create a new one with the help of Photoshop generative fill. Then he's going to place the new photos on top of the video. He's creating this whole setting with a cliff and a little castle here. Now he's playing the video again. The upper part here is just static; that's a photo. But the part down here is still a video, so you can play it as a video, and it looks really cool. How do you use generative fill? This is not going to be a Photoshop course, but I'm quickly going to show you how to use generative fill. This little box here that I'm currently moving around is called the contextual taskbar. If you're in Photoshop and you don't see it, you need to go up here to Window, open up this menu, and down here you'll see Contextual Task Bar. If I uncheck it, this one is going to disappear. I'm going to check it again to bring it back. And this is where you use generative fill
within Photoshop. In order to use generative fill, you need to have a selection. You can either click here on Select Subject, and Photoshop will automatically select the person for you, just like so. You can also come up here to the menu and choose the Marquee tool, the rectangular Marquee tool, for example. Now I can drag a box here over the American flag. Once you have a selection, the text in this box is going to change to say Generative Fill. When you click on Generative Fill, that's when this prompt box opens. That's when I can type in: remove flag, fill with plain background, and send that off as a prompt to start generating. Boom, just like that, we have removed the flag, and all we have left is a plain background, just like what I asked for. You might remember that his shoulder was part of the selection as well. If we toggle through these results, you will see that this part of his shoulder is also changing between the three different variants that we got from generative fill. You can also use the Lasso tool from the menu here in order to make a selection, and then you can drag something out. Let's say we have one selection here. If we want to make another selection over here, what you have to do is hold Shift and then draw out your other selection. That's going to add to your current selection. Up here, you'll have the option of adding to the current selection or subtracting from it. I'm going to tell it: add some devil horns, and this is what we got. I'm so sorry about this, Obama, by the way. Over here on the right, you'll see your three alternatives. You can also toggle through them by clicking on these arrows in the little contextual taskbar. Let's do one more example of a use case where you can
use generative fill. Here's a photo of a spaceman that I generated with Midjourney, but it's in a standing, vertical aspect ratio. Let's say it's for my YouTube thumbnail and I want it in a 16-by-9 aspect ratio. Then I'm going to change the aspect ratio here and drag this photo over here to the middle. One way to do this before we had generative fill would be to copy a part of the image, drag that copy over here and make it bigger. But now, as you can see, the shades are different. You can tell that it's not one photo, that it's been edited. It doesn't look that good. Instead, it's much easier to just use the rectangular Marquee tool. Highlight this part of the image, hold Shift, highlight the other part of the image. Go to the Generative Fill box, and we don't even have to type anything in. We can just click on Generative Fill and then Generate. Now, as you can see, we have a completely clean background. It matches; it doesn't have any weird shades or lines. As you can also see, we filled the empty background without typing in a prompt. The way that works is: if you don't type in a prompt, it's either going to remove the object that is selected, or it's going to fill in with something very simple. In this case, we already had a white background, so it was pretty obvious that when we highlighted the other parts of the image that were empty, we wanted to fill them with more background. But if I highlight the spaceman, click Generative Fill and then click Generate without typing in a prompt, the spaceman disappears. It just removes the object that we selected. Say we go back to this bathroom that I generated in Midjourney, we highlight this little stool here and just click Generate without typing in a prompt. It's going to do its own thing. This is the result that we get. It doesn't remove the object, and that's because
there were a few too many things going on here. But let's say we go up here and try to just highlight these lights up here. They're very simple. They're in the middle of this black wall. There's nothing else going on here. We click on Generate, and this is the result that we get. It removed those lights entirely and filled it in with black background. If you're highlighting this area, for example, where there's a lot going on, it's not going to remove everything and make a plain background. But as you can see, when we highlighted those lights up here, and the lights are the only objects except for this black roof, generative fill makes the decision to just remove them. The other thing that you could do is make several little boxes like this. Here we have seven different little boxes that are currently selected. Click on Generative Fill, and we're going to type out: make miniature planets and moons. Send that off as a prompt, and these are the options that we're going to get. It's not 100% what we asked for; we did have some selections down here as well. This tool is not perfect, but it is really amazing, and it can speed up your photo editing process like crazy. It also completely revolutionizes what you can do with a photo. In just one minute, this is what we started with, and here we go. I'm already ready to post this as a YouTube thumbnail, for example. So generative fill revolutionizes photo editing. It requires some Photoshop knowledge, but really it's very simple to use. You don't have to know that much about Photoshop in order to use this really great tool. All you need to do is go into Photoshop and upload your image. Come up here to the Marquee tool, just click on this little rectangular box, hold and drag to make a selection, click on Generative Fill, type in: add a planet, and hit Enter to send that off as a prompt. Boom, there we go. If you're just willing to download Photoshop and give it like ten minutes to learn this, you can really do some amazing things. That's it for this video. I'll see you in the next one.
27. 4.12 (Photo) Remove Watermarks: Next up, I want to show you something that not that many people know about: watermark removal. I absolutely love these tools. It's not any one tool; they just started showing up around the time that AI photos became a thing. People build these tools with help from AI models like Stable Diffusion. Basically, here's what I'm talking about. Here's Google, and all we've done is search for corporate people. Let's say you really like this image. You want to use it on your website or something, but there are all these watermarks on top of it. Oh, you can't use it because of all these watermarks. Well, you can just go on Google and search for remove watermark. Pick the first option, Watermark Remover: online watermark remover for free. I'm going to upload that image from Google, like that. Wow, I didn't even press a button. It just does it automatically for you. It removes all the little watermarks. This is just amazing. Click on Download. It's free, and just like that, we have the image from Google, but without any watermarks. I missed one little watermark up here, but now you can just go into Photoshop generative fill and remove that if you want. I just think this is so funny. It's such a useful tool. And Shutterstock, I'm terribly sorry, but this is going to be the end of stock photos, I think. That's it for this video. I'll see you in the next one.
28. 5.1 ChatGPT Introduction: Welcome to the first chapter on text. Text controls all the other mediums. Video, photo, audio: they are all controlled by text. And when it comes to generative AI and text, as we know, ChatGPT is the master. We're going to quickly look into the alternatives to ChatGPT as well. But if you're like me and you're a heavy AI user, really into AI and all the updates, it's hard to realize that not everybody uses ChatGPT. Actually, the majority of people right now are not using ChatGPT on a daily basis. I do. I use it more than I use Google, and the same goes for a lot of the people I know. But it's important to remember that most people are not early adopters, and ChatGPT is still an early platform. You would be surprised to learn that most people have not even tried AI, not even tried ChatGPT, and a lot of people have tried it but aren't using it. So if you're already using ChatGPT on a daily basis, this video and the next few videos in this chapter on text might be a little beginner-level for you. You might already know some of the stuff that we're going to talk about here, and you could skip this video and the next one. But I still think that a lot of people who are heavy ChatGPT users can benefit from it. You might hear something that resonates with you, that you didn't know, that will help you, that is valuable information for you. In this video, I'm simply going to introduce ChatGPT to the people watching this course who are not heavy users, to everybody who has not unlocked the full potential that ChatGPT can give you. So let's talk about ChatGPT replacing Google. Obviously, ChatGPT has not replaced Google, and I'm sure Google is going to be around for a long time. But for me, in my daily life and my work, ChatGPT has currently cut out and replaced at least 50% of my Google usage. I know this is the same for a lot of people I know, whether they're entrepreneurs or they have jobs or whatever. ChatGPT has replaced maybe around 50% of their Google usage, which is absolutely crazy. Because of course, Google is like a rock. It's such a big company. It's hard to imagine it going anywhere
that ChatGPT and AI will actually replace Google. That's the first sentence here: why look for an answer when you can get one right away? If I want some information, why would I Google it and look through several articles and look through the internet when I could just ask ChatGPT and receive the answer right away? It's almost the same as comparing the retrieval of information from a book versus from the internet. So those are the steps that I see. First, we have looking for information in a book. Then we have looking for information on the internet with Google. Then we have looking for information with AI. That's the leap that we're now taking. And we have seen this in the past with other really big companies like Google: they are pushed out by innovation, not competition. What I mean by that is it's very, very unlikely that another company will create a platform similar to Google that's going to push out Google and out-compete it. That's almost impossible. But ChatGPT, not being similar to Google at all, being a whole new thing, a thing that is better than Google, a better option: that is something that will push Google away. So with that said, I want to move over to ChatGPT and show you some real examples of how ChatGPT is replacing Google for me. This is sort of a silly example, but I was watching the show Sons of Anarchy, and obviously, spoiler alert if you're watching it. I asked ChatGPT when one of the characters dies, and it told me in what episode that character dies. Here's me doing some math with ChatGPT. I guess I was too lazy to bring out my calculator, so I asked ChatGPT what percent five out of 1,500 is, and it gave me back that it's 0.33%. Nice. Here, I had a screenshot of some job titles that I wanted to put into an email. So I just uploaded the screenshot of the job titles into ChatGPT, and I told ChatGPT: write these out with commas in between, nothing else. And that's exactly what it did. ChatGPT wrote out all these job titles for me, and then I could just copy them and paste them into my email, instead of writing them out one by one, which would have taken me more time. Here's another pretty good example of just super random information that you can get faster than Googling it. I asked it, what is a fortune teller? And it says a fortune teller predicts future events. Then after that, I asked, what do I need as an EU citizen to work in England, in London? And it says: as an EU citizen, you need a visa or work permit to work in London since Brexit. I ask, is it hard to get one of those? It says obtaining a visa or work permit in the UK can be challenging, depending on your skills, job offer, or immigration criteria. Then I ask, in 2024, can you stay in England for more than six months during one year as an EU citizen? And it gives me back the information: in 2024, EU citizens cannot stay in the UK for over six months in a year without a visa; exceptions might apply. I say, what exceptions? And it says exceptions include work visas, study visas, and family visas. I ask, can you extend the six months? It says you cannot extend the six-month tourist visa; for longer stays, apply for a relevant visa. These are a lot of different questions that I was able to get answers to very, very quickly. If I had to go into, say, London's immigration page and read pages and pages just to find this information, that would have taken way, way longer. So this is just a good example of how you can quickly get some relevant information really fast, and I think it serves as a great example of why you would rather use ChatGPT instead of Google.
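A quick aside on the percentage example above: this is also the kind of thing you can check in one line of code, if you'd rather not open a calculator or ChatGPT. For example, in Python:

```python
# What percent is 5 out of 1,500?
percent = 5 / 1500 * 100
print(f"{percent:.2f}%")  # prints 0.33%
```

Same answer ChatGPT gave: five out of 1,500 is about a third of a percent.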
29. 5.2 (ChatGPT) Paid vs Free & Interface: Okay, so in this video, we're just going to talk about the difference between the paid and the free version of ChatGPT, and I'm also going to tell you how to sign up and use ChatGPT. So, if you're already using it, you can basically go ahead and skip this video. There are two models of ChatGPT. The first one is the free one, and it's called ChatGPT 3.5. Around March 2023, they released GPT-4, which took ChatGPT to a whole new level. Essentially, ChatGPT 3.5, the free version: I think they say you can compare it to the brain of an insect. You can talk to it, but it's not so good at reasoning, and it doesn't access the internet. The paid version, GPT-4, I think they compare to the brain of a mouse or something like that. That one is good at reasoning, and it can search the internet, which is obviously a huge advantage. Where it says context window, 4K and 32K, that's basically 4,000 tokens or 32,000 tokens that the free or paid version of ChatGPT can handle. Now, what is a token? A token can be compared to a word, a piece of text, or a part of a word. So basically, you can give the free version of ChatGPT upwards of about 4,000 words, and you can give the paid version upwards of about 32,000 words. Obviously, this is going to change. They're constantly trying to upgrade their product and make it as good as possible. They do have another version of the paid GPT-4 called GPT-4 Turbo that can handle 128,000 tokens, but that one is only available through API access at the moment, so we're going to talk about that later. For now, just think about it like this: the free version can be fun to try, just to see what ChatGPT is if you've never tried it. But if you're going to use ChatGPT for your work, for your personal life, if you're going to get any value out of ChatGPT, you need to have the paid version. It's $20 a month, and if you're going to do this course, you're going to have to buy it. I don't make any extra money off of you buying GPT-4. It's just that the difference in how good it is really is that big. So if you don't have a ChatGPT or OpenAI account, you can go ahead and just search for ChatGPT. You can either click on ChatGPT, or you can go to the OpenAI website. OpenAI is the company that made ChatGPT. Click up here, try ChatGPT, sign up, create an account, log in. Okay, so this is what it looks like once
you're logged in. Now I'm just going to quickly show you the interface, all the buttons, and what they do. And we're also going to test the free version and the paid version, so you can see the difference between them. This menu here on the left is all your previous chats. You're not going to have any chats here if you're just logging in for the first time. These two things up here that are called email marketing and SEO blog writer, those are custom GPTs. We're going to talk about those later, but for now, you can just ignore them; you also won't have them if you just logged in. This one right here, which just says ChatGPT: if you click it, you're going to open up a new chat. If I go to one of my old chats and then click up here, that's going to open up a new chat for me. Down here on the left, you have your profile. You can click on it, see your plan, customize your ChatGPT (which we're going to do later), create a GPT or access your GPTs, and go to your settings. Other than that, ChatGPT is like the simplest and most valuable thing ever. You just enter something. You just say, hello, how are you, and it's going to answer whatever question or inquiry or thing that you tell it or ask it. Up here, where it says GPT-4: if you click on it, you're going to get some other options. This is where you can choose if you want to be using the free version, GPT-3.5, and that's the one you're going to be on if you don't have a subscription. Right now, I'm on GPT-4, but let's go ahead and switch to GPT-3.5. I'm just going to tell the free version, ChatGPT 3.5, this: search for banana history with Bing. You can see that it really quickly gave me an answer. And even though the answer is correct, it didn't search the internet, because the free version of ChatGPT can't do that. But if we click up here, move over to GPT-4 and enter the same inquiry, send it over, you can see that it doesn't give you an answer right away. This black little thing is going to take a little bit of time and search the internet, because we told it to search the internet, and then it's going to give us an answer. These answers are usually a lot better, because it searched the topic that we asked about and got more information on it. The reason that I put Bing in here is because GPT-4 uses the search engine Bing to search the internet. It does not use Google. This is because OpenAI, which created ChatGPT, is closely tied to Microsoft, its biggest investor, and Microsoft owns the search engine Bing. So of course, they're not going to use Google; they're going to use Bing. This doesn't really matter, though, because Bing still does a good job of searching the internet for you, giving you the answers that you're looking for, and then ChatGPT summarizes them for you. That's it for this video. If you're new to ChatGPT and you just created your account, go ahead and play around with it, see what you can do.
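To make the context-window numbers above a bit more concrete: tokens aren't exactly words, but a common rule of thumb (a heuristic I'm assuming here, not OpenAI's actual tokenizer) is that one token averages roughly four characters of English text. A quick sketch of estimating whether a document fits a given window:

```python
# Rough token estimate, assuming ~4 characters per token on average.
# This is only a heuristic; OpenAI's real tokenizer is the tiktoken library.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def fits_context(text: str, context_window: int = 32_000) -> bool:
    """Check whether text roughly fits inside the given context window."""
    return estimate_tokens(text) <= context_window

doc = "word " * 10_000  # a ~10,000-word document
print(estimate_tokens(doc))       # ~12,500 estimated tokens
print(fits_context(doc, 4_000))   # False: too big for the free 4K window
print(fits_context(doc, 32_000))  # True: fits the paid 32K window
```

If you need exact counts, OpenAI publishes the `tiktoken` library, which tokenizes text the same way the models do; the heuristic above is only for quick ballpark checks.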
30. 5.3 (ChatGPT) Alternatives: Let's have a look at some ChatGPT alternatives. There are other LLMs, large language models, that you could be using instead of ChatGPT. Some of them are good; some of them are not so good. The main competitor is certainly Google Gemini, which used to be called Bard, but there are a couple of other good ones, like Claude and Llama. These are all different large language models created by different companies, trained on different data, trained in different ways. You're going to see a lot of AI tools. For example, one of the biggest AI tools is probably Jasper. Jasper is a platform that introduces itself as a chatbot, but it's built on ChatGPT. There are a lot of chatbots, a lot of tools, that are built on ChatGPT, or on Google Gemini, Claude, or Llama, using these LLMs' APIs. They're basically taking the ChatGPT engine, or the engine of one of these other LLMs, and putting it into their own company, their own chatbot. But these that we're looking at right now, ChatGPT, Google Gemini, Claude, Llama: these are all different large language models. So it's just important to make that distinction. Overall, ChatGPT is still dominant in the space. It has the best reviews, the best customer feedback. But Google recently made an update to their LLM Gemini, and it's certainly catching up with ChatGPT. This is sort of like a race, and these platforms, these companies, are all competing for your $20 a month. They all realize that this is the future, and millions of people are already paying $20 a month to use these, and that number is only going to grow. Right now, ChatGPT has about 16 million daily users, and Google Gemini has about 4 million daily users. There is one main thing to note here, and that's that the free versions of Gemini and Claude are a lot better than the free version of ChatGPT. So if you want to use AI and you don't want to pay for it, I would definitely use Gemini or Claude instead of ChatGPT. Claude cannot access the internet, but Google Gemini can, which of course makes Google Gemini a better option than Claude. All of them are good at reasoning, but ChatGPT wins in most scenarios when it comes down to how much people like the platform. Two positive things about Google Gemini: they do include 2 terabytes of storage in your $20-a-month plan, and there might be a really good integration coming up with Gmail. Obviously, Google owns Gmail, and Google has Gemini. It could be pretty powerful once they start combining those two. Let's test out ChatGPT, Gemini, and Claude real quick. I'm asking them to plan a one-week trip in Paris for a single person, and to list one activity for each day. This is ChatGPT, and it does a pretty good job of breaking it down. It highlights the title of the activity and then also describes that activity. Same thing with Gemini; the layout is a little bit different, with images, but it's basically the same thing here, and it does a good job. And same thing with Claude, giving us a good answer and a good breakdown of the activities. So they all do a good job with basic reasoning. As I said, if you want to go for a free version, Gemini is a really good choice, because it can access the internet, and it's pretty good at reasoning even on the free version. However, for most of us who want the most value out of AI, ChatGPT is currently the no-brainer choice. It's just overall better. That could change in the future, but as of now, if you want my recommendation, just go with ChatGPT.
31. 5.4 (ChatGPT) Custom Instructions: Okay, let's talk about customizing your ChatGPT account: customizing the way that it speaks to you, the way that it formulates its sentences and writes to you, and the length of the text that it gives you. This is pretty much the only setting in ChatGPT, and it's also my favorite setting. It's great. I love it. It's really good. And it serves as a really good example of how settings are going to work in the future. Before, you've always opened up a settings page where you have all these parameters, buttons, and stuff like that. But now you're going to start seeing settings pages that look more like this. This is the settings page that we're looking at right now, and this is all that it is. You describe to the chat what you want it to do, and it will do it for you. The way to find the setting is to come down here to the left corner, click on your name, and click on Customize ChatGPT. You're going to see two boxes here. First, you'll see: what would you like ChatGPT to know about you to provide better responses? And then you'll see: how would you like ChatGPT to respond? This is the custom instructions box. These are the only two settings that there really are. Then you just want to make sure that this little thing down here, Enable for new chats, is on. So I'm going to read you what I've put into these boxes, and this is what I use on a daily basis. This is not prepared for this course; this is what I actually use, what I find makes ChatGPT give me the best responses in the way that I want them. I just say: I am an entrepreneur based in Sweden. I'm interested in saving time by not wasting time on reading or writing unnecessary text. I like to get the facts without introductions or outros. Over here on the right, you can see what ChatGPT recommends you put into this box: where you're based, what you do for work, what your hobbies and interests are. But basically, in this box, you can just put a very brief description of yourself. The second box is really the more important one. Here we have: how formal or casual should ChatGPT be? How long or short should responses generally be? How do you want to be addressed? Should ChatGPT have opinions on topics or remain neutral? This second box is really going to affect what the answers you get from ChatGPT look like. As you can see right here, you can put in up to
to read you what I've put in my second box. Never write sentences
longer than 25 words. Don't write introduction
sentences in your answers. Don't write outrows. Only give the direct answer to the question or statements. Besides of the direct
short answer needed, don't say anything else,
don't write another word. Be even more brief in your answers than
you are currently. Short, concise answers only. Convey the value of the text
in as few words as possible. Don't write introductions. Don't beat around the bush,
get straight to the point. Don't write endings.
Just give me the facts that I'm asking
for and try to shut up. And then I'm giving it an
actual example here of the same sentence in a
shorter and longer format. So down here is the
original sentence, and up here is the
same sentence, but I've just made it shorter. So I'm giving Chatut
an example of what it looks like making
a sentence shorter. So as you can see
in this second box, I'm really repeating
myself here. I'm really repeating
the message of, you know, give me
short, concise answers. Sometimes it doesn't
always listen to you. So the way that I've forced Chat GPT to listen to my message here to make short answers is to really repeat that
message like a lot. So once you've saved your
custom instructions, they are going to apply to
your regular chat GPT chat. Now, if you have other
GPTs, like I do, right here, the
custom instructions are not going to
apply to those GPTs. They will only apply to
the default chat up here. All right, so let's have
a look at some examples. So I told Chachi PT,
tell me about Spain. And this is me talking
to the regular Chachi PT chat that has my
custom instructions included. So, as you can see
here, it just answered in two very short sentences. This is how I like my answers, short and concise.
Saves me a lot of time. But then I asked one of my GPTs, which is the Chat PT chat without any
custom instructions, the same thing. Tell
me about Spain. And it feels like it gave me a whole blog article on Spain. Now, if I want to
write a blog article, this could be good,
but if I just want to know briefly
about Spain, I don't want to read
all this stuff, I would rather just
read one sentence that encapsulates all
this information. I just want to
show you that even with my custom instructions, instructing JagibT to
write short answers, if I tell it to write a 500
word blog post about dogs, that's exactly what
it's going to do. So if you're telling it to do a longer post, it's
still going to do that. But if you're just looking
for a shorter answer, that's what it's going to give
you because that's what I instructed my Chachi Pit to do. There's no limitation
here to what you can tell Chachi Pit to do and how
it should respond to you. Would definitely recommend
playing around with this, and I would also
definitely recommend telling your custom
instructions to give you shorter answers because Chachi is known for giving long answers
a lot of the time. Longer answers than you
would normally need.
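Since each custom-instructions box is capped at 1,500 characters, it can be handy to check a draft's length before pasting it in. A trivial sketch (the draft text below is just a stand-in, not my full instructions):

```python
# Check a custom-instructions draft against ChatGPT's 1,500-character box limit.
LIMIT = 1500

draft = (
    "Never write sentences longer than 25 words. "
    "Don't write introductions or outros. "
    "Short, concise answers only."
)

used = len(draft)
print(f"{used}/{LIMIT} characters used")
if used > LIMIT:
    print(f"Too long by {used - LIMIT} characters; trim before pasting.")
```

Any text editor's character count does the same job; the point is just that the limit is characters, not words, so a dense instruction block fills it faster than you might expect.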
32. 5.5 (ChatGPT) Usage Cap: Really quickly, I just want to tell you about the ChatGPT usage cap. Basically, on the regular Plus plan, which is $20 a month, you get about 40 messages every 3 hours, but it varies based on how much compute power they have available. So you get 40 messages every 3 hours, but a lot of the time you'll get a lot more than that and won't even reach the usage cap, even if you send maybe 60 or 100 messages in 3 hours. If you've reached your limit, this is what it's going to look like when you try to send something to ChatGPT: you've reached the current usage cap for GPT-4. You can continue again after a specific time that they tell you right here, or you can use the free model as much as you want. What solutions do we have to this problem? We can either wait, upgrade the plan, or use a separate tool. The waiting part is pretty obvious: just go back to ChatGPT and check the time it told you to come back. Or, if we have a look at their pricing plans here: the Plus plan, for $20 a month, is what most of you are going to be on. This is the plan that I'm on. The Team plan is another $5 a month, but you need two users for it. If we go back to ChatGPT, click on your profile, then click on My Plan, you can see right here under Team: they need a minimum of two users. But the Team plan gives you higher message caps on GPT-4. They don't tell you how many messages you get; they just tell you that you get more. And then they have an Enterprise plan: unlimited access to GPT-4. But of course, they're not going to show you a price for that; you have to contact their sales team first.
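A cap of "40 messages every 3 hours" is a classic sliding-window limit, and it's worth seeing how such a counter behaves: old messages fall out of the window over time, freeing up room for new ones. The numbers below mirror the Plus plan as described in this lesson; the class itself is just an illustration of the idea, not how OpenAI actually implements it:

```python
from collections import deque

class SlidingWindowCap:
    """Track message timestamps and enforce a max-per-window cap."""
    def __init__(self, max_messages: int = 40, window_seconds: int = 3 * 3600):
        self.max_messages = max_messages
        self.window_seconds = window_seconds
        self.sent = deque()  # timestamps (seconds) of recent messages

    def try_send(self, now: float) -> bool:
        # Drop timestamps that have fallen out of the 3-hour window.
        while self.sent and now - self.sent[0] >= self.window_seconds:
            self.sent.popleft()
        if len(self.sent) >= self.max_messages:
            return False  # capped: must wait until the oldest message expires
        self.sent.append(now)
        return True

cap = SlidingWindowCap()
# 40 messages in quick succession all go through...
results = [cap.try_send(now=i) for i in range(40)]
print(all(results))                    # True
print(cap.try_send(now=41))            # False: message 41 hits the cap
print(cap.try_send(now=3 * 3600 + 1))  # True: the window has slid past message 0
```

This is also why the cap feels inconsistent: you are never blocked for a flat 3 hours, only until your oldest recent message ages out of the window.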
33. 5.6 (ChatGPT) Internet Search: There are a few things I want to quickly cover when it comes to ChatGPT's internet access. Sometimes it will tell you it can't access links, but it does it anyway, and this can depend on how you prompt it. When ChatGPT searches the internet for you, it will read about three to ten sites before giving you an answer. This is important to know, because sometimes that is not enough. Sometimes it means it doesn't find the information you want by only looking at three to ten sites. But if you prompt it again, it will keep searching and eventually find the right answers. I also want you to know that ChatGPT's internet access has been revoked in the past: OpenAI has taken away ChatGPT's ability to search the internet before. And who knows, this might happen again, but it's been up for several months now. It's obviously one of their key features, so I don't think their users would be very happy if they did it again. But you never know. ChatGPT uses Bing, not Google, as its search engine. This is because Microsoft is OpenAI's biggest investor; OpenAI created ChatGPT, and Microsoft also owns Bing. So, of course, they're going to link those two together. I just want to show you real quick that sometimes, if you prompt ChatGPT with something like: visit this link and read the content, and then you put a link, you'll sometimes get an answer saying: I'm unable to open specific links. But now I just want to show you: here's a random article. The article is about a page called copy.ai, and the headline is 0-10 million users in four years. If we copy this link, open ChatGPT, paste in the link, and just ask ChatGPT: what are they talking about here? Send that off as a prompt. It will actually access the link, and it's going to tell you the article discusses copy.ai's growth to 10 million users. This is just a good example proving that ChatGPT can in fact access specific links. But if you're having problems with it, what you want to avoid is literally typing out visit this link, because that seems to trigger some kind of glitch or problem. And my other point: if we ask ChatGPT how many web pages it can access before giving you an answer, it will tell you it can open up to ten web pages in a single query. This is my point: if you're looking for specific information and you don't get it the first time you ask for it and ChatGPT searches the internet for it, then you might just want to tell ChatGPT: keep looking until you find more information on what I asked for. You can also tell ChatGPT: be creative in your searches, so it doesn't just search for the same thing over and over again, but searches for a wide variety of things, giving you a wide variety of answers. That's it for this video. I'll see you in the next one.
34. 5.7 AI content detection: All right, let's address this question: does AI-generated content hurt SEO? If you're not familiar with SEO, it stands for search engine optimization, which basically determines what is shown on Google. What we're asking here is: if we write content, a blog or whatever, with AI, and Google recognizes that the article was written with AI, are they going to not show it to people? The answer is no, it doesn't matter. Google only cares about quality. The same goes for Instagram and several other platforms. However, the video platform TikTok, for example, has started marking AI-generated video. If you're scrolling through your feed, you might see a stamp on a video that says AI generated. As AI-generated video and photo get better, I would not be surprised if a lot of platforms start doing this. There's a couple of things I want to show you. If you go to the Google Search Central blog, you can find an article where they talk about this exact topic. Here they're basically saying that they reward high quality content however it is produced, meaning whether it's produced by a human or by AI, they will show high quality content to people on Google. Google rewards expertise, experience, authoritativeness, and trustworthiness. Basically, if you can achieve these things with an AI-written article, good for you. You can make money, and you can get a lot of traffic with AI-generated content. You don't need to write the content yourself anymore, but you do have to write high quality prompts. The next thing I want to show you is a website called Content at Scale, and they claim to be creating the most human-like AI writer out there. Up here, you can click on AI detector, and you'll be taken to this page right here, which is an AI content detector. Here I'm going to paste in a story of 100 words that I had ChatGPT write for me, and we're going to click on Check for AI content. Over here on the right, it's going to tell you that this article reads like AI. The sentences here that have been marked in orange are the ones tagged as AI. If we go back to ChatGPT and ask it to explain how AI content detection works, it's going to tell us AI content detection uses algorithms to analyze text patterns and style. Then if we ask whether there is a way to know for sure if content is AI-generated or not, it's going to tell us that no guaranteed method exists to determine if content is AI-generated or human-written. So websites like this one, contentatscale.ai, which are AI content detectors, they're not a scam. They can actually detect AI content, but they can't prove it. So if you're a student at a university and you turn in an article, and your teacher says he used this tool to prove that you made that article with AI, he's wrong. He can't prove it.
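As a side note for readers who like to see things concretely: real detectors are model-based and far more sophisticated, but one simple signal they are often described as using is "burstiness", how much sentence length varies (AI text tends to be more uniform). The sketch below is purely illustrative, not a real detector, and every example string in it is made up.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away. The fish swam by."
varied = ("Stop. The storm rolled in over the hills before anyone had "
          "time to close the windows. Silence. Then everything went dark at once.")

# Uniform sentence lengths are one (weak) hint that text is machine-generated.
print(burstiness(uniform) < burstiness(varied))  # prints True
```

This also shows why such tools can suggest but never prove anything: a human can write uniformly, and an AI can be prompted to vary its rhythm.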
But obviously, if your entire article is flagged as AI, it's pretty easy to assume that you made it with AI. This tool, contentatscale.ai, is free. So if you're interested in whether your content flags as AI or not, you can just come over here and put it into this box and you'll know. That's it for this video, I will see you in the next one.
35. 5.8 (ChatGPT) Language: Let's talk about ChatGPT and language. ChatGPT understands basically all major languages, but it might be slightly more skilled at the bigger ones. My favorite thing about this is that you can write to ChatGPT in several languages at once. This is something I naturally do frequently because I'm Swedish, and I'm going to show it to you right now. Here I have a sentence where I switch between English and Swedish three times. This first part is in Swedish, then it's English, then it's Swedish again, English, and then Swedish. What I'm saying here is: write a story about how a dog ran away from home and then found its way back and made a friend along the way, and write it in English. You can just send that off, and ChatGPT understands it perfectly. Let's do this. Let's have ChatGPT write a 500-word story about a dog in Spanish. Here's our dog story in Spanish, 500 words long. Let's copy it, move over to a new chat, paste it in here, and move to the top. Now I'm going to write in Swedish. I'm going to write: summarize this story in three points, one sentence per point, and write it in English. Send that off to ChatGPT. Bam, it summarizes the story in three points, and it's doing it in English. I think this is really cool, because now we're using three different languages: giving the instructions in Swedish, giving the story in Spanish, and getting an answer in English. This is something you really have not been able to do before. If you speak two languages like me, it can be really helpful to be able to communicate in both languages at the same time. I think it's amazing. If we ask ChatGPT what languages it supports, it's going to list some languages. Here, ChatGPT is saying that it does support Swedish, and I asked whether it speaks it as fluently as English. It says no, not as fluently as English; English support is more advanced. Then I ask what languages ChatGPT is best at, ranked in order: English, French, Spanish, German, Chinese, Japanese, Russian, Italian, Dutch, Korean, Portuguese, Arabic, Turkish, Swedish. My language is number 14, and my country only has 10 million people. Basically, you can expect that whatever language you speak or want to translate to, ChatGPT can already do a pretty good job at it. That's it for this video, see you in the next one.
36. 5.9 (ChatGPT) Website to PDF: Now I just quickly want to show you a little trick that can be very helpful in a lot of different scenarios. All we're going to do is take an article online, or a website, turn it into a PDF, and then give that PDF to ChatGPT. This can also be useful in the next video, where we look at creating a custom GPT. Just as an example, here's a random article that I found about global climate highlights in 2023. If we scroll down here, we can see that this article clearly has a lot of information, a lot of statistics on climate change in 2023. Instead of copying the link, moving over to ChatGPT, pasting it in, and then asking ChatGPT what it's about, maybe you're having some trouble with ChatGPT accessing the Internet, or maybe you'll just get a better result from uploading the entire website as a PDF. What you can do is copy the link and then go to Google and just search for "website to PDF". You're going to get all these results for different online tools that are free to use. You can just click on the first one, web page to PDF; this website is called webdf.com. Then we just paste in the link of the article on climate change and click on Convert. That's going to turn this article into a PDF that we can then upload into ChatGPT. All right, it's completed. Click on Download file now. Now we can go back to ChatGPT and just drag in this PDF. Then we can ask ChatGPT, what is this about? Of course, ChatGPT is going to read that PDF file and give us all the most important information from it. The other scenario where this could be useful is if you want to, let's say, compare information from five different links. Here we have five different links: one, two, three, four, five. Pasting all of those into the ChatGPT chat and saying, compare the opinions from these links, might be a little confusing to ChatGPT, because it might have trouble accessing five links at a time. It's also going to visit one link at a time, give you some information, and then visit another one. But if you upload a bunch of PDFs instead, it will be able to access all of them at the same time. So let's say you have six different PDFs like this. You could just pull all of those PDFs into ChatGPT and then, for example, say: compare the opinions of PDF one with the opinions of PDFs two through six. This is going to be a lot clearer than if it had six different websites to visit. All right, that's it for this video, see you in the next one.
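As an aside for readers comfortable with a little code: you can skip the online converter entirely and grab a page's text yourself, then save it to a file to upload to ChatGPT. This is a minimal sketch using only Python's standard library; the sample HTML and any URL mentioned are placeholders, not the article from the video.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the visible text chunks of an HTML document."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str) -> str:
    """Strip tags and return just the readable text, one chunk per line."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)

# To fetch a live page you would do something like:
#   from urllib.request import urlopen
#   html = urlopen("https://example.com").read().decode("utf-8")
#   open("article.txt", "w").write(html_to_text(html))
# Demonstrated here on a small HTML string instead:
sample = "<h1>Global climate highlights</h1><p>2023 was a <b>record</b> year.</p>"
print(html_to_text(sample))
```

A plain text file works just as well as a PDF for this purpose, since what ChatGPT ultimately reads is the text.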
37. 5.10 (ChatGPT) Custom GPT: Okay. Let's talk about custom GPTs. In this video, I'm going to explain what a custom GPT is and show you how to make one. A custom GPT is like a chat with separate settings. You could also say that a custom GPT is an AI trained on your own data. Now, this video might be a little longer than the other videos in this course. If you're not interested, you can just skip it, but it's still good knowledge for everybody to have. A GPT is what we can see up here. Here you can see that it says ChatGPT, then Default GPT, and then Twitter Post Writer. The first one, ChatGPT, is the normal ChatGPT chat. If you click on that one, it takes us to the normal chats, where we have the normal settings. But these other ones, Default GPT and Twitter Post Writer, are custom GPTs that I have made and named. If you've never made a GPT, you're not going to have any of these; it's going to be empty up here. Or you could have a lot of them, like ten. When we click on one, we just enter a chat for that GPT specifically. Now, if we go to the regular ChatGPT chat here, you remember the episode where we clicked down here and went to Customize ChatGPT, and here are the settings for the regular ChatGPT. These settings do not apply to your custom GPTs, because custom GPTs have their own settings. That's the whole point of a GPT: a custom GPT is like a chat with separate settings. Let's go through the GPTs that I have here. If I just want to chat with ChatGPT without custom instructions, I have my Default GPT, which is basically just an empty GPT without any settings or instructions. Then I have a Twitter Post Writer. This one has been programmed to write Twitter posts specifically. If we click here on Explore GPTs, we're taken to a store of GPTs that other people have made. Here, for example, under programming, we have Code Copilot. We can click on that one, click on Start Chat, and that takes us to this GPT, which is specifically trained to code. Let's have a look at how to make a GPT. To make one, you just come down here to your profile, click on it, and then click on My GPTs. Here you're going to see a little list of the GPTs you have, if you have any. Then you can click on Create a GPT. Now, you're going to have some different windows here. The first one you land on is Configure. This is where you add your instructions and give it a name. Then if you click on Create here, you're taken to a chat, and this chat is actually where you create, build, and instruct your GPT. Like I said before with the custom instructions, this is a new way to have a settings page. Instead of having a bunch of buttons and settings, all we have is a chat, and we tell it how we want it to act. Let's make a GPT that is a long-format blog writer. Just as an example, we're going to tell it: you are an expert at writing blogs with more than 1,000 words on the topic of climate change. The next thing I'm going to do is come right here and click on Upload files. Now, you remember in the last video, we transformed some web pages into PDF files. Well, I have three different PDF files on climate change here. I'm just going to highlight those, drag them into the upload section, and upload all three. Then I'm going to come back to the instructions and give it some additional ones: all the blog posts you write are 1,000 to 1,500 words. They all have five different sections with different headlines. Put H1 tags on the titles. Put H2 tags on the headlines. This is just going to format the size and boldness of those headlines and titles. Write with comedy and passion, even if the topic is serious and informative. Now we go up to the right corner and click on Create. Here you can choose whether only you can use this custom GPT, whether anyone with a link can use it, or whether you want to publish it to the GPT store so that anybody can use it. It's published. Click on View GPT. That takes us back to the original ChatGPT interface, but now up here in the upper left corner, you can see that we have Long Format Blog Writer as an option. I'm just going to tell it to write a blog on climate change statistics in 2023. It's written out this blog. First of all, I just want to say that these things are not perfect. I checked the word count on this blog, and it's not quite 1,000 words. But if we want to fix that, we can just tell it, make it 1,200 words, and it will redo the article. It's doing the other things correctly, though. It put an H1 tag on this title; this is the size of an H1 tag in a blog. It also put H2 tags on the headlines. I told it to be comedic even if the content is serious. Let's have a look at the first section here: Welcome to 2023, a year that's sizzled in the frying pan of climate change. Let's remember one thing: climate change isn't just a passing trend. It's like that one guest at the party who just won't leave. It's definitely trying to be funny. I'm not saying it's successful at being funny, but it's certainly trying, or at least writing in a comedic style. I think you get the idea: you can instruct your GPTs in specific ways and then access them really quickly. It's like having custom instructions on your regular ChatGPT chat, but you can have a lot of them, with different GPTs for different purposes. If we go back to My GPTs, go into the Long Format Blog Writer again, and click on this little pen here, Edit GPT, that takes us back to the section where we can create our custom GPT. Right now we're on Configure. What you can do here is click on this plus, and that's going to add an image to your custom GPT. You can upload a photo or use DALL-E to create one. I clicked on use DALL-E, and if we move over here to the Create section, you're going to see that the GPT builder is currently generating a photo for this custom GPT. It made us a profile picture randomly; I didn't even give it any instructions. It just made it for me. Of course, you can give it specific instructions on how you want your image to look. Now, as you can see up here, we have a little profile photo for this GPT. This Create section is where we can update our GPT. Here I can tell it: all blog articles you write should be at least 1,000 words. Send that off to the GPT builder, and it starts updating our GPT. This is what I mean: you can update your settings just by telling it what settings you want. It answers: your request has been implemented. I'm all set to craft numerous detailed blog posts on climate change, ensuring each one is at least 1,000 words long. If you go back to Configure now and look in this box, you're going to see that it actually changed the instructions. You can either change these instructions yourself in the instructions box, or you can change them in the Create tab with the GPT builder. This is where you have to watch out, because if you start off in the Configure tab, giving it an instruction there, and then, like we just did, you come to the Create page and give it another instruction, that instruction is going to override the old one. You can't enter an unlimited amount of text in this instructions box; this is about the length and size of instructions you can have in here. I would say that there are two main custom GPT use cases that can be really helpful. One: you can create a custom GPT for your company. You can create PDF files from your website and then upload those PDF files into your custom GPT, so that your GPT is specifically trained on your company's data. You can give it a writing style that aligns with the writing of your business. The second one, of course, is just different writing styles. You could have one GPT that writes Twitter posts, one that writes blog posts, one that writes emails. You get the idea; it can be very helpful for saving time. When you're finished working on the instructions for your GPT, don't forget to come up here to the right corner and click on the green button that says Update. Boom, just like that, our new custom GPT is ready to be used. Hope this was valuable to you. I'll see you in the next video.
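For readers who eventually want this outside the ChatGPT interface: conceptually, a custom GPT's instructions act like a fixed system message that travels with every chat. Here is a rough sketch of that idea in OpenAI's chat-style message format. The instruction text is just the example from this video, and no API call is made here; treat it as an illustration of the structure, not a working integration.

```python
def build_messages(instructions: str, user_prompt: str) -> list[dict]:
    """Pair fixed, custom-GPT-style instructions with a user prompt."""
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": user_prompt},
    ]

# The instruction text from this video's example GPT.
blog_gpt = ("You are an expert at writing blogs with more than 1,000 words "
            "on the topic of climate change. Write with comedy and passion, "
            "even if the topic is serious and informative.")

messages = build_messages(blog_gpt, "Write a blog on climate change statistics in 2023.")
print(messages[0]["role"])  # prints system
```

You would pass a list like this to a chat-completion API call; the point is that the "settings" are nothing more than text sent ahead of every prompt.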
38. 6.1 Introduction to Automation: Okay. Let's talk about AI automations. For most people, this will open up a new way of thinking. Once you realize how much content is already automated, it will make you think twice when reading a blog article. Automations existed before AI. Automations and AI are not the same thing, but combining them is very powerful. You do not have to be a developer to create automations. I am not a technical person by nature, and I was still able to learn this. You just have to be willing to spend a little time learning. Why automate? Work a little more to create the automation, then don't work at all to complete the tasks. Automating computer work is like building a machine for manual labor. You could even say that the time we are currently living in can be compared to the industrial revolution, but for computer work. A few hundred years ago, people did tasks with their hands. Now a lot of manual tasks are automated by machines. Since computers came out, computer tasks have mainly been done by people's hands; people do a lot of computer tasks manually. But as time moves on, more and more computer work is being automated. Who are these videos on automation for? In these videos, we're going to cover the very basics of no-code automations. If you're a developer or you have done automations before, this might be too beginner-friendly for you. You might already know some of this stuff, and you might know a lot more than me about automations. But if you're not familiar with automations and you're not a developer, this should be very exciting to you. It presents big opportunities to save time and create workflows we could not create before. No-code tools. In these videos, we're not going to be talking about things that require you to code. We're going to be setting things up in no-code platforms, and those platforms are make.com and Zapier. These two are the leading platforms in no-code automations. They have nothing to do with AI; they existed before ChatGPT. But now that you can integrate AI and ChatGPT into these platforms, you can accomplish some really cool things. First, all software applications and features were built with code by people who knew how to code. Everything in the software world still consists of code. But now there are user-friendly programs that will handle the code for you. Your experience using them is like using any other platform, like Midjourney or ChatGPT. It's just another platform, and this is done within make.com or Zapier. Let's get into it. I'll see you in the next video.
39. 6.2 (Automation) Zapier&Make Introduction: Let's talk about make.com and Zapier. In this video, I'll just be talking about how they work and what you can do within Make and Zapier. Then in the next video, we're going to look at some actual real examples within Make and Zapier. First, there's a trigger, then there's an action, and then you can have a chain of actions. Something happens, and once that thing happens, a lot of other things happen as well. For example, you receive an email. It reads that email, then it puts the content of that email into a spreadsheet. I just want to say that this is not a course on no-code automation; this is just an introduction to it. While no-code platforms are simple, there are thousands of things you can do within them. You could learn how to set up a simple automation in a couple of hours, but you could also spend months building features in Make or Zapier. The videos in this course on automation are supposed to open your eyes to what's possible. Let's have a look at a few more automation examples. Let's say you want to keep a list of every single email address that sends you an email. You connect your email address to make.com. Then you connect a Google spreadsheet to Make. Then you create settings that enter the sender's email address into that spreadsheet each time you receive an email. Here's another, slightly longer example. You receive an email, and an app within Zapier reads that email. You give the text from that email to ChatGPT within Zapier. ChatGPT reads the email and labels it as a customer support question. This is where we can start incorporating AI. You connect a Google spreadsheet to Zapier, and Zapier puts the text from your email into that spreadsheet. Then everything from that spreadsheet is sent off to your business's customer support team. The next email you get is labeled as a sales email by ChatGPT, and that email is sent off to the sales team. Here is another example. Let's say you have an e-commerce store; you sell things online. You connect your Stripe payment account to Zapier. Then you connect your email to Zapier. Every time a sale is made, an email with pre-written text is sent to the customer, saying, thank you for your order. Here's another example. Let's say you run a small business and receive a lot of small receipts from different purchases you make. You connect a folder within Google Drive to Make. Every time you upload a file into that Google Drive folder, that file is sent to your bookkeeper. Now you can send photos of receipts and PDF invoices to your bookkeeper just by dragging them into a folder. Saves you a little bit of time. I'll see you in the next video.
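The trigger-and-action idea above can be sketched in plain Python. In Zapier or Make you configure this without writing any code; the sketch below only simulates the email-to-spreadsheet example, and every name and address in it is made up for illustration.

```python
spreadsheet_rows = []  # stands in for the connected Google Sheet

def on_new_email(sender: str, body: str) -> None:
    """Trigger: an email arrives. Action: log the sender in the 'sheet'."""
    spreadsheet_rows.append({"email": sender, "content": body})

# Simulate two incoming emails firing the trigger.
on_new_email("alice@example.com", "Hi, I have a question about my order.")
on_new_email("bob@example.com", "Please send me an invoice.")

print(len(spreadsheet_rows))  # prints 2
```

Every automation in these platforms has this shape: one trigger function, then one or more actions that run each time it fires.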
40. 6.3 (Automation) Zapier&Make Examples: Okay, let's have a look at some automations within Zapier and Make. This is zapier.com. This is an automation that creates Instagram posts automatically for us. To start off, the first task up here is connected to Google Sheets. We have a spreadsheet connected to this automation, and this spreadsheet is constantly, automatically updated with text content. This text content can be on different topics. The next task is DALL-E. As you might remember from the image section, DALL-E is OpenAI's image generator. This task takes the text from the Google spreadsheet and creates an image based on that text. The third task in this automation is ChatGPT. This task also takes the text from the spreadsheet and rewrites it in a new format of one or two sentences. Whatever the topic of the text in the spreadsheet, ChatGPT takes that topic and rewrites it into one or two sentences suitable for posting on social media. Then we have another task here with another application called Publisher Kit. This fourth task takes the text output from ChatGPT and layers it on top of the image that was generated by DALL-E, creating a new image with the text on top of the original DALL-E image. Let's say DALL-E generates an image of the ocean. ChatGPT might write a sentence about the ocean. Then this fourth task, with Publisher Kit, puts the text about the ocean on top of the image of the ocean. Now we have a finished image ready to be posted on Instagram. The last task of this automation is, of course, Instagram. Here I've hooked up an Instagram account to Zapier, and this last task simply posts the image on Instagram. Here you can also type in captions and some other information. Here's another automation. What this one does is take a link, visit that link, summarize it with ChatGPT, and then place the text summary in a Google spreadsheet. The first task, the trigger for this automation, is whenever a new row is added to a Google spreadsheet that I have; this spreadsheet only contains links. A new link is added to my Google spreadsheet. Then the second task takes that link, visits it, and reads all the information on that page. Now we have extracted all the text from a certain web page. The third task in this automation is ChatGPT. We give all that text from the link to ChatGPT to summarize, rewrite, and reformat as one or two sentences suitable for Twitter. Once ChatGPT has done that, we take the text output from ChatGPT, the one or two sentences ready to be published on Twitter, and the fourth and last task adds that text to another Google spreadsheet. That's the end of this automation, and then another automation can take over: take the text from the last spreadsheet and turn it into an Instagram post or a Twitter post. Here's another automation. This one is a little longer, a little more complicated, but I'm going to go through it quickly. What this automation does is read an email newsletter that I get every day, and then put all the links from it into the spreadsheet we were just talking about in the other automation. Every day I get an email newsletter. This newsletter always contains a lot of links, and these links lead to interesting articles on different topics. The first task in this automation reads the email that we get every day from the newsletter. The second task is ChatGPT. We give the entire email to ChatGPT, which then extracts all the links from it. The third task takes the links from ChatGPT and reformats them as regular text. Then down here, we have another ChatGPT task. This one takes the first link and puts it into a Google spreadsheet. Here we have a task that simply makes the automation wait 2 hours before moving on to the next task. Then the automation repeats itself, and it goes on and on, doing the same thing. Basically, every 2 hours it takes a new link from the email newsletter and puts that link into a Google spreadsheet. That's the end of this automation: it feeds a link to a spreadsheet every 2 hours. Once that's done, another automation can take over, read that link, and turn it into a Twitter post or a blog post or whatever you want. Let's have a look at Make. This is make.com. It's the other platform. It looks a little bit different, but it's the same concept, and it works in a very similar way. Over here on the left is our first task, our trigger. And here on the right is our first and only action. What this automation does is that whenever a new row of text is added to this Google spreadsheet, it takes that text and posts it on Twitter. As I showed you in Zapier, we have automations that take links, read them, and rewrite them into Twitter posts using ChatGPT. Then this automation takes that text and posts it on Twitter. Now, Zapier does not have Twitter, and that's why I'm doing this in make.com. Here's another example of a simple automation within make.com. This is an automation that I was actually talking about earlier. What it does is that every time I receive an email, it saves the sender's email address in a Google spreadsheet. This way, you can keep track of everybody who sends you an email. It's very simple to do; there are only two tasks. First, you hook up your email account, and then you hook up your Google account. Connect your email address, connect a specific Google spreadsheet, and then adjust a setting so that every time you receive an email, either the sender or the contents of that email are placed into the Google spreadsheet. Before we end this video: Make versus Zapier, which one should you choose? Personally, I think Zapier is a little bit easier to use. Make is cheaper and has more features. Both of them are really good. Whatever you want to do, I think you can accomplish it in either platform. However, some specific apps might only be available in one of the platforms. But almost all the major platforms, like ChatGPT, will be available in both Make and Zapier. If you're just starting out, Zapier might be a little bit easier to understand at first. But if you have a lot of repetitive tasks, Make might be a better option to save money. That's it for this video. I'll see you in the next one.
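If it helps to see the link-summary automation above as code: here is the same chain of tasks simulated in plain Python. The fetch and summarize steps are faked stand-ins for the real Zapier/Make modules (a web-request task and the ChatGPT app), so treat this as a sketch of the flow, not a working integration.

```python
def fetch_page(url: str) -> str:
    # Stand-in for the task that visits the link and extracts its text.
    return f"Full article text fetched from {url}"

def summarize(text: str) -> str:
    # Stand-in for the ChatGPT task that rewrites the text as one or
    # two tweet-length sentences.
    return text[:60] + "..."

summaries_sheet = []  # stands in for the second Google Sheet

def on_new_row(url: str) -> None:
    """Trigger: a new link row appears. Actions: fetch, summarize, store."""
    summaries_sheet.append({"link": url, "summary": summarize(fetch_page(url))})

on_new_row("https://example.com/article-1")
print(len(summaries_sheet))  # prints 1
```

Each box you drag onto the canvas in Zapier or Make corresponds to one of these functions; the platform just wires their inputs and outputs together for you.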
41. 6.4 (Automation) API: Let's quickly talk about APIs, or application programming interfaces, because to set up your own automations, this is good to understand. Of course, if you know anything about development, you already know what an API is. This is for the people who don't. If that's you, and you're interested in automation, it's good to understand what an API is. Even if you're not going to build your own automations, it's good for a general understanding of which platforms can be built upon and which can't.

An API is a connection between programs that enables them to talk to each other. When a company builds a program, first they offer it to consumers. That's usually how it goes. Then they might or might not offer it to companies or people who want to build their own thing with the program. That's when you need an API to access it. So if a company like OpenAI has a program like ChatGPT, they can offer API access to that program. With OpenAI's API for ChatGPT, you can use ChatGPT within Zapier or Make, on your website, or within an app that you're building. But if OpenAI did not offer API access for ChatGPT, then you could not use ChatGPT within Zapier, Make, your website, or your app. So if you see that a platform you use offers API access, you know that other people can build their own things on top of that platform.

Why do companies offer API access? To make money and grow, of course. API access can cost money; that's up to the company offering the API for their application. ChatGPT API access costs money, and you pay per usage. As a ChatGPT consumer, you pay $20 a month and get a certain amount of usage. By contrast, if you provide ChatGPT access to your own consumers through OpenAI's API, you have to pay OpenAI for every single task that ChatGPT performs. If you build an app that uses ChatGPT's API, every time one of your users sends a query through ChatGPT within your app, you pay OpenAI.

So let's quickly go to OpenAI's API page. Just go to openai.com and log in, and you'll land on a page that takes you either to ChatGPT or to their API page. Now, this is their API page, and there are a lot of other pages on it, which makes it perhaps a little complicated if you're new. But basically, here on the left, if you open up this menu, you can click on API keys. Here you can create a new key, and it will show up as a little snippet of code that you can then copy and paste into Zapier, Make, your website, or wherever you want to use it, in order to connect your ChatGPT account to that platform.

Over here on the left in the menu, if we click on Usage, we can see our ChatGPT API usage. If I hover my mouse over some of these columns, it tells me the amounts of money I've spent on GPT-4, GPT-3.5, and image models, which would be DALL-E. Here, for example, on this day, I spent slightly more than $1 on ChatGPT's API. Of course, if you build an application on this and you get a lot of customers, OpenAI is going to make a lot of money off of you, because you're not paying them $20 a month for the API access; you're paying them every single time somebody uses the chat in your app. It's not very expensive, but now you might understand why other programs online that have built their own applications using ChatGPT have to charge money for them: it's actually costing them money to provide you that service. The API access for the free version of ChatGPT, GPT-3.5, is very cheap compared to the one for GPT-4, which is the better version.

Just as another example, if we head over to musicfy.lol, one of the platforms that we went through in the section on audio, up here in the menu you can see that they have an API page. As soon as we see this API access, we know that we can use their program to build our own program. If you want to build your own app that makes music, you don't have to develop your own system; you can just build the app and use their API access to use Musicfy's system. That's it for this video. I'll see you in the next one.
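If you want a feel for how that pay-per-usage pricing adds up before moving on, here's a minimal Python sketch. The per-1K-token price and the usage numbers below are made-up placeholders, not real OpenAI rates; check OpenAI's pricing page for current figures.

```python
# Rough sketch of pay-per-use API pricing for an app built on an LLM API.
# Prices and usage numbers are HYPOTHETICAL placeholders for illustration.

def estimate_monthly_cost(requests_per_day: int,
                          tokens_per_request: int,
                          price_per_1k_tokens: float) -> float:
    """Estimate 30 days of API spend: every user query costs you money."""
    daily = requests_per_day * tokens_per_request / 1000 * price_per_1k_tokens
    return round(daily * 30, 2)

# Example: 500 user queries a day, ~800 tokens each,
# at an assumed $0.002 per 1K tokens.
print(estimate_monthly_cost(500, 800, 0.002))  # 24.0
```

The point of the sketch is the shape of the cost: it scales with every single request your users make, unlike the flat $20/month consumer subscription.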
42. 6.5 (Automation) Where to get information: Let's talk about where to get information for automated content. Before looking at how to make an automation for automated blogs or social media posts, we have to understand where the information for those posts can come from, because the ChatGPT API for GPT-4 does not include the feature to search the internet. When you're using ChatGPT in Zapier or Make through OpenAI's API, you can't tell ChatGPT to search the internet. ChatGPT's API will base its answer in your automation on the information that ChatGPT already has. Lucky for us, it has a gigantic amount of information. When you're using ChatGPT within ChatGPT, as a consumer, you can tell ChatGPT to search for the latest information on electric cars, and it will go on the internet and search for you. But when you're using ChatGPT right here within Zapier in an automation, through OpenAI's API for ChatGPT, you cannot prompt ChatGPT to search the internet.

Our process for automating content is to find information on a topic, convert that information to text, give the text to ChatGPT, repurpose the text into your own brand's voice and your own format with ChatGPT, and then make a blog article or social media post from that text.

So what does this mean for us? It means we can't make an automation where ChatGPT goes on the internet and searches for the latest information every day. But we can make an automation where ChatGPT creates new content about dogs every day, because ChatGPT already knows all the basic information about dogs. If you want to automate content about very specific information, or content about the latest information, you will have to get that information somewhere else and then give it to ChatGPT within your automation to reformat it.

Here are our options for how to get information. We can get information from links that we turn into text and give to ChatGPT, or we can get information from PDF files that we give directly to ChatGPT, or we can use knowledge that ChatGPT already has.

How should you think if you want to use information that ChatGPT already has? ChatGPT knows most basic knowledge that has existed on the internet for a while. Automating content about fishing, dogs, hiking, travel tips, exercise history, or another broad topic will work, because ChatGPT already knows most information about those topics. But automating content about the latest electric cars will not work so well, because it relies on very specific information about the latest car models. If you're using ChatGPT as a consumer, it could go and search for information about the latest car models, but through the API it will not know the latest information about the latest cars. GPT-4 does have information about companies. For example, it might be able to tell you specific information like revenue for a big company like Apple. It might even have some information about smaller companies, but there will be a lot of information in these areas that it does not have. Then we have to use other methods for retrieving information.

If you want to feed your automation specific information, you must find other sources for that information. These sources can be newsletters, blogs, RSS feeds (which basically update whenever a website is updated), websites, podcasts, or any other source that you're creative enough to figure out on your own.

Now, about converting your information into text. If your information comes from links, then you have to use an app within Zapier or Make to read each link and convert everything on the linked site into text, text that you then give to ChatGPT in your automation. You can also convert it to a PDF that you give directly to ChatGPT within your automation. Let's say your information source is a podcast. Then you can use a platform or an app to transcribe each podcast episode into text, and then give that text to ChatGPT in your automation. The point here is that we can only give ChatGPT PDF files or text within our automation.
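That link-to-text conversion step is something Zapier and Make apps handle for you, but just to make the idea concrete, here's a minimal Python sketch of stripping a page's HTML down to plain text. The HTML snippet is made up for illustration, and a real page would first be fetched over the network.

```python
# Minimal sketch: turn a web page's HTML into plain text you could
# hand to ChatGPT in an automation. Uses only the standard library.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = False  # used to ignore script/style contents

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.parts)

print(html_to_text("<h1>Dog News</h1><p>Dogs are great.</p>"))
# Dog News Dogs are great.
```

In practice you would point this at the article link from your newsletter or RSS feed, then pass the resulting text into the ChatGPT step of your automation.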
I just want to repeat this. Our process for automating content, if we are not using information that ChatGPT already has, is to find information sources on a topic, convert that information into text, give the text to ChatGPT, repurpose that text into your own brand's voice or your own format, and make blog articles or social media posts from that text. I will see you in the next video.
43. 6.6 (Automation) Words to avoid: Here are some more tips for writing content with AI. This is extra important if you're going to automate content, because if you don't do this, your content is not going to be very good. But of course, it applies if you're working normally with ChatGPT as well; it's not just for automating content.

There are words and phrases to avoid when writing content with AI. These are phrases that AI commonly uses, and they sound a bit robotic. Prompt ChatGPT not to use these words when creating content in your automation. I'm going to show you a list of words and phrases that you should avoid. Here's an example of these words and phrases being used in a sentence written by AI: "To unlock the potential of groundbreaking technologies, we must embark on a journey to delve deep into research and innovation. In the realm of artificial intelligence, it is crucial to foster an environment where knowledge can freely flow, empowering creators and innovators to discover and master new methodologies." It just sounds robotic. It doesn't sound human, it doesn't sound good. It's not a style that I want to read.

The best way to get rid of these words is to include this in your prompt: "Never use any of the following words or phrases in any text you write." And then underneath, you add in all of these words. These words and phrases will be available for you to copy and paste into your prompt in the text for the video that you're currently watching.

So let's go through the words and phrases: delve, unleash, crucial, dive, discover, unlock, sure, master, ultimately, imperative, embark, endeavor, foster, groundbreaking, enlighten, pivotal, empower, dive deep, in the realm of, in the world of, breaking barriers, aims to, continues to push the boundaries, remarkable breakthrough, unleashing the potential, advancements in the realm, elevate, explore the world of, adhere, evolve, bespoke, paramount.

If you start creating blog posts or any kind of content with AI, you will soon start noticing a lot of these words in the text you're getting from ChatGPT or whatever AI platform you're using. In addition to including the "never use any of the following words or phrases in any text you write" instruction in your prompt, you can also give examples of how it should write, like we spoke about in the chapters on ChatGPT. You can include in your prompt an example of something you've written yourself that you like, and then tell it: this is how I want you to write, and this is how I don't want you to write. In your prompt, you have a list of things not to include in any text it writes for you. Then, as you start using your prompt, you're going to keep finding words and phrases like this that you don't like. Well, then you just copy them and paste them into this list. You might have to work on your prompt for a little bit, but once you've done it and removed all these AI words, it will really produce some great results.

Now you might be thinking that a lot of these words, like master, sure, and discover, are normal words. Maybe you want to use them in your articles. Yes, I understand that. But AI often uses them way too much, way too often, in scenarios where the words don't really fit, and that is what makes the text sound less human when you read it. I would highly recommend removing words and phrases like this in order to get a higher quality output in your automations. I'll see you in the next video.
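One practical supplement to the prompt instruction above: you can also check AI output for these words after the fact. Here's a small Python sketch that flags banned words and phrases in a piece of text; the list below is a shortened sample of the full list from this video.

```python
# Sketch: scan AI-generated text for the overused words and phrases
# listed in this video, so you can catch them before publishing.
# AVOID is a shortened sample of the full list.
AVOID = ["delve", "unleash", "crucial", "dive deep", "in the realm of",
         "groundbreaking", "embark", "foster", "pivotal", "bespoke",
         "paramount", "unlock the potential"]

def flag_ai_phrases(text: str) -> list:
    """Return the banned words/phrases that appear in the text."""
    lower = text.lower()
    return [phrase for phrase in AVOID if phrase in lower]

sample = ("To unlock the potential of groundbreaking technologies, "
          "we must embark on a journey.")
print(flag_ai_phrases(sample))
# ['groundbreaking', 'embark', 'unlock the potential']
```

As you find new words you don't like, you append them to the list, the same way you would grow the banned-words list in your prompt.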
44. 6.7 (Automation) Automatic Blogging: Okay. Now that you have an idea of how to set up an automation within Zapier or Make, we can go through a couple of examples of how to do automatic blogging. Of course, you can do automatic blogging in a lot of different ways, but here are two examples. One, you can base it on a list of keywords, or two, you can base it on the information in a newsletter.

Let's start with automatic blogging from a list of keywords. This is more suitable for broad topics that ChatGPT already has enough information about. Here you would filter out keywords that are easy to rank for and also have traffic. Then you can publish 2-10 articles every day without doing anything once your automation is set up. The purpose of this could be to drive traffic to affiliate links, to an e-commerce store, to ads, whatever you want. Or, if you work at a company or run a software business, you can drive traffic to your own website and your business by creating articles on keywords that have traffic.

A great example of this is Zapier themselves. As we've talked about, Zapier is one of the automation platforms, but they also write a ton of good articles on all kinds of different topics, topics that are not related to their service. If I go to Google and type in "how ChatGPT works", what is the first result that I get? The first result is a sponsored post by Zapier, and it's an article just like the ones we're talking about now. Next up is Semrush, which is a keyword platform that we're going to talk about in a second. Then here's another article by Zapier on how ChatGPT works. The reason for these platforms to write articles about other companies, or about topics not related to their own platform, is that people will click on these articles and read about other topics, such as how ChatGPT works. But as they're reading the article, they are on Zapier's website, where Zapier can also pitch their own products and their own platform. This is a very common way for companies to grow: they publish a bunch of articles on loosely related topics.

For automatic blogging with a list of keywords, you're going to need some keywords. To filter out keywords to use, you can use platforms like Semrush or Ahrefs. These are two of the major platforms on the market for filtering out relevant keywords. If you're brand new, just starting out, I would recommend Semrush, because you can use it a little bit for free every day, and you can get a free trial for a week as well.

Here I'm quickly going to show you how Semrush works. This is semrush.com. Over here on the left, if we go to the Keyword Magic Tool and enter "fishing" as a keyword, it's going to show us a bunch of results for different keywords related to fishing within the United States, and here you can change the country if you want to see results for another country.

How do you choose keywords? You should choose your keywords on two metrics: the keyword difficulty score, or KD, and the search volume, which should be at least a few hundred searches per month. The keyword difficulty score goes from 0 to 100, and you want keywords that are easy to rank for on Google, with a KD score of 0-25. If we go back to Semrush, we currently have a search for the keyword "fishing". So what do we get? "Plenty of fish", "fish", and so on. Here we can see that the keyword "plenty of fish" has over half a million searches every single month under Volume, and it has a keyword difficulty score of 78. That is a high keyword difficulty score: it's going to be very hard to show up on Google with an article about "plenty of fish" when somebody searches for "plenty of fish". What you want to do is click here on KD, and once you've clicked on it, it will sort these fishing keywords by the ones that are easiest to rank for. So here we have a bunch of keywords that are easy to rank for, but as you can see, they don't have that much volume. Now, if you're going to put out ten articles every month, that's okay; you could probably go with some of these. But otherwise, I would recommend coming over here to Volume and entering a minimum of, say, 400. Now we're getting all the keywords on fishing that are easy to rank for and also have at least 400 searches per month. If you're on a pro plan, which you can get a free trial of for a week, you can highlight all these keywords, download a list of them, put them in a spreadsheet, and automatically make blog articles for these keywords. If the keyword difficulty score is low enough and you have a good blog, there will be a fairly high chance that people will find your blog articles about these keywords when they search for them, and as we know, several hundred people are searching for them every single month. So that's how you choose keywords.

Let's have a look at the process for automatic blogging with a list of keywords. First, you get a list of keywords for your topic. Then you build a blog website in WordPress. Then you build the automation to write blog articles for your blog website and automatically publish them on WordPress. In the automation, for example, every 5 hours a new keyword could be inserted into a Google spreadsheet. When that keyword is entered into your spreadsheet, ChatGPT writes a title and a blog article. The last step, of course, is to automatically publish the article on your blog. Once you set this up and it works, every 5 hours a new article will be published on your website, and you don't have to do a single thing.

Let's have a look at an example of automatic blogging using a newsletter for retrieving the information, and the process for doing this. First, you subscribe to an email newsletter in your niche. Then, in your email settings, you forward all newsletters to a receiving address that is connected to your automation within Zapier or Make. Then, in your automation, you summarize and rebrand the text from the newsletter with ChatGPT, or, like I showed you earlier, you can make Zapier or Make visit all the links in the newsletter and write blog articles based on the information in those links. Then you automatically publish each article on your blog.

That's two examples of how to do it. Again, this is not a course on Make and Zapier. If you're interested in actually doing this, there are a lot of other courses on Make, Zapier, and automations, and you can find free information on YouTube. It takes a little bit of learning, but once you learn it, it's pretty simple. Of course, there are a bunch of other ways to do automatic blogging; these are just two of them. But hopefully now you get an idea of how it can be done. I will see you in the next video.
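As a supplement to the keyword-picking step in this video, here's a small Python sketch of the same filter you would apply in Semrush: keep keywords with a KD score of 0-25 and at least 400 searches per month. The keyword data below is made up for illustration; in practice you would export it from Semrush or Ahrefs.

```python
# Sketch of the keyword filter: easy to rank for (KD <= 25)
# and with meaningful traffic (volume >= 400 searches/month).
# The data is invented for illustration.

def pick_keywords(keywords, max_kd=25, min_volume=400):
    return [k for k in keywords
            if k["kd"] <= max_kd and k["volume"] >= min_volume]

exported = [
    {"keyword": "plenty of fish",     "kd": 78, "volume": 550_000},
    {"keyword": "best fishing knots", "kd": 18, "volume": 1_300},
    {"keyword": "fishing rod repair", "kd": 22, "volume": 480},
    {"keyword": "rare lure brand",    "kd": 12, "volume": 90},
]

for k in pick_keywords(exported):
    print(k["keyword"])
# best fishing knots
# fishing rod repair
```

"Plenty of fish" is dropped for its difficulty score, and "rare lure brand" for its tiny volume, exactly the two rejection reasons described above.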
45. 6.8 (Automation) Automating Social Media: Okay, let's talk about automating social media posts. For example, you can have one automation that posts the same text on Twitter, LinkedIn, Facebook, and Instagram. Here's how to do it: you choose a way to retrieve information, like we just talked about, and rebrand it into your own voice, format, and length. Twitter and LinkedIn are easy, since they're mainly based around text. Facebook also works pretty well for posts that are just text without video or photo. And on Instagram, like I showed you earlier, you can layer the text on top of an auto-generated image. I know there are a lot of Instagram accounts that already do this, and some of them have millions of followers.

All you really have to do is create the automations, hook up your social media accounts to the automation, and then focus on writing a good prompt. The prompt should produce good text output for these platforms: it should be able to sufficiently summarize an article, a website, or some other source with a lot of information on your topic, and rewrite it in a short format, in a couple of sentences, so that it's suitable for Twitter, LinkedIn, Facebook, or Instagram. You could have one automation that posts the exact same text on all these platforms. You could also have different prompts for the posts you write for Twitter and LinkedIn. Maybe you want to write longer posts for LinkedIn, but for Twitter you want your posts to be just one or two sentences.

I just want to show you a quick example of how it works here. Here's the automation for Instagram posts that I showed you earlier. Right now we're in Zapier, and if we click on the last task here, which publishes this photo to Instagram, you can see the step details. Here it lists the Instagram account that we're posting on, and here's the image URL for the photo we're going to post. This is the photo being posted. Not only can you automate the photo that is being posted on Instagram, you can also automate the process of writing different captions for each post, or including different hashtags in different posts.

What about automating video: Instagram Reels, TikTok, YouTube Shorts? Obviously, video is harder to automate because it's harder to make video, but it's still doable. Here's an example of how you could use an automation to make video content. Let's say you're a musician. You create an automation that downloads short-format vertical videos that are trending, and then you automate the process of replacing the audio in those videos with your own music. Then you automate uploading these videos with your own music to TikTok, Instagram, and YouTube. Once you've built this automation, you don't have to do anything. If the videos do well, which they should because they're already trending and have a track record of success, then you'll get thousands of views with your own music.

Now, I would not say we're quite at the place yet where you can fully automate a well-edited video, but you can automate parts of the process of making a video. You could automate the process of writing a video script, generating audio for that script with ElevenLabs, or creating suitable photos for your video. For example, every day you could wake up to an auto-generated text and audio draft for a new YouTube video, and you just do the final touches. So automating part of the process is also a very good way to save a lot of time. It's a good way to be able to publish a lot of content without putting in that much work.

Let's say you want to build a brand as a business advisor, and you want to post on Instagram, Twitter, and LinkedIn, but you don't want to risk low-quality content being pushed through. Well, you could automate the process of summarizing four different newsletters and blogs about business advice. You can create an automation where every day you wake up and ChatGPT has summarized 20 articles and made ten suggestions for posts you could make, based on the best advice in the articles it summarized. Then you pick your favorites and revise them, maybe rewrite them in your own words a little bit. Once you've done that and chosen the final output, you place that text into a spreadsheet, and it's automatically posted on all three platforms, without you having to spend 30 minutes every day doing it manually.

Hopefully this is giving you some ideas of what's possible and what you can do. Maybe you can automate the entire process, or maybe you're just going to automate a part of it. Anyway, there are a lot of possibilities. That's it. I'll see you in the next video.
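The per-platform formatting idea from this video (short for Twitter, longer for LinkedIn) can be sketched in a few lines of Python. The character limit and the formatting choices here are illustrative assumptions; in a real automation, a ChatGPT step would do the rewriting, and this kind of logic would just shape the final output.

```python
# Sketch: turn one summary into platform-specific post text.
# 280 is Twitter's classic character limit; the rest is an assumption.

def format_posts(summary: str, hashtags: str = "") -> dict:
    """Build per-platform post text from one piece of source text."""
    tweet = summary if len(summary) <= 280 else summary[:277] + "..."
    return {
        "twitter": (tweet + " " + hashtags).strip(),
        "linkedin": summary + "\n\n" + hashtags,  # longer form is fine here
        "facebook": summary,
    }

posts = format_posts("Dogs can learn over 100 words.", "#dogs")
print(posts["twitter"])  # Dogs can learn over 100 words. #dogs
```

One automation step like this is how you'd post the "exact same text" everywhere while still respecting each platform's conventions.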
46. 6.9 (Automation) Automation Ideas: Okay, here are more automation ideas. Not all automations that can help you are centered around creating content. Most of the automation ideas we've talked about so far have to do with content, but here are a few additional ones that I think could be helpful to a lot of people.

If you don't want to automate creating the content, you can automate posting the same video to several platforms. If you're making a vertical short-form video, you can have an automation that posts it with the same captions to TikTok, Instagram, and YouTube all at the same time, and that's going to save you a bunch of time. Same thing if you have a photo: you can post it to Instagram, Twitter, LinkedIn, and Facebook all at the same time with an automation. These automations are a lot easier to create than the ones that create the content for you, so this is really a no-brainer if you're making content for several platforms.

You can track engagement from different platforms in a spreadsheet. If you're creating content on several platforms, instead of checking the engagement separately on TikTok, Instagram, Twitter, or wherever it might be, set up an automation that feeds your engagement from the different platforms into a spreadsheet. It will be automatically updated, and it will be way easier for you to keep track of your engagement across platforms and see which platforms are performing better than others.

You can automate email responses. For example, you could set it up so that if you receive an email containing something that ChatGPT can recognize, ChatGPT writes an email back and sends it off, including that person's name or some other information from the email. This is super use-case specific, of course, but it's good to know that there's a lot you can do here with email.

You could make your own newsletter based on the methods we've talked about for retrieving information. Instead of using that information for blog posts or social media, you could set up an automation that writes a newsletter every week. Maybe this automation collects information from several other newsletters on the same topic over the course of a week, and maybe you have other information sources as well. Then ChatGPT can, of course, summarize all this information into your own newsletter in your own style.

You can identify companies based on their email. This is a feature in Zapier: you can get more information about the sender of each email based on the domain in their email address. Zapier can read the sender addresses for all the emails you receive, look up the domain for each sender, and give you back some information: what's the company, what do they do, how many employees do they have? If you get an email from Matt at apple.com, Zapier could look up Apple and tell you: you just got an email from Apple; it's a big company, they make phones and computers.

You could create an automation that uploads pictures or PDFs to a folder and then extracts data from the picture or PDF with ChatGPT. The most obvious use case here would be uploading receipts and invoices. ChatGPT would extract the date, the price, and the company it's from, and then insert all this information into a spreadsheet for you, so you don't have to do it manually.

Here's one that I like: you can make personalized pitch emails. Let's say you have a list of emails, names, company names, and websites. Maybe you have 1,000 companies with 1,000 people that you want to email, but you don't want to send the same email to everyone. Well, you could set up an automation where, just like in the ones I've already shown you, Zapier or Make goes through each company's website, extracts data about the company from that website, and then gives that data to ChatGPT, and ChatGPT writes a little summary for a personalized pitch email that you then send off to that company. You don't even have to send off the emails automatically; you could just have ChatGPT write that one sentence of personalized text for each company using an automation.

I hope you enjoyed these automation ideas. Hopefully you came up with some of your own, for your own use cases and purposes, that could benefit you. I really believe that everybody who works with computers can save a bunch of time with automations if they just put a little bit of effort into building them.
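To close out the personalized pitch-email idea: here's a minimal Python sketch of the merge step, where the one-sentence company summary (which ChatGPT would write from the company's website in your automation) gets dropped into an email template. The names, summary, and template wording below are all made up for illustration.

```python
# Sketch: merge a per-company summary into a pitch-email template.
# In the real automation, ChatGPT writes the summary and Zapier/Make
# loops over your list of 1,000 contacts.

TEMPLATE = """Hi {name},

I saw that {summary} I'd love to show you how we could help.

Best,
Alex"""

def build_pitch(name: str, summary: str) -> str:
    """Fill the template with one contact's name and company summary."""
    return TEMPLATE.format(name=name, summary=summary)

email = build_pitch("Matt", "Apple makes phones and computers.")
print(email.splitlines()[0])  # Hi Matt,
```

The value of the automation is that every one of the 1,000 emails gets its own summary sentence, so no two recipients receive identical text.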