Transcripts
1. INTRODUCTION: Discover the power of Kling AI, the leading video generation tool that transforms simple text and images into professional, captivating videos with ease. Hi, everyone. My name is Faizan Amjad, and in this course, you will learn to generate videos from scratch using the stunning features of Kling AI, like text-to-video and image-to-video generation. I'll guide you through every step, from generating hyperrealistic images to seamlessly integrating them into videos. By the end of this course, you will be equipped with the skills to produce stunning video content effortlessly. So what are you waiting for? Join now and unlock your creative potential with Kling AI.
2. How to Get Access to Kling AI: Welcome. To get access to Kling AI, search for Kling AI and open this link, klingai.com. Once you click the link, it will take you to this page. This is the interface of klingai.com. Here, you can sign in for free, and you will get 66 credits on a daily basis, and the credits will reset daily as well. Now I'm going to sign in for free. If you have an account, you can enter your email and password. But if you don't have an account, sign up for free. Click here. Now click on this Next button. Go to your mail, and you will get this verification number. Copy and paste this number right here, and now click on this Sign In button. And now, as you can see, guys, we have 66 credits. Now I am going to introduce you to the interface of Kling AI. Here, as you can see, guys, if you want to generate AI images, then you can click here. And if you want to turn these images, or any image, into videos, then you can click here. You can also generate your AI videos as well, and there will also be a video editor feature soon, so make sure you stay tuned. Right here, as you can see, guys, these are some examples created with Kling AI. Now, if you scroll down, as you can see, guys, these are some AI images and videos created with Kling AI. The users of this website created these examples, so you can get inspiration from these images and videos as well. You can click any image or video, and here, as you can see, guys, this is the prompt of this image. This is the image ratio, and this is the reference image, which is none. So this user created this image using only the prompt, with no reference image. So this is the interface of Kling AI.
3. New Interface of Kling AI: Welcome back, everyone. So Kling AI changed their interface. If you learned Kling AI from the previous interface, don't worry; they just polished the interface. They did not completely change it. So if you now go to the Kling AI website, this is the new interface. Now you can hover your mouse over this Creative Studio, and as you can see in AI Tools, we have video generation and image generation. We even have a new feature called Sound Generation. You can even generate sound from now on. We have effects, and if you click here on More Tools, it will show you all the tools that are available on this website. Now we can simply click here on this Create button. We can close this tab for now. And now, as you can see, we have this interface. Now we have this homepage. You can explore. We have the assets. If you go to the assets, these are all the videos that I have created using Kling AI, and we also have the images as well. You can go to the effects to create some effects. You can go to the images. We have a new model for images called KOLORS 2.0. We even have a feature called image editing. We have a new Kling 2.0 model. If you go to the videos, as you can see, this is a brand new interface for video generation. This is text to video. This is image to video. You can even switch the model from here, like we did in the previous interface. Now this section is completely changed. Now, as you can see, this section is polished. Now we can click here. We can view the images only. We can click here and switch to videos, and now it will only show you all the videos that we have created using Kling AI. Now you can select your video from the small thumbnails. You can even show the audio. Currently, I have not generated any audio; that is why no audio shows here. Right here, as you can see, we have effects. If you click here, you can go to the homepage, you can explore, or you can even go to your assets. Now right here, we can extend the video. Right here, we have the lip sync. We have AI Virtual Try-On, sound generation, and video generation. Currently, we are here in the video generation. That is why we have this green logo right here. If you go to the image generation, now the image generation logo will turn green. So basically, this is the new, polished interface of Kling AI.
4. How to Generate Images in the New Interface: Welcome back, everyone. So Kling AI recently changed the interface; they just polished it. If you want to generate images in the new interface, simply go to Images, and as you can see, we have this interface. Now the videos are also shown here. So if you want to see only the images, you can click here and switch to images. And now it will show you only the images that you generated using Kling AI. Now we even have a new model for image generation called KOLORS 2.0. KOLORS 2.0 has a new feature called Restyle. What Restyle does is take your image as a reference, and if you type in any style, it will change your image according to that style. Now, right here, we can change the standard mode to high resolution. You can change the output. You can change your resolution and aspect ratio from here. Now if you switch to subject, as you can see, it automatically switches to the available model, which is KOLORS 1.5. If you switch to face, it will also switch to 1.5, because the KOLORS 2.0 model does not support subject, face, and entire image reference. If you switch to entire image reference, the model again switches automatically to KOLORS 1.0, because the entire image reference is only available for KOLORS 1.0. If you switch to Restyle, then it will automatically switch to KOLORS 2.0. Now you can type in any prompt and generate your image. So this is how you can generate images in the new interface of Kling AI.
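The automatic model switching described in this lecture can be summarized in a short sketch. This is not Kling AI's API, just a hypothetical memo of the behavior the interface shows, with reference-type names chosen for illustration:

```python
# Hypothetical memo of Kling AI's reference-type -> model switching,
# as observed in the interface; not an official API.
REFERENCE_MODEL = {
    "subject": "KOLORS 1.5",       # subject reference switches to 1.5
    "face": "KOLORS 1.5",          # face reference also uses 1.5
    "entire_image": "KOLORS 1.0",  # entire-image reference is 1.0-only
    "restyle": "KOLORS 2.0",       # Restyle is the new 2.0 feature
}

def model_for_reference(ref_type: str) -> str:
    """Return the model the interface auto-selects for a reference type."""
    return REFERENCE_MODEL[ref_type]

print(model_for_reference("restyle"))  # KOLORS 2.0
```

In other words, the reference type you pick decides the model for you; you only choose the model directly when you are not using a reference.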
5. How to Generate Videos in the New Interface: Welcome back, everyone. So how do we generate videos in the new interface of Kling AI? Kling AI recently changed the interface; they just polished it. For video generation, go to Videos. And as you can see, we even have a new model for Kling called Kling 2.0 Master. Now you can go to text to video, and you can also select a preset. You can select your lens, shot type, lighting, frame, atmosphere, et cetera. Now, if you want to use the motion brush feature, you can go to image to video, and as you can see, we don't have any motion brush feature. So to access the motion brush feature, you have to click here. You have to change your model to Kling 1.5. As you can see, Kling 1.5 standard mode does not support the motion brush, so you have to switch your mode to professional, which is for VIPs. Now you can access the motion brush feature. You can also access the camera movement. So now we have this polished interface. We have this creativity and relevance feature right here. You can select your outputs, and you can even change your duration from 5 to 10 seconds. Now, if you switch your model to Kling 1.6, as soon as we switch, as you can see, we have this message: currently unavailable on Kling 1.6, please switch to 1.5. Now you don't have to go back to this area to switch your model to 1.5; you can switch your model from here. Now we can confirm it. And as you can see, now we have switched our model to 1.5, and now we have the motion brush and camera movements available for us. Now, again, you can go to the elements, and if you switch your model to Kling 2.0 Master, currently it does not support elements, but we do have multiple elements, and they are also available for Kling 1.6. So the Kling 2.0 Master model only supports text to video and image to video, at 720p resolution only. So basically, this is the new interface of Kling AI.
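As a quick recap of the feature availability walked through above, here is a small study note in code. The model and feature names follow the narration of this lecture, not an official Kling AI API, and availability may change, so treat this purely as a memo:

```python
# Study note: which video features each Kling model supports, per this
# lecture's walkthrough. Not an official API; availability may change.
VIDEO_FEATURES = {
    "Kling 1.5 Professional": {"motion_brush", "camera_movement"},
    "Kling 1.6": {"elements"},
    "Kling 2.0 Master": {"text_to_video", "image_to_video"},  # 720p only
}

def supports(model: str, feature: str) -> bool:
    """True if this memo lists the feature for the given model."""
    return feature in VIDEO_FEATURES.get(model, set())

print(supports("Kling 1.5 Professional", "motion_brush"))  # True
print(supports("Kling 2.0 Master", "elements"))            # False
```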
6. Kling AI Interface: Welcome back. Now we will create some basic images so you can better understand Kling AI. Click on these AI images. It will take you to this interface. Right here, you can type any prompt. You can generate any image. You can also reference an image if you want to generate a similar result according to your reference. Here, you can select your aspect ratio, and here you can select the number of generations for your specific prompt. Your results will be displayed in this portion of the website. Kling AI takes 0.20 credits to generate one image. We are generating four images, so it will take 0.80 credits. For a video, it will take ten credits. You will get 66 credits, and they will expire in 24 hours. If you, for example, use 20 credits out of these 66, the balance will reset to 66 the next day. Now let's type a simple prompt. So I'm going to type: a glass of water on top of a table. Now, as you can see, guys, if I lower the generation number, it will also lower my credits. So let's generate four generations, and let's select this aspect ratio. Now, we don't have any reference image, so let's click on this Generate. Now keep this in mind: we are using the free account, so it will take some time to generate your images or videos. Now you don't have to wait for your generation; you can type your next prompt as well. Now, as you can see, guys, we have these images, and these are stunning images. Now you can type any prompt and generate any image that you want. Now, as you can see, guys, if I hover my mouse over an image, we have three options. One is Enhance, which is available for premium users. Don't worry. I have a premium account, so I'm going to show you everything that this website has. Right here, as you can see, we can use this image as a reference, and right here, we can bring this image to life. So I'm going to select this image. Bring this image to life. And as you can see, guys, it will take ten credits to generate this video. Click on this Generate button. Now, as I told you earlier, you don't have to sit here and wait for a couple of minutes, or maybe hours, to watch your image converting into a video. You can generate the next video if you want to. So right here, as you can see, guys, if we go back to the dashboard, let's close this, and we click on these AI videos. Now right here, as you can see, guys, this website is now generating our video. Here we have the first option, which is text to video. So basically, this is similar to the AI images. If I hover my mouse right here, we can go to the AI images. We can type any prompt. So this is similar to the text to video. Now we can generate any video we want; just type a prompt. You can select this prompt. For example, you can select your camera movement, so I'm going to select tilt, and now let's generate this video. As you can see, guys, I can generate more than one video, and I don't have to wait for it. So you can generate multiple videos at the same time. Now, I know there are so many features on this website. So don't panic. This is just an introduction to the website. I will introduce you to every feature of Kling AI, and we will create some stunning images and videos using this AI tool. So here we have the result. It turned my image into this beautiful video. Now we have this video as well. We generated this video through this prompt. Now, there is a huge difference between the premium account and the free account of Kling AI. The first difference is you can't use Kling 1.5 if you are using the free account of Kling AI. The second difference, which is very big, is that it takes longer to generate your clips on the free account than on the premium account. So, believe it or not, it took me two days to generate this clip. Sometimes it generates your clip very fast, and sometimes it generates your clip very slowly, to the point that it fails to generate your result. And once it fails to generate your result, it will return your credits. So as you can see, guys, my credits are more than 66. The free account is limited to only 66 credits per 24 hours. Why do I have more than 66 credits? Because I have so many failed generations; the website returns those credits and adds them to your current balance. Your credits will expire in 24 hours. No matter whether you use all those credits or not, the balance will reset to 66 credits. So this is just a basic introduction to Kling AI. I have a premium account, so we'll dive deep into this tool, and we will explore its full potential to generate desired results. So I will see you guys in the next lecture.
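The credit figures quoted in this lecture (0.20 credits per image, 10 credits per video, a 66-credit daily allowance on the free account) can be checked with a little arithmetic. A minimal sketch, assuming those quoted prices are still current:

```python
# Back-of-the-envelope credit math for the prices quoted in this lecture.
# 0.20 credits/image, 10 credits/video, 66 free credits per day; these
# figures come from the narration and may have changed since.
IMAGE_COST = 0.20
VIDEO_COST = 10
DAILY_FREE = 66

def batch_cost(images: int = 0, videos: int = 0) -> float:
    """Total credits for a batch of generations."""
    return round(images * IMAGE_COST + videos * VIDEO_COST, 2)

print(batch_cost(images=4))      # 0.8, as shown in the lecture
print(DAILY_FREE // VIDEO_COST)  # 6 videos fit in one day's free credits
```

So a free account can afford six videos, or a very large number of images, before the daily allowance runs out.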
7. Generate Stunning Images: Welcome back. Now, first of all, click here and go to AI images. There are two ways to generate your image. First is to write your prompt. Second, you can reference an image. If you want to create a similar result, then you can reference an image to create your result according to that image. Now, as you can see, guys, I have generated so many amazing images in this tool, because I write more than a few words to generate my impressive results. Now, if you are using the premium account, you can also enhance your image. So for example, if you click here, you can hover your mouse, and we have an option called Enhance. If I click on it, it will enhance my image. It will turn that image into a higher definition. Now, as you can see, guys, this is my result. This is the previous result, and this is my new result. You can also hover your mouse right here. If you are using the premium account, then you can also download this result without any watermark. On a free account, you can download any image that you want, but it will have a watermark on it. This is a premium account; we don't have any watermark. And if you use a free account, then you will have a watermark like this one. Now if you want to create impressive results, then you have to write more than a few words. So how can you find your words? Go to designer.microsoft.com, sign in, click here, and select Images. Now we can close this tab. Now what you have to do is click here and write your prompt. So for example, I'm going to write this. This is just a simple prompt: close-up of a burger in a hotel. Now there is an option called Enhance Prompt. What it does, if I click on this Enhance Prompt, is add a few words to make this prompt impressive. Now we can copy this prompt. We can go to Kling AI. Let's paste this prompt, and we can select our resolution. You can select your ratio. So I'm going to select this one. You can select the number of generations that you want. I'm going to select four. And now let's click on this Generate. Now, let's go back to Microsoft Designer, and we can select any resolution right here. Now we can click on this Generate button, and it will generate this result. These are all our images that we generated through this prompt. Now we can enhance, and we can also reference that image as well if you want to. We can also bring this image to life as well. Now, this is the Microsoft Designer result, and I like this one better than the Kling AI one. So what we can do is just click on this download, go to Kling AI, and click here to reference that image. Now before we generate the result according to the reference image, there is a slider. We have zero, plus, and three plus; right here, we have minus and three minus. So this is the strength of your reference. If you place your strength at the zero position, it will generate your result according to this exact image. If you place your slider on the plus side, it will just add something to your image. And if you place your slider on the minus side, then it will take something from your image. I'm going to show you all the results according to this slider. So first of all, let's try the zero position. And by the way, if you reference any image, then it will take 0.40 credits to reference that image. So without any reference, you can generate four images for 0.80 credits, but we have a reference, so now we have 1.20 credits. Now, click on this Generate button to generate the results according to the reference. We don't have to wait until the result is generated. We can take this slider to the plus side, and now let's click on this Generate button. Now I can take the slider to the three plus side, and now we can generate results. Now take the slider to the minus. Now we have these results. The strength is set to zero, and the results are amazing. Now, if you set your strength to plus, it will add something to your images. As you can see, guys, if you go to the original image with the strength set to zero, we don't have anything on our plates, but if you go to the plus strength, as you can see, guys, it just added something to your image. So we have these bread pieces added to our burger. And if we go to the triple plus, it adds something more to your image. At the triple minus, it just ignores my hotel room and places my burger onto a table. And at just one minus, it takes something away from the image. So to explain this strength: if you set your strength to zero, it will just generate the exact same result according to your reference. But if you place your slider on the plus side, it will add something to your image. If you take your slider to the minus side, then it will take something from your image. There is another example that I want to explain to you. With this prompt, I have generated these results. With the reference image, with the strength set to the plus sign, as you can see, guys, it just added something to my image. With the strength set to zero, it generates the exact same results. And with the strength set to the triple minus, it takes everything from my image and just generates a vector version of the image. Now, when I was revisiting my results, I noticed something. With the strength at zero, you can see we have these normal images, but with the strength set to plus, check out the size of the burger. The burger is also getting bigger. So this is the normal size. This is the one plus size, and this is the three plus size. Check out the burger size. The burger is getting bigger. Now, if we go to the three minus, the burger is getting smaller. And this is only the one minus result. Now, in my second example, I noticed this as well. With this reference, my subject is slim, but with the strength set to three pluses, as you can see, guys, my subject is getting fatter. With zero strength, we have the normal size, and with the minus sign, it gets smaller and smaller. It also removes something from my reference, and it just generates this image. So you have to keep this in mind when you use reference strength to generate your image. This is how you can generate AI images on klingai.com. Goodbye. I will see you in my next lecture.
8. AI Virtual Try-On: Welcome back, everyone. So if you go to the AI images, we have a new feature called AI Virtual Try-On. You can try on any shirt, any pants; you can try on any garments on yourself. Or, if you scroll down, you can use these models as well. And as you can see, guys, these are some examples. So we have two options. First, we have single garments, and then we have multiple garments. So first of all, I am going to select this shirt. Now, if you scroll down, I am going to use this model. And now let's generate. Let's generate one output. And as you can see, guys, it is now generating this garment onto this model. And as you can see, guys, just like that, we have this garment on this person, or on this model. Now let's try on multiple garments. So first of all, I'm going to upload the top, so I'm going to select this shirt. You don't have to remove the background of the garment; you can use this image as well. Now I can use these black pants. Now if you scroll down, we have another option called Custom. Now I'm going to upload my image right here. Now the image is uploaded successfully. Let's generate these garments onto myself. And as you can see, guys, we have this shirt and pants right here. Now, if you hover your mouse over your image, we have another option called Expand. Now we can click on it, and we can expand to a ratio. So I'm going to select this ratio, and let's click on Expand Image Immediately. This way, you can change your vertical images to horizontal; you can change your image to any other aspect ratio. And as you can see, guys, this is the final output. Now I'm going to show you another example. So as you can see, guys, this is the bottom and this is the top. I'm going to click on this reupload. And if you have an image like this, you can select this image, and it will automatically crop the image and select the bottom. And now if I reupload and select this image, it will automatically select the top, as you can see. Now let's click on the Generate. And as you can see, guys, we have this result. Now, if you open this image, let's download this image. Let's open this image. And if we zoom in, as you can see, guys, we have a little problem right here. If the model has a beard, you will get this kind of result. But let's change the model to the recommended one. And this time, let's select this model and click on this Generate. Now, as you can see, guys, we have this result. Now I'm going to select a different image this time, and now let's click on Generate and increase the number of outputs. And as you can see, guys, now we have these outputs. The results are now pretty amazing. Let's expand this further to a vertical format, and here we go. So this is how we can try on different garments on a model, or on yourself, using the AI Virtual Try-On feature of Kling AI.
9. KOLORS 1.5 vs KOLORS 1.0: Welcome back. So we got a new model for image generation called KOLORS 1.5. Before this model, we had 1.0. Now I'm going to compare both of these with the same prompt. These examples are from 1.5, and as you can see, guys, with this prompt, we got these results. Now, with the exact same prompt, with 1.0, we got these results. Now let's see another example. With this prompt, this is 1.0, which is not bad, but with 1.5, we got these results. Now, the major difference between 1.5 and 1.0 is that 1.5 generates realistic images. On the other hand, 1.0 generates impressive images, but they look like they were generated by AI. And as you can see, this looks like an artificial image. But if you look at the result from 1.5, it looks more realistic to me than 1.0. We have another example. With this prompt, these are the 1.5 results. And as you can see, guys, the results are looking pretty good. Now, with the same prompt, we have 1.0. And as I told you earlier, this looks more artificial to me than 1.5, because 1.5 produces a more realistic image than 1.0. Now we have another example. With this prompt, we got this result, which is not bad. But with the exact same prompt with 1.5, we got these results. Now you tell me which one is good, this one or this one. Obviously, 1.5 is looking pretty good. Now with this prompt, we got these results, and with the exact same prompt with 1.0, we got this result. With this prompt, with 1.0, we got these results, and with 1.5, we got these results. Now, in this example, 1.0 looks beautiful, but 1.5, as I told you earlier, pushes the image more toward realism than imagination. 1.0 pushes the image toward the imagination kind of thing, but 1.5 generates a realistic kind of image. Now, if you type something like pure imagination in the prompt, as you can see, guys, this is a pure imagination scenario. With 1.0, we got these results. With 1.5, we got these results. So in this example, 1.0 generates some impressive results, because this is pure imagination rather than the real world. 1.5 tries to generate realistic results, but I think 1.0 wins this one. Now with this prompt, this is a pure imagination scenario. These are the 1.0 results, and with 1.5, with the same exact prompt, these are the results that we got. And 1.5 just smashes 1.0, because, as you can see, guys, it looks like a realistic image. On the other hand, 1.0 looks like concept art. Now we have another example. With this prompt, these are the 1.0 results, and with the exact same prompt, this is 1.5. And as you can see, guys, 1.5 generates a realistic image. Now I tried to generate vector art using this prompt, and as you can see, guys, with 1.0, we got these results, which are not bad. But with 1.5, it generated what I asked it to generate. Because I copied this prompt; I stole this prompt from designer.microsoft.com. As you can see, guys, this is the Microsoft Designer result. And I copied that prompt and pasted it right here, and we got this result. Now with this prompt, with 1.0, we got these results. And if we go to 1.5, we got these results, which is impressive, because if you look at the color combination, and if you look at the graph, 1.5 generates a more realistic image than 1.0. In 1.0, as you can see, the graph is not aligned correctly, but in 1.5, we got a realistic graph, a realistic figure, and a better color combination than 1.0. Now I tried to generate some 3D animated images with this prompt. We got the 1.0 results. As you can see, these are some creepy results, because I asked it to generate 3D animation. But with 1.5, we got this beautiful result, because 1.5 generates a realistic image. That is why we are getting what we asked it to generate. If you try to generate some anime-style pictures, you can also do that. With this prompt, as you can see, guys, these are the 1.5 results, which are very impressive. In this example, 1.5 looks like a finished anime-style picture, but 1.0 looks like concept art. We have another example. With this prompt, with 1.0, we got these results, and with the exact same prompt, we have the 1.5 results. Now with this prompt, we have the 1.0 result, and with the exact same prompt, we have the 1.5 result. And 1.5 generated what I asked it to generate: I asked for a man wearing a ski mask sitting on a couch with a big fire happening in the background. And this is what 1.5 generated, while 1.0 just generated these types of images, which I really don't like. Another thing that I noticed in 1.5 versus 1.0 is that 1.0 adds some artificial softening to its images, but 1.5 generates images without any artificial softening. As you can see, guys, this looks like a cinema camera shot. And if we go to the 1.0 result, these images have some artificial softening in them. Now we have another example from 1.5. I just changed some keywords in my prompt, and we got these 1.5 results. Now, as you can see, guys, in 1.0, again, we have some artificial softening. It looks like I upscaled some low-quality image, and this is the final output. With this prompt, we have the 1.5 results, and with the exact same prompt, these are the 1.0 results. Now let's talk about vector images. With 1.5, with this prompt, we got these results. With 1.0, we got these results, which are more impressive than 1.5. The color combination is good. In the background, the red color is good, but on the other hand, the 1.5 red color is killing my eyes. Now with this simple prompt, 1.0 generated these images, which have something artificial in them, as I told you earlier. But 1.5 just smashes 1.0 in this example as well. Now with this prompt, we have the 1.0 results, and with the exact same prompt, we have the 1.5 results. Now we have another example. With this prompt, I have the 1.0 result. With the exact same prompt, we have the 1.5 result. Now, 1.5 looks like I'm taking a picture of this scene with my camera, but with 1.0, we have this feeling that it was generated by artificial intelligence. Now we have another example. With this prompt, we have the 1.0 result, and with the exact same prompt, we have the 1.5 result. And as you can see, 1.5 tries to generate as realistic an image as possible. Now we have another example. This is the prompt, and this is the 1.0 result. And with the exact same prompt, this is the 1.5 result. And as you can see, guys, 1.5 generates a more realistic image than 1.0. Now we have another example. With 1.0, this is the prompt. These are some results from 1.0, and with the exact same prompt, we have the 1.5 result. With this simple prompt, a POV of a car, this is 1.5, and with the exact same prompt, this is the 1.0 result. Now, again, we have some vector images as well. With this prompt, we have the 1.0 result, but with the exact same prompt, we have the 1.5 result. So 1.5 tries to generate a realistic vector, but 1.0 has some detail in it. So when we compare both of these models, if you want to generate vector images, then I think 1.0 is the better option. But if you want to simplify your vector, then try to use 1.5, because 1.5 simplifies your vector images as much as possible, while 1.0 has some detail in its vector images. Now let's recreate this image using Kling AI, so I'm going to copy this prompt. These are the results of this prompt using Microsoft Designer. Now let's paste this prompt, and now let's generate the image. Now, another major difference between 1.0 and 1.5 is that if you want to generate one image using 1.0, it will take 0.20 credits. But if you switch your model to 1.5, it will take one credit per image. So if you want to generate four images, it will take four credits. Now let's click to generate with 1.5. Now, this is the result of 1.0 with this prompt, and the results are not bad. I'm really impressed with the output of this prompt using 1.0. Now with 1.5, with the exact same prompt, we got these results, which are also not bad, but for some reason, I kind of like the 1.0 result. Now, if we go back to the original source, on the other hand, I think 1.5 tries to generate a realistic image according to this prompt, but 1.0 generates its own version of this prompt. If you want to save your credits, then my recommendation is to use one picture: generate both models' results, and then, if you like one model's result, use that model and generate four results if you want to. Because if you generate four images, you will get some different variations. If we compare these four images, we have some tiny differences between them. The same applies to the 1.5 model. We have tiny changes across the four images. And in the final output, it just changed the camera angle.
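The cost difference between the two models, and the "test one image on both models first" tip, can be put into numbers. A sketch, assuming the prices quoted in this lecture (0.20 credits per KOLORS 1.0 image, 1 credit per KOLORS 1.5 image) still hold:

```python
# Quoted per-image prices from this lecture; these are assumptions
# taken from the narration and may have changed since.
COST = {"KOLORS 1.0": 0.20, "KOLORS 1.5": 1.0}

def compare_then_batch(batch: int = 4) -> float:
    """Credits for the recommended workflow: one test image on each
    model, then a full batch on the model you prefer (worst case: 1.5)."""
    test = COST["KOLORS 1.0"] + COST["KOLORS 1.5"]  # 1.20 credits to compare
    return round(test + COST["KOLORS 1.5"] * batch, 2)

print(round(COST["KOLORS 1.5"] * 4, 2))  # 4.0: four 1.5 images outright
print(compare_then_batch())              # 5.2: test both, then batch on 1.5
```

The comparison step costs only 1.20 credits, so it is cheap insurance before committing four full-price 1.5 generations.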
10. Generate Images With Subject Consistency: Welcome back, everyone. So if you change your model
from Cro 1.0 to or 1.5, if you go to text to Image, right here, as you
can see, girls, we have a new feature for 1.5
called Upload a reference. So now when you
upload a reference, you have to select a subject. Now we have this
image as a reference. So to create these
kind of images, you have to go to abstodGoogle. Now, once you go to
Labs dot Google, you have to select
image effects. And now if we open
my original image, this is the prom that
I use to generate this image using Labs
Google image effects. Now, once we upload the
reference, we have three options. Either we can select
this whole subject, we can select only
the phase reference or we can select
the entire image. Now you have to
keep this in mind, if you select the entire image, then the model will
switch from 1.5 to 1.0. So suppose we select
entire image. I'm going to confirm,
and as you can see, guys, now we have cloud 1.0. And now if we over the mouse onto this small
preview of this image, if we click here,
this is the edit. If we select the subject phase, and now if I click on
this confirmed button, as you can see, the model is now switched from 1.0 to 1.5. If we switch to 1.0, the reference will go on. Now let's upload the
reference again. And this time, I'm going to select the face. Let's click on Confirm. Now, once you select the face of your reference, you can type anything that you want this subject to do. If you change your reference strength to 100, it will copy the exact facial features of your subject. So let's change this to 50%. And now I'm going to type "a boy sitting on a sofa in a studio" and let's add "dramatic lighting". And as you can see, each image will take two credits with the reference, and if you delete the reference, it will take one credit per image. We have four variations; that is why it costs eight credits. And now let's generate these images. Now with the exact same prompt, let's go to the edit, and this time I'm going to select my whole subject. If you select the face, it will copy your face as a reference, but if you select the subject, it will copy your entire subject, including its clothing, et cetera. Let's click on Confirm, and let's click on Generate. As you can see, guys,
we got these results. Now as you can see, we got this beautiful result. And if we go back to the original source, as you can see, guys, we have this face as a reference, and these are entirely new AI-generated images. Now we can also change the subject's face reference as well. As you can see, guys, it copied the face reference of this image at the 50% strength we set. If we change this to 100%, it will copy the exact face. So let's do that. Let's go back to the face-selected results, and I'm going to change my reference to 100%. Now with the exact same prompt, let's go to the edit, and this time, I'm going to select the entire image. Now we have the 1.0 model, and with the reference, it will take 0.30 credits per image. So let's generate four variations. So the images are now generated. With the face reference at
100% strength, this is what we got; the same with the subject reference. Now with only the face reference, as you can see, guys, we got the result. And as I told you earlier, it will only copy your face; that is why we have different hairstyles in these four variations. Now we have the entire-image reference, and this is what we got with 1.0. Now let's select another subject. I'm going to delete this subject. If we go back to ImageFX on labs.google, with this prompt I generated this image. So let's download this image. Now I'm going to select this subject, click on Confirm, and use the exact same prompt. With 100% face reference, let's leave the aspect ratio as it is, and now let's generate these four variations. Now as you can see, guys, we got these images. Let's change the subject
reference to 100% as well, and I'm going to change my face reference to 60%, because I don't want the AI to take too much of the face reference, and I want the clothing of the subject copied 100%. That is why I changed its strength to 100%. Let's generate these results. Now as you can see, guys, we got these results. We are getting a different face because I typed "a boy", so let's change the keyword to "a man" and generate again. As you can see, guys, in the newly generated results, it takes 100% reference of my subject. If we go to the original source reference (let's go to the edit), we have this jacket, and if we go to the generated images, we got that same jacket. And now as you can see, guys, we got these results. So the reason why this
reference image feature is important is that you can now generate consistent images, and with those consistent images, you can also turn your images into videos. So if we go back to the animated character, as you can see, guys, we can take this boy anywhere. We can change "sitting on a sofa" to "sitting in a car". With this prompt, it will generate images, and we can then turn those images into videos. This way, you can create consistent videos with that same subject consistency. So now, as you can see, we have that same subject in a car. We can change "sitting in a car" to "driving a car". Let's add a comma with "dramatic lighting". Let's try it. And now, as you can see, we got these results. So now we can create consistent character videos using this reference feature of Kling AI.
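As a side note, the credit arithmetic from this lesson is simple enough to write down. A minimal Python sketch, assuming the per-image rates quoted above (2 credits per image with a reference, 1 credit without; actual pricing may change):

```python
def image_credits(variations: int, with_reference: bool) -> int:
    """Estimate image-generation credits at the rates described in
    this lesson: 2 credits per image with a reference, 1 without."""
    per_image = 2 if with_reference else 1
    return variations * per_image

# Four variations with a face reference cost 8 credits, as in the video.
print(image_credits(4, True))   # 8
print(image_credits(4, False))  # 4
```

This is why switching from one variation to four with a reference attached jumps the cost from 2 to 8 credits.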
11. KOLORS 2.0: Welcome back, everyone. So Kling AI launched their new image model, which is called Kolors 2.0. To access Kolors 2.0, you have to go to the homepage of Kling AI, and by the way, this is a new interface. We can select Image, and we can switch the model from here. I'm going to select Kolors 2.0, which is the latest model. Now we can type in any prompt. You can go to ChatGPT and ask it for some ideas. For example, I asked for ideas on a mini world for video generation, and I got these prompts for a mini world. Now we can copy any prompt, so I'm going to copy this one. You can test this prompt before you paste it into Kling AI: go to ImageFX on labs.google and paste the prompt there. It is free image generation with unlimited credits, so you can test your prompt there first and then try it later in the Kling AI model, and you won't lose any credits. Now, as you can see, we got this beautiful world. So let's paste this prompt. You can change your aspect ratio from here, and you can also change the number of outputs for your prompt. You can change the mode from high resolution to standard; I'm going to select high resolution. Let's generate four, and let's also generate four in standard mode so we can compare them later. So the images are now generated. Let's look at the high
resolution first. If you open an image, as you can see, we have this new interface for viewing an image. These are the high-resolution results of Kolors 2.0. Now we have these standard images, which are also not bad, but the high-resolution ones have some depth of field in them. You can also upscale your image, and you can expand your image as well. This is another prompt that I tried with Kolors 2.0 high resolution, and we have this result. I have this prompt where a regular soda gains superpowers like flying and shooting lasers, and with 1.5, we have this result. We have the Kolors 2.0 result right here, and we have this example, which is looking realistic. And we have this example as well. Now, if you want to view only the images, you can switch from All to Images, and as you can see, now we have only the images. We have some other prompts as well. With this prompt, we have these examples. As you can see, the images with Kolors 2.0 are looking crazy. We have this example; check out the details. And with this prompt, we have these images. And with this one, we got these crazy images. Now let's compare Kolors across all the models. As you can see, we have Kolors 1.0 with the same prompt. If I open this image, this is the result that we got using this prompt. Now with Kolors 1.5, we got this result. And with the same prompt, with Kolors 2.0 high resolution, we got these results. This is not what I was
expecting from Kolors 2.0, but this is what it is. Now with Kolors 1.0 and this prompt, we got these results; with the exact same prompt with 1.5, this is what we got. And with Kolors 2.0 high resolution, we have this insane upgrade. I mean, it looks like a real-life picture. Now we have this prompt, and with Kolors 1.0, this is what we got. With the exact same prompt, these are the 1.5 images, and with 2.0, we got these images. In this case, I think 1.5 looks more realistic to me than 2.0. Overall, 2.0 looks fantastic, but I think we have an error here: we have five fingers and a thumb. So for this prompt, I think 1.5 is the clear winner. In some cases, 1.5 performs better, but if you look overall, 2.0 beats 1.5 and 1.0 in the average scenario. Now we have this
prompt with 1.0, and we have these images; with the same prompt, we have the 1.5 images. Now with the exact same prompt, we have this image, and 2.0 performs incredibly better than all the other models. As you can see, we have a shallow depth of field and a cinematic scene. You could even generate this image into a video, and it would look fantastic. Now we have this prompt with 1.0, and this is what we got. With 1.5, we have a little bit of improvement in these images. But if I show you the 2.0 result, you will be surprised. I mean, check out these images; they look like real-life pictures to me. Now we have this prompt with 1.0, and we have these images. With the exact same prompt, this is 1.5; 1.5 gave me the images that I wanted from this prompt. And with the same prompt, we have the 2.0 result. Everything looks realistic except the birds. I think 1.5 performed better on this prompt: it gives me the bird shape that I want. But if you ignore the birds, I think everything else looks better in 2.0. We have this prompt with 1.0, and we got these images; with the same prompt, we have the 1.5 result, which is a huge jump
from here to here. And with the same prompt, we have the 2.0 result. For this prompt, I also like the 1.5 result better than 2.0. So as I told you earlier, 1.5 performs better than 2.0 in some cases. Now we have this prompt, and this is 1.0. With the same prompt, we have 1.5; as you can see, we have a little bit of improvement in the image. And with the same prompt, this is 2.0, and there is a huge jump from 1.5 to 2.0. It looks realistic. But again, I think in this case 1.5 also performs a little better than 2.0, because the 2.0 image also looks somewhat artificial. If you look at the 1.5 image, it looks nice, and I don't notice any error in it. So on this prompt, 1.5 is again the winner. Now we have this prompt, and again we have tomatoes in the image. With 1.0, we have these images, which are also not bad, and with 1.5, we have these images. And with Kolors 2.0, we have this insane image. Clearly, as you can see, we have a winner right here: 2.0 performs better on these
images than 1.5 and 1.0. Now we have this example, and before I show you the 1.0 and 1.5 results, this is the image, and this is the 2.0 result. As you can see, it looks like a picture from a game, and with the same prompt, this is 1.0, and this is 1.5. I don't think this prompt worked in all the models, because all the models gave me concept art except 2.0, which gave me a game-like picture. Now we have this prompt and we have the 1.0 images, and I'm really impressed with 1.0 as well; it performed well on this image. With 1.5, we have this insane upgrade; as you can see, from here to here is a huge jump. And with the same prompt, we have 2.0, and as you can see, 2.0 performs better than all the other models. Now we also have an upgrade
to the image viewer. As you can see, right here we have small thumbnails of all the generated images, and we can view them by scrolling down or up like this. If you open an image, you can also use the scroll wheel to move through the images, scrolling down or up like this. You can also use the keyboard's down-arrow key to move down through the images, or the up-arrow key to move back up.
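Conceptually, stepping through the gallery with the scroll wheel or arrow keys is just moving an index and stopping at both ends. A toy sketch of that behavior (an illustration only, not Kling AI's actual code):

```python
def step_gallery(current: int, total: int, step: int) -> int:
    """Move through a gallery of `total` images by `step` positions,
    clamping at the first (0) and last (total - 1) image."""
    return max(0, min(total - 1, current + step))

print(step_gallery(0, 8, -1))  # 0: already at the first image
print(step_gallery(6, 8, 1))   # 7
print(step_gallery(7, 8, 1))   # 7: clamped at the last image
```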
12. Restyle Images Like Ghibli Style: Welcome back, everyone. So Kling AI's image model Kolors 2.0 offers a new feature called Restyle. You can restyle your image into any style that you want. I input this image and typed "in the style of 90s anime", and this is what we got. To restyle your image, first of all, switch your model to Kolors 2.0 and upload your reference. Once you upload your reference, you can select a style; I'm going to select this style. By default we have the standard mode, and you can also change the number of outputs, so I'm going to select two outputs. You can also click here to refresh the hints that Kling AI offers, and as you can see, this time we got the styles. So I'm going to select pixel art and generate four images again. We can refresh again, and this time I'm going to select comic and generate four images again. And as you can see,
we have these images. They don't look fantastic, but this is what we got. Now with the pixel art style, this is what we got, and I really like the pixel art style. With the comic style, we got these images, and I'm really impressed by the comic style and the pixel style. With this style, though, we have many errors in the image. Let's try some other styles: let's select the anime style, and I'm also going to select the 3D style. With the 3D cartoon style, this is what we got, and after changing to the anime cartoon style, this is what we got. So this is how you can use the Restyle feature of the new Kolors 2.0 model of Kling AI.
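The Restyle prompts used in this lesson all follow the same pattern, an "in the style of …" instruction. A trivial sketch of building them (the helper name is made up for illustration):

```python
def restyle_prompt(style: str) -> str:
    """Build the style instruction used with the Restyle feature."""
    return f"in the style of {style}"

# The styles tried in this lesson.
for style in ["90s anime", "pixel art", "comic", "3D cartoon"]:
    print(restyle_prompt(style))
```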
13. Image Editing: Welcome back, everyone. So Kling AI's new update gives you a new feature called image editing. If you go to the Kling AI homepage, as you can see right here, we have Image Editing. If you click on it, we have this interface. Now, this is how it works: you have to upload an image. My image is uploading. Once your image is uploaded, this is the interface of image editing (inpainting). As you can see, we can use the scroll wheel to move this image; I can use my scroll button to move it, and we can hold Control and use the scroll wheel to zoom the image in and out. You can also change your screen from here. These are all the shortcut keys that are available for this interface. Right here, as you can see, we have some tools. We have brush selection. First of all, we can increase or lower the brush size from here, and now I can select brush selection. I'm going to select my watch, and I'm going to type a prompt: "Rolex golden watch". We can select the number of outputs, so I'm going to select two. And as you can see, if you click here, you can increase the intensity if you want the effect to be stronger, or lower it if you want the effect to be weaker; I'm going to set my effect to medium. You can also invert your selection by clicking here, you can clear your selection, and you can undo or redo an action. Now we can click here to apply this effect. Once you submit your selection, a window appears showing that it is generating the effect we asked for. Now you can click here
to continue editing, and you can ask for more changes to your image as well. Now, we have three selection tools: brush selection, box selection, and quick selection. I'm going to clear my selection, and I'm going to clear my prompt as well. First of all, let's look at the result: as you can see, we have a golden Rolex watch right here. You can also reuse the same selection by clicking on this image, and if you want a fresh start, you can clear your selection. So we have the brush selection, and we have the other two selection tools. You can simply select the box selection, and again, if you want to change the watch, you can simply select the watch. And if you want to select this part, you don't have to undo the selection; you can also draw another area like this. This time, let's change this to "Casio modern watch". As you can see, it is now generating the effect. Next we have quick selection. I'm going to clear my selection and select
quick selection. First, it will analyze your image. Once it has analyzed your image, as you can see, we can select any part of the image to change that part. We can select the background, or we can select the shirt as well. I'm going to select my shirt, and it also selected the beard. We can erase that part by clicking on this eraser tool, increasing the brush size, and erasing the selection. And now we have this. Now I can change my shirt with the prompt "white shirt". While it is generating the shirt, let's look at the watch, and as you can see, we have this Casio watch on my wrist. As you can see, it did not change my shirt. Again, there are slight changes in the shirt, but it did not change the shirt to pure white. So we can select this input again and ask it again to change the color of the shirt to plain white; this time we're asking it properly. While the shirt color is changing, we can clear the selection and brush the subject like this: increase the brush size, and select the whole subject. The reason I'm doing this is to show you how to use inverse selection the right way. If you want to remove or change your background, simply select your subject first and then invert your selection. This way, you don't have to paint your whole background; we just paint a little around the edges of the subject, not too much. Let's erase this part. And now I can ask it to change the background to a studio and match the light and shadow of this atmosphere. So let's ask it: "change the background to a studio, match light and shadow of this atmosphere". While it is generating
the background, let's look at the result. And again, we don't have the white shirt, so let's ask it to change the shirt color. This time, I asked it to change the color of the shirt to plain black. While it is changing the shirt color again, let's look at the background: as you can see, it changed my background. But if you look at the shirt result, the shirt stays the same. So I think we have to brush the shirt manually: I can clear my selection, increase the brush size, and paint the shirt. You can also change the color of the hair. I'm going to select my hair, and let's change the hair color to red. This time, as you can see, we have the plain black shirt. So the reason we were getting the error is that we did not select the shirt properly. Again, this time I'm going to ask it to change the color of the shirt to plain white, and hopefully we will get the result that we want. Now, as you can see, it changed the color of my hair, but it also changed the haircut. So I'm going to select this input, clear my selection, and draw over this hand, and I'm also going to draw over this area of my hand. I have this prompt: "arrow tattoo on both hands". This time we have the blue shirt, and finally, we have the white shirt. So in order to modify an object or subject, you have to select that area properly. Now, as you can see, we have this arrow tattoo on my hand. Check it out. We have another example. If I open this image, as you can see,
as you can see, I also changes the
background of myself. In this example, I have
the studio background. If you go to the in paint, replace background that
match the lighting of the scene with the
subject, add a studio. And as you can see,
we got this result. And in this example, we have replaced
the background that match the lighting of
the scene with subject. Add a studio background
with medieval stuff, and this is what we got. And in this example, I selected
the beard and my hair, changed hair and
weird color to blue, and this is what we got. And this time, I changed the
hair color to blue only, and we got these images. And in this example, I change the shirt
color to plain black, and I use box selection. Now, if you notice I also
select my face as well. That is why we got this weird
face effect right here. So if you avoid this selection, it will change the color
of the shirt properly. So in this example, I also use a quick selection
to select my shirt. That is why it did not change the color of
my shirt to red. As you can see, we are not getting the color of red shirt. In this example, we
have these changes. Now you can also select multiple object and
you can change them all at once. In this example, I selected my belt, my watch, and my glasses, and I typed this prompt: "change watch to Casio, change glasses to sunglasses, change the belt to brown". And as you can see, we have the watch changed and the belt changed, and we have sunglasses as well. So you can select multiple objects in one image at once; you don't have to select each object individually. In this example, I changed my background to an empty room, and this is what we got. I painted my whole background again, including myself, and I typed this prompt: "clear the background, make it an empty off-white room". As you can see, we have these images. If you don't want errors in your image, you can use quick selection to select your shirt or any object, and then use the brush selection to select the edges, because, as you can see, quick selection selects the object but won't select the edges of that object. In this example, it selected my shirt, but it did not select the edges of my shirt; that is why we were getting that kind of error. So manually brush the edges, and hopefully you will get the result that you want. In image editing, we have another
feature called Expand. You can expand your image as well. In this example, I selected a free expand area, so I selected this area, and as you can see, without any prompt, it gives me these images. And if you type a prompt, for example, I selected this area as well and typed "abandoned hallway with scratches and graffiti on wall"; check out the result. Now we can go to Expand. We can select any image, so I'm going to select this image and go to Expand. We can use the original ratio or a custom ratio; I'm going to select this ratio, and I'm also going to place my image right here. Let's expand the background without any prompt, and let's also try it with a prompt. Without any prompt, we have this image. I really like these two images, except this one and this one, because in this one I think you can really see the merge effect right here. Then I typed this prompt: "studio lights and plants placed in background". As you can see, we have the plant and the studio light right here; it beautifully merges these things and places them to expand the background. I really like this one and this one because, as you can see, on this plant we have this light effect as well. So this is how you can expand your background, and this is how you can use the image editing of Kling AI.
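When you pick a custom ratio in Expand, the tool has to pad the original frame out to the new aspect ratio before filling in the new area. The underlying arithmetic can be sketched like this (a hypothetical helper; Kling AI handles this for you in the canvas):

```python
def expand_padding(width: int, height: int, target_ratio: float) -> tuple:
    """Extra (width, height) pixels needed to reach target_ratio (w/h)
    while keeping the original image unscaled."""
    if width / height < target_ratio:
        # Image is narrower than the target: widen the canvas.
        return round(height * target_ratio) - width, 0
    # Image is wider than (or equal to) the target: grow the height.
    return 0, round(width / target_ratio) - height

# A square 1024x1024 image expanded to 16:9 needs 796 extra pixels of width.
print(expand_padding(1024, 1024, 16 / 9))  # (796, 0)
```

Where you place the original inside the new canvas then decides how those extra pixels are split between the two sides.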
14. Basic Introduction to Generate AI Video: Welcome back. Now you have to click here and go to Videos in order to generate clips using Kling AI. There are a few ways to generate clips with Kling AI. The first one is text to video: you type a prompt in order to generate a clip. So let's do that. I have this prompt, and you can copy this exact same prompt. In the settings, as you can see, guys, we have two sliders: creativity and relevance. In the mode, we have standard and professional. If you are using the free account, you will only get the standard mode, so let's use the standard mode for this example. We have the length, five to ten seconds, and we have the aspect ratio, which you can change as well; select any aspect ratio that you want. We have the number of generations; right now, I have it set to one video generation. Below the aspect ratio, we have camera movements. If you click here, there are many camera movements. Below the camera movements, we have the negative prompt. So what is a negative prompt? A negative prompt lists things that you don't want in your generated video. For example, if you don't want blur in your video, just type "blur". If you don't want distortion or something like frame inconsistencies, you can type that word here, and it won't generate that thing in your video. Now let's click on
this Generate button. I also used Microsoft Designer for this prompt; as you can see, guys, this is the result of this prompt in Microsoft Designer. I am expecting we will get a different result than this one, because that was generated in Microsoft Designer and we are using Kling AI, which are two totally different kinds of tools. While the video is generating, on the right side we have the results of generated videos. As you can see, guys, I have generated many videos. You can click here to filter your videos to text-to-video; these are all my text-to-video results. Click here and you can change this to image-to-video; these are all my images turned into AI videos. You can also select All Videos, and all the videos will be displayed on the right side. You can also click here to change the view from simple to detailed view. In detailed view, as you can see, this is the new video that I am generating with this tool, and these are all my old videos. In detailed view, you will see the prompt and the reference image, if you have one; in this example, I have a reference image, so these are all my reference images. It will also display the version that you used to generate each video. For example, for this one I used Kling 1.0, and if I scroll down, for some videos I used Kling 1.5, like this example. Let's change that. You can also click here to select any video or picture, and you can download it or delete it. You can also mark it as a favorite, or click again to unfavorite it. Now let's click on it again. The video is generated; let's view the result. And it is totally different
from the Microsoft Designer result, as I said earlier. So this is how you can generate a clip from a prompt with text to video in Kling AI. In the next lecture, we will talk about the difference between the standard mode and the professional mode.
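The negative prompt described in this lecture is just a comma-separated list of unwanted artifacts. A trivial sketch of assembling one (the terms are the examples from this lesson):

```python
# Things we do NOT want in the generated video.
unwanted = ["blur", "distortion", "frame inconsistencies"]

# A negative prompt is simply the unwanted terms joined together.
negative_prompt = ", ".join(unwanted)
print(negative_prompt)  # blur, distortion, frame inconsistencies
```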
15. Generate Impressive Text to Video: Welcome back. Now let's copy this prompt. I used Microsoft Designer to enhance my prompt, and I got these results. Let's go to Kling AI and paste this prompt. For this video, let's select the standard mode, and I don't want to tweak any settings. Now let's click on the Generate button. While this video is generating, we can select the professional mode. By the way, if you are using the premium account, it will cost you 35 credits to use the professional mode. Now we can generate this prompt, and the videos are now generated. This is the standard mode video; it's looking pretty good. And this is the professional mode. As you can see, guys, there is a huge difference: there are no frame inconsistencies, and there is nothing wrong in this video. It looks like real footage from real life. Now let's change Kling 1.0 to 1.5. When you select 1.5, the camera movement control disappears, and we can't change the professional mode to standard. So now let's generate. There is a huge difference between the professional mode and the standard mode. The professional mode will stay consistent while generating your video; there is nothing wrong, and there are no frame inconsistencies in this clip. But the standard mode just generates your clip, whatever you ask, and it can be inconsistent. As you can see, guys, we have some frame inconsistencies in the foreground, and there are some weird dots or fringing appearing in the mountains as well. So if you want to generate high-definition results, use the professional mode, but you have to upgrade
to the pro version. By the way, if you are revisiting your generated videos and you want to see which mode and which settings you used to generate a clip, you can view that too. For example, if I select this video, as you can see, guys, the mode changes to standard. But if I select this video, it automatically changes the mode to professional, because that was the mode I selected when I was generating this video. Not only that: if you selected any camera control, it will appear right here, and if you changed the aspect ratio, length, et cetera, it will appear on the left side. Now the video is regenerated, and here we have the result. I kind of like this one because it generates what I asked for. For the professional mode, I think I like this one, but the new version looks more beautiful and better than the previous ones. Now, if I regenerate this video as well, we will get a different result, so why not do that? If you are not getting the result that you want, regenerate it. In this example, we get a different result, better than the previous one. The video that we generated using 1.5 is not bad, but the new video looks more beautiful than the previous one. So now we have the regenerated video result using 1.0 professional mode. If you want to generate videos, you should try both the 1.0 professional mode and the 1.5 professional mode, because you will get different results, and then you can choose the video you want to use in your projects. I can use all these videos except the standard mode one, because the standard mode is not that useful: we get an inconsistent video, whereas the professional 1.5 mode is a much better way to generate videos.
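The advice in this lecture reduces to a simple rule: draft in standard mode, render final clips in professional mode (35 credits, paid plans only). A sketch of that rule as a function (a hypothetical helper, purely illustrative):

```python
def pick_mode(final_render: bool, has_paid_plan: bool) -> str:
    """Mode choice rule from this lesson: professional mode for
    consistent final output (paid plans only), standard for drafts."""
    if final_render and has_paid_plan:
        return "professional"
    return "standard"

print(pick_mode(final_render=True, has_paid_plan=True))   # professional
print(pick_mode(final_render=True, has_paid_plan=False))  # standard
```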
16. Creativity vs Relevance: Welcome back. Now I will show you the difference between creativity and relevance. For this example, I used this prompt and generated these four results; then I used this image as a reference and generated these four results. Now let's click here to bring this image to life, and let's close this tab. If we just leave the slider here in the center, it will generate a normal video of milk pouring on this chocolate. Let's change this to creativity and generate. Now I'm going to move my slider to the right side, toward relevance, and click on this Generate button. The results are generated. Here we have the
creativity result, and here we have the relevance result. With creativity, as you can see, guys, it just adds something on its own, whereas with relevance, it sticks to the topic and generates what we asked it to generate. Now, what if we don't push the slider to extreme relevance or extreme creativity? Let's set the slider to about 0.35 creativity, and this time I'm going to select the standard mode; now let's generate this clip. Next, let's set the slider to about 0.7 relevance and generate the clip. Now let's change Kling 1.0 to 1.5, set the slider toward creativity, then again toward relevance, and generate. So now we are generating four videos at the same time. The results are now generated. If we move the slider just a little to the left, toward creativity, we have this result, and if we move it a little to the right, toward relevance, we have this result. I know these two results look similar, but there is a big difference. With relevance, it sticks to the topic: it gathers the milk on this side. With creativity, as you can see, guys, the milk spreads across this portion instead of just gathering on this side. With relevance, as you can see, guys, the milk is gathered on this side. Now let's check out the results at extreme creativity and relevance with 1.5. This is extreme creativity on 1.5; the results are not bad. Now let's check out the relevance result. If we set the slider to 0.3 creativity, I think we will get a better result than this, so let's generate the clip. While the video is generating, let's check out some
other examples. In this example, I used this prompt with 1.5 and set my slider to extreme creativity, and we have this result. In the next example, I set my slider to extreme relevance using 1.5, and we got this result. And if we don't change the slider at all, we have this result. In this example, I just used an image, without changing the slider, and we got this result. So now the results are generated. If you use just a little bit of creativity, you will get some interesting results. I'm using the standard mode for this one, not the professional mode, and I'm using the latest 1.5. Creativity is set to 0.3, and this is the worst result that we have ever generated using this image. So my final words: don't push this slider above 0.65 or below 0.35. If you want to generate something interesting, don't push this slider to the extreme left or extreme right. And that is the difference between creativity and relevance.
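That closing advice, keep the creativity/relevance slider between 0.35 and 0.65, is easy to encode as a clamp (a hypothetical helper; the real control is just a slider in the Kling AI UI):

```python
def clamp_slider(value: float, lo: float = 0.35, hi: float = 0.65) -> float:
    """Keep the creativity/relevance value inside the recommended range."""
    return max(lo, min(hi, value))

print(clamp_slider(0.9))  # 0.65: extreme relevance is pulled back
print(clamp_slider(0.1))  # 0.35: extreme creativity is pulled back
print(clamp_slider(0.5))  # 0.5: already in range
```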
17. AI Camera Movement: Welcome back. Now go to Microsoft Designer; I want to create a chocolate commercial, so I'm going to write "chocolate commercial in a hotel". We can enhance the prompt, and I think this prompt is good, so let's copy it, go to Kling AI, and paste it. Now, if we scroll down, you can see, guys, that the camera movement control has disappeared. To use camera movements, we have to change Kling 1.5 to Kling 1.0. Now we have this camera movement control. If we click here, you can see we have different kinds of camera movements, and if we scroll down, we also have master shots; you can select those and you will get different camera movements. Now I will explain each camera movement. First of all, we have horizontal. We can change the slider and we will get different horizontal movements: if I move it to the left side, I will get a horizontal movement from the center to the left, and if I move it to the right side, we will have a camera movement from the center to the right. Instead of explaining, I'm going to show you. First, let's select horizontal and move it to the left side. In this prompt, we have a chocolate commercial, and in the commercial, the camera will move horizontally from the center to the left side. So now let's select 1.0, change the mode to professional, and generate the clip. Now the video is generated,
this is the result. So as you can see,
guys, in this video, the camera is changing
the position to the left horizontal
side. Which we set. Now we can change the
camera moment very easily. You have to just
change the slider to where you want to
set your camera moment. So let's try out the
Zoom out camera moment. Now I have generate
different camera moments. So this is the Zoom in extreme
Zoom in camera moment. And this is the pan 3.4. This is a tilt five and this is extreme
role to minus ten. As you notice, guys, the videos that we
generated using these camera moments
are inconsistent. So all the clips
are inconsistent. There is something happening in the video in the background, in the foreground while you
use the camera moments. There is a subtle camera
moment that you can use and you will get less
this infringement. But if you use extreme camera
moments like we use in Zoomin and in extreme
minus ten horizontal, you will get extreme
infringement. Now let's reset
the camera moment. Let's generate the simple video. Now I am going to
copy this prompt. These are the results
of this prompt. And now let's go to the video. Let's paste this prompt. First of all, let's generate
without any camera moment. So without any camera
moment, this is the result. And as you can see guy, the video is consistent. There is no weirdness
happening in the video, and this is the three
D character video without any camera moment. But as you can see guy, the results are pretty amazing. So for this video, we have the same prompt which we use in three D character, and I use master shot, move left and zoom in. And here we have the result. As you can see guy, the
result is inconsistent. There is some weirdness
happening in the shot. We have another example, master shot, move
forward and zoom up. There is not much
detail in the video, so that is why we have
this smooth result. And we have another
example of master shot, move right and zoom in. Again, inconsistency
in the video. So I don't think
right now we can generate consistent result
using the camera moments. If you want to experiment
with the camera moment, you can do that as well. With the free version,
you can just select your standard mode and you
can use any camera moment. And I don't think
that you will lose any additional credit for
using the camera moment. But if you want to generate
consistent result, then I recommend don't
use any camera moment because right now the camera
moment is not usable.
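The experiments above can be recorded as plain data to keep track of which settings stayed usable. This is only my own bookkeeping sketch; the parameter names mirror the Kling 1.0 camera controls shown in the interface (horizontal, pan, tilt, zoom, roll), but the dictionary layout and the threshold value are assumptions, not anything Kling AI exposes.

```python
# Hypothetical record of the camera-movement experiments from this lesson.
# The threshold is a rough rule of thumb: the lesson found that extreme
# values (e.g. roll -10) produced inconsistent clips, while small values
# (e.g. pan 3.4, tilt 5) were more tolerable.

SUBTLE_LIMIT = 5  # assumed cutoff between "subtle" and "extreme" movement

def is_subtle(camera: dict) -> bool:
    """True when every movement value stays in the gentle range."""
    return all(abs(v) <= SUBTLE_LIMIT for v in camera.values())

experiments = [
    {"pan": 3.4},    # mild pan
    {"tilt": 5},     # tilt five
    {"roll": -10},   # extreme roll to minus ten
]
for cam in experiments:
    print(cam, "subtle" if is_subtle(cam) else "extreme: expect artifacts")
```

The point mirrors the lesson's advice: keep movement values small, or skip camera movement entirely when consistency matters.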
18. Negative Prompt: Welcome back. In this tool we have another amazing option called the negative prompt. If you don't want to see something in your video, just type that word here, and the video will be generated without the thing you typed. For example, in this video, as you can see, guys, we have this weird distortion. So in this example I typed "infringement" and "motion blur," which I don't want to see in my video, and here we have the result. As you can see, guys, there is no distortion in our video. We have another example where I typed just "infringement," and we get this smooth transition between the two frames, whereas if I play this other video, we have this weird artifact, like this one. In this example as well, I just typed "infringement" and we got this smooth transition result.

By the way, if you are using the standard mode, you can also get some amazing results. Like this one: I used the standard mode, selected this master shot, and typed "disfigurement, motion blur and infringement," and here we have the result. There is a little bit of artifacting, but not a lot. We have this result as well, and this one, and another one. For this one, I used the standard mode and we got this result. As you can see, guys, the character is closing its eyes, and there is some distortion happening in the shot. To avoid this distortion, I just typed "distortion" in the negative prompt, and as you can see, guys, now we have this smooth result. And by the way, I used the standard mode, not the professional mode; as you can see, guys, ten credits are shown on the generate button. So by typing into the negative prompt anything you don't want to see in your video, you will get stunning results, even in the standard mode. After I regenerated this again with the camera movement and with these negative keywords, I got this result, and there is almost no artifacting in the video. So this is how you can generate stunning results by typing a negative prompt in Kling AI.
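To make the idea concrete, here is the shape of a generation request with a negative prompt. Kling AI is used through its web interface in this course, so this dictionary is just a hypothetical way to document the settings tried above; the field names are my own, not a real Kling API call.

```python
# Hypothetical record of the negative-prompt experiment described above.
# "prompt" says what the video should contain; "negative_prompt" lists
# everything the model should avoid generating.

request = {
    "model": "kling-1.0",
    "mode": "standard",   # the 10-credit mode used in the lesson
    "prompt": "chocolate commercial in a hotel",
    "negative_prompt": "distortion, motion blur, disfigurement",
}

# Everything listed in negative_prompt should NOT appear in the video.
for unwanted in request["negative_prompt"].split(", "):
    print("suppressing:", unwanted)
```

The key design point is that the two prompts pull in opposite directions: the positive prompt steers toward content, the negative prompt steers away from named artifacts.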
19. Generate Image to AI Video: Welcome back. The next way to generate videos is to use images. Go to Image to Video. We can upload an image, or we can go to AI Images, select any image, and click "bring to life" to generate a video from it. We have already done this, so let's delete it. Now go to Microsoft Designer. Type any prompt you want and enhance the prompt. You can select what size your image should be; choose any size. Now click the generate button. Here we have the results, and you can pick any image you like. I kind of like this one. Click here to download any image you want.

Now go back to Kling AI, go to Image to Video, click here, and select the image we just downloaded, then click Open. You can also drag and drop your image instead. Typing a prompt is optional, and there are some other settings as well. For this example, let's select the standard mode, and now we can click to generate the video.

So the video is now generated, and here we have the result. You can bring any image to life; it doesn't matter what you upload. You can even upload yourself. In this example I uploaded my own photo, and here we have the result. Not only that, you can bring an old photo to life, and you can upload manipulated photos too. As you can see, guys, I photoshopped myself into this picture and brought it to life as well. Here is another example, and this one, and this one too. So you can see, guys, it's pretty amazing how you can bring your childhood photos to life. You can also tell it to do anything you want with your photo. In this example, I typed "knife cutting orange into two pieces," and here we have the result: the knife is now cutting the orange. And in this example, I asked it to make the video in a cyberpunk style, and this is what I got. Pretty amazing, right? So this is how you can generate videos from images.
20. Start and End Frame: Welcome back. In Kling AI we have another amazing feature called Add End Frame. If we enable it, you can see two windows for uploading images: this is the start frame, and this is the end frame. Now go to Microsoft Designer. Type this prompt and generate these images. For this example, I'm going to download this image. Now, you have to select an end frame that matches the perspective of your first frame in order to achieve stunning results. I think image number four is close to the orange image, so let's download that as well. Click here to upload the start frame, then click here to upload the end frame. For this example I'm going to select the standard mode, and now let's click the generate button.

So here we have the result. As you can see, guys, the orange is transforming into an apple. I also auto-extended this video, and now we have this generated clip. Then I extended that video five seconds further with the prompt "hammer smashing apple," and we got this result. You can also ask it to do anything. In this example, I typed "apple transforming, camera stays static," so the camera won't move; it stays in position. The two frames have different perspectives, and that is why we got this weird result. But if you get the perspective right and the table position is the same, you will get a stunning transformation video. In this example, I just typed "apple transforming." Before, we had the orange as the start frame; now we have the apple as the start frame and the orange as the end frame.

You can do all kinds of crazy things using the end frame feature of this tool. Let me show you a few examples. In this one, this is my start frame and this is my end frame, and I typed the prompt "car crash through the wall and stuck between two walls." I set my slider to 0.8 relevance and used the standard mode for this generation. In the next example, I used this frame, added myself onto Venom, and used this as the end frame. As you can see, guys, I typed "man becoming Venom." I didn't change any settings, used the standard mode again, and got this result. After I generated that video, I improved the lighting on my face, added the Venom face as the start frame and my face as the end frame, and typed "a symbiote is revealing a man inside it." I used the standard mode for this one as well, and got this crazy result. After that, I regenerated the video and got another crazy result; you will get different kinds of results each time you regenerate.

I also used two pictures of myself at the same location but with different angles; I used the standard mode for that generation as well, and this is the professional mode result. As you can see, guys, my face now stays consistent. And now we have this example: if you have the same image with the same perspective, you will get this kind of result. As you can see, I typed "camera stays static" and we got this crazy result. Now, if you use the professional mode with the Add End Frame feature, you will get consistent results. This is the standard mode video, and as you can see, guys, my face is not properly visible; this is the result with the professional mode, where the video stays consistent. We have these examples as well. This is another example of the end frame feature of Kling AI with the professional mode, and we have another professional mode example with these two images. As you can see, guys, the results look realistic, and we have this example too. If you switch the two frames, you get this reversed effect.

Now, you shouldn't combine a first frame and a last frame like this, where one is a wide angle and the other is a close-up, because you will get something like this: an extreme fringing effect. We also have this example, and as you can see, guys, we got some pretty amazing results; even the people in the background are moving correctly. So you can create anything with this feature. You just add a start frame and an end frame, and it will create something in between those frames.
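The workflow above boils down to: give the model a first frame, a last frame, and an optional prompt, and it interpolates the motion in between. This sketch only captures that structure for note-taking; the function and field names are my own assumptions, not Kling AI's interface.

```python
# Hypothetical representation of a start/end-frame job, as used in this
# lesson. The "tip" field restates the lesson's main rule: both frames
# should share perspective, or the transition will look weird.

def end_frame_job(start_frame: str, end_frame: str, prompt: str = "") -> dict:
    if not start_frame or not end_frame:
        raise ValueError("both frames are required when Add End Frame is on")
    return {
        "start_frame": start_frame,  # e.g. the orange image
        "end_frame": end_frame,      # e.g. the matching apple image
        "prompt": prompt,            # e.g. "apple transforming, camera stays static"
        "tip": "match perspective and table position between the two frames",
    }

job = end_frame_job("orange.png", "apple.png", "apple transforming")
print(job["tip"])
```

Swapping the two arguments models the "frame switch" trick from the lesson: the same pair of images played in the reverse direction.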
21. Turn Yourself into Iron Man: Welcome back. Even if you use the free version of Kling AI with the standard mode, you can create something like this. Without any special skills, you can create stunning visual effects, and if you use the professional mode, you can get even better results. Now I'm going to show you how we can create something like this.

First of all, go to photopea.com. This is a free alternative to Photoshop. We have two screenshots of Iron Man. First, uncheck Auto-select and check Transform Controls; enabling this gives you transform controls on the layer. Press Ctrl+C to copy this layer, go to the Iron Man image, and press Ctrl+V to paste yourself in. Now convert this layer to a smart object. Hold Ctrl and use the mouse scroll to zoom out, resize the subject, press Enter, and then hold Ctrl (or press Ctrl and plus) and scroll to zoom back in. Now I'm going to lower my layer's opacity so we can align my face with Iron Man's face, then change the opacity back to 100%. Click here, and let's align it right here.

Now I'm going to add a raster mask: select your layer first, then click here. Click and hold to select the paint bucket tool. We can swap the colors, change the color to black, select the mask, and click. Now select the brush and right-click to change the brush size, then swap the colors again so we have white. Hold Ctrl and scroll to zoom in, select the raster mask, and start painting. Press Ctrl+Z to undo; right-click to lower the brush size, and brush right here. You have to brush carefully, inside the Iron Man mask. Now we are done: select the Move tool and hold Ctrl to zoom out.

We're almost finished. Now I'm going to click here and add this adjustment layer, then right-click on the adjustment layer and create a clipping mask, so it will only affect the face, not the whole image. First, I'm going to increase the contrast and lower the brightness. We can toggle this on and off; let's hide this layer to judge the color, and lower the brightness a little more. I'm also going to add a Selective Color adjustment. If you don't make this a clipping mask, then when you select the neutral colors it will affect the whole image; as you can see, guys, it is now affecting everything. If I change it to a clipping mask, it will only affect the layer underneath it. Now we can reset this layer and match the color with the Iron Man suit. I think we are done; let's place this at about the right level.

The last thing we need to do: right-click on the person's face layer, go to Blending Options, and enable Inner Shadow. Go to the blend mode, change it to Normal, click here, and change the color to black. Now increase the size and the opacity, then bring the opacity back down a bit. Now we are done. Go to this screenshot, click here, press Ctrl+C to copy it, and press Ctrl+V; then place this screenshot underneath all the layers. Now select all the layers: select this layer, hold Shift, and select the first layer, which selects them all. Right-click and convert them to a smart object. Select this one, right-click, and convert it to a smart object as well. Now click here, hold Ctrl and select this one, right-click, and convert it to a smart object too.

Now go to Filter, then Noise, and add some noise. I'm going to add 5% noise; maybe change it to 3%. Click on Filter again, go to Sharpen, and select Smart Sharpen, then click OK. Now we are done. You can also color grade your picture if you want; I'm going to add a bit of green and a bit of magenta. If we toggle this on and off, you can see, guys, we have some color grading that helps match the face with the scene. Now click File, go to Export, and save the file as a JPEG; set the quality to 100% and click Save. Now select this layer, double-click, and turn this layer off. Now we have this layer; press Ctrl+S to save, and we have the updated version. Now go to File and save this one as a JPEG as well.

The reason I placed this screenshot underneath the other is to merge them into one smart-object layer, so we can color grade and add effects on top of it and they will affect all the layers underneath. Now go to Kling AI, delete these images, and upload the new ones. You can use the standard mode if you want, and click the Create button. The videos are now generated. If I take this as the first frame and this as the last frame, this is the standard mode result. And if we set this one as the first frame and this as the last frame, we get this result with the standard mode as well. Then I regenerated this video with the professional mode and got this amazing result, which looks more realistic than the other generated videos. Using this technique, I also turned myself into Venom. So with the help of AI, you can turn yourself into any superhero without any special skills.
22. Motion Brush: Welcome back. Now let's talk about the motion brush. This feature is a game changer for the AI video generation industry. First, we have to upload our image. Once your image is uploaded, you have to change Kling 1.5 to 1.0, because this feature is not available in 1.5 right now. Now click here, and as you can see, guys, we have this simple guide. Let me show you how it works.

First, we have the brush: you draw over anything you want to move. Then we have the eraser: if I draw over this rock, for example, and want to erase that stroke, I select the eraser and remove the rock. You can also increase the size of the brush. Next, we have auto segmentation. Right now, as you can see, guys, I'm brushing over myself by hand, but if I reset (you can reset your brushed area at any time), I can select myself automatically by enabling auto segmentation. Now I'm going to select my hair, my glasses, my pants, and my shoes as well. Then I can uncheck it and brush in the missing pieces so we get an accurate selection. Now we are done.

On the right side we have the tracks, and this one belongs to the green area. So I'm going to select my track, and we have to draw the direction in which I want to move myself. I want to move myself in this direction, right? Let's delete this stroke and draw toward this side. You can also delete your path, your track: just click the cross to delete it. You can select multiple areas to move, but right now it's only me in this picture, so this is enough.

At the end, we have Add Static Area. The static area won't move, so here I select everything in my picture that I don't want to move. I think this is enough. Now, as you can see, guys, if you select something you want to move but accidentally mark it as a static area, you can undo that; as you can see, this area is now clear. You can redo as well, or reset everything if you want. Now we are done: we can confirm, and we are finished.

By the way, I'm going to reopen the motion brush again. If you don't draw any path and you click the confirm button, you get this warning message: areas with no drawn path won't be affected. If you submit that brushed area anyway, it won't move anything, because you did not draw a direction. So in order to move something in your image, you have to brush over that thing and you have to draw a path. If you don't draw a path, that object or person won't move. So now let's confirm, use the standard mode, and generate the video.

And here we have the result. As you can see, guys, if we used the professional mode, we would get a more consistent result; right now it's not fully consistent, but it looks good. Also, we don't have to go back and click on the video tab to generate videos from the image; we can just delete this. Don't worry, it won't delete your video.

For this next example, I'm going to click on the motion brush. Let it analyze the image first, then enable auto segmentation. I select myself, uncheck, then select the track and draw a path right here. Now I select the static area with auto segmentation enabled, and let's select this person as well. Click confirm and generate. So now we have this video. Sometimes it generates something you did not expect, but don't worry, I think we can fix this.

While that video is generating, let's delete the image and upload another one. In this example, we will brush multiple objects in order to move them. The image is uploaded; click Draw Motion and let it analyze the image. Enable auto segmentation, select area number one and brush this goat, then select area number two and brush this one. Now I select Track 2 and move this goat to this position, then select Track 1 and move the other goat to this position. Now we mark the background as a static area so the background won't move. We are done; click confirm and generate the clip.

I have this example for you, guys, and I want to explain something. Let it analyze the image first, and let's use automatic segmentation. If you brush the background as a motion area, with a movement path, then you won't be able to mark the background as a static area, because it has already been brushed as a movement area. So let's reset this. Similarly, if you mark your character as static, then, for example in area number one, if I want to brush my face, I won't be able to, because that area is already set as static. So make sure to brush very carefully. Also, in this case, if I click this part of the character, it selects the whole background as well, so to brush just this area you have to select it manually. And if auto segmentation is checked and we try to set the background as static, it will still keep the movement path; it won't treat it as a static area. So let's reset everything.

In this clip, I want to move only some parts of my character, not the whole character. So I'm going to brush this character's arm and draw it toward his face, then select area number two and brush the other character's arm. Let's brush this as well; uncheck, and brush this arm too. Now I block this in, and we are done. For the background, first uncheck this, and let's brush his face only. Now I think we're done: select the static area, click confirm, and generate the clip.

So the video is generated, and here we have the result. As you can see, guys, one goat is moving to this position and the other is moving to that position, just as we set in the motion brush. Now, in this example, I drew a path expecting it to make me fly in the video, but instead we got this result, which is not bad. And in the next example, I drew a line expecting it to make me sit, but we got this result. Then I found a solution to this problem: in this example I drew this path, and now I am standing in the video, and in this example I am finally sitting.

So now I will show you how the motion brush works. If we look closely at the path, as you can see, guys, I drew a path between my legs, so the AI understands that I'm asking it to make me sit. And here is the problem with the other generations: if you look closely, I overdrew my arrow onto the brushed area, and that is why we got that result; the AI got confused. Whereas in the video where I am sitting, I drew the path on the static area. And in this example we got this result: the robot is not raising its fist. So in the next example, I just drew this arrow and we got this result; you have to draw the motion. Then in this example we got this result, and I changed the arrow a little bit. As you can see, guys, after I changed the arrow path slightly, we got this result.

I have this example as well. As you can see, guys, if we open the motion brush, I brushed this character completely and drew a path; I want this character to move to this position. For the other character, I only brushed its fists and its head, and I set the background as static. If you look closely, this character is not completely static: only its fists and head are brushed, not the whole character, and I drew this path where I want to move it. And this is the result of that drawing.

Here is another simple example: if we open the motion brush, I just brushed my car, drew a path, and set the background as static. We have another example with robots fighting. If we open the motion brush, I brushed this character: I want it to punch the other character and, at the same time, I want the other character to lean back a little. For that character, I brushed its punch; I want it to punch this character and also lean back a little, and we got this result. You can draw multiple areas, and you will get different kinds of results.

This is another simple example as well: if we open the motion brush, I brushed this character toward this side, brushed the bus and moved it to this side, and set the background as static, and we got this result. The robot is not moving, maybe because the AI thinks I want the bus to crush the robot; right now, the AI is quite sensitive. We have another robot example as well; here is the motion brush, and here is the result. As you can see, guys, the results are stunning.

Now, if you use the professional mode, you will get precise results. In this example, I brushed the left arm of this robot and the left arm of the other robot, drew this path, and set the background as static, and it generated this result. The result is precise, unlike the standard mode. With the same drawing, I generated this video in the standard mode, and the characters also move a little and the background moves as well. But in the professional mode, as you can see, guys, it precisely moves only the parts I brushed. So with this drawing, we got this result in the professional mode; everything is consistent. With the same drawing in the standard mode, we got this result.

You can also animate an edited image. As you can see, guys, I edited myself into the studio, drew this simple motion brush, and we got this effect. We have another example; this is the motion brush, and this is the result with the professional mode. The professional mode video looks weird because it generated exactly what I asked for: as you can see, guys, I asked it to move the muzzle flash and his arm, and it did just that, and it looks weird. With the same drawing in the standard mode, you get a different result, and this is the standard mode result. As you can see, guys, it moves the arm and the muzzle flash as well. With this drawing, the standard mode looks more realistic and more stunning than the professional mode, but the video will be inconsistent if you use the standard mode. With the professional mode, you have to be precise with your brushing, because it will only move the exact areas you brushed.

If you are wondering how I created the robot punching videos, this is the prompt and these are the results from Microsoft Designer. In this last example, I only brushed the flash of the gun and we got this result; let me show you the motion brush. As you can see, guys, everything is set as static except the face and the body of this person. So my point is: you have to be precise when using the professional mode with the motion brush if you want to get the motion right. And this is how you can use the motion brush.
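The rules the lesson keeps running into (every brushed area needs a path; a static area must not overlap a motion area) can be written down as a small validity check. This is purely my own representation for summarizing the lesson, not Kling AI's internal format.

```python
# Sketch of the motion-brush constraints observed in this lesson:
# each motion area pairs a brushed region with a drawn path, and the
# static area must not overlap any motion area.

def validate_motion_brush(areas, static_pixels):
    """areas: list of dicts with 'pixels' (a set of (x, y) tuples) and
    'path' (a list of points). Raises ValueError on the mistakes the
    lesson warns about; returns True when the drawing is valid."""
    for i, area in enumerate(areas, start=1):
        if not area["path"]:
            raise ValueError(f"area {i}: path not drawn, it will not move")
        if area["pixels"] & static_pixels:
            raise ValueError(f"area {i}: overlaps the static area")
    return True

areas = [{"pixels": {(1, 1), (1, 2)}, "path": [(1, 1), (5, 1)]}]
static = {(9, 9), (9, 8)}
print(validate_motion_brush(areas, static))
```

The first error case corresponds to the "not drawn, not paired" warning in the confirm dialog, and the second to the lesson's advice not to overdraw an arrow onto the brushed or static area.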
23. Lip Sync Videos With AI: Welcome back. So now we
have the final feature, which is called lip sync. If there is a pace in your
clip Ino generated video, you can lip sync that person. You have to just
select your video. You have to click
on the lip sync. It will first
identify your video. Then we have this interface. As you can see guide,
there is so many voices. You can select any. Hey there. You can play and you can do. You can listen to that voice. You can select any gender, like male, young man. Hey there. Beautiful. Once you
select your voice, you can type anything
that you want him to say. Once you type your speech, you can also click here
to preview your words. Hey, there. I am from the
past. My name is Alex. You can also change the speed
of your voice. Hey, there. I am from the past. Let's change this to one X, which is the normal speed. And now you can click
on the lip sync and it will take
your five credits. Now you can also upload your audio as well in
order to lip sync. So right now, I'm
going to click here, Lip sync and go to the upload. Local dumbing. We
can upload a file. And as you can see, guys, if your recording is
more than 5 seconds, then you have to corp
your audio. Hey, there. My name is Faizan and
I am from the past. Hey, there, mine. So you have to select
a portion from your audio because the video
length is five second, so we have to select the
audio under five second. Now I'm going to
confirm my crop, and now let's lip sync
this clip as well. It doesn't matter if you use image to video or text
to video generation. If there is a phase, then
you can lip sync that video. So for example, I'm going
to select this robot, and now let's click
on the lip sync. So first, it is now identifying if there is
a phase to lip sync. Now if it fail to identify
a pace in your clip, then you will get this error. Let's identify this video, and now as you can see, guys, now we have this interface. Now if you want a person
to speak very slowly, then you can set the speech rate to 0.8, which is the lowest. And before you type anything, you can preview that voice. Let's walk to this side and maybe I put my
hands in my pocket. As you can see, guys, the lyric is now speaking very slowly, and if I want him to speak fast, you have to increase
the speech rate. Let's walk to this side. And maybe I put my
hands in my pocket. And there is a duration limit up here on the right
side of the voice. Let's walk to this side. So this voice is now 4.5 second. So maybe we have to change
the speech rate to 0.9. Let's walk to this side. Now we have 5.3 second voice. Now we can preview this voice. Let's walk to this side. Now, if you want a person
to speak very fat, then just increase
the speech rate. Let's walk to this side, and maybe I put my
hands in my pocket. For me, I'm going to
set my speech rate of 0.9 and click on the lip sync. So the lip sync video
is now generated. Here we have the result. Hey there. I am from the
past. My name is Alex. Now, as you can see, guys, the results are now
pretty amazing. Now, if you have a
perspective video like this is the front camera, but if you have a side camera, then the lip sync is
also very stunning. Hey, there, my name is Faizan
and I am from the past. As you can see, guys, my
perspective is now changing, but it got my mouth movement. My name is Faizan and
I am from the past. We have this example, as well. Let's walk to this side and maybe I put my
hands in my pocket. I think my mouth
looks way bigger than it should be,
but it's okay. It's not that bad, and this is the final video that
we generated with lip sync. Oh, I'm getting fat. My lungs, my lungs. Now you can also redub
your video if you want. You just have to click
here on Redub. You can upload your audio or you
can use text to speech in order to
lip sync your clip. So this is how you can
use the lip sync in order to lip sync your
videos using Kling AI.
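The speech rate behaviour described in this lecture can be sketched in Python. This is a hedged illustration: it assumes the clip duration simply scales inversely with the speech rate and that the available rates run from 0.8 (slowest) upward; Kling's exact scaling is not documented here.

```python
def estimated_duration(base_seconds: float, rate: float) -> float:
    """Estimated clip length, assuming duration scales inversely with
    speech rate (an assumption; Kling's real scaling may differ)."""
    return base_seconds / rate

def slowest_rate_fitting(base_seconds: float, limit: float = 5.0,
                         rates=(0.8, 0.9, 1.0, 1.1, 1.2)) -> float:
    """Pick the slowest available rate whose estimated duration still
    fits the 5-second clip limit mentioned in the lecture."""
    for rate in rates:  # rates ordered slowest -> fastest
        if estimated_duration(base_seconds, rate) <= limit:
            return rate
    return rates[-1]  # even the fastest rate may not fit; crop the audio instead

print(slowest_rate_fitting(4.5))  # 0.9, matching the rate chosen in the lecture
```

This mirrors the workflow above: at rate 0.8 the 4.5-second speech would stretch past the 5-second clip, so the helper settles on 0.9.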
24. Extend AI Video Generation: Welcome back. Now, if
you generate any video, you can also extend that video. So in this example, I'm going to select
my burger right here. I'm going to enhance
this burger. Now, the burger is enhanced. Let's bring this to life. Now I'm going to select
professional mode and let's select Kling 1.0, because if you use 1.5, then you can't really extend your clip; there is no option. So the video is generated. Here we have the result. Now, if you look closely, we have an option called Extend, and we have two options in Extend. One is Auto Extend. The other one is Customized Extend. Now, if you are using
the standard version, then you can also
use this as well. And it will take ten credits if you use the standard mode. And if you use the professional mode, it will take 35 credits. So let's extend automatically.
my generated video again and this time, I'm going to select
customized extend. Now I'm going to type ketchup pouring on top of the burger. Now the video is extending. So we have the result. As you can see, guys, the video is now extended to nine seconds. And this is the auto extend result. Now we have the customized extend result. And after 5 seconds, as you can see, guys, we have this ketchup pouring on top of the burger. And just like that, you
can extend your clip. You can type anything that
you want in your video. Now I have another
examples as well. So in this clip, we
have this prompt. I generated this
video using text to video. And by the way, this is the standard mode, not the professional mode. And this is also
the standard mode. And we can also hover the mouse over the extension history. And as you can see, guys, I have typed a hand pouring milk. So we have this result. We don't have any milk, but you can try again and again to get your desired result. Sometimes it does not generate what you asked for. So in this clip, I typed
dark cloud covering sky, and as you can see, guys, nothing happens after 5 seconds. It's just a normal zoom-in video. There is nothing
happening in the sky. This is another example where I typed scene moves to a car parking. And if I play this video,
as you can see, guys, after five seconds, we have a car behind the character. This is, by the way, the auto extend. We have just a normal zoom-out clip. And I typed a futuristic bike is in front of the character. And after five seconds, you can see we have a bike. This is also a
standard mode video. I typed car crashing
and after five seconds, you can see we have this
smoke coming from the car. Sometimes it does not generate what you ask it to generate. So in this example, I asked it to take the lettuce off from the burger, and it just generated this video. And in another example, a hand is adding an extra patty to the burger, and we have this hand adjusting the burger, not adding the patty. And in this example, I typed an emoji is turning
red and getting angry. So after 5 seconds,
we have this result. He's getting angry, but
he's not turning red. Not only can you extend your
video to up to 10 seconds, you can extend to as
much as you want. Like in this example, in my extension history, this is the original prompt
that we generated this video with. And after that, I added plane passing by, then I added aeroplane is flying in a distance. Then in the last prompt, I added sky is getting darker, rain is about to fall. So this is the whole video. It does not add any aeroplane at all, but the sky is getting darker, which I noticed. Here we have, after 15 seconds, the sky is getting darker. So this is how you can extend your clip using Kling AI.
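As a rough planner for how these extensions add up, here is a small Python helper. It assumes each extension adds about 4.5 seconds and uses the per-extend credit costs quoted in this lecture (10 for standard, 35 for professional); treat both numbers as approximations that Kling may change.

```python
EXTEND_SECONDS = 4.5  # assumed average length added per extension
EXTEND_COST = {"standard": 10, "professional": 35}  # credits, per the lecture

def plan_extensions(base_seconds: float, n_extends: int, mode: str):
    """Return (total length in seconds, total credits) for n extensions."""
    length = base_seconds + n_extends * EXTEND_SECONDS
    credits = n_extends * EXTEND_COST[mode]
    return length, credits

# Three standard-mode extensions of a 5-second clip:
print(plan_extensions(5.0, 3, "standard"))  # (18.5, 30)
```

This matches the pattern in the lecture, where one extension took a clip to about nine seconds and three extensions reached past fifteen.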
25. Kling 1.0 vs Kling 1.5 Pro Mode: Welcome back. So now we have all the videos generated with Kling AI with different modes. And in this lecture, we will compare 1.0 professional mode versus 1.5 professional mode. So right here, we have the settings for the following generated video. First is the 1.0 professional mode result. And now we have the 1.5 professional mode result. As you can see, 1.5 delivered what I asked it to, while 1.0 delivered just a video of a crashed car. We have another example. Here we have the prompt, here we have the settings, and this is the 1.0 professional mode result. And with the same settings, this is 1.5 pro mode. And we have another example. With these settings, we
have the 1.0 pro mode result, and now we have 1.5 pro mode. So there is a huge difference between 1.0 pro mode and 1.5 pro mode. Let's see another example. With these settings, we have the 1.0 pro mode result. And with the exact same settings with 1.5, we got this result. So did you notice something? Let's see another example as well. With these settings, this is the 1.0 pro mode result, and with the exact same settings, we have the 1.5 pro mode result. With these settings, this is the 1.0 pro mode result. And with the exact same settings, we got the 1.5 pro mode result. What would it look like with an animated character video? With these settings, we got the 1.0 pro mode result. As you can see, guys, we got our face animation, and with the exact same settings, we have 1.5 pro mode. Now let's talk about what it would look like if you bring your pictures to life. So this is the result of 1.0 pro mode, and this is the result of 1.5 pro mode. So if we're talking about bringing your pictures to life, there isn't much difference between 1.0 and 1.5 pro mode. So 1.5 pro mode is more
consistent than 1.0 pro mode. 1.0 generates stunning videos, but there is some distortion in the videos that can be noticed. But there is not huge distortion. If you use the standard mode, you will get huge distortion or flickering. But if you use 1.0 pro mode, you will not get huge distortion or flickering. But if you consider 1.5 pro mode, it would be better to use it, because there isn't much distortion or flickering that I noticed. While comparing these two, I noticed 1.5 is more consistent and delivers better results than 1.0 pro mode, although 1.0 pro mode, in some scenarios, performs better than 1.5. So this is the conclusion on whether you want to use 1.5 or 1.0 pro mode.
26. Kling 1.5 VS Kling 1.0 Standard Mode: Welcome back, everyone. So the new update of
Kling AI allows us to use Kling 1.5 in the standard mode. So before this update, we could only use the
professional mode. Now we can use the standard
mode, which is a good thing. And the standard mode
will take 20 credits. First of all, I'm going
to select image to video model and let's
select an image. Now I'm going to
select my image, and now I'm going to type
this brown man running in a cyberpunk city wearing
a cyberpunk style suit. Now let's select
the standard mode, and let's click on Generate. Now, I have generated
a few examples, and now we are going to compare Kling 1.0 versus 1.5 standard mode. This is the first example, a man running in a
cyberpunk setting wearing a cyberpunk style suit. And as you can see, guys, I use Kling 1.0, and I have
used the standard mode. And this is the result. And with the same image
with the same prompt, this is Kling 1.5. And as you can see, guys, the result is improved a lot. I did not even use the professional mode for this one; as you can see right here, I have used the standard mode. Now let's look at another example. Giant slime falls from the ceiling and covers the man's body. Now I used Kling 1.5
professional mode for this one, and this is the result of the professional
mode, Kling 1.5. And with the same image
with the same prompt, I just used the standard mode of Kling 1.5, and here is the result. So you can use Kling
1.5 standard mode, and you will get some
pretty amazing results. Here we have another example, a man running from the tsunami arriving from the background. I used the Kling 1.0 model and I used the standard mode. And here we have the result. And with the same image
and with the same prompt, this is the result of Kling 1.5. And as you can see,
guys, the results are pretty consistent. Now here we have another example of Kling 1.0 standard mode, a man running from
the big explosion happening in the background
camera following him. With the same image
with the same prompt, this is the result of
Kling 1.5 standard mode. Now this is the final example. Everyone starts fighting,
punching each other. I used Kling 1.0 and used the
standard mode for this one, and here we have the result. And with Kling 1.5, here we have this result. So now you can use Kling 1.5 standard mode
and you will get some pretty amazing
results if you don't have a professional
account of Kling AI.
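Since the free plan gives 66 credits per day (mentioned in the introduction) and this lecture says a Kling 1.5 standard-mode generation costs 20 credits, you can budget a day's work with a one-liner. The 35-credit professional figure is taken from the earlier extend lecture and is an assumption here; check your account for current prices.

```python
DAILY_CREDITS = 66  # free daily allowance mentioned in the course intro

def generations_per_day(cost_per_generation: int) -> int:
    """How many generations the daily free credits cover."""
    return DAILY_CREDITS // cost_per_generation

print(generations_per_day(20))  # 3 standard-mode 1.5 videos per day
print(generations_per_day(35))  # 1 professional-mode video per day
```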
27. Start and End Frame Kling 1.5: Welcome back. So
in the new update, you can also use start and end frames by using the Kling 1.5 model. Now, I generated this clip by using these two frames: this is the start frame,
and this is the end frame. By the way, you can also swap the end and the start
frame if you want. And I used the professional
mode for this generation, and this is the result. And as you can see, guys, pretty much all the movement
of my body is inconsistent. So let's change
the model to 1.5, and let's see if the video will stay consistent
or inconsistent. We have another example
of start and end frame. By using the standard mode, this is the result. Now let's change the mode to professional and let's change the model to 1.5. We have this example as well. By using 1.0 professional mode, this is the start frame, and this is the end frame. And we got this result. Now let's see if we get a consistent
result by using 1.5. So let's change the model to 1.5, and let's click on Generate. So the videos are now generated. Here we have the final result. So the video is
pretty consistent. But there is some inconsistency on my face that I'm noticing. But it got the motion right. It is improved over the
professional mode of 1.0. Now let's check out
the second example. And I think this
result is more amazing than the standard mode of 1.0. Now if you use
inconsistent images, if you use the first frame and second frame is completely
different from the first frame, then you will get some weird results. And this is the first
example of that. In both images, the
car is stuck in a wall. That is why we got this result. Even if you use 1.5, you will get some weird results. Now, this is the second example. If you use completely
different images, the start frame is different
from the end frame. That is why we got this result. And as you can see, I am
using 1.5 professional mode. Now in this example, I use a Ferrari. If you want to generate
consistent images, then first of all, go to
designer.microsoft.com. And for the Ferrari, I just typed a wide view of a Ferrari car. After typing this, you just have to enhance the prompt: click on this Enhance Prompt button, and it will write
the prompt for you. Now you can copy or you
can select the size or generate the images from
designer.microsoft.com. But the images from Microsoft Designer look cartoonish. They don't look realistic. For generating realistic images, go to labs.google and go to ImageFX. As you can see, guys, I
have selected ImageFX. Let's go to My Library. And as you can see, guys, the images from labs.google are pretty realistic. Now, by clicking here, you can go to ImageFX, you can paste your prompt, and you can
click on this generate. While it is generating
the images, let's go to the Designer, and as you can see, guys, the Designer images look cartoonish. Whereas if you go to labs.google, we get realistic results. Let's download this image. And if you scroll up
in your prompt, as you can see, guys, this is a wide angle. We can also change
this to close up. We can also change the
color of the Ferrari. We can change the brand name. You can change the colored keywords if you want. Now we can click on Generate. Before, we generated a wide angle; now we can generate a close up of a Ferrari. And by doing this, as you can see, guys, you will get some consistent images. Now let's click on
this download button. And now if we open the images, these two images look
consistent. They look the same. So after we downloaded
the images, I selected the close up as the start frame and selected the wide shot as the end frame, and we got this result. And by the way,
this is generated using the standard mode of 1.0. And with the same images, we got this result. As you can see, the
result is consistent. If you want the camera to move, you can also swap the frames. And in this result, I just swapped the frames. I selected the wide shot as the start frame and the close up shot as the end frame. And this is the
result that we got. And these two images were also generated by using labs.google, and we got this result. So there is another
method if you want to generate
consistent images. First of all, go to ChatGPT and ask it to write an image generation prompt: a water bottle falls from the sky in Paris. And we got this prompt, and now ask it to write a prompt for the second scene. Now I just asked it to keep these two prompts consistent and give me the two prompts. I copied and pasted these prompts into labs.google. As you can see, these are
the first prompt result, and these are the
second prompt result. The bottle is falling
from the sky, and in the second prompt
it landed on the ground. And here we have the result. Now, I have used 1.0 standard mode, but you can change your model to 1.5 and select professional mode, and then you will get some professional, consistent results. Now, you can also ask ChatGPT
to generate images for you. As you can see, guys, I asked it to write six prompts, keeping the consistency across all six: a time lapse of a plant growing into a full grown tree. And these are the six prompts. I asked it to generate these images. And then I asked it, can you make them realistic? And we got these results. Then I asked it to make them ultra realistic, and we got these results. Then I put these two images in and checked out the result. So by using the
consistent images, you will get some
professional results. Now, here is another example: if you swap the frames, we get this fade out transition in between these two frames. And if you use this frame at the start and this frame at the end and use standard 1.0, we got this result, which is better than the 1.5 result, but the 1.5 result is consistent. So I hope you get the
idea how you can generate the start and end frame
videos professionally. Just keep images consistent. Now, if you have two
consistent images, then you can generate some
pretty amazing results. Like I have these two images which look identical
to each other. In the first frame,
we have a seed, and in the end frame, we have
a small plant or a tree. Now, I used Kling 1.0 standard mode, and we got this result. A seed becoming a
plant or a small tree. Now I just changed my model to 1.5 professional mode, and I got this weird result. The seed is not transforming. We got just two clips put together in a video. So if you are getting this kind of result, then try to type a prompt. So I just used these two images again and typed a prompt: seed transforming into a small tree. And then I got this amazing result.
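The two-scene prompt trick in this lecture boils down to keeping one shared scene description and varying only the action, so the start-frame and end-frame images stay visually consistent. A minimal sketch (the wording and the shared suffix are illustrative, not a Kling or ChatGPT API):

```python
def consistent_prompts(scene: str, actions: list[str]) -> list[str]:
    """Build one image prompt per action, all sharing the same scene
    description so the generated frames stay visually consistent."""
    base = f"{scene}, photorealistic, same lighting, same camera angle"
    return [f"{action}, {base}" for action in actions]

start_prompt, end_prompt = consistent_prompts(
    "a water bottle in a Paris street",
    ["the bottle falling from the sky", "the bottle landed on the ground"],
)
print(start_prompt)
```

You would paste each generated prompt into your image tool, then use the two downloads as the start and end frames.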
28. Motion Brush of Kling 1.5: Welcome back. So the
new update of Kling AI allows us to use Kling 1.5
with the Motion Brush feature. Now you can use the Motion Brush feature with Kling 1.5. So now we have this example. As you can see, guys, I have generated this by using the Motion Brush feature. And this is my drawn brush path, and this is the result of
Kling 1.0 standard mode. Now with the same image
with the same drawn path, I'm going to use the
professional mode, and I'm going to select Kling 1.5, and let's generate. Now we have another example. And if I go to the
path, as you can see, guys, this is my path. Now I'm going to use
the professional mode and let's switch
the model to 1.5, and let's click on Generate. Now we have the same
image this time. But I have only drawn the flash of the gun, as you can see. Now I'm going to select professional mode and let's select Kling 1.5, and let's click on Generate. This is another example of motion brush with
professional mode, and this is another example of motion brush using the Kling 1.0 model and professional mode. And if we go to the path, this is the path that I drew, and here we have the result. Now, let's change the model with the same motion
brush path drawn. Let's click on generate. And we have this
example as well. With this image, if we go
to the motion brush path, this is the path that I drew. With the professional mode of 1.0, this is the result. If you watched my previous lecture on motion brush, I tried really hard to make myself sit. And at the end, I got this result after a few tries. So now let's try the
same motion brush path, but with Kling 1.5, and let's use the
professional mode to get access to motion brush of 1.5. And now this is the result of Kling 1.5 professional mode. And as you can see, guys, everything is now
staying consistent. And this is the
second example of 1.5 professional
mode motion brush. And as you can see, guys, the result is pretty amazing. And this is another example where I just brushed the flash of the gun. And this one is pretty wild. And this is another example with the two robots. And as you can see, guys, the motion of their bodies is consistent. There is no flickering in the video in any frame. It kind of looks like a real 3D animation. Even in this example, whereas in the professional mode of 1.0 we got a weird hand movement, in 1.5, I notice the hand movement is now fixed. So 1.5 has actually improved in motion brush, and this is the result of me trying to make myself sit using the AI. And I think this result
is pretty amazing. And here we have some
other amazing examples. So I use this image, and if we go to
the motion brush, I drew this path. First, I drew the paddle and tracked the paddle to this side. Then I drew the whole water and tracked the water to that side. Then I removed this side of the water because, if we look at the image, we have these ripples in the water. So I removed this part. I removed this path by using the eraser tool, and then I selected area three, drew this path, and tracked this path to that side. And finally, I selected my whole subject and tracked my subject toward the upper side. And at the end, I drew this boat as a static area. And I used standard mode for this motion brush generation and used 1.0. And if I play my video, this is the 1.0 result. Even 1.0 is looking amazing. Now, by using 1.5
with the same image and the same motion brush path, we got this consistent result. Isn't it amazing? Now you
can do one more thing. If you notice we got a problem. The boat is not moving at all. So what you can do as well is
you can select the eraser tool, enable auto segmentation, and erase the static area. Because we drew the boat as a static area, that is why the
boat isn't moving. Now we can select
the brush tool. We can select area five. We can draw the whole boat. And we can track the
boat to this side. Now let's confirm. And for this, I'm going to use my standard
mode and select 1.0. While the video is generating, let's view some other examples. So I used this image, and if we go to the motion brush path, I drew my car, and I want to move my car to this side. And I drew the whole area as a static area. With 1.0 standard mode, this is the final result. As you can see, the car is going this way. So I fixed my path in the next generation. If we go to the motion brush, I drew my line to this side. Previously, I drew my line like this; now I have drawn it to that side. And here we have the result
of 1.0 standard mode. And it looks like the
car is kind of drifting. And with 1.0 professional mode, with this updated path, here we have the result. And with the same car with
the same image and the same path, but this time, I used 1.5 professional mode, and this is the final result. Now we have another
example as well. I use this image, and if we go to
the motion brush, I drew my car and I drew this path. I want to move this car to this side, and I drew the whole ocean, moved the ocean to this side, and drew the whole area as a static area. And this is the result. This is not what I expected. The car is going backward
instead of going forward. So in my next generation, I fixed this problem. I drew this path to this side; I drew the arrow to this side. In the previous generation, I drew the arrow like this. So by drawing the arrow to this side on the ground, I got this result. We fixed this problem. So if you get these kinds of problems, then you can easily fix them by correcting your paths. And by using 1.5
professional mode, this is the final result. As you can see, the
video is consistent. There is no flickering. Everything looks realistic. And we got this example
where we have a tiger. And if you go to
the motion brush, I drew the tiger and I pointed the arrow into the water, and I drew the whole water and pointed
the arrow to this side. I want the water to
move to this side, and I want my tiger to
go inside the water. And this is what we
got by using 1.0. Now, if you go to the motion brush, I drew this rock as a static area. But in my next generation, if we go to the motion brush, I removed this static area. And I also corrected my arrow positioning of the tiger, and with 1.5 professional mode, this is the final result. This is the result that we want. The boat is kind of moving. Not the boat, but we got a camera movement in the video. Now let's use the
professional mode, and let's use 1.5
and let's generate. So if you don't want to lose any additional credits,
I have a tip for you. If you want to use motion brush, then use 1.0 standard mode. By doing this, if you get something wrong, then you can fix your problem in your next generation. And once the problem is fixed, then you can use 1.5 professional mode, and you will get your generation, and the video will stay consistent. This method allows you
to save your credits. If you use 1.5 professional mode in
your first generation, then you will lose 25
additional credits. So this is my way to
save the credits. You can use the same method
to save your credits as well. So 1.5 professional mode
video is now generated, and here we have the result. So this is how you can
use the Motion Brush tool with 1.5 professional mode.
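The credit-saving tip above can be made concrete with a quick comparison. This sketch assumes 10 credits per 1.0 standard-mode draft and 35 per 1.5 professional render, figures taken from earlier lectures; verify current prices in your account.

```python
STANDARD_COST, PRO_COST = 10, 35  # assumed per-generation credit costs

def draft_then_final(test_runs: int) -> int:
    """Iterate on the motion brush in 1.0 standard mode,
    then render once in 1.5 professional mode."""
    return test_runs * STANDARD_COST + PRO_COST

def all_pro(test_runs: int) -> int:
    """Do every attempt, including the drafts, in 1.5 professional mode."""
    return (test_runs + 1) * PRO_COST

# Three failed drafts before the final render:
print(draft_then_final(3), all_pro(3))  # 65 vs 140 credits
```

The gap grows with every failed draft, which is exactly why the lecture recommends testing in standard mode first.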
29. Camera Movement With Kling 1.5: Welcome back. So in the
previous version of Kling AI, we couldn't really use the camera movement if you switch to image to video in Kling 1.5 or in Kling 1.0. But now we can use the camera movement in Kling 1.5 by using the professional mode. So to generate images, I used designer.microsoft.com and copied this prompt. Then I pasted the prompt in labs.google and selected ImageFX. Now you can also change
the highlighted keyword. If I go to my library, as you can see, these are
my original prompt results, and I selected the highlighted keywords and changed a few keywords, and then I got these results. If you go to Microsoft Designer, I selected this image; you can click here on Edit Entire Prompt, you can copy this prompt, and I pasted that prompt in labs.google, and I got these results. Go to Kling AI, select
your model to 1.5. Go to image to video. Now first, I'm going to
select the mobile image. You can scroll down and you
can select professional mode. And as soon as you select
your professional mode, as you can see, guys, we get the camera movement option. You can select your camera movement from this drop down menu. If you scroll down even more, as you can see, guys, these are my camera movements. So if you select this, you can change the slider to get your camera movement. If you change this slider to the left side, to minus ten, you will get a left horizontal camera movement. If you select vertical, you will get the vertical camera movement. We got the zoom, we got the pan, and we got tilt and roll. So I'm going to select
this pan, minus ten. Let's change this to minus five. Let's not go to the extreme. Maybe let's go to the extreme. Now let's click on Generate. Now let's select this picture and let's download this image. And for this image, I'm going to select tilt, and let's change
this to minus ten. Or maybe we can
select this image. Let's download this image,
and let's scroll down, and I'm going to
change my tilt to ten. Now let's click on Generate. I also generated this image, and this is the
prompt of this image. Now for this image, I'm going to change
the pan to ten. While this video is generating, I'm going to show you
a few examples of camera movement by using 1.5 image to video. So in this example, I selected this image and changed my camera movement
to vertical, ten, and I got this result. Now in the second image, I changed the camera movement to pan, minus ten, and we got this result. And for this image, I used the zoom camera movement, value ten, and this
is the result. As you can see, guys, 1.5
is pretty consistent. And for this image, I used my camera movement pan, value ten. The camera is panning
toward the sky. And if you notice there is no
inconsistency in the video, the video is pretty
much consistent. And in this example, I used roll, value ten. This is the extreme camera movement. And still we got some flickering right here, but not too much. Now, by the way, you can also mix these types of camera movement videos together. For example, I can mix this
video with this video, let's download these two videos. Now I'm going to show
you how we can mix these types of camera movement videos together. First of all, open CapCut. This is the best beginner
video editor software right here in the market. Now, create a new project. Click here to create
a new project. Now select these two
videos and import them right here in
the media page. Now I can import
this video first, and let's import this video
beside the first clip. Now we can press space
to preview this clip. So the camera is
panning to this side, so we can also reverse
this clip as well. But I think instead of
reversing the clip, let's place this clip
right here and we can reverse this clip
right here. Click here. First select your
clip and click here to reverse your video clip. And now, as you can see, we can put this video right here. We can also select this clip, go to the mask, select
the horizontal mask. We can invert this
mask by clicking here, and we can select this
arrow and we can move this upside to
make this feather. Now we can drag this line to this side and
change the feather to 0%. Now add a
keyframe right here, and we can also add the
keyframe in position. Now go a few frames ahead, change this to this side, and
change the feather as well. And now, if you
check out the video, we got this amazing result. Let's crop this. Select this clip, go to the animation, go to the out animation, and we can add a
fade out animation. And if I play this
clip, as you can see, guys, this is the final result. This is how you can use
your camera movement clips from Kling AI and merge
them together like this. Now, these are some
other examples of camera movement by using Kling
1.5 professional mode. This is the pan camera
movement, value minus ten, and this is the result. Now, I highly recommend you don't use extreme values for your camera movement. Just stay, like here, around minus three or plus three if you want to get some realistic, amazing results. Now we have the
second example, where I used extreme tilt, position ten, and this is the result. Now, this is the final example. I used pan, plus ten, and this is also an extreme camera movement. But for this shot, it works pretty well. So this is how you can use the camera movement by using 1.5 professional mode.
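The camera movement controls described in this lecture amount to a movement type plus a slider from minus ten to ten, with values around plus or minus three recommended for realistic motion. A small validation sketch (the names are illustrative, not an official Kling API):

```python
MOVEMENTS = {"horizontal", "vertical", "zoom", "pan", "tilt", "roll"}

def camera_setting(movement: str, value: int) -> dict:
    """Validate a camera movement choice and flag extreme slider values."""
    if movement not in MOVEMENTS:
        raise ValueError(f"unknown movement: {movement}")
    if not -10 <= value <= 10:
        raise ValueError("slider value must be between -10 and 10")
    # The lecture suggests staying around -3..3 for realistic results.
    return {"movement": movement, "value": value, "extreme": abs(value) > 3}

print(camera_setting("pan", -10))  # {'movement': 'pan', 'value': -10, 'extreme': True}
```

A helper like this is a cheap way to sanity-check settings before spending credits on an extreme slider value you did not intend.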
30. Kling 1.6 vs 1.5 vs 1.0 Text-to-Video: Welcome back. So Kling AI just
launched their new model, which is Kling 1.6, and it is said to improve on their previous versions. Like Kling 1.5, you can also use standard mode with Kling 1.6. But currently, we can't really use motion brush, and we can't really use camera movement with Kling 1.6. So I have this example. This is the prompt, and I used Kling 1.0 professional mode, and this is the result. Now with the same prompt, I used Kling 1.5, and this is the result of
1.5 professional mode. Now let's go to the
edit and let's generate this with the new version, 1.6. Now we can click on Generate. So the video is now generated, and here we have the result. Let's view the result. By far, 1.6 is pretty amazing. It is far better. It is far superior to 1.5 and 1.0, because you can't really find or notice any frame flickering in your generated clips. Now, I have some more examples that I want to share
with you guys. This is the prompt, and I used 1.6 pro mode for this generation, and here we have the result. As you notice, there is zero frame flickering in the generated clips. And as you can see, guys, 1.6 is clearer, and there is no frame flickering in 1.6. In 1.0 and 1.5, we have a little bit of frame flickering, but in 1.6, there is no frame flickering in the generated result. Now, here we have
another example. This is the prompt, and I also used the pro mode for this generation. And the result looks realistic. There is a little bit of frame flickering right here, but everything looks great. And this is the 1.0 and 1.5 result. Now, before you get excited and start using 1.6, I want to share this secret with you guys. 1.6 is for realistic clips. If you use an unrealistic prompt, you won't get a much better result than 1.5 or 1.0. Let me show you an example. With the same prompt, we have both the 1.0 and 1.5 results. And now, with the same prompt, this is the 1.6 pro mode result. There is no frame flickering, but I did not like the style of 1.6. 1.5 is far better if you want to generate animated clips, but 1.6 is for realistic clips. So with this prompt, this is the result
of 1.5 pro mode. And with the same prompt, this is the result
of 1.6 pro mode. Now, all of these videos were generated using 1.6 professional mode, but is 1.6 standard mode also good for video generation? I'm going to show you a few examples, and I'm going to compare 1.0, 1.5, and 1.6 standard mode, because all of these videos were generated using the professional mode. Now let's see if standard mode is also good enough for
video generation or not. So this is the prompt, and this is the result
of 1.0 standard mode. Now with the same prompt, this is 1.5 standard
mode result. And 1.5 is also good
for video generation. And now we have the
final 1.6 standard mode. So as I told you earlier, 1.6 is far more realistic than the others, because the 1.5 video looks great, but it is unrealistic to me. So 1.6 is for
realistic generation, but if you want to
generate in between realism and unrealism,
then use 1.5. Now this is another example. This is the prompt, and
this is the result of 1.5. And here we have the result
of 1.6 standard mode. And for this example, I think 1.6 is a clear winner
because in the 1.6 generation, you can clearly see a little bit of camera movement, and everything looks crisp and clear. Now with this prompt, this is the result of 1.0. And this is the 1.5
standard mode result. And now this is the result
of 1.6 with the same prompt. And for this example, 1.6 is also the winner, because the buildings in the background also look really great. Now we have another example. This is the prompt,
and this is the result of 1.0 standard mode. With the same prompt,
this is the result of 1.5 standard mode. Now finally, we have 1.6
standard mode result. And as you can see,
guys, 1.6 is great. It looks like we are shooting this sphere using a mobile phone. And as I told you earlier, 1.5 is in between realism and unrealism. Now we have the second to last example. This is the prompt,
and first of all, this is the result of
1.0 standard mode. And as we expected,
everything looks like crap. Now let's move on to 1.5. With the same prompt, 1.5 managed to generate this masterpiece. We have a little bit of glitching, but everything looks great. Now finally, we have the 1.6 result. And this is an unexpected
result for me, because I believe 1.5 looks great in this example. In 1.5, you can clearly see that we have an audience in the background, but in 1.6, there is no audience. And as I told you earlier, if you want to generate
a realistic video, then use 1.6. But if you don't want a fully realistic video, and you only want some realism in the video, then use 1.5. Now, if you regenerate this prompt, this is the result of 1.5. And surprisingly, 1.6 now performs a little bit better. And if you notice, when the robot starts to walk toward the red robot, the ground shakes, and with the ground shaking, the camera also shakes a little bit. And now we have the
audience in the background. For this generation, I think
it's a tie between 1.6 and 1.5. Now we have the final example. Here is the prompt, and with this prompt, this is 1.0 standard mode. And with the same prompt, this is the result of 1.5. And now, finally, we have 1.6. And as you notice, guys, even with the standard mode, 1.6 stands out. Now, I accidentally regenerated this prompt using 1.5, and this is the regenerated result. And 1.5 looks better than before in this regenerated clip. Still, for this generation, 1.6 is also the winner. So from what I have used so far, 1.6 is far better if you want to generate realistic clips.
31. Kling 1.6 vs Kling 1.5 Pro Mode: Welcome back. Now
let's talk about 1.6 image to video generation. Has 1.6 improved over 1.5 in image to video generation? Let me show you a few examples. First of all, in this example, we have this picture, and I wrote the prompt: a knife cutting an apple into two pieces. And as you can see, I used the 1.5 model, and I used professional
mode for this generation. Now even with the professional mode, as you can see, guys, it looks unrealistic. Now with the same image and the same prompt, I just used 1.6 professional mode, and this is the result of 1.6. And after seeing this generation, 1.6 is far better than the other tools currently available in the market, because the motion is great and the apple slices look realistic. Now let's see some other examples as well. Now in this example, I used my picture
and I typed a prompt: room is getting darker very fast, and the person's head is on fire, lighting the room after. So I want the room to get dark, and I want my head to catch fire. And as you can see, I used 1.5 professional mode, and this is the result. It looks great, but it's not looking realistic. Now let's check out the result of 1.6 with the same prompt and the same image. I used the professional mode for 1.6. Even though the video is unrealistic, visually 1.6 is far better. Now we have another example. And with this
picture, I typed: man running away from the robot that is following him. I used 1.5 professional mode and I got this weird result. And after I saw this result, I thought, why not regenerate this clip using 1.6 professional mode? 1.6 is going to be better, right? If I play my clip, as you can see, the person is running, everything looks realistic, but there is no robot in the clip. I asked it to add a robot, but there is no robot in the clip. So this is a little bit disappointing, and I wanted to fix this problem. So in this example, I just changed the prompt: person running towards camera, a giant robot is chasing him. I used a different image
because in the previous image, you can't really see my face. And I used 1.6 professional mode, and now we have an improved result. The person is running, and the robot is also chasing him. And as you can see, guys, the result is looking really great, and I am satisfied with this result. So if you have any problem with your generation, just change the prompt. You will get the result that you want. Now, if you don't get the
result that you want, like in this example, with the same image
with the same prompt, I got this result. But I regenerated it, and I got this result, which is improved in quality and in motion as well. If you don't get the result that you want, even if you type a very precise prompt and you have a high quality image of yourself or anybody, try to regenerate the clip, because it helps improve the quality or the motion as well. Now we have another example. With this image, I typed: colorful balloons falling from the ceiling, some of them fall on the person, light illuminating the room with colorful balloons. And this is the result that I got. This is, by the way, 1.5 professional mode, and
everything looks great. And now we have 1.6 with the same prompt, the same image, and also the same mode, which is professional mode. Let's see the result. 1.6 is far more realistic than 1.5. And in 1.6, we got some extra detail. If you notice the LCD,
as you can see, guys, as soon as the balloons fall from the ceiling, we have their reflection in this LCD. And we also got realistic balloons, but they appear for a shorter period of time. 1.6 is a little bit better than 1.5. If you compare the results, 1.6 is a little bit better, not too much, but a little bit better. But if you talk about the motion, 1.6 just destroys 1.5, because 1.6's motion looks more realistic than 1.5's. If you check out 1.5's motion, it just gives a feeling that this is not real. But in 1.6, everything looks great. But I kind of like both results. So if you want to use 1.5, then you can
definitely use that. Now, 1.6 is also better
in terms of expression. So with this image, as you can see, I typed a prompt: a person standing with a neutral expression, transitioning to a sad expression, then to a happy expression, and then finally smiling. And this is the result. Now, after that result, I regenerated it with the same mode and the same model, which is 1.6. Here we have the regenerated result, and I like this result more than the previous one. So if you don't get the result that you are expecting, then try to regenerate. We have another example
of 1.6 professional mode. With that same image, I typed: man running in a cyberpunk city wearing a cyberpunk style suit, and this is the result that I got. If you try something unrealistic, you might get a few glitches in your generated clips. But in terms of visuals, I think 1.6 is looking great. And now this is
the final example of 1.6 professional mode. I used this image and I typed: car transforming into a white car, and this is the result of 1.6 professional mode. In terms of video quality and in terms of realism, 1.6 tries to be as realistic as possible.
32. Kling 1.6 vs Kling 1.5 Standard Mode: Welcome back. So
1.6 professional mode is better than the other AI models. But is the 1.6 standard mode also good for video generation? Let's find out with a few examples. Right here, we have this image, and we have the same prompt that we used in the professional mode. Now, in terms of expression, 1.5 is not really good at it. Now with the same image, the same prompt, and the same mode, which is the standard mode, but using 1.6: even in the standard mode, 1.6 is better than 1.5. Now we have another example. With this image, with this prompt: person running towards camera, a giant robot is chasing him. I used 1.5 standard mode, and this is the result. For this generation, I think
1.5 did a really good job. But the robot is not chasing me. It's chasing these people. And now with the same
image with the same prompt, but with the 1.6 model in standard mode, this is the result. And I think 1.6 did a really great job. It beat 1.5 in this generation as well. We have another example. With this image and this prompt: everyone starts fighting, punching each other. I used 1.5 standard mode, and this is the result. Now with the same image and the same prompt, I just used 1.6 for this one, also in standard mode. And this is the result. It looks a little bit better than 1.5 because there is less glitching than in 1.5. Let's check out some
other examples as well. Now, do you remember this guy? I used this monster image for the comparison of 1.0 and 1.5 professional mode, but they both failed to express the expression of this monster. Now, with the 1.6 standard mode, 1.6 did a really good job at expressing the expression of this monster. Now let's see another example. With this image, this is 1.5 standard mode, with this prompt: a giant slime falls from the ceiling and covers the man's body. Now with the same image and the same prompt, with 1.6 standard mode, this is the result. Now, I think 1.5 did a better job in this generation than 1.6. Now we have another example
with that same image, with this prompt: man running in a cyberpunk city wearing a cyberpunk style suit. With 1.5 standard mode, this is the result. And I'm not going to lie, I kind of like the result that I got from 1.5, even with the standard mode. But my issue is that it did not generate any cyberpunk city. We only got a futuristic cube or something, but we did not get any cyberpunk city. Now with the same image and the same prompt, 1.6 standard mode, this is the result that we got. And even with the standard mode, 1.6 beat 1.5 in terms of visuals and in terms of video quality. Now with that same image
with this prompt: man running from a big explosion happening in the background, camera following him. With 1.5 standard mode, this is the result. And with the same image and the same prompt, 1.6 standard mode, this is the result. 1.6 performs better in terms of visuals and in terms of expressing the expressions of the people. As you can see, this person notices the explosion and runs towards it, as if to save people or something like that. I think 1.6 did a really fantastic job in this generation as well. Now you might ask a question:
which model should you use: 1.6, 1.5 or 1.0? Now, in terms of realism, if you want to create realistic videos, use 1.6, because 1.6 is far more realistic than 1.5. But if you want to generate a clip that is a mixture of realism plus unrealism, then try 1.5, because 1.5 generates something in between both. But if you want to try out your prompt, if you want to experiment with the prompt, and at the same time you don't want to lose any additional credits, then use 1.0, because 1.0 will generate with ten credits, whereas 1.5 and 1.6 will cost you 20 credits each generation. After you experiment with 1.0 generations, you might get some idea of what the 1.5 and 1.6 videos would look like. So this is my conclusion. I hope I answered your question. Now, if you have any other question about this course, then feel free to ask me. I'm always available to answer your questions.
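The credit math above can be written down as a tiny helper. This is a sketch only: the 10- and 20-credit costs come straight from the lecture, but the dictionary and function themselves are hypothetical, not part of Kling AI.

```python
# Hypothetical helper: estimate Kling AI credit cost for a batch of
# generations, using the per-model costs mentioned in the lecture
# (1.0 = 10 credits; 1.5 and 1.6 = 20 credits per generation).
CREDITS_PER_GENERATION = {"1.0": 10, "1.5": 20, "1.6": 20}

def total_credits(generations):
    """generations: list of model-version strings, one per clip."""
    return sum(CREDITS_PER_GENERATION[model] for model in generations)

# Example: experiment twice on 1.0, then render two final clips on 1.6.
print(total_credits(["1.0", "1.0", "1.6", "1.6"]))  # 10+10+20+20 = 60
```

With the 66 daily free credits mentioned earlier in the course, that kind of plan fits inside one day's allowance, which is why experimenting on 1.0 first is the cheaper way to iterate.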
33. How to Lip Sync With Kling AI: Welcome back. So now we have
a new update for lip sync. Now we have a dedicated section just for lip sync. What is new in this update is that we can upload a video in order to try the lip sync. Before this update, we couldn't upload any video; we had to generate a video in order to lip sync it. Now, if you use a video that contains multiple faces, like in this example, as you can see, we have multiple faces, then the AI will fail to recognize any face. So my recommendation is to use a video that shows your face clearly and contains only one face. Now I'm going to
upload my video. Now, once you upload your video, the AI will analyze it and then confirm whether this video can use the lip sync feature or not. You can clearly see my face in this video, so now we can lip sync it. Now, if you want to switch your video, if you want to change it, for example, I don't want to use this video, then I have to click here in order to reselect my video. And now, as you can see, we can select any video that we want. Now there are two ways to
lip sync your video. You can either use
text to speech, which contains several
voices or you can upload your own audio
in order to lip sync. Now I have typed this line, so let's play this audio. Hey there. My name is Faizan. I am using Kling AI. Now, some voices can't really use emotion, and we get this message: neutral emotion is not available with this voice. So there are several voices that can't use the neutral emotion, and you have to select a different voice in order to use it. Hey there. My name is Faizan. I am using Kling AI. Hey there. My name is Faizan. Now, sometimes with some voices
we have this message, so you can't really
use that voice. You have to select
a different voice. Hey there. My name is Faizan. I'm using Kling AI. Now let's use this voice. You can also change your emotion as well. Let's select the angry emotion. Hey there. My name is Faizan. I'm using Kling AI. Now let's generate it. Now we have the lip sync. Let's check out the final output. Hey there. My name is Faizan. I'm using Kling AI. Now we are facing a problem. As you can see, the lip sync is off, because if you look closely
in the original video, my mouth is moving too much. That is why the AI struggles to reshape my lips in order to apply the lip sync. Now in this example, where I did not talk at all, as you can see, guys, my mouth is not moving at all. I generated this video using Kling AI image to video. Now I tried lip sync on this video, and this is the final output. Guys, there are so many balloons in my room. No, believe me, check this out. Now, as you can see, guys, the lip sync is crazy on this video. Now we have another example.
if I play this clip, the mouth is moving a
little bit, not too much. And I try lip sync
onto this video, we got this result. Hey, there. My name is Faison, and I'm going to create
Pixar Animation. As you can see, guy,
the lip sync is crazy. Now, as I told you earlier, do not upload a video
that contain two phases. Well, I try that
and surprisingly, in AI just select one phase. The reason behind is that
this one is talking. That is why link focus
onto this phase, and I try lip sync
onto this character, and this is the final output. You think you can create anime without any
special skills? Well, yes, and I'm
going to show you how. You think you can create anime without any
special skills? Well, yes, and I'm
going to show you how. Now we have another example
of anime style animation. This is the original source, where the character is talking, but not too much, and this is the final output of the lip sync. Alright, an hour of work, and I'll be good to go. This is another example, and this is my original source, where the character's mouth is not moving at all, and this is the lip sync result. What a great day for a ride. Got to keep the momentum going. Now we have another
example of bad lip sync. So as I told you earlier, if your mouth is moving too much, like in this video and in my previous original clip, then you will get a bad lip sync result. How many times do I have to say the same thing? Are you even listening? So if you want to apply lip sync correctly, if you want to nail the lip sync on your video, then apply it to a video that contains less lip movement. If there is too much mouth movement in your video, then you will get a bad lip sync result. So this is how you can use the lip sync feature of Kling AI.
34. Elements: Welcome back, everyone. Now, if we go to AI videos, we have a new feature
called elements. What it does is let you select multiple images, and it will generate videos using those images as references. So for example, let's
take a basic example. Now we have these two images. Now, as you can see, from here, you can delete your image, you can upload another image, and here you can
select your element. In my example, I want to
select this entire image, but for example, if you want
to select only this truck, then you can also
select this truck. You can click on Confirm, and then you can generate your videos using
these elements. Now I'm going to select
my entire image. For my second image, I have selected
this Ferrari car. So now we are going to make a video with these two images. So I'm going to
type a simple prompt like: a Ferrari car in a field. Note that the elements feature only works with the Kling 1.6 model. If you change your model to 1.5, you can't generate this element video. If you change it to 1.0, we have the same problem. So currently, we can only generate these element videos using the 1.6 model. So with the professional
mode selected, we can generate this video. So the video is now generated, and here we have the result. As you can see, we have this
Ferrari car in a field. And if I pause this video, if we open the original image, as you can see in
the original image, we don't have any cloud. If we open this image, I want you guys to focus
on the cloud pattern. And in the generated clip,
as you can see, guys, we have that same cloud
pattern right here in the sky. And we have this field
in this generated video. So I hope you get the idea. Let's see some other examples to better understand
this element feature. So in this example, we have two images. We have this image,
a plane flying. This is the reference. This is the element, and
we have another element. We have a driving motorbike
character, vector style. Now I typed this prompt: man riding a bike and plane passing by. With the standard mode of 1.6, we got this result. We have the plane passing by, and we have that motorbike rider. Now, to generate these kinds of images, you can use ImageFX on labs.google. And as you can see, guys, if we go to labs.google, this is the prompt that I used to generate this image, and this is the prompt for the motorbike rider image. Now we can also add an object, and we can do
anything with that object. So for example, right here, as you can see, guys, this
is my original image. Right here, we have this object, a sword, and I typed: man swinging around a sword, with the standard mode. As you can see, guys, in this generated video, I'm swinging this object. We have another example. This time, it copied the background of
this object as well. Now in this example, I have three elements. First, we have this image of a car in the middle of nowhere. We have this background, and we have this subject. And with this prompt, with 1.6 professional mode,
this is what we got. So sometimes it generates these kinds of videos. So in my next generation, I typed this prompt, and with the standard mode, this is what we got. Then I removed one element from these elements, and in my next generation, this is what I got with these two elements only. So in this example, I only have two elements: a man sitting on a couch, and we have this car. And I typed: person sitting on a sofa beside an abandoned car in a desert. And even with the standard mode, as you can see, guys, the result is amazing. The reason I showed you guys this example is that with some simple prompts, we can also generate some amazing results using the elements. So to fix your problems, you can generate more videos, and you can tweak your prompt keywords here and there. Now we have another example
with that same image, and this time we have
this different car, and we got this result. Car in the field. Check this out. And this time, if we go to the
crop mode, as you can see, guys, I have only selected
this portion of this image. This is my element
that I have selected. Now in this example, we
have two anime characters, and I have typed: both fighting. With the standard mode, this is what we got. Now, sometimes when you generate videos using elements, it will not take the reference of the element exactly. It will take the reference of the element and generate a brand new video. So with those same images, I typed: both fighting with each other. With the standard mode, this is what we got. As you can see, it takes the reference of these two characters and generates a whole new video. Now again, with these two elements, if I open my image, this is my element number one, and this is my element number two. Man sitting on a chair, a big explosion happening in the background. So this is the prompt that I used to generate this video. Even with the standard mode, the result is pretty convincing. Check this out. This is insane. So another reason why you are not getting the
result that you want is because you are
using the wrong images. So in this example, as you can see, guys,
I have these images, and we have these two kids cycling, and this is a weird bicycle that I have generated using labs.google. This is the prompt, and this is the weird result. This is another example, and this is the final example. So make sure you have
selected the right images. Now we have this example. In this example, I
have used four elements, which is the maximum number of elements that you can use to generate videos using the elements feature. So first, we have this person. We have this mic.
We have this car, and we have this
as a background. Now for my prompt, I have typed
Boy holding a microphone, sitting in a car
parked in a desert, camera slowly zooming in. With the standard mode,
this is what we got. Now with the professional mode, with the exact same prompt and elements, this is the result. Now we have another example
using the maximum number of elements. So we have these four elements, and I typed: camera focusing on a cyclist, then focusing on a plane, and then focusing on a standing man. With the standard mode, this is what we got. We have the cyclist and the plane. And right here we have a man, this one. Now for my next generated video, I removed the background, and this is what we got. We got this weird result. The reason why we are getting this morphing effect in the video is that I am using the standard mode. Now, in my next generated video, I fixed my prompt, I tweaked a word here and there, and this time it improved the result a little bit. We have the standing man, the plane, and we have the cyclist. So you can create
some dynamic scenes like this as well with the elements. Now in this example, I also added the background again. And with this prompt, with the standard mode, this is what we got. Then I used the professional mode with the exact same prompt and elements, and this is what we got. It just generated a brand new video. It takes the reference from my elements; it does not generate the video with the exact same elements, but takes the reference from them. You can also improve your
generated video as well. As you can see here, with the exact same prompt, the same images, and the same elements, if you go to the negative prompt, I have typed the keywords for the things that I don't want in my generated video: disfigurement, distortion, and morphing. If we look at the original result, this is my original clip, and I have regenerated this clip using the negative prompt, and this is what we got. As you can see, the
result is improved a lot. We have less distortion, disfigurement, and morphing effect in this video than in the original. This is the original video; check out the distortion and disfigurement in it. And now check out the latest video. You will also get a different result if you
regenerate your clip. As you can see, guys, in this example, we have this background, we have the Ferrari car, and we have this prompt. With the professional mode, we got this result. Now with the standard mode, I just removed my Ferrari keyword from here, placed my car first, then placed my background second. This is what we got. Now in this example, I placed my car first again and placed my background second with the exact same prompt. This is what we got. The reason why I'm showing you guys this is that if you place your background first, then it will copy elements from your background and place them onto your second image, onto your second element.
what I'm talking about. Let's delete this image. Let's delete this image as well. And this time, I'm going
to corb my element. Let's remove this rug
let's click on Confirm. And we have that same image. With that same prompt
with the stand mode, let's create this video. So the video is now generated. This is a preview, and as
I told you earlier, guys, if you place your
background first, place your car second, we will get less background
than the previous example. This is the previous example, and as you can see, guys, we have the background second. That is why we are getting this whole background
element in this video. So my recommendation is, if you want your background to be static for the entire video, then place your background at the end of your elements. If you have four elements, place your background right here in the fourth element slot. If you have two images and you want one of them to be a background, place it second. And if you have three elements and you want one to be a background, then place it in the third element slot. Now if we go to
labs.google, these are my generated images that I used in this elements lecture. This is my prompt
for this image. For this background,
we have this prompt. For the car, we
have this prompt. For the anime background, we have this prompt. For the background of the
field, we have this prompt. For the car image, we got this prompt. The anime man, we have
this simple prompt. We have this prompt and
we have this result. We have this image, and
this is the prompt. And for this image, I typed this prompt. For the final image, with this prompt, this is what we got. So you can use labs.google to generate impressive images. And this is how you can use the elements feature of Kling AI.
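The slot-ordering recommendation above can be sketched as a small helper. This is purely illustrative: the function and its names are hypothetical, not part of Kling AI; it only encodes the lecture's rule that the background image goes in the last element slot, with at most four images in total.

```python
# Hypothetical helper encoding the lecture's rule for the Kling AI
# elements feature: put the background image in the LAST slot so it
# stays static, with subjects (car, person, etc.) in front of it.
def order_elements(subjects, background=None):
    """subjects: list of subject image names; background: optional image name."""
    if len(subjects) + (1 if background else 0) > 4:
        raise ValueError("elements feature supports at most four images")
    return subjects + ([background] if background else [])

print(order_elements(["car.png"], background="field.png"))
# -> ['car.png', 'field.png']  (background last, as recommended)
```

So with two subjects and a background, the background lands in the third slot, and with three subjects it lands in the fourth, matching the advice above.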
35. Multi-Element: Welcome back,
everyone. So the new Kling AI update launched a feature called multi-elements. Now, it is different from normal elements. If you go to normal elements,
in normal elements, you have to upload
up to four images in order to generate a video. But in multi element, you can swap something
in your video. You can even add
something in your video, and also you can delete
something in your video. Now, currently, 1.6 Kling model is available
for multi elements. If you switch your model
to Kling 2.0 master, you can't really swap anything. You have to change
your model to 1.6. So if I go to image to video, in normal elements, we also have 1.6; we don't have 2.0 master. But if you change your model to 2.0 master and go to multi-elements, it says to switch to the appropriate model. So as you can see, we can only use multi-elements with Kling 1.6. First of all, let's use
the multi-elements swap. You have to upload a video of up to five seconds if you want to use multi-elements. Now, once you upload your media, it will analyze your video. If your video is longer than five seconds, then you have to cut it; you have to trim your video down to five seconds. So my strong recommendation is to make your video under five seconds. Now we have this interface. We have to add a selection. We have to select an object
that we want to replace. So I'm going to select this car. So I'm adding this keyframe
for precise selection. Also, if you add a point and you want to remove or delete that point, you can select the deselect tool and click on it, and it will delete the point. Now let's add this again, and now let's confirm. Once you confirm,
you have to upload a picture that you want
to replace this car with. So by using this prompt, I have generated this image. As you can see, we
have these images. I'm going to download this one, and now we will replace
that car with this one. Once you upload your image, you can select a subject. So I'm going to crop my image, and let's click on this confirm button. Now, once you select the subject, as you can see, we have swap, and in brackets, we have X. So we have to name this thing. I'm going to type vehicle, right, because we are swapping one vehicle for another. So: swap vehicle from this image for this car. In the bracket, I'm also going to type vehicle. Now we can generate this video. Now, as you can see,
it swapped the vehicle. Now, you might be thinking that if you flip the vehicle in the image, you will get a better result. So I'm going to show you some examples that I generated before. In this example, I flipped my vehicle in this image, and everything stays the same. As you can see, we have that result again. I'm going to show you the reason why we are getting this kind of error. If you look closely, using the same prompt, I also have this image. So I downloaded this image and used the swap feature, and for this image, as you can see, I selected the whole image; I did not crop it. And as you can see, it also added the mud that we have in the image, and it swapped in the road as well. In my final example, I did not flip the vehicle, and I cropped my subject
in this picture. Now, this is the result. As you can see, it just added a little bit of mud, not too much like we have in this video. So if you want to swap your object, then you have to select a similar object. As you can see, the car is facing this direction, and that is the reason why I selected this image, and the reason why that other car does not work with the video that we have. Now, you also have to
keep this in mind: if you swap a vehicle with an animal, then you will also get a crappy result. In this example, as you can see, I swapped my picture with a bear, using the same video, and in my prompt, I just changed vehicle to bear. And as you can see, we have this result. So you will get a crappy result if you swap a vehicle with an animal. You have to swap a vehicle with some vehicle, and you have to swap an animal with some animal. Now we have another example. As you can see, we have this
clip and we have this image. So if I go to the
edit, as you can see, I have selected this goat, and I swapped this goat with the dog that we have in this image. Now, to generate the image, I used ImageFX on labs.google, and I typed this prompt. The reason why I have a white background is that the AI works better with a white background. It will understand better if you have a white background behind your image. Now, as you can see, I swapped the goat with the dog, and this is the result. Again, I used ImageFX on labs.google, and I got this result: modern cyberpunk jacket with white background. So as you can see, we have a white background and we
have this cyberpunk jacket. Now I used this video, and I selected my whole subject like this. I selected this subject like this, and I added multiple keyframes to select it. Now I have this jacket with a white background. And in my prompt, I typed: jacket, for the person's jacket. And as you can see, this is the result that we got, and it perfectly swapped the jacket with our cyberpunk jacket. Now, what if you want to
swap the jacket plus pants? In this example, I have generated a cyberpunk jacket plus pants with a white background. On the same video, I use this image. If I go to the edit, as you can see, we have the cyberpunk jacket and pants, and I just typed "jacket and pants for person's jacket and pants." And as you can see, we have this phenomenal result. You can easily swap anything in your video if you use a picture with a white background, because with a white background the AI understands better and can easily track the image onto your subject or object. Now let's talk about how we can use the multi-element add feature. The add feature works when you want to add
something in your video. So using Kolors 2.0 with this prompt, I have generated these images, and I use this image. I have used my video where I was talking, facing the camera, and I use this as the subject. Now, it will automatically write this prompt for you; you just have to type inside the brackets. First of all, I'm going to refresh this so you can better understand how to use the add element. Let's go to multi-element, and if you go to Add, first of all, you have to select a video. As you can see, it automatically writes this prompt for you. Once it analyzes your video, you don't have to select anything like we did in the swap feature of multi-element. Now, as you can see, once your video is selected, you have to upload an image. Once we have the image, the prompt reads "Using the context of this reference video, seamlessly add," and we have this X inside brackets. So we can type "Robot is sitting on person's shoulder." You have to type your context inside those brackets, before "from image." Now, if I go to my video that we generated before and open the edit, as you can see, inside the brackets I typed "robot sitting on man's shoulder," and this is the result that we got. As you can see, it seamlessly added this robot on my shoulder. I have some other example of
the add feature of multi-element. Now in this example, as you can see, I have multiple images: I have a bubble image, and I have this creature image. After "Using the context of this reference image, seamlessly add," I typed "multiple bubbles in the air from this image. A creature is standing beside the person from this image." And as you can see, I only got the bubbles. So I retyped my prompt, and if I go to the edit, in this prompt I just typed "a creature next to the person." And again, I did not get the creature; I only got the bubbles. So I fixed my problem. In my next prompt, if I go to the edit, as you can see, we have a dog on a white background, and I have bubbles on a black background. So I typed "bubbles in the air from this image, dog standing next to the goats from this image." And as you can see, it only added the dog. The reason why it added the dog seamlessly is because the dog is on a white background, as I told you earlier. So in my next example, as you can see, I have generated this image with
a white background. If I go to ImageFX at labs.google, this is the multiple bubbles image with a white background. And as you can see, this time it added both the bubbles and the dog. So as you can see, we have "cyborg robot dog, realistic," and we have this dog image. I use this cyborg robot image of the dog, and I typed "cyborg dog next to the goats." And as you can see, we have this crazy result. It matches the lighting, and we have the shadow of the dog as well. Now let's talk about the final
feature of multi-element, which is called delete. This is how it works. First of all, you have to upload a video. After you upload it, Kling will analyze your video as long as it is under five seconds. Once it analyzes your video, we have that same interface again: we have to select the object or subject that we want to delete from the video. So I'm going to select my subject. You can add multiple points if you want to. Now, in the final keyframe, let's again add multiple points and click Confirm. Then you have to type what you want to delete from this video. I have selected the subject, so I'm going to type "delete person from this reference video." I have already generated this video, so I don't have to generate it again. Now, if I play my clip, as you can see, it deletes the subject precisely. We don't have any fill-in artifacts in this area. So by using the delete option, you can easily delete anything from your video. And this is how we can use the multi-element feature of Kling AI.
36. Kling 2.0 Master: Welcome back, everyone. We have a brand new video model called Kling 2.0. If you go to the Kling AI website, you can go to Global, and as you can see, we can go to Video, and right here you can change your model from Kling 1.6 to Kling 2.0 Master. Now, if you want to generate text to video, you can go to Text to Video and type in your prompt. You can also get the help of DeepSeek to write your prompt, and you can ask it to enhance your prompt as well. We can also use the presets. As you can see, you can use these presets in your prompt, like your lens, shot type, light and shadow, et cetera. You can also use image to video with Kling 2.0 Master: you can upload your image and type your prompt, or just use your image and generate the video. Now, let's compare Kling 2.0 Master with the other models. As you can see, we have this prompt, and Kling 2.0 Master is so damn expensive: it takes 100 credits to generate one 5-second video. This is my prompt, and this is the video that we got. And I'm going to be honest with you: with 100 credits, this is what I got, and it is the worst video if you compare it with the other models. As you can see, the other models perform better than Kling 2.0 Master. We have this example.
As you can see, we have a Lego shark in the ocean, but the design of the shark is just like an ordinary shark. Now let's compare this prompt with Kling 1.6. I have this prompt, and as you can see, I used Kling 1.6 professional mode, and this is what we got. In this video, I really like the design of the shark, whereas Kling 2.0 Master pushes the video toward a more realistic look. Now we have this example: a Ferrari transforming into a Bugatti. This is what we got with Kling 1.6 professional mode with the same prompt. As you can see, even though we have some errors in the video, we are getting a real transformation. But if you compare this to Kling 2.0, it just gives you this ordinary transition from one car to another. Now we have this example, and as you can see, it looks like a real-life video recording. I'm really impressed. Now, if you use the Kolors 2.0 image generation model of Kling AI and restyle your image, as you can see, I restyled this image into a 90s anime style. And if you turn this image into a video using Kling 2.0 Master, as you can see, we have this impressive result. Now you can generate high-definition anime-style videos like this one. I used this prompt and generated these images. Then I turned this image into a video using Kling 2.0 Master, and this is what we got. Again, I'm really impressed by 2.0 Master, because the dynamic movement is crazy. Now I'm going to show
you this example. This is the result of 1.6 professional mode, "POV of a car," and this is what we have. Not bad. Now let's compare this to Kling 2.0 Master with the same prompt. This is Kling 2.0 Master, and as you can see, it looks like a real-life GoPro recording. In most cases, Kling 2.0 Master performs incredibly well compared to the other models, but in some cases it performs far below them. Sometimes it just gives you a still image: with this prompt, I got this image with a simple pan animation for 100 credits, not even a zoom-in. So you have to keep this in mind if you want to use Kling 2.0 Master. If you don't want to lose your credits, my strong recommendation is to use the image to video model, because in these prompts I used image to video, and I was really impressed by image to video with Kling 2.0 Master: as you can see, it gives you dynamic movement that brings your image to life.