Transcripts
1. Intro: Okay. Hello, everyone, and
welcome to another reality. What you see now
is done using AI, and I'm going to show
you how to create and animate your own characters as well. The clip you just saw was made entirely from
a few selfies. My name is Crence and
I'm excited to guide you through this process of
using AI in a creative way. I come from a background
of 3D animation, motion graphics, and design,
but over the past year, my curiosity for new
technologies led me to dive deep into AI and explore its
incredible possibilities. Since then, I've created
AI animations for brands, personalized clips
for major festivals, and different AI visualizers. If you're looking to create artworks, personalized
music videos, video reels, visualizers, and different AI animations,
this course is for you. I'll walk you through every step
of the process in detail, sharing the AI tools
that I use, my settings, and all the valuable insights that I've gained
from my experience. By the end of this
course, you'll have an automated workflow
that empowers you to create anything that
you can imagine in your own unique style.
Let's get started.
2. 01 What to expect from AI: Based on my experience,
I wanted to share what you can expect
from the AI workflow. The first thing is, if you come from the
graphic design world or 3D art or motion graphics, don't expect the same workflow, because in graphic design or 3D, you start everything
from scratch. You have your blank canvas, and then you start
to add things on it. You work with different
elements, different colors, and do everything step by step until you get
the final result. While in AI, it is
not the same thing. Basically, you
generate an image. You can get the final result
just with one prompt, or you'd have to tweak it, but it is not a step-by-step workflow where you start something from scratch and then build on top of it. The second thing is that
nothing is 100% fixed, so you need to be prepared for a lot of testing and tweaking. As I was saying,
you can generate an image and that can
be the final result. But it can also be
that you need to do adjustments to that image. For example, if you create a night scene in the forest where you need just one house with the lights on, in a cinematic mood, you can generate that and
everything can be perfect, but maybe the lights are off. So you need to redo it, and then maybe you get the trees that are not
in the style that you like. Then you can redo it, but then
maybe it's not cinematic. So this is why you need to test and tweak until you
get the final result. The third thing is that the AI process is not
really an artistic process. It is more like an
engineering process. Personally, I've worked with digital tools like Photoshop, Cinema 4D, After Effects, and so on for about eight years, and in these kinds of software, although they are digital, you can still work with colors, with compositions,
with contrast, these kind of elements
that are artistic in a way that can also be
the same for a painter. While in AI, you don't really have to worry about
these things. I mean, you can just
write, for example, make it cinematic and you'll probably get cinematic results. You just have to have
the eye for that. And when I say it is an
engineering process, I mean it is like a process of researching. For example, you generate an image, but something is wrong, so you have to think, what's wrong with the prompt? You have to change
certain words, then you get another result. Maybe you don't put
the settings right because there are a
lot of settings that can affect the result. So it's like having a machine
with a lot of buttons, and you have to figure
out during the process how you should put the
levels for that machine, so you get the result
that you like. Fourth thing is that
there are a lot of limitations like camera
movement, body movements. And yeah, there is this idea that you can
do anything with AI, and of course, for
the past two years, it has improved a lot. You can see great improvements
from one year to another. But at this point,
when I'm talking, I know that there are
a lot of limitations. You can see when something is done with AI, mostly because the camera and body movements are not very natural. Also, you can see deformations of the face or of the features
when the camera rotates. So yeah, there are some limitations, and at this point it cannot reach the same level as a full production done in real life. And you can also see ads, for example, an ad
from Coca Cola, where they created it with AI, and of course, you'll
see the difference. A lot of things seem to be frozen. So at this point, just know that you can do nice things, but don't expect AI to create miracles. So yeah, this is
what I wanted to share based on my experience. Now let's move on to
the next chapter.
3. 02 Overview of what can be done with AI: Okay, so in this part, I'm
going to do an overview of what can be done with the
tools that we will learn, and I'll show you
examples for each case. The first thing that you can
do is create your avatar, and I'm going to
show you an example. Around a month ago, I
created some avatars for some singers
for a music show. And yeah, I'll show
you one example. This is one of the singers. And here I will show
you the images that I've created after I
trained his avatar. So I had to imagine
him as Dracula, and these are some of the
images, as you can see. And yeah, once you
create your avatar, you can imagine it in
any scene that you like. The second thing is that
when you have an avatar, you can also train a style, so you can combine your
avatar with a style, and I'll show you
an example of that. Yeah, a good example is this one that I did for The Weeknd, or this one that I did for Grimes. You can see that I have combined the avatar with this
chromatic style. You'll see that each image
has a certain theme. They are in a certain style, and we will go through
that later in detail, but yeah, I just wanted to
show you some examples. So I trained the style by giving it images that have
chromatic jewelry and yeah, this kind of style, basically. And also something else, like an example
that I can show you is this one that I did for The Weeknd. Okay. Yeah. But here is another style that I tested. Okay. Let's go to the videos. And yeah, the other
thing is short clips, so you can animate your avatar. And we can go back
to this example. So once we have the images, we can also do the animation. Yeah. As you can see, this was an image, and
now it is an animation. I can show you similar things here. Okay, here it is mostly the
camera that is moving. This one, I think is nice. Yeah, you can see the
environment that is animated. And yeah, so this is the
avatar and animation. Now, the other thing is
stylized short clips. So you have the avatar, you have the style, and
you can animate it. And the way that you will do this in this course is just by creating images and then animating them. So yeah, let's see
these results. We have the image,
then we animate it. Yeah. And then the other thing is doing stylized short
clips with lip sync. So you have the avatar, you have the style, you
have the animation. And then on top of that, you can add lip sync. And the way that you would
do this is, first of all, you have to record yourself or anyone talking and
then use that as a reference so your avatar can move the lips
based on that video. And these are some examples. Since I was a kid. As you can see, this
was just a video, and then I added the
lip sync, as well. Let's see this one as well. And yeah, these are some
of the possibilities that can be done if you
combine these techniques. Now we will move
to the software.
4. 03 What do I use: Now I will show you what I use to get the results that I showed you before. So to train an avatar, I go to this website, and here you need to go to the Flux LoRA Fast Training. You can click here. And yeah, here I'll show you
how to do it later. You can just log in here, add some credits, and yeah, then you'll be ready
to do the training. To train a style, it
is the same thing. So you still go to Flux LoRA Fast Training. But here, you need to check the 'is style' option and do
some other settings. But basically, it is
a similar workflow. And to animate the
videos, of course, there are a lot of website. Every day, there is a
new thing coming out, but from my experience, Runway app works the best. So yeah, it is this one. You can try it. You just have to go here to
generative session. And yeah, here you will
have to pick an image, write a description,
and then generate. You can make it five or
10 seconds, but yeah, we'll go through that later. Now for lip syncing, I've used Hugging Face, but lately Runway is better because they just shared a new feature where you can actually go from video to lip syncing, because before it was from photo to lip syncing. So yeah, you can
see the ad here. So, you can just click to try it now, and here you would have
to add a performance. So basically recording
yourself talking. You have instructions here, and then you would have to
add the character where you would like to add
the lip syncing. Again, we'll go
through that later. And here was the website
that I used to try before. This is image to video. You have to go to Hugging Face, go to Spaces, to this LivePortrait. And what you'd have to
do is add an image. For example, let's try
what we have here. Then you would have to
add the driving video. Let's just test something from the examples that are here, and then you would just click animate and in a few seconds, you would get the result. But this is image to video, and also it has
some limitations. They also have video-to-lip-syncing, yeah, but it doesn't really work well. I've tried a lot of possibilities, and I think the most efficient way at this point is Runway. Let's see the results
just for fun. Yeah, you can see what happens. And you can see that there
is a kind of head shake. This is the main
problem that this has. But yeah, since you have Runway, you can try it here. And the last thing is ChatGPT. Now, there are a
lot of things that you can learn about prompting, but I've figured out that
I can just ask ChatGPT. For example, yeah, let
me show you this one. Again, in this case, I just told ChatGPT that I have an avatar called
the name of the singer. Because you'd have to give
the name of the avatar. And I want to imagine him as Dracula, give me some prompt ideas. And it would automatically
show you ideas. Yeah, let's just test it now. Yes. Okay, let's just test it. Yeah. And as you can see, it will give you endless ideas. You can just keep tweaking. For example, I can say, I want to imagine
him in the castle and in a cinematic style
or in a dark style. And yeah, it will
help you a lot. Because otherwise, you'd have to know how to write certain words. Also, if English is not
your first language, like in my case, it would be an extra step that I don't really want to deal with. This is why I use ChatGPT. So yeah, this is
the theory part. Now let's move on to
creating the avatars.
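As a side note, the same prompt-brainstorming step can be scripted instead of done in the chat window. Here is a minimal sketch, assuming the official OpenAI Python client with an API key in the environment; the model name and the trigger word are just placeholders:

```python
# Minimal sketch: asking ChatGPT for prompt ideas from a script instead of the chat UI.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY environment variable;
# "gpt-4o" and the trigger word "singer01" are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": ("I have an avatar called singer01. "
                    "I want to imagine him as Dracula, in a castle, in a dark cinematic style. "
                    "Give me five short image prompt ideas that start with the word singer01."),
    }],
)
print(response.choices[0].message.content)
```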
5. 04 training your avatar: Now we are going to
do the training. And for that, I
asked my brother to give me some pictures of him. And here in this folder, I have 42 images. Make sure to collect images from different angles and different framings, like portraits, shots from the side, and full body. Yeah, it's like the AI is scanning you, so the more angles, the more precise it will be. And I think 20 or 30 images should be fine. And yeah, then you go to Flux LoRA Fast Training. Yeah, here we can add images. I'm going to select them all. Now on the trigger word, make sure to put something
and save it somewhere. Yeah, I'll just put this. Don't check the 'is style' option. Let's go to More. And here on the steps, I put 1,000. If you increase it, it is
supposed to be more precise, and it will cost more. But from my experience,
1,000 for portraits is really good, and I haven't seen any difference when I make it 2,000, to be honest. So yeah, I'll just copy this, click Start, and I'll just create a new folder here. So I'll call this, um, train ID. And I'll just create a text document. No, actually, this is the trigger word. And in the ID one, I'll paste something else. So yeah, now I'll wait for this to finish in a few minutes.
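As a side note, this training step can also be started from a script. Here is a rough sketch, assuming the fal.ai Python client; the argument names are how I understand the Flux LoRA Fast Training endpoint, and the file name and trigger word are placeholders, so double-check everything against the current fal.ai docs:

```python
# Rough sketch: starting a Flux LoRA Fast Training job with the fal.ai Python client
# (pip install fal-client, FAL_KEY set in the environment).
# Argument names below are assumptions about the endpoint; verify them in the docs.
import fal_client

# Upload a zip of the 20-40 training photos and get back a hosted URL.
images_url = fal_client.upload_file("brother_photos.zip")

result = fal_client.subscribe(
    "fal-ai/flux-lora-fast-training",
    arguments={
        "images_data_url": images_url,
        "trigger_word": "myavatar01",  # placeholder; save it together with the returned ID
        "steps": 1000,                 # 1,000 is enough for portraits in my experience
        "is_style": False,             # leave the style option off for a person/avatar
    },
)
print(result)  # contains the trained LoRA file/ID to reuse for inference
```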
Okay, so now this is ready, and as you can see, it was done in three or four minutes. I'll click Run Inference. Yeah. And yeah, now the prompt starts
with a trigger word, and you can add
anything on the side. So in that case, we will use ChatGPT to help us. But before we go there,
let's just copy the path. So I'm going to save it here. Okay, so in any case, you can open Flux LoRA. You can just paste your ID here. You have your trigger word, and then you just do prompts anytime so you don't have
to train over and over. Now, let's just see the
additional settings. Here, I just change the size. For example, let's
pick this portrait size,
leave it as it is. Something that you can
adjust is the scale. This is the scale of the LoRA. The more you increase it, the more it will be influenced by the LoRA. The more you decrease it, the more it will be influenced by the prompt,
and we can test it. So I'm going to copy this, and I'm going to ask
ChatGPT for help. This is what I'm
writing. I want to imagine him in surreal places, help me with some prompts, so
you can add here anything. And, of course, if you have other things in
mind, you can do it. This is just to test. Let's try this one. I'm
going to click Run. Okay, so as you can see here, it follows more the prompt which says futuristic
city and so on. So to get more of the look of the LoRA, I'm going to ask here for
portrait pictures. No. Okay, let's just test this, let's stick with the futuristic. Let's try the new prompt. And yeah, as you can see, here is the first portrait. Now I can save this. And you can keep trying to change the seed
and get different results, but I'm going to keep the same seed and just
adjust the scale. For example, if I
increase it to 1.2, let's click Run
and see what happens. Now let's go to the other side. As you can see, what happened here is we don't have the glitchy text, but we have more of the portrait, because we increased the scale. So the influence comes more from the portrait rather than the prompt. So if we decrease it, I think, based on this logic,
we are supposed to get something more futuristic. Yeah, as you can see, and here at 0.8, it looks
less like the person. Maybe for you it's the same, but for me, since I know the person, I can see the difference; here it goes closer to the prompt. So I think the middle ground
from my experience is 1.2. In our case, we see
that one worked better, so we stick with that and we can click Run again. And meanwhile I can, yeah, just try something else. This is also nice, as you
can see from the side. Okay, just pasting something
else. Yeah, this is nice. So yeah, you can also change the size to landscape.
Let's try it. Okay. So yeah, you
get the idea now. Basically, once you
train the portrait, here you can add the prompt and adjust the
scale and the size. And for ideas, you can use your imagination or you
can ask ChatGPT for help. And the rest is just
changing the seed and testing until you
get what you like.
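For reference, the inference step with the LoRA scale can also be sketched in code. This assumes the fal.ai Flux LoRA inference endpoint and its Python client; the LoRA path, trigger word, and exact argument names are placeholders and assumptions to verify in the docs:

```python
# Rough sketch: generating an image with the trained LoRA and adjusting its scale.
# Endpoint and argument names are assumptions about the fal.ai Flux LoRA API; verify in the docs.
import fal_client

result = fal_client.subscribe(
    "fal-ai/flux-lora",
    arguments={
        "prompt": "myavatar01 standing in a surreal futuristic city, portrait, cinematic light",
        "loras": [{
            "path": "https://.../my_trained_lora.safetensors",  # placeholder: the ID/path you saved
            "scale": 1.2,  # higher = closer to the trained person, lower = closer to the prompt
        }],
        "image_size": "portrait_4_3",
        "seed": 42,  # keep the seed fixed while you compare different scales
    },
)
print(result["images"][0]["url"])
```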
6. 05 training a style: Now that we trained the portrait, we need to move forward to training the style. So for that, we go to the same Flux LoRA Fast Training. And now instead of
adding portraits, we need to add images
of a certain style. And for that, I used Pinterest, and I searched for
fire good art, so I can get these
kind of images, and I'm doing this just
for learning purposes. Here I collected,
let's see, 21 images. I think they are enough. They are in the same style, with this fire theme. And of course, it's up to you to collect images in the
style that you like. I think 20 or 30
should be enough. So yeah, I just went through
all of these and just downloaded the ones that I like and make sure
they are in one theme, so they are similar
to each other. So this way, the AI
doesn't get confused. And yeah, let's pick them. Okay, now the trigger word. Yeah, I'm just adding
something random because if I add
the word like fire, when I do the prompt and say, in fire style, maybe it gets confused with
a general concept. This is why I'm adding
like a random name. And here, it's
important to check the 'is style' option, and now on the steps, make sure to put it to 2,000. Yeah, from my experience, this works better.
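If you start the training from a script like in the earlier sketch, the style training only changes a couple of settings. Again, the argument names are my assumption about the fal.ai endpoint, and the file name and trigger word are placeholders:

```python
# Rough sketch: same Flux LoRA Fast Training call as for the avatar,
# only the trigger word, step count, and the style flag change.
import fal_client

style_result = fal_client.subscribe(
    "fal-ai/flux-lora-fast-training",
    arguments={
        "images_data_url": fal_client.upload_file("fire_style_images.zip"),
        "trigger_word": "frst01",  # a random-looking word, so it isn't confused with the word "fire"
        "steps": 2000,             # styles seem to need more steps than portraits
        "is_style": True,          # this is the 'is style' option
    },
)
print(style_result)
```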
And now we just click Start and wait for this to be finalized. Now that it is ready, we do the same steps as
before, so we click Run. And before we move any further, make sure to save
this information. So the trigger word and the ID, Okay, now let's just
do some testing. So yeah, let's go
with a portrait, and I'll just type
human in this style. So this can be any name, and I'll just click Run. And yeah, as you can
see, this is similar to the reference
that we gave to it. But now I will show you
how to make it next level because if we
see the references, they are with this fire element. So to do that, what I'm
going to do is use ChatGPT. So I'm going to attach a file. Yeah, let's go with this one. I'm going to copy
the trigger word. And I'm going to say to
ChatGPT something like: I am training a style called this, and I want to use the image as a reference so you can
create a prompt based on it. Okay, so I just wrote
this thinking out loud. It's not like there is a
specific way of doing it. It's just having a conversation with ChatGPT until it gets you. So I'll just test it now. Okay, so as you can
see, it created a prompt based on this image. And it would go like this,
create an avatar in this style, and the rest is a
description based on the image. So I'm going to copy this
and let's paste it here. And yeah, as you can see, now it really creates images similar to the style
that we trained it for. And of course, you can click
Random, get new generations. Also here, you can say, give me some variations or
upload any new reference, for example, this
one or this one, and it will create
different descriptions. Then you come back here
and yeah, just recreate. Yeah, this is from
a different angle. So yeah, this is how
you train the style.
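The same trick of handing ChatGPT a reference image and asking for a prompt can also be scripted. Here is a minimal sketch, assuming the OpenAI Python client's image input; the model name, image URL, and trigger word are placeholders:

```python
# Minimal sketch: sending a reference image to ChatGPT and asking for a prompt in the trained style.
# Assumes the OpenAI Python client; "gpt-4o", the image URL, and the trigger word are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": ("I am training a style called frst01. Use this image as a reference "
                      "and write one image prompt that starts with: create an avatar in frst01 style.")},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/reference.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```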
7. 06 avatar and style combination: Now we have trained a portrait and we have trained a style. So the next step is
to combine them. So for that, we go to the character, I think, yeah, we saved it here. So we have the trigger word and the ID. So I'm going to copy the ID here. And I'm going to click on Add item. I'm going to edit here. So now we have on top the
style and below it the portrait. And yeah, let's just copy this trigger word
in case we need it. But from my experience,
if you say, create an avatar in the style, it would be the same as
if I just copy this. So, the portrait in this style. Let's just do a really small adjustment, and let's just see how it works. I'm going to click Run and then see if we need to
change the scale. Yeah, as you can see, now we have the portrait in the
style that we trained it. I'm going to save this,
but I think it is better to put the scale of
the portrait to 1.2. I think it will
look more similar. Yeah, you can see that
it looks similar, and it is the same kind of artwork because the
seed is the same. Yeah, as you can see, the
new one is more similar to the real character
because the scale is 1.2. Maybe in this case, you
don't recognize it, but if you use your own
pictures, you will understand. Now you can change the seed
and get different variations. Yeah, this is nice, as well. So yeah, at this point, what I'm going to do is
just get different images, maybe 20, and I can try
in different sizes. Maybe it is better to
go with Landscape. I'll click Run. And I'll get some of them
with this prompt. Some of them I'm going to
test with another prompt, so I can give another
reference here and create a new prompt and yeah, create around 20 images, so then we can animate them. Okay, since these look
a little bit static, I'm going to say here, can you add some
action to the avatar? I just prompt. And yeah, let's see. Yeah. And as you can see, it shows
the avatar is in mid action, so it gives another direction. So I'm going to copy this. Yeah, I'm just going to remove this. So we have the name of the portrait and
the name of the style. I think it's better
to have it like this. And let's click Run. Okay, now I'm going to move on to another reference. And let's see what we get. So let's just replace this. Okay. As you can see, now we have the influence of
this image with the horns. Again, I'm going to
give some direction. Okay. Yeah, right here, I think the influence is too big because we have long hair, and I don't want that, so
let's just try something else. Okay, we have two options here, so we can test them
both. This is nice. As I said in the beginning, it's more about testing and tweaking until you get the
results that you like. Okay, I think this looks
less like the real person, so I'm going to move
to something else. Let's test this one as well. Yeah, I'm going to test the other one since
I don't like this. Yeah, again, this one
is also full body, so I'm going to move
on to something else. Okay, let's test this one now. Okay, I think I'm going
to stop at this point. And if I need more images, I can come back
later here again.
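To recap the combination step in script form, here is a rough sketch of stacking the two LoRAs with different scales, again assuming the fal.ai Flux LoRA endpoint; the paths, trigger words, and argument names are placeholders and assumptions:

```python
# Rough sketch: combining the portrait LoRA and the style LoRA in one generation.
# Endpoint and argument names are assumptions about the fal.ai Flux LoRA API; paths are placeholders.
import fal_client

result = fal_client.subscribe(
    "fal-ai/flux-lora",
    arguments={
        "prompt": "myavatar01 in frst01 style, mid action, cinematic lighting",
        "loras": [
            {"path": "https://.../portrait_lora.safetensors", "scale": 1.2},  # the person
            {"path": "https://.../style_lora.safetensors", "scale": 1.0},     # the trained style
        ],
        "image_size": "landscape_4_3",
        "seed": 7,  # keep it fixed to compare scale tweaks on the same artwork
    },
)
print(result["images"][0]["url"])
```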
8. 07 adjust existing images: Now before we move to
the animation part, just wanted to
show you something quick that can be
helpful to you. And this is inpainting. So I'm going to click here. And what you can do here
is edit existing pictures. For example, I can pick this one and here I can do some adjustments
on a specific part. Let's say I don't like
how the face looks, so I can just paint it. Click Use Mask. Here on the item, I'm going to add the ID of the portrait. And now on the prompt, maybe I can just add the trigger
word and just click Run. Yeah, as you can see,
the face changed. And for example, if
you like this one, you can just download it. You can change the
prompt or also click Run again to do
different variations. So yeah, that's how it works if you like
the image overall, but there is something specific
that you want to change. You come to inpainting. You add the image that
you want to change, you paint the part that
you want to change, and then adjust the prompt, add the path/ID of the LoRA, and click Run. I don't like this one, so I
can just click Run again and see until I get the
result that I like.
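If you ever want to prepare a mask outside the web tool, it is just a black-and-white image where the painted area is what gets regenerated. Here is a small sketch with Pillow; the image size and ellipse coordinates are placeholders for your own picture:

```python
# Small sketch: building an inpainting mask with Pillow.
# White = the region to repaint (for example, the face), black = keep as is.
from PIL import Image, ImageDraw

width, height = 1024, 1024
mask = Image.new("L", (width, height), 0)       # start fully black (keep everything)
draw = ImageDraw.Draw(mask)
draw.ellipse((380, 220, 640, 520), fill=255)    # paint the face area white (repaint this)
mask.save("face_mask.png")
```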
9. 08 AI Animation: Now we go to another cool step, which is to animate the images. And for that, as I said
before, I use Runway. Okay, let's go to the generative session, the video one. And here we can select the
image. I'm going to pick one. Once you pick the image, you can also add keyframes. For example, if you want
to start like this and, let's say, end like this. Yeah, it would transition. You can also add another keyframe. Yeah, let's just test this for fun. And then you can also do a
description, for example, for the camera or for any certain part of the face to move or
move the background, anything that you can imagine. And of course, you also
have a guide here. And examples that you can test yourself. These are very helpful in terms of using the
camera, for example. Like, if you want to
have dynamic motion, you can just click here and it would create a
description for you, as you can see right here,
or just click fast motion, and yeah, you can see the words here. So yeah, that's
something that you can check for yourself since it's just about reading
the instructions. The same thing here; I would suggest you take a look at this. But mostly, from my experience, it's just testing things
out and see what you get. Okay, I'll do the first test
without any description. I'll just click Generate. Okay, this is done now, and, of course, I
put it to 5 seconds. For now, it has 5-second and 10-second options. Let's click Play and
see what happens. Yeah, as you can see, it does the transition from one
frame to the other. And personally, I wouldn't
really use this one. I would just go with
one image step by step. And also, yeah, I'm
going to delete this, so I have just one image. And here you have two options, actually three, but I use Gen-3 Alpha and Gen-3 Alpha Turbo. This one is faster, and I've
received very good results. There is something that
happens with the image. It creates a lot of highlights. As you can compare here, a lot of highlights
and contrast. While with this one,
it takes longer, and the image stays more. The video stays more
like the image. So yeah, I'm just going to
test it so you can see. Okay, going with Alpha, I'm going to go to credit mode, so it will generate faster. I'm going to leave
it to 5 seconds. Okay, let's wait. And yeah, this was done without
any instructions, and I try this a lot
because I just leave it to AI to do whatever it wants. But of course, I can also say camera zooms out and click
Generate to see what happens. Let's play. And yeah, as you can see, we got the
instruction, camera zooms out. And yeah, this is what we get. I can also try blinking, closing eyes, looking up, and be ready for a lot of testing because maybe it doesn't get the
instructions right, but I'm just testing now. So let's generate
again and see what we get. Okay, let's see. Yeah, we have the
camera zoom out, but yeah, we don't
see any movement. So in this case, I can click Generate again
and see what happens, or I can add fast
motion as a word. Usually this helps. Okay,
let's try this one. Yeah, as you can see,
this one has movements, but it's kind of strange. And that is because
of the fast motion. As you can see, yeah,
we have the blinking, also the looking up part, but the fire is yeah, too fast. And yeah, of course,
that depends on what you are looking for. But overall, this
is how it works. And also, I'm going to
test the same thing now with the turbo version
and see the difference. Usually this one, not usually, but always it generates faster. As you can see, we have this
kind of highlight look. And even with the fast motion, we didn't see any big movements. So again, in this case, I can click Generate again
and just see what happens. This is why I did an unlimited subscription
because with the credits, you can test, but
it doesn't work, so you have to test
again, and again, you are never sure of how many testing you need to do until you get the
result that you like. Let's see what we have here. Okay. As you can see, with the same description, we
can get different results. So here the camera moves
a little bit faster. And yeah, let's try
something else. I'm going to delete
this fast motion. Let's just click Generate. Yeah, this one looks nice
because of the shadow, so it's a cool version. I'm going to also
test another one where I delete the description, so I just leave it to the AI
to see what it generates. Yeah, this is also nice. So again, as I said,
you have two options, the Turbo one and Gen-3 Alpha, and it's up to you to
see what you like more. You can try the technique by adding frames to get
a certain animation. You can try creating
a description. You can also use
the examples that Runway gives you, like cinematic drone, close up. This is more about
the camera movement. Also, you can read
this one to have a better idea of how
you can use everything. But again, nothing
is 100% precise. So yeah, at this point, I'm just going to keep adding images and generate until I get maybe five or ten
videos that I like. So I can create a short clip.
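Runway also has a developer API, so the same image-to-video step can be sketched in code. This assumes their official Python SDK; the model name, parameters, and image URL are assumptions and placeholders to check against the current Runway docs:

```python
# Rough sketch: image-to-video with Runway's Python SDK (pip install runwayml,
# RUNWAYML_API_SECRET in the environment). Model name, ratio, and parameters are
# assumptions to verify against the current Runway API docs.
import time
from runwayml import RunwayML

client = RunwayML()

task = client.image_to_video.create(
    model="gen3a_turbo",                                   # the Turbo option from the walkthrough
    prompt_image="https://example.com/avatar_frame.png",   # placeholder image URL
    prompt_text="camera zooms out, blinking, looking up, fast motion",
    duration=5,                                            # 5 or 10 seconds
    ratio="1280:768",
)

# Poll until the video is ready.
while True:
    status = client.tasks.retrieve(task.id)
    if status.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)
print(status)
```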
10. 09 Lipsyncing: Okay, now that I have
some videos ready, I'm going to move
to the next step, which is lip syncing. And for that, what I need
to do is first of all, have a video of a
character speaking, then also have an
image or a video of the AI character
that we are creating, and we are just going
to put them together, so we put the face
animation to our character. For that, I'm going to open my phone and I'm just going
to record myself speaking. I'll just say something
about the course. Yeah, just something random. Okay, so now I'm
recording from my phone, and I'm going to start
saying something. I need to make sure that
I am in the center, looking at the
camera straight and try not to make a lot
of head movements. Hello, everyone, and
welcome to another reality. What you see now
is done using AI. I'm going to show
you how to create also your characters
and animate it as well. Okay, so I stopped recording, and I'm just going
to get that into Premiere or After Effects just so I can also sync it with the microphone that I'm using on my computer
and then do the lip syncing.
11. 10 lipsync execution: Okay, now I'm in After Effects, and I just did some cutting, so I cut the pauses, yeah, just to have smooth speech, so I can then
attach it to the AI. And yeah, this is
what I have now. Hello, everyone, and
welcome to another reality. What you see now
is done using AI, and I'm going to show
you how to create also your characters and
animate it, as well. Okay, so I'm just going
to put this as one clip, and actually, then I'm going to divide it into three parts, because each animation that we have is 5 seconds. You can also make it ten, but I made it five, and well, this one is around, yeah, 11 seconds. So this is why I want to divide it into three parts, and this way, I can attach each part to any character
animation that I have. Hello, everyone, and
welcome to another reality. Okay, I'm going to precompose this. And now I'm going to
export them one by one with Ctrl+M. I'm going to export them as MP4. I'm going to make the frame rate 25. Okay? And I'll do the same thing for the rest.
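As an alternative to cutting and exporting each part by hand, the same split into roughly 5-second MP4s at 25 fps can be done with ffmpeg. A small sketch, assuming ffmpeg is installed and the exported recording is called speech.mp4 (a placeholder name):

```python
# Small sketch: splitting the cleaned speech recording into ~5-second MP4 parts at 25 fps.
# Assumes ffmpeg is on the PATH; "speech.mp4" is a placeholder file name.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "speech.mp4",
    "-r", "25",                                     # force 25 fps like the manual export
    "-c:v", "libx264", "-c:a", "aac",
    "-force_key_frames", "expr:gte(t,n_forced*5)",  # keyframe every 5 s so cuts land cleanly
    "-f", "segment", "-segment_time", "5",          # write consecutive ~5-second segments
    "-reset_timestamps", "1",
    "part_%02d.mp4",
], check=True)
```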
I'm going to save this project just in case, and now we move to the Runway app, and let's go back. I'm going to click
here on Act-One. And here we have two tabs. One is the character
and one is the video. So for the character, yeah, we can use
any of the videos. Let's try this one. And now
let's add the lip syncing. Let's start with the first one, and I'll just click Generate. Okay, let's see what we have. Hello, everyone, and
welcome to another reality. Yeah. Hello. Welcome
to another reality. Hello, everyone. Something that you can also adjust is the
motion intensity. If you want it less intense, you can make this
one, for example, and let's click Generate. And of course, you can also keep generating until you
get what you like because the problem here usually
is with the teeth, because you don't have a reference of how the teeth would look, so you get random teeth, and maybe sometimes
you don't like it, so you can generate again. But as a technique,
it works very well. Hello, everyone, and
welcome to another reality. Hello, everyone, and
welcome to another reality. As you can see,
this is different. Hello, everyone, and
welcome to another Hello, everyone, and
welcome to another. As you can see, we have reality. Welcome to another that is also and welcome to
another reality. As you can see, we also
have Hello thinking. And yeah, this is how it works. I'm going to download this and also try it with other videos. Okay. After I did some testing, I think I have three
final animations. Show you how to create also your characters and animate it as well. And I'm going to show you how to create. On some of them, it didn't work. What you see now is done using AI. What you see now is done using AI. And yeah, now I can
combine them together.
12. 11 post production: Now we are in After Effects, and here, or in Premiere Pro, is where you are going
to put all your clips. But I don't want to
make this a course about After Effects
or Premiere Pro, so I just want to show
you a few tricks, and then it's up to you how you can combine them together, how you can adjust them
to the sound and so on. So yeah, that's a relative thing; it's up to you. But yeah, let's
open a few clips. Okay, so I'm going to open
two clips that are the same, but they just differ in the lip syncing. I'm going to decrease the
opacity just so we can. What you see now
is done using AI. What you see now is
that one is animated, the other one is not. And what I've noticed is that when you do
the lip syncing, the animation gets
kind of blurry. I don't know, if
you can see, yeah. This one is without the lip sync and it is sharper. So what I do in this case is, I put the sharp one below, and let's rename this
and I put this on top. And what I can do is just
create a mask so I can go with this ellipse tool and just yeah, select the part of the lips. And if I click Play, what you see now
is done using AI. Yeah, you can see that
it doesn't look good, so we need to adjust it. We go to the mask. First of all, I increase the mask
feather, so yeah, this way it gets blended better, and then you need to
adjust the position. I click on the mask path, and let's go somewhere here. I can adjust it in different frames.
Let's click Play now. What you see now
is done using AI. Okay, so we see some, this one right here
needs to be adjusted. So what you can do is
just, for example, go on this frame and, yeah, adjust this or
adjust the feather. Okay. I did some adjustments, just adjusting the
position, the feather. What you see now
is done using AI. And yeah, I think it is
better than just keeping the lip syncing because it
gets a little bit blurry. But of course, there are
also AI tools like Topaz that can be used to enhance the video quality, but that's just a small trick that I use in After Effects. And then after you do this, you can just do an
adjustment layer and adjust the colors. For example, color balance. Let's double click. And, yeah, it's up to you to adjust
the colors as you like, adjust the clips based
on a sound that you add, and create a short clip like the examples that I showed
you in the beginning.
13. Outro: Okay, so you made it to
the end of this course. And first of all, I want to say, thank you for watching, and I hope you have
found this helpful. And secondly, I want to
share some final thoughts. I would like to advise you: if you'd like to
pursue this kind of career and you want to learn
more about AI animations, make sure to not stop here. From my experience,
I've started to learn AI tools since
two years ago, I think. And whenever I started
to learn something new, after I learned it, there was always something else
that was developed. And when I learned
that new thing, there was then another thing
that was improved and so on. So considering the pace of how fast this industry
is developing, I want to advise
you to always be updated with the new
tools that are out there. For example, the animated lip syncing that you see now on Runway was released a few weeks ago, and maybe after a few weeks, you'll have more camera control. Maybe after a few months, you'll have better image
generation in Flux LoRA. So always look out for these things. And besides that, look for ways to be creative and
improve your workflow. What I mean by that is that there is always a new way to use AI. The tools are there. Of course, you need to
be updated and you need to improve your skills as the
tools improve themselves. But something else
very important is also for you to be creative
on how to use them. Maybe you can use
them to create clips, maybe you can use them to create advertisements or anything
else you can be creative with. So yeah, this is what I wanted to share with you. I hope you found this helpful. And, of course,
for any questions, you can reach out
to my social media through email or any
contact that you have. And yeah, I'll see you
in the next videos. Bye.