Transcripts
1. Introduction: Have you ever wanted to create stunning digital art, cards, images, and videos without needing any advanced design skills? Then this course is for you. Maybe you have tried Photoshop, Illustrator, Corel, Canva, or Procreate, but the process is too complex and really time consuming, right? Or perhaps you are an artist, a designer, or an entrepreneur looking to monetize your creativity by selling stickers on Etsy or on Redbubble. Well, what if I told you that you can create all these images and videos with just one click, for yourself and for your business, and even an influencer that is totally AI generated, with the power of a tool known as ComfyUI, and it is totally free, running on your own computer. Welcome to ComfyUI and Flux: Advanced Generative AI for Digital Artists. In this course, I will teach you how to install ComfyUI, understand the workflow, understand the interface, and generate stunning digital art and stickers: high quality images, high quality vector-style graphics, and even AI influencers, and many more things within this ComfyUI tool. Hi, I am Karan, a specialist with ten years of experience in graphic design. I have worked with corporate clients and freelanced as a content creator for big brands to help them streamline the creative process. When I first started exploring AI generated art, I realized many people were struggling to get high quality images: either the AI images looked messy, or it was too difficult to get the final look. That's when I discovered ComfyUI. Now I'm here to teach you how you can use it like a pro. It is a project based course where you will learn how to create your own workflow; use that workflow to generate high quality images, high quality vector-style graphics, and high quality stickers that are ready to use or sell; work on a project where you use a ControlNet to control the look of your image; and train your own LoRA to create a consistent character. By the end of this course, you will have your own portfolio that you can sell, showcase, and use for your own brand. This course is for digital artists, designers, freelancers, entrepreneurs, content creators, and marketers; no prior AI design experience is needed, I just need your curiosity to learn this amazing tool. So it is for everyone, from beginner to advanced. This course gets regularly updated. You can share your project with me in the project gallery; I will always give you feedback and share tips. Are you ready to level up your AI skills and start creating high quality digital assets? Let's dive in. See you in the course.
2. How to install ComfyUI: First I'm going to teach you how to install ComfyUI. Go to Google and search for "ComfyUI install"; that's all you have to type. Look for the result "ComfyUI, the most powerful..." and click it. It will direct you to the GitHub page, and you can download it by scrolling down to the Direct Download button you can see here. Click it, and it will automatically start downloading for you. I have already downloaded it, so I don't have to download it again, but I will show you the steps, what it will look like, and how to install it correctly. The first thing you have to understand, and must have, is an NVIDIA graphics card. If you don't have an NVIDIA graphics card, it will run on your CPU, which will take way more time, and you will lose all interest in using ComfyUI. The download is a 7-Zip archive, so you have to extract it before you run it. First, download 7-Zip for your machine and install it. After downloading the archive, you have to extract that file, and I will show you how. As you can see on your screen, the file will look like this; this is the file we downloaded. It is a 1.5 GB file, and you have to right click it, but remember you must install 7-Zip first, like I told you. So right click, choose 7-Zip, then Open archive, Extract files, or Extract here. I will choose "Extract to" the ComfyUI portable folder, so it will generate a folder for you and automatically extract all the files into that folder. Double click that folder, and you only have to run the NVIDIA GPU launcher (run_nvidia_gpu). Double click it. The very first time you double click it, a command window will open and it will download all the required components first. This can take around one to two hours, so a fast Internet connection will save you so much time. This process is for the first time only. After that, it will open a window for you directly; that is ComfyUI. You don't have to worry about this thing: it looks like a cluster, or a web, but you don't have to be scared of it. Don't worry, I'm here to explain every part of it. At first, there may be no workflow in the ComfyUI interface. What you have to do is select Load Default, and it will automatically load a default node system for you. You can see Load Checkpoint, Empty Latent Image, CLIP Text Encode, KSampler, VAE Decode, and Save Image. Right now we are using only the default system. First, I have to explain everything from scratch; after that, you will understand all the nodes: what is a checkpoint, what is Flux, what is CLIP, what is a sampler, what is a VAE. I will explain all of these things in this course. Obviously, I will also teach you how to prompt. Right now, we are simply going to type a simple prompt: "flower garden, beautiful girl standing". Click Queue Prompt, and it will take hardly a few seconds to work. So ComfyUI has generated a basic image for us. Let's change the size of our image: right now it is 512 x 512, which is fairly low resolution, so I'm going to use 1024 x 1024. Okay, and queue the prompt again. As you can see, it has generated the image using the basic tools we have right now.
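For reference, here is a minimal sketch of that default text-to-image graph in the "API format" JSON that ComfyUI can export, written as a Python dict. The node IDs are arbitrary, and the checkpoint file name is a placeholder; use whatever model you actually have installed.

```python
# A minimal sketch of ComfyUI's default text-to-image graph in API format.
# Each node's inputs reference another node as ["node_id", output_index].
default_workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},  # placeholder file name
    "2": {"class_type": "CLIPTextEncode",  # positive prompt
          "inputs": {"clip": ["1", 1], "text": "flower garden, beautiful girl standing"}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"clip": ["1", 1], "text": "watermark"}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20, "cfg": 8.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
}
```

This is exactly the node chain listed above: checkpoint into two text encoders and the KSampler, empty latent into the KSampler, and the result decoded by the VAE and saved.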
3. Important information to note about the first section: Hey, everyone. Welcome to the course. I'm so glad you are here to learn creating art with ComfyUI. In this first section, all you need to do is just watch. This part is to help you get comfortable with the ComfyUI tools and see how things work. Just relax and watch it like a movie; no need to do any hands-on work yet. This first section takes about 30 minutes to watch, so sit back and get familiar with everything. When we get to the second section, that's when you will start trying things out yourself and making your own creative AI art. So let's start by watching this first part together.
4. How to use ComfyUI: So we are getting started with ComfyUI. In the previous lecture we learned how to install ComfyUI; in this video we are going to see how to use it, and we will talk about the important pieces that ComfyUI ships with and how they work. As we downloaded ComfyUI before, we have all the files that ComfyUI has already installed. So what happens the next time you open ComfyUI? There are two options: run_cpu or run_nvidia_gpu. As I've already mentioned in this course, you should have an NVIDIA graphics card, because it will make your work much easier. So just double click run_nvidia_gpu; the command prompt will open for you, and this time it will not take very long. It will check the basic information about the machine we are using, and it will automatically open your default browser with a URL, something like this, and it is working locally on your machine. You have already learned to install ComfyUI and reach this window; now it's time to turn your ideas into reality. So how did I get this window? Since you have already installed ComfyUI, you only have to double click that launcher again, and ComfyUI will open a new window in your default browser; it could be Google Chrome or Microsoft Edge. It's working 100% locally; no Internet is used right now, because there is no web address you can see in your search bar. There are two kinds of window you might see on your first attempt to open ComfyUI: either a blank screen, or the load default screen. If you don't see anything here, you simply have to click the Load Default menu. With the default workflow, ComfyUI will generate a beautiful image from the prompt "beautiful scenery nature glass bottle landscape, purple galaxy bottle". Just click Queue Prompt, and ComfyUI will work automatically. This is the first image that you have generated right now; this is your first image, made with ComfyUI, just like the creatives who use ComfyUI create wonderful images with it. Whenever you click again, it will generate an image again, but this time it will be different because of the seed. We have a seed here, with control-after-generate options: randomize or fixed. If you make it fixed, it will generate the same image for you again and again. You only have to make it randomized if you want different results every time.
5. Create a basic workflow and understand the node system within ComfyUI: Let's get back to a blank canvas; right now, just click Clear. Okay, now we have a blank canvas again and we don't know what to do. You only have to double click: a search window pops up, and you can search for KSampler here, under ComfyUI core. Click it, and a node is generated. You can think of this node as the brain, or the heart, of a ComfyUI node system. Next you have to add a model: double click and add Load Checkpoint, which will act as the model. You only have to drag from the model slot and it will connect to the model. Now double click again and add a CLIP Text Encode node; this will be the text we feed into our image. Double click again, or simply press Ctrl+C and Ctrl+V to copy and paste it, and place it anywhere in the canvas; it's up to you. Then connect the second one's conditioning output to the negative input. If you want to resize a node, you only have to drag its corner like this. If you want to delete it, right click and choose Remove. If you mistakenly delete your node, you only have to press Ctrl+Z and it will come back again. There is another way to add a node: just right click, then Add Node, sampling, KSampler; you can search through this vast variety of nodes. Or, if you want yet another way, and this is my favorite because it is actually the easiest: you just have to drag out from a node's slot and release it on the blank canvas. It will show you the options for which node can connect to that slot. So you only have to add an Empty Latent Image here; it will be automatically connected for you. So for our heart, the KSampler, we already have a brain, which we call the checkpoint or model; the text encoders are like its eyes, through which the machine sees our text; and then there is the latent image. Later on, we will use a reference image there, or we can add a reference image to help our prompt and KSampler understand which type of image we want, connect it with the checkpoint, and get a desired result. Now, on the other side of the KSampler, we have a latent output: drag it out and you can see VAE Decode. There is another output on the VAE Decode. First, let's minimize this sampler: you just have to click this dot, and it will automatically minimize its window; if you want to open it again, you only have to click the dot again. This time we have an image output, so it will connect to a Preview Image node; this is where we will see a preview of the image. So let's add a Preview Image node. Now it is a complete workflow; you only have to add text. Let's see if we can get an image: for the positive prompt I have added "bottle", and for the negative, "watermark". Let's click Queue Prompt and see if we have any errors here. This is important for you to understand: if we have a problem in our workflow, ComfyUI will automatically detect that problem and tell you that we have to add another node to complete the workflow. Or, if there is any problem, it will tell you right away what the problem is; you only have to read what the problem is and rectify it. Here it says the prompt outputs failed validation: the CLIP Text Encode node has a required input missing. So we have to connect the clip input of the CLIP Text Encode node. Right now you have to check the colors here: we have a VAE slot color matching the VAE Decode here, and the problem nodes have turned gray. So we connect the VAE to the VAE Decode, and then another problem pops up: I rectified one problem and another appeared. We have two "required input is missing: clip" errors, so we have to check the clip slot and see where it should be connected, and it goes here, and here. Now let's queue the prompt again. Congratulations, everyone; now you have mastered it. Resize this image to 1024 x 1024, queue the prompt, then click the result and drag it out. Now you can see the clarity in this image: it has dimensions of 1024 pixels by 1024 pixels, which is square, and you can now post it to social media. This is a complete workflow you have generated right now. Press and hold Ctrl, then click and drag with the left mouse button on the canvas: you can select all the nodes here, or delete them. Right now, I'm not going to delete them. By scrolling up and down, or clicking the scroll wheel, you can drag the canvas as you like, and you can Ctrl+click nodes to add them to the selection one by one. The interface is really, really simple, and if you know how to use basic Photoshop, you can understand how to use the nodes. Otherwise, if you have not used Photoshop before, there will be no problem, because your friend here will tell you everything about ComfyUI. You don't have to go anywhere else; you only have to watch the course. And one more request: please post your first project image with me. I know you are very creative, and I want you to generate a first image, the very first image you generate using ComfyUI, and post it in the projects, or, if you want, share it with me directly on Instagram as well. It will motivate me to make more videos for you. Now let's generate a lion: "cute baby lion", queue the prompt, and let's see what we have here. Here is the cute lion we have generated. Now press Ctrl, select these three nodes, Ctrl+C to copy, and Ctrl+Shift+V to paste them here. Wherever your cursor is, they will be pasted there, and the nodes will be connected automatically. If you press only Ctrl+V, without Shift, the three nodes we selected will be pasted without connections. Why have we done this? You will understand better as we use ComfyUI.
6. ComfyUI interface and group generation: Now, if you want to move nodes, you can move them by pressing the Shift key, and you can move them as a group. You can also create an actual group: select your nodes by pressing Ctrl and clicking, then right click anywhere on the canvas. (This browser context menu issue is really irritating for Chrome users; if you don't want it to come up every time you click, you could use a different default browser, such as Microsoft Edge.) From that right-click menu, you need to Add Group, then give it a name, for example "sample". Now you have created a group here, as you can see. Whenever you move the group, all the nodes within the group move along with it, so you don't have to move them one by one anymore. Now let's come to another part: let's add a latent image here, like the Empty Latent Image we added for the first KSampler. For this second KSampler we need a latent input too, so let's double click and add an Upscale Latent node; add it here, with its scale method, width, and height. Now we have added the upscale latent node, and we're going to upscale all the way to 1024 x 1024. That's okay. And one last thing we have to do is reduce the denoise to 0.2, because whenever it upscales our image, it would otherwise create another variation of that image, and we don't want that to happen. So we simply reduce the denoise here. With the denoise reduced, whenever we upscale our image it will not change the image very much; it will only make a minor tweak for us. Now let's queue our prompt again. After the first generation, it starts working on our latent image. Let's compare both images that we have generated using the Juggernaut safetensors checkpoint, based on SD 1.5. Okay.
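For reference, here is a hedged sketch of that upscale pass in API format. It extends the earlier default-workflow sketch: it takes the first KSampler's latent (node "5"), upscales it, then re-samples with a low denoise so the image only gets a minor tweak rather than a new variation.

```python
# Second-pass latent upscale, continuing the node IDs from the earlier sketch.
upscale_pass = {
    "8": {"class_type": "LatentUpscale",
          "inputs": {"samples": ["5", 0], "upscale_method": "nearest-exact",
                     "width": 1024, "height": 1024, "crop": "disabled"}},
    "9": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["8", 0], "seed": 42, "steps": 20, "cfg": 8.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "denoise": 0.2}},  # low denoise = minor tweak, no new variation
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["9", 0], "vae": ["1", 2]}},
    "11": {"class_type": "SaveImage",
           "inputs": {"images": ["10", 0], "filename_prefix": "upscaled"}},
}
```

The denoise value is the key design choice here: at 1.0 the second sampler would repaint the image from scratch; at 0.2 it mostly sharpens what the first pass produced.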
7. Save nodes as a template: If you want to save nodes as a template, you just need to select them by pressing Ctrl and clicking, then right click on the canvas and select "Save Selected as Template"; give it a name, like "image scale". And when you want to use that template, you simply right click and go to Node Templates, then "image scale". We have added it; this here is our template.
8. Canvas interface change: Now, as you can see, we don't have space around here, so how do we move our canvas? You simply need to hold Space and move your cursor. Now you can move your canvas anywhere around; release the Space key, and your cursor will automatically work as usual again. Press and hold the Space key again, and you can see it is a really easy interface to learn. So you don't have to hunt for free space if your canvas is full of nodes; you simply press and hold Space. As you can see, the canvas can look really messy, so you only have to click the settings here, where you can change dark mode to light mode, change the edit-attention behavior, and change the link render mode from spline to linear or straight. Now you can see all the lines have been changed to straight, and it looks really nice and easy to use. With the light theme, you can see it looks very nice too.
9. ComfyUI Manager: Let's talk about ComfyUI Manager. In previous versions of ComfyUI, no manager was installed automatically; you had to install it manually. Now, ComfyUI Manager is installed automatically for you in all the recent updates and in the latest version of ComfyUI. So select Manager, and you can see which nodes you want to select and deselect, or which nodes you want to install, from within ComfyUI. It helps you manage all the nodes here. Whenever you install or use a workflow, missing nodes show up in a pink style, which means the node is not available, or not installed, on your system. You need to install them using the Manager, and here is how you can install them: you simply need to use Install Missing Custom Nodes. Click it, and it will show you all the nodes that appeared in that bright pink color on the ComfyUI canvas. All the nodes that are not installed will appear here, and you can select them and install them one by one, or install all of them at once from this window. And if there is a workflow you have loaded into your ComfyUI that has a missing node which does not appear here, then you have to find that node on GitHub or on the Hugging Face website. Like I have told you, whenever you try to queue a prompt and there is an error message, you can copy that message and paste it into Google, and it will redirect you to that missing node. Once you have the experience of installing your first node, you will automatically understand how to install other nodes that are not available in ComfyUI. Or, we can say, these are extensions which you install into ComfyUI; in the upcoming lectures I will show them. After installing the nodes that were missing from your ComfyUI, you need to restart ComfyUI, and it is very easy: in the Manager section, click Restart, that's all, and a new window will pop up. Now let's get into another part.
10. Install custom nodes: Let's get to another part: select Manager and then Install Custom Nodes. You will see there are thousands of nodes available for ComfyUI; right now, there are 1404 custom nodes available. Let's try to install one of the nodes: install it, a "restart required" message appears, and you simply need to restart ComfyUI. And if you want to install more than one node, you simply select them, install them together, and restart.
11. Workflows in ComfyUI: You can change your node colors here: right click, choose Colors, and give a node the right color. You can see that color has been highlighted here; you can give it your own color, so let's give it a green color. You can change the shape here too: box or card, and default or custom colors; you can select any color you want. Remember, the pink color shows up whenever there is a missing node, whenever you try to install or use a workflow created by someone else. Colors help me identify things: for example, I give a green color to our text prompt, and it helps me differentiate my nodes and speed up my workflow, so I know where to find my nodes and how to tell them apart. These are really, really simple node graphs right now, but whenever we have a cluster of nodes, we need to identify them as groups and also by colors, and it will help you in your future projects. If you want to use a ready-made template from another creator, you can find them on OpenArt or on Civitai. You just need to load a workflow that has been saved by other people: select it and open it. This is a workflow that, as you can see, has been created by someone else, and you can use those workflows for yourself.
12. Recent update in ComfyUI: There is a recent update in ComfyUI; it gets updated regularly. There is a navigation menu you can see here: zoom in, zoom out, reset view, select mode, and toggle link visibility to hide the links of your node system. It is a minor update, so we don't have to worry about anything; I just have to show you the updates.
13. Load checkpoints and trigger words: So let's go to civitai.com. It is important to know that this website is actually safe for you to use. Whenever you open this website, you will see many images with checkpoints, LoRAs, and presets; there are so many presets, LoRAs, and checkpoints available on the Civitai website. Here is a cute AI image I can see, and there is a prompt attached to the picture describing exactly the scene in the image; this is a prompt that someone wrote and uploaded along with the image. And we also get the workflow details: what type of nodes they used, which guidance value, how many steps, the sampler, and the seed information as well. To get this exact same result, you need all this information in ComfyUI. First, let's understand what a workflow is. This is our workflow: all the pieces, like I have told you, the CLIP Text Encode, the KSampler (the brain), the VAE Decode, and Save Image. This is our checkpoint; there are many checkpoints here, like DreamShaper, SD 1.5, and Flux, and I'm going to load a new checkpoint. When we queue a prompt, you can see a green mark here: that green mark moves from node to node to node, and the image is generated here. Whenever you see a red or pink mark, there is a possibility that the checkpoint is not available; there might be some error that ComfyUI has flagged, and you have to remove that error. Let's change to the Flux Dev safetensors checkpoint. As you can see: red dot, pink line, and it has already flagged an error, so we have an error in our workflow, and we have to remove it. Let's change the Load Checkpoint again. Now I'm going to download another checkpoint from Civitai: Wildcard Fantasy, an SDXL checkpoint. Alongside it we have a LoRA, the Extreme Detail LoRA, and the checkpoint itself; here is the checkpoint. I've opened an image and tried to use the checkpoint in our workflow; as you can see, this checkpoint makes fantasy art. But if you want to download another checkpoint of your preference, you can find it here under Models, Checkpoints. Let's open this fantasy wizard one: it is actually a LoRA, and it has the base model Flux.1 D, but I'm checking for SD/SDXL, highest rated. You should check for highest rated or most downloaded, because those are safer to download and more stable. So let's sort by highest rated or most downloaded, select an SD 1.5 checkpoint, and filter by category: vehicle, clothing, objects, it's up to you. Let's download this checkpoint, and you have to download it into the ComfyUI folder. Go to the folder where you downloaded ComfyUI: open ComfyUI, then models; LoRAs go in the loras folder, and checkpoints go in the checkpoints folder. It will download for you, and the checkpoints have different sizes, like 5 GB or 6 GB; this one is about 2 GB. And one thing to remember: it has trigger words, which should be added to the text prompt in your workflow. Copy this information, and let's create a doc for this information. Okay, so we will use this information; you will get this link in your video description. Let's copy all this information.
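To keep the paths straight, here is the folder layout described above, written out as a reference sketch (relative to the portable ComfyUI folder):

```python
# Where downloaded model files go inside the ComfyUI installation
# (paths relative to the ComfyUI_windows_portable/ComfyUI folder):
#
#   models/checkpoints/   <- full checkpoints (.safetensors) from Civitai etc.
#   models/loras/         <- LoRA files
#   models/vae/           <- standalone VAE files
#
# After copying a file in, click Refresh in ComfyUI (or restart it)
# so the new checkpoint shows up in the Load Checkpoint node's list.
```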
14. Learn in depth about some terms: So we have copied the information, and the checkpoint has already been downloaded into the folder we specified, the ComfyUI models/checkpoints folder. Now let's go to our ComfyUI window and refresh it. Now let's check: here is another checkpoint in Load Checkpoint; you can see it now, so select it and copy the information. These trigger words help the ComfyUI software, the machine learning model, understand that this checkpoint holds this particular style, and that you want to load that particular style using this text prompt. "Iron Man poster, fighting Batman." I hope this will generate something amazing. Hmm, and let's change the dimensions, and again, 40 steps, a single figure, and queue the prompt again. Okay, so it is actually creating an Iron Man movie-poster image, but not fighting Batman; rather, Iron Man mixed with Batman, it's kind of that. Let's use the example image directly: put its exact prompt into our positive prompt and its negative prompt into our negative. The steps they used are 99, so: negative prompt in the negative, positive prompt in the positive, steps set to 99, and generate; Queue Prompt. It's getting somewhere; not 100% like the example, but yes, getting somewhere. I hope you understand how this Load Checkpoint works. I simply downloaded a checkpoint, any checkpoint, put it in the folder like I have specified, added a positive prompt, a negative prompt, and an Empty Latent Image, like we discussed in the previous lectures; there is a KSampler, a VAE, and Save Image. This is the basic workflow that ComfyUI already has as its default. In the upcoming lectures we are going to expand our horizons, and we'll see how all these things work one by one. Right now, we have loaded a checkpoint. It is important for you to visit the civitai.com website and check different LoRAs: how these LoRAs work, what type of prompt they use with the loaded checkpoint, and which trigger words they use. Create a document so that you will not get lost: the next time you use a trigger word or prompt, you will have it saved in a document for your future reference, and trust me, it will help you. Like this one: let's open it again; they have used the Flux Dev model with a luminous shadowscape neon retro-wave LoRA. We will use these types of images, and we will generate these types of advanced images in our upcoming lectures.
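As a tiny sketch of the habit described above, you could keep your trigger-word document in code form and prepend the right word automatically. The dictionary keys and entries here are hypothetical; use the actual words from each model's Civitai page.

```python
# Hypothetical trigger-word notebook: one entry per checkpoint/LoRA.
trigger_words = {
    "wildcard_fantasy_sdxl": "wildcard fantasy",  # hypothetical entry
    "rpg_style_lora": "RPG style",                # hypothetical entry
}

def build_prompt(model_key: str, prompt: str) -> str:
    """Prepend the model's trigger word so the checkpoint/LoRA activates."""
    word = trigger_words.get(model_key, "")
    return f"{word}, {prompt}" if word else prompt

print(build_prompt("rpg_style_lora", "Iron Man poster, fighting Batman"))
# -> "RPG style, Iron Man poster, fighting Batman"
```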
15. KSampler seed: In the last episode we learned many things, and we created an image with LoRAs and presets. In the previous section we learned about Load Checkpoint, the CLIP text encoders, and the Empty Latent Image; the KSampler takes the empty latent image and creates an image that is visible to us, and the KSampler is the main brain here. The KSampler cooks everything together and shows us a visible image, and it becomes visible to us after the VAE Decode. In the KSampler we have a seed. This is the seed, and the seed is the value the KSampler uses as the starting point for generating the image from the loaded checkpoint or any model. If we keep the seed value the same, it will always produce the same result every time we queue; if we change the value of the seed, it will create a different image, every time we change the value of the seed. You can control the seed with the control-after-generate option: randomize, fixed, increment, or decrement; there are a few options you can see in the control panel for the seed value. The minimum value of the seed is zero, and the maximum value is very large. With such a large range, you can generate a practically infinite number of images using the same model and the same prompt, using the same ingredients. You can also set the seed behavior to increment: whenever you queue a prompt, it will generate an image, and the seed will increase by one.
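Here is a minimal illustration of why a fixed seed reproduces the same image: the sampler's starting noise is drawn from a seeded random generator, so the same seed plus the same settings gives the same starting latent, and therefore the same result. The latent shape below assumes a 1024 x 1024 SD-style image (latents are 8x smaller than the pixels).

```python
import torch

def starting_latent(seed: int, shape=(1, 4, 128, 128)) -> torch.Tensor:
    """Draw the sampler's starting noise from a generator seeded with `seed`."""
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

a = starting_latent(42)
b = starting_latent(42)
c = starting_latent(43)
print(torch.equal(a, b))  # True  -- fixed seed: identical noise every queue
print(torch.equal(a, c))  # False -- new seed: different noise, different image
```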
16. Exploring Queue Prompt: You can queue multiple jobs, just like you give multiple commands to a printer; you can also give multiple commands to the queue. If I click it one, two, three times, I have told it to generate three times, and it has generated three times; as you can see, it has generated three images for me. If you want to cancel any job, open View Queue: all the processes, running or pending, are listed there; just click the cancel button on one, and you can close this window again. You can also see the Extra Options here: once you check Auto Queue, it will automatically keep generating images for you, and it will not stop until you stop it. We also have the decrement option for the seed here: each time you generate an image, the seed decreases by one. If you keep it randomized, it will generate a random seed for you, and the probability of getting the same image is actually very, very low because of the huge range of possible numbers. Now let's set the seed value to fixed and generate again: whenever you queue the prompt after fixing the seed, it will generate the same image every time we click. Now let's change the prompt: I have changed "purple" to "red". Now let's see: it has generated a red colored galaxy. It still tries to create much the same image every time we click Queue Prompt, because we have the same seed number. If we change the prompt, it will still try to create a similar image every time we edit the text; but every word has a different value in the computer's language, or we can say every word has a different weight, so it will create the image according to that. Somehow it will still try to generate a similar image every time we queue. It will generate a different image when we change the steps to 19 or 20, or change the CFG value. Let's generate again: you can see there is a difference from changing the steps. Let's make it 30 steps: it will try to keep the image similar, but you can see there is a color difference here. There is always a subtle variation in the output.
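As a hedged aside: ComfyUI also serves a small HTTP API on the same local address the browser uses (the default port is 8188), and POSTing a workflow to /prompt adds a job to the queue, just like clicking Queue Prompt. A minimal sketch, reusing the `default_workflow` dict from the earlier sketch:

```python
import json
import urllib.request

def queue_prompt(workflow: dict, server: str = "http://127.0.0.1:8188") -> dict:
    """Queue one job on a locally running ComfyUI instance."""
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{server}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Queue the same job three times -- the programmatic equivalent of clicking
# Queue Prompt three times (seed handling follows control_after_generate).
for _ in range(3):
    print(queue_prompt(default_workflow))
```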
17. Impact of steps on the result: Keep it in mind: the more steps, the longer the image will take, in proportion to the steps. Let's make it five steps and queue the prompt. You can see the timing here: it only takes about a second.
18. Adding effect to the image with CFG (classifier-free guidance): We have to check the CFG value. If you change the CFG value to one, you might not like the results; if you change the value to ten, you might like the result a little bit more. But the CFG value is not directly related to contrast. As you can see, there is a contrast change between both images, but what you have to find is the CFG value at which the model comfortably creates images for you. If you increase the CFG value to 15, let's see what we get: there is a change in contrast along with the CFG value, but CFG does not directly change the contrast of the image. It actually applies an effect to the image, of which the contrast change is a side effect. The full form of CFG is classifier-free guidance. If the CFG value is low, the model has the freedom to create images according to the loaded checkpoint or LoRA; it has its own freedom to create images more loosely. If the CFG value is high, the model will stick to the prompt we have given, and it will try to follow it strictly.
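Here is what classifier-free guidance actually does at each sampling step, as a short sketch: the model predicts the noise twice, once with your prompt and once without, and the CFG value scales how far the final prediction leans toward the prompt.

```python
def cfg_mix(uncond_pred, cond_pred, cfg: float):
    """Classifier-free guidance: blend the unconditional and conditional
    noise predictions at one sampling step.

    cfg = 1   -> barely leans on the prompt (the model's own freedom)
    cfg high  -> follows the prompt strictly (with side effects such as
                 harsher, more "burned" contrast at extreme values)
    """
    return uncond_pred + cfg * (cond_pred - uncond_pred)
```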
19. How to change values in the KSampler automatically using a Primitive node: Right click on the KSampler; it has many options, and you have to look for "Convert Widget to Input". There are many options there, like convert seed to input, control after generate, convert steps to input. Let's convert CFG to an input: you can see the CFG widget is no longer here; it has moved to an input slot for the CFG. Drag out from this input and release it to add a node here; there are many nodes, so let's search for Primitive. This Primitive node allows you to feed a value into the CFG input, and, just like with the seed value, you can set control after generate to increment here. Queue the prompt, and it will automatically increase the value of the CFG. So, just like the seed, we can control the CFG value however we want here. This helps you in larger workflows where you want to experiment with a prompt across different settings. If you wanted to change the settings of each value every time you generate an image and haven't gotten the desired result, you would have to change every option manually; if you don't want to change the settings of everything by hand, you can use this type of node, and it will automatically change the setting for you. Let's change our KSampler back to how it was: select the option "Convert Input to Widget"; we have only one converted input here, so click it, and CFG comes back to its original place. And we have a leftover Primitive node here, which you can delete. Let's convert another one just to give you an idea: Convert Widget to Input, and you can select the scheduler or denoise; let's select denoise, and you can see the denoise input here. Add a Primitive node again, connect it, and you can see the denoise has a Primitive node. Now, let's convert our KSampler back to its original state again, and you can delete the Primitive.
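The Primitive node automates value changes inside the graph; outside the graph you can do the same kind of sweep over the API. A hedged sketch, reusing the `queue_prompt()` helper and `default_workflow` dict from the earlier sketches: try a range of CFG values without editing the KSampler by hand each time.

```python
import copy

# Sweep CFG and save each result under a distinct filename prefix.
for cfg in (4.0, 6.0, 8.0, 10.0):
    wf = copy.deepcopy(default_workflow)
    wf["5"]["inputs"]["cfg"] = cfg                       # node "5" is the KSampler
    wf["7"]["inputs"]["filename_prefix"] = f"cfg_{cfg}"  # node "7" is SaveImage
    queue_prompt(wf)
```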
20. Batch generation of images and naming: You can also experiment with the sampler and scheduler. Most of the time, I use DPM++, that is dpmpp_2m, with the normal or karras scheduler. You have to explore what works for a specific type of LoRA, checkpoint, or preset, and which model works best with which type of sampler. It's all experiment based, and you have a huge number of options; you can check and select what is best for you. You can also see the batch size: set it to three, and it will generate three images. Let's queue the prompt, and once all the images are generated, they will show here. So all three images have been generated, and you can see "1/3": it means there are three images and you are watching the first one. Let's click through: now you can see the second image and the third image. You can switch between the images and check how many images were generated in one batch, and you don't have to queue three times; you only have to click here. Let's make it one again. If you want to watch all the images at once, you can open the full preview and see every image at once. You can also change the name of the images: as you can see, there is a filename prefix, which is "ComfyUI" by default, so it adds "ComfyUI" plus a number to your images' names. You can change it to "bottle"; it makes more sense. And if you want to see all the images that have been generated, you can check the ComfyUI folder: inside it, check the ComfyUI output folder. As you can see, the names so far start with "ComfyUI"; after this generation it will now use "bottle" in your image names. So you can see the images that have been generated; these are all the images we have generated in this course so far, and these are some high quality images that I generated for my course as well. I love these images the most; that's why I used them for my course cover.
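In API form, the two settings from this lecture are just two fields, as a hedged sketch again reusing the earlier `default_workflow` and `queue_prompt()` helpers: the batch size lives on the Empty Latent Image, and the filename prefix lives on Save Image.

```python
import copy

wf = copy.deepcopy(default_workflow)
wf["4"]["inputs"]["batch_size"] = 3               # 3 images from one queue
wf["7"]["inputs"]["filename_prefix"] = "bottle"   # saved to ComfyUI/output as bottle_00001_.png, ...
queue_prompt(wf)
```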
21. Create a group node and use all its information: Now let's select multiple nodes: hold Shift and click, click, click; you have to hold the key and keep clicking. Now we have selected all the nodes here. Right click on the KSampler, or any node, and you can select the option "Convert to Group Node". Once you click it, you have to name it. Okay. Now the nodes we selected have been merged into one node, and you can see it is a group node with all the details: the text areas, the checkpoint name, the width and height, the batch number we used, the latent image fields, the loaded model, the seed and the KSampler details, the denoise, and the VAE Decode. You can see "bottle" here, the filename prefix we set earlier. We had not selected the Save Image node, but we can add that again: Save Image. I don't know why it was left out, but somehow it was. So this is the group node that has been generated, and you can select and merge it again; that is why I selected the Save Image this time, and you can see we now have a new group node here, which I have named "text". Now, with the group node we have generated, you can compare it with the workflow we generated before: it has a checkpoint, it has the text areas, it has the Empty Latent Image with a batch size (you can see the Empty Latent Image details), and it has the KSampler details, seed and control-after-generate, and the VAE Decode; the image is generated here. Now you can remove the old nodes and put this one here. Now we have a whole workflow in one node, and I can delete all the other nodes: select them, pressing Ctrl and clicking each node, delete, and simply Queue Prompt. So, finally, we have generated an image again with the new group node that we created. You can manage the group node, and you have the power to change the order, too: right click, Manage Group Node, and you can see all the nodes we merged into the group. You can change the position of a node; for example, move the checkpoint down, and save it. Now you can see the checkpoint has been shifted to the bottom of the node; it is now more convenient for us to see and select, and the node does not look so bulky anymore. Right click again, Manage Group Node: now you can set the visibility of the KSampler widgets, or those of any of the nodes. Select the KSampler, and if we don't want the scheduler or denoise showing (you can see we have not used denoise here), click it, save, and you can see the denoise option has been removed. This is best for larger workflows: once you create one and feel there is too much distraction, you can hide things, and it's best for you. And if you want all your nodes back, if you want to see the nodes again, you can Convert Back to Nodes. It actually looks cluttered, but at least it lets you watch all the nodes. Let's press Ctrl, select these, and delete, delete, delete. Why did we not add the Save Image to the group in the first place? Because once you generate an image, there is a preview inside; and once the preview is inside a group node, its size will not grow with the image, as you can see from the increased size here. Now I have converted the nodes back to their original places, and you can press Ctrl, select all the nodes, and copy and paste here. You can see all the nodes have been copied, and once you queue the prompt, the first workflow will run, and after that the second workflow will run. Once you select all the nodes, you can click "Add Group for Selected Nodes": now we have a selected group here, and you can change all the node positions at once, and you can name this group, too. Right click on the group and choose Edit Group: you can name the group, and I name it "bottle", and now you don't have to move all the nodes one by one. If you want to remove the group, you can right click the group, Edit Group, and Remove.
22. How to bypass nodes in ComfyUI: Now let's double click on the canvas, and you can see the node search. Let's search for "bottle": you can see the bottle group has been saved, and this is the group workflow we generated before. So now we have two workflows: one above and one below. Now let's queue the prompt. Two workflows have run: one generated its own image and worked perfectly fine, and the other workflow ran again as well. So two workflows, one queue, and both did their jobs perfectly well. Now, if we want only one of them to run, and the other not to run, you simply press Ctrl, select the nodes, right click on the group section, and look for the option "Bypass Group Node". Bypass this group node, and now this workflow is hidden from the eyes of Queue Prompt. There is also a shortcut to bypass: select the nodes and press Ctrl+M. Now you have hidden this workflow, and it will not run again; as you can see, only one workflow has run. In the group section, you can select the nodes and Ctrl+M simply hides them from the queue; or, if you prefer, you can bypass them by selecting Bypass Group Node.
23. We are now getting into the advanced section: You can also enable or disable a single node using the same process, and it will help you in your larger workflows. Now let's edit the group, remove it, select, delete; then Ctrl-select and Ctrl+M. I have pressed Ctrl+M twice: once it is disabled, and the second time it is enabled again. With this lecture, we have covered all the basics of ComfyUI, so that you can understand and work with a ComfyUI workflow. Now we are going to generate high quality images with different LoRAs and presets. After this lecture, everything is going to blow your mind. See you in the next lecture. Thank you for taking this course. I'm your friend Karan. See you in the next lecture.
24. How to update ComfyUI, and workflow files information: Hello, everyone. Thank you for taking this course. Let me tell you one thing about ComfyUI: this node system is, I think, kind of a universal thing, because if you check Blender, and also DaVinci Resolve (DaVinci Resolve is a video editing software, and Blender is a 3D software), both of these, and upcoming software too, actually apply this node system; these node systems are also known as workflows. So we are going to create a workflow that uses Flux. Let me tell you what Flux is: Flux is a model that composes images according to your prompt, so you get the visual result of your text as an image. In simple, layman's language, that is Flux: a model, a checkpoint. It is not a very difficult thing to use; once you understand this, you can create your own workflow. Let me tell you one thing: let's start by clicking the Manager. Once you click the Manager, click on Update All. Once you update all, your whole system gets updated, because ComfyUI is open source: developers all over the globe contribute to this machine learning software, or we can say to its programming; that's why it gets updated almost daily. So I recommend that once in a while you update your ComfyUI to get the latest, fastest results. Once it is updated, you just have to click Restart, and it will restart and reconnect, and another one will open here; sorry, it's not a node, it is a window, or tab. Another tab will open, so let's close the old one and wait; it will take some time. As you can see, everything is getting updated, so let's jump ahead while it updates. Meanwhile, let me introduce the workflows that I have gathered from the Internet; well, not "the Internet", but from the intelligent, genius people who made them, and I respect all of them. I have downloaded many workflows and applied them in my work. As you can see, these are workflows, many workflows that I love, for Flux and ComfyUI; click one and open it. You can see this is the node system that Flux uses, and you can create your own node system, or you can save your own node system.
25. ComfyUI latest interface update and walkthrough: Let me tell you one thing: every time you open ComfyUI after updating it, you may see a new interface, and it can be really confusing. Like, last time everything was on the lower right hand side, and now everything has been changed. But it is not that difficult, as you can see. The workflow menu, you can see: workflow, new, open. Wow, it has actually become an easier way to use ComfyUI, with all of this. As you can see, there are the queue, the queue history, and there is the node library; you can check the node library within ComfyUI right now. Wow, this is actually exciting. You can see the nodes. Okay, got it: these are the nodes. I thought we could change models directly here, but we can find the nodes here, like searching for "diffusion model". Diffusion: yes, we can, and it's working perfectly fine. Can we search here? Clicking is also working. Check: okay, yes, that's what I'm talking about; these are the old nodes. Can we check for DreamShaper? Okay, not bad: we have checkpoints, DreamShaper, and let's see, a checkpoint, and an SD 1.5 Juggernaut under Load Checkpoint. Do we have Flux? Can we use a Flux checkpoint directly? I don't know if it will work, because I think this is not a checkpoint; we can check the diffusion model loader. Is it working? Oh, the diffusion model is not working; okay, so adding the diffusion model this way is not working directly. Text encoders, we can check, we can check. No, I don't think that works quite the way I expected; it works in its own way, but not quite. Let's check the unsaved workflow: okay, this is the unsaved workflow, and upscale; here is the nodes map, and upscale is highlighted here. Okay, an advanced custom sampler node. As you can see, you can bypass directly from here; it is very handy. It is actually very handy to use this interface of ComfyUI, and I really love it. Wow. You can check the batch count, like we have discussed earlier. You basically have to experiment with all these things so you understand the interface of the new ComfyUI. So let's check: okay, white or black theme, you can change it directly. Wow. So let me tell you one thing: let's work on a new workflow; new, open, save, save as, export, export.
26. Advanced workflow in ComfyUI, Flux overview: I have a pot full of workflows; these are the many workflows that I have used. Let's check the image-to-video workflow; let me check if that's the one. This is a workflow that I have worked with before. As you can see, there is a prompt, and there is a character which is dancing, if I can show you that. I think that asset is not loaded right now; not an issue, this is just a workflow for our video. We will discuss all these things in depth in upcoming lectures, so don't worry about that. These are the images that I have generated, and I used four to five of these images; using them, I created this kind of really, really amazing transition effect across the images, as you can see. And these are high quality: you can post them online and create a portfolio out of all these workflows. There is another workflow, the same workflow that I have shown before, where I tried to create something, but I did not find the result very good. As you can see, this is the one I love: it is not my desired result, but I loved it. So what about this image generation workflow? I think I'll copy it. Okay, so it is really amazing that we can do anything with this Flux. Let me first queue the prompt, because I know that one of the nodes may have been replaced by another node, and we have to fix that if that happened. So let's queue, and I hope it works fine. I'll just jump ahead, so you don't have to wait through all these percentages; I simply jump to the future because we have the power of video editing. This is a prompt that has already been filled in: a mountain peak scene with Lord Ganesha. Lord Ganesha and Lord Shiva, I hope you know them; if you are outside India, then you may not. Lord Shiva and Lord Ganesha are really respected gods in the Hindu religion, whom we worship, and as you can see, there are Lord Ganesha and Lord Shiva. So it is working perfectly fine. So this is the workflow, and I will share this workflow with you. Okay? So you don't have to worry about anything: with this amazing workflow ready-made, you don't have to add any extra node, nothing at all. But I just want you to understand how these node systems work.
27. Advanced prompts, diffusion models, LoRAs, and workflow: Let me explain it one by one, and I will share this amazing workflow with you; don't worry, you can find it in the description, or anywhere on the platform in the download section. Let me introduce the Flux model loader. It is a group; as you can see, there is a group, and we have named this group "flux model loader". Load Diffusion Model: here is the Load Diffusion Model node; you can add this by double clicking and searching for Load Diffusion Model, under advanced loaders, ComfyUI core. We have added this, as you can see here. Then there is a DualCLIPLoader, and it has a Load VAE. As you can see, we downloaded this Flux model before, and I will explain again how to download it: once you search for Flux, you can check the checkpoints, and as you can see, Flux.1 D; click on Download. It is actually a huge file, the full model, and I recommend you download this one. After downloading it: this is an image that I randomly opened on Civitai. It has a LoRA, and they have shared a number; what does this number mean? This number is how much weight the LoRA carries when generating an image. Apart from that, let's check the LoRA itself: click it, and once you open it, you can see its page, and I'm downloading this LoRA. Do we have a trigger word for this one? No, we don't have any trigger word. And I recommend that you create a document for all your LoRAs, all your Flux models, all your checkpoints, all the models that you are using; each can have its own trigger word. This next one does have a trigger word: you can see it is "RPG style", and you have to copy that. If you do not use this trigger word, our model, our ComfyUI, will not get the idea of how we are going to use this LoRA; most probably ComfyUI will simply ignore the LoRA, and I don't know exactly how it would react, but most probably it will ignore it. So what you have to do is download it: go to your ComfyUI windows portable folder, open ComfyUI, check models, and you can see it has a folder named loras; save it there. After saving it, copy the trigger word, and I'm adding it to the online doc (yes, the doc we used before): I'm going to add the trigger word here, save it under "trigger words", and check it against each LoRA. For the first LoRA, note "no trigger word", and for this LoRA, you have a trigger word. Let's resize it. All my trigger words get a red mark on them so that I can copy them directly and pinpoint that this is a trigger word. Download another LoRA that you can see; it has two files, so download one of them, into the same folder, and it's downloading. Let's jump ahead again. Let's refresh: Refresh Node Definitions; it refreshes, update requested, and I hope this refreshes properly. Yes, the nodes have been refreshed. Now you can check the LoRA loaders for the LoRAs we have downloaded: the first is the Flux realism LoRA, keep it at 0.5 weight; and the second, set its name to the RPG LoRA, and keep the weight at 0.5. Now we have both LoRAs. Now let's paste "RPG style", and for the prompt, let's copy the example prompt directly and see which type of result we get with RPG style. Most probably the trigger word is already embedded in this prompt; let's check if we find "RPG" in it, and if not, let's add one. That's done. We have added the LoRA, the RPG style offset safetensors. As you can see, it has already been added, and I don't know if this will work; maybe I'll just copy the layered style text too. I don't know if this works or not; let's just embed it and paste. I will explain prompting in the upcoming lectures; we will talk about everything. Right now, I'm just trying to give you an idea of how this thing works, okay? So let's queue the prompt, and I hope this works promptly. Let's jump ahead again. Okay, so we have not gotten the desired result; we have a problem with, I don't know, maybe both of these at 0.5. Okay, 0.55 and 0.5; and let's set the denoise to 0.99. Let's try again. Okay, there is some change I can see; as you can see, this is the type of book we can see right now. So what have I changed? Let me tell you: before, the denoise was 0.5, and right now the denoise is 0.99. And the LoRA's model weight was 0.5 and its clip weight was 0.5, for both LoRAs, so I think they clashed with each other; there was a conflict between them. And now, hmm, this is amazing; this is actually amazing. Let's check the save folder: ComfyUI, check the output folder. As you can see, this is the image we have generated using Flux; the result is pretty amazing. Now let's experiment by not using any LoRA: what will the result be if we are not using any LoRA? Let's check it out; Queue Prompt. Let's jump ahead again. Yes, the results are almost the same, I think, somewhat; but if you compare both images, there is a difference in the texture of these two images. This one shows a pretty smooth type of texture, and there's not much realism in it; in the other, I can see there is realism. Yes, you can see the texture difference; yes, these imperfect details make this image a realistic one, as you can see. To be honest, these are both amazing. I really, really appreciate the prompt they have given to us. Actually, prompting, and working with ComfyUI, is not a hit-and-trial thing: it is the kind of thing where, once you understand how you have to convey your idea, how you have to explain it to this machine learning system, you will understand the prompt. That's all. So: "Depict an ancient mystical book resting on a weathered wooden table in a dimly lit, magical setting. The book's aged, cracked leather cover features a vivid illustration of a dark and enchanted forest. The forest is bathed in pale moonlight, its gnarled, leafless trees stretching their roots over the mossy forest floor; a winding stream cascades through the scene, forming miniature waterfalls that glisten in the ethereal light. The artwork appears seamlessly embedded into the book's leather, as if enchanted. Surrounding the book are atmospheric details: a few melted candles in ornate brass holders cast a warm, flickering glow, their light illuminating a delicate silver crescent-moon amulet and a tarnished chain nearby; a quill and ink well sit alongside scrolls and weathered tomes, their feathered gold lettering hinting at the sanctity of knowledge. The table is rich with textures: its surface is scattered with dust and faint scratches." As you can see, the image somehow actually depicts all of the prompt we have given to ComfyUI. "In the background, soft-focus silhouettes of other magical artifacts and books hint at the scholarly mage's workspace. The color palette balances mossy greens and browns with the warm golden glow of candlelight and the silvery sheen of moonlight. The scene exudes mystery and timeless enchantment, inviting viewers to imagine the secrets hidden within the tome's pages." And then this is the art style that we have given.
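To recap the loader chain from this lecture, here is a hedged sketch of its core in API format. The exact file names are placeholders; match them to what you actually downloaded, and note that a real Flux graph continues from here with text encoding (remember the "RPG style" trigger word), a latent, a sampler, VAE Decode, and Save Image.

```python
# Flux model loader group plus two chained LoRA loaders, as in the lecture.
flux_workflow_core = {
    "1": {"class_type": "UNETLoader",            # the "Load Diffusion Model" node
          "inputs": {"unet_name": "flux1-dev.safetensors",       # placeholder name
                     "weight_dtype": "default"}},
    "2": {"class_type": "DualCLIPLoader",
          "inputs": {"clip_name1": "t5xxl_fp8.safetensors",      # placeholder name
                     "clip_name2": "clip_l.safetensors",         # placeholder name
                     "type": "flux"}},
    "3": {"class_type": "VAELoader",
          "inputs": {"vae_name": "ae.safetensors"}},             # placeholder name
    # Two chained LoRA loaders, both at 0.5 model/clip strength as in the lecture:
    "4": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["2", 0],
                     "lora_name": "flux_realism.safetensors",    # placeholder name
                     "strength_model": 0.5, "strength_clip": 0.5}},
    "5": {"class_type": "LoraLoader",
          "inputs": {"model": ["4", 0], "clip": ["4", 1],
                     "lora_name": "rpg_style.safetensors",       # placeholder name
                     "strength_model": 0.5, "strength_clip": 0.5}},
}
```

Chaining the second LoraLoader off the first one's outputs is what lets both LoRAs apply at once, which is also why their weights can clash, as we saw above.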
28. Difference in Flux Dev and Schnell and Commercial use license: We are getting deep into flux and understanding it better. I have gave you the
overview for the flux, how we can use it and
about the workflow. I will explain you
briefly about the flux, a new model that I
absolutely love. I will walk you through
how to install and use a different version like
the Dave and Snell version, which one works best in terms of quality,
speed, and setting. So let's understand what
is flux. What is flux? Flux is created by
Black Forest Lab, a team of smart researchers and engineers who used to
work with stability AI. Now they have
introduced the flux version, one model family, which performs really well, but heads up this model needs a good computer for
There are three versions. The Pro version is API-only and not downloadable. The Dev version has the best quality but is resource-heavy. The Schnell version is the fastest, with slightly lower quality. The licenses for these versions are a bit different. The Dev version does not allow commercial use of the model itself: you can use the images it generates for commercial purposes, like selling them, but not to train a competing model. It simply means you can't use Flux to train another model that would compete with Flux. The Schnell version fully allows commercial use of both the model and its output; they grant all those rights to the user. It's always a good idea to check the license files yourself for the most accurate details. In this comparison they have shared Flux Pro on the left, Flux Dev in the middle, and Flux Schnell on the right.
Precision in text rendering: both Flux Pro and Flux Dev excel at accurately reproducing text within images, making them ideal for designs that require legible words or phrases. Whether it's signage, book covers, or branded content, Flux Pro delivers clear and correct text integration, and Flux Dev does a decent job, while Schnell struggles to render text effectively. Complex composition: all Flux models demonstrate exceptional skill in understanding and executing complex compositions. Whether you are dreaming up elaborate fantasy, realism, or precise product visualizations, Flux effortlessly brings your multi-element prompts to life with stunning accuracy. As you can see, here is the difference. Anatomical accuracy: all Flux models demonstrate great anatomical accuracy, especially in the rendering of human features. These models consistently outperform previous open-source alternatives like Stable Diffusion 3, SDXL, and so on, in creating realistic and proportionate body parts, significantly enhancing the quality of character-focused images. As you can see, here is the difference. It actually looks purely realistic, but it is fake. It looks good, but as you can see, there are differences in the hands: here, here, and here. This one looks like a filter has been applied to the image, but it still looks realistic.
Fine tuning color. All models excel at accuracy reproducing
specified color palette, making it a go to tool for
maintaining brand consistency, evoking specific moods or bringing creative vision
to life with exactness, whether crafting
marketing materials or pursuing artistic endeavors. Flux model delivers the desired color scheme
with unpalett precision. Okay. Yes. Here's a difference. Same prompt, three output. I like this one, by the way. But as you can see, it
is more realistic one. So alters and aesthetic. All three flux motor produce turning high
resolution images that capture intricate
details delivering top notch clarity and aesthetic. This is a real one. It looks
like someone has clicked this image and look about
this Dave and about Snell. It looks good, and
it look good, too. But this one is the realistic
and aesthetic look. Yes. Here's the difference
with these three models. As you can see, these
three models have the difference or
whole new level. They just go into the whole new level
of generating image.
29. Download the right files to the right paths for the Flux diffusion models and checkpoints: To download, you just have to go to the Flux Dev page. You can find the Flux Dev diffusion model weights here in the repository folder, and you have to save the file into the ComfyUI models/diffusion_models folder. It is a 23.8 GB file; just download it. Next, the Flux Schnell diffusion model weights: download this file into the ComfyUI models/unet folder. That is, open the ComfyUI folder, find "models", check for "unet", and save it there. As you can see, I have already downloaded it; it weighs 23.8 GB. Let's download it: copy this path and paste it here. So let's download Flux Schnell, and while it's downloading, let's check the Dev folder.
Now, let's check the regular version, the regular full version. For Flux Dev, you have to put it into the ComfyUI models/diffusion_models folder; that is where you download it. For the diffusion model, just click it, open it, check the Files and versions tab, scroll down to flux1-dev.safetensors, and download it into this folder. Just copy the path, paste it here, and start the download. Let's wait for it. Now we also need the smaller versions; you have to download them because we are going to use them in upcoming videos. So: Flux Schnell downloaded, Flux Dev downloaded, the regular full versions. You can then load or drag the following example image into ComfyUI to get the workflow, because this image is itself a workflow. You just have to save it, either with your images or in your workflow folder: the Flux Schnell example. There is also a simple FP8 checkpoint version, that is, Flux Dev as a Load Checkpoint version. So right-click, Save Image As, the checkpoint example, and save it into your ComfyUI workflow folder. Same for Flux Schnell: right-click, save as image, save. Now let's download the simpler FP8 version of Flux Dev. It is a 17.2 GB file named flux1-dev-fp8. You have to download this file into the ComfyUI models/checkpoints directory: go to ComfyUI, models, check for "checkpoints", and put it there. Now let's download the Flux Schnell FP8 version ("schnell" simply means fast): just click Download and save it in the same checkpoints directory, flux1-schnell-fp8. So let's wait for the downloads; it will take some time. You should have a fast Internet connection, and I recommend downloading the files one by one rather than all at once like I have done. So let's wait for the download.
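Since these files are large and easy to misplace, here is a minimal sketch, with an assumed install path and the file names used in this lecture, that checks whether each download landed in the right ComfyUI model folder:

```python
from pathlib import Path

# Assumed ComfyUI install location; change this to your own path.
COMFYUI = Path(r"C:\ComfyUI_windows_portable\ComfyUI")

# File names as used in this lecture; yours may differ slightly.
expected = {
    "models/diffusion_models": ["flux1-dev.safetensors"],
    "models/unet":             ["flux1-schnell.safetensors"],
    "models/checkpoints":      ["flux1-dev-fp8.safetensors",
                                "flux1-schnell-fp8.safetensors"],
}

for folder, files in expected.items():
    for name in files:
        path = COMFYUI / folder / name
        status = "OK" if path.exists() else "MISSING"
        print(f"{status:8} {path}")
```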
30. Workflow for the Flux Schnell and Flux Dev checkpoints: In the last episode we set up a sampler. First, we add a KSampler node, then two CLIP Text Encode nodes, one negative and one positive. Before the positive one, you have to add a FluxGuidance node. For the latent, I have added an Empty Latent Image node, and both prompts are connected through the CLIP encoders. Then we have a Load Checkpoint node; these two entries are the Flux checkpoints that we downloaded into the checkpoints folder. After the KSampler, add a VAE Decode node, with the latent connected to its samples input and the VAE coming from the Load Checkpoint. The prompt: portrait of a girl wearing a hat, Harry Potter. Let's queue the prompt. "Prompt output failed": we need a valid checkpoint, so make sure the Dev one is selected. Let's queue the prompt again. For practice purposes, I will share this workflow in the resource files. It might take some time at first... we are not getting any result, so let's cancel it; we will find out why in a moment.
So let's understand what we missed in this workflow. Check the example images that we downloaded; these are images with the workflow embedded. Let's drag one in here. After dragging it in, you can see the image is simply converted into a workflow: drag it, and it's converted. I'm using the Flux Schnell example; add it here, and we have a perfect workflow. But before getting into the diffusion model, let's look at the checkpoint version first. This is the checkpoint workflow, so let's see what we missed in the workflow that we created. We used a KSampler, we used FluxGuidance, we used the CLIP text encoders, positive and negative. But here they use an Empty SD3 Latent Image node; that is the one node that is different. Everything else is the same, only the Empty SD3 Latent Image differs. Now let's read what the note says: note that Flux Dev and Schnell do not have any negative prompt, so CFG should be set to one. Setting CFG to one means the negative prompt is ignored. So we have to remember that Flux Dev and Schnell do not use a negative prompt. They simply add a negative prompt node just to keep the KSampler wired up, and they keep the CFG value at one so that it gets ignored. So the negative node that we have connected here is ignored.
So let's add a prompt. Let's try: tiger, cute, 3D cartoon style, with a Hollywood movie background. Let's queue the prompt. It looks nice, and you can see the clarity is amazing. And remember, if you rely on a negative prompt here, it will have no effect on the result. The other extra thing we have added is the Empty SD3 Latent Image. And within one minute we got this image; as you can see, it took 52 seconds, 4 seconds, 2 seconds, 0.7 seconds. These are the times taken by the individual nodes. Now go back to our example images and check the Flux Schnell checkpoint. So far we have used the Flux Dev checkpoint; now the Flux Schnell checkpoint. This one is the same, with no change, and the same note applies: Flux Dev and Schnell do not have a negative prompt. But this extra line is here: the Schnell model is a distilled model that can generate a good image with only four steps. So we have to check the steps we are using, because it says the Schnell model is a distilled model that can generate a good image in only four steps. Let's check it out: tiger, cute, 3D, with a Hollywood background. Let's queue the prompt. Yes, it looks good. Now let's increase the steps, like we did before; let's take it to 15 steps and queue the prompt again. The reason Schnell is fast is that we decreased the steps to just four, and you still get the image. After increasing the steps there might be some difference in the images, but with just four steps Schnell gives you the image. That is why it's fast. Next we are getting into the Flux Schnell example with the workflow that uses a diffusion model.
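Before moving on, it may help to see the whole checkpoint graph in one place. Here is a minimal sketch of the workflow described above in ComfyUI's API "prompt" format; node IDs and the checkpoint file name are illustrative. The details to notice are CFG = 1 (so the negative prompt is ignored) and only 4 steps for Schnell:

```python
# Minimal sketch of the Flux Schnell checkpoint workflow in ComfyUI's
# API "prompt" format. Node IDs and file names are illustrative.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "flux1-schnell-fp8.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",   # positive prompt
          "inputs": {"text": "tiger, cute, 3D, Hollywood background",
                     "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",   # negative prompt (ignored at CFG 1)
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "EmptySD3LatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 0,
                     "steps": 4,    # Schnell is distilled: 4 steps suffice
                     "cfg": 1.0,    # CFG 1 means the negative prompt is ignored
                     "sampler_name": "euler", "scheduler": "simple",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "flux_schnell"}},
}
```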
31. Create an advanced workflow for Flux with 100+ styles of image generation: Now I am loading the Flux Schnell version because it is faster. Add a node and search for "style": search for the Prompt Multiple Styles Selector. Select it and add this node here. Next, right-click the node, convert the widget to input, convert text to input, and decrease its size. Yes. Now increase the space between the nodes. After adding the space, add another node by double-clicking here: search for "easy positive", and you can see this one, the Easy Use prompt node; select it. After adding this, double-click again and search for Text Concatenate; select it and place it here. Now connect the positive output to text A and the positive string to text B, and connect this string to the text input. Why have we done this? We simply converted the text widget to an input so that the node doesn't take a prompt by itself: it takes the prompt from another node and simply pushes it forward into the KSampler. We are adding an extra condition to the prompt. This is the node where we will type the prompt, and it only takes the plain prompt; the other node holds the style, and the prompt is adjusted according to the style that we select in it. After that, the two are combined in the Text Concatenate node, and after concatenation the combined prompt goes to the CLIP Text Encode, which encodes the full prompt we have given and pushes it into the KSampler. And the KSampler will do its work.
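Under the hood, this whole arrangement boils down to string concatenation before encoding. Here is a minimal sketch of what the style selector plus Text Concatenate step produces; the style template shown is a made-up example, not an actual stylesheet entry:

```python
# What the style-selector + Text Concatenate nodes boil down to:
# the style contributes extra text that is joined to your base prompt
# before it reaches the CLIP Text Encode node.
def apply_style(prompt: str, style_template: str, delimiter: str = ", ") -> str:
    # Some style sheets use a "{prompt}" placeholder; otherwise just append.
    if "{prompt}" in style_template:
        return style_template.replace("{prompt}", prompt)
    return prompt + delimiter + style_template

# Hypothetical style entry, for illustration only.
anime_style = "anime style, clean line art, vibrant colors, studio lighting"
print(apply_style("cute tiger, 3D, Hollywood", anime_style))
# -> cute tiger, 3D, Hollywood, anime style, clean line art, ...
```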
Now let's add a prompt here, "cute tiger, 3D, Hollywood", and add a style. We have the styles file here, which I downloaded from the Internet; you don't have to go through all that searching and downloading yourself. You simply have to put this file into the styles folder in ComfyUI. Also put it into the main ComfyUI folder; check for main.py or nodes.py so you can pinpoint the right folder, and simply copy it there as well. As you can see, I just opened the ComfyUI folder and pasted it here: that's the styles folder, and I have added the file to it. Now, before working on the main workflow, let's save it. Simply export this workflow with "Export (with styles)" and confirm, and you can see here is the exported JSON. You can just cut it and save it into your workflow folder, and then paste it back to get the workflow. That's how we set up our workflow for the styles, so that we can work with the prompt later on.
32. WAS 100+ styles workflow setup: Now go to the ComfyUI Manager and open the Custom Nodes Manager; search for WAS. As you can see, it has 1,248 ratings; click Try Update, and a restart is required. If you have not installed it, you just have to install the WAS Node Suite. Also install Easy Use (yolain, you can see, is the author). I'm just updating mine with Try Update; you can simply install it, then restart. Done. Wait for the restart, and the window will reopen. Once you have copied the styles file, right-click it and copy its path. Now, in the ComfyUI folder, look for custom_nodes, double-click it, and search for WAS. I have too many things installed here; you don't have to go through them all, just search for the WAS Node Suite folder and double-click it. Now, if you have run ComfyUI after installing the node, you should see these config files; if not, you have to run ComfyUI first. Now open the WAS suite config file with Notepad and check for the "webui_styles" entry.
Here the styles path is null; you can see "null" here. You just have to paste the path that you copied, and double every backslash in it. This is important: just double them. As you can see, there are double backslashes elsewhere in the file too, so double them here as well. Now just save the file and go back to ComfyUI.
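If you prefer to script that edit, here is a minimal sketch. It assumes the config file is was_suite_config.json inside the WAS custom-node folder; check the names in your own install. It sets the stylesheet path with properly escaped backslashes:

```python
import json
from pathlib import Path

# Assumed locations; adjust to your own ComfyUI install.
config_path = Path(r"C:\ComfyUI\custom_nodes\was-node-suite-comfyui"
                   r"\was_suite_config.json")
styles_path = r"C:\ComfyUI\styles\styles.csv"   # your copied styles file

config = json.loads(config_path.read_text(encoding="utf-8"))
# json.dumps writes "\\" for each backslash automatically, which is
# exactly the "double the backslash" rule from this lecture.
config["webui_styles"] = styles_path
config_path.write_text(json.dumps(config, indent=4), encoding="utf-8")
print("webui_styles set to:", styles_path)
```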
Now, right-click: you just need to choose Reboot ComfyUI. In my case, for some reason it's just not rebooting, so simply close ComfyUI and run it again. Now, let's check: the styles that we pointed to in the WAS stylesheet have now been loaded, as you can see. If you installed everything correctly, the styles will appear; if not, check whether you missed a step somewhere in between.
Now, with the Load Checkpoint set to Flux Schnell and our same positive prompt added, let's pick an anime style here and simply queue the prompt. That's all. Now see what happened: "value not in list" for style. It's asking me for a value, so select "no style" in each of the unused slots. I have added the values; now queue the prompt again, and I hope we get the anime style. There we go, an anime-style character, though it's still kind of 3D. Let's remove "3D" and queue the prompt again. Yes, it looks good. Now try other styles: painting, acrylic, fancy, illustration, cartoon, fashion, sticker. Let's try "sticker, cute" and queue the prompt again. Yes, it's working perfectly fine; now we can create stickers too. It looks amazing. Now try adding "cat" and queue the prompt. Perfectly amazing; yes, it's working. Now you can save this workflow: Export (with styles), confirm. I will share this with you in the resource files. It's working perfectly.
33. Remove backgrounds from stickers: Happy New Year to you and your family from my side! I'm really excited to share amazing things with you in these upcoming lectures. We are starting again with making stickers, but there is a small catch. If you want to create high-quality stickers and share them, how can you share them? You can share them as a JPEG (a JPG file), the image format we use all over the Internet, or as a PNG, which supports a transparent background. Okay, so how can we remove the background of any image? For that, we have to extend the node setup so that we end up with an automatic sticker-making machine, a complete sticker-making machine, I'm talking about. In the previous lectures I shared the workflows: as you can see, these are all the workflows we have used, the Schnell version and the Flux Dev version, and I know you have already used the versions I shared with you, the Schnell version and the Flux Dev version, with checkpoint and without checkpoint, and how we are going to create images with them. Right now I'm using a checkpoint version, so that if you have low VRAM or a lower-end graphics card, you can still follow along and work simultaneously with me. And if you have a higher-end graphics card like mine (I have 16 GB of VRAM in my computer), you can use the other model as well, in which I'm using a diffusion model, a dual CLIP loader, and Load VAE. We will talk about that later; right now we are going to create a sticker in PNG format. I have created this version for you so you don't have to build everything from scratch; you just have to drag this workflow into the ComfyUI interface, as you can see. We have all the workflows that I have created for you, so you don't have to worry about that. Flux Schnell checkpoint with styles: this is the style version, like we already discussed for the Flux Schnell checkpoint. These are all the details we have covered. Let's try "cute baby girl", and let's check the styles: search for "cute", check "sticker", "watercolor stickers", "illustration stickers", and simply queue the prompt. We have a small error here; what we missed is that the unused style slots have to be set to "no style". Set them, and queue the prompt again. Now our checkpoint is loading again; let's wait for some time. And there we go: a cute baby.
A cute baby has been generated, so we have a cute sticker. But the problem is that, as you can see, it just saved the image into your ComfyUI output folder. I will show you: this is the ComfyUI window and the output folder, and it just saved it there. The problem is that there is no transparent background. It is a PNG format, but we need it without the background, okay? So how can we generate that, keep a copy, and turn the whole thing into an automatic machine? Let's not waste any more time: you can see the high-quality sticker workflows here, "sticker with PNG and upscale" and "sticker making with upscale", including the Flux sticker and PNG conversion for business. This is the one I have generated for you, and you just have to drag it in. I will show you what I have done here. Everything is the same. Okay: "cute baby girl", and I will type illustration; sticker, cute, and sticker illustration; no style, no style. Got it. Now, there is a small addition here: these are two nodes that I have added. If you want to add them yourself, you can do so. You just have to go to the Manager in ComfyUI, open the Custom Node Manager, and search for "remove bg". As you can see, there are many nodes for this, but the one we are going to use is Image Remove BG from the Easy Use pack; we are going to add this one. You just have to add it to your ComfyUI. After dragging this workflow into your ComfyUI interface, if any node shows up red, just go to the Manager and click "Install missing custom nodes". It will show you the missing nodes, and you have to install them. Then there will be no problem in using this workflow. I have added the extra Remove BG image node, and I will save its output separately. So we will save two images, here and here: one with the background and one without. Both are PNG versions. Let's save it. And I'm using RMBG 1.4. Why? Because I was actually getting better results with it.
how you can actually generate an automatic
machine. Here is a cue, baby. A baby girl, a baby
girl has been saved, a perfect sticker you can use, and here is a perfect
sticker without background. And I will show you that
will blow your mind. A sticker without background. You can print it out and just
take it anywhere you want. You can sell it out
online. That's all. You just have used some electricity using
your graphic card, and you have just created
Amazing stickers. Wow. Now you are the
graphic designer. AI graphic designer, I
34. Generate 30 stickers in one click: Set a batch size of three and a queue count of ten. That means it will generate three images in one go, like this, and it will run the whole process ten times. That is the meaning of "queue", and that is the meaning of "batch": ten runs of three images each, which is 30 images. You are going to create 30 images. Got it?
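As a quick sanity check, the total is simply the batch size times the queue count:

```python
batch_size = 3     # images generated per run
queue_count = 10   # how many times the workflow is queued
print(batch_size * queue_count)  # -> 30 images in total
```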
Right now, I'm queuing just one prompt with a batch size of three. You can increase the batch size if you want; for now I'm keeping it at three, so just queue the prompt. You will have to wait a little longer. Why? Because it is actually creating three images in one go; you have to wait for it. And what this node is doing is simply removing the background of each image; that's all. We will also use it separately, and we'll talk about that later. Oh, wow, that's nice. So here are the three images, as you can see: one, two, three. These three images have been generated, and you can flip through them here; just click each one to see all three. And honestly, I really, really love these stickers. You could sell them online.
35. Create high-quality stickers or images of about 6000 px: You can create thousands, even tens of thousands, of stickers across different characters and different genres. But what if we want images of around five or six MB in size? On Redbubble, for example, if you are going to upload a sticker, you have to provide at least a 5,000 x 5,000 pixel image or PNG for it to be accepted and considered a high-quality PNG. How can we do that? For that, we are going to use another workflow that I have already created for you: "sticker making with upscale". Let's drag it in here. Now, here is a small catch again: almost everything is the same. Load Checkpoint, "cute panda sticker with pink background", the same illustration, children's book style; everything is the same. So what is the difference? Here it is, as you can see: it is the Ultimate SD Upscale node that I have added, along with a Load Upscale Model node. We are going to use the 4x NMKD upscale model, and we are going to use a CLIP encoder; you don't have to write anything in it. And after upscaling, we use our Remove BG node again to save the image without a background. So let's start. This base workflow is the same one we have been using again and again, but previously we were saving images at about 1080 x 1080 or 1,500 x 1,500 square.
Now we are using the upscale version so that we can upscale by three. As you can see, here is "upscale by 3", and if you multiply three by 1024 x 1024, let's check the result. Let's make the latent 1,500 x 1,500: if we multiply that by the upscale factor of three, we get the output size. For Redbubble, if you are going to upload there, we are going to change the dimensions of our latent image to 2000 x 2000, and with the Ultimate SD Upscale we upscale by three. So 2000 times three is 6,000. It is a simple calculation.
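Here is that sizing check as a tiny sketch, using the 5,000-pixel Redbubble guideline mentioned above:

```python
MIN_PX = 5000                      # Redbubble's high-quality threshold

def final_size(latent_px: int, upscale: float) -> int:
    return int(latent_px * upscale)

for latent in (1024, 1500, 2000):
    out = final_size(latent, 3)    # Ultimate SD Upscale factor of 3
    ok = "meets" if out >= MIN_PX else "below"
    print(f"{latent}px latent -> {out}px output ({ok} the {MIN_PX}px minimum)")
```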
On the Internet, a 6,000 x 6,000 pixel image is genuinely high quality. You can use that image anywhere online: you can upload it, you can sell it, you can create smaller versions of it and sell them by size. You can create high-quality stickers and sell them on Redbubble or other websites. Let me generate a small image for you: "cute panda sticker with pink background", "cute sticker", no style, no style, no style, and now just queue the prompt. And let me tell you one thing: it's going to take a lot of time. Generating a high-quality image takes time.
At first it shows us the base image that it has generated: here is a panda. You have to wait a little longer, because upscaling the image to this extent takes a lot of time; generating the base image is not the heavy part. Here's the sticker that has been generated, and now the second part of our workflow is in action. How does it work? It simply increases the size of your image tile by tile, as you can see: "tiles". These settings are the width and height of each tile; those are implementation details we don't have to dig into. We just have to watch it work, that's all. As you can see, it's working tile by tile, and the time also depends on the complexity of the image. I'm just fast-forwarding the process. So here is the high-quality result, as you can see. What's the catch? Let's refresh: it now has a size of 5.83 MB, and you can zoom right in. So we have created a high-quality image without spending hours sketching or editing. You can see the original was around 500 KB and this one is 5.8 MB; it is genuinely a big file. I hope you loved this lecture. See you in the next lecture. Thank you for watching.
36. Understanding ControlNet and the different preprocessors: How does it work? As you can see, there's the Stable Diffusion model and here is the ControlNet. There is a text encoder and a time encoder on the input side, then a stack of encoder blocks, a middle block, decoder blocks, and finally the output. So we input the image and the prompt, it is processed through the middle blocks (like a KSampler, as we have already discussed in our previous lectures), and out of that comes the output. All this information is encoded, then decoded, and then we have the output. Now here comes the ControlNet: when you give the input, it is processed and goes to the sampler, and alongside those middle blocks the ControlNet applies a condition transform, a condition that has been generated. It encodes all the conditions, and after that it contributes its own output, which, after a zero convolution, is merged into the decoder, and we get the image. This is how Stable Diffusion and ControlNet work together.
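To make that flow concrete, here is a heavily simplified toy sketch of the idea, not the real implementation: the ControlNet is a trainable copy of the encoder whose outputs pass through zero-initialized convolutions before being added into the frozen model's decoder path. Plain numbers stand in for feature maps so the sketch actually runs:

```python
# Toy sketch of the ControlNet idea; numbers stand in for feature maps.

def encode(x, cond):
    # Stand-in for the U-Net encoder: returns "features" per level.
    return [x + cond, 2 * x + cond, 3 * x + cond]

def zero_conv(feature, weight=0.0):
    # Zero-initialized convolution: contributes nothing until trained
    # (the weight grows away from 0 during training).
    return weight * feature

def denoise_step(x, prompt, control_image, ctrl_weight):
    enc_feats  = encode(x, prompt)                  # frozen encoder
    ctrl_feats = encode(x + control_image, prompt)  # trainable copy sees the hint
    injected   = [zero_conv(f, ctrl_weight) for f in ctrl_feats]
    merged     = [e + c for e, c in zip(enc_feats, injected)]
    return sum(merged)                              # stand-in for the decoder

# With weight 0 the ControlNet has no effect; raising it steers the result.
print(denoise_step(1.0, 0.5, control_image=2.0, ctrl_weight=0.0))
print(denoise_step(1.0, 0.5, control_image=2.0, ctrl_weight=0.7))
```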
We are not going into the deep technical aspects, because we just want the result; leave all those technical things to the engineers working behind it. So what are the advantages of this model? It uses bucket training (like NovelAI), so it can generate high-resolution images of any aspect ratio; it uses a large amount of high-quality data covering a diversity of situations; and it uses recaptioned prompts (like DALL·E) for better generation. You can read through these points and understand them one by one. They also designed a new architecture that can support ten-plus control types for image-to-image and text-to-image work. Then there is the usage information: these are all the variants that are available. Tile and tile super-resolution, which, as you can see, increase the resolution of the image; open pose; different types of depth; NormalBAE; Canny; line art; anime line art; scribble; HED; soft edge. These are all technical terms, and there are combinations like open pose plus Canny. "Control" means we are going to control our image generation; the name itself explains a lot. This is known as ControlNet. Now let's use it practically. See you in the next lecture. Thank you for watching, and thank you for your really, really nice reviews. I really love them.
37. ControlNet Aux preprocessors and the outputs of the different processors: To use the ControlNet nodes, we need some custom nodes, and we are going to install them. First, go to the Manager and open the Custom Node Manager. Search for "Art Venture" (by sipherxyz) and click Try Update; if you have not installed it, you have to install it. I just updated it, so restart. After that, search for "preprocessor": ComfyUI's ControlNet Auxiliary Preprocessors. Try Update; I've already installed it, so I just updated it, and if you haven't, install it. That's all. Now search for "Comfyroll" and install Comfyroll Studio as well. After installing all these nodes, just click Restart; it might take some time. Now, in your ComfyUI interface, go to Edit and then Clear Workflow.
Now the workflow has been cleared, so let's add a node. Search for "controlnet pre" and you can see "ControlNet Preprocessor"; here is the name, in the Art Venture category. Click it, and here is the ControlNet Preprocessor node. This node has one image input and one image output. Drag out from the image input, and you can see you have to attach a Load Image node. Once you add it, load an image (mine loads from my ComfyUI folder automatically), and on the right-hand side drag the output out. You can use either Save Image or Preview Image; I'm using Save Image here, so our image will be saved automatically. Now let's queue the prompt. It processed the image, and as you can see, we have got a really nice, amazing result... I'm just kidding, there is no result. It is the same image that was loaded here; it was passed straight through without any processing. Why? Because no preprocessor has been selected.
So we have to select a preprocessor, and you can see there are many, many options listed. As we talked about before, there is scribble, line art, anime line art, HED, MLSD, and the pose processors; as you can see, all of these have been added. These are the processors we covered briefly in the overview in our previous lecture: Canny, scribble, MLSD, open pose. We are going to use them one by one. Let's select Canny first: we had "none" selected, so choose the Canny preprocessor and just queue the prompt. Now you can see Canny is selected; queue the prompt, and there's a difference. Line art, queue the prompt: here is another output. Scribble, queue the prompt: here is scribble. MLSD, queue the prompt: here is the MLSD output. Open pose: open pose will not work here. Some preprocessors will not work depending on the image; you have to select the right ControlNet preprocessor for that specific image. You have to experiment.
Now you can see the Canny result and what the preprocessor has done with the image. It extracts the structural information of the image, how the image looks, using its exposure and shadows. As you can see, the object is highlighted in the result and the background is separated. So Canny detects the edges and the outline, while depth captures shadows, exposure, and the distance between the object and the background, that is, how far the background is from the subject. The preprocessor works through all that information and gives us a result that we are going to use for AI image generation in our upcoming lectures.
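What the Canny preprocessor computes is essentially classic Canny edge detection. Here is a minimal sketch with OpenCV; the file name and thresholds are examples, and the node exposes similar low/high threshold controls:

```python
import cv2  # pip install opencv-python

# Example input; use any photo you like.
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)

# Canny edge detection: pixels with a gradient above the high threshold
# become edges; those between low and high are kept only if connected
# to a strong edge. These thresholds are typical starting values.
edges = cv2.Canny(img, threshold1=100, threshold2=200)

cv2.imwrite("canny_map.png", edges)  # white edges on black, like the node output
```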
You also have to experiment with the resolution. Once you increase the resolution and reprocess, it might give you a better result, or in some cases it might not. Check it out with Canny: there is a limit to the resolution that works well, so you have to find which resolution works fine for you. So now we have understood that. As you can see, here is the pose; it's the pose, but it's not working on this image. Let's check another image. No. Let's check the pose here: as you can see, here's the pose. A pose is like a stick figure: it captures the pose of the subject. It detects the face, and from the face it works out the hands and the shoulders; here is the body, and here are the hands. You have to experiment with it and understand it on your own. Now select depth and queue the prompt, and it will give us a depth map of the image: bright areas and dark areas, so that the AI can understand the objects and, from their brightness, how far away they are. In this case, as you can see, with the brightness, the contrast, the background and the subject, it's working really, really well. I hope you understand the basics of these ControlNet preprocessors. In the next lecture, we are going to use these maps in our workflow. See you in the next lecture. Thank you for watching, and please give the course a five-star rating.
38. ControlNet and Flux output comparison: After the update, the problem I faced in ComfyUI is that it's not working with SDXL or SD 1.5. I don't know the reason, but the KSampler always shows an error during the process. So I started working with Flux instead, and it works fine, and the results are really good, better than SD 1.5 and SDXL. So we will talk about Flux and how to use ControlNet with it. As you can see, on the left-hand side is the original image that I used, and on the right-hand side, with the Flux model, it generates really nice results. I have created more results using this image, and I'm sharing some more images with you; you can look through them and get a sense of how these things work. I will share the workflow, and how to use it, in the upcoming lectures; you just have to sit tight and understand that workflow. Meanwhile, you can look through the generated output. All of these were created with different settings, and we will discuss all of them in the upcoming lectures.
39. ComfyUI advanced workflow for ControlNet, to manipulate your images: Hi, my friend. Welcome back, and thank you for taking this course. I really appreciate it, and if possible, please rate the course five stars; it will help me. Right now this workflow looks like a scary web, but bear with me until I explain it. First, go to your resource files and simply drag the workflow into your ComfyUI window, like this. This is what you will see: the workflow that I have built for you. It is a simple Flux ControlNet setup, so that we can control the image output according to the image we give as input. This is the Taj Mahal in India, one of the seven wonders of the world. Once you load this workflow, you will notice two things. First, you may see a bright pink node here and another one here. Why? Because the nodes I used here are not installed on your system, so you have to install them. How? First, go to the Manager in your ComfyUI window and click "Install missing custom nodes". When you click it, the missing nodes will be listed. In my case nothing appears, because I have already installed everything; it's just checking whether any node is missing. Once a node appears there, install it and restart. If the restart doesn't work, close ComfyUI and launch it again from the start: run it with the NVIDIA GPU launcher, or, if you only have a CPU, with the CPU launcher.
After installing, go to "Install custom nodes". Now take the scenario where no missing node is listed: then you have to go to the Custom Node Manager and search for the XLabs sampler. Search for "xlabs"; let's check it out in the Custom Node Manager. Wait a moment and search for "xlabs". Here are the XLabs nodes: among them is x-flux-comfyui, which you have to install, along with the Mixlab nodes pack; install both of them, and the node will appear here. And if it still doesn't appear after installing, double-click the canvas, search for "xlabs", and check for the node I used here; you can see this is the one, just click it, and here is the node. By the way, you shouldn't have to go that far. This workflow will work perfectly, don't worry; I have already tested it. After that, whenever you run the prompt, it may get stuck here and here. What do we do in that case?
In that case, as you can see, here it wants a flux1-dev GGUF file, something like flux1-dev-Q4_0.gguf. You have to search for it: for example, flux1-dev Q4_0 or Q4_1; anything you can use, and the quants range up to Q8_0. If you have higher VRAM or a higher-end graphics card, you can use Q8, which might take more time. So let's search for it. You can see Q4_0, Q4_1, Q5_0, Q5_1, Q8_0; just click one, all the variants appear here, and you can download them directly. You can also get them at the source: go to Hugging Face and search for the Flux GGUF files, and you will find the same Q4 through Q8 variants. Right now we are using a Q5 variant.
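As a rough rule of thumb from this lecture (bigger quants need more VRAM), here is a tiny sketch of that decision. The VRAM cut-offs are my illustrative assumptions, not official requirements; test on your own hardware:

```python
def pick_flux_gguf_quant(vram_gb: int) -> str:
    """Pick a flux1-dev GGUF quant level for your GPU.

    The thresholds below are illustrative guesses based on this
    lecture (a Q5 variant works on a 16 GB card); verify on your
    own hardware.
    """
    if vram_gb >= 24:
        return "Q8_0"   # best quality, heaviest
    if vram_gb >= 16:
        return "Q5_1"   # what this course uses on 16 GB
    return "Q4_0"       # smallest of the common quants

print(pick_flux_gguf_quant(16))  # -> Q5_1
```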
They have already mentioned where to download it: once you click Download, it goes into ComfyUI models/unet. Let's check: go to ComfyUI, models, search for "unet"; here's the unet folder, and as you can see, I have already downloaded it. Now, once you run this XLabs Sampler one time, it will give you an error, but what it also does is generate a folder named "xlabs" inside the ComfyUI models folder. So first you have to run it once; even if it gives an error, just run it. It will automatically create this folder in the ComfyUI models directory. Click it and you can see it has a controlnets folder. Double-click it, and there is the file: the flux-depth-controlnet-v3 safetensors. Load the Flux ControlNet; here it is, the Flux Dev depth ControlNet. As we already discussed with the auxiliary preprocessors, to process the depth we need a Flux depth ControlNet, which works specifically with Flux. Let's select it, search for depth, and run it; let's see what we get. Here the depth ControlNet is available, and with the same image, let's see what we get. Wait for some time... and we got nothing, as you can see. Nothing visible here.
Why did that happen? We have to play with the ControlNet settings. Let's make it seven. Okay, set the denoise to 0.8 and run again. And how do you download this Flux depth ControlNet? Search for "flux depth controlnet v3"; you can see Hugging Face has the first link, from XLabs. Here is the demo that they have shown, and here is the file: download it. Where do you put it? As I already mentioned: into the x-flux controlnet folder. See, we are getting a result now. Now, let's go to 0.4 strength for the image and 0.5 for the denoise, and let's check whether it works or not. Wow. Now this is the image generation that we got. But for this, let's try five and check whether it works according to the image that we want. I have increased the value here, increasing the strength of the image to control it. As you can see, we are getting somewhere: in the forest, a building covered with grass, and here is the strength that it's using. Let's increase the strength further, to 0.7, and queue the prompt. Yes, we are getting somewhere. And now 0.8, one, six, and queue the prompt. You get the idea, right? This is the basic workflow for ControlNet using Flux.
40. Advanced ComfyUI techniques: high-quality outputs with more control over the depth-map ControlNet: When you go to the XLabs ControlNet page, there is another ready-made workflow, as you can see; just download it. I downloaded it into my workflow folder as "controlnet v3". Then go to your interface, press the Plus button, go to the ControlNet workflow tab, and paste it here. And here is the already-prepared workflow you can use. If any node shows up in bright pink, you just have to go to the Manager and install the missing nodes. Then just click here and upload an image. Let's input this image: open. Here is the image, and now just queue the prompt. It's creating a depth map of the image, a MiDaS depth map. Now, during this process I got this type of error; here it is. It concerns the diffusion model, so we have to select the diffusion model here. Let's select Flux Dev. That was the error; let's queue the prompt again. flux1-dev safetensors: it's loading the model, and now it's working.
I just changed the diffusion model here, that's all; that is the Flux model, and right now the XLabs sampler is doing its part. Wait for some time. This was the image that we used, and this is the image that we got. And I loved it, I actually loved it. The way it detailed this image is really, really amazing. As you can see: wow. Now let's try "beautiful woman in a red dress, style fashion". I have used the prompt "beautiful woman in red dress, style fashion". Got it. Now let's queue the prompt again with this prompt. I will save this workflow for you (Export Workflow, for the course); you can find it in the resource files, so you don't have to worry about the workflow. We are getting a result; I don't know if the hands are okay or not. Now, can you follow the workflow? Start from the sampler: instead of a KSampler there is an XLabs Sampler from a third party, and you can see we have connected the model, we have connected the conditioning (here is a CLIP), and we have connected the latent image, like we already did in our basic workflow. Along with that, we connected a ControlNet condition: here is the ControlNet condition, and with it a ControlNet that controls the depth of the image. Here is the input image, and here is the image generated using the extracted depth map. The image goes in here, here is the ControlNet for the image, and here is the model that uses the ControlNet to create this depth map. Because we are using a diffusion model, we need a Dual CLIP Loader; here is the extra CLIP encoder, which takes a negative prompt and a positive prompt, and here is the image preview for the depth map that was created, along with a preview image and the saved image of the depth map. And here are the diffusion model, the dual CLIP encoder, and the Empty Latent Image. If you look through the whole workflow thoroughly, you will see all these nodes are used in the usual way.
The results are good, but the hands are not. Let's queue the prompt again and wait for some time; I'll simply jump to the future. But once queued, the workflow doesn't run again. Why? Because our noise seed is fixed, so with nothing changed, nothing new gets generated. So I just set the seed to randomize, queued the prompt, waited a bit, and jumped to the future again.
hands are not good. Love the result. You
can see the image. Now let's check another image. I just check it out here. Now cue prompt it again, that's beautiful woman in
plu, magical background. Just click it. And I think there's
a cue prompt, and I don't know if
it's work or not. And here's a beautiful woman
in flash magical background. And if it will work or
not, let's wait for it. Wait for the result. I think
it's not getting the result. And I think because of the
prompt that I have used. So let's prompt it again. Now I have used beautiful woman in blue fashion
magical background. And let's see if we
get the result or not. It's tried to create
or recreate it. And it has a third
hand. Spooky one. But yeah, it's try to create
that, as you can see, it's tried to create
that, but I think it's a bit difficult
for it to generate. So let's again, try
to use a portrait. Let's try to portrait
of this image. So I'm just taking
a portrait again, a well defined
portrait of our image, and now click the prompt. And I hope this time
we get something. You can play around with the
You can play around with the image depth map if you want. In the Dual CLIP Loader, you have to select "flux"; this is the important part. Once you understand the settings and how these values work, you will know how to control your output every time. I feel like I have missed something. Why? I feel like I should add "sitting at the table, holding a pen, purple". Right now the result is good, but not great. Now let's check with another prompt. I'm taking so many examples so that you can understand properly, and so you get the idea of how to play around with the values, with the prompt, and with the image; that is why I'm using so many examples to explain it to you. Right now we are working with the depth map only. In the upcoming lectures we will use different types of control: depth maps, Canny edges, and many more, and we will understand them one by one. Can you see? The hands are almost the same, and the results are more or less the same, but I really don't like this result much. Yes, the results are similar: the depth map produces a really good image, but it's not an eye-catching image. Let's regenerate; I simply changed "forest magical" to "magical forest background", and I hope we like this one. With this result, I hope you get the idea of how the prompt works and how this workflow operates. And now let's dive into another part of this ControlNet module.
41. Overview of creating our own LoRA: Before moving forward, let's recall that this is what we created in our last lecture. And after that I created... oh, this is me! I have just come back from space, and right now I'm wearing the astronaut suit. I didn't plan to share my personal project here, but it has come up in front of you, so I have to share it. In the upcoming module you will learn how to create your own LoRA and use it for personal and professional work; you can create your own LoRA from a batch of your own images. This is me, and you will learn in the upcoming module how we are going to use this LoRA. I experimented on myself. Can you see how handsome I am? This is me at war. This is me flying. This is my side business: I'm also Superman. This is another side business: I'm Iron Man right now. This is me in the Thanos world, and behind me is Nebula. This is handsome me, with a six-pack; these are my muscles. This is me, again and again, and it actually looks real. Look at the body and how it merges everything. This one is not me, and this one is not me either. And this is beautiful me; can you see how beautiful I am? I created all kinds of variations. Yes, you can use your own images, your own Flux, your own LoRA, and use them for anything you like. So in the upcoming module we will learn how to create our own LoRA and how to use it with Flux to create our images. You can play around with images of yourself the same way; we will talk about that in the upcoming lectures. This is just an overview. In the next lecture we will cover ControlNet again, and I will share a workflow with you that will be very handy; after that, we will learn how to use all these LoRAs. Okay? Thank you for watching.
42. Advanced ControlNet workflow, all in one: depth map: This is the workflow that I will share with you; you can download it from your resource section. You just have to drag it in like this and it will appear here. There are a bunch of nodes that you have to install; don't worry, we will go through that. This is the workflow, and don't be scared of it, okay? It is really simple to use. First, you have to understand how it works: it loads the image, processes it step by step, and then comes the CLIP Text Encode where we put our magical prompt. After that, the ControlNet determines how the image will be laid out; it gets the overview, or the sketch, of the image. After that, the image is created here, and finally it is upscaled for higher quality. Let's not use the upscale part right now. This is the enable/disable part: you just have to click to enable or disable it, and you can do that with any group. Okay? So this workflow is really simple to use. There will be quite a few nodes to install: once you click Manager and then Install Missing Nodes, you are going to see many nodes listed, and you just have to install them one by one (or install them from here), then restart your ComfyUI. That's all.
After that, we are going to load the GGUF Flux model. Where do you get it? Just go to the Manager, open the Model Manager, and search for "q5"; you can see you just have to install one of them. Don't use the Flux.1 Schnell version; you have to use the Flux Dev GGUF. Just match this name: as you can see, this is the flux1-dev Q4_0 GGUF. Go to the Manager, match this name, and install it from the Model Manager: flux1-dev, the Q4/Q5 quant. Once the name matches correctly, just install it; that's all you have to do. It might take time depending on the size of the file; right now it is an 8 GB file. Simple and very easy to use. Okay? The range goes from Q4 up to Q8_0, and Q8 is the best one: if you have higher VRAM, you can use Q8_0. I have 16 gigabytes of VRAM, so I'm using a Q5 variant. And if you have a lower amount of VRAM, you can use a lower GGUF quant.
After that, you will see a red mark here, a red stroke or red border. You just have to copy this name again: the Flux VAE, ae.safetensors. Go to the Manager, open the Model Manager, and search for "flux1": you can see this is the one I have installed, the Flux.1 VAE model, 335 MB, the ae.safetensors VAE. And let me tell you one thing: please don't just install anything from within ComfyUI. There has been some hacking going on in the ComfyUI ecosystem recently, so you have to protect yourself; malicious packages can compromise your PC. Just be sure you copy the same exact file names that I'm showing you; those are safe to use, okay? So just install that. And the same goes for this file too: go to the Manager, open the Model Manager, search the name (or just copy it), check that it matches, and install it. It's very easy.
this also clip L. And if you want to enable the Laura here, you can do so. You just have to select this
Control plus M. That's all. You can enable or
disable it or you can just right click Bypass, right click, Bypass.
That's all you can do. Another thing, you just
click it and Control B, you can see, easy one. And it is just an optional,
you can see here. It is just an optal
you can load it. And if you don't want
it, you can load it. Okay. And make sure flux is
selected here. Type is flux. You have to select.
Now, as you can see, there is a lot more thing
you are going to see here. And once you check
it and uncheck it, you just have to select one. You just have to
select this one. Can you see flags on Saka lab Controller Unit
Pro diffusion, Picho M you just have to download one Controlt
Load Control net model. There are so many control
we going to see here. And you can find the
flux SECA lab controle Unit Pro diffusion part and you just have
to go to manager. Copy this name SECA f
Control Net Union Pro. You just have to search for it. Again, Mode Manager. Saka lev Control at Union P. I have installed it already.
You have to install it. That's all. As you can see, it is installed already and you can get back again, close it. And if you can't find that, you just have to go
to your workflow, and you just have to
rephrase it. That's all. And if you don't
know how to repress it, refresh notes definition. That's all you have
to go. A, refresh. And now we have already
set up everything. You can check this
You can check this and that; we have now set up everything. Now the prompt: "pretty girl with a magical background". No: "beautiful girl, in the magical world of butterflies". Now let's queue the prompt. We are running it without the LoRA, and right now I have bypassed the upscale stage, so you don't have to wait long; if you want to upscale, you can enable it again. Let's wait for some time; the loader is loading. Let me tell you one thing: how are we going to achieve an effect similar to the reference image we used? By using the strength, the start percentage, and the end percentage. As you can see, right now I'm using a strength of 0.7, a start percentage of zero, and an end percentage of 0.5. You have to experiment with all three values for your own image. Once you get the hang of it, once you understand the basics of strength, start, and end percentage, you will learn how to use them. Once you play around with these values, you will figure it out yourself, and this is the kind of thing that no one can easily explain to you: you just have to experiment and understand it. Once you create about ten or fifteen images using these values, with different values each time, you will get to know how all these things work and what they will do for you. Okay?
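Conceptually, strength scales how hard the ControlNet pushes, while the start and end percentages define the slice of the denoising schedule during which it is active at all. A minimal sketch of that gating logic, illustrative rather than ComfyUI's actual code:

```python
def controlnet_weight(step: int, total_steps: int,
                      strength: float = 0.7,
                      start_percent: float = 0.0,
                      end_percent: float = 0.5) -> float:
    """Return the ControlNet influence at a given sampling step.

    Illustrative sketch: the ControlNet only acts between start_percent
    and end_percent of the schedule, scaled by strength.
    """
    progress = step / total_steps
    if start_percent <= progress <= end_percent:
        return strength
    return 0.0

# With end_percent = 0.5, the second half of sampling runs unguided,
# letting the model refine details freely once the structure is set.
for step in range(0, 21, 5):
    print(step, controlnet_weight(step, total_steps=20))
```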
Let's wait for some time; you are going to love this image. So as you can see, this is the result that I got using that reference image. Can you see? It's really amazing. After that, once you have used this type of ControlNet, you can disable it, enable the second ControlNet, and play around with the values again. Right now I'm using an end percentage of 0.5 and a strength of 0.9. Let's queue the prompt again: it extracts a line drawing from my image, and now we wait for the result. What happens next? Here are the outputs; let's compare them. This is the reference and this is the result, and you can see the pose is the same. After that I used another image as input: this is the reference, and this is the image it created from it. Reference, then recreation, again and again. And you can see the Taj Mahal here: it recreated the Taj Mahal image that we used before, over and over, from each of these references. We have discussed all of these before as well. So now, back to our image: it has created another image using this structure. You can experiment with all of this; just check and uncheck the options and try the LoRA. Actually, let's bypass the LoRA, queue the prompt again, and wait for our result. We will have it in a moment; let's jump to the future again. Here is the result. It looks somewhat similar, but not that close. Still, the point is to understand the basic structure and to create an image from the line drawing it extracted, which is what we call the Canny ControlNet.
43. Download the right files to create your own LoRA for Flux: To create your own Flux LoRA, first go to Pinokio; I will share the link in the resource panel. FluxGym, or you can just Google it: "Pinokio FluxGym". That's all. After that, you have to download Pinokio first: once you click it, a new window will pop up, and you only have to click the Download button here. It will load; scroll down to "Download for Windows". It will download; it is around a 100 MB file, and once it's downloaded, just open it and run it. Simply double-click it, and Windows will warn you that it protected your PC. Just go to "More info" and then "Run anyway". It will take some time. A window will pop up, and you only have to allow Pinokio to make changes to your system. Then windows will appear where you can select your theme, light or dark, and choose a save location. After that, visit the Discover page; you can check all the information there and start downloading whatever you need. Right now, we only need FluxGym: search for "fluxgym". This is the program we need; click it and start the download. In the new window, scroll down and click Install. It will take some time; actually, not just some time, it will take a lot of time, because it is downloading big files, like the ones we downloaded before. It is around 20 to 21 GB in total; you have to wait. You can follow the whole process here, as you can see: it's downloading, and once it finishes, a new window will pop up and you just have to save it as fluxgym (it is just asking where to put your files). Just save it and let it download. A new window will pop up again, FluxGym, and the installation starts. Now it's downloading many more files for you; wait for a while.
44. Steps to create the LoRA and how to run it the right way: So after downloading, you will see this screen. First, you can resize this screen, and if you ever want to terminate it, you can just stop it here. Let's open it in the web: open WebUI, pop out. When you click pop out, it opens in your web browser as a UI; FluxGym has its own user interface, with Gym and Publish tabs. Here is the LoRA name and here is the LoRA info. What is the name of the LoRA? Let's name it after myself: karan. So you don't have to worry about your LoRA name. Trigger word: a word or sentence; let's make it karan as well. Now, the base model: Flux Schnell or Dev. It does not work well with Schnell, so we are using Flux Dev. For the VRAM, I have 16 gigabytes, so I'm selecting 16G; if you have a 12 gigabyte card, you can use the 12G option instead. Repeat trains per image: keep it simple and leave it alone; you can experiment with these things later. Expected training steps: nothing, just leave it. Sample image prompts, separated with new lines: karan. Sample image every N steps, and resize dataset images: let's keep the resize at 512, because of the size of the images. The settings are summarized just below.
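To keep a record of what we just set, here is the configuration from this demo written out as a plain Python dictionary. This is only a summary for your notes, not FluxGym's own code, and the repeat value is an assumed default, so adjust everything to your own machine and dataset.

# FluxGym settings used in this lecture (summary only).
flux_gym_settings = {
    "lora_name": "karan",           # name given to the LoRA
    "trigger_word": "karan",        # word that activates the LoRA in prompts
    "base_model": "flux-dev",       # Schnell did not work well here
    "vram": "16G",                  # choose "12G" if that matches your GPU
    "repeat_trains_per_image": 10,  # left at the default (assumed value)
    "resize": 512,                  # dataset images resized to 512
}
print(flux_gym_settings)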
Now let's upload your images: you simply have to select them. Once your images are selected, you just have to check step three and start training. That's all, though if you want, you can take one more step first: before you start training, click Add AI captions with Florence-2. Just click it, let it install, and it will generate a caption for every image that you have used. After that, click Start training. It will take some time. Not some time, it will take around three to four hours, depending on the power of your machine and on how many images you have used. It will take a lot of time, but the time you invest will be worth it. Now, if you want to check the safetensors file, the LoRA that the training in FluxGym is creating, you just have to go to Publish. In the Publish section you will see a link, the trained LoRA link. Copy this path, go to that location, and here is the output folder. While the training runs, you will see files start to appear here. As you can see, in the FluxGym dataset, in the karan section, the training keeps generating files: 10 MB, 15 MB. You just have to wait for the training to complete. Once it's complete, you can click the sample images. In my case the samples were not working; if they work in your case, good, but right now they are not working for me. So, the training has been completed. Now let's check the LoRA that we have created. In my C drive I have the Pinokio folder; I double-click to open it, then check api, then fluxgym, and double-click the outputs folder. A folder with the name karan has been generated, named after the LoRA. Double-click it, and inside the karan folder you can see karan.safetensors. This is the LoRA that has been created. I simply copy this LoRA and paste it into the ComfyUI_windows_portable folder: in ComfyUI, go to models, select loras, and simply paste it here. As you can see, I have already pasted the LoRA in here. Now just run ComfyUI and use all the workflows that I have used before. You already have the workflows; now you can use your own LoRA.
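If you prefer to do that copy step in code, here is a minimal sketch. Both paths are assumptions based on default install locations, so point them at wherever you actually installed Pinokio and ComfyUI.

import shutil
from pathlib import Path

# Assumed locations; adjust to your own install paths and LoRA name.
src = Path(r"C:\pinokio\api\fluxgym\outputs\karan\karan.safetensors")
dst = Path(r"C:\ComfyUI_windows_portable\ComfyUI\models\loras")

dst.mkdir(parents=True, exist_ok=True)  # make sure the loras folder exists
shutil.copy2(src, dst / src.name)       # copy the trained LoRA into ComfyUI
print("Copied to", dst / src.name)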
45. How to use the LoRA you have created: So this is the workflow. You will find it in the download link that I provided before, and you can also find it in the resource section, so you don't have to worry about this workflow; and I feel this workflow is really amazing. First, check the LoRA: as you can see here in the CR LoRA Stack node, you have to select the LoRA, the karan LoRA that we generated. And about this karan LoRA: you can download it and create images of me, and you can share them with me; I will only laugh when you create those images of mine, and I can't wait to see everything you make using my face. Jokes apart, now you just select the LoRA the way you normally select any LoRA; select this LoRA again. Okay. Write the prompt here, and after prompting, you just have to click Queue Prompt, that's all. All the images will appear here as it runs; let's check what it gives me. Yeah, I have become an astronaut. After that, I pasted another prompt; I copied it from ChatGPT, and I asked ChatGPT to create something amazing for me in a forest. This is the prompt, and you can just copy it. And here I am, standing alone. After that, I tried so many times and ran this ComfyUI workflow on my face. Here, as you can see, are the images that have been generated, so many images that I created: Iron Man, Superman, so many. And this is me, my body; this is me in the Buddha phase. Actually, I tried to create something artistic, but it wasn't working well. After some hit and trial, I finally got my female version, and you can see it keeps getting better and better as you keep trying. This is all about trial and error and experimentation. But one thing I understood while creating this LoRA: the more images you use to train your own LoRA, and the more varied that imagery is, the better it will get. That's all, and I hope you understand what I'm trying to say. I'm really thankful that you people like this course, and I request you to please give it five stars; your positive review I really, really appreciate. I try to update this course on a regular basis and share the latest ComfyUI technology with you. Thank you for watching. See you in the next lecture.
46. Install Raster to Vector Nodes and Basic Information: Welcome back to this ComfyUI course series. Today we are going to generate vector images, and these are going to be SVG vector files, so you can scale them up as far as you want without losing quality. First, you have to go to the Manager: in the ComfyUI interface, go to Manager, then Custom Nodes Manager, and find the node I'm typing here. Search for SVG, find ComfyUI-ToSVG by Yanick112, and install it. Wait for some time, and now restart; it will ask you to confirm that you want to reboot, so reboot the server and confirm. Let's wait for some time; now it's reconnected, so get back to it and close the dialog. Now double-click on the canvas and type SVG. As you can see, there is a Preview SVG node: this node gives you a preview of your SVG file before saving it or using it, and it helps you check that everything looks correct. Save SVG node: this node lets you save your work as an SVG file. SVG files are great because they don't lose quality when resized, making them perfect for logos and vector-based designs. Raster to Vector node: this converts an image made of pixels into a clean, scalable vector. It's useful when you want to turn a blurry image into a sharp, high-quality design. Vector to Raster: this converts a vector image back into a raster image, which is helpful when you need to use a vector design somewhere that only supports pixel-based images, like photos or certain digital artworks. Raster to Vector, black and white: this is a special type of raster-to-vector conversion that simplifies the image into only black and white, making it useful for logos, stencils, and line art. Raster to Vector in color: unlike the black-and-white version, this method keeps the colors while converting a raster image into a vector format. It's great for creating colorful logos and high-quality vector designs.
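If you're curious what this kind of black-and-white tracing looks like outside ComfyUI, here is a minimal sketch using the open-source potrace tool, which performs a conceptually similar bitmap-to-SVG conversion. This is not the node's own code; it assumes potrace is installed and on your PATH, and the file names are placeholders.

import subprocess
from PIL import Image

# potrace wants a bitmap input, so convert the PNG to a 1-bit BMP first.
img = Image.open("sticker.png").convert("1")
img.save("sticker.bmp")

# -s selects the SVG backend; -o names the output file.
subprocess.run(["potrace", "sticker.bmp", "-s", "-o", "sticker.svg"], check=True)
print("wrote sticker.svg")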
47. Create Black and White images to SVG files simple workflow: Now we are going to use our Raster to Vector node, so let's select it. Here is the node, selected. Now double-click, choose Load Image, and add it here. These two nodes are now connected, so let's add the save node: type SVG and select Save SVG, or Preview SVG. Selected, done. Save SVG takes the image, background, and SVG string; now these are connected. Now let's load an image, drag it here, and load it, and let's Queue Prompt. Once you queue the prompt, it runs as fast as your processor allows; for me it took 0.2 seconds, and in the blink of an eye it created my SVG file. Let's check it out. It was created in the ComfyUI folder: find your output folder and look for the SVG. You can see it is an SVG file, and the way to check whether it really is an SVG is to open it in Illustrator. I have Illustrator, so let's understand how all these things work. Okay? In Photoshop, as you can see, Photoshop works on rasterization, which means pixel-based images; Photoshop works on raster images, and as you can see, they are rasterized and pixelated. Now into our AI file, that is Illustrator: as you can see, once I scale the vector up, it does not pixelate, while the raster version clearly does. This pixelated one is a simple JPEG image, and this one is a scalable vector image, a scalable vector graphic. It's pretty cool that we can so easily turn our images into vectors, and if you don't have Illustrator or CorelDRAW, you can use ComfyUI instead. We can also do this within Illustrator; let me show you how. There is an Image Trace option in Illustrator: once you run Image Trace, it converts the image into a vector, as you can see. This is a very simple image, not a complex one, so it is easy to convert into vector form. As you can see, both results are pretty good: this one is the SVG we generated within Illustrator, and this one is the SVG we created in ComfyUI.
48. Raster to Vector and different settings for better results: Now, double-click on the canvas, search for SVG, and this time select Vector to Raster. Connect it with the SVG string, and now add another node: double-click and choose Save Image. This will save your image; it converts your vector back into a raster image, and that raster image will be saved here as a PNG file. Just run it again, and here is your raster image; refresh in ComfyUI, and there is your raster image along with your SVG file. Now, about the mode setting: there are different modes, and once you switch to polygon and run it again, can you see the difference between the two images? Select polygon when you have a straight-line design, like here; this design has straight lines, and polygon will create nice, straight lines for you. Spline will create smooth curves, preserving the shape of the design. There are more settings available here, and once you work with complex or more colorful images, they will help you a lot.
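As a side note, you can do the same SVG-to-PNG rasterization in a couple of lines of Python with the cairosvg library, if you ever need it outside ComfyUI. A minimal sketch, with placeholder file names:

import cairosvg

# Rasterize an SVG back to a PNG at a chosen pixel width.
cairosvg.svg2png(url="sticker.svg", write_to="sticker.png", output_width=1024)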
49. Ask ChatGPT for prompts the right way: Let's load the workflow that we have used before. Here's the Flux Dev workflow; bring it out here. This is the simple version that we created before, and if you have followed along with my course and curriculum, you will recognize this setup. If you are too lazy to go back to your workflow and find it there, simply download it from your resource file, from the link that I have provided you. For this purpose I use ChatGPT: go to ChatGPT and ask it to give you a prompt for a black and white cute dog for Stable Diffusion, and ask for it in a code snippet for easy copy and paste. Paste it here, run the prompt, wait for some time... and this is not the image that we wanted. So let's tell ChatGPT: I want it in a cartoon style. If you have not gotten the images you desired, you just have to tell ChatGPT to adjust the design or style, then copy it, paste it again, and prompt. To be fair, this puppy is actually cute. Let's wait for some time again. No, it's not working; let's add white background and clipart style. Copy it again, paste it here, and prompt again. I hope this time we get it: solid black shape, clipart, high contrast, minimalistic, clean edges, flat style, simple and bold design. Yes, now we got a face. I changed it to a cute dog face; we had a cute dog, but I just edited the prompt to ask for a cute dog face, so let's Queue Prompt again. It sometimes gives you blurry images; with the Dev version you might see this type of blurry output. You can reduce this by raising the steps to 30. Now Queue Prompt again. These are all problems that you are going to face, and I am showing you each of them and how you can rectify them. Here is the face, but there are no eyes and mouth, so I adjusted the prompt again: face with eyes and mouth, bold design, simple cartoon style. I have added some more to the prompt, as you can see: cartoon style, vector style, simple. With the increase in steps it might take extra time to generate, but prompt it again with the style we have added, and here's the image, which is not blurry now. Wait for some time again and let's see what we get... now we have got this image. Next, let's convert it into a vector and add the nodes to this same workflow. I will share this workflow with you so you don't have to do the work again or rebuild the whole thing yourself.
50. How to upscale any image: I have shared some insights in this workflow, like how you can upscale only the image, and I believe it is important for you to understand how it works. Check the resource file; I have already shared it in the resource section. You can check it, download it, and simply drag this workflow into your canvas. This is the workflow that I created before. What you have to do in this workflow is simple: go to the SamplerCustomAdvanced node and press Ctrl+M. Now it's enabled, and once you press it again, now it's muted, or we can say it's disabled right now. Then you only have to choose a file here: choose file to upload, select any file, and click it. I'm using a Sasuke image that I created before. Once you click it, click open, and the file is selected. It is a PNG image that I created from one of my workflows; we have discussed how to create a PNG and how to create stickers, and we will discuss more about it in upcoming lectures. So, simply remove the text; I think we need a prompt here, so type 'upscale'. You can type anything, it won't affect the picture. Here you can see the settings: upscale by 2 means the size of this image will be doubled, and if you want to increase the size more, you can do so; for now just keep it at 2. Leave all the other settings as they are. If you want to change the variation of the image, or if you want more variety in the result, you can change the denoise setting here. Okay. Now after that, you simply hit Queue Prompt, that's all. Once you queue it, the other nodes here will not run: as you can see, the VAE Decode here is red, so that part of the node system won't run. The only thing running is the upscale section, and you can see Ultimate SD Upscale is now working here. We have to wait for some time, because upscaling might take time. Once the preview appears, as you can see, your image is getting scaled up. The time depends on the size of the image; if you want to scale by 4x you can do so, and it will take longer, depending on the power of your graphics card or machine. It simply scales by this factor, working 512 by 512 like a tile system: first it scales up the upper left, then the upper right, then the lower left, then the lower right; one, two, three, four. This is how it scales up. As you can see, it is now working on the upper section, the upper right, and after that it will work on the lower left and then the lower right.
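The tile idea itself is simple. Here is a minimal conceptual sketch of 2x2 tiled upscaling in Python with Pillow; this is only the tiling logic, not Ultimate SD Upscale's actual code, and a plain resize stands in for the diffusion-based upscale of each tile. The file names are placeholders.

from PIL import Image

SCALE = 2
img = Image.open("sasuke.png").convert("RGB")  # placeholder input file
w, h = img.size
out = Image.new("RGB", (w * SCALE, h * SCALE))

# Process one tile at a time: upper left, upper right, lower left, lower right.
for row in range(2):
    for col in range(2):
        box = (col * w // 2, row * h // 2, (col + 1) * w // 2, (row + 1) * h // 2)
        tile = img.crop(box).resize((w // 2 * SCALE, h // 2 * SCALE), Image.LANCZOS)
        out.paste(tile, (col * w // 2 * SCALE, row * h // 2 * SCALE))

out.save("sasuke_upscaled.png")  # twice the original dimensions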
So here is the upscaled image; as you can see, I only made the background black for the PNG file. It's saved in our output folder, that is, the output folder of ComfyUI: go to the ComfyUI folder where you installed it, check the output folder, and here is your file, double in size. That's all. You can upscale it even further, depending on how you intend to use the image. As you can see, the size is 2048, and the original size was 1024, so it doubled the size of the image. That's all, as you can see. Here is the upscale workflow; you can use it anytime, without any changes and without generating a new image. This is the file you can find in the resource section. Thank you for watching; see you in the next lecture. And I would really appreciate it if you give it a five-star rating, as it keeps me motivated to make new lectures. See you again, bye bye.
51. Create Anime images using loras from Civit AI: Thanks for all the five-star ratings this course is receiving; I believe that you love it, so we are going to move forward. We are going to use some extra LoRAs to generate images the way we want. Let's use a workflow that I have already shared with you. This is the workflow we are going to use, and once you drag it here, you will see this screen pop up. If there is any problem, or a node is not available... that should not happen; as far as I can tell there will be no problem with this workflow as of now. If you do get a problem with it, you can message me directly or start a discussion in the course section, anytime, and I will help you out. So if you are facing any kind of problem, ask me directly and check this section, section number 11, the discussion and ask section. I will post a video there so you can check it if you have a question; I will record a video so that others can understand the thing I explained to you specifically, because that helps everyone with the same problem. So if you have a problem, ask me anytime; I'm here for you. Right now I'm at civitai.com, and I have simply signed in to Civitai. We are going to work with these kinds of images. Let's start with this one: I think it's a waifu-style Sasuke LoRA on a NoobAI checkpoint, but we are not going to use that one. Let's check this one instead: CivCha. So this is CivCha, and we are going to use this LoRA. Just click it, and remember, there are trigger words here: these are the trigger words from the images used to train this LoRA. Right now, let's simply download this LoRA, along with its training data if you want it. Where do you have to save it? Go to your downloads, then into the ComfyUI windows portable folder where you have already installed ComfyUI: open ComfyUI, check for models, double-click loras, and save CivCha there. So now the LoRA is downloaded into your folder. If you would rather script the download, see the short sketch below.
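Here is a minimal sketch that fetches a Civitai file with the requests library instead of the browser. The download URL and destination path are placeholders: copy the real URL from the model page, and note that some Civitai downloads require an API token.

import requests
from pathlib import Path

url = "https://civitai.com/api/download/models/000000"  # placeholder URL
dst = Path(r"C:\ComfyUI_windows_portable\ComfyUI\models\loras\civcha.safetensors")

resp = requests.get(url, timeout=600)  # add your Civitai API token header if needed
resp.raise_for_status()
dst.write_bytes(resp.content)          # save the LoRA where ComfyUI looks for it
print("saved", dst)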
The LoRA also has trigger words, and I'm pasting them into our Google Doc; I will share the link with you so you don't have to worry about that. Copy, paste, comma, copy, paste. So here are the three trigger words that they are using. Creating a doc for trigger words is important, because obviously we will forget these words, so we have to copy them into our doc. Once you have the trigger words, I also copy the link of this LoRA and paste it there so you can use it directly, and I copy the title of this LoRA as well, so you will recognize it easily. Paste it; so here is the copy of all of it. Now, once that's done, simply go to your ComfyUI and hit Refresh Node Definitions, that's all. Once you refresh, it can take some time, about one or two minutes depending on the power of your machine; oh, it just happened in seconds for me. Once it's done, the LoRA name will appear here: see, here it is. Click it; now CivCha appears here, and no other LoRA is selected. For Flux I'm using Schnell. Why am I using Schnell? Because it uses fewer steps, so I am using four steps for this one. The upscaler is already off, and you can upscale anytime, right? We discussed that in our previous section; in the discussion and help section you can check how we can upscale an image.
Once you're ready to Queue Prompt, let's simply copy those three trigger words and paste them here. But I want help from ChatGPT: create a prompt for a scene of a waifu girl with the text 'I will find you'. Enter it, and ChatGPT gives us a prompt. Copy it and paste it into our CLIP Text Encode prompt; once you copy it, copy the trigger words too, and paste them into the clip as well. Now press Queue Prompt and wait for some time. The first time, it takes a while to load the LoRA and the model; loading the model can take some time. We want it like this one, right, the one I can see here? To understand that, let's click on one of the images and check its prompt, if there is any. No, there is no info here, so let's write our own: cute anime-style waifu girl standing, civcha, soft pink hair, wearing a pastel hoodie with cat ears, holding a glowing phone, neon hearts floating around her, dreamy letters above her head reading 'I will find you', a dreamy wallpaper background with sparkles, soft lighting and pastel tones; she looks adorably determined yet playful. Remove the duplicate civcha and Queue Prompt it again. Now, let's work specifically towards this type of generation art, and for this one let's use this prompt: show more, prompt, copy, and we have a prompt. As we already discussed, negative prompts won't work in Flux, so let's simply Queue Prompt and understand how these things work: masterpiece, best quality, hyper detailed, 8k, one girl, civcha, pink hair, blue eyes, maid uniform. Okay, maid uniform is part of the trigger, I think, with LoRA weight 0.9, quality masterpiece, extremely aesthetic, highly detailed, dynamic composition, vibrant colors, vivid colors. So they have not described a complete scene, right? 'I will find you' is looking kind of dreamy, and yes, we are getting results like this one, right? The image generation is good. Which generation settings did they use? Guidance, 35 steps, Euler; this is the metadata they used, and if we use 35 steps, our generation will take more time. The results are good, not bad. Let's dig into the model a bit: why don't we simply paste only the trigger word here and work with that? Once we use only the trigger word, let's see what we get: civcha, soft pink hair. Now let's switch to Dev and increase the steps to, I guess, 20; let's make it 20. They used the Euler sampler, so pick Euler, and a guidance of about 3.5. They also list a seed number; this one specifically used Euler with 35 steps, so paste the seed and set steps to 35. Let's use exactly that, and I hope this time we get the desired result. Prompt it again; before that, let's copy this page and check if there is anything else in the generation data they used: solo, tags... nothing else. Queue Prompt it. This will take some time, because we have 35 steps to process; let's jump to the future again. The results are pretty good, but not quite like that one. Click it, use this prompt, paste it, Queue Prompt, and wait for some time again. Let's see if we get the result... the results are not quite the same, but still, it's kind of cute and kind of not cute: these are the killer eyes, I think, and these are the cute eyes. Now let's go back to the Schnell version, keep it at four steps for better and faster performance, and keep the guidance at 3.5.
52. How to install and create images using Flux Redux: Simply copy this workflow; you can find it by clicking here, or in the resource panel. I have shared the workflow here, the Redux style mix of two images with Flux workflow. I simply downloaded it; once you download it, you have to extract it, and after extracting there is a file. Here is the file, and you simply have to drag this workflow into your ComfyUI. Once you drag the workflow here, an alert will open; simply press X. Once that's done, go to your ComfyUI Manager, choose Install Missing Custom Nodes, install, and Update All. That's all you have to do, and it might take some time. Meanwhile, you have to download the files that are shared here. This is a link; once you copy this link and paste it into your browser, it will directly download the file. To download the Redux file, you first have to log in to Hugging Face, because otherwise it might not give you access to download it. So go to Hugging Face and search for this model; access is restricted. Paste the name into the search panel, open the page, and to download the file go to Files and Versions. There will be an access permission step: they will ask you to accept before downloading. Once you accept and grant access, go back to your workflow link and the download will start directly. Here is the workflow; you can download it directly. To install it, go to your ComfyUI folder where you installed it, go to models, then the style_models folder. This is the folder where the Flux Redux model, flux1-redux-dev.safetensors, goes. I have already downloaded it, which is why I'm not saving it again; I'm cancelling this out. Next, the sigclip vision file: you have to download this into the ComfyUI models clip_vision folder; in models, clip_vision, download and save the file there. Now the basic setup is done. Let's check the second note in the workflow: the Redux model lets you prompt with images, and we use the Flux Dev and Schnell model workflows, which we have already downloaded. You can chain multiple Apply Style Model nodes if you mix multiple images together, and if you get errors, make sure the files are in the correct directories; see the Flux Dev example page linked at the top. You have to place flux1-dev.safetensors in ComfyUI models unet; let's check whether we have done that already. Go to the ComfyUI windows portable folder, models, and select unet: in the unet section, flux1-dev, yes, we already did that. If you have not, download the Flux Dev model into the unet folder; and I believe that if you have followed my lectures, you have done it already. Next, check for t5xxl_fp16.safetensors in ComfyUI models clip: go to models, check clip, t5xxl_fp16, yes, we have already done that, so we don't have to worry about anything. Then ae.safetensors goes in models vae: go to models, check for vae, here is the vae folder, and yes, ae.safetensors is already there. Everything is in place; the short check below gathers all these paths in one spot.
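Since there are quite a few files to place, here is a minimal sketch that checks them all at once. The ComfyUI root path is an assumption, and the exact file names should match whatever you actually downloaded, so treat both as placeholders.

from pathlib import Path

root = Path(r"C:\ComfyUI_windows_portable\ComfyUI\models")  # assumed install path

# Expected locations for the Flux Redux setup used in this lecture.
expected = [
    root / "style_models" / "flux1-redux-dev.safetensors",
    root / "clip_vision" / "sigclip_vision_patch14_384.safetensors",
    root / "unet" / "flux1-dev.safetensors",
    root / "clip" / "t5xxl_fp16.safetensors",
    root / "vae" / "ae.safetensors",
]

for path in expected:
    print("OK" if path.exists() else "MISSING", path)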
Now, if you run into memory issues, you can set the weight dtype in the loader at the top. You can also control how strongly each image applies through the Apply Style Model nodes. For using a GGUF model, right-click on the GGUF group node and set the group to always; install the missing custom nodes directly from the ComfyUI Manager, and use the GGUF Unet Loader and DualCLIPLoader with the Flux Dev GGUF files; these are the models you have to download to use the GGUF model loader they are referring to. But right now we are not using it, and if you are, you simply have to connect that model here and that clip here; that's all you have to do. You simply have to replace the images, that's all. Now, 'line in vector art style': we have to upload the images, and these are images that we have already generated. Let's use this image, open, and now choose another image; select it, open. These two are the images that we are going to Redux. Queue Prompt it, and I hope this will work; if not, we will fix it. So here is the problem that we have: prompt outputs failed validation; VAELoader, value not in list, our ae file name is not in the list the workflow expects, and the same for the UNETLoader, value not in list, unet name flux1-dev. So close it. Here is the Flux Dev entry; let's check it, yes. Now check the VAE loader: where is the VAE loader? Crop center, clip vision... here is the VAE loader; select ae.safetensors, the Flux AE. Let's Queue Prompt again. Yes, now it's working. So the problem was that our file names did not match the file names used in the workflow; that was the problem, right? Now I hope we will get something amazing; let's close our eyes and open them after a minute. Wow, we got the result! It does not look that bad, actually. Now let's swap images: we are using an image of Naruto, and let's use Gojo here. Queue Prompt it; now we are mixing Naruto and Gojo, and if you watch anime, you already know them. While it's generating, let's change this image to a Luffy image, save, and check what we get; after that, prompt it again. So I have used this image that I generated, and this other one I downloaded from Google. After this generation we will just Queue Prompt again and wait; why waste time, let's do it. The imagination is not bad, but yes, the quality is really bad here, okay? It's blending the characters together, and we have Luffy. I think the source image I used was low quality, which is why the quality I received here is really bad too; and right now I can see the difference in sharpness in my images. After the generation, let's see what we get... so here is the generation we have received: not that bad, actually. Let's check the ComfyUI output folder: here's the output, and here's the image. So that is how Redux works: it simply takes two images, mixes them, and gives you the output. That's how image Redux works. See you in the next lecture. Thank you for watching, and thank you for giving this course a five-star rating; I really appreciate it, and it really motivates me to create more and more lectures. See you again. Thank you for watching.
54. Flux Redux using GGUF Model: Welcome back, everyone. We are going to use another kind of model, other than the regular Flux weights, and this is GGUF. If you have followed my course thoroughly, then you already know about this GGUF model. Now, Ctrl+M... I think Ctrl+M is not working; let's right-click and go through the node options: workflow image, edit group. Yes, let's select this one, not the group. I don't know why, but normally I can do this directly with a group. Now let's select the model: here is the model loader, and once you click it, select this model, the Q5 one; there is also a Q8 model above it. Just change it to this, and for the clip, change it to the GGUF t5xxl as well. Everything else stays the same; only these have been changed. So what do we have to do? Run the prompt. 'Value not in list' in the text encoder, okay: for this one we have to change this too, so let's select the t5xxl. Let's run it. So what I did, simply, is change the model; that's all I have done. And to increase the size of the output, we can change the width and height of the image; we will do that in the next run, and with that update you can see the new result here. We will experiment with these values once we're done with this one. So here is the result: somewhat the same. I don't think there is that much change. Let's change the input to Naruto; I think we have to change it to Naruto. I simply downloaded all these images from the Internet, so you can experiment with them yourself. Now let's work the strengths. First, let's experiment with this particular value; after that, we will work with the strengths and how they multiply together. Not that good. Let's keep the strength at 0.5 and change the other image's strength, and run it again. But yes, it's taking on the shape of this image. So here is another result, as we can see here. Let's change the strength of Naruto to 1 and the strength of Luffy to 0.7, and run it again. You only have to experiment with these things; the values for these images, you have to experiment with them, and a quick way to lay out the combinations to try is sketched below.
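If you want a systematic list of strength pairs to try, rather than changing them ad hoc, here is a tiny sketch that prints every combination; the value grids are just examples.

import itertools

# Candidate strengths for the two Apply Style Model inputs (example values).
image_a = [1.0, 0.8, 0.5]  # e.g. the Naruto reference
image_b = [1.0, 0.7, 0.3]  # e.g. the Luffy reference

for a, b in itertools.product(image_a, image_b):
    print(f"try: image A strength {a}, image B strength {b}")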
And I think right now we are getting a result which is really fine; the results are acceptable this time. Yes, now we have Luffy with Naruto's eyes. It's at least acceptable, better than before, and if we decrease the strength further, we will get yet another result. So you get the idea of how we can use the Redux model with a GGUF model and create an output from our images. Now let's create something else. I downloaded this monster image from the Internet; you can download one from anywhere. I simply searched for a monster PNG and downloaded it. And after that, there is a city scene: this is a random image with people, which I also downloaded; I'm taking a random picture from the Internet. So I put these images in, and for the strengths: for this one we want 1, because we want this image to drive the result, and around 0.7 or 0.8 for the other, because we want this thing to be printed onto that one. Okay, so let's connect both images and experiment with these values; after that, we will interchange the values so you get a better idea. Now let's run it again and wait for some time. I think the GGUF model is working fine; I can see the monster here... and now we have it. Awesome, it's awesome. I love this image: how beautifully it changed this into this. I really love this image, how this thing changed this into this. Really loved it. Whoa.