Create & Sell Stunning AI Art: Master Generative Tools Fooocus & ComfyUI | Karan Rathore | Skillshare

Create & Sell Stunning AI Art: Master Generative Tools Fooocus & ComfyUI

Karan Rathore, 10+ yrs Exp: AI Artist / Video Editor

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

    • 1.

      Introduction to Generative AI Art

      1:16

    • 2.

      How to Download and Install Fooocus Free

      0:58

    • 3.

      How to Download and Install Fooocus Free - Part 2

      1:09

    • 4.

      Fooocus First Generation / Generate Your First AI Art

      1:47

    • 5.

      Upscaling Your AI Art - Part 1

      3:19

    • 6.

      Upscaling Your AI Art - Part 2

      2:39

    • 7.

      Optimizing Presets in Fooocus - Part 1

      4:44

    • 8.

      Optimizing Presets in Fooocus - Part 2

      3:17

    • 9.

      How to Prompt and Use Civitai

      2:09

    • 10.

      Beautify Your Art

      5:28

    • 11.

      Explore LoRAs and Models

      8:56

    • 12.

      Explore LoRAs and Models - Part 2

      6:58

    • 13.

      Use ChatGPT for Prompts

      6:55

    • 14.

      Food Art

      4:24

    • 15.

      Food Art - Part 2

      9:43

    • 16.

      Understanding the Art

      8:02

    • 17.

      Efficient Workflow Tips and Tricks

      1:26

    • 18.

      Selling Your AI Art

      1:37

    • 19.

      Introduction to ComfyUI

      1:09

    • 20.

      How to Install ComfyUI

      3:36

    • 21.

      How to Use ComfyUI

      2:31

    • 22.

      Create a Basic Workflow and Understand the Node System in ComfyUI

      8:04

    • 23.

      ComfyUI Interface and Group Generation

      3:31

    • 24.

      Save Nodes as a Template

      0:35

    • 25.

      Canvas Interface Changes

      1:28

    • 26.

      ComfyUI Manager

      2:26

    • 27.

      Install Custom Nodes

      0:33

    • 28.

      Workflows in ComfyUI

      2:05

    • 29.

      Recent Updates in ComfyUI

      0:30

    • 30.

      Learn Key Terms in Depth

      3:57

    • 31.

      Load Checkpoints and Trigger Words

      5:51

    • 32.

      Section information

      0:36

    • 33.

      How to Change All the KSampler Values Automatically Using a Primitive Node

      2:32

    • 34.

      Adding Effects to an Image: CFG (Classifier-Free Guidance)

      1:25

    • 35.

      Impact of Steps on the Result

      0:15

    • 36.

      Exploring Queue Prompt

      2:29

    • 37.

      KSampler Seed

      1:36

    • 38.

      How to Update ComfyUI and Workflow File Information

      2:25

    • 39.

      ComfyUI Latest Interface Update and Walkthrough

      2:58


Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

124 Students

-- Projects

About This Class

Learn how to create amazing digital art using cool AI tools like Fooocus and ComfyUI! This course will show you how to download, install, and use these tools to make stunning art. Plus, you'll learn fun ways to mix and match styles to create your own unique pieces. By the end, you’ll even know how to sell your awesome AI art online. It's a fun and easy way to turn your creativity into something special!

Meet Your Teacher


Karan Rathore

10+ yrs Exp: AI Artist / Video Editor

Teacher
Level: All Levels

Class Ratings

Expectations Met?
  • Exceeded!: 0%
  • Yes: 0%
  • Somewhat: 0%
  • Not really: 0%

Why Join Skillshare?

Take award-winning Skillshare Original Classes

Each class has short lessons and hands-on projects

Your membership supports Skillshare teachers

Learn From Anywhere

Take classes on the go with the Skillshare app. Stream or download to watch on the plane, the subway, or wherever you learn best.

Transcripts

1. Introduction to Generative AI Art: Hi, this is our first lecture on AI art generation. First of all, thank you for joining this course. As I promised, you are going to learn a great deal: we will create just about anything we can imagine across these lectures. First I will show you what we can create, and then how to create all the AI-generated art you see online on Instagram, YouTube and everywhere else. ComfyUI, Fooocus: some of these tools are paid and some are free, and I am going to use free tools that you can run locally on your own machine. By locally I mean you only need a graphics card with at least 4 GB of video memory (VRAM), 16 to 32 GB of RAM, and a decent processor. As you can see on screen, my computer has 32 GB of RAM, a 16 GB graphics card (an RTX 4060 Ti) and a 12th-generation Intel i7 processor. You should at least have a decent working desktop or laptop; otherwise generation will take much longer. In the next lecture you will learn how to install the required software so everything runs on your computer only.

2. How to Download and Install Fooocus Free: You have to double-click the downloaded file and extract it. I have already installed it on my computer, which is why I am not installing it again. The download is a compressed archive: right-click the downloaded Fooocus file and choose 7-Zip, then Open archive, Extract files, or Extract here, and pick a folder (you can extract into the same folder). One thing I recommend: keep at least 100 to 200 GB of free space on your computer. It will help you later, because you will keep downloading more models and files for these tools; the files I have downloaded already take 83 GB, and they will need more space as I use it more. After extracting, double-click the folder and you will see these files.

3. How to Download and Install Fooocus Free - Part 2: First you will get familiar with the software so you can learn the basic concepts easily; after that we will do professional work with these tools. Search Google for "Fooocus". Google will show the project page as the first link; click it, scroll down to the download section, and use the direct download link. A window will pop up asking you to save the 7z file on your computer. It is roughly 2 GB, so it will take some time. After that you have to extract the file with 7-Zip: search Google for "7-Zip", open the first link, and download the 32-bit or 64-bit version to match your Windows installation. Install it, and then extract the Fooocus archive.
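Before moving on to the first generation, here is a small optional pre-flight check for the hardware requirements mentioned in the introduction (at least 4 GB of VRAM, 16 to 32 GB of RAM, and plenty of free disk space for models). This is not part of Fooocus; it is a minimal sketch that assumes you have Python with the torch and psutil packages installed, and the thresholds are simply the numbers quoted above.

```python
# Optional pre-flight check for the requirements mentioned above.
# Assumes: Python 3 with torch and psutil installed (pip install torch psutil).
import shutil
import psutil
import torch

# Free disk space where you plan to extract Fooocus and store models.
free_gb = shutil.disk_usage(".").free / 1024**3
ram_gb = psutil.virtual_memory().total / 1024**3

if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU: {torch.cuda.get_device_name(0)} with {vram_gb:.1f} GB VRAM")
else:
    vram_gb = 0
    print("No CUDA GPU detected - generation will be very slow on CPU.")

print(f"RAM: {ram_gb:.1f} GB, free disk here: {free_gb:.1f} GB")
print("VRAM >= 4 GB:", vram_gb >= 4)
print("RAM  >= 16 GB:", ram_gb >= 16)
print("Disk >= 100 GB free:", free_gb >= 100)
```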
4. Fooocus First Generation / Generate Your First AI Art: To start Fooocus, run one of the launcher files (run, run_anime or run_realistic). The first time you run it, it takes around 30 minutes to download everything it needs; I have already installed it, so a window loads straight away in my browser. Notice there is no website address here: it is running locally on your machine. Now simply type a first prompt, for example "a man" or "a girl", and click Generate. It takes a little time; you can see the sampling steps running, and two images are generated by default (we can change that later on). First result, second result: without any extra detail we already get a decent amount of work out of this software, so now we know how it works at a basic level. We type a prompt describing what we are imagining, and it shows us what we want to see. "Frog jumping in pond", Generate. Every prompt you write will genuinely amaze you; the work it has done is really impressive, and I like this one better. In the coming lectures we will explore everything one by one: the settings, the styles, the models, the Advanced tab and the LoRAs. Stay tuned, see you in the next lecture.

5. Upscaling Your AI Art - Part 1: Welcome back. In this lecture we are going to upscale an image so that we can sell it or print it. If we have made something amazing with this generative AI tool and want to print it, how do we do that when the file is only a few hundred kilobytes and the resolution is very low? For that there is the upscale option. First, tick Input Image and drag the image in, and you can see the options: Disabled, Vary (Subtle), Vary (Strong), and the Upscale options (1.5x, 2x and Fast 2x). The first time you pick one of these and click Generate, Fooocus automatically downloads some extra model files, roughly 6 to 10 GB in total, so you need a fast internet connection; this happens once only, and after that it runs smoothly. I have already run all these options, which is why nothing is being downloaded again for me. So in the Input Image section, under Upscale or Variation, drag the image in and choose Upscale (2x). I am fast-forwarding the video so you do not have to wait: processing one image takes roughly 30 seconds to a couple of minutes depending on how powerful your machine is. While it is processing, where do we find the generated images? In the folder you extracted, open the Fooocus folder and look for "outputs". Inside, the images are arranged in folders by date; I open today's folder and you can see everything I have generated. It genuinely looks as if I shot this image in a studio, but it is not a studio; these images were generated by the software. Let's check the file sizes: the original is about 914 KB, and the upscaled one is about 3.8 MB, and when we zoom in it has done a really good job.
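A quick aside on the printing question raised above: a common rule of thumb for sharp prints is around 300 pixels per inch. The sketch below (a minimal example assuming the Pillow package is installed, with "frog.png" as a placeholder file name) reads one of your generated images and reports roughly how large it could be printed; the 300 DPI figure is a general printing convention, not something Fooocus enforces.

```python
# Rough print-size check for a generated image (assumes Pillow: pip install pillow).
# "frog.png" is a placeholder name for one of your Fooocus output files.
from PIL import Image

DPI = 300  # common rule of thumb for sharp prints

with Image.open("frog.png") as im:
    w, h = im.size

print(f"{w} x {h} px -> about {w / DPI:.1f} x {h / DPI:.1f} inches at {DPI} DPI")
# e.g. a 1024 x 1024 image is only ~3.4 x 3.4 inches, which is why we upscale
# to 2x (2048 px, ~6.8 inches) or more before printing or selling.
```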
With no upscale it is all pixelated, but after upscaling, when I zoom in you can see the detail in the image, as if someone had shot it on a DSLR.

6. Upscaling Your AI Art - Part 2: We get two outputs here. If you want to upscale further, you only need to drag the image in again: I remove the current input, drag in the upscaled image we just made, choose Upscale (2x) again and generate. That pushes the file size up to roughly 10 to 20 MB depending on the resolution. Next, Vary (Subtle): in the Advanced panel, Image Number controls how many images are generated; the maximum is 32 and the minimum is 1, so set it to 1 and generate again with Vary (Subtle). As we work through each option you will see what it does to our generated picture, and with Subtle there is only a slight change. Now Vary (Strong), with no prompt at all; I am using the low-resolution image, so it takes only a short time. You can see the changes compared with Vary (Subtle): it is the same frog, but from another angle with the same view, as if someone were standing in front of the frog and photographing it from different positions. That is the power of this amazing generative AI tool. Note that these results did not appear in the outputs folder, so save the image by clicking the download icon and opening it; here is the downloaded image. You can keep upscaling further; it is up to you and your satisfaction. When you are happy with the output, save it, print it, sell it anywhere, show your work to the whole world. In the coming lecture we explore the Advanced section. See you in the next lecture, thank you.

7. Optimizing Presets in Fooocus - Part 1: We have already come this far with a simple prompt, "frog jumping in pond". When you open the Advanced panel you see these options, and you are already familiar with Image Number; we will use only one here. For the aspect ratio we will use 1:1, which is square, or 9:7; let's take 9:7, though you can pick any size you want. For Performance we will use Speed; you can explore the Quality option, which improves image quality, but we want to generate quickly here. All of these settings are part of the prompt in a broad sense: you are telling this machine-learning tool what to use, and whenever you click Generate, that is what drives the generation.
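For context on the 1:1 and 9:7 aspect ratios just mentioned: SDXL-based tools such as Fooocus generate at roughly one megapixel, so each ratio corresponds to a fixed pixel size, for example 1024 x 1024 for 1:1 and 1152 x 896 for 9:7. Here is a small sketch of that arithmetic; the one-megapixel budget and the rounding to multiples of 64 are common SDXL conventions I am assuming, not settings exposed in Fooocus.

```python
# Map an aspect ratio to an SDXL-friendly resolution (~1 megapixel, multiples of 64).
def sdxl_resolution(ratio_w, ratio_h, budget=1024 * 1024):
    scale = (budget / (ratio_w * ratio_h)) ** 0.5
    # round both sides to the nearest multiple of 64
    w = round(ratio_w * scale / 64) * 64
    h = round(ratio_h * scale / 64) * 64
    return w, h

print(sdxl_resolution(1, 1))   # (1024, 1024) for the square preset
print(sdxl_resolution(9, 7))   # (1152, 896), the 9:7 option used above
```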
The engineers have made all of this user-friendly, but at the back end it is doing a very intensive job on our high-performance machine. After Performance comes the preset list: initial, anime, default, lcm, lightning, playground, pony, realistic, sai. Whenever you click one of these, Fooocus downloads it for you automatically. I am going to use the anime preset; after selecting it, let's generate again. You can see it loading a model (these downloads are the models), and you can follow the progress in the command prompt window; please do not close it. It loads the model, prepares the task, and runs the sampling steps. One thing to note: whenever you pick a preset it automatically switches back to two output images, so each time you select a preset, set the Image Number back to one, or to however many images you actually want. Here it generated two images for us: first, an anime-style frog jumping into a pond; second, a beautiful anime character with a frog jumping into the pond. Zoom in: it has done an amazing job. As I told you, every generation will amaze you. If you are not satisfied with the result, click Generate again; if you are satisfied, upscale it. Same steps as before: drag the image in, choose Upscale (2x), keep Image Number at one, and wait a moment. So it upscaled our image; let's go to the outputs folder. There is no new entry under today's date simply because I am recording this lecture at midnight, so by the calendar it created a fresh date folder, and here is the image we generated. Double-click to open it: roughly 5 MB for the upscaled image compared with about 1 MB before. Zoom in; it did the whole job while we only clicked the Generate button, and it looks like a poster we could print in high quality. In the next lecture we will use the default preset and see what we get and how it can amaze us further, as I promised. See you in the next lecture.

8. Optimizing Presets in Fooocus - Part 2: Welcome back. For the preset we are going to use default and click Generate. As I told you, whenever you select a preset it switches back to two output images, so set it to one each time. Also, there is still an input image loaded, so it is using that information; I am going to stop this run, clear the input image, and, with the default preset selected, generate again. Now this is the result we want: a frog jumping in a pond. Let's jump to the next preset. I will use lcm or lightning; I am going with lightning, since I have not downloaded lcm yet and do not know what it does, but you can try it yourself. Click lightning, set Image Number to one, and generate again.
With lightning selected it loads the lightning model, and the result is not bad. Next, the playground preset, Image Number one, and generate again; it loads the model, encodes the prompt and prepares the tasks. With the playground model we get a lot of extra contrast and colour. Whenever you see that kind of result, click on Negative Prompt and write what you do not want in your images, high contrast for example, then generate again, and hopefully there is no heavy contrast this time. Compare the two images: the playground preset always pushes the image towards high contrast, and the negative prompt counteracts it. Now remove the negative prompt, use Pony v6, and generate again. This time it is not really a frog; across images one and two it looks like a painting, a human-like frog. Generate again, and we get a girl. Can you see what this preset is doing with our prompt? It does genuinely artistic work: there is water here, and it looks like watercolour or acrylic paint. So I am going to use a different prompt for our pony preset.

9. How to Prompt and Use Civitai: Here is a technique you can all use to learn prompting. First, search for the website Civitai, or Google "prompt for Pony V6" (or for whichever model you are going to use). Civitai is a website that is popular among all AI generative artists. There you can find "Pony Diffusion V6 XL prompting resources and info", an article I am going to share with you. Click through to the docs; these are prompts you can use for the Pony model. I am using one of the collected lists, the Pokémon list: paste it, add "female" or "male", and generate. So that is the job Pony V6 does with what is essentially a random prompt and one image. Now I use my own prompt and see what it does. It has done a good job for us, but what about the hands? We got really scary hands here. What can we do about that?

10. Beautify Your Art: Open Inpaint and drag the image in. Using Ctrl or Shift with the scroll wheel you can zoom and adjust the brush size, then paint over just the area you want to remove or change. Under Method there are options: Inpaint or Outpaint (default), or Improve Detail (face, hand, eyes, and so on). In the inpaint prompt box, write "detailed hand". To recap what we have done: in the Input Image section we want to change the look of the hand, so we dragged the image in, painted a mask with the Inpaint and Outpaint brush, scrolled down and selected the method (Improve Detail, face and hand), and below that wrote the prompt "detailed hand", generating two options. Let's see what we get. It is not what we want; we want to change this part of the image, not just sharpen it. So instead we switch to Modify Content and write "remove and/or add a hand that matches the hands in the image". Let's see what it generates for us.
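Behind the scenes, the inpaint brush described above is simply building a mask: white where the image may be regenerated (the hand), black where it must stay untouched. Fooocus handles all of this for you, but here is a tiny illustrative Pillow sketch of the same idea, with an invented file name and made-up coordinates for the hand region.

```python
# Illustrative only: the brush in Fooocus builds a mask like this behind the scenes.
# Assumes Pillow is installed; "girl.png" and the coordinates are placeholders.
from PIL import Image, ImageDraw

img = Image.open("girl.png")
mask = Image.new("L", img.size, 0)            # black = keep these pixels
draw = ImageDraw.Draw(mask)
draw.ellipse((380, 620, 560, 780), fill=255)  # white = regenerate (the scary hand)
mask.save("hand_mask.png")
# The inpaint model then re-samples only the white region, guided by the
# inpaint prompt ("detailed hand"), and blends it back into the original.
```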
I switch to Improve Detail with face and hands selected and generate again; it gives me the same result. So let's stop this, remove the input image, add a negative prompt, and click Generate again. Okay, we are getting somewhere at least. Sometimes you have to do a bit of trial and error to get the desired result. In the next lecture I will share some of the tips I use while generating images; they will speed up your workflow and help your future generations too. See you in the next lecture.

11. Explore LoRAs and Models: Welcome back, everyone. In this lecture we are going to explore the Advanced section together with styles and models, and see what they do to our imagination. Before that, you have to understand Civitai: go to Civitai and create an account. From there, search for the fantasy checkpoint I am using (shown on screen). It is a checkpoint of about 6.46 GB; download it and save it inside your Fooocus folder under models > checkpoints. I have already saved it, so I am not downloading it again. After that, search for the boss battle spirit style LoRA (also shown on screen); click one of its images and look for the LoRA itself. It is around 200 MB, and you can only download it once you have an account, which is why you create the ID first. Save it inside Fooocus under models > loras, as I have already done. Also save the trigger words; it is really important to keep them in your Google Docs, and you will see why shortly. Now click one of the example images on Civitai and check what kind of prompt was used. Copy the prompt and paste it into our Fooocus window; copy the negative prompt and paste it into the negative prompt field; select JPEG or PNG, that is up to you; and set Image Number to two, because I want two images here. As for the preset: when you select the base model directly, you do not have to worry about the preset at all, only about the models. In the Models tab, first click Refresh All Files, because we have just added new checkpoints and LoRAs, then select the boss battle spirit LoRA we saved, and set the base model to the fantasy checkpoint, the big 6 GB file, alongside the 200 MB LoRA. Pick a style in the Advanced section and generate. One thing you have to understand is that we saved the "boss style" trigger word for this model; keep it saved and highlighted, and every time you write a prompt, include it, because the trigger word tells the model which specific style the LoRA was trained on and should use in the upcoming generation.
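To summarise where those Civitai downloads go, here is a small sketch that files a downloaded .safetensors into the Fooocus folders described above and appends the trigger word to a notes file. The models/checkpoints and models/loras folder names follow the lecture; the drive paths, file names and the notes file are placeholders you would adapt.

```python
# File a Civitai download into the Fooocus folders described above and log its trigger word.
# Paths and file names are examples only - adjust them to your own locations.
from pathlib import Path
import shutil

FOOOCUS = Path(r"C:\AI\Fooocus")                                   # wherever you extracted Fooocus
download = Path(r"C:\Users\me\Downloads\boss_battle_spirit.safetensors")
kind = "lora"                                                      # "checkpoint" for the big ~6 GB base models
trigger = "boss style"                                             # trigger word from the Civitai page

dest_dir = FOOOCUS / "models" / ("loras" if kind == "lora" else "checkpoints")
dest_dir.mkdir(parents=True, exist_ok=True)
shutil.move(str(download), dest_dir / download.name)

# Keep the trigger word next to the file name so you never lose it.
with open(FOOOCUS / "trigger_words.txt", "a", encoding="utf-8") as f:
    f.write(f"{download.name}: {trigger}\n")

print(f"Saved to {dest_dir / download.name}; remember to click Refresh All Files in Fooocus.")
```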
Whenever you click Generate, those two things together, the checkpoint and the trigger word, tell the software that we want something amazing and related to that imagery. And see, this is what we get: two results, and they are phenomenal; here is another output, and it is incredible. Let's upscale it, because I want to see this image in high resolution so I can print it for myself, and compare the low-resolution and high-resolution versions side by side. It looks like a cinematic piece of work made with just a few words and our imagination. Now I am going to change things: uncheck the LoRA and try "spirit unicorn, female character standing with a glowing sword and looking angry", generating only one image. It looks good, but it is not the result I want, so I remove the boss style trigger word, clear the style section and the extra Fooocus enhance models and LoRA, and generate again. Think about what our machine understood from that prompt: a unicorn, a female character, a glowing sword, an angry look, a female standing in front of it. It is ambiguous, so these are not the precise results we want; let's skip them. We are missing a little information here. Let's try "fantasy character standing in front of a giant glowing spirit unicorn, magical forest background"; copy, paste. Okay, this magical type of result is what we want. Now change "unicorn" to "dragon", or to "frog", or to "cat"; let's do the cat. This is very exciting, and the result we got is really incredible. Again, the point is that you have to understand basic prompting and clearly explain to your machine what you want. I hope you get the basic idea of what we have done here. In the next section we will explore more LoRAs and more models. See you in the next lecture.

12. Explore LoRAs and Models - Part 2: Hi everyone. Now that we understand the basic models and LoRAs, we are taking another step. Here are the models and LoRAs I have loaded; some models and LoRAs come pre-installed with Fooocus, and with Civitai we are going to experiment with many more. You also have to understand what type of LoRA you are looking at and which base model supports it. For this LoRA we are using SDXL 1.0, and it has a trigger word: copy the trigger word together with the LoRA name so you can remember it whenever you use this LoRA. Download it into your Fooocus models > loras folder; this is a fresh LoRA for me as well, so I am downloading it along with you. Its base model is SDXL 1.0, so go back to the Fooocus window, set the base model to SDXL, and select the LoRA; if you cannot see it yet, refresh the file list and look for the Spirit Guardian LoRA we just downloaded.
On its Civitai page, alongside the Juggernaut XL (RunDiffusion) checkpoint, check the basic prompt they used; I like this example image, so I am going to use its prompt and negative prompt: select and copy the prompt, then copy the negative prompt into the negative prompt field. First click Generate and see how this LoRA behaves; I have selected the base model, SDXL, with the LoRA on top. The first result is not what we want; let's wait and see what we actually get. We also have to use a LoRA weight. With the Juggernaut XL RunDiffusion combination, the weight we are using here is 0.8. You have to experiment with this weight, nudging it slightly up and down, because it tells Fooocus how much information to take from the LoRA relative to the base model. The LoRA's page gives recommendations along these lines: about 0.4 for SD 1.5, roughly 0.6 to 0.7 for SD 1.5 anime models, 0.8 for the XL refiner, or any value when switching between two XL models. Let's not overthink it and use 0.9; we are effectively mixing two models here. The result looks scary to me, and I am scared of spiders anyway, so this time, instead of a spider, what do you think we should use? Let's use a snake. Okay, the snake is giving us a pretty good result; it looks good. Now let's switch to the fantasy canvas checkpoint with the Guardian LoRA and generate again; we are experimenting with the base model and the LoRA together. No: her hands look creepy, the snake is good but her eyes are off too. Let's try it with the anime pencil model. Once you understand all these settings, you can see how everything fits together: with the pencil checkpoint and the Guardian Spirit LoRA, this looks really, really good, and if you are an animal lover you are going to love this imagery. Now let's select another base model, Pony, and generate again. Here we get an error. Whenever you see an error like this, it usually means the LoRA and the checkpoint do not work with each other; this LoRA and Pony Diffusion are not compatible, and that is the only reason we got this screen. You do not have to worry about whether the software will keep working; it will work every time you generate with a compatible combination. You just have to experiment with your LoRAs. In the next lecture we are going to use ChatGPT for our prompting. I really thank you for making it this far into the course; it would help me a lot if you gave five stars for the knowledge I am sharing with you. Thank you very much.
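A short aside on what that LoRA weight is doing mathematically: a LoRA stores a small low-rank update to the checkpoint's weight matrices, and the weight simply scales how much of that update is added. Here is a minimal NumPy sketch of the idea, using toy matrix sizes rather than the real SDXL layers.

```python
# Toy illustration of LoRA weight scaling: W' = W + weight * (B @ A).
# Assumes NumPy; the matrix sizes are tiny stand-ins for the real model layers.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))      # a frozen weight matrix from the base checkpoint
A = rng.normal(size=(2, 8))      # low-rank LoRA factors learned during fine-tuning
B = rng.normal(size=(8, 2))

def apply_lora(W, A, B, weight):
    return W + weight * (B @ A)

for weight in (0.0, 0.4, 0.8, 1.0):
    delta = np.abs(apply_lora(W, A, B, weight) - W).mean()
    print(f"weight={weight:.1f} -> average change to the layer: {delta:.3f}")
# Higher weight means the LoRA's style dominates more; 0.0 is the untouched base model.
```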
13. Use ChatGPT for Prompts: In this lecture we explore prompting with ChatGPT. Here is the basic request I gave ChatGPT: give me a detailed, advanced prompt with a fantasy character standing in front of a giant fantasy boss-style character. ChatGPT gave us a structured description covering the scene, the characters, their interaction, the atmosphere and the tone. I copied all of that and pasted it directly into our image-generation prompt, but before it I typed "boss style", the trigger word we saved earlier; we must use it here. After using it I generated the image and got two results, in addition to the two results from before. Now I want to share another feature: enhancement. Check Enhance, check Advanced, two images, PNG or JPEG, that is up to you. For style and model I am using the fantasy checkpoint together with a spirit boss style LoRA. Search Civitai directly for the XL spirit boss style LoRA shown on screen; I love this LoRA and I am downloading it now. It has its own trigger word, which I am saving together with the LoRA name, since this one is new for me as well. Save it, refresh, and select it; if you cannot find this exact LoRA, you can use the Boss Battle Spirit Pony one instead. Now generate the character. Because we enabled enhancement, Fooocus will automatically upscale to 2x after the last enhancement step: once it finishes the second image, it enhances the first and second images one after the other, so you can see the final result for both. This one genuinely gave me goosebumps; I really love seeing these kinds of images. It is sampling both images, and I cannot wait to see the higher resolution; now I can see the detailed structure. While we wait for the second image, read through the ChatGPT output to understand how the prompting works: it gave us a detailed structure, the heroic character, the boss character, their interaction, the atmosphere and tone, everything. So we have four generations here: a powerful spirit watching our hero. It feels like a god to me, as if I were inside a game and it were commanding me to kill someone or save someone; I can only imagine, and I would love to see a game with this character and this boss, fighting with this same sword. Someday someone will create a game out of it; I am very positive about that. To recap the enhancement part: with "upscale to 2x after last enhancement" selected, Fooocus first generates an image, then enhances it, then generates the second image and upscales that one too, so we do not have to spend time upscaling manually; it is done for us automatically. In the next lecture we explore still-photography-style generation, especially food, which is hugely popular in our culture and on Instagram, because food and still-life images get a crazy number of downloads on websites like Adobe Stock, Freepik and many more. We will explore those websites and their earning potential in the upcoming lectures. See you in the next lecture, and thank you for taking this course; I really appreciate your feedback, your five stars, and your comments.
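The ChatGPT output described above is useful precisely because it is structured (scene, characters, interaction, atmosphere, tone), with the trigger word placed in front. Here is a small sketch of assembling such a prompt yourself; the field contents below are invented examples, not the exact text from the lecture.

```python
# Assemble a Fooocus prompt from the structured pieces ChatGPT returns.
# The example values below are placeholders, not the lecture's exact wording.
def build_prompt(trigger_word, scene, characters, interaction, atmosphere):
    parts = [trigger_word, scene, characters, interaction, atmosphere]
    return ", ".join(p.strip() for p in parts if p.strip())

prompt = build_prompt(
    trigger_word="boss style",
    scene="a ruined temple at dusk, volumetric light",
    characters="a lone fantasy hero facing a giant glowing spirit boss",
    interaction="the hero raises a sword as the boss looms over them",
    atmosphere="epic, cinematic, highly detailed",
)
print(prompt)  # paste the result into the Fooocus prompt box
```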
14. Food Art: Welcome back, students. You have made incredible progress; let's keep the momentum up. In this lecture we will explore our photography skills without any DSLR or camera, using only the computer, and generate incredible food images. Here is a food photo LoRA: search Civitai for "food photography". It has the trigger word "food photo" and works with the SD 1.5 base model. I have downloaded it; it goes into the same models > loras folder, so refresh the window afterwards, and save the trigger word for this LoRA so you can copy and paste it. (You can also download the document I created during the course from the downloadable resources section.) We select the food photography LoRA and, because we are using SD 1.5, set its weight to 0.4. Let's generate a "simple burger" with the SD 1.5 base model. It does not work well at first because I forgot to include the "food photo" trigger word in the prompt, and a cinematic style is still applied; we got a burger, but let's remove that style, add "food photo", and generate again. I hope we get a better result this time. Yes, this looks more natural; the food images are making my mouth water, everything looks absolutely delicious. I simply type "volcano cheese", and here is a cheese pizza with a volcano behind it. It actually looks like an advertisement picture, a spicy-pizza kind of thing. And the second one? It does not look as natural; this first one is better, the spiciest pizza in the world.

15. Food Art - Part 2: Let's not use any of the LoRAs here: go back to basic settings, initial model, none selected, in the Advanced tab, and generate again. Now we have a result that is better than before. Let's switch to the realistic preset and try again, with only one image this time; but first, look at the second image we got: it does not look like a chicken, it looks as if we tied the chicken's legs while cooking it. This time I used the realistic preset with the Fooocus photography negative style; it loads its own LoRA, and with film photography, realism and this scene, it is looking better than before, somewhat realistic. Let's change the whole scene with a simple prompt, "burger with chicken standing beside it with sad face", and generate again. There is no chicken in my image; it looks realistic and cinematic, but that is not what I want this time. Let's strip things back: model playground, none, performance Speed. That is not working with our images, so I select the playground preset itself with one image, style Fooocus V2, model playground, and generate again. "Burger in fire": it looks somewhat good, but still not what we want. With Fooocus semi-realistic there is no chicken either; then I select anime, and this is what we want, so we should experiment with all the styles we have. "Chicken standing on a table, sad face", generate again; still no chicken. I think we should say "a live chicken standing on a table", because it treats the chicken as already cooked and hidden behind the burger, which is why we cannot see it. This time we do get the chicken, although its size is not up to the mark. We are getting there slowly; now let's try a Minecraft-style version.
This time there is no burger, but we got a chicken with an angry face, and the low-poly result is actually really nice. Next, the pencil model: set the LoRA to none, choose pencil, adjust the steps in Advanced, and generate again. This time we are using an anime model and it does not work with the preset we had selected, so change the preset to the anime one, with no extra style selected. Semi-realistic models, anime pencil, LoRA none, generate again: this is the kind of result we want, but we want it in a realistic manner. What can we do? Let's explore Civitai again. Here is what I found: a food LoRA whose trigger word is "food step". Save the trigger word, copy the LoRA name, and download it; it is about 400 MB. Check which prompts were used on its page: a very cute, appealing style, with its own negative prompt, built for SDXL. It needs an SDXL checkpoint of about 6 GB, which we have already downloaded (or Fooocus can download it automatically for you), and for that checkpoint the example results are pretty amazing, things like "award-winning, heroic sword on a mountain", so you can use that kind of prompt and see what it does with SDXL. With no checkpoint selected, let's pick SDXL with the offset example LoRA and generate; the base model it selected is Juggernaut RunDiffusion. Add a negative prompt and generate again. We are not getting the results we want; this particular prompt was written for ComfyUI, which is a node-based tool that can help you generate really amazing things, but right now our target is taking food photography to the next level. Let's find another model. "Cat made of noodles": copy and paste the prompt directly, remove the negative prompt, generate again with the SDXL VAE fix and the easy-negative terms mentioned on the page. No, that is not the result we want either; I think the model we are using is not the right one. Change the weight to 0.8 and generate again. This time we changed the weight of our LoRA, and this is the result we want. What we have learned in this lecture is that we have to use a weight with the LoRA and experiment with it: at weight 1 we might not get the result we desire, but values like 0.8, 0.5 or 0.7 can help the LoRA work dramatically better. So back to our main topic: food photo LoRA at weight 0.8, SDXL base model, and "chicken cooking in a forest with burger rain", and I hope this time we get the result. The chicken is cooking, and it looks somewhat realistic. One thing we forgot again is our trigger word, "food photo": with "food photo" in the prompt, a weight of 0.8 and the SDXL base model, the result actually looks realistic, though it is still not exactly what I want. We have to experiment with different styles; choose another style, such as Fooocus photography negative and SAI Cinematic, and generate again. With every step we have taken so far we are getting better and better with every generation and understanding how this software works; you only have to experiment with it. The more you generate, the better you will understand this kind of software and prompting. This one looks like a DSLR-style photograph. In the upcoming lectures we will explore some of the websites where we can host our images, sell them online, and generate passive income. I hope you are learning something new with every lecture you take from me; your feedback is really appreciated, and please give a five-star rating to this course and to your teacher as well, that is me. See you in the next lecture.
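One recurring lesson from these food experiments, and from the Pony error earlier, is that a LoRA only behaves well on checkpoints from the same family, and that its weight usually needs tuning. Here is that mental checklist written out as a tiny sketch; the family labels and the 0.3 to 1.0 range are my own rough shorthand, not values Fooocus exposes.

```python
# Mental checklist as code: does the LoRA match the checkpoint family, and is the weight sane?
# The families and example combinations below are illustrative shorthand only.
def check_combo(checkpoint_family, lora_family, weight):
    if checkpoint_family != lora_family:
        print(f"Incompatible: a {lora_family} LoRA will error or misbehave "
              f"on a {checkpoint_family} checkpoint (like the Pony error earlier).")
        return
    if not 0.3 <= weight <= 1.0:
        print(f"Weight {weight} is outside the usual 0.3-1.0 range; expect weak or broken results.")
        return
    print(f"OK: {lora_family} LoRA at weight {weight} on a {checkpoint_family} checkpoint.")

check_combo("SDXL", "SDXL", 0.8)      # the working food-photo setup above
check_combo("Pony", "SDXL", 0.8)      # the kind of mismatch that produced the error
check_combo("SD 1.5", "SD 1.5", 0.4)  # the 0.4 weight suggested for SD 1.5
```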
16. Understanding the Art: Hello everyone, welcome back to this new lecture, and thank you to everyone who has taken this course and made it this far. In this lecture we are going to explore the Advanced section, in which we have already used several presets, and now we will experiment with those presets together with the styles and models. But we have to understand the style section first. With a basic prompt, "dog standing on a table in a circus", my styles are the defaults: Fooocus V2, Fooocus Enhance, Fooocus Sharp. Generate and see what it can do; we already get a pretty good result. Now let's experiment with some styles: uncheck all of them and select Fooocus Semi Realistic, then generate again. The results are okay but not up to the mark. Unselect it and select SAI Anime and generate again. As the name itself says, every style has its own output, and this is actually a good result; if we want an animal character, we can use this one. If we want Game Minecraft, we generate with that style and get our dog in the circus, on a table, in Minecraft style; it really is amazing that we got a Minecraft-style dog. Uncheck it and select a painting style. You can experiment with every style and your prompt, and you will get different results with each one; this one does not look like a painting to me, it actually looks like a dog that has been tortured, with blood in its fur. Uncheck it and check Futurism; the preview image (a cat) next to each style gives you a basic idea of what it will do. Undead Art is good too; I like it, it always gives a zombie look. Futurism, punk style, cyber city: all of these you have to try with your prompt until you get what you want. For the cinematic look we select SAI Cinematic, uncheck the futuristic cyber-city styles, and generate again. I really like SAI Cinematic; every time we generate with it we get pretty good results, as you can see here: blurred background, the dog in focus, cinematic and dynamic. I hope you get the basic idea of what these styles do to your image, or your imagination, with the help of the prompt, settings and styles. Now let's explore the playground preset: Image Number one, uncheck Fooocus V2, keep SAI Cinematic, anime model (it has already selected a model), open Advanced, and generate again. As we saw when we first explored the playground preset, it enhances the image with high contrast and strong colour, so with the anime style selected and the playground preset selected, our imagery is already enhanced.
As you can see, it did a job we never expected, and if you want to turn it into a poster you can upscale 2x, print it, or sell it online. But we have five legs on our dog, so let's remove one. Open Inpaint and drag the image in. With Ctrl and the scroll wheel we increase and decrease the size of the brush, and holding Shift while scrolling also adjusts the size. In the negative prompt I have already typed things like "bad structure, bad quality, watermark", everything we do not want in our image. Generate again and wait a few seconds while it processes; as you can see, the extra leg in our image has been removed. I hope you understand how to remove an extra leg, an extra finger, or any unwanted thing in your image using inpaint and outpaint. As I told you in the previous section, save your work so you can understand it later or reuse it in your workflow; it will save you a ton of time. I am saving this option in my Google Docs, just as I told you before, because it is new for me too. Next I select two styles, anime and street fighter, stop the run, remove the input image, and generate again. I do not like the result, so I stop it, remove SAI Anime, and generate again in a street fighter game style only. Our dog gets muscles, and its outfit looks like a street fighter's. You can explore the whole section; "adorable 3D character" really is adorable. Experiment with them one by one, and also share your work with me in the project or comments section, or on Instagram, where you can search for me and share it; I cannot wait to see your world. I really appreciate that you have taken this course and want to learn and explore. Never stop your curiosity to know new things. See you in the next lecture.

17. Efficient Workflow Tips and Tricks: Hello everyone. In this lecture I am going to share some of my experience on how to work efficiently with this prompting. You should always take screenshots and save them in your Google Docs or anywhere you want; you only need to take a screenshot and paste it there. Why are we doing this? Because if there is an image whose prompt we forgot to save, and we want the same or a similar result later, we will not know what to prompt again. This is where it saves your life: you only need to have saved the prompt, or the preset and the negative prompt you used with the image you love. Maybe you have even deleted that image by chance; how are you going to recreate it? This documentation is how you get the desired result again. In the next lecture we explore styles; as you can see there are a lot of them, and it will take some time to understand the whole concept. See you in the next lecture.
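Screenshots in Google Docs work perfectly well; if you prefer a file-based version of the same habit, here is a small sketch that appends each generation's settings to a JSON-lines log so any image can be reproduced later. The log path and the fields recorded are my own convention, not something Fooocus writes for you in this form.

```python
# Append each generation's settings to a simple JSON-lines log so any image can be reproduced.
# The log path and the fields recorded here are my own convention, not a Fooocus feature.
import json
import time
from pathlib import Path

LOG = Path("generation_log.jsonl")

def log_generation(prompt, negative, preset, styles, model, loras, seed=None):
    """Record one generation so it can be reproduced later."""
    entry = {
        "time": time.strftime("%Y-%m-%d %H:%M:%S"),
        "prompt": prompt,
        "negative_prompt": negative,
        "preset": preset,
        "styles": styles,
        "base_model": model,
        "loras": loras,   # mapping of LoRA file name -> weight
        "seed": seed,
    }
    with LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_generation(
    prompt="boss style, fantasy character facing a giant glowing spirit boss",
    negative="bad structure, bad quality, watermark",
    preset="default",
    styles=["Fooocus V2", "SAI Cinematic"],
    model="juggernautXL_runDiffusion.safetensors",   # placeholder file name
    loras={"boss_battle_spirit.safetensors": 0.8},   # placeholder file name
)
```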
18. Selling Your AI Art: In this lecture we are going to explore some websites where we can upload our AI-generated images, places where we can sell them online. First of all, there are print-on-demand services: upload your art to websites such as Redbubble, Society6 and Printful. These sites use our art on their printed products, sell it to their customers, and we receive a royalty for the art we generated; after that, all the printing work depends on them. Then there are marketplaces like Adobe Stock and Freepik, where we can sell our images directly. After that, social media comes into play: upload about three to four images daily, build a following, and use the platform to showcase your work and connect with potential buyers; in practice, buyers will find you through social media itself. You can also use Twitter or create your own website, but I prefer that you go with one or two channels, because you cannot do everything, though you can do anything. Find the marketplace best suited to your art, for example Adobe Stock and Freepik plus your Instagram account; pick those three places, upload your work, and wait for approval. The approval process is tough and takes some time, but in the end it is worth it. For the title, meta tags and description you can use ChatGPT or Google Gemini; they will surely help you. There is also a bulk upload option on these websites, where you can upload directly using a CSV file. In upcoming updates to this course I will share how to upload and earn a passive income out of it. I am waiting for your feedback; see you in the upcoming lecture.
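On the bulk upload option just mentioned: most stock sites accept a CSV that pairs each file name with a title and keywords, but the exact column names differ per marketplace, so the header below is a placeholder; check the template Adobe Stock or Freepik provides before uploading. A minimal sketch:

```python
# Build a bulk-upload CSV for a folder of AI images.
# Column names vary by marketplace; these are placeholders - use the site's own template.
import csv
from pathlib import Path

rows = [
    # filename, title, comma-separated keywords
    ("frog_pond_01.png", "Frog jumping into a pond, AI generated illustration",
     "frog,pond,jump,ai art,illustration"),
    ("volcano_pizza.png", "Spicy cheese pizza with a volcano, food advertisement style",
     "pizza,cheese,volcano,food,advertising,ai art"),
]

with open("bulk_upload.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["filename", "title", "keywords"])  # placeholder header
    writer.writerows(rows)

print(f"Wrote {len(rows)} rows to {Path('bulk_upload.csv').resolve()}")
```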
19. Introduction to ComfyUI: Welcome, everyone, to this exciting part of the course on ComfyUI. I am Karan, and I am here to guide you through everything you need to know about this amazing tool. Whether you are a beginner or an advanced user, this course will help you unlock new possibilities with ComfyUI. So what is ComfyUI, and what is AI art? AI stands for artificial intelligence, which is like teaching a computer to think and create just like humans. With AI we can make art in a whole new way, whether it is pictures, designs, or even things that look like they were painted by a real artist. Think of it like an art studio where the computer does the hard work for you: you give it instructions, and it follows them to create something unique. ComfyUI has an interface that lets you choose different options to create your images; you can adjust things like styles, colour, and even the type of art you want. The great thing about ComfyUI is that it is beginner-friendly but also powerful enough for experts. Once you get the hang of it, you will be able to experiment and create all sorts of amazing art with just a few clicks and, obviously, a prompt. Let's jump in and explore how ComfyUI can turn your ideas into art.

20. How to Install ComfyUI: First I am going to teach you how to install ComfyUI. Go to Google and search for "ComfyUI install"; that is all you have to type. Look for the ComfyUI repository (described as the most powerful and modular GUI) and click it. It takes you to the GitHub page, where you can scroll down to the direct download link and click it to start the download automatically. I have already downloaded it, so I do not have to download it again, but I will show you the steps, what it looks like, and how to install it correctly. The first thing you must understand, and must have, is an NVIDIA graphics card. If you do not have one, ComfyUI will run on your CPU, which takes far more time, and you will lose all interest in using it. The download is a 7z archive, so you have to extract it before running anything; install 7-Zip for your machine first, as before. On screen you can see what the downloaded file looks like; it is about a 1.5 GB file. Right-click it and choose 7-Zip, then Open archive, Extract files, or Extract here; I will choose "Extract to ComfyUI_windows_portable", so it creates a folder and extracts all the files into it. Double-click that folder, and you only have to run the run_nvidia_gpu file. The very first time you double-click it, a command window opens and it downloads all the information it needs first, which takes around one to two hours; a fast internet connection will save you a lot of time, and this happens on the first run only. After that it opens a window for you: that is ComfyUI. It can look like a cluster or a spider web of nodes, but you do not have to be scared of it; don't worry, I am here to explain every part. At first there may be no workflow on the canvas; in that case select Load Default and it loads a default node graph for you: Load Checkpoint, Empty Latent Image, CLIP Text Encode, KSampler, VAE Decode and Save Image. Right now we are only using this default graph. I will explain everything from scratch, and after that you will understand all the nodes: what a checkpoint is, what CLIP is, what the KSampler is, what the VAE is, and what a latent image is, all of it in this course, and obviously I will teach you how to prompt. For now, simply type a basic prompt such as "flowers, garden, and a beautiful couple standing" and click Queue Prompt; it takes hardly any time to run. ComfyUI has generated a basic image for us. Let's change the size of our image: right now it is 512 x 512, which is very low quality, so I am going to use 1024 x 1024 and queue the prompt again. As you can see, this is the image it generated using just the basic tools we have.

21. How to Use ComfyUI: So we are getting started with ComfyUI. In the previous lecture we learned how to install it; in this video we are going to learn how to use it, and we will talk about the important nodes ComfyUI has and what they do. Since we downloaded ComfyUI earlier, we have all the files it installed. What happens the next time you open it? There are two launchers: run_cpu and run_nvidia_gpu. As I have already mentioned in this course, you really should have an NVIDIA graphics card, because it makes your work much easier, so just double-click run_nvidia_gpu. The command prompt opens, and this time it does not take long: it checks some basic information about the machine we are using and automatically opens your default browser with a local URL (by default something like http://127.0.0.1:8188). It is working locally on your machine. You have already learned to install ComfyUI and arrived at this window; now it is time to turn your ideas into reality. So how did I get this window?
Since you have already installed ComfyUI, you only have to double click that same run file again, and ComfyUI will open a new window in your default browser; it could be Google Chrome or whatever your default browser is. It is working 100% locally; no Internet is being used right now, because the address you see in your search bar is just a local address, not a website. There are two kinds of window you might see on your first attempt to open ComfyUI: either a blank screen, or the Load Default screen. If you see a blank canvas, simply click the Load Default menu. With the default workflow loaded, ComfyUI will generate a beautiful scenery of a nature glass bottle landscape with a purple galaxy inside the bottle. Just click Queue Prompt and ComfyUI will do the work automatically. This is the first image you have generated, your very first image with ComfyUI, just like the creatives who use ComfyUI have created wonderful images with it. Whenever you click again, it will generate another image, but this time it will be different because of the seed. We have a seed here with control options such as randomize and fixed. If you make it fixed, it will generate the same image for you again and again; keep it randomized if you want different results every time.

22. Create Basic workflow and understanding the node system within ComfyUI: Let's get back to a blank canvas; right now, just click Clear. Okay, now we again have a blank canvas and it's not obvious what to do. You only have to double click: a search window pops up, and you can search for KSampler there (a ComfyUI core node). Click it and a node is created. You can think of it as the brain, or the heart, of a ComfyUI node system. Next you have to load a model: double click again and add Load Checkpoint; it acts as the model. Connect its model output to the model input of the KSampler. Now double click again and add a CLIP Text Encode node; this is where we add our text. Double click again to add a second one, or simply press Ctrl+C and Ctrl+V to copy and paste it, and place it anywhere on the canvas you like. Connect one to the positive input and the other to the negative input of the KSampler. If you want to resize a node, you only have to drag its corner like this, and if you want to delete it, right click and choose Remove. If you delete a node by mistake, just press Ctrl+Z and it will come back. There is another way to add a node: right click, then Add Node, then sampling, then KSampler; you can search through this vast variety of nodes. Or, and this is my favourite way because it is genuinely easier, you can drag out a connection from a node and release it on the blank canvas; it will show you which nodes can connect to that socket. So drag out from the KSampler and add an Empty Latent Image here; it will be connected automatically. To continue the analogy: we already have the brain, which we call the checkpoint or model, and the latent image is like the eyes our machine sees through, guided by our text. Later on we will also use a reference image here, so the KSampler understands what type of image we want and can combine it with the checkpoint to give us the desired result. Now, on the other side of the KSampler, we have the LATENT output; drag it out and you can see VAE Decode.
The VAE Decode has an output of its own. First, let's minimize the KSampler: you just have to click this dot and the node collapses its window automatically; click the dot again to open it back up. This time the output is an image, so let's add a Preview Image node and see the preview there. With that, it is a complete workflow; you only have to add the text. Let's see if we can get an image. For the positive prompt I have added "portal", and for the negative prompt "watermark". Let's hit Queue Prompt and see whether we get any errors. This part is important to understand: if there is a problem in our workflow, ComfyUI will automatically detect it and tell you that another node or connection is needed to complete the workflow. If there is any problem, it tells you right away what it is; you only have to read the message and fix it. Here the error says something like prompt outputs failed validation, the CLIP Text Encode node has a required input missing. So check the colours of the sockets: we have a VAE-coloured output that matches the VAE input on the VAE Decode node, and the nodes with missing connections are greyed out or highlighted. Connect the VAE output to the VAE Decode, and once that problem is fixed another one pops up: required input missing, clip. So we check where the CLIP connection goes, and it goes from the checkpoint to this text encoder and to this one. Let's queue the prompt again. Congratulations, everyone; you now have a working graph. Resize the image to 1024 x 1024 and click Queue Prompt. Click the result and drag it out; now you can see the clarity in this image, and it has a resolution of 1024 by 1024 pixels, which is square, so you can post it on social media. That is a complete workflow you have generated right now. Hold Ctrl, click and drag with the left mouse button on the canvas, and you can select all the nodes inside the box, and you could delete them; right now I'm not going to delete them. By scrolling up and down, or by holding the scroll button, you can zoom and drag the canvas as you like, and with Ctrl+click you can add nodes to your selection one by one. The interface is really, really simple, and if you know how to use basic Photoshop, you will understand how to use the nodes. And if you have never used Photoshop before, that's no problem either, because your friend here will tell you everything about ComfyUI; you don't have to go anywhere else, you only have to watch the course. One more request: please post your first project image with me. I know you are very creative, and I want you to generate your very first image using ComfyUI and post it in the project section, or share it with me directly on Instagram; it will motivate me to make more videos for you. Now let's generate a lion: "baby cute lion", Queue Prompt, and let's see what we have. Here is the cute lion we generated. Now press Ctrl, select these three nodes, press Ctrl+C to copy and Ctrl+Shift+V to paste; wherever your cursor is, the nodes will be pasted there and their connections will be kept automatically. If you only press Ctrl+V without Shift, the three nodes we selected will be pasted without their connections. Why we copied them will become clearer as we keep using ComfyUI. A scripted version of this same default graph is sketched just below.
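(A side sketch, not part of the lecture: the same default graph built on the canvas here can also be queued from code through ComfyUI's local HTTP API. The address 127.0.0.1:8188 is ComfyUI's default local port, and the checkpoint file name below is an assumption; substitute whatever actually sits in your models/checkpoints folder.)

```python
import json
import urllib.request

# Minimal API-format graph mirroring the default workflow built in this lecture.
# Node ids are arbitrary strings; ["4", 0] means "output 0 of node 4".
graph = {
    "4": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "your_checkpoint.safetensors"}},   # assumed placeholder name
    "6": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "portal", "clip": ["4", 1]}},           # positive prompt
    "7": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "watermark", "clip": ["4", 1]}},        # negative prompt
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 20, "cfg": 8.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "8": {"class_type": "VAEDecode",
          "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
    "9": {"class_type": "SaveImage",
          "inputs": {"images": ["8", 0], "filename_prefix": "ComfyUI"}},
}

# Queue the graph, which behaves like clicking Queue Prompt in the browser window.
req = urllib.request.Request("http://127.0.0.1:8188/prompt",
                             data=json.dumps({"prompt": graph}).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The response contains a prompt id for the queued job; in the browser you would simply watch the queue instead.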
23. Comfy Ui interface and group generation: If you want to move these nodes together, you can hold the Shift key and move them as a group. You can also turn them into a proper group: select the nodes by holding Ctrl, then right click anywhere on the canvas. (If your browser's own context menu keeps popping up when you right click, which can be irritating in Chrome, you can try your system's default browser such as Microsoft Edge instead.) From the right-click menu choose Add Group and give the group a name, for example a sampler group. Now you have created a group here, as you can see. Whenever you move the group, all the nodes inside it move along with it, so you no longer have to move them one by one. Now let's come to the next part. Just as we added an Empty Latent Image for the first KSampler, this second KSampler also needs a latent input, so double click, search for the Upscale Latent node, add it, and connect it here. We are going to upscale the latent all the way to 1024 x 1024. One last thing: reduce the denoise to 0.2, because when we upscale the image the sampler would otherwise create a new variation of it, and we don't want that to happen; with the denoise reduced, the upscale only applies a minor tweak to the first generation instead of changing the image very much. Now let's queue the prompt again. After the first generation it starts working on our upscaled latent image. Let's compare both of the images we have generated with the checkpoint we loaded. A scripted sketch of this latent-upscale chain, building on the earlier one, appears a little further below.

24. Save nodes as template: If you want to save these nodes as a template, you just need to select them by holding Ctrl, then right click on the canvas and choose Save Selected as Template; give it a name such as image upscale. When you want to use that template later, simply right click, go to Node Templates, and pick image upscale; there it is, that is our template.

25. Canvas interface change: Now, as you can see, we don't have much space around here, so how do we move the canvas? You simply need to hold the space bar and move your cursor; now you can move the canvas anywhere. Release the space bar and your cursor works as usual again. Press and hold space again and you can see how easy this interface is to learn, so you don't have to hunt for empty space when your canvas is full of nodes. The connection lines can look really messy, so click the settings gear here: you can change dark mode to light mode, you can change the edit attention setting, and you can change the link render mode to linear, spline, or straight. Now all the lines have changed to straight, and it looks really clean and easy to use; the milk-white light theme, as you can see, also looks very nice.
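(Continuing the earlier sketch, here is how lecture 23's latent-upscale chain could be added to the same assumed graph: an Upscale Latent node feeding a second KSampler with denoise set to 0.2, so the upscale only tweaks the image instead of regenerating it.)

```python
# Extends the "graph" dict from the earlier sketch: upscale the latent produced by
# the first KSampler (node "3") and refine it with a second, low-denoise KSampler.
graph["10"] = {"class_type": "LatentUpscale",
               "inputs": {"samples": ["3", 0], "upscale_method": "nearest-exact",
                          "width": 1024, "height": 1024, "crop": "disabled"}}
graph["11"] = {"class_type": "KSampler",
               "inputs": {"model": ["4", 0], "positive": ["6", 0], "negative": ["7", 0],
                          "latent_image": ["10", 0], "seed": 42, "steps": 20, "cfg": 8.0,
                          "sampler_name": "euler", "scheduler": "normal",
                          "denoise": 0.2}}   # low denoise: only a minor tweak, not a new image
graph["12"] = {"class_type": "VAEDecode",
               "inputs": {"samples": ["11", 0], "vae": ["4", 2]}}
graph["13"] = {"class_type": "SaveImage",
               "inputs": {"images": ["12", 0], "filename_prefix": "ComfyUI_upscaled"}}
```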
26. Comfy UI Manager: Let's talk about the ComfyUI Manager. In earlier versions, when you installed ComfyUI there was no Manager installed automatically; you had to install it manually. Now, in the latest versions, the Manager is installed automatically for you and will stay there in upcoming updates. Select Manager and you can see which nodes you want to enable or disable, and which nodes you want to install from within ComfyUI; it helps you manage all the nodes. Whenever you install or open a workflow, any node that is not available or not installed on your system shows up in a bright pink style. You need to install those using the Manager, and the way to do it is the Install Missing Custom Nodes button. Click it and it will list all the nodes that appeared in that bright pink colour on the ComfyUI canvas; everything that is not installed appears here, and you can select them and install them one by one, or use the install-all option and install everything listed in the window. If a workflow you have loaded still has a missing node that does not appear in this list, you will have to find that node on GitHub or on the Hugging Face website. Like I told you, whenever you try to queue the prompt and there is an error message, copy that message and paste it into Google, and it will usually point you to the missing node. Once you have the experience of installing your first node, you will automatically understand how to install the other nodes that are not available in ComfyUI; we can also think of them as extensions that you install into ComfyUI, and I will show them in the upcoming lectures. After installing the missing nodes you need to restart ComfyUI, and that is very easy: in the Manager section there is a Restart button, that's all, and a new window will pop up. Now let's get into the next part.

27. Install Custome Node: Let's get to the next part: select Manager and then Install Custom Nodes. You will see there are thousands of nodes available for ComfyUI; right now there are 1,404 custom nodes listed. Let's try to install one of them: install it, a restart is required, so simply restart ComfyUI. If you want to install more than one node, just select them, install them, and restart once.

28. Workflow in Comfy UI: You can change custom colours here: right click a node, choose Colors, and give it the colour you like; you can see the colour highlighted on the node. You can give it your own colour, for example green. You can also change the shape here: round, box, default, card, plus default and custom colours; select any colour you want. Remember, the pink colour is what shows up whenever there is a missing node when you try to install or use a workflow created by someone else. Colours help me identify where I have to type; I give a green colour to our text prompt node, and it helps me differentiate and speed up my workflow, to find my nodes and tell them apart. These are really simple nodes right now, but when we have a cluster of nodes we need to identify them by groups and by colours, and that will help you in your future projects. If you want to use a ready-made template from another creator, you can find them on OpenArt or on Civitai. You just need to load the workflow that has been saved by other people: select the file and open it, and you can use it. The workflow file itself is just a small piece of JSON, as sketched below.
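(As a small illustration, not shown in the lecture: a workflow you save from ComfyUI or download from OpenArt or Civitai is a plain JSON file, so you can peek at it with a few lines of Python; the file name below is hypothetical.)

```python
import json

# "shared_workflow.json" is a hypothetical name for any workflow file saved from
# ComfyUI or downloaded from a sharing site such as OpenArt or Civitai.
with open("shared_workflow.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# UI-format workflow files keep the graph under a "nodes" list; listing the node
# types is a quick way to spot custom nodes you may still need to install.
for node in workflow.get("nodes", []):
    print(node.get("id"), node.get("type"))
```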
This is a workflow that was created by someone else, and you can use it, and workflows like it, for yourself.

29. Recent Update in comfy UI: There is a recent update to ComfyUI; it receives updates regularly. There is now a navigation menu you can see here: zoom in, zoom out, fit view, select mode, and toggle link visibility to hide the connections of your node system. It is a minor update, so we don't have to worry about anything; I just wanted to show you the changes.

30. Learn in depth about some terms: So, the information we copied and the checkpoint we downloaded have been placed in the folder we specified, the ComfyUI models/checkpoints folder. Now let's go to our ComfyUI window, refresh it, and check: there is another checkpoint in the Load Checkpoint node. Select it, and copy the information from its page; these trigger words help ComfyUI, or rather the model, understand that this checkpoint was trained on this particular content and that you should call it up using those words in your text prompt. Let's try "Iron Man poster fighting Batman"; I hope this will generate something amazing. Let's also change the dimensions, set steps to 40, and queue the prompt again. Okay, so it is actually creating an Iron Man movie-poster image, but not Iron Man fighting Batman; it's more like Iron Man mixed with Batman. Let's instead take the prompt directly from the example image on the site and use its negative prompt in our negative field, the positive in the positive field, and set the steps to 99. Generate now: Queue Prompt. It's getting somewhere; not 100% the same, but yes, getting somewhere. I hope you understand how the Load Checkpoint works: I simply downloaded a checkpoint, any checkpoint, put it in the folder I specified, added a positive prompt, a negative prompt, and an Empty Latent Image, and, as we discussed in the previous lectures, there is a KSampler, a VAE Decode, and a Save Image. This is the basic graph ComfyUI already has as its default. In the upcoming lectures we are going to expand our horizons and see how all these things work one by one; right now we have simply loaded a checkpoint. It is important for you to visit the civitai.com website and check how these checkpoints work, what type of prompts people are using with them, and which trigger words they use. Create a document so that you will not get lost the next time you need a trigger word or prompt, and save it for future reference; trust me, it will help you. Like this one: let's open it again; they have used the Flux Dev model with a luminous shadowscape neon retro-wave style from its creator. We will use these types of images, and we will generate this type of advanced image in our upcoming lectures.

31. Load Check points and Trigger words: So let's go to civitai.com. It is important to know that this website is actually safe for you to use. Whenever you open it, you will see many images along with checkpoints, LoRAs, and presets; there are so many presets, LoRAs, and checkpoints available on the Civitai website. Here is a cute character I can see, and there is a prompt shown with the picture, something like a small asteroid rushing towards a giant character, with 3D-rendering style tags. This is a prompt that someone wrote and uploaded to the website along with their result. Okay, now we have a workflow.
What type of nodes they have, which guidance (CFG) value they used, how many steps they used, and the sampler and its settings as well. To get this exact same result, you need all of this information in ComfyUI. First, let's understand what a workflow is. This is our workflow: all the pieces I have told you about, CLIP Text Encode, Empty Latent Image, the KSampler (the brain), VAE Decode, and Save Image. This is our checkpoint; there are DreamShaper, SD 1.5, Flux, and many other checkpoints here, and I'm going to load a new checkpoint. You have seen the marks when we queue a prompt: a green highlight moves from node to node, from this one to this one to this one, and the image is generated here. Whenever you see a red or pink mark instead, there is a good chance the checkpoint is not available; there is an error that ComfyUI has detected, and you have to remove it. Let's change to the Flux Dev safetensors checkpoint: as you can see, a red dot and pink outline appear, and ComfyUI has flagged that we have an error in our workflow, which we have to fix. Let's change back to the previous checkpoint in Load Checkpoint. Now I'm going to download another checkpoint, a Wildcard fantasy SDXL checkpoint; with it there is an extreme-detail LoRA and the checkpoint itself. I've opened the example image and tried the checkpoint in our workflow; as you can see, this checkpoint produces fantasy art. If you want to download another checkpoint of your own preference, you can find it under Models and then Checkpoints, for example a fantasy wizard style; this one has the base model Flux.1 Dev, but I'm filtering for SDXL and sorting by highest rated. You should check for highest rated or most downloaded, because those are safer to download and more stable. Sort by highest rated or most downloaded, select an SDXL or SD 1.5 checkpoint, and filter by category if you like: vehicle, clothing, objects, it's up to you. Let's download this LoRA and checkpoint. You have to download them into the ComfyUI folder: go to the folder where you extracted ComfyUI, open ComfyUI, then models; LoRAs go into the loras folder and checkpoints into the checkpoints folder, and you download them there. Checkpoints come in different sizes, like 5 GB or 6 GB; this one is about 2 GB. One thing to remember: it has trigger words, which should be added to your workflow in the text area. Copy this information, and let's try it in ComfyUI; we will use this information, and you will get the link in the video description. Let's copy all of it.

32. Section information: Hey, everyone. Welcome to the course. I'm so glad you are here to learn to create art with ComfyUI. In this first section, all you need to do is just watch. This part is to help you get comfortable with the ComfyUI tools and see how things work. Just relax and watch it like a movie; no need to do any hands-on work yet. This first section takes about 30 minutes to watch, so sit back and get familiar with everything. When we get to the second section, that's when you will start trying things out yourself and making your own creative AI art. So let's start by watching this first part together.
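(Before the hands-on lectures, a hedged sketch of the folder layout the last two lectures rely on. The root path below is an assumption about where you extracted the portable build; models/checkpoints and models/loras are the standard folders the checkpoint and LoRA loader nodes read from.)

```python
from pathlib import Path

# Assumed root of the extracted portable build; adjust to wherever you unzipped it.
root = Path(r"C:\ComfyUI_windows_portable\ComfyUI")

# Standard model folders: .safetensors checkpoints and LoRA files go here, then
# press Refresh (or restart) in the ComfyUI window so the loader nodes can see them.
for sub in ("models/checkpoints", "models/loras"):
    folder = root / sub
    files = sorted(p.name for p in folder.glob("*.safetensors")) if folder.exists() else []
    print(f"{sub}: {files if files else 'nothing found'}")
```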
33. How to change all the values in Ksampler automatically using Primitive Node: If you right click on the KSampler, it has many options, and you have to look for "Convert Widget to Input". It has many sub-options, like convert seed to input, control after generate, convert steps to input. Let's try converting the CFG to an input: you can see the CFG widget is no longer here; it has changed into an input socket. Drag out from that socket to add a node, and among the many options search for Primitive. This Primitive node lets you feed a value into the CFG input, and, just like the seed value, you can set its control after generate, for example to increment. Queue the prompt and it will automatically increase the CFG value with each run. So, just like the seed, we can control the CFG value however we want from here. This helps in larger workflows where you want to experiment with a prompt at different settings: if you haven't got the desired result, you would otherwise have to change every option manually each time you generate, and if you don't want to do that, you can use this type of node and it will change the setting for you automatically. Let's change our KSampler back to how it was: select "Convert Input to Widget" (we have only one converted input here), click it, and the CFG comes back to its original place; the leftover Primitive node can be deleted. Let's convert another one just to give you the idea: Convert Widget to Input, and you can select, for example, denoise; you can see the denoise socket appear, add a Primitive node again and connect it, and now denoise is driven by a Primitive node. Then convert the KSampler back to its original state and delete the extra node.

34. Addign Effect to image CFG classifier free guidance: Now let's look at the CFG value. If you change the CFG value to 1, you might not like the results; if you change it to 10, you might like the result a little more. But the CFG value is not directly related to contrast. As you can see, there is a contrast change between the two images, but what you are really looking for is the CFG value at which the model comfortably creates the image for you. If you increase the CFG to 15, let's see what we get: there is a change in contrast again, but CFG does not directly change the contrast of the image; it changes how the image is guided, and the contrast shift comes along as a side effect. The full form of CFG is classifier-free guidance. If the CFG value is low, the model has freedom to create images according to the checkpoint or LoRA, its own freedom to create somewhat randomly; if the CFG value is high, the model sticks to the prompt we have given and tries to follow it strictly. The guidance formula behind this is sketched after this lecture block.

35. Impact of Steps in Result: Keep in mind that the more steps you use, the longer the image will take, roughly in proportion to the step count. Let's set it to five steps and queue the prompt; you can see the timing here, it only takes about a second.
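(Background for lecture 34, as an illustration rather than something shown on screen: classifier-free guidance blends the model's unconditional noise prediction with its prompt-conditioned prediction, and the CFG value is the weight on the difference between them.)

```python
# Classifier-free guidance in one line: at each sampling step the model predicts the
# noise twice, once without the prompt (uncond) and once with it (cond), then blends:
#   guided = uncond + cfg * (cond - uncond)
# cfg = 1 reproduces the conditioned prediction exactly; higher values push the result
# harder towards the prompt, which is why very high CFG can look over-driven.

def cfg_blend(uncond, cond, cfg):
    """Blend per-element noise predictions with a classifier-free guidance scale."""
    return [u + cfg * (c - u) for u, c in zip(uncond, cond)]

# Toy numbers just to show the effect of the scale:
uncond = [0.10, 0.20, 0.30]
cond = [0.40, 0.10, 0.50]
for scale in (1.0, 8.0, 15.0):
    print(scale, cfg_blend(uncond, cond, scale))
```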
36. Exploring Queue Prompt: You can queue multiple jobs, just like you can send multiple commands to a printer. If I click it one, two, three times, I have told it to generate three times, and it has generated three images for me. If you want to cancel any job, open View Queue: all the processes that are running or pending are listed there, and you can cancel one with the cancel button, then close that panel again. You can also see the Extra Options here: tick Auto Queue and it will keep generating images for you automatically, and it will not stop until you stop it. We also have the increment and decrement controls for the seed: set to decrement, each generation decreases the number by one. If you keep it randomized, it will generate a random number for you, and the probability of getting the same image twice is very, very low because of how large the range of numbers is. Now let's set the seed value to fixed and generate again: after fixing the seed, queueing the prompt generates the same image every time we click. Now let's change the prompt: I have changed purple to red, and now it has generated a red-coloured galaxy. It still tries to create much the same image every time we click Queue Prompt because we have the same seed number; if we change the prompt, it tries to stay close to the previous image, but every word has a different weight in the model's language, so the image changes according to that. It will also generate a different image when we change the steps to 19 or 20, or change the CFG value. Let's generate again: you can see the difference from changing the steps. Let's make it 30 steps: the image is still similar, but you can see a colour difference; there is always a subtle variation in the output.

37. Ksampler SEED: In the last episode we learned many things and created images with LoRAs and presets, and in the previous section we learned about Load Checkpoint, the CLIP text encoder, and the Empty Latent Image; the KSampler takes the empty latent image and turns it into an image that becomes visible to us, and the KSampler is the main brain here. The KSampler cooks everything together and gives us a visible image after the VAE Decode. In the KSampler we have the seed. The seed is the value the KSampler uses as the starting point for generating the image from the loaded checkpoint or model. If we keep the seed value the same, it will always produce the same result every time we queue; if we change the value of the seed, it will create a different image every time we change it. You can control the seed with the "control after generate" options: randomize, fixed, increment, or decrement; those are the options you can see in the control for the seed value. The minimum value of the seed is zero and the maximum value is very large, and with such a large range you can generate an effectively infinite number of images using the same model and the same prompt, the same ingredients. You can also set the seed control to increment: whenever you queue a prompt it generates an image and the seed increases by one.
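(As a plain-Python illustration of lecture 37's control-after-generate options, not code from the course; the 64-bit upper bound on the seed is an assumption about the widget's range.)

```python
import random

def next_seed(seed: int, mode: str) -> int:
    """Mimic the KSampler's control-after-generate options for the seed value."""
    if mode == "fixed":
        return seed                      # same seed -> same image for the same prompt and settings
    if mode == "increment":
        return seed + 1
    if mode == "decrement":
        return max(seed - 1, 0)          # the seed never goes below zero
    # "randomize": pick a fresh seed from a huge range (assumed 64-bit here),
    # so repeating the exact same image by chance is extremely unlikely
    return random.randint(0, 2**64 - 1)

seed = 42
for mode in ("fixed", "increment", "randomize"):
    seed = next_seed(seed, mode)
    print(mode, seed)
```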
38. How to update comfyUI and Workflow flies information: Hello, everyone, and thank you for taking this course. Let me tell you one thing about ComfyUI: this node system is, I think, a kind of universal idea, because if you look at Blender and also at DaVinci Resolve (DaVinci Resolve is video editing software and Blender is 3D software), both of those programs, and more software to come, apply this same node-system concept; these node systems are also known as workflows. So we are going to create a workflow that uses Flux. Let me tell you what Flux is: Flux is a model in which images are generated according to your prompt, so you get the visual result of your text as an image; in simple, layman's language, that is Flux, a model, a checkpoint. It is not a very difficult thing to do; once you understand this, you can create your own workflow. Let me show you one thing first: start by clicking the Manager, then click Update All. Once you update all, your whole system gets updated, because ComfyUI is open source and people all over the globe contribute to this machine-learning software, or rather its programming; that's why it gets updated almost daily. So I recommend that once in a while you update your ComfyUI to get the latest and fastest results. Once it is updated, just click Restart; it restarts and reconnects, and another window opens. Sorry, it's not a node, it is a window or tab; another tab will open, so let's close it and wait, because the update will take some time. As you can see, everything is getting updated, so let's jump ahead while it finishes. Meanwhile, let me introduce the workflows I have collected from the Internet, or rather not from "the Internet" but from the intelligent, genius people who made them; I respect all of them, and I have downloaded many workflows and applied them in my work. As you can see, these are workflows, many of which I love; this one is a Flux workflow for ComfyUI, a JSON file, so click open. You can see the node system that the Flux workflow uses, and you can create your own node system, or save your own, in the same way.

39. ComfyUI latest interface update and walk through: Let me tell you one thing: every time you open ComfyUI after an update you may see a new interface, and it can be really confusing, because last time everything was in the lower right-hand corner and now everything has been moved. But it is not that difficult, as you can see. There is a Workflow menu now, with New and Open. Wow, it has actually become an easier way to use ComfyUI. As you can see there is the queue, the queue history, and there is a library; you can browse a node library right inside ComfyUI now, which is actually exciting. You can see the nodes there. Okay, got it; these are nodes. I thought we could change models directly here, but what we can find here are the nodes, for example if we search for "diffusion model"; yes, we can, and the search works perfectly fine. Can we search here as well? Clicking also works. Okay, yes, that's what I'm talking about; these are the older nodes. Can we check for DreamShaper? Okay, not bad, we have a few DreamShaper checkpoints; let's see, Checkpoint, and an SDXL Juggernaut in Load Checkpoint. Do we have Flux? Can we use a Flux checkpoint directly? I don't know if it will work, because I think this is not the right loader; we can check the diffusion-model loader. Is it working? Hmm, loading it as a diffusion model is not working directly here. We can check the text encoder as well; it works in a way, but I'm not sure it is working fully. Let's check the unsaved workflow entry; okay, this is the unsaved workflow. And upscale: there is a nodes map, and the upscale group is highlighted here. Okay, and a sampler advanced custom node.
As you can see, you can bypass a node directly from here, which is very handy; it is actually very handy to use this new ComfyUI interface, and I really love it. You can also check the batch count, like we have discussed earlier. You basically have to experiment with all these things yourself so that you understand the new ComfyUI interface. Let's check the theme as well; okay, white or dark, you can change it directly. So, let me tell you one thing: let's work with a new workflow; under the menu you have New, Open, Save, Save As, and Export.