Stable Diffusion Masterclass: AI Assisted Art Generation for Drawing, Illustration, & Graphic Design | Chester Sky | Skillshare

Chester Sky, Entrepreneur and Producer

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

  • 1. Course Trailer (1:31)
  • 2. Installing Stable Diffusion User Interface (4:44)
  • 3. Installing Stable Diffusion Base Model (1:53)
  • 4. Text To Image (13:02)
  • 5. Image Variations (3:55)
  • 6. Upscaling (7:56)
  • 7. Installing New Models (4:09)
  • 8. Inpainting (4:29)
  • 9. Outpainting (8:49)
  • 10. Img2Img Case Study (10:42)
  • 11. Infinite Zoom Intro (0:23)
  • 12. Infinite Zoom (7:58)
  • 13. Create Prompts for Stable Diffusion with ChatGPT (3:26)
  • 14. Installing Controlnet (3:35)
  • 15. Introduction To Controlnet (6:46)
  • 16. Intro to making video with artificial intelligence (4:34)
  • 17. SD Configuration for Video Creation (2:58)
  • 18. Creating Video With Stable Diffusion (10:01)
  • 19. Deflickering AI Video (10:01)
  • 20. Stable Diffusion Inside Photoshop (9:43)
  • 21. Vector Image Intro (0:50)
  • 22. Creating Vector SVG Images (5:13)


754 Students · 5 Projects

About This Class

From a best-selling course instructor and author comes a brand-new course: the STABLE DIFFUSION MASTERCLASS. This course will teach you how to harness the power of artificial intelligence to create stunning, one-of-a-kind works of art.

Do you want to create production-grade artwork? Need beautiful graphics for your projects but don’t have the time, artistic background, or money to create them from scratch? Why not use artificial intelligence to assist with art creation? It's faster, cheaper, easier, and can arguably produce better results than you could on your own. Whether you're a seasoned artist looking to expand your skill set or a curious beginner eager to explore the cutting edge of creativity, this course has something for everyone.

The course covers everything from the basics of AI-assisted art creation to advanced techniques for pushing the boundaries of what's possible. You'll work with a variety of AI tools and platforms, gaining hands-on experience with the latest software.

THIS COURSE WILL SHOW YOU HOW TO:  

  • Use the most popular AI art creation tool, Stable Diffusion. IT'S FREE TO USE!
  • Create artistic masterpieces using text prompts, with no need for prior drawing or design skills
  • Generate AI art in any style (Disney, Pixar, anime, photography, a famous artist, anything you can think of)
  • Customize your images to get them exactly the way you want them
  • Learn inpainting: swap objects in your images with any object you can think of
  • Learn outpainting: extend existing images to make them larger
  • Upscale the resolution of any image or video
  • Create infinite zoom animations
  • Create videos with artificial intelligence
  • Apply post-production effects to improve AI videos
  • Use ChatGPT with Stable Diffusion
  • Use Stable Diffusion inside of Photoshop
  • And much, much more…

 

TODAY IS THE GREATEST TIME TO BE CREATING DIGITAL ART

What do you need to get started?  

  • You don’t need to know how to draw, design, or have any art background.
  • You don’t need to know anything about coding or artificial intelligence.
  • You just need a desire to create, experiment, and find joy in making art. It’s now possible to learn all the tricks and tools to become a professional digital artist from the comfort of your own home. Everything you need can be done from home on your computer and this course will show you how.

 

WHY AM I DOING THIS?

AI-assisted art creation is becoming mainstream and taking the art world by storm. It takes a lot of time, effort, and money to create high-quality art by hand. AI-assisted art generation can create graphics for you quickly and cheaply, with spectacular, immediate results.

Learning AI art generation can be a dauntingly technical task. There are lots of problems you can run into while trying to learn, and it can take a lot of time to piece together all the information from resources online.

That's why I created this course - to walk with you through the entire process with everything all in one place.

If you're ready to take your art to the next level, don't wait - enroll in our AI art course today and start creating artistic masterpieces.

What are the requirements?  

Do you have a computer? That’s all you need. If you don’t have Stable Diffusion yet, I’ll show you how to get started with it. This course covers everything you need, from finding resources and sources of inspiration to bringing your artistic ideas to life and enhancing your artwork to production-grade quality.

Meet Your Teacher


Chester Sky

Entrepreneur and Producer


Producer and Composer

Official Website: http://chestersky.com

Facebook page: https://www.facebook.com/RealChesterSky/

Twitter page: https://twitter.com/RealChesterSky

Instagram: https://www.instagram.com/iamchestersky/


Level: All Levels



Transcripts

1. Course Trailer: Welcome to the Stable Diffusion Masterclass. This course will teach you everything you need to create art using artificial intelligence. In the videos ahead, you'll learn the key features of the free, open-source AI art tool Stable Diffusion. We'll start from the basics, assuming you know nothing: we'll get the software onto your computer, and then you'll create your first artwork using text prompts. This will let you create artwork in any art style without needing any prior drawing or design skills. You'll learn how to swap out any object in an image with any other object you can think of, and how to extend existing photos to add content and make them larger. You'll learn how to increase the resolution of any image, how to create videos with AI, techniques like infinite zoom animations, and even how to use Stable Diffusion in combination with tools like ChatGPT and Photoshop. By the end of this course, you'll be able to make art pieces that are production ready, and you'll be able to make them in a span of only a few seconds to minutes. You're going to learn all of this and so much more in the Stable Diffusion Masterclass.

2. Installing Stable Diffusion User Interface: In this video, we'll deal with the housekeeping: installing Stable Diffusion, the Discord, and so on. If you've never used Stable Diffusion before and this is your first time hearing about it, you should go to stablediffusionweb.com to try out a simplified version of Stable Diffusion. There you'll see a very simple version of Stable Diffusion where you can enter some text, e.g. "a small cabin on a snowy mountain, in the style of ArtStation," and click Generate Image. Just like that, it creates some images for us. You can see this is pretty beautiful, and it's exactly what we typed in.
We wanted a snowy cabin in the style of artwork you'd find on the ArtStation website, and that's what we got. If you click on an image you can see how it looks, and you can right-click and save it. This is the bare bones, the tip of the iceberg, of what Stable Diffusion can do; we're going to go into many more features and tools for creating art with AI in this course. But if you've never played with it before, go to that website right now, check it out, type different things into the prompt, hit Generate, and see what kind of images you can create. Then we can get into the meat and potatoes and install Stable Diffusion on your computer. If at any point during this course you run into bugs or problems, you don't know how to install something properly, or you want to see what other students, classmates, and other Stable Diffusion users are doing, go to the Discord, available at discord.gg/stablediffusion. If you have any questions regarding Stable Diffusion, post them there rather than anywhere else, instead of sending emails or messages; users, including the developers of Stable Diffusion, will be able to answer them. With that out of the way, let's talk about getting Stable Diffusion itself. It's available at github.com/AUTOMATIC1111/stable-diffusion-webui. This gives us the web UI that makes it much easier to see what we're doing with Stable Diffusion, interact with it, and create all these beautiful images and artwork. Once you're on that page, scroll down to the "Installation and Running" section and follow the installation steps. Depending on whether you're using Linux, Apple, or Windows, you'll have a slightly different installation.
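The clone-then-launch flow described here can be sketched as a small Python helper. The repo URL is the one named in the lesson; the Documents-folder destination is just an example, and the clone itself is left as a commented usage line because it downloads the whole repository:

```python
from pathlib import Path

# Repository named in the lesson (AUTOMATIC1111's web UI).
REPO_URL = "https://github.com/AUTOMATIC1111/stable-diffusion-webui.git"

def clone_command(dest):
    """Build the `git clone` command the lesson runs from cmd.exe."""
    return ["git", "clone", REPO_URL, str(dest)]

# To actually run it (downloads the repo, so not executed here):
#   import subprocess
#   subprocess.run(clone_command(Path.home() / "Documents" / "stable diffusion"),
#                  check=True)
# Then run webui-user.bat (Windows) from the cloned folder to finish the install,
# as described in the lesson.
```

This only composes the command; Git and Python still need to be installed first, exactly as the video says.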
I'm personally using Windows, so I installed Python and Git. In case you've never used Git before, here's what happens after you've installed Git and Python: you pick a place on your computer where you want to install. In my case, I have a Documents folder, and I make a new folder where I want to put my Stable Diffusion files. I go into that folder, click into the address bar, and type cmd; that opens a command prompt in the folder I just created. I can then paste the git clone command I copied from the GitHub page, and that clones all of the GitHub code into the folder. Then find the executable file, webui-user.bat on Windows, and run it; it will install Stable Diffusion. Do that, and then we can get started.

3. Installing Stable Diffusion Base Model: So I have Stable Diffusion loaded up here, and you may or may not already see a model preloaded in the Stable Diffusion checkpoint drop-down. If you already have one, you can skip this video and go to the next. But just in case your installation didn't install one automatically, we'll cover it here. We need to download a model for Stable Diffusion to create images, and we can get one either here, the Stable Diffusion 2.1 model, or here, the Stable Diffusion 1.5 model. Most of this course uses the 1.5 model, but you can use the Stable Diffusion 2 model as well, whichever you like. Essentially, you go to the Hugging Face page (runwayml/stable-diffusion-v1-5, tree/main, or the 2.1 page, whichever you prefer, the newer version or the older one), go to Files and versions, and download the model, the safetensors file right there.
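Where that downloaded checkpoint should end up can be captured in a one-line path helper. This is a minimal sketch assuming the web UI's standard `models/Stable-diffusion` folder layout; the example file name in the comment is illustrative, not something the course dictates:

```python
from pathlib import Path

def checkpoint_destination(webui_root, model_filename):
    """Where a downloaded .safetensors/.ckpt checkpoint goes so the
    web UI's checkpoint drop-down can find it after a refresh."""
    return Path(webui_root) / "models" / "Stable-diffusion" / model_filename

# e.g. checkpoint_destination("C:/stable diffusion/stable-diffusion-webui",
#                             "v1-5-pruned-emaonly.safetensors")
```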
You want to download it to a very specific place: your stable-diffusion-webui models/Stable-diffusion folder. Let's take a look. Backing out a little: in your stable-diffusion-webui folder, the overall software, you're looking for the models folder, and inside that, the Stable-diffusion folder; that's where you put the file. Once you've done that, back in Stable Diffusion you can click refresh, and whatever models you've downloaded into that folder will show up here. So that's just a quick aside to make sure everyone has the basic Stable Diffusion checkpoint model.

4. Text To Image: You've installed Stable Diffusion successfully; now we can start to create some artwork with it. When you first load the application, you'll see something similar to what you see here. This is our dashboard, with all the amazing controls that give us fine, intricate control over everything we want to do with our image generation. Before we start creating images, though, we need to decide where to output our images, where to save everything. To do that, go to the Settings tab, then "Paths for saving." Here you can define where to save your images. I've set up a folder on my computer; you'll want to do the same thing. Just create a folder somewhere, copy the folder path, and paste it into these fields. That way, all the images you create will be saved into that folder on your computer and you can find them easily. Then hit Apply settings and reload your UI. Once that's done, go back to the txt2img tab, and let's start creating some AI art. The first thing we need to understand is the positive and negative prompts. This is how we tell Stable Diffusion what art to create.
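Everything in this lesson happens in the browser, but the same positive/negative-prompt pair is really just structured data. As a hedged aside: the AUTOMATIC1111 web UI can be launched with an `--api` flag that exposes a local JSON endpoint (an assumption about the project, not something covered in the video), and a request body for it might look like the dictionary below. The helper only builds the payload, which is the part that can be shown safely:

```python
def txt2img_payload(prompt, negative_prompt="", steps=25, cfg_scale=7.0,
                    width=512, height=512, seed=-1):
    """Request body for a text-to-image call: the positive prompt is what
    we want to see, the negative prompt what we want to avoid.
    seed=-1 means a random seed each time, mirroring the UI default."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "width": width,
        "height": height,
        "seed": seed,
    }

# Hypothetical usage against a locally running web UI started with --api:
#   import requests
#   r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img",
#                     json=txt2img_payload("a man", "low quality, extra arms"))
```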
So I just type in "a man" and hit Generate. There you go: a good-looking dude, wearing a suit, smiling. If we click this little folder icon, it opens the folder on our computer that we designated, and in that folder we see our little man. Here we go. Congratulations, you just created your first art in Stable Diffusion. Let's explore the settings a little more. Over here is the positive prompt; this is where we type the things we want to see, the things we want Stable Diffusion to use. If you go to a website such as Civitai, you can find images created by other people using generative AI and figure out what kind of prompts were used to create them. So this is civitai.com, and if I click on one of these images, you can see the positive and negative prompts. For example, I can copy this positive prompt and paste it, and I can copy this negative prompt (the negative prompt is the stuff we don't want to see) and paste it here. Let's examine this a little further. Here we see "a photo of a 48-year-old man in black clothes." It specifies the resolution of the photo, 8k, with film grain, high quality. All of this gives us some control over what kind of image we want to create. Over here in the negative prompt, you can see all the things we want to avoid, such as low quality, cloned face, missing legs, extra arms, and so on. If we generate, we should see something similar, to a certain extent, to our reference image. I should note that this is using a different model than we are, so ours won't look exactly like that. But if you just want an idea of what kind of prompts were used, what kind of sampler, and so on, this is a great way to get some references.
Okay, we know how positive prompts work and how negative prompts work. Now let's dive into some of the more detailed features. Sampling method: these are all the different options you can choose from, and there are a lot of them. Frankly, most of the time, unless you're really going into detail, you're probably not going to see a huge difference. If you go into the documentation at stable-diffusion-art.com, which feel free to do, you can see a comparison between different sampling methods. There is a difference, but it's usually subtle; unless you're comparing them exactly side by side, you're probably not going to notice most of the time. I'm usually fine with Euler a, and most of the time you get good results from that. Sampling steps is how many times you want to go through the process, because it keeps making all these different levels of noise and applying them. If you set a low number here, you're going to get a really blurry, poor image. You can go all the way up to 150, and at 150 it'll look really good, but it'll take a really long time to get there. Most of the time, 20 to 30 steps is plenty. Let's generate this guy's face again. Here we go. Restore faces: you can check this and it will do its best to fix any ugly faces. For example, if the eyes are crossed, or maybe he's got weird teeth, or missing eyebrows or something like that, Restore faces will attempt to do its best to fix it. Tiling is kind of a fun feature. What tiling does is create a tile, so that if you were to place this image next to a copy of itself, it would flow naturally into the next image.
For example, you can see that the bottom half of this man would flow naturally into the top of the man in the copy below; if you placed this exact image right below itself, it would flow just like a tile. You'll get some very strange results sometimes, but you can see this flows naturally into the next copy. Hires. fix is for upscaling your images: if you find the resolution of an image isn't high enough and you want it better, you can upscale it and increase the resolution. In general, R-ESRGAN 4x+ is a good upscaler here, and you can play around with the others. Latent was one of the first ones created; the others are variations, including a more optimized one, and you can even install your own, which is what this one is; we'll get into that later. So that's what Hires. fix does. Width and height are the dimensions of the image being created; by default it's 512 by 512. If you pick different dimensions, say a wider rectangle, and generate, we get different dimensions for our image. Here we go: a guy with a weird-looking arm, but a guy nonetheless. You will sometimes get strange outputs with doubled people when you change the dimensions away from a square. The reason is that when images are fed into the model, they're fed in as squares; when you tell it to create an image that isn't square, it creates a square's worth of content but doesn't really know what to do with the extra space to the sides, so sometimes it gets confused and creates doubles or clones of the subject. That doesn't always happen, but sometimes it does. So here, even though we asked for just a single man, we got two guys instead of one. Batch count is how many images we want to create; by default it's set to one.
But you can say four; let's make four. What it's going to do is create the images one after another, iteratively. There we go, we've got our four images. Batch size is how many images to make simultaneously. It was making one at a time, but you can tell it to do more than one at a time if your computer can handle it; it depends how fast your computer is. CFG scale says how much attention to pay to the prompt, the positive and negative prompts. If you set it to one, you're going to get an image that really doesn't look much like what you asked for; who knows what that is. And if you go all the way up to 30, it's going to build something that matches your text exactly, but you'll also notice the colors get really saturated. So you usually don't want to go all the way up; you may want to experiment a little up and a little down. Seed is very important for Stable Diffusion and for any other generative AI model. Seed is negative one by default, and that means every time you create an image, it generates a completely random photo from scratch: every time you click Generate, we get a different dude. But you don't have to do that. You can say, I want to reuse the seed from the last generation, and if we do that, we get the seed number right here. So if I click Generate again, we're actually going to create the exact same guy. There we go, we created another one, and it's pretty much identical; the reason is that we used the same seed. If you click on Extras here, you'll see there's a variation seed. That's saying: I have two images, one from one seed and one from some other seed, say nine or some other number; and then you can say how much information you want to use from the first seed compared to the second.
So you can say you want it to be more influenced by this one and slightly influenced by the second one, and so on. And if you want to apply the seed's influence using just the width, you can do that here, or using just the height, here. But those will drastically change your image: by changing those resize values, you're going to see major changes, whereas the variation strength is probably best if you just want a slight influence. Once you've created your image, you'll notice at the bottom all the details that went into creating your photo: the positive prompt, the negative prompt, the steps, the sampler we used, the CFG scale, the seed, everything, all the details nicely laid out for you. And when you click "Open images output directory," we can see the images we created, because remember, we set up that folder at the beginning of this lecture, and here are all the photos we made. So there you go: you now know how to create text-to-image artwork in Stable Diffusion.

5. Image Variations: Let's say I want to create a variation of an image that I generated. I don't want a completely new image; I just want something similar to the existing one. What do I mean by that? By default you have a seed of negative one, which means it comes up with a random seed every single time, and with each image it generates, you get a completely different image. In this case, we have some lady with a sword. If I click again, we get another completely different image; even though we're using the same positive prompt and the same negative prompt, we're getting a very different person. But I want to create something similar to this one, not something completely different. What you can do is play with the seed value. Over here, you can click "Reuse seed from last generation."
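The seed mechanics above can be made concrete with a tiny sketch. The field names `subseed` and `subseed_strength` are my assumption about how the web UI labels the variation seed internally; the logic is what matters: the same seed reproduces an image, and a second seed blended in at a small strength nudges it without replacing it.

```python
def generation_settings(seed, variation_seed=-1, variation_strength=0.0):
    """Reproducibility settings, mirroring the UI controls described
    in the lesson."""
    return {
        "seed": seed,                # -1 would mean: a random seed every time
        "subseed": variation_seed,   # assumed name for the 'variation seed' field
        "subseed_strength": variation_strength,  # 0.0 = ignore, 1.0 = fully second seed
    }

# Reproduce an image exactly:
same = generation_settings(seed=1234567890)
# A slight variation of it, 20% influenced by a second seed:
varied = generation_settings(seed=1234567890, variation_seed=9,
                             variation_strength=0.2)
```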
That will use the seed from the last image. But there are other ways to find the seed. For example, if we look down here, you can see the seed value of the image that was generated. And if you check the image that was created in the output folder (click this and you'll get your output folder), you can see the seed value in the image's file name. So we can stick that seed value in here. If we click Generate again, we get the exact same image, because we're using the same seed value; we're no longer getting a completely different image. But I just want a variation, not an identical copy. To do that, select this little Extra drop-down, and you'll see this thing called a variation seed. This is useful because you can now enter a second seed value and use it to influence the first one, while still only using two seeds instead of randomly creating a new one every time. What you might want to do is look at the images we created previously and say: maybe I want to use some of this image's influence as well. I quite like the little bit of red on her, and maybe I'd like some of that influence, but I also like the original one; that's the one I mostly want, with just a little influence from this image. So I'll take that seed value and stick it in here. Now we can play with this slider called variation strength. With variation strength, we can say how much of the first seed and how much of the second seed to use. If I go all the way toward the second seed, we're going to see that first image we created at the beginning, or at least something close to it; that's just using the influence of the second seed.
But if we want to stay with the first one, which is our goal here, instead of taking the slider all the way to one, we just set it to some small percentage. Now we should be able to see the image with just a little bit of influence. Let's try another one as well; set it up to 0.2. By doing that, if we now compare the images we generated here, we can see some slight variations: this is the original one we had, and these are variations of it. If you don't like the influence of the second image, you can just change the variation seed and see what else we come up with. There you go; that's how you can create variations of any image you generate.

6. Upscaling: Let's talk about how to create high-definition images in Stable Diffusion. Let's say we've found our prompt, we're happy with it, we've gone through a bunch of iterations, and I found an image that I like. If I look at this image, it's 512 by 512 pixels, and it looks decent, but it's a little bit blurry; it could be more high-definition. This is not a 4K photo at the moment. It turns out there are ways to increase the resolution. First of all, you'll probably want to make sure you save the seed number you want to use. Then you can go to Hires. fix and choose from these things called upscalers, which allow you to increase the resolution of your image. They work by corrupting the image first: the images are reduced to a smaller size, and then a neural network that has been trained to recover damaged images tries to fill in all the details. There are a bunch of different upscalers here. The Latent ones are the ones that were first created when Stable Diffusion was freshly made. This one here, R-ESRGAN 4x+, is an excellent upscaler; it works very well, and it won an award in 2018.
ESRGAN stands for Enhanced Super-Resolution Generative Adversarial Networks. Let's choose the upscaler and increase by two times. The denoising strength you can set to 0.7 or even 0.5; I like using 0.5 a lot of the time. Then we'll hit Generate, and let's see how this looks. You can see it did change the photo a slight bit, but the benefit is going to be worth the change most of the time. So here we have it loading. Here we go: here's our before, here's our after. This is a much larger, much crisper photo, and it looks pretty good. There are other upscalers out there, and you can install your own. The one I like to use is called 4x-UltraSharp, and it doesn't come built in with Stable Diffusion. If you want to use that particular upscaler (let's try it and upscale by two times), you can download it from this link here: download the 4x-UltraSharp .pth file if you want to; it's a small file. Then you stick it into the ESRGAN folder: under your stable-diffusion-webui folder, under the models folder, under the ESRGAN folder, you just put it there. If you want the documentation on how upscalers work, you can check out this link here. Once you've reloaded your UI (in Stable Diffusion you can reload under Settings), you'll see the new 4x-UltraSharp appear in this drop-down. So we've created an image and upscaled it with two different upscalers; let's compare them now. This is the first one, the small one; this is using R-ESRGAN 4x+; and this is 4x-UltraSharp. There are subtle distinctions, but I find the UltraSharp one does a little bit better of a job. Around the eyes, it did a good job with the eyes.
This one's a little bit blurry here; that one's got a little bit more detail. Suppose you want to upscale even more; you can go further than that. You can click the "Send to extras" button for any image you have, and it will open up the Extras tab and send your photo there. Alternatively, you can just load your photo manually; you can click and drag and drop, et cetera, but I'm just going to use the Send-to-extras way. Then you can choose your upscaler here; you'll see your upscalers, R-ESRGAN 4x+, or in my case, 4x-UltraSharp, and then you can choose how much you want to resize. In this case, I'm just going to choose 2 and then click Generate. It'll take a moment to load, and then we should see a very nice high-definition image. Once this is finished loading (it's stored in a different folder, but that's okay), let's compare them now. This is our original photo; this is our upscale using R-ESRGAN; this is the 4x-UltraSharp; and after the second UltraSharp pass, we have even more detail. So if we zoom in here, it gets a little bit pixelated, but with 4x-UltraSharp the detail looks really good. Now, it turns out you can do this in batch; you don't even have to do this one by one in this kind of slow process. You can go to "Batch from directory," and what you can do is select an input directory and an output directory. In order to do that, we're going to need a bunch of photos to work with. So let's turn off Hires. fix for now, set the batch count to four with a random seed, and make four images here. We'll clear these out for now; they've served their purpose. It's going to make these four images, all at 512 by 512 resolution. And if I go to Extras and select "Batch from directory," I can choose the directory that I want to send the images from.
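The batch-from-directory step is essentially a file mapping: every image in the input folder gets an upscaled counterpart in the output folder. A minimal sketch of that bookkeeping (the actual upscaling happens inside the web UI; this only plans the source-to-destination paths):

```python
from pathlib import Path

def plan_batch_upscale(input_dir, output_dir, exts=(".png", ".jpg", ".jpeg")):
    """Pair each image in input_dir with its destination in output_dir,
    the way the Extras tab's 'Batch from directory' mode consumes folders."""
    src = Path(input_dir)
    dst = Path(output_dir)
    return [(p, dst / p.name)
            for p in sorted(src.iterdir())
            if p.suffix.lower() in exts]
```

The same planning applies to the video workflow mentioned next in the lesson: split a video into frames, point `input_dir` at the frames folder, and every frame gets an upscaled counterpart.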
So this is my input directory; I'll paste that in here. Then I have to choose a place where the photos should be sent. I'll make a new folder — I'll call it something like "output upscales" — copy its path, and paste that into the output directory field. Then I choose the upscaler I want to use and select the resize amount, and just like that, we're creating upscaled, higher-resolution versions of our photos in batch. The batch feature is really useful because, say you had a video: you can break the video into individual images, JPEGs or PNGs, point the input directory at that folder, and it will go through and create upscaled versions of all of them. So here we go — you can see our upscaled images, all done in batch. So there you go: you now know how to increase the resolution of your images using an upscaler. You can do it up front during image creation, and you can also do it in post — after you've created an image, you can go in and increase its resolution as well. 7. Installing New Models: In this video, we're going to talk about creating art using a variety of different models with different art styles. We're going to learn how to find and install new models into Stable Diffusion. This image here was not created using the original Stable Diffusion base model — that's the model that came installed by default, at least in my case. This image was created using Dreamlike Diffusion. So I'm going to show you how to get a different model, and then you can create art just like this. First of all, we need to find a different model. There are a bunch of different sites; I'm going to refer you to two of them. One of them is Civitai. This website lists examples of artwork being created with different models — you can see all of these pretty little images here. And if I wanted to create this exact style of image, I can download the model from this website.
You'll see a download button here. The site indicates that this is a safe file to download, and people have rated it well — they like it. Once you've downloaded it, go to your Stable Diffusion software folder, wherever you installed it: go into stable-diffusion-webui, then models, then the Stable-diffusion folder, and paste the file in there. Once you've done that, go back to Stable Diffusion and reload the software — that means going into Settings and clicking Reload UI. After that, your model will appear in the checkpoint drop-down in the top-left corner. This particular model is Dreamlike Diffusion, which I got from a website called Hugging Face: the dreamlike-art/dreamlike-diffusion-1.0 page. If you go to that page, you'll be able to download this exact model as well, for free. This model is very similar to Midjourney. Midjourney is a paid AI art generator, very similar to Stable Diffusion: you type in text and it creates images that are very beautiful. With this model, you can create artwork that looks just like that, similar to Midjourney. Really, the main differences between Midjourney and Stable Diffusion are that Stable Diffusion has a lot more features and it's free — so why not use the free one that gives you lots of features? So here we are: this is the model on Hugging Face. To download it, go to Files and versions and grab the file called dreamlike-diffusion-1.0.safetensors. The .ckpt file is the original model format; the safetensors file means they've used a safer serialization that protects against the file carrying viruses or malicious code. If you do download the .ckpt file, just make sure you run it through antivirus software before you start using it.
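As a quick sanity check after downloading, a few lines of Python can list which checkpoint files are sitting in your models folder and which format they use. This is just an illustrative sketch — the folder path and the "preferred"/"scan" labels are my own, not part of the web UI:

```python
from pathlib import Path

# Notes per extension: safetensors files are plain serialized weights and
# can't embed executable code; .ckpt files are pickled, so they're worth
# an antivirus pass before loading, as mentioned above.
MODEL_EXTS = {
    ".safetensors": "safetensors (preferred)",
    ".ckpt": "ckpt (scan before use)",
}


def list_models(models_dir):
    """Return {filename: format note} for every checkpoint in models_dir."""
    found = {}
    for path in Path(models_dir).iterdir():
        note = MODEL_EXTS.get(path.suffix.lower())
        if note:
            found[path.name] = note
    return found


# Example (path is a placeholder for your install location):
# list_models("stable-diffusion-webui/models/Stable-diffusion")
```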
Anyway, once you've downloaded the file, you stick it into that models folder, restart the UI, and it will show up here in the top-left corner. So that's the main gist of it: you can download different models from Civitai, or you can go to Hugging Face and find hundreds or thousands of different models that people have made. Then you stick in your text prompts and create artwork in the style of whatever model you've downloaded. 8. Inpainting: In this video, we're going to talk about inpainting. Inpainting is the ability to replace objects inside your photos and images with other objects — you can just swap things out. All you need to make inpainting work is a photo or an illustration. It can be an image created in Stable Diffusion, like I have here, but you can also use a photo from your computer, a drawing, whatever you want. The results you get will also depend on the model you use, so consider which model you want for your inpainting. Once you have your photo ready, you can either go to the img2img tab, then Inpaint, and browse for the photo on your computer; or, in my case, since I built the image in Stable Diffusion, I can just click Send to inpaint. So now you have an image in Inpaint, and you can pick up the paintbrush here — this is how we mark what we want to replace. In my case, I'm going to give this lady glasses — some nice shades. All I have to do is go to the prompt at the top and say what I want to appear in the image: let's say, sunglasses. Over here, you want to make sure Inpaint masked is selected, because that means we're going to replace the area that has been drawn over. If you'd like, you can play with these other settings to experiment and get different results, but I'm just going to leave these values for now. And then I hit Generate.
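Behind the scenes, the brush strokes you draw become a simple black-and-white mask: white pixels are the region the model regenerates ("Inpaint masked"), black pixels are kept untouched. A small Pillow sketch of that idea — the rectangle coordinates here are made up, standing in for brush strokes over the eyes:

```python
from PIL import Image, ImageDraw


def make_inpaint_mask(size, boxes):
    """Build a black-and-white inpainting mask.

    `boxes` is a list of (left, top, right, bottom) rectangles standing in
    for brush strokes. White (255) marks the area to regenerate; black (0)
    marks pixels to keep as-is.
    """
    mask = Image.new("L", size, 0)      # start fully black: keep everything
    draw = ImageDraw.Draw(mask)
    for box in boxes:
        draw.rectangle(box, fill=255)   # white: regenerate this area
    return mask


# Example: mask roughly where the eyes would be on a 512x512 portrait.
mask = make_inpaint_mask((512, 512), [(140, 180, 380, 250)])
```

Selecting "Inpaint not masked" instead simply inverts which color means "replace".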
Let's generate a few different outputs with a batch and get some good-looking shades on this lady. Okay, here we go — looking good. Granny in the sun. She's got some Elton John-looking glasses there. This one is coming out nicely. Let's see what else we got: blue shades, very nice; some teal ones that look a little fake to me; oh, I'm liking those. Here we go. So that's inpainting: you can take any image, draw over it, and do multiple iterations. Say I'm okay with that one, but I want to keep playing — let's give her gloves now. I change the prompt to gloves and hit Generate again, and see what we get. Now, you'll notice something: her eyes are changing again. The reason is that Inpaint currently has a little bug where, if you haven't hit the reset button up here, it's actually still using the mask you painted last time. So if you check these outputs, you'll notice she has different glasses as well as gloves. That one didn't come out too well — sometimes it does a good job, sometimes it doesn't. To fix that, make sure you go up here and reset the mask each time. If we do it again now, only the gloves change instead of the glasses changing too. Alright, let's see how she looks — what kind of gloves is it going to give her this time? There we go: nice black gloves. Some kind of biker gloves on her hands. You get the gist — we were able to replace objects just like that. You now know how to do inpainting to replace any objects inside your photos. 9. Outpainting: In the last video, we talked about inpainting, where we can replace objects inside images with any other object. In this lecture, we're going to talk about outpainting. Outpainting is a method for extending images.
It builds on the same technology we used for inpainting, but this way you can make images larger and add additional content to existing ones — make them wider or taller, add more objects, even outside the original canvas frame. So let's talk about how to do that. What you'll want for this is a model that's specifically built for inpainting. You can get one from the stable-diffusion-inpainting page at this URL right here on huggingface.co. There might be other ones, but you want to find one that specifically mentions inpainting in its name — these tend to give you better results. You can try it with other models, and you'll find out whether they work well enough for you. To download it, go to Files and versions, download the model, and put it with all your other models — and by that I mean in your stable-diffusion-webui folder, under models, in the Stable-diffusion folder where all the other models live. Then go back to Stable Diffusion, go to Settings, and reload the UI — you may even have to restart the application — and your model will show up here. You can see I have a few different inpainting models: one here, another one here. A bunch of different models have inpainting variants. Once you've done that, you can take an existing image and import it — for example, into the PNG Info tab. I'm going to take an image I previously created in Stable Diffusion: this little guy. You can use images that were not created in Stable Diffusion, but it means you will have to come up with the prompts from scratch.
Whereas in my example, since I created this in Stable Diffusion in the first place, the prompt gets preloaded when I drag the image into PNG Info: I can see my positive prompt and my negative prompt, and it saves me the step of figuring out what was involved in creating this image. If you don't have that, you can just look at the image and describe it in the best detail you can — this is the character I want, the environment, the colors, the style, and so on. So we have this image, and I'm going to send it to Inpaint — in other words, to the img2img section right here, and there's a tab there called Inpaint. The first step is to go to the resize section, because we want to extend the canvas of the image. In my case, let's make the width double the size: 1024. Let's start with increasing the width. You'll also notice we have a seed value that's pre-populated — that, once again, got picked up by PNG Info. If you can, make sure you're using the seed that was used originally; that will give you better results. Now we're going to check this option here: Resize and fill. That's going to let us resize the canvas and fill it with whatever detail Stable Diffusion thinks will work — you'll see what I mean in a moment. So let's click Generate and see the result we get. It might take a little while. Okay, the image has loaded — let's see what it's done. On the left side we have this blurred area, and the right side is also kind of blurred. What it's done is take the colors from the outskirts of the image and stretch them out to the left and to the right. Now, this is partly what we want.
We have a larger image now — the canvas size has changed from 512 by 512 to 1024 by 512. But we can also see that these new areas aren't really similar to our original image. What we need to do now is paint over them and replace that stretched filler with new objects. So we're going to load this result back into Inpaint: let's close this out and click Send to inpaint. Now I can paint over the stretched area — painting over it adds what they call a mask, covering all the area we want to replace. And I'm only going to do one side at a time; I'm not going to do the right side quite yet. The reason is that we don't want to confuse Stable Diffusion: we want it to replace one side using all of the original material as reference. We don't want it trying to replicate everything on both sides at the same time, because it won't necessarily know which part to use as reference. We'll use the original content as reference to fix this side, and then use the result as reference to fix the other side. Everything else, I think, we can leave the same — we don't need to change anything necessarily. If you want, you can play around with these settings, but I'm not going to change anything in this example. I hit Generate, and let's see what we get. We can see it's now working on that side, building something in there — and now we have more detail. It's not just a stretched single color anymore; there's definitely something different over here. Now let's do the same thing on the other side. I'm going to clear things out by hitting reset, send the result to Inpaint first, and then clear my mask — I don't want to reuse the old one.
Then we paint over the other side and hit Generate. That's pretty good — not bad. We now have a whole bunch of content on the left and right sides that did not exist in the original image. We can see there's a strong seam line here, but we can fix that up: we can just do another one of these inpaint passes. If you see any results that aren't quite to your liking, send the image back to Inpaint, paint over the area that's a little bit lacking, and hopefully that fixes it. Once again, in order to create proper detail in the background, it needs the positive and negative prompts you're using. Here we go — here's our image, and we can compare the before and after: this little square was the original image, and now we have this much larger one. It works much better, of course, with images that have blurred backgrounds; the more detailed the background, the more discrepancies you may get, but you can still get pretty decent results. So this is one way to do outpainting in Stable Diffusion. Later in the course, we'll cover how to do this within Photoshop, which is actually a lot easier and faster. But if you don't want to use Photoshop, you can use this technique within Stable Diffusion to do outpainting. 10. Img2Img Case Study: In this video, we're going to talk about the img2img tab and some of the incredible features you can use with image to image. I want to show you this with a somewhat real-life example — a little case study. I'm going to show you a short video that was created using image to image. It's a silly little video called Yoda Meets Stitch; take a look, and then we'll come back after the video and explain how we made it. [The video plays here. Its scripted dialogue — Yoda teaching Stitch to focus his mind, lift rocks with the Force, and resist the dark side — is largely unintelligible in this transcript.]
So you've seen the Yoda Meets Stitch video. It's a little silly, but that doesn't really matter — the goal is just to show you how Stable Diffusion was used to create it. First of all, you'll notice there's some wonky dialogue going on. That dialogue was created with ChatGPT. I went to ChatGPT and said: write me a conversation where Yoda meets Stitch and teaches him about the Force. It came up with a bunch of strange little dialogues, and I just did this a few times and picked out the parts I liked the most. That's where the dialogue for the video comes from. Now let's talk about the actual images themselves. What are we dealing with? We have these characters — this Yoda guy — and you'll notice he sits closer to the camera than the image in the background, so there's depth here. There are a few ways to do that. There's the way you'd probably do it for a professional production: you create the images in Stable Diffusion, go into photo software like Photoshop, select the character, cut them out, and make sure you get all the details right. Then you paste that into your video editing software and put it in the foreground. That sounded like a lot of work.
You know, I didn't want to do all that — it's too much effort. And at this point I didn't even know which images I wanted to use; maybe I'd like some and not others. I didn't want to spend time going into Photoshop and cutting them out, at least not for this sort of thing. I wanted to make something quickly, so I looked for a faster method to get a foreground character over a background. The solution I used is the green screen technique. All of this is done by creating characters in front of green screens. So this Yoda guy is actually generated in front of a green screen. I can then use something called chroma key, which removes the background color, and replace it with another image — and that's what you're seeing in the video. So let me show you how to do this in Stable Diffusion. First, I come up with the character I want to create. In this case, I played around with some models I liked and some text prompts I liked, and eventually came up with this little guy. I thought, hey, this guy's kind of cute — he looks like the Yoda character I want to make. I'm into this guy, so I'm happy with that. I'll click Send to img2img. The only reason we care about this step is to carry over the text prompt; we don't even care about the image itself, so I can delete it. Now we're in the img2img tab and we have our prompt. What we need next is a photo of a character in front of a green screen — and when I say character, I mean any character. This is what I used: some guy in front of a green screen. And it's not even a great green screen, as you can tell — it's kind of chunky. It just needs to be a single-color background. Now, there's an issue with Yoda: you can't put Yoda in front of a green screen. We can do it with Stitch, no problem.
But Yoda is a little tricky because Yoda is green, so I needed to use a blue screen. Okay, let's swap that color out: instead of green, we'll have blue. I did that in some photo editing software. So now I have my reference character in front of a blue screen. I can take a look at my settings here, but they're not really that important — you can play around with the CFG scale, and use the same seed from your text-to-image generation if you want. I'm okay with all of that. The one thing I did want to switch, though, is the output dimensions: I wanted the output to match the dimensions of a video, or at least the same aspect ratio. When I create a video like this, it's 1920 by 1080 pixels, so I wanted an image in a similar ratio. Okay: I have my dimensions, I have my reference photo, I have my prompt — I'm ready to go. I hit Generate, and Stable Diffusion does its magic and comes out with a nice little Yoda character. There we go — we've got a Yoda, he's in front of a blue screen, and Stable Diffusion did that for us. You can do this for as many as you like — maybe you have a batch of 20 different Yodas you want to create. And maybe I want them in slightly different perspectives; say I want one close-up of his face, so let's try a close-up one and see how that looks. I can try different poses — it doesn't really matter. You can experiment with different framings, how close you are to the subject, different hand suggestions, and so on. Anyway, once you have all of your different photos, you'll have a directory of all the characters you want to use. You'll notice they're a little bit blurry, not that detailed — so I'm going to go into Extras and do the batch from directory upscale.
So in that case, I select the folder of all the generated images as the input and specify a place for the output images to be saved. Once that's done, I have a nice collection of Yoda characters — and in my case Stitch characters too — in front of blue and green screens. Stitch, of course, is blue, so I put him in front of a green screen; Yoda, of course, is green, so I put him in front of a blue screen. Okay, then I went into my video editing software — in my case, Premiere. All I did there was add an effect called Ultra Key. This is the original image I got from Stable Diffusion; I put it on my timeline, then applied Ultra Key, which removes everything that is blue. With Ultra Key you just say which color you want to mask out — you select the color, in this case the blue. You then have to play with the settings a little, but once the keyed-out area looks essentially black in the matte view, you're good. Then you can put in the background image, which in my case is another photo I generated in Stable Diffusion — I just typed in Dagobah, the planet Yoda lives on. So that's how I was able to create these shots in Stable Diffusion: a character in the foreground in front of a green or blue screen, and a background environment generated separately. That's all using image to image. So if you want to know how to use image to image, that's how you can do it: create all of these different characters with your prompts, get the poses with your blue screen or green screen character, and then upscale them — all done within Stable Diffusion. 11. Infinite Zoom Intro: Here's a really cool application of Stable Diffusion: it can be used to create what is known as an infinite zoom.
You get an image that just keeps zooming in forever, or zooming out forever, depending on whether you want it running forwards or backwards. This is an application we can build with Stable Diffusion, and you'll learn how in the next lecture. 12. Infinite Zoom: To create the infinite zoom, we first need to install an extension in Stable Diffusion. In the Extensions tab, under Install from URL, you're going to paste the URL of a specific Git repository — in this case, the infinite-zoom-automatic1111-webui repository on GitHub (github.com/v8hid/infinite-zoom-automatic1111-webui). So you copy the URL from that GitHub repository, stick it in there, and hit Install. It'll take a few moments. Once that's done, go to the Installed tab, hit Check for updates, and then Apply and restart UI. You may also need to shut Stable Diffusion down completely and start it up again, so just be prepared for that. You're also going to need a checkpoint that supports inpainting — essentially a model built for inpainting, which usually has "inpainting" in its name. If you don't have one initially, you'll need to go find one. So here we are on Civitai, where you can find a bunch of different models along with examples of what their imagery looks like; click on one of the models and it shows you samples. In my case, I picked ReV Animated. The key thing is that you need an inpaint version of the model — something made for inpainting — because that's designed to work better with the infinite zoom. So you download one of those models and stick it into the Stable Diffusion models folder.
Just like all our other models, that means your stable-diffusion-webui folder, then models, then the Stable-diffusion folder. Do that, and the model shows up in the inpainting-model drop-down, where you can select it. Then we can go to the Infinite Zoom tab — it appears because we installed the extension; it wasn't there before. Now let's look at some of these options. In the Main tab, you have the batch count, just like previously: how many versions of this video do we want to create? Then the total length: how many seconds should the video run? And then it's quite intuitive, really — it's saying, at second 0, what do I want to see? The first thing I want to see is this tropical forest; then it's going to go into a lush jungle, then thick rainforest, and eventually into a verdant canopy. So that first row is just the initial thing you'll see, and you can insert prompts and add rows for as far as you want to go. Below that is the common prompt that's shared across all of these steps — the per-second prompt has the highest priority, but it also tries to keep the overall positive prompt in mind, along with the negative prompt for the things you want removed. The seed works just like everywhere else: if you regenerate with a different seed, you get a different video, but if you keep the seed from your last iteration, you get the same video each time. If you leave it at -1, you'll get a completely different video every time. For samplers, Euler a or one of the DPM++ samplers are great to use with this. Then there's the output width and output height — the frame size of the video.
In this case it's a square, but it doesn't have to be — you can change it. Sampling steps is how much work it does per image: increase the number of steps and it does a little more work, but it takes a little longer, of course. The custom initial image option is interesting: you can choose the first image the zoom starts from, and the output will be based on that image. For video, there's the frames per second setting — you usually want this to match your final target frame rate, typically 30 or 24 frames per second. The zoom direction is set to zoom out by default, which means it starts at the first prompt and then zooms out to the next prompt, and the next one after that. Zooming in means we reverse the order: it starts at the last prompt and moves backwards through the list. Often zooming in gives you better results — and in your editing software, you can always reverse the playback direction afterwards. The reason is that when you're zooming out, the model is trying to create something from scratch that doesn't already exist — it's just looking at noise and inventing new information around the edges. When zooming in, it already has information to build on: it's replacing the inner region based on the content around it, so it has more reference material to work with. That's why zooming in usually gives better results. The number of start frames is quite intuitive: it's a little hold before the zoom starts, and the same goes for the last frame. The zoom speed is set to 1 by default, but that's usually too quick; you usually want to set it to at least 2 or 4, or something higher, so you get a slower, more gradual zoom — although that does mean the video will be longer and slower.
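As a quick sanity check on that trade-off, you can work out the effect of the zoom-speed factor with simple arithmetic. This is illustrative, not the extension's exact internals:

```python
def zoom_video_stats(prompt_seconds, zoom_speed, fps=30):
    """Estimate final clip length and frame count for an infinite zoom.

    A zoom-speed factor of N stretches each second of prompted content
    over N seconds of rendered video, so the clip length scales with it.
    """
    seconds = prompt_seconds * zoom_speed
    return {"seconds": seconds, "frames": seconds * fps}


# A 5-second prompt schedule at zoom speed 4 and 30 fps:
stats = zoom_video_stats(5, 4, fps=30)  # 20 seconds, 600 frames
```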
So instead of the 5 seconds we set above, the video will actually run about four times that, because we're zooming in more slowly — but it's usually more graceful and enjoyable to view. The mask settings deal with how each new image is blended in as it's created: if you don't fade the mask, you'll get some jarring results. Let's take a look at the outputs — note that they don't go into the text-to-image output folder; they go into their own infinite-zoom folder, its own section. If I look at some of my initial versions, you can see this very obvious square seam, because the mask hasn't been blurred well. You usually want a bit of mask blur so the transition is a lot smoother, as you can see in this one — you can still faintly see a square, but you can play with the mask blur settings to make it even smoother. In the post-process section, you can upscale: choose an upscaler — one of the usual recommendations — and it will increase the resolution of your video and add detail. Note that this will increase the time it takes to process, so keep that in mind. If you have a noise multiplier set for img2img, make sure it's set to 1 — with another value you might not get great results; it might not even work. If you have color corrections enabled, make sure they're turned off. If you don't see these settings at all, don't worry about it — that only matters if you have them enabled already. So that's essentially the gist of it: once you've got all your settings configured, you just hit Generate video, and you'll end up with nice video footage that essentially zooms in forever, or zooms out forever. 13. Create Prompts for Stable Diffusion with ChatGPT: If you're a fan of ChatGPT like I am, this next part is for you.
It turns out you can use ChatGPT to create prompts for Stable Diffusion, so you don't have to come up with ideas for your prompt text yourself — you can have ChatGPT write them for you. So how do we do this? ChatGPT is available at chat.openai.com. Go there, create an account, and you'll have access to the chat interface, where you type in text and ChatGPT works its miracles and answers whatever you put in. What I want to ask ChatGPT for is some prompts. Here's the instruction text I'm going to paste into ChatGPT, and you'll want something similar. Let's go through it: we're giving examples of a high-quality prompt for "a portrait of a boy playing chess" for text-to-image AI generation — so we're telling ChatGPT what the overall goal is, and showing some example prompt patterns it can use. Then we ask for variations of the subject — different seasons, clothing, and so on — and tell it what to avoid. And here's the most important part: start every prompt with these exact words — in this case, "boy playing chess", because I want a boy playing chess in every image. You copy that instruction text, paste it into ChatGPT, and it generates a bunch of nice prompts, and it does it fast — a lot quicker than I'd be able to come up with them, that's for sure. It's got some nice-looking stuff there; I'm pretty happy with it. I'll copy the results and go over to Stable Diffusion — and I'm going to paste them not into the regular prompt box, but into a script: I go to Scripts and choose "Prompts from file or textbox".
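If you'd rather not round-trip through ChatGPT, you can also generate simple variations yourself. This sketch emits one prompt per line — the same shape of list that the "Prompts from file or textbox" script expects — with a fixed subject prefix; the seasons, modifiers, and template wording are all made up for illustration:

```python
def prompt_variations(subject, seasons, extras):
    """Build one Stable Diffusion prompt per combination, each starting
    with the same subject -- the same constraint we gave ChatGPT above."""
    template = "{subject}, {season}, {extra}, highly detailed, sharp focus"
    return [
        template.format(subject=subject, season=season, extra=extra)
        for season in seasons
        for extra in extras
    ]


lines = prompt_variations(
    "a boy playing chess",
    seasons=["in winter", "in summer"],
    extras=["wearing a scarf", "digital painting"],
)
# Paste "\n".join(lines) into the script's prompt list, one prompt per line.
```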
Then you paste the prompts you just got from ChatGPT into the list of prompt inputs. Once you've done that, you choose your sampler, and obviously you pick your model, and then you hit Generate. You'll see that Stable Diffusion comes up with images based on the prompts we just created in ChatGPT. They look pretty good; it does a pretty good job. Let's take a look at a few. We've got a boy playing chess in the winter, there he is in the fall, there's the summer, and there is, I guess, whatever the other season is. So that's how you can use ChatGPT to create prompts for Stable Diffusion. 14. Installing Controlnet: We're going to introduce a topic called ControlNet. ControlNet is an extension for Stable Diffusion that allows you to pose the art that you create. In this example I have a static image where a character is being created from some prompts, and what we can do is take a reference image and pose the character so that it will always be in that exact position. Now, this is cool on its own, in that you can pose your AI art however you want, but what's really useful is when we start to create AI video. When a video has movement of, say, a right arm, I need the AI to pose the exact same way in each frame, and ControlNet is the tool we use for posing. That's what we're building up to in this course: AI video. But ControlNet has lots of cool applications, and we'll go through some of them here. For this video, let's just focus on installing ControlNet. Okay, let's install ControlNet. You're going to want to go to the Extensions tab over here. You're not going to see the extension here quite yet; that's what we're trying to get. You're going to need to go to Available.
And you're going to search for sd-webui-controlnet; that's what we're looking for, or at least that's what it's currently named. In my case it's not showing up in the search because I've already installed it. When you do this, you'll see it show up in the list on the left-hand side, and you'll click Install. Once you've clicked the Install button, go to the Installed tab, where you'll see it listed, and click Apply and restart UI. What that's going to do is create a new folder inside your stable-diffusion-webui installation: an sd-webui-controlnet extension folder will appear. It won't be there at the beginning, but it will show up afterwards. Next we need to add some models. We're going to put the models not in that extension folder, but in the stable-diffusion-webui/models/ControlNet folder; that's where we'll navigate to. You're going to download the models from Hugging Face, at huggingface.co/lllyasviel/ControlNet/tree/main/models. Download the models there (they are quite big) and put them into that stable-diffusion-webui/models/ControlNet folder. Notice this is different from the place you've been putting all of your other Stable Diffusion models so far. Once you've done that, go back to Stable Diffusion; you might have to hit Apply and restart UI again, or maybe even close it down and start it up again. But if you follow those steps, you should see this ControlNet panel showing up under your text-to-image tab, and when you click it you'll see all the models we just downloaded from Hugging Face.
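If you'd rather fetch the checkpoints from a script than click through the browser, here's a minimal standard-library sketch. The base URL follows Hugging Face's usual `/resolve/main` download convention for that repository, and the example file name is one of the checkpoints listed there; check the tree listing yourself for the full set, since names can change:

```python
import urllib.request
from pathlib import Path

# Hugging Face serves raw files from /resolve/<branch>/<path>.
BASE = "https://huggingface.co/lllyasviel/ControlNet/resolve/main/models"

def model_url(name):
    """Build the direct-download URL for one checkpoint file."""
    return f"{BASE}/{name}"

def download_model(name, dest="stable-diffusion-webui/models/ControlNet"):
    """Stream one ControlNet checkpoint into the web UI's model folder.
    These files are several gigabytes each, so expect it to take a while."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    target = Path(dest) / name
    urllib.request.urlretrieve(model_url(name), target)
    return target

# e.g. download_model("control_sd15_canny.pth")
```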
Get that set up, and then you can move on to actually using ControlNet in the next video. 15. Introduction To Controlnet: So far we've been creating characters that are mostly just looking at the camera; they can be quite bland in the way they're positioned. But using ControlNet, which we installed in the last video, we can now put our characters in specific poses. For example, we have a pose here where a lady is sitting on the ground, legs crossed, arms on the floor, and she's always in the exact same pose: even though we change the clothing, the background color, or whatever else, the pose is always maintained. So let's take a look at what we can do with ControlNet. Originally, if we load our character, we get something like this: a character that's kind of bland and facing forward. That's fine, but what we want to do is go to the ControlNet panel and stick in a pose for our character to be positioned by. So what do you stick in here? Well, it can be anything really. You can take a photo of yourself in a pose with your arm above your head, or whatever. It doesn't even have to be a human; as long as it's a humanoid shape with arms and legs and some form of a head, I can use it. Make sure Enable is checked here; this needs to be checked, and the preprocessor needs to match the model. For this example we're going to use Canny, and the model needs to be the exact same: if you have Canny here, you need the Canny model there, and if you pick a different preprocessor, make sure the matching model is used. Once you have these things selected and Enable checked, you can hit Generate, and we'll find that our character is now being created in the pose.
We now have that white-haired lady in a position with her hands in her pockets, and it looks like it tried to create something like a briefcase, though it's a little wonky here; the AI is doing its best to recreate this pose. Let me show you a few resources for getting poses if you need some inspiration. One of them is the website posemaniacs.com, where you can grab a pose such as this one. You go to the home page and you'll find a bunch of different poses to choose from; in this case, this is the pose of the day, which is the one we currently have. Essentially it's a 3D model character you can move around: you can flip it, make it male or female, and choose preset positions you might want to experiment with. You can even adjust the lighting, but we don't care about lighting, because the AI is going to be replacing it. Then you can go full screen, save a screenshot, and stick that into Stable Diffusion. Another one we have is postmarked.com. You go there and you'll get something like this, where you can add as many models as you want to use. You can also add props, although that's not going to be helpful for our scenario, and there are premade scenes such as this one here. For example, let's add this character; this is a pretty nice-looking pose. I'm quite happy with it, it's very dynamic, and I can feel a lot of movement going on. Let's find a pose whose position I quite like; I think I like the way that one looks. Going into the settings, you may see things like shadow and floor, but we don't want any of those, because they're going to confuse the AI. We don't need the ground, we don't need the floor; the only thing we care about is the pose, so we're going to disable all those other features that are distracting us. And finally, we just select screenshot.
We have our screenshot selected; there we go. Now we can go back to Stable Diffusion, and I can drag in my pose, always deleting the previous one first. I drag my screenshot in, and here we have our little character. I'm going to hit Generate, and we'll have our white-haired lady character, now in the pose that we just created. There we go; you can see the AI is doing its best to recreate it. So how is it doing this exactly? Well, it's using a detection map, which is this second image created over here. If you're not seeing it, go to Settings and make sure that the option under ControlNet for appending the detection map is unchecked; that's important if you want to see this image. So what is this? When we use the preprocessor and the model, it creates a map that the model looks at, depending on which preprocessor is used. In this case we're using Canny, and the Canny model is used for edge detection: it draws fine lines around the subject, and then uses those lines to influence the pose of the character being created. There are a few different models here that we downloaded. We have the Canny model, which focuses on fine lines, good for high detail, good for anime. We have a depth model, which is useful for identifying space; we'll go into that in more detail in another video. We have HED, which is similar to Canny but doesn't care so much about fine lines; it makes fuzzier lines around the edges. MLSD is good for architecture, so if you have blueprints or buildings whose positioning you want to capture, it's great for those. Normal map is useful for 3D-software-style information, where you need to know the height of surfaces and their volume; it will create those.
OpenPose is useful for creating essentially a stick figure, and the stick figure's limb positions are used to influence the output. And finally, the last one here is Scribble: it can take essentially just a sketch on a piece of paper and convert it into your drawing. So this is a nice introduction to what ControlNet can be used for. 16. Intro to making video with artificial intelligence: So far in this course, we've just been learning how to create static images that aren't moving. But it turns out that Stable Diffusion can be used to create video. So what we're going to do in the next few lectures is learn how to create moving images with Stable Diffusion. What you're seeing here is some footage of a couple on vacation. What we've done is stick it through Stable Diffusion, which recreates every single frame, but does so consistently, so it still looks like a video moving from one frame to the next. Now, you may notice that this particular usage of Stable Diffusion looks a little bit watercolor, but that's just the particular prompt we were using; you could use this to create anime or whatever else you want to convert your video footage into. Enjoy this clip, and then we'll learn how to create video using Stable Diffusion for yourself. Okay? 17. SD Configuration for Video Creation: Okay, let's get going with creating video with Stable Diffusion. First, we need to do a little bit of setup configuration. In Saving images/grids under the Settings tab, you'll probably want to choose JPEG for your image file format. You can use PNG, which is normally the default; it's just going to be a larger file size. It depends on how much storage you have on your computer and whether you're conscious about saving space. Under Paths for saving, make sure you note where it's saving so you can find your files later.
You can always set your own path for the folder where you want your created video frames saved; the images will be picked up from that location. Under the Stable Diffusion settings, you're going to want to check "With img2img, do exactly the amount of steps the slider specifies (normally you'd do less with less denoising)"; we'll come back to what this does in a moment. For User interface, under the quicksettings list, make sure you have this text showing here. And under ControlNet (remember, we installed ControlNet in a previous video; if you haven't installed it yet, you'll need to go back and watch that video first), there's this little option: do not append the detection map to the output. You want that checked when you're doing video; when you're not doing video, you'll want to uncheck it. Alright, once you've done that, click Apply settings and Reload UI, and you'll see this little noise multiplier slider appear after you've saved and reloaded. One thing you'll notice is that we want to set this to zero, but when you drag it all the way down, it stops at 0.5. This might be fixed in future updates, so it might not be an issue for you, but in the meantime, what you can do is open Inspect, click on this little min attribute, and set it to zero; then right-click on the slider itself, select its min, and set that to zero as well. Now you can actually drag it all the way to zero. And if you're wondering what that slider is: look back at the setting we just changed under Stable Diffusion, "do the exact amount of steps the slider specifies". We're saying we don't want to do anything different from what that setting says.
We want to do the exact number of steps the slider specifies. Okay, we've done the configuration; now we can get into creating the video. 18. Creating Video With Stable Diffusion: So let's think about how we can create video with AI. We can't just use text-to-image, because text-to-image takes a prompt and creates a different image every single time; even if we use the same seed, there won't be any flow from one image to the next. And a video is just a series of images in a row, a sequence. So we need to use img2img, and we need images that are related to one another, so that they create a sense of movement as you go from one frame to the next. So we need a source video, and then what Stable Diffusion can do is take each frame of that video and convert it into whatever image we want, with some consistency between frames by using the same seed. So we need a video to convert. If you have your own video, use whatever you want; otherwise, if you want a free one, you can go to pexels.com and pick any video. In this case, I picked this little fella here. Now we need the video broken down into individual frames. There are a few ways to do that. You can go to a site like ezgif.com, use the video-to-JPG tool, stick your video in, choose how many frames you want per second, and grab all the images. Or you can do it through video editing software: if you have something like Adobe Premiere, you add your video to a sequence, export, and choose JPEG. That's another way; you can use whatever tool you want. You don't have to use Premiere.
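If you'd rather script the frame-splitting step than use a website or Premiere, here's a minimal sketch. It assumes `ffmpeg` is installed and on your PATH, and the video file name is hypothetical; the multiple-of-8 snapping mirrors the rounding the web UI's width and height sliders do, which comes up when you halve a 1920x1080 frame:

```python
import subprocess

def sd_half_size(width, height, multiple=8):
    """Halve a frame's dimensions, snapping each side up to a multiple
    of 8, since the web UI's size sliders move in steps of 8."""
    snap = lambda v: ((v // 2) + multiple - 1) // multiple * multiple
    return snap(width), snap(height)

def extract_frames(video_path, out_dir, fps=12):
    """Dump zero-padded JPEG frames with ffmpeg; %04d gives names like
    0001.jpg, which keeps the sequence sortable for re-import later."""
    subprocess.run(
        ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}",
         f"{out_dir}/%04d.jpg"],
        check=True,
    )

# e.g. extract_frames("couple_on_vacation.mp4", "frames", fps=12)
# sd_half_size(1920, 1080) -> (960, 544), roughly half of 1080 rounded up
```

Extracting at 12 fps here also implements the time-saving tip from later in this lesson: fewer frames means less for Stable Diffusion to render.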
I was just showing Premiere in case you have that software; if you don't, you can use something like ezgif.com. Once you've converted it, you'll have a sequence of images. If I click on the first image and press the arrow key, we can see that going from one frame to the next, this guy is slowly moving. What Stable Diffusion is going to do is take every single frame and convert it into whatever creation we choose with our text prompts, but there will be consistency, because the underlying reference image only changes a little each time. So here we are in img2img, and I've loaded up the first image of our sequence in the img2img tab. We've added a positive prompt and a negative prompt; you can put whatever you want. We've chosen a model (obviously, use whatever model you want) and a sampling method I'm happy with. In this case, my reference image is 1920 by 1080 pixels. That's not a square, so I've adjusted the width and height to match the aspect ratio of my image: this is exactly half of 1920, and this is approximately half of 1080. You'll notice that if you divide by two, the slider sometimes adjusts the value automatically; that's not too big a deal, it just needs to be approximate. Now we come to the CFG scale and denoising strength, and this is a little bit tricky. The CFG scale, don't forget, is how strictly Stable Diffusion follows your prompt: the lower we go, the more freedom it has to improvise; the higher, the closer it sticks to the prompt. The denoising strength is what controls how close we stay to the original frame, and staying close is good, because one frame will look like the following frame in the sequence.
But it also means we're not getting Stable Diffusion to do as much of its work; the result becomes less, in my case, cartoonish and pastel-painting-ish. So we want a value that's somewhat low, so each frame looks like the previous one, but also high enough that Stable Diffusion's style comes through. You have to experiment with going low and going high; it's a little bit of practice, and a little bit of taste and preference as well. The CFG scale is a balancing act in the same way. One thing you do want to do, once you've found an image you like, is keep the seed consistent. And for ControlNet, you enable it and choose the preprocessor that matches the model; whatever you pick, they need to be the same. If you pick, for example, depth, make sure you're also using the depth preprocessor. Anyway, I've clicked Generate, and that's how I got this image here: this little guy, who is essentially this reference image plus the positive and negative prompts, with the pose also taken into consideration because of ControlNet. What do I mean when I say the CFG scale and denoising strength need to be kept low, or else you'll have too much variation? Well, let me show you the output I got from this image sequence. This is the output. You'll notice it's a bit stretched, because I had the scale a little bit wrong, but it's fine for this example. You'll notice that he changes quite a bit from one frame to the next; even the ethnicity of the guy changes. You can try changing the positive and negative prompts to maintain consistency, but we want to rely less on the prompt, because you can't predict exactly how every single frame is going to look. We can try as much as we want, but it's still going to change to a certain extent, because it's recreating a new image every single time.
I recommend you experiment with the CFG scale and denoising strength, and keep those values as low as you can. When you're happy, get rid of the image in the img2img box and in the ControlNet box. Why? Because it would influence the next images in the sequence, and we don't want that; we want every image to be considered on its own. So let's go to the Batch tab. In Batch, we choose our input directory, meaning the place where we have our original input files, so copy that folder location and stick it in there, and then choose the folder where you want your output to be. Just double-check that this is going to be consistent every single time, otherwise you're going to have issues, and then you can hit Generate. Once you've done that, it's going to take a little while, but eventually you'll end up with your output folder full of images, these guys, and we can then combine all of these images into a video sequence. You can do that either using video editing software or using ezgif.com: you go to the site's GIF maker and upload your photos. If you're using video software like Premiere, you go to File, then Import, and select the first image of your sequence. Notice that the naming is important: when Stable Diffusion names the images, it names them in a sequence, and that sequence is based on your input images. Remember, we created these input images; you won't want to play around with those names too much, because the import is going to be looking for them when you're combining frames together later on. If the files don't start with 001, 002, 003 and so on, it won't be able to combine them, so consider that. So, you have your image sequence, we've dropped it in here, we have our video, and we play it, and it looks, well, it looks like I created a video, which is great; that's what we want. There are a lot of frames per second, though.
It seems really quick. There are a few things we can do to fix that up. One of them we'll cover in the next video, which is deflickering. Another thing to consider is that maybe you don't need so many frames per second. The eye can only really process so much at a time, and when every image is different, it struggles a little. So we could, for example, increase the length of time we see each frame. I can triple up the frames, and then when I play the video, it looks more like a comic-book sketch to a certain extent, and slightly less jarring on the eyes. We'll fix it up further later on, as I mentioned, but you can do that. If you are using this technique, though, maybe you don't need so many frames in your original video in the first place. When we were breaking this guy's video up into individual frames, you could go to the sequence settings and decide you only need 12 frames per second, not 24 or 30, especially since you may be doubling or tripling frames up in the output anyway. That will save you some time when you're creating your videos with Stable Diffusion. So there you go: you've now created a video using Stable Diffusion from some reference footage. 19. Deflickering AI Video: In this video, we're going to talk about how to fix a lot of the flickering that occurs in your video after the AI creates it. This is a video I created in Stable Diffusion, and you'll notice there's lots of flickering in the background; it's very hard to look at, and kind of painful on the eyes. The reason is that every time Stable Diffusion takes a frame of the video, it recreates it, and every frame is slightly different from the next, so there are lots of little glitches, splotches, and flickers in every single image you see.
What we want to do is add some effects that try to smooth that out: identify when one frame has a little dot here or there that's completely different from the next frame, and smooth out those little blotches and glitches. So that's what we're going to do in this video: figure out how to remove flickering from your AI videos. Let's do that. You will need a tool, and the tool that seems to work best for this is DaVinci Resolve. DaVinci Resolve is a paid video editing suite; it has lots of effects, and you can do professional work in it. So if you do want to look into deflickering, you can use this software. Here we are inside DaVinci Resolve, and I've imported the video clip from the AI processing; this is the little video here. You'll see it looks like everything's in fast-forward. The reason for that is, rather than rendering every single frame, I exported it out of my other video editing software, after combining the images together, at a lower frame rate. I think this is only 12 frames per second instead of the usual 24 or 30. Now, why would I do that? Well, when I was creating the video with Stable Diffusion, instead of doing 24 or 30 frames per second, I wanted to speed up the process so Stable Diffusion didn't have to render as many images, just because it takes a while and I didn't really want to wait that long. What I can do later on is just slow the video down: right now this is a 12-frame-per-second video, but I can make time go at half speed. That will leave some duplicate frames, but since it's a kind of cartoony, watercolor-looking thing, it probably doesn't matter that much.
If I have some duplicate frames in there, it's a little hard to see all the details in these cartoony images anyway, so I don't mind having duplicates, and I don't mind exporting with some duplicate frames. So here we are with the video, and I've brought it into DaVinci Resolve. Now, why did I bring in the video rather than the image sequence? The reason is that I find the deflickering plugins work better, at least in my experience, on videos than on image sequences; they sometimes struggle to handle image sequences. I don't think that will always be the case, maybe it's just the current version I'm using, but for now I need to use videos when applying the deflickering plugins. So I've added my video clip here, and I'm now going to go to the Fusion tab; this is where we're going to apply all the effects. Here we can see the MediaIn and MediaOut nodes. MediaIn is our input video, the video coming in. If I click on that node and hit 1 on my keyboard, I can send the video to this side of the screen, and if I click on MediaOut and hit 2 on my keyboard, I can send it to the other side. So one side is the input video before effects, and the other is the output video after effects. Let's start adding some effects. I click on the MediaIn node and hit Ctrl+Space on my keyboard (assuming you're on Windows), which opens the Select Tool dialog, and here I can type the effect I want. I want Automatic Dirt Removal, so I'm going to add that. That didn't actually do what I want; I need to click on the node first and then add Automatic Dirt Removal. Now it's added to the chain properly.
This will get rid of any little splotches, little specks of dirt, that appear in only one single frame but don't appear in the next or preceding frame. Now we're going to add a deflickering plugin. I click on the next node and type in Deflicker; here is the Deflicker plugin. Once again, I did that wrong: click on the Automatic Dirt Removal node first, and then Deflicker, and now it's added to the chain. Over here in the Deflicker settings, we want to change this from Timelapse to Fluorescent Lights. So now we have our Automatic Dirt Removal effect and the Deflicker effect. If you want, you can keep adding more Deflicker effects; for example, I can just copy and paste it a few times if one deflicker isn't enough. In the next one, I'll change it a little: I'll change the amount of detail that will be restored after the flicker removal, maybe like that, and in the last one, maybe I'll bring this down, something like that. So now we have a whole bunch of effects that will help remove flickering. In theory, this should remove the majority of the flickering that makes the AI video difficult to watch. So, assuming you've done that, what's the next step? I'm going to remove a few of these for illustration purposes. Once you have the effects you want, you hit Play within DaVinci Resolve. And actually, I can see it's not working properly here, because there should be a little green line coming up. So let's try another one; let's try this with just the Automatic Dirt Removal. I don't see the little green line, which makes me nervous, because it means it hasn't loaded properly. Let's try going out and coming back in again. Okay, now I see the little green line. I want to see this, because it tells me it's loaded properly.
You want to run through all the effects: you've added your Deflicker effects, clicked Play, and run through the whole timeline to the end. Why do you want to do that? Why do I care about this little green line so much? When you're processing the deflickering and the automatic dirt removal, if you render it first in this Fusion tab, it will save you a ton of time when you export the render. If you try to export without running it through here, it can take a very, very long time to apply the deflickering effect; in my case, it once took me four days to render a few-minute video, whereas it only takes a few minutes if you do it at this stage. So add your Automatic Dirt Removal, add your Deflicker effects, go to the front of the video timeline, click Play, and make sure it runs all the way through. If you see the green line all the way across, you know you're good: the deflickering works, and it's already somewhat pre-rendered. Alright, that's one time-saver. For the next time-saver, let's do some optimization in DaVinci Resolve: make sure your render cache is set to Smart (maybe do this first, actually). Once you've done that and run it through, go to the Deliver tab down here to export your video. You pick your file name, pick wherever on your computer you want to save it, and select the output type; in my case, I'm picking MP4 with H.264. If you have a GPU, make sure you use it; otherwise, you're not using everything you can. Then I go to the advanced settings and select "Use render cached images", which will use any pre-processing DaVinci Resolve has done so far. Once you've done all that, you click Add to Render Queue, and your video will appear on the right side.
Then you click Render All, and you'll have your exported video with all of the deflickering effects applied. That's it; that's everything you need to deflicker your AI videos. 20. Stable Diffusion Inside Photoshop: Okay, so we've been using Stable Diffusion on its own so far, but it turns out you can use Stable Diffusion inside of Photoshop. There's a free plugin that integrates with Stable Diffusion, so you can use all the power of Stable Diffusion inside Photoshop. Let's get this installed, and then you can take a look at how it all works for yourself. The first thing you need to do is install the plugin into Photoshop. You do this by going to github.com/isekaidev/stable.art, where you'll find the GitHub repository. We're not going to clone the repository; following the install steps there, you just download the plugin file and run it, and Creative Cloud will do all the rest, installing the plugin for you. Next, we need to start running Stable Diffusion with its API enabled so that Photoshop can access it. To do that, go to your stable-diffusion-webui folder, open the Windows batch file to edit it, and add the argument --api. For the majority of you, that's all you'll need to do; a few of you may have something slightly different, but at the end of the day, you need to add this argument to enable it. Once you've done that, restart Stable Diffusion, and once it's restarted, copy the localhost URL. Then you can start up Photoshop. Photoshop loads up, you go to Plugins, you click Stable.art, you open the Stable.art plugin, and this panel will show up.
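As an aside, once the web UI is running with --api, any client can talk to it over HTTP, not just the Photoshop plugin. Here's a minimal standard-library sketch against the web UI's txt2img endpoint; it assumes the default address http://127.0.0.1:7860, and the prompt text and output file name are just examples:

```python
import base64
import json
import urllib.request

def build_payload(prompt, steps=20, width=512, height=512, seed=-1):
    """Assemble a txt2img request body for the web UI's REST API."""
    return {
        "prompt": prompt,
        "negative_prompt": "blurry, low quality",
        "steps": steps,
        "width": width,
        "height": height,
        "seed": seed,
    }

def txt2img(prompt, base_url="http://127.0.0.1:7860"):
    """POST the payload and save the first returned image.
    The API returns images as base64-encoded PNG strings."""
    req = urllib.request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    with open("out.png", "wb") as f:
        f.write(base64.b64decode(result["images"][0]))

# e.g. txt2img("portrait of a boy playing chess, oil painting")
```

This is essentially what the Stable.art plugin is doing under the hood when you point it at your localhost URL.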
What you want to do now is paste in the URL that was created in the command prompt you've been using for Stable Diffusion. Essentially, you copy the URL, that's this thing right here, the same localhost address you'd normally use to open the web UI in your browser. Copy that and paste it in here. Once you've done that, all your models get populated. So we've opened this up and the models have populated; we can now enter a positive prompt and a negative prompt. It's essentially the same user interface you'd expect from Stable Diffusion, just inside Photoshop. You can set your random seed, choose whichever sampling method you want to use, and choose the number of steps: more steps takes longer, but you get more detail. Then there's the CFG scale: the higher it is, the closer the result stays to the prompt; the lower it is, the more creative freedom Stable Diffusion has. For text to image, you can select a rectangle wherever you want, just like that, and Stable Diffusion will fill in that space. Image to image is when you draw over top of an existing image, and it will replace it or try to create something based on the prompt. We'll come back to inpainting in a moment. In the advanced settings, you can set the number of steps and choose whether you want to upscale or downscale, meaning higher or lower resolution. So let's say we want to use the text-to-image feature. I select Text to Image, I select the area of the screen that I want to create an image for, let's try to replace this entire rectangle here, and I hit Generate. Stable Diffusion is going to do its thing; it's going to load. It will probably take a little while depending on how big the area is that you picked. Also, if you pick upscale, it's going to take a little longer, but you'll get a better-looking result. And there we go, almost done. Let it load. Beautiful. Here we go.
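Incidentally, the same localhost URL we pasted into the plugin is a REST API, and the plugin is just calling it behind the scenes. If you're curious, you can script it yourself. This is a minimal sketch, not something the course requires: it builds a JSON request for the web UI's /sdapi/v1/txt2img endpoint; the prompt and parameter values are illustrative, and the network call only makes sense while the web UI is running with --api.

```python
import json
from urllib import request

API_URL = "http://127.0.0.1:7860"  # the localhost URL printed by the web UI


def build_txt2img_payload(prompt, negative_prompt="", steps=20,
                          cfg_scale=7.0, width=512, height=512, seed=-1):
    """Assemble the JSON body for the /sdapi/v1/txt2img endpoint."""
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "steps": steps,
        "cfg_scale": cfg_scale,  # higher = sticks closer to the prompt
        "width": width,
        "height": height,
        "seed": seed,            # -1 means a random seed
    }


def txt2img(payload):
    """POST the payload to a running web UI started with --api."""
    req = request.Request(
        API_URL + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        # The response's "images" key holds base64-encoded PNGs.
        return json.load(resp)


payload = build_txt2img_payload("a samurai portrait, detailed", steps=25)
```

Calling `txt2img(payload)` with the server running returns the generated images; everything the plugin does (models, seeds, inpainting) maps onto endpoints like this one.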
We have a nice-looking image here. It looks like we have some weird artifacts in the background, but there we go. So we have our image, and it's come up with several suggestions: we have this one, this one, this one; they all look pretty good. Image to image is for when I want to use this image as a reference to come up with a new one. That's more useful if you have a bad image in the first place and want to replace it based on your prompt; text to image creates something from scratch. Now let's check out the inpainting feature. Inpainting means we can replace part of the existing image with something else. So, for example, I could select these eyes here and change the color of the eyes: "blue eyes." With inpainting selected, I hit Generate. Here are the results that Photoshop generated for us. We have this eyeball here; a few of these aren't too great, like this one, which is obviously wrong. This one is okay, that one's not too bad either, and all of these are pretty decent. If we zoom in, we can sometimes spot some issues, but this one did a pretty good job. There's a smudge in this little corner here, but you can always fix that up in Photoshop. Another thing: if you choose a different model, for example one that has inpainting built into it (some models are intended specifically for inpainting, like this one here), you'll get better inpainting results. You can also use inpainting with just the colors from your existing image, and use that to extend the image, like a cloning trick in Photoshop. For example, I could take these colors here, just like this, extend them out a little bit, something like that, then select the area, in this case maybe this square here, and generate.
Let's make sure we have inpainting selected, hit Generate, and Stable Diffusion will essentially look at this area, compare it to the prompt, and try to fill in all of that detail. In this image, you'll see in a second, there will probably be another lady showing up based on that output. There you go, it's tried to extend it. You'd probably need to play around with the prompt, the surrounding colors, and maybe your selection a little more, but you get the idea: you can extend images this way as well. So this is an incredibly powerful plugin for Photoshop, and it's essentially free. You don't even have to worry about trying to create images from scratch in Photoshop; you can use Stable Diffusion to do the majority of the work for you. If you go to the Explore tab here, you can see examples of images that other users have created. The most important thing here is that you can search for what you want to see. If I want to see samurai something, I can search "samurai," then click on this image here, and it will copy the prompt that was used to create that image into the prompt field. Then you can create an image just like that. The final thing I want to show you is probably something you already know if you have Photoshop, but just in case you don't: Photoshop has a whole bunch of Neural Filters, meaning filters that use AI to assist you. For example, you have skin smoothing for portraits, you can change the way an expression looks, you can transfer makeup, you can apply styles and color schemes, and you can auto-color your images. Say I have this image and I just want to change all of its colors; that's really easy to do. Super Zoom is similar to upscaling, but I don't find it as good as the Stable Diffusion upscaling: Stable Diffusion's upscaling is actually creating new content, while Super Zoom is more just playing around with the noise to get more texture.
Depth Blur allows you to bring the objects in the foreground more into focus while blurring everything in the background. And these ones let you get rid of little scratches and blemishes and so on. These are all AI tools that Photoshop has; if you have Photoshop, you already have them with your subscription. So definitely check out the Stable Diffusion plugin in Photoshop. If you are a Photoshop user, the possibilities are endless: you can create whatever you can dream of. 21. Vector Image Intro: In this video, we will learn how to create SVGs, or vector images, using Stable Diffusion. Just a brief recap: what is a vector image compared to a JPEG or PNG image? Well, let's take a look at a JPEG or PNG image here. If we zoom in enough, we can see that the resolution becomes blocky and chunky, and you can't zoom in that much; if you were to enlarge it by a huge amount, you'd see the resolution breaking down. With an SVG, or vector image, if I zoom in all the way, at least as far as this viewer can go, we can see that it retains the colors and the image without breaking down. So that's what we're going to learn in the following video: how to create SVGs, or vector images, using Stable Diffusion. 22. Creating Vector SVG Images: Let's learn how to create vector images in Stable Diffusion. You're going to need to install another extension. Go to Extensions, then Install from URL, and for the URL you'll need the Git repository for this extension, which is the Stable Diffusion web UI Vector Studio. Copy the repository URL, paste it here, and click Install. That's the first step. Then go to Installed, click Check for Updates, and your Vector Studio extension should show up here. It will update; hit Apply and Restart UI. You may have to close down the application and start it up again.
That's the first step. In this same Git repository, which once again is at github.com/GeorgLegato/stable-diffusion-webui-vector-studio, scroll down and find the installation instructions for Linux, Mac, or Windows. I'm assuming you're using Windows here because I'm using Windows, but follow whatever steps apply to your computer. On Windows, you download this archive, and once you've downloaded it, find the executable file, the potrace file, and copy it. You're going to stick it into a very specific place: inside your Stable Diffusion web UI folder, under extensions, in the Vector Studio extension we just installed, in its bin folder. Put the potrace file there. Once you've done all that, you may have to restart Stable Diffusion again, and then you'll see the Vector Studio tab appear here. Now, we're not actually going to use the Vector Studio tab; we're going to use the built-in script as an alternative method. So here we are in the Text to Image tab, and what you'll now see under Scripts is this little Vector Studio option. If I click that, it uses the extension right here in this tab, and this is how we're going to create the SVG files. You have a bunch of different style options: Illustration, Logo, Drawing, Artistic, Tattoo, Gothic, Anime, etc. You can check whichever one you like. In this case, let's try creating a logo of a hippo. Essentially, all we have to do is enable the script, and from now on, everything that gets created is going to be an SVG file. You can also choose to create a transparent PNG as well; that's an option too. Let's click Generate, though, and see what kind of output we get. There we go.
We have a hippo. This is the PNG, and this is the SVG. We can see here that the white is actually part of the image. Maybe you want that to be transparent, though; if so, just toggle the "White is opaque" option and reuse the seed so we retain the image we had last time. If I generate a second time, we should now see our hippo with a transparent background. And there we go, that is our SVG; both a PNG and an SVG were created. So if you were to put a color in the background, the color would show through. Now, you're going to notice one thing: there's no color. That's interesting, why is there no color? Well, that's currently just the way this plugin works. If you do want color, there is still a way to get it, and you won't need all this fancy stuff we've been installing; you can turn off that plugin script. All you have to do is take your image of a hippo, we'll just create another hippo here. Here we go, we have our image. Then go to this website: express.adobe.com/tools/convert-to-svg. This is free to use; you don't have to pay for an Adobe subscription, you just have to sign up for an account. Then you can drop in the image you created, in this case our hippo guy from before, and download it, and there you go, you'll have your SVG image. So that's how you can create SVGs with Stable Diffusion. You can either use the built-in plugin, which creates proper SVGs entirely within Stable Diffusion but limits you to black and white, or you can just take any image you want and put it into Adobe Express.
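One last aside: under the hood, the Vector Studio extension traces the generated bitmap with that potrace executable we copied into its bin folder, which you can also run directly. This is an illustrative sketch, not part of the course; the file names are made up, and note that potrace reads bitmap formats like BMP/PGM/PPM, so a PNG would need converting first.

```python
import shutil
import subprocess


def potrace_cmd(bitmap_path, svg_path):
    """Build a potrace command line: -s selects the SVG backend, -o the output file."""
    return ["potrace", bitmap_path, "-s", "-o", svg_path]


cmd = potrace_cmd("hippo.bmp", "hippo.svg")  # hypothetical file names
if shutil.which("potrace"):  # only run if potrace is on the PATH
    subprocess.run(cmd, check=True)
```

Tracing a bitmap like this is exactly why the plugin's output is black and white: potrace produces monochrome vector paths, which is also why the Adobe Express route is the simpler option when you need color.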