Put the Art in Artificial Intelligence: Create stunning Digital Art in seconds with Stable Diffusion

Jesper Dramsch, PhD, Scientist for Machine Learning

Lessons in This Class

  • 1. Introduction to AI-based Digital Art (1:58)
  • 2. Class Project (2:17)
  • 3. Turn Sentences into Digital Art with Prompt-based AI (13:30)
  • 4. Getting Creative with Watercolour Styles, van Gogh, and Hyperrealistic Art with AI (14:39)
  • 5. Using Tools and Lookbooks for Prompt Inspiration (8:25)
  • 6. What is the Stable Diffusion "AI"? (7:26)
  • 7. Make Stunning Fake Photography with AI (5:50)
  • 8. Ethics of Generative Art and AI (12:01)
  • 9. Getting the Vibe of Pictures Right (6:56)
  • 10. What to Learn Next (4:22)
  • 11. Conclusion (1:49)


1,395 Students · 51 Projects

About This Class

Unleash your creativity and learn how to create stunning digital art using the latest AI technology!

This AI will interpret your sentences and create images in seconds.

In this Skillshare class, you'll discover the power of the Stable Diffusion "AI" and how it can transform the way you create artwork. Starting from the very basics, we'll then dive into the nitty-gritty of writing better prompts, a process called prompt engineering, so that your sentences lead to the most engaging digital art possible.

By the end of this class, you'll have the skills to create breathtaking digital art in seconds with the help of AI. All you need is an internet connection and a willingness to learn! Enrol in our AI Digital Art Master Class today and discover the future of art creation.

This class needs no knowledge of AI, Machine Learning, Data Science, Coding, Programming, Python, or anything else. We are starting from the absolute basics. All you need is an internet connection!

-----------------------------------------

Will AI replace artists?

While AI will most certainly transform the art world, it does not stand to replace artists anytime soon.

AI can only generate work derived from what humans have already created. Creating something truly novel requires creativity, inspiration and thinking outside the box. This is why human artists will always be special and unique, even though AI can produce a convincing imitation of a painting or a song.

AI has already been used to compose music and write poetry, and some say it will stand in for actors in the future. It is well suited to tasks like these for two reasons: first, it draws on a vast database of pre-existing material, so it needs no imagination of its own; second, it mimics human-like behaviour closely enough to memorize and improvise.

AI will not replace artists, but artists with AI will replace artists.

Learn how to use AI to assist your creative process with free tools!

-----------------------------------------

Get the Look Book

-----------------------------------------

So what is AI good for?

  • Get inspiration for physical art pieces
  • Make quick prototypes for clients
  • Create more diverse ideas
  • Try a different style of art
  • Iterate faster on ideas
  • Have fun

Learn this fascinating tool that is just a few months old! (August 2022)

-----------------------------------------

Who am I?

Jesper Dramsch is a machine learning researcher working between physical data and deep learning.

I am trained as a geophysicist and shifted into Python programming, data science and machine learning research during my work towards a PhD. During that time I created educational notebooks on the machine learning contest website Kaggle (part of Alphabet/Google) and reached rank 81 worldwide. My top notebook has been viewed over 64,000 times at this point. Additionally, I have taught Python, machine learning and data science around the world for organisations including Shell, the UK government, universities and several mid-sized companies. As a little pick-me-up in 2020, I finished the IBM Data Science certification in under 48 hours.

-----------------------------------------

Other Useful Links:

My website & blog - https://dramsch.net
The weekly newsletter - https://dramsch.net/newsletter

Twitter - https://www.dramsch.net/twitter
LinkedIn - https://www.dramsch.net/linkedin
YouTube - https://www.dramsch.net/youtube

Camera gear - https://www.dramsch.net/r/gear

Meet Your Teacher


Jesper Dramsch, PhD

Scientist for Machine Learning


A top scientist in machine learning, educator, and content creator.

In my classes, you'll learn state-of-the-art methods to work with AI and gain insights from data, along with over 7,000 other students. This takes the form of exploring data and gaining insights with modelling and visualizations. Whether you're a beginner, intermediate, or expert, these classes will deepen your understanding of data science and AI.

I am trained as a geophysicist and shifted into data science, machine learning research and Python programming during work towards a PhD. During that time, I created educational notebooks on the machine learning contest website Kaggle (part of Alphabet/Google) and reached rank 81 worldwide. My top notebook has been viewed over 70,000 times ...

Level: Beginner


Transcripts

1. Introduction to AI-based Digital Art: In the future, artists will have to incorporate AI into their workflow to be able to keep up with the demand that is put on them. In this class, I want to teach you how to make beautiful art pieces, digital art, on your computer. My name is Jesper Dramsch and I'm a scientist for machine learning. I have been working in machine learning for a couple of years now and I'm fascinated by it. I've talked to radiologists, geologists, and they all ask the same question: will AI replace us? And AI will not replace artists, but artists with AI will. So in this class, I want to give you an introduction to a fascinating new tool where you can use sentences to generate beautiful digital art that has not existed before you came up with it. And I think this is a fascinating time. The tool that I'm showing you is two months old at the point that I'm making this video; it came out in August 2022. It is on the cutting edge of our research. It has seen over 5 billion different images and understood the captions of those images, and it cost $600,000 to train. And you have access to this for free. So I hope to see you on the other side, because I will show you how to generate stunning imagery with this amazing tool. 2. Class Project: Welcome on the other side. I'm so happy you chose to take this class; I had so much fun creating it. Let's talk about the class project. This is a fairly easy one, because everything we do during this class is going to be generating really interesting things. So if you want to upload something cool that you made, then please do. This can be your project: whichever image you like. Take one of these images and, if you feel confident enough, also share in the description the sentence that you used to generate that image. Because in my experience, through collaboration and learning from each other, we can really thrive, improve our craft and improve how we understand these tools. If you want to post these to social media as well, please tag me; my name is in the description of this class. You can use Twitter, Instagram or almost any other social media that you like, and I'd love to see what you do. So make sure to post those. Creating projects is a great way to show what you learn. Throughout this whole class we're going to use a tool called Stable Diffusion, which is the AI system that we're working with. I will slowly introduce you to more sophisticated ways to interact with it, and sprinkle in, in between, some understanding of how it works. Don't worry, it's an introductory class. I want this to be focused on you and on how to use these tools. There's going to be no math and no coding involved; that would be a different class, if you want it. But for now, let's dive in and start making our first generative art with AI. 3. Turn Sentences into Digital Art with Prompt-based AI: So this is the first real lesson, and I want to generate beautiful art, pictures, all of this out of simple language, out of just text. And this should be possible, right? Because we use language to describe so much in our lives; we can describe everything we see, hear and feel, but also the weird dreams that have no real anchor in reality, and those funny sketches that we dream up in our fantasy. We often have words to be able to describe those. So we really want an AI to be able to take our language as an input.
The funny, the weird, all those sentences that we can generate and turn those into images. And this year, we are at the cutting edge of machine learning and AI right now. This year we got these models where you can write a sentence and get out. Amazing, really good looking pictures. And I want to show this to you. However, I don't want to overwhelm you with math. I don't want to overwhelm you with code. So for this introductory class, we're going to work in a website because this model that has been built by a huge international collaboration, it costs $600,000 to create. It's available for free. And this is mind-boggling to me. You can just you can run it on your computer. You can install it and run it yourself. But this is not, this is not in the scope of this class, maybe a future one. Let me know in the reviews or in your project if you would be interested in that. But right now, we're just using a web interface which offers, that is incredible that we can just use a website to use this machine learning system, this AI to generate pictures. And yeah, I'll slowly build this up. We'll start simple, just go there, build something, and then we're getting better and better at this in this class. So let's get on our computers. Remember, all the links are going to be in the resource section. I'll show you how to get those as well in the modern AI community. And I'm going to use AI and machine learning here interchangeably because I don't really think this is an AI yet. This is a machine-learning system. But I call it AI just to, just to conform to the expectations, right? But basically we're using a lot of Hugging Face and Hugging Face as a startup in this community, it is used by Google, Microsoft, Facebook, Intel, all the big tech companies that exists today. So yeah, but first things first, I don't want you to have to pause this video all the time and like click, Copy this and yeah, get typos. You can always go to the class. This is another class of mine. And you can go here to Projects and Resources. And I will leave links in here where you can find all of these websites that I'm sending you in this class, instead of painstakingly typing those into your browser. So yeah, you'll also be able to find all the resources here. And when you have created something awesome, you can share your project right here. I always share my own project so people know what to expect. Yeah, I, I expect you to make something really cool and this one. So let's, let's dive right into this. We're using stable diffusion, which I'll explain later. Just take this as a, as the brand name, let's say. And what this is, it is prompt based image generation. Now, a prompt is just a sentence. Essentially, you can get really creative here. The beautiful thing is you can come up with whatever you want, play around with this. This is free. There isn't a pop-up saying, oh, you used five of this. Now you have to pay. This is completely for you to use. If you don't know what to do yet. You can go to the examples right here. And it's very diverse. So like a solar punk utopia and the Amazon rain forest, Pikachu, fine dining in view of the Eiffel Tower. All simply a cat lying on a rug in front of a fireplace. So this should work. Then we put in generate image. We can see where in the queue right here. So there's ten people before us right now. It takes approximately 20 s to generate. We skip the queue. We were lucky. And now we should be able to see this quite soon. 
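Everything in this class happens in the browser, but if you're curious what that generate button roughly does behind the scenes, here is a minimal sketch in code, assuming the Hugging Face diffusers library and a machine with a GPU. None of this is needed to follow the class; the package and model ID below are assumptions about the setup behind the demo, not something the website asks of you.

```python
# Minimal text-to-image sketch with the Hugging Face diffusers library.
# Not used anywhere in this class; purely an illustration of what the web demo
# does for you. Assumes: pip install diffusers transformers torch, a CUDA GPU,
# and that you have accepted the model licence on the Hugging Face Hub.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # the publicly released Stable Diffusion weights
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a cat lying on a rug in front of a fireplace"

# One run of the diffusion process; the result comes back as a PIL image.
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("cat_fireplace.png")
```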
You can go to this website, play around with whatever you like, get really creative and try out different things. So we can see right here this one, it's a fireplace, but I would like to see more fire. It's a very Derby cat, but I love it. Another count, the fireplace is not on unfortunately, but it does look very cozy, although like it could use some pupils. You'll, you'll see that these unprocessed outputs have problems with two things. And that is faces. And also texts. I'll show you in a minute. This cap looks well. It's definitely doing tattoo yoga. I hope it's okay. I don't see a fireplace, but this one, I think is very cute. The cat is cute. It looks fairly realistic. And the fireplace is also very nice. It's a nice rug going on right here. So if we wanted to keep this one, we can save those as I'll open it in a new tab right now to show you another thing. This is a 512 by 512 image, so they're not huge. But this is what we're working with in this free online version. Now, normally, we also have advanced options, but these are temporarily unavailable. I am assuming this is due to the popularity of this. I think this one is the most popular thing I've seen on Twitter recently. This is what everyone is using right now. So it might happen that there is a big red box popping up right here. I'll show you a screenshot and a second. We can also look at a smaller version of this, also free cat sleeping on a rug in front of a fireplace. Now this may take up to 2 min. We also see the little count on. This was the, this is basically the little sibling of this diffusion demo, the stable diffusion demo. Dolly is another model that we'll get to know a little bit later. And mini deli as just a way for us to actually use it in the browser because these can be quite chunky, quite big. But when we go back here, I promise you that we will have a look at how it doesn't really do great with texts. And we can simply test this by saying, assign, saying, hello, Skillshare, and generate image. So let's have a look. If crayon, crayon finished this, this a little bit green, but very cute. You can see that the face is a little bit mangled, but not the worst nightmare material yet. Then let's have a look on this one is quite cued. Again, face terrible. I don't know what's going on here, but it might be a cat. Now, do we have a cute one? Not really. No. This is kinda cute. So at least is getting the fluffiness right. But we can see that this one is performing a little bit poorer than the big sibling. And if we wanted to save one of these, this one was quite okay. We can also see that these images are smaller. We have nine different ones. So it can be very nice to get a couple of different ideas. But yeah, I mean, it's a little bit freakish. I know. So just to be aware, when you're generating cats, the phases might not look that great right now. But we'll have a look if we can get better at this. So I'll just generate a few other ones. This is also why you should save these. They are generated out of nothing, out of noise, out of the ether of this, out over this AI. So when you rerun the prompt here, we'll get something completely different. Let's have a look at the signs. Well, she yellows Galileo. These show it. There is text, but it doesn't understand to put texts into this. But I would say this is a great starting off point for some Photoshop. So I mean, this is very easy to fix, right? We just take this out, take this one out. 
And we have a perfectly good sign that we can use for a basis for adding hello Skillshare to this, maybe the little logo. So, yeah, not terrible, especially if you expect that you will have to do some editing, which most people do. Let's have a look. Do we have some cute cats going on right here? This isn't too bad. I really liked this. I'm going to save this one. But yeah. Honestly, if you're already generating some very cool ones right here, don't be afraid to just head on over to the projects tab and add it to your projects. This would be the easiest way to complete the project. And of course, you can check out my other courses here. But of course this doesn't have to be your final output. But, you know, that's, that's the first one. But if you're just like your outputs, you can definitely post them on Twitter. Tag me, on Twitter, on Instagram, where I'm also just for drums. I have all of these linked in here, but also in the main tab of my course right here. So you can always, always find me if you want to put them on LinkedIn, that be funny as well. But you obviously don't have to go to LinkedIn for this. But yeah, this is your start and to prompt based image generation, you can just go to a website, input, whatever you come up with. And it can even be branded things. Obviously, I don't want to show them right here because I don't want to get in trouble with Skillshare or in trouble with whoever's brand it is. But play around with it. You will be surprised how well this does on different things. Isn't this incredible? And this is two months old. This started existing in August 2022. This is mind-boggling. Honestly, I work in this and this blows my mind that we can do this now. And this is available for free to play around. And yeah, we'll we'll have a look at the license later. But this is available for you to use commercially, non commercially, just in an ethical way. And honestly, this is incredible, isn't it? So we have some very simple ones. And remember, play around with this, figure, something really funny, owl, weird out. Whenever I show this to friends, they can stop making it funny Pokemon or imagine what, uh, one was really funny. Someone imagined a cow writing and Monterey because we were just diving and we saw a mantra rays. So really play around with it. But also in the next class, we're going to have a look at how to customize this a little bit and write better prompts. So how can we change these sentences to get different outputs and to fine-tune what we get out of this AI. 4. Getting Creative with Watercolour Styles, van Gogh, and Hyperrealistic Art with AI: Welcome back. So in the last class we had a look how we can basically form basic sentences, how to use the website and just play around with it a little bit. In this class, I promised you watercolor. I promised you Van Gogh. I promised you hyper realistic paintings or drawings rather. And how is this possible? Well, this machine-learning system, I often don't call it a, it's called AI because that makes you click on this class. But in my eyes, this isn't sentient, right? So this is a system that has seen, but this billion with a B, this has seen 5 billion images, or rather a subset of those images that some people said were aesthetically pleasing. And with those images, it also got a description of what is seen in this image. So this machine-learning system is learning how images look like. And it is learning how these are described. So that way, it can learn how to make these images out of our descriptions. 
And in those images, we have van Gogh, we have watercolor pictures. We have basically any form of photography that you can think about and drawings. And so we can really fine tune the sentences and get a very different outcome than before. So let's have a look at how we can change our very simple sentence and make it a little bit more interesting and more on the style that we're really interested to generate out. And now, after your first interaction with this prompt based AI, you may think that there's no real art behind this, right? So there's a lot of criticism around submitting this to digital art competitions, e.g. and like, there's some merit in this, but we can definitely get very creative with our prompts. So instead of just talking about the New York skyline, which would give us a fairly normal skyline. We can now start thinking about how we can modify this prompt to look more like what we actually want. So there's a lot of randomness. Of course, in this bud, we can do prompt engineering to make this look even more realistic. So e.g. if we wanted this to be a better picture, we can add picture for k and also eight K. These are just little tricks that you learn when working with this. And a way to find these out is by looking at what other people used in the prompt. Now, looking at what other people did can be a little bit tricky because not everyone is producing safe for work content. So take this all with a grain of salt. Especially when looking on Twitter for these kinds of things, you'll probably, well, there is no safe search, so just be aware that that may not be the first place to look. So we can see here that this looks very much like those cheap images that you can get on Amazon. But this one looks like an HDR picture. This one has nice depth to it. So very clear foreground and then the blue background. So definitely some ways to be able to modify this, but let's, let's think about this. We can also make this in 1920s gray-scale picture and see what comes out of this. Now, while we wait for this, there is another nice website that you'll find in the resources, which is lexica art. I already looked for the New York skyline right here. And if we wanted to find this one, we can see the prompt right here, Bob Ross painting of a New York cityscape. And see, see the different generated pictures out of this, which is very nice. I think. I'm just happy little accidents. And this is a collage of them. Now this one is quite a bit different. You can add line brush, minimal paintings for this. I'm not sure if this is overlaid Russian texts. I don't see it. See it. But yeah, you can change the time to make it more wealth, more accustomed to if you want the full moon, e.g. the reflections here, quite nice. We could try that out. So this is very different. Just by adding 1920s gray-scale picture. We have these very cool views of the New York City skyline. So let's see if we can make it at night. Selecting in some liver. And that way you can, you can fine tune how you want the AI to generate exactly what you want to, then be able to take it into Photoshop to further process it, to like clean up some of the mess that you can see. This is interesting. I'm sure there was an apple somewhere here. But yeah, you can see right here that this has much more River in it. This one is quite nice, I think. Beautiful. So this is how you can really get very, very custom with what you want. Yeah, get more into, into the style you want as well. So what happened if we had the skyline at night? 
I don't want it to gray scale anymore, but painted in watercolor. I think this is also very good to get inspiration because we often have a very particular image in our mind. And especially when we work with clients or other people, then it can be difficult to come up with a variety of different ideas. But right here we can see four different color palettes and also slightly different styles and how you would paint this. The paper has a very nice green. Here. We can see the reflections as well. And how this is done. And this obviously very different. I'm still very nice paper grain. We can see if we can maybe change this on linen texture. Maybe that works. That way. I get like more diverse idea before we come up with different pictures that we present. Our clients are, well, if we're doing this for ourselves, we can definitely get different ideas for this and use this as an inspiration without the fear of accidentally copying something that we like a little bit too much, which it happens. But it's not that great. But yeah, I'm here. We can see a little bit more of that texture as well. Yeah, this is nice. This is how you can get easy inspiration for your watercolor. Now. Well, let's, let's move away from the New York skyline because we've had a lot of this now. So how about we talked, we have a look at the catcher in the Amazon forest. Now this has got an acute, we have different styles. Again. This looks like something you could find on a travel Instagram, to be honest. But what if we have this in the style of Van Gogh? I think it is already pretty amazing that we can get pretty good pictures of Pikachu and prolly other Pokemon as well, or whichever. Like, funny thing you see. But we can also generate like Van Gogh like images from this. Obviously it has seen a starry night. Now this is slightly terrifying to me on this, but sometimes we do fuel our nightmares unfortunately with this. But yeah, you can generate these different styles maybe you're not a fan of and go, which I can't fault you for. He's one of my favorites, but that's totally okay. Maybe Edward Munch from the screen, I think that's what it's called in English. And that way get very, very different outcomes from basically the same prompt. I mean, this looks very close to what I would expect. This is what we call prompt engineering, or basically changing our prompt to exactly see what, what we want out of this. Let's take another one where I promised you a hyper-realistic art. So hyper-realistic art of a cyber punk bowl with fruit in it. We'll see if this works. Hyper-realistic can be, can be a nice modifier. I like to use it to get these very bright pictures that look very nice for k. K is to get very realistic, like photographs. And yeah, this, this looks like something I would expect. This is very funny. And here we can see that these are basically hyper-realistic drawings, are, well, they are drafted up in this style of hyper-realistic drawings. This is very cool. And that way you can change something that you already have and put it into the style of something you want. So if you go back to the cool pictures that you had before, then you can essentially now play around with them. But I want to show you another tool that you can use to get inspiration for your prompts. Let's have those slowed while Fraser is another free tool, it tries to get you to the dream studio better as well. Oh, this is adorable. I'm going to keep this for later. Yeah. So this is what happens if you have hyper-realistic instead of like pictures. 
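Before we dig into that tool, it is worth spelling out the pattern we have been using the whole time: a base subject plus comma-separated style and quality modifiers. A tiny, purely illustrative sketch of that idea follows; the modifier lists are examples, not an official vocabulary.

```python
# Illustrative only: "prompt engineering" in this lesson is just a subject
# joined with style and quality keywords. Swap in your own lists.
subject = "the New York skyline at night"
styles = ["watercolor", "1920s grayscale photograph", "in the style of Van Gogh"]
quality = ["4k", "8k", "highly detailed"]

def build_prompt(subject: str, style: str, quality: list[str]) -> str:
    """Join a subject with one style and any number of quality keywords."""
    return ", ".join([subject, style, *quality])

for style in styles:
    print(build_prompt(subject, style, quality))
# -> the New York skyline at night, watercolor, 4k, 8k, highly detailed
#    ... and one line per style
```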
But anyways, Fraser, so this will point you to the Dream Studio Beta, but you can get all of these as well. So we'll have a look at Fraser in our next lecture. Isn't it interesting how we can change just a little bit about our prompt and we get such different outcomes out of it. I love this, like this is so fascinating. Yeah, I, the resources. I have a small book that goes through some different styles that you can have a look at. It's free. I made this for this class. So please download it and if you want to share it, go ahead. But like, be nice about it please. This is for you. This is for you to just check out what different styles you can do and play around with and eventually get your project going. So really, this is for you as a help, so you can use AI to your advantage. And in the next class, I want to introduce you to a tool or different tools that you can use to get some inspiration and some better information. How to generate more diverse prompts or different prompts. Or maybe it's something you didn't think about. I was really surprised at some of the things that work really well for, for generating better, better inputs to have nice images or images closer to what I imagined. So, yeah, see you in the next lesson. 5. Using Tools and Lookbooks for Prompt Inspiration: So I may be a little bit too analytical for this one, but I really like to read what other people have been doing this or see information how other people generate really pretty outputs. So I use other tools for this. E.g. in the resource section, I linked to a couple of different look books. Essentially, like there's the gallery, gallery where you can basically have a look at a little book that has different art styles in it. And I'm going to link to a Google Doc as well that has some different styles and with examples which I love, I also created a little e-book for you that you can download in the resource section. Which is also a thing where you currently Skillshare sometimes changes, but I think right now that is where you would upload your project later. So keep that section in mind. It has a lot of value packed into it for you. And yeah, now, let's have a look at two websites. One way you can look at art that was generated by other people. And you can actually look at what prompts they use to generate this out. And then we also have a tool that actually can generate these prompts for you and you just click through it. And in the end, you get a nice succinct prompt that should generate exactly what you want. And then you can delete stuff and play around with it to really see and fine-tune what you're getting out of this machine-learning system. I'm always baffled by the generosity of people, by collaboration, what people can achieve. These documents and these websites are just amazing for, for us to find out what we can actually do with this ai. And I think for a beginner like you, this is a fantastic way to find out what you can actually achieve by using stable diffusion when you're taking this class, which might be a couple of months from now. That means there might already be new tools right here that you can play around with. Or there might be another free or pay for tool to create prompts. This space is moving extremely fast. But either way, what you're learning here is applicable to most of these models. So we select stable diffusion, which is what we're working with right now. Then we can see are maybe we do a 3D render right now. 
You can do it in different languages, but we'll keep to English right now because this class is in English, it checks for you if your prompt has enough of that information. Let's take the sleeping cat in front of a fireplace again. And it says, okay, this is good. These are similar prompts that we had before in all these different models. And let's click Next. What style do we want? Well, we can take this Blender guru, Leonardo da Vinci, Vincent van Gogh. You saw all those before. Let's, let's have Picasso, give it a go. The coloring. Well, I think we should have a nice orange maybe. And if there's a texture, we played around with this a little bit before. I think. We can, we can take any of these, maybe the night sky. Now, the resolution that we're aiming for is medium. Right now, this doesn't change how large your image actually is. It changes how it looks. And now we can also give this a feeling. And we want this to be, maybe I want inspiration as nice. And this one is contemporary. Now, this whole login thing. This stumped me the first time as well. But we can just click on go full page right here. And we can see orange 3D render made off night sky texture that the data copy this over. I saved that cute cat. Oh no, it's busy. So we'll get back in a few. Okay, here we go again. It is rendering for us. Now this prompt is not natural. Natural, obviously. We're often talking about natural language processing. So we can see right here that this is something, it's not a 3D render. But also like this prompt seems a little bit overloaded. So let's take this night sky texture right out. Let's change this render. Seems to be that that time of the night where the application is a little bit busy. As long as I don't get banned as a spammer because I'm having a lot of fun with this. Everything's fine. Um, but yeah, you can see that the Fraser isn't always the best for everything. But you can definitely get some inspiration for your prompts and see what you come up with right here. This is cute. I'm not sure if it's definitely a Picasso, but it has orange in it. Obviously it's not a picture because it's, it's a, it's supposed to be a Picasso. It's sleeping cat in front of the fireplace. And this way you can, you can really get, get different ideas by using these tools. I'm also linking a phrase book for the tool called Dolly. But just different ways to get inspiration. And when you look through this four different art, the lexicon art or these places, you can get a lot of inspiration of what to do. This long exposure might also work really well. Essentially they're all, I'm explanations are no descriptions of pictures that are on the Internet. So anything that someone used to describe a picture that they upload it somewhere on the Internet is probably used in training, is used as something that this AI has seen before. And that you can then replicate to make something similar and just get these really interesting images out of it. And yeah, yeah, tilt shift is also a good one. I really liked tilt shift, but I think this is it for Fraser at the moment. Just another tool. You'll definitely find others that can help you generate other prompts, better prompts, just different prompts. And in the next class, we'll actually have a look at how stable diffusion works. And don't worry, I don't wanna get into code. I don't wanna get into math. But I want you to understand why I don t think this is an ai. And I think you should understand your tools as well. So it's really great. 
It's an opportunity for me to nerd out and for you to learn how this actually generates these beautiful pictures. 6. What is the Stable Diffusion "AI"?: In this lesson, we'll have a look at Stable Diffusion, the algorithm that we're working with. It uses a really neat trick, and I think everyone who read the paper was baffled by how simple the idea is and how well it works. So machine learning is this fairly new computer-science thing where we basically show a computer, and an algorithm running on that computer, a lot of data and have it figure out relationships within that data. In this case, we're showing it images, billions of images with descriptions. The really cool part is those descriptions: we can turn them into numbers, because we've been doing text processing on computers for so long that we have figured out how to turn text into numbers the computer understands. And images, well, they've been digital for a while as well. So we have both components in a form a computer can understand. And now, this is the fascinating bit, we take the image and slowly make it worse. Basically we're adding TV static to it, which we call noise, and each round we put a little bit more noise on that image. The fascinating bit is that we can train this machine learning system, our AI, to recognize that noise. But we're actually doing it the other way around: we have our image and we have the version that is a little bit worse, and we're teaching our AI not the way we made the image worse, but how to recreate the original image, by teaching the AI to get rid of the noise on our deteriorated image. And this way, we can take this AI that has learned how to reconstruct our images and apply it to something that was noise from the beginning, and generate images out of it, because it is slowly generating something like what it has seen before. So one of those beautiful images of a landscape, a watercolor, a Van Gogh painting: it generates something like that out of just TV static, which is fascinating. I'll show you right here. We have a starting image of just color noise, which is basically TV static, but colorful. Then we run our AI on it once and we see that the noise changes, but nothing really happens. And then suddenly it flips and we get an actual image out of it, because so much noise has been removed. So we're generating an image out of nothing, out of randomness, out of noise. And I think this is such a fascinating idea. Now you may be wondering: but how does it know what to generate? Because right now it's just doing something. Well, because we have this description, we can always tell the AI that what it is reconstructing right here from this noise, this image, is this description. So basically we're nudging it in a direction. When we give our AI a sentence to work with, we are nudging this removal of noise in a direction that it knows, like a watercolor image. It is now removing the noise the way it has learned noise usually looks on a watercolor image, and it knows how to remove the noise on a Van Gogh. So it's really this very smart way of mixing these media, text and images, and using them together to get your computer to do basically magic.
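If you like seeing ideas written down as code, here is a heavily simplified, conceptual sketch of that loop. The names are placeholders made up for illustration, not the real Stable Diffusion internals, which work in a compressed latent space with a learned noise schedule.

```python
# Conceptual sketch of reverse diffusion: start from pure noise and repeatedly
# ask a trained model to predict and remove a little of that noise, nudged by
# the text prompt. `denoiser` and `encode_text` are placeholder stand-ins.
import numpy as np

def generate(denoiser, encode_text, prompt, steps=50, size=(512, 512, 3), seed=0):
    rng = np.random.default_rng(seed)
    image = rng.normal(size=size)          # pure "TV static" to start from
    text_embedding = encode_text(prompt)   # the description, turned into numbers

    for t in reversed(range(steps)):
        # The model predicts which part of the current image is noise,
        # conditioned on the text so denoising is nudged toward the prompt.
        predicted_noise = denoiser(image, t, text_embedding)
        image = image - predicted_noise / steps   # remove a small fraction each step

    return image  # after enough steps, an image "appears" out of the noise

if __name__ == "__main__":
    # Dummy stand-ins so the sketch runs end to end (they produce noise, not art).
    dummy_denoiser = lambda img, t, emb: img * 0.02
    dummy_encoder = lambda text: np.zeros(77)
    print(generate(dummy_denoiser, dummy_encoder, "a cat by a fireplace").shape)  # (512, 512, 3)
```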
Of course, there's a lot of math behind it and a lot of little tricks that are used. But basically what's happening is we're nudging this machine learning system in a direction using text, and it knows how to generate images by this reverse process of removing noise from pure noise, making it into the image that it has been nudged toward by the text. And that's it. It's fascinatingly simple, and I think it's a beautiful concept. This is also why I don't think this is AI. It is a very smart algorithm, it is fascinating, and it does things that were never possible before. But it is not sentient; there is no consciousness in there. We are just showing it lots of pictures and getting it to do exactly what we want, which is generate pictures from sentences. And yeah, I think that's beautiful in itself: a way to take a simple idea and make it work with billions of pictures and your ideation in the end. So with that in mind, I want to go to the next lesson, because I think this understanding of what it is actually doing is enough for you to work with it in a way where you can say: okay, I get what it's doing, it's just working with noise. It's not smart, it isn't doing anything clever. All the cleverness was done by the people that created it, the people that came up with this idea of making images worse and teaching the system to remove the noise again. But there is no inherent artificial intelligence in this system. Still, it's a lot of fun to work with. And in our next lesson, still within the same class, we'll have a look at how to create different styles of photography with this AI. 7. Make Stunning Fake Photography with AI: In the very first lesson, I already showed you how to make a couple of nicer pictures. These pictures were in the style of actual photography, but we can do a lot more. In this lesson, I would like to explore a couple of different styles just to show you what we can do. I don't want to dive too deep, because this is an introduction and we could go over hours of material on fine-tuning what we want out of our pictures. But this should give you an idea, if you've ever tried photography, of how to get different styles and different kinds of perspective in the picture that is generated out of the sentences you give. Let's go back to the website, back to our New York skyline. When we start this, we get a fairly basic image. We can see, well, we already did part of this; we have all these different ones, some of them obviously like product photography. The simplest way to get higher-quality images, and to make sure it's a photo, is putting in 8K or 4K. This is because pictures online were often tagged with those when they were high-resolution images, and large images often mean a good camera. My camera can photograph in 8K, I'm pretty sure. But yeah, this is already much nicer; we can see a nice blue hour here, I think. Generally this is one way to go about it, but we can always go through our photography vocabulary and see: maybe we want a nice tilt shift in here, the one that we saw earlier on Lexica.art. And that's just this one right now. I think the one before wasn't ideal; maybe the mix of 4K, 8K and tilt shift didn't quite work. I think this one is getting closer. We can see the tilt shift right here. This is looking okay. I think we can try a couple of different other ones.
Maybe we try some long exposure. It might be worth it to add some traffic in there, maybe. Oh yeah, we can see this already looks like it was taken over a long time; this is obviously at night, but with a very light night sky from taking long exposure shots. And of course we can try HDR. This is a popular one; people like to take pictures like that, but it is also very colorful. Yeah, this is how some HDR photos do look. I think this one is probably the best of the lot. It looks very nice, very popping. I like this, this is good. Something else we can try is maybe depth of field. We can check if this looks nice. It does. So these are ways you can play around with actual technical terms out of your field, photography. If photography isn't yours and you'd rather work with acrylics, you can add technical terms that describe those styles to your prompt instead. So experiment with it. If you have a couple of nice ones, you can also create a collage to post as your project. Isn't that neat? I think it's fascinating how we can take these technical terms that make a lot of sense on an actual camera, and this AI is able to generate realistic representations and perspectives from those inputs, because it has seen so many of these images during the training, during the conception of this machine-learning system. So now that we know how to make these realistic-looking things, I think we should have a look at ethics and how to be responsible when working with this type of AI. 8. Ethics of Generative Art and AI: When we work with this type of system, we have an extremely powerful tool. And that means, basically, we have to go with the Spider-Man quote: with great power comes great responsibility. I beg you not to skip this lesson, because I think this is extremely important for artists and for people in general. There are different aspects that we have to consider when we use these kinds of tools. One of them is how these tools were trained. Basically, this machine-learning system only knows what it has seen before. It can generate creative new outputs, but we have unfortunately seen a lot of times with these systems that, for example, a machine-learning system was only trained to recognize white people and then had huge trouble recognizing people of color. So when we use these tools, we should have a look at how these models were trained, how this AI actually was created. And luckily for us, we have something called model cards that we can have a look at, which talk about biases in the data and biases in training as well. For example, this model has a strong bias toward aesthetic images. And we can go have a look at the model card and actually understand what it's doing. When we go to our Stable Diffusion demo, we can scroll down. We see the license right here, which is important for you if you want to use this commercially; check this out. But we want to look at the model card right now. Because, just like it says right here: despite how impressive being able to turn text into image is, be aware of the fact that the model may output content that reinforces and exacerbates societal biases. So, this model card right here, I need to log in for this. There we go, I was wondering why this was so light. Here we can see what this model essentially is trained on, what the dataset is and what thoughts went into it.
So here you can see what the intended use of this model is, and what misuse, malicious use and out-of-scope use are. Basically: don't use it to create demeaning or dehumanizing or otherwise harmful representations of people, cultures, religions, etc. Don't intentionally promote or propagate discriminatory content. Because of course this type of AI, especially without a filter, is capable of doing this, unfortunately. So we have to be ethical users of this powerful tool. Those are malicious use and misuse, but obviously we also have to have a look at the limitations and biases. I told you before that this cannot really do text. It is not perfectly photorealistic; I think we saw that when we were looking at making stunning photography before, there were some errors in some of it. It does not work perfectly on more difficult tasks; we saw that when I was mixing tilt shift with some other keywords. Faces and people in general are not great, so a lot of people then go into Photoshop to actually put in realistic faces. Then, the model was mostly trained on English captions, which is something you'll have to adapt to, and that is a limitation. Then there is lossy autoencoding, which is a technical term. And this was trained on LAION-5B, which contains adult material, so you can also get some spicy images out of this. And here are the biases: the dataset consists mostly of Western images, so the things we're used to, and they often carry a certain bias, coming from communities that are largely affluent, are on the internet and have access to cameras. So we're talking about mostly Western, white cultures in this, which will affect how this generates images and perspectives. Be aware of this, because we don't want to make things worse just because we're using a tool that makes life a little bit easier for us. You can read all of this when you click through to the bottom of the model card, and you can read about the dataset it was generated from as well; this is LAION-5B, for 5 billion. Just consider this: as users, we have to be ethical about the use of this. And of course there are other considerations, like that you should not generate images in the style of living artists, and so on. So really try to be a good person about this. Let's go on. In addition to the model card, we are also subject to the license of this model. This model is licensed to you and can be used commercially and non-commercially, free of charge, but they do ask you not to use it in a harmful way. Now, there are unfortunately people that use these systems to create fake news or recreate images of people that are in the public eye, for example, and that's not great. Another use that isn't exactly kosher is creating art in the style of artists that are still alive, because they're making a living with this, and just taking their style and saying, oh yeah, I created this, and maybe even selling that art, is a bit problematic, right? I think we can all agree on this. So consider the impact of what you're doing. Of course, you can play around with it and create some art that you just want to use for yourself; people have always imitated artists, and this tool just makes it even easier to do. But consider the impact of what you're doing, be aware of how this model was made, and be aware of the impact that you're going to have with your outputs.
So these original models that were created by OpenAI and Google before, they actually have filters on them against hateful speech, hateful symbology. And while they're there, this is an understandable way to go. Bad actors will always find a way to circumvent this and create harmful imagery with other means. Just create, finding ways around these filters, e.g. so this one is unfiltered obviously. But yeah, consider the impact that this can have and work directly with these kinds of models. Because this should be something that can celebrate the beauty of fantasy, of your imagination and shouldn't be used to make the world a worst place because we can all use a little spark and use a little imagination and fantasy and beauty in our lives. And we don't have to spell a good thing just because we're using it in a bad way. This is a problem that we call dual technology and machine learning. This can be used for good, this can be used for bad. People will use a nefarious, but I think don't tell anyone. But this is a little bit of a side quest for me in teaching this. By understanding how easy it is to use these and how, well, how you can get very creative with this. I think it's also more and more important for people to understand how these models work and how these could be used and if erroneously, to generate fake news, fake images. And yeah, I think you're in a better place by just understanding how powerful this is and how you can use this to make beautiful things. And yeah, this may be a little bit Pollyanna. But I believe we can all have a little bit of a spark in our lives right now. In our next class, I want, since this is a vibe. So in the next class, after this heavy topic, I wanted to talk about how to get the vibe, the feeling of the image right, that we're generating. So join me in that next lesson. Right now. 9. Getting the Vibe of Pictures Right: Now that we know how we can work with this responsibly, let's think about the vibe that we want to portray. And I know this may sound a little bit TikTok. But in the end, a lot of art, or a lot of what we do about our art is about the vibe, the feeling that we want to convey. There's a reason that as musicians, we choose a minor scale for most samba, or maybe even sad piece of music. And there's a reason why we choose more muted colors because, well, it's not a summer day. Maybe it's rainy. Maybe we're processing our pain. On the other side. Maybe we're using neon colors, using something sparkling to have joy, to have a party, a celebration. So we want to get the vibe right. And our descriptions often have emotions. So when we talk about art, when we describe odd, There's often emotion somehow woven into that kept. And our AI learned those emotions. So we can actually use that to our advantage to tweak the vibe that we actually get for our image. So let's have a look how we can convey emotion with these generative art pieces. When we're trying to get the vibe right. We already have a couple of ideas right here, where we have this nice gray scale or some golden hours and photography. But of course we can convey emotions like gloomy, for example, and we can see how that affects the output. Because often when we describe that as gloomy, well, often when we describe images online, we do assign them emotions as well. So right here we can see that this has way more clouds. We still see these images that we've seen the entire time with the with the Empire State Building featured prominently and just but yeah, very gray tone. 
We can tell it to have muted colors, for example. To also go more on the side of yeah, of some bonus, right? To not have it pop. But we can go the completely opposite direction and see how this looks in neon Cyberpunk. Wow, I love this image actually, I'm going to save that for later. These are just ways for you to play around with this. But also if we group Friends hyper-realistic, think that's written without hyper-realistic and then say happy. So they're all happy at the faces don't even look terrible in some places. Thad is another signifiers. So we can really modify what we're doing by assigning it emotions as well. It's duration. This almost looks like friends. So this is how you, how you can use the same style and the same motive, but change the entire vibe of the picture. So in an alley way, we can make this fork a picture. Give it an eight k2 because that's just how this works. And we can make this upbeat. So those are very light pictures. Very nice. But we can change this because Gotham in Batman usually has a dark vibe. It's still looks fair, fairly nice. But yeah, experiment with feelings and different vibes. This is much more likelihood so we can see some dark parts and in the back, we can add gravity to it. Although graffiti sometimes doesn't work. So you have to say spray paint on the wall. Yeah. I love this. Okay. This is interesting because it doesn't have a cap, but this is looking like a cat. But yeah, play around with this. Change the vibe of a picture just by adding little modifiers like this into it. And isn't it interesting how the AI interprets our prompt and changes how a, an image feels to us using the same image. I love this. I think this is fascinating. And yeah, with that, I just want to finish on our penultimate class where I will talk about a little bit what this machine learning system can also do. What other things you could be learning. And to be honest, this is a little bit of a chance for me to also prompt you for feedback. Because I would love to know from you what you would like to learn next. I had so much fun making this class. And I would like to do a follow up if you want that. So see you in the next lesson. 10. What to Learn Next: This isn't quite the conclusion, but this is about this. Understanding the power of the system. Because stable diffusion can generate these images out of sentences. But it can do more, much more. But for this, we would have to install it ourselves or use online tools to customize it a bit to our needs. And I wanted to keep this introduction as friendly to anyone that is interested as possible. So if you're interested to see how to set this up and use this on other online tools where you can do more with it or create more granularity about what you're doing with the ai. Let me know, write it in the review, write it in your project. I would love that feedback if you're interested. I might be able to make that class because that is my expertise. And the other things that you can do with this. Incredible, because you can't, well, you don't only have to create images out of nothing. You can also take an image and transform it into something new. With the text prompt. You can use this to do something called out painting, where you have your image and you then create what is around the image. So the AI comes up with a completion of the outsides. And we can do the same thing, reverse. We can basically take damaged images, whether we did that intentionally or if it happened over time. 
And we can have the AI dream up what this image should have been, where it has been damaged. So we get interesting new ways that this image is then repaired. And I think you can see how powerful this can be. And how you as an artist can use this to match up, well, extensions to your own work or variations on your own work. If a client doesn't quite like what you did there in this image, you can use AI to get new ideas, new inputs. And I think you don't always have to use this to give this rod to a client, right? You have your own style, you have your own ideas. But sometimes we're stuck in our own ways. So we can use this as inspiration. We can use this as a schematic and it can help us, well, help us get more diverse outputs and more diverse ideas. And that's what I love this for. What I think is the power of having this type of tool in addition to your own imagination. So you're now combining what you do, your expertise and your creativity with the power of 5 billion images that are baked into this tool. And yeah, that is what you can do next. So you can take this and not just use the website that has been built for you, but go to the code. Now, the next step isn't coding yourself. You don't have to know programming. The next step is to use code. Others have written and do slight tweaks to it. So you can use a full powered machine learning system, not with a nice website in front of it, but with a little bit of tweaking and generate all these fantastic things. And you can learn how to do all of those tasks instead of just generating something from your imagination. Now, if you're interested in any of those, please let me know. I would love to create those as well. And with that, I'm going to let you go on to the last lesson for our conclusion. 11. Conclusion: Congratulations, You made it to the end. I know. I think this is a fun class, but still you sat through a lot of material. I know that I tried to sprinkle in. Well, making more interesting problems and more interesting art. But we had some heavy topics in between. You'll learn how stable diffusion works with the noise and everything. And then we even talked about AI ethics and how to use this to responsibly. So we had a thorough curriculum right here for an introduction class for sure. So pat yourself on the back and after that, please show me your favorite image in the projects. I can't wait to see what you come up with. And if you are so inclined, also post your promise so others can learn from it. And yeah, maybe see variations on what you did and play around with your ideas and bounce off each other. So yeah, thank you for making it all the way to the end. Make sure to also leave a review. This helps others find this class. This helps me make future classes better. And also, if you find any topic that you would like me to go more into, especially if they're out of the last lesson where we talked about future learnings, then let me know. You can write to me. You can leave those in the review, you can leave those in the projects. I read all of them. And with that, thank you so much for being here. This was incredibly fun to make. I hope you enjoyed it just as much