Transcripts
1. Introduction: Hi everyone. My name is Oliver and in this course I'm
going to show you the exciting new field
of AI generated art, as well as giving you
all the information and tips to have you creating your own stunning AI art in just minutes. In terms of what we'll cover in this course, there will be an introduction to free AI art software and some background on what AI art is. We'll also have a step-by-step demo for creating your first stunning AI art using a program called Midjourney. And we'll also look
at how to construct a good prompt to tell the
AI what you want to create, as well as modifiers to create art exactly the way you want. The great thing is that
creating art doesn't require any specialized
programming knowledge or even much talent for design. That said, constructing
a good prompt requires practical experience
including knowledge of special techniques and
some domain expertise of your subject matter to closely replicate an image
circulating in your head. But we'll get into that
in the next video. So for now, let's get started with AI art.
2. What are Text Prompts?: Hi everyone. Welcome back. In this video module, we're going to look at how
to construct a good prompt. Now, as you may be aware to
create AI generated art, you will need to enter
what's called a text prompt, which is instructions for the AI about what
you want to create. And that could be as simple as a person sitting at
a cafe, for example. The AI engine will
then do its best to generate an image based on
the prompts you provided. Now, similar to how
people have their style for entering keywords or
phrases into Google search, there is no fixed method or precise code for writing a text prompt. This means a prompt can be a list of words separated by commas, such as "bear, tree". It could be a fragment of a sentence, "bear under a tree"; an imperative, "sketch a bear next to a tree"; or a full sentence, "a sketch of a bear sitting under a tree". Correct grammar isn't
necessary as long as your instructions can
be clearly understood, just like you're talking
to a human being. And in fact, the easier your prompt is to understand, the better. Importantly, you
also need to use natural language as that's the language the
AI is trained on. Natural language refers to human language, such as everyday conversational English or what you might write in a text message, as opposed to an artificial language
such as programming code. This means you should
communicate to the AI program more
like it's a human. And avoid using artificial
languages such as CSS and Python to write
your text prompt. However, keep in mind that there are exceptions, one being the specialized keywords or syntax that come with the AI software program, such as /imagine in the case of Midjourney, which must be the first word of your text prompt, and also keywords for dictating the weight, size, and other structural
aspects of the image. So in this example here, there are two keywords: one is optimistic and one is devastation. They have different weightings, and the weighting means that optimistic will have more emphasis than devastation, because it is double in terms of weighting.
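To make that concrete, here's a rough sketch of how a weighted Midjourney prompt can be written. The double-colon weight syntax is Midjourney-specific and can change between versions, and the subject text is just a placeholder of my own, so treat this as illustrative rather than definitive:

/imagine optimistic::2 devastation::1 a city street after a storm

Because optimistic carries double the weight of devastation, it should dominate the mood of the generated image.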
Now, in general, the more specific your prompt is, the more specific your outcome will be. At the beginning of this video, we looked at this
simple prompt of a person sitting at a cafe. While this is more than enough information for the
AI to generate relevant art, there's an opportunity here
to refine our prompt and generate a more deliberate
and nuanced output. And this is where modifiers
come into the picture. While your base prompt will most likely describe one or more objects and their relationship to the scene, the modifier applies
additional instructions regarding the stylistic design
you want the AI to take. So if we take our
original base prompt of a person sitting at a cafe, we can add a modifier in the form of a person
sitting at a cafe in the style of a 1920s
Art Deco poster. This will produce a much different output compared with the original output using that simple prompt we first had. Similarly, you can
also go back and edit your original prompt to add even more precision
to your request. So let's update it
now to something like a distinguished
middle aged man sitting at a Viennese coffeehouse
in the style of an Art Deco poster
from the 1920s.
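Putting that progression side by side, here's how the prompt evolved from base to refined (these are just the examples from this video):

a person sitting at a cafe
a person sitting at a cafe, in the style of a 1920s Art Deco poster
a distinguished middle-aged man sitting at a Viennese coffeehouse, in the style of an Art Deco poster from the 1920s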
Most AI software programs also have a built-in feature to apply variations that lets you riff on what you've already generated. And sometimes it can take perhaps ten or more variations until you land on the
exact look you want. Next, keep in mind that
most software programs come with a character limit. For DALL-E, the text prompt cannot be more than 400 characters, which is more than enough for just about anything you want to create. However, if you are really precise with a prompt and you choose the right words, then sometimes less
is actually more. And in fact, you can even generate decent results using just emojis, depending on what AI software program you are using.
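As a playful sketch, an emoji-only prompt could be as simple as the line below (whether it works well depends on the software you're using, so treat it as an experiment):

/imagine 🐻🌳

With a bit of luck, the AI will interpret this roughly the same way as "a bear and a tree".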
Also, a simple adjective like "Art Deco poster" already contains a number of characteristics. For example, rich colors, lavish ornamentation,
geometric shapes, and the actual material or medium of the art, in the form of paper, rectangular in shape, that you would otherwise need to list and define separately. As you can see, these style containers can be really efficient in terms of creating a text prompt. Artistic eras such as Art Deco, or a decade like the 1990s, will not only impact the style of the illustration, but also the fashion, architecture, and other materials present in the illustration, unless otherwise defined. If, however, you
are having trouble capturing the exact style that you want to create using broad terms like
art deco or grunge, you probably need to add some extra details, like the actual location. So for example, Wall Street in New York. You might also want to specify the weather, the time of day, or the political, economic, or social backdrop. So for example, Occupy Wall Street, September 11, the global financial crisis, or perhaps COVID in 2020. Now, for photography, you can also borrow domain
specific terminology to define everything
from shutter speed to lens choice,
lighting, framing, etc., or bundle all these attributes into a style of photography, such as an action shot, a National Geographic cover, a Japanese photo book, or perhaps a Vogue photoshoot. Likewise, if you want to create
a specific visual style, then you need to
understand the framing, which is the angle of the image; the material, or what it's made of; and the lighting needed for replicating that particular style. So this ability to articulate and describe visual art in words will definitely become a useful skill that you will develop over time, and maybe even send you down a rabbit hole as you learn the fine details and specialized terminology of many different subject matters. So for me now, every
time I watch a movie, I keep finding myself trying to articulate different scenes in my head, as if I'm taking mental notes to pass on to Midjourney or another AI software program as a text prompt that
I can use later. In addition to becoming more articulate with
our text prompts, you also want to do your best to understand the
details in the shot. So this means having
a good understanding of your subject matter. If you want to generate a
image of a medieval knight, then you need to
understand the fashion, social class system, and the weapons of that
particular period, including the names of
some famous knights, in order to be specific. So for example, there's a difference between a Templar knight, dressed in white with a red cross, and a more general English knight from an earlier period, each donning their own unique crest and fashion. Now, for most of this section, we've talked about
how important it is to be detailed and specific. You can also go the
opposite route and let the AI come up with its own ideas. Thus, rather than using
a detailed prompt, you can hold back and use a
more vaguely defined prompt. This technique might be
useful if you yourself don't know what it is that you're
really trying to create, or perhaps there are few existing references that you can think of, such as a stadium built in the metaverse, for example. You also want to keep in
mind that the same prompts will come out differently with
different AI software. So Meet Jennie, for example, skews towards a more
artistic style. Whereas Stanford
diffusion sits on the more conservative
and liberal end of the design spectrum, I'll give you something
much more realistic. In either case, you
may need to provide some more instructions
to your texts prompt. If you wish to overwrite the natural style of the
soft program you are using. Now, while the AI soft
programs are often highly reliable and matching your prompt within
several attempts. Sometimes it will take
a few failures in order for you to get the right outcome that
you're looking for. So just remember
that you can make variations to the
art that you create. Next, we want to talk about
some of the common reasons why your art doesn't turn out
the way you wanted it to. So there's primarily
two common reasons. And one is that your
prompt was ill-defined. And the second is
that the AI software was not trained on images relevant to your
particular texts prompt. So Delta E, for example, that was trained on
650 million images with text descriptions
or captions. And while it's quite likely that the AI learns from
new data over time, there may be a time gap between something going
mainstream and the AI learning what that thing is as it actually has to
go back to school, so to speak, by studying
training examples. One way of thinking of this is if there's a new
movie that comes out and everybody else has seen
it, but you haven't seen it. So you can't really talk about it or make any
jokes or comments. So in this case, that's the AI. Maybe people on the street
have seen this movie, but the AI hasn't
seen the movie yet. So it can't really be a reliable source for remixing
content from that movie. In some way, it might
be possible for the AI to easily
replicate the style of an established artists
like Van Gogh or to create a politician
like Barack Obama. There could definitely be
a delay before the model is reliable at
recreating new trends, subjects or words as images. Thus, the AI model doesn't know everything like an all-seeing
genie or a godlike figure. It only knows what
it's been trained on. This means you can't tell the AI to reliably reveal the exact face of, say, Satoshi Nakamoto or Banksy. Of course, you can still ask the AI to make an attempt based on images already in circulation, but obviously, the result will not be reliable or accepted in a court of law. Likewise, if you use the word "me" in your text prompt, then the AI really has no idea what
you're referring to exactly. That means it has
no way of adding your own physical
resemblance to the image, which makes your
prompt ill-defined. Even if you are famous, your name is out there in the public domain, and there are portrait images available, you still need to spell out the exact name of that person, which in this case is your own name. So maybe that's Elvis Presley or Michael Jackson, etc. And maybe you also need to add some context in case there are other people who have the same name. So you might need to say that you are X person from this particular industry, or famous for this music genre, et cetera, in order to be more precise.
3. Framing Techniques: Hi folks. In this quick video, we're going to have a look at framing. This is a super important part of capturing your artistic vision. And no, we're not talking about picture frames here. Framing influences the
angle of the visible field, including the width and height, and controls how the content appears in terms of perspective. Examples of framing
include point of view, over the shoulder, long shot, close-up shots, and isometric. Generating an image of a person making a speech to a room of people will yield different results from an image framed from the point of view of the audience, for example. Likewise, framing the
view of a city from an isometric perspective
will provide a dramatically different
view than if you use a plain text prompt with no
specific framing information. Isometric, by the way, is
a fun technique to learn. It's basically a method
for visually representing 3D objects in 2D and is often used in technical
and engineering drawings, such as town planning documents. Framing, I'd say, is actually one of the key differentiators between a novice and an advanced user of AI-generated art. Most people focus on
the subject matter and the adjectives for describing
the scene in their head. But often they overlook
the field of view, the angle, and the perspective
of the art itself. If you have any experience
in TV and film, framing will probably
come naturally to you. But if not, it's a relatively
easy skill to learn. Once you've become
more acquainted, you will start to notice how
these techniques are used in the media or your
favorite Netflix show to add different effects. If you want to geek
out on framing, you might like to check out the NFI.edu resource, which has 80-plus different framing angles that you can review, such as the cowboy shot, which shows the subject from mid-thigh up. And you can use
those techniques as inspiration for
creating your own art.
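To show how framing slots into a prompt, here's a rough sketch using the framing terms from this video (the subjects are just placeholders of mine, and the exact wording is flexible):

a city skyline at night, isometric view
a person making a speech, over-the-shoulder shot
a detective on a rainy street, cowboy shot

Swapping only the framing term while keeping the subject the same is a quick way to feel out how much the angle changes the final image.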
4. Tour of AI Art Software: Hi, welcome back. In this section,
I'm going to give you a quick run down of the most popular and
powerful software options in the market for creating
AI-generated art. Now, in terms of the leading applications in this field, there's definitely no going past DALL-E, but there are also many others, including Midjourney and Stable Diffusion. But let's start with DALL-E anyway. DALL-E, or DALL-E 2, which is the latest version, is a revolutionary AI
software developed by OpenAI. This is a San Francisco
based research lab. Now, unlike some of the
other image generation algorithms out there, DALL-E works with both abstract and realistic image generation. Its algorithm is also capable of understanding
complex sentences involving multiple objects and their attributes
including color, shape, size, style,
lighting, location, etc. So for instance, you can
easily generate an image of a red rectangle next
to a blue triangle. You can also generate
more abstract images, such as a pink cloud with
two eyes and a mouth. What I like about DALL-E 2 is the clean web user interface, which is super easy to
operate and navigate. And you'll start to appreciate
it more after you use other software options. In terms of getting started, after signing up for a new account with them, you can type in your text prompt, or you can choose to upload an image and then edit it directly, or generate some variations based on the original image. Images that you generate using DALL-E are then saved to your account for
convenient access later. For me, one of the
best features of DALL-E is the image upload tool, which is useful for manipulating existing art and design elements to create novel combinations. To illustrate, we can take an existing photograph and add a unique element, such as a flying pig or a robot. This opens up tremendous
opportunities for creative exploration, also allowing artists and
designers to easily explore new concepts and generate
inspiring visualizations. Next up we have Midjourney, and it also really stands out among the first generation of AI software, mostly for the fact that it's far more chaotic and complicated to use. Unlike DALL-E and most other AI software, Midjourney wasn't released as a standalone website
application. Instead, you'll need to use Midjourney through Discord, which is a popular chat app prominent in the gaming and crypto space, in order to create your AI-generated art. On top of that, there are thousands of other users creating and modifying AI art on the Midjourney Discord channel at any given time. So there's a constant flood of text prompts inside
the chat rooms. This can be distracting, but the shared work
environment does offer a valuable
opportunity to gain inspiration and to
observe the workflow and results of other users. In terms of getting started, once you've joined the Midjourney channel on Discord, you'll be able to create images using the /imagine command, followed by your text prompt, which is the instructions on what you want the AI to create. Midjourney offers everyone an initial trial of 25 queries, which means you can generate 25 images. And then there are several options to buy a full membership with unlimited or more generous image generation limits, as well as commercial terms. For me, Midjourney's primary
strength is that it offers an advanced range of parameters to customize your artwork. This includes things like
the resolution of the image, the complexity, and also
the quality of the image. It also provides tools for
integrating animation and motion into your visuals to help them stand out even more. This makes Midjourney really ideal for users willing to learn and experiment, enabling anyone to create beautiful works of art tailored to their individual style. In terms of its artistic output, it's also said that Midjourney leans towards a more abstract and surrealist design style. And it's particularly good at outputting futuristic or cyberpunk-style art. The software, however, is less well-suited for
generating realistic art compared to, say, DALL-E. So to compensate for this bias towards abstract design, you may need to add the terms realistic, photo-realistic, or realism to your text prompts. And lastly, while Midjourney has a steeper learning curve than its website-based peers, it's definitely well worth the investment. Okay, next we have starryai. This is a software
application and also a mobile app that
lets you generate NFTs, which are non-fungible tokens, using text prompts to transform your words into works of art. The platform allows users to publish NFTs on different blockchains, including Ethereum and Binance Smart Chain, enabling these NFTs to be easily distributed and traded
on various networks. In terms of designing your art, there are a few different options here, including aspect ratio. You can also use an initial image to customize your creations. And there's also a variety of different models to choose from. Importantly, starryai also gives you
full ownership of your creations which you can use for your next
creative project. You can print them off or
share them on social media, or even sell them as an NFT. So this gives artists an extra avenue to monetize
their creativity. However, note that while your creations belong to you and you can do whatever you like with them, you're still subject to copyright laws in your jurisdiction, and you may need special permission from the copyright owner of any input images. Lastly, using starryai, you can generate up to five artworks for free daily, and without watermarks. However, you will need to buy credits to enjoy full usage of this service. Next up is NightCafe. This offers a similar design suite and pricing system as starryai, and lets you generate up to five artworks for free daily. Now, while starryai focuses
more on NFT creation, NightCafe lets you print your creations and have them mailed to your house. However, rather than, say, print your designs on a t-shirt or clothing item, NightCafe allows you to print your designs as a poster on thick and durable matte paper. In terms of design elements, it's very similar to starryai. There are a few
different settings here, including artistic style, aspect ratio, and prompt weight, as well as image resolution, that you can play with. For that reason, I think NightCafe is definitely worth checking out. Okay, moving on to Stable Diffusion. This is another powerful tool for artists and designers. Unlike DALL-E and Midjourney, Stable Diffusion has
an open source policy that lets you bypass the blocked content
restrictions that you might encounter on other AI software. This is definitely a big differentiator. With no account sign-up required, and fast image processing of approximately 10 seconds, you can start generating images immediately. The downside, though, is that with the simple user interface and the lack of sign-in options, you can't easily view the images that you previously generated; instead, you have to save the results to your computer to access them. Stable Diffusion also
currently lacks features compared to its software rivals, such as image uploading and aspect ratio controls. However, the developers recently added a negative prompt box, and new features are said to be on the way. Also on the plus side, Stable Diffusion is fast, it's free to use, and the results are far superior to other free AI software. In terms of its artistic style, Stable Diffusion is similar to Midjourney and favors a more abstract and surreal style of artistic expression. As with other software
programs though, it does tend to struggle
a little bit with capturing symmetry
in human faces. Lastly, we have Craiyon. Craiyon was actually previously known as DALL-E Mini, because a number of the team members involved with this project were also involved with DALL-E, meaning there is some crossover in terms of product development and history between the two products. The user interface for Craiyon is delivered through a web browser that
you can access on your PC or mobile device. There's also a mobile app
available for Android users. The software is currently
free and has unlimited usage. So you can really go nuts here. You can create as
much as you want. And in fact, you don't even have to sign up for an account. The downside, though, is that results are quite slow to generate, at around about two minutes, and sometimes quite low in terms of resolution. The images you create using their software are also for non-commercial use only, which means Craiyon is more suitable for creating fun images to share with your friends and play around with, rather than for commercial design scenarios. Craiyon also offers a print-on-a-t-shirt service, which is kind of cool. For around $29, you can purchase your AI art design on a t-shirt, hoodie, tie-dye t-shirt, or a long-sleeve t-shirt. Obviously, this is one
of the ways Craiyon monetizes its art services, in addition to donations and the ad banners that dominate the website. So I think Craiyon is a good starting point as you begin to master text prompts. And we'll have a video tutorial of Craiyon coming up
in the next video.
5. Getting Started With Craiyon: Hi folks. In this video, we're going to quickly explore Craiyon, which is one of the quickest and easiest options to start your journey with AI-generated art. Craiyon was actually formerly known as DALL-E Mini, as some team members were involved with both products, meaning that there is some crossover there in terms of product development and history. However, unlike Midjourney, the user interface for Craiyon is delivered through a web browser that you can open on your PC or mobile. And there's also a mobile app available for Android users. In terms of getting started,
there are far fewer steps to creating your first art piece than Midjourney, for example. There's no sign-up, no verification, and nothing to download. In fact, all you have to do is go straight to craiyon.com, enter your text prompt in the search bar, and click the orange button on the right. So it's super easy. Next, you need to wait approximately 30 to 90 seconds for the model to generate
your art request. And once it's done,
it will spit out nine options in a
three-by-three grid for you to select from. In general, the
quality of the art is not bad, as you can see here. The image quality, meanwhile, is okay, but not really optimized for production level, and is certainly a lower resolution than its competitors, including Midjourney and Stable Diffusion. Now, looking at the other
options here on the screen, we can also take a screenshot
which will download the three-by-three matrix
grid to our computer. And on the left is a print
on a t-shirt option. So if we go ahead and click on that button, we can see a mock-up of our design on a t-shirt. There is then the option to buy, which we can check out. For $29, you can purchase your design on a t-shirt. Apparently the t-shirt's material is organic and renewable, so that's nice to know as well. There are other clothing options here too, including hoodies, tie-dye t-shirts,
long-sleeved t-shirts, etc. Anyway, I'll leave it here. Obviously, this is one of the ways Craiyon monetizes its art services, along with donations and the obvious ad banners plastered across their website, which is a little bit annoying, to be honest. To sum up, the
obvious key advantage of Craiyon is that it's quick and easy. It comes with unlimited usage, and you can generate art in minutes, download those images for free, and then use them for non-commercial purposes. You should also note that Craiyon asks users to please credit Craiyon for the images where possible. Now, in terms of drawbacks
to using this free service, if we look at the
tools and features, there are far fewer options than Midjourney, for example, and other platforms, in terms of remixing your art, uploading your image as input data, or controlling the actual details of the art, including the aspect ratio. Also, as mentioned, the image quality is a little bit lower in terms of resolution than other options
on the market, which makes Craiyon a fun tool to share with friends and
for creating mockups, but probably not suitable for enterprise needs and
commercial usage. Craiyon can also be a little bit hit-and-miss in terms of output. I've found, for example, that
it can really fall apart when it comes to
rendering human faces. As you can see in
this example here, I've asked for a
photo-realistic image of a TV anchor
presenting the news. And of those nine image results, there's really nothing
I'd want to use here. Thus, for human faces, I would definitely be shopping around for another service option. However, for the Chewbacca dog, I was actually quite happy with it. I thought that was pretty good. So definitely have a play around here and test Craiyon out. At the very least,
it's a fun and easy thing to do between doing your real work and
a good way to get started with generating AI art.
6. Getting Started With Midjourney: Hey, and welcome back. In this video, I'll walk you through Midjourney, which is one of the best software options right now to start creating AI art in 2022. Using Midjourney, you'll be able to create 25 images before needing to subscribe for a membership with their service. But for the purpose of this video module and the following project, you don't need to spend a dollar. Okay, so to get started, the first step is to register a Discord account
at Discord.com. Discord is a popular instant
messaging social platform, which is also free to join. After registering
a free account, you can choose to download discord onto your
computer as an app, or open up discord directly
within your browser. Both options are fine. Once you are done with
setting up Discord, you can jump over to midjourney.com and click on the Join the Beta button right here. This will automatically redirect you to Discord, either in your browser or to the desktop app if you've already downloaded it, which is what I've done. Now that you are inside Discord, you need to confirm that you are inside the official Midjourney server. This means you should see a green tick when you hover your mouse over the Midjourney icon. Once you've done that, you can check out the welcome message, which has some key practical information laid out for you. So for example, as I mentioned earlier, Midjourney offers everybody a limited trial of 25 images, and then several options to
buy a full membership with unlimited or more generous
generation limits, as well as commercial terms. Then, in terms of getting started, we need to go to one of the newbie bot channels, of which there are many. Next, you need to type in /imagine to start the prompt, and soon the Discord bot will send you four images in about 60 seconds. So let's go ahead and
do that by clicking on this newbie channel here via
the left-hand sidebar. If you don't see the
newbies channel yet, check that you are using the official Midjourney server, or try resetting the Discord app or web page. Also, it doesn't matter exactly which channel you choose, as long as it's a newbie channel. Once you're inside a newbie channel, you can check out what other
people have been creating and take some notes of what
prompts they are using. Then when you're ready
to start creating, you can click here
in the chat box at the bottom and type in /imagine. As you start typing, you'll notice a tab pop up above your text, which you need to click on in order to confirm that you want to create an image. Now you can enter the text prompt you wish to use. Note that Midjourney also asks you to respect their content and moderation policy by keeping it PG-13 and avoiding upsetting imagery. As part of this demonstration, I'm going to use Barack Obama playing bass guitar, with no specific style defined.
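For reference, the full command as typed into the Discord chat box looks like this:

/imagine Barack Obama playing bass guitar

That's the /imagine command plus the prompt, exactly as described above; there are no parameters or style modifiers added yet.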
Once you're satisfied with your own prompt, you can press Enter on your keyboard or click the Send button, just like you were talking to somebody on a chat application. This action will deliver your request to the Midjourney bot, which will start generating your images based on your text prompt. The Midjourney bot will
generate four options which we'll take a minute
or less to deliver. Once the progress indicator
has made it to 100%, you will see a two-by-two grid of finished images, and two rows of buttons below. The top row is for
upscaling your images. Upscaling an image
means generating a larger pixel version
of the selected image, which is approximately
1024 by 1024. This will make your
images look much crisper and also automatically add additional details to give your image those finishing touches. This means that the four original images are more like the mock-ups that a designer provides as preliminary options, and the upscaled version is the final version, which is ready for use at production level. Note that the numbered buttons U1, U2, U3, and U4 each map to an individual image. So here we have U1 on the top left, U2 on the top right, U3 on the bottom left, and U4 on the bottom right. If you want to upscale the first image, for example, then all you have to do is click on U1. The other option is to
create variations of your art using the second
row of buttons below. Creating variations will generate four new images that are different, but still similar in overall style and composition to the original image selected from the mock-up options. Again, the V1, V2, V3, and V4 buttons each map to an individual image from the two-by-two grid above. Let's go ahead now and click on V1 and see what the Midjourney bot comes back with. Also, note that upscaling or creating a variation counts towards
your quota of 25. I'll show you at the end of the video how to check how many free images you have left as part of your free quota. Now, because there are many other people in the channel creating art using the Midjourney bot, you might need to scroll up to find your own image request. The Midjourney bot has now come back with a
two-by-two grid of variations based on the
original image we selected. Next, let's try an upscale
by clicking on U4, which is the image in the bottom right of
the two-by-two grid. Again, we will need
to scroll around the discord channel to
find our new output, among the many others that are being generated
every few seconds. This is definitely one of the downsides of using Discord as the interface for creating art, and it can be a little chaotic at times finding your own art. So here we go. We can see that the Midjourney bot has populated four upscaled versions of the
original image that we chose. Also, as you can see, there's much more detail and depth to the image
that we selected. And overall, it
looks a lot better. After you upscale your images, you'll have a few more options below as well. The first option is to generate variations of this image, like we did before. So just think of this as telling the AI designer to riff on or remix what you already have. The second option is Upscale to Max; this upscales the image to an even larger resolution of approximately 1664 by 1664. Light Upscale Redo, meanwhile, upscales the image again without adding as much detail. In terms of saving your image,
there are a few options. One option is to send the image to yourself on Discord. You can do this by asking the Midjourney bot to send you a Discord direct message containing your final production. To do so, you just need to click on the Add Reaction icon here in the top right above the image and search for the envelope emoji, which is the first one displayed here on screen. This will automatically
send the image to you in a separate
message window, which you can find at the top of the left hand
sidebar on Discord. So here we have our final image. We can also click on this image to open it at full size, and click on Open Original to open the image in a web browser. And to save, we can just right-click and choose Save Image, and it will save to your local computer. If you're using Midjourney on the Discord mobile app, however, you will need to tap
the image and then tap the download icon in order
to download the image. Okay, and lastly, if
you want to check out how many images you have left as part of your free or paid quota, you can jump back to the newbies channel within the Midjourney server, type in /info, and then click on the pop-up tab above. Here we can see that I have four jobs left, which translates to four more image outputs I can produce. So I'm starting to get a little bit low here on my quota, and I'll be looking at joining Midjourney as part of a paid subscription. But for now, I hope this
demonstration was useful and you're inspired to
give it a go yourself. If you are familiar with using Discord, you should have no problems getting started with using Midjourney. And if it's your first time signing up and using Discord, just factor in that it will take some extra time to get familiar with the new interface and verify a new account.
7. Midjourney Image Licensing + Terms of Service: Welcome back. In this video, we're going to look at the need-to-know points of Midjourney's terms of service, as well as answer the important question of commercial and non-commercial licensing. Before tackling that question, let me highlight that to use Midjourney, you should be at least 13 years old and meet the minimum age of digital consent in your country. However, if you are not old enough, or you have family members wanting to use Midjourney, a parent or guardian can agree to the terms on their behalf. So that's easily taken care of. For everyone though, always keep in mind that Midjourney tries to make its platform services PG-13 and family-friendly. Midjourney actually states
do not create images or use text prompts that are
inherently disrespectful, aggressive, or
otherwise abusive, plus no adult content or gore. They actually say here, please avoid making visually shocking or disturbing content; we will block some text inputs automatically. So if you are producing adult or horror genre content, maybe look for another platform like Stable Diffusion, which is open source, where you can basically create whatever you want. Having said that, because the art assets are
generated by AI, which is a new technology in itself, and the art is based on user queries, the output doesn't always work as expected, so expect some margin of error. And again, if you want to create adult content or art not suitable for minors, then perhaps it's safer to go to a platform like Stable Diffusion. Next, from time to time, you may find some
words or phrases blocked and you wouldn't
be able to make the images using
them. If you see the cross emoji pop up next to your work, interpret this as a warning about your content's suitability. In some situations, your content may even be deleted from view or just not produced at all. You can also self-police
your content by deleting it yourself using the cross
emoji reaction in discord. This will delete the image
from public view and help save you from getting into
trouble perhaps later. Now, moving on to
rights of usage: at the time of recording in November 2022, you own the content assets you create using the Midjourney services. However, there are exceptions, just to take note of. The first exception is for non-paid members. If you're a non-paying user, you are permitted to use the images for non-commercial purposes in conjunction with the Creative Commons Noncommercial 4.0 Attribution International license. You can check the details of that license if you're in doubt or if you
want to clarify any details. But essentially, as
long as you're using your images for
non-commercial purposes, you should be in the clear. If you do want to use your images for
commercial purposes, then you need to sign up as a paid user for Midjourney, which starts at about $10 a month for basic membership. The second exception is regarding corporate user license terms. For owners or employees working in a large company generating over $1 million in yearly gross revenue, and using the services to benefit your employer or company, you will have to purchase the corporate membership plan. The corporate membership plan currently costs $600. Also note that this plan involves an upfront, non-refundable deposit for up to 12 months' use of the service. So, in sum, Midjourney
offers you a lot of mileage with non-commercial licensing. And for commercial licensing, just make sure you are paying for the service. And if you're representing a large company which earns over $1 million, you need to be paying the corporate membership price. Okay, now let's look at privacy. When you are creating on Midjourney, be aware that this is an open community, and any imagery you generate in a public Discord chat room is viewable by all those
in the chat room, regardless of whether
private mode is turned on. This is quite different from other platforms where
you interact with the AI on a more private or one-to-one basis where only
you can review the results. With Midjourney, on the other hand, because you are creating in a public Discord chat group, your results are out there in the open for anybody to see. Moreover, by using Midjourney's services, you grant Midjourney a worldwide, non-exclusive, royalty-free, irrevocable copyright license to reproduce derivative works based on image prompts or content you feed into their platform. This means that others can
re-mix your images and prompts whenever they are posted in a public setting as well. And I actually think this can be a good thing because
it gives a chance for you as well to learn from others and gain inspiration. In summary, be careful what
you feed into the platform, especially corporate images or information you wouldn't want
to leak out to the public. So if you're JK Rowling, for example, you don't want to be creating the front cover of your next Harry Potter book on Midjourney in a public Discord chat room before informing your fans and the media that the book is actually coming out at Christmas. And just as some
food for thought, as this might not be intuitive for those new to AI: even if you are using a private interface or another AI art platform where only you, as the end user, can interact and view the output that you produce, understand that the AI may be using your inputs or your outputs to retrain its algorithm, meaning that some of your DNA, so to speak, could potentially seep out into other generated art in ways you didn't anticipate. Say you upload a photo of yourself and you tell Midjourney
in a certain way. But if the input, which is
the original photo of you, and the output which is
the remixed version, could be used by the
model to retrain ITS knowledge or to create
some derivative work. In most cases, this is
quite unlikely to happen or to have any real effect because unless you are
fitting the algorithm, significant amounts of
input data to train itself, nothing should come about. For me, it's also not
a big issue anyway, but some people might not
wish for that to happen. So best to check the
platform's terms of services or avoid using such services
if you have those concerns.
8. Image Prompts With Midjourney: Hi folks. In this video, I'm going to walk you through the process of adding images as input data using Midjourney. Previously, we've looked purely at text prompts as input data for the AI model. But with Midjourney, we can also use an image as input by adding one or more image URLs to your prompt. Midjourney will use those images as visual inspiration. You can mix words with images, or just use a standalone image as your prompt. This technique is useful
if you want to riff on an existing image or align the output with an
existing style. To use an image as a prompt, you'll first need to
navigate to one of the newbie groups
inside discord. And then click on the
plus icon here in the text bar and select the first option
which is upload file. Next, select a file from
your computer or album. For me, I'm going to select the Chewbacca puppy I created
earlier on craiyon.com. Send it by hitting Enter
on your keyboard. And if you scroll down to
the bottom of the chat feed, you should find that
image you just sent. Now, click on the image in the chat records, right-click on it, and select the third option, which is called Copy Link. Next, we can get to work with creating a new text prompt. Let's begin by using /imagine and pasting the URL link of the image we just copied. Make sure you leave a space after the URL, because if you use a comma, for example, this will make that URL invalid. After leaving a space, you can then start defining what you want to do with this image, including any modifiers and other parameters. I'm going to insert Chewbacca dog, comma, Christmas theme, and then send this message to generate
the image request. And voila, here are the results. I would say the first
and third option here properly capture
the Christmas theme. The first result in the top left doesn't look quite right, though, with the double eye, I think, going on there. So personally, I would want to look further at the third result with the Santa cap by clicking on V3 below. After 30 seconds or so, these were the results
that I received. The Christmas cap is
a little off-looking, but it does the job overall. However, I'm not happy enough with the similarity between these results and the original image prompt that I uploaded as part of the original text prompt. And this brings us to
an important point. By default, the weighting of the image to the
text prompt is 0.25, which means by default, the text prompt will always
have a far greater impact on the output than the image you referenced as
part of the prompt. We can adjust the weighting and default settings by using the parameter --iw and inserting the relevant value. For example, if we use --iw 1, this will make the image URL just as important for the image generation as the text prompt, which in this example were the keywords Chewbacca dog and Christmas theme. If, however, you want to increase the weighting of the image even more, then you can push up the weighting to 1.5 or even higher.
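As a rough sketch, the full image-prompt command from this example would look something like the line below. The URL here is just a stand-in for whatever link you copied from Discord, and the --iw syntax may change between Midjourney versions:

/imagine https://cdn.discordapp.com/attachments/.../chewbacca-dog.png Chewbacca dog, Christmas theme --iw 1

Note the single space between the URL and the keywords, exactly as discussed above.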
Alternatively, you could just remove the text prompt altogether and use the image as your input, but I have found that this sometimes doesn't work. What I'm going to do now is start over again with the same image URL and the same keywords as before, but this time I will add the parameter --iw 1 to give the image a higher weighting and bearing on the image result. Now we can see that
the Chewbacca puppy looks a lot cuter, just like it did in the image prompt we uploaded. And it has more of a pensive and thoughtful expression, which is definitely a different vibe to the image results we got when the image weighting was set to the default value of 0.25. Of course, don't expect the AI to take your image input as a base layer and then Photoshop and edit on top of it. Ultimately, it has its own creative license, and it will use your image as inspiration rather than as the base layer. But of course, you can modify the weighting parameter to give the model a better chance of capturing the visual
representation of the input image or images. Okay folks, there we go. In this video, we learned how to use an image as part of our text prompt. This is a very useful technique to know, but just be careful what images you upload to Discord, as this is a public forum. Thanks for watching
until the end, and I'll see you
in the next video.
9. Image Masking With DALL-E 2: Okay, in this video, we're going to explore a more advanced technique called image masking. While it may seem complex, image masking is straightforward once you understand the process. For this demonstration, we'll be using the DALL-E 2 software to showcase this technique, and that's because one of its standout features is the image uploader. Very few AI art software tools currently offer this feature. And in the case of Midjourney, the image uploader is used for image prompts, but not for direct image editing and manipulation, as we saw in the previous video. DALL-E, on the other hand, lets you create remixes of an uploaded image, or edit the image directly using masking. Masking is a technique used in image editing that
allows you to select parts of an image and then alter that area without affecting
the rest of the image. This flexibility
makes it ideal for making major alterations to images without drastically changing
the overall appearance. Masking is commonly used for removing backgrounds
from photos, but it can also
be used to create special effects or
isolate certain elements. In the following demonstration, I will show you how to
take an existing image, modify it, and add
a new background. So after signing up
for a free account, you can use one of
your free credits to upload an image via
the Upload button. You then want to select an image from your computer, and then crop the image if necessary using the built-in image editor. For this image, I'm going to skip cropping. So now we have our workspace
here on the screen. You will notice a
few things here. The first is the
generation frame. This limits modifications to
your image within that area, but you can move
that frame around to work on different parts
of the image as you go. Down here we have a masking wand. So let's now get to work by aligning the image generation frame to the target area, and then using the masking wand to erase the areas that we wish to customize. Now, don't worry too much about smoothing out the edges. The AI will rebuild the image to respect the edges of the subject or scene. It simply needs a general idea of where to apply the text prompt. Once the desired area for
editing has been erased, you can then enter
your text prompt and click the Generate button
here on the right. This will apply the text prompt to the area that we erased using our masking wand, while also taking into account the rest of the image. Keep in mind that each prompt you submit by clicking the Generate button will deduct one credit from your account's balance. Another important thing to remember is that the text prompt should reference both the area you are modifying and the area you are keeping. So in this example, I need
to reference both me, which is "a guy", and the edits I want to make to the upper part of my torso, which are six-pack abs and holding two ice creams. If you just wrote "six-pack abs and holding two ice creams", the AI doesn't have quite enough context. It doesn't know whether the abs should be male, female, animal, etc. So you really need to include the subject of the image, which is the guy or adult man, and reference the entire image, not just the selected area. Okay, so DALL-E has generated
the following output. While it didn't fully deliver on the six pack I had in mind, the image is still an
improvement from the original. DALL-E also did a great job of recreating my skin tone. Now, if you're satisfied with the generated output, you can simply click Accept. But I want to try to improve the definition of my abs by changing the text prompt. First though, I need to use the masking wand to highlight the area I want to modify, which is my chest area. And then I need to re-enter the full text prompt. And this time, I'm going to try writing a guy with highly defined six-pack abs
and holding two ice creams. Let's click Generate and
wait for the results. Okay, so here we have the results. This really wasn't what I was expecting; in this case, it's completely wrong. Now, down here on the bottom, we can click on the arrows to browse through a few more variations. So, this one looks a little bit better. This one removes the second ice cream, which is not suitable. And I have no idea what this one is. Let's go back to that second
version and click Accept. After accepting the optimized image, we can use masking again to change the background. DALL-E's editor guide actually recommends that you edit the background or scenery last, because focusing on the primary subject or character first is recommended. In the case of a human subject, for example, this helps to get the body shape right, which is the more difficult part, before filling in a new background, which is certainly easier for the model to perform. Okay, given that the
current background is not located directly
on the beach, I will use the masking wand to remove the background entirely, and then ask DALL-E to add a new background with a beach behind me. So let's update the text prompt to include two more modifiers, those being standing on the beach at sunset, and then click Generate. Okay, the model has now filled
in the beach background, but it definitely doesn't
look like sunset. If we look through the
other versions as well, we can see that
other images also fail to capture the
sunset aesthetic. So perhaps I need to describe the color of the sky
in my text prompt. But I'm actually going to drop the sunset and focus on improving the quality of the background by adding a new modifier, which is photorealistic, because I want the image to look more realistic so that
I can add it to Instagram. The image output now looks
more photo-realistic, but the background is a
little busy for my liking. So I'm going to look at the
other variations available. And yes, this one
looks quite good. So let's click on Accept. Now we need to fix the area above the image generation frame by repositioning the frame, like so. Then we can just click on Generate again using the exact same text prompt. And voila, DALL-E has now filled in the background, and the final version looks quite good. So to sum up, DALL-E isn't quite as impressive as Midjourney in terms of image quality, but Midjourney doesn't have the image masking feature available right now. So DALL-E is definitely the first and really the only option right now. Okay, I hope you enjoyed
that quick demonstration. Image masking is a super powerful tool and it
opens up a lot of creative opportunities
for you to re-mix and retouch your
own photos and art.
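As a quick recap of the masking workflow above, here's how the text prompt evolved across the session (the wording is lightly tidied from what I typed on screen):

a guy with six-pack abs and holding two ice creams
a guy with highly defined six-pack abs and holding two ice creams
a guy with highly defined six-pack abs and holding two ice creams, standing on the beach at sunset
a guy with highly defined six-pack abs and holding two ice creams, standing on the beach, photorealistic

The pattern to note is that every version re-states the full subject and scene, and only the modifiers at the end change.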
10. Parameters For Midjourney: Hey there, welcome back. Now that you're familiar with using Midjourney, let's talk about advanced commands using what are called parameters, which are also sometimes called switches or flags. In this video though, I will refer to these special commands as parameters. So far, we've learned how to create a basic text prompt using /imagine to produce a grid of four images, for example, /imagine wolf on a desolate planet, cyberpunk style. Now, to customize the text prompt, you can add parameters using two consecutive dashes followed by the parameter you wish to use. In this example, we're using negative prompting by using two dashes, no, followed by the keyword people. Note that the parameters should be added to the end of your text prompt, rather than at the front or perhaps midway through your prompt. Now, let's look at some of the most commonly used parameters. The first is negative prompting, which I just mentioned. You can do this by using a double dash, no, and then your chosen keyword, which you can use to avoid adding certain elements to your image. --no cars, for instance, would try to remove cars from the image generation.
something else. It's hard to tell, but overall, the negative prompting has
worked very effectively here. Next, let's talk about size and dimensions using aspect or AR, these parameters alive
to generate images with a desired
aspect ratio, e.g. typing dash, dash
ir 16, 19, e.g. will get you an aspect
ratio of 16 by 19, which is also 448
times 256 pixels. Here are a few examples
to illustrate how the aspect ratio impacts
the size of your image. Alternatively, you can
also use the width and height parameters using
w and h respectively, and then defining the number of pixels for the width and height, the value is used by H and W
should be 256-2034 pixels. Also, keeping those values as multiples of 64 is recommended. If you're looking to
match your image with a specific aspect ratio, you can check with the software you are using and then enter the width and height into
your texts prompt parameters, which in my case here
using Canva would be 1920 by 1080
respectively sunscreen. Now at two more examples
that again show the role of size and dimensions
and creating your art. Next up we have seed numbers. I see numbers are
used for reproducing art using the same
randomization. Use generate the image, put simply using the same
seed number will help you create a similar output
using the same prompt. A seed number must be
a positive integer. An integer is a
whole number between zero and the number
here on screen. I'm not going to
attempt to read that. If you don't set a seed, a random suit will be assigned instead to find out what seed
was used behind the scenes, you can react with the male
emoji to an image and then checking your DMs on Discord
for details like this, hey, we can see
the seed is 25588. Now, I do want to say that
the C number is not perfect. And based on my
experience so far, I wouldn't expect to see the
exact same output each time, but this might get
better in the future. There is still a lot of
randomization involved with the production of images using
the journey we've been on. Now, let's talk about something very different to seeds and that is randomization
using desk dash, chaos and then
inserting a number, this parameter introduces more varied random and different results from
your image outputs. The number used with this
parameter must be 0-100. Higher values will favor more interesting and
unusual generations and exchange for less
reliable compositions. In this example, I have used exact same text prompt as before in the previous examples, but with a high
chaos number of 70, which has in fact generated some unique and different
results which look pretty cool. So feel free to
experiment with the cows parameter if you
have enough credits on your ambiguity account
and you want to push the boundaries with
your AIR to generation. The next parameter
is desktops video. This saves the progress of your image generation
as a video link, which is pretty cool. The video link is sent
to you after you use the melt emergency in discord to trigger
a direct message. So don't forget to add that male emerges just after the
image is generated. Here's an example of the
Osaka Blade Runner seen in the video format as it was
generated by mid journey. Okay, Let's talk about
Okay, let's talk about another important parameter, which is quality values. --quality modifies how much time is spent generating your image. At present, there are five options available in Midjourney. The first is --quality 0.25, which produces rough results but is four times faster and cheaper to produce. Then we have --quality 0.5, which again is less detailed in terms of results, but two times faster than the default value. The default value is 1, and there's no need to actually specify this parameter because it comes built in with your default image generation. Next is --quality 2. This is for more detailed results, but keep in mind it is slower, and it is also double the price because of the higher GPU usage; it uses two GPUs. And then we have --quality 5. This is much more experimental. It might be more creative or detailed, but it could also be worse, and it does use up a lot of GPUs, five to be exact. In general, you'll most likely be using the default value, which is 1 and
does not need to be specified in
your text prompts. But if you are looking for higher or lower quality results, you can look at using
the other four options to tweak the output
of your images.
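As one more sketch with my placeholder scene, a higher-quality render would look like this:

/imagine a lighthouse on a rocky coast at dusk --quality 2

Just remember this doubles the GPU time, so it's best saved for when you already like the composition and want extra detail.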
Lastly, I want to mention two more things, which are not really parameters per se, but they are another option to be aware of. The first is private and public mode. Using /private and /public, you can toggle between these two different modes. In private mode, jobs
are only visible to you. In public mode, meanwhile, your jobs are visible to everyone in the gallery, even if you are creating them in a thread or a direct message. However, note that access to private mode does cost an extra $20 US per month.
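For reference, switching modes is as simple as typing the bare commands into the message box:

/private
/public

The first hides your subsequent jobs from the public gallery, and the second makes them visible again.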
Secondly, I want to share with you the show feature. Using /show and inserting the job ID, you can recover a job from the gallery in Discord, reproducing the resulting image and the upscale plus variation buttons. This, in other words, allows
you to revive any job you've generated yourself before and bring it into any bot channel, even if you have lost access to the original prompt. So, e.g., if I type /show into Discord and click on the prompt displayed above, I can use the feature by pasting in the job ID of a previous project that I did. This will then
restore that project, including all the upscale and variation options, letting us make more edits to our art.
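So the pattern, where the job ID is whatever ID you copied from your own gallery, is simply:

/show <your job ID>

Once it runs, the original image reappears with its upscale and variation buttons, ready for further edits.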
That brings us to the end of this video on parameters and the two bonus tips regarding private/public mode and the show command. To learn more, feel free to check out the Midjourney website's documentation at midjourney.gitbook.io for more information. And do note that some of these parameters may be tweaked or removed in the future. The --beta and --hq algorithm parameters, e.g., have been discontinued, as shown here. Likewise, new parameters will
be introduced over time. So keep an eye out for new advanced settings
as they come out. Okay, thanks everyone
and see you in the next video, where we'll be creating some awesome AI art.
11. DSNR Bot For Custom Prompts (NEW): Hi and welcome back. In this video, I want
to share with you a useful AI-powered tool that I've been using recently called DSNR. This is a Discord bot for
generating image prompts. The bot uses natural
language processing and machine learning
techniques to analyze your input and then generate unique prompts based
on those requirements. This tool, I think, is particularly useful for product designers wishing to generate images of a product with custom-made features. Let's go ahead. To use DSNR, you will need to add the free tool on Discord by first going to discord.gg/SkxxZABWWF. If you don't have
a Discord account, go ahead and register
a free account. Now, once you are set up and logged into Discord, click on Accept Invite to open it in Discord. After adding the bot on Discord, check out the welcome message. Here we can see that the primary command for DSNR is /design. Let's move on now to
the general channel and use the design command by typing it into the chat box. As we type, a tab will appear automatically above,
which we can click on. And then we want to enter the
subject we want to create. This could be a car, a bike, an animal, anything really that you want to create, but you should limit it to one subject. For this demonstration, I will
go ahead with a road bike. The bot tool will then ask you to select the style. There are three options: custom, photorealistic, and logo. For this demonstration, let's go ahead with the custom style. Next, the tool provides a series of prompts
to help you generate a customized image prompt that fits your preferences
and requirements. So the first question is, what color is the frame
of the road bike? And we have five options here: red, blue, black, white, and yellow. I will go with white. Next, what type of handlebars
does the bike have? Again, there's five options. Let's go ahead with drop bars. By the way, if you're not happy with the
choice of answers, you can always click on
the other button down here below and insert
a custom answer. Let's go back now and smash through these last few questions: clipless pedals, disc brakes. After completing the prompts, DSNR will then generate
a text prompt for us. While you're waiting,
you might as well check out what everyone
else is creating. So we can see that there's one
user here generating text prompts for a Honda Accord and also a Cadillac, which both look pretty good. Okay, so here's our prompt. We can copy and paste that into my chosen AI software now. But first we might want
to edit the medium. So go down and
click on the button here to add art medium. The AI will ask us which
medium we want to choose. We have five options. And for this demonstration, let's go ahead with
graphic design. This action will update
our existing text prompt, which now includes a new modifier, "designed in a graphic design style", which is written a little bit clunky, to be honest. But anyway, let's go and click on Finish and then copy the text prompt. Next, head over to your AI
software, which could be Stable Diffusion, DALL-E, or Midjourney inside of Discord, e.g., and paste that text into a new prompt.
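The exact wording DSNR produces will vary, but based on the answers we selected, the copied text should read roughly like this:

a white road bike with drop bar handlebars, clipless pedals, and disc brakes, designed in a graphic design style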
I'm using Midjourney, so I need to navigate to the Midjourney server on Discord and click on one of
these newbie channels. Next, I will type in
/imagine, and then paste in the prompt. Hit Enter on your keyboard or click Send, and wait for the results. And voila, we have four options based on
our custom text prompt. I can see an issue here with the word lines in
the top right image, but I think both the bottom
two options look like professional images that we
could use for a website, blog, or another content medium. So there you have it: you now have a new tool in your AI art kit. So go ahead and have a play around with DSNR and explore where the AI takes you.
12. Project Time!: Project time. So the
project for this course is to create your own art, and we're going to use Midjourney. So the first thing you want to do is download, register for, and open up Discord on your computer by navigating to discord.com/download. So you want to get that all set up. Once that's done and you've gone through verification, you can log in to your Discord account. You then want to go over to Midjourney at midjourney.com/home and join the beta version by navigating to the Discord link. Once you're all set up there, you'll then be able to go into Discord, look for the Midjourney server, and then go to the newbies channel listed here on the left-hand side. Then you can just jump into the text box, type in /imagine, and then start to type in your text prompt. Once you're ready, you can hit Enter, and Midjourney will come back to you in around 20 seconds with the result.
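If you'd like a starting point, here is a rough template, where the scene and the aspect ratio are placeholders of mine that you should swap for your own idea:

/imagine a watercolor painting of a mountain village at dawn --ar 3:2

So once you've done all that, you can then check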
out your image, maybe take a screenshot, and then you can upload
that to Skillshare to get some feedback and
to share your creation. So looking forward to seeing
what everybody creates.
13. Before You Go: Well done for making
it to the end. Now, to finish off, I
just want to say that while AI technology might seem a little bit scary and intimidating, the speed with which it is seeping into all aspects of content production, from marketing to book covers, really underscores that mastering AI art will be a key skill for the modern content creator. Personally, I'm confident that in the coming years there'll
be many opportunities in what the authors of Human + Machine: Reimagining Work in the Age of AI term the "missing middle", which is the fertile space where humans and machines collaborate to exploit what each side does best. So machines,
e.g. really excel at managing large-scale
repeatable tasks, while human expertise
can help to maintain quality and
provide feedback. So in the case of AI art, artificial intelligence can be used for spinning up an image, and then the human content creator can use that image as the base for a book cover, YouTube thumbnail, or other content item. Using software applications like Canva or Photoshop, they can add any necessary text, edit the dimensions, and make other modifications to produce the final version. We're almost there as well, with traditional design
software including Canva, Notion, and Figma integrating AI text-to-image applications. These features provide users with an all-in-one dashboard for creating visual art, and moving images in the future, probably starting with three-second memes and eventually full-length films. But for now, have a play around with the AI art software I
mentioned in this course. And keep an eye out for new software versions and use cases as they emerge to keep
up with new AI tools. You might also like to check out theresanaiforthat.com or futurepedia.io, which both cover a range of categories including AI art, text, video, and design production software
that you can sort through. Okay, thanks everyone
for watching. I look forward to
seeing what you design, and you can follow me at @machinelearning_beginners and check out my machine learning
books on Amazon. Thank you and have a good one.