Transcripts
1. 00-01 Introduction: This course is the third part of the Blender 3D Version 4 essential series. In this course, you will learn all aspects related to the Cycles rendering engine, including lighting, cameras, render settings, and post processing. Essentially, everything that you need to create stunning images using Blender. I have carefully crafted the curriculum so that students can gain the skills gradually with no friction at all. After this course, insha'Allah, you will be able to optimize Cycles for different scenarios, maximizing the image quality while at the same time minimizing rendering time.

Assalamu'alaikum, my name is Widhi Muttaqien, founder of Expose Studio. For more than 20 years, I have created thousands of 3D renderings like this for architectural, interior, and master plan projects. I have worked with many clients all over the world; I have clients on almost every continent. Besides doing projects, I have also been teaching 3D and computer graphics academically at various schools since the year 2000. In short, I have real-world professional expertise in 3D and teaching experience.

In the lighting chapter, you will learn how to use various light sources inside Blender. We will start with the world background and the Sky texture. Then you will learn how to use HDR and EXR files, and then learn to use light objects such as the point light, sunlight, spotlight, and area light. You will also learn how to create caustic effects, learn how to make lighting more realistic using IES files, and then learn how to make materials emit light using the Emission shader. Next, in the rendering chapter, you will learn how to easily set up cameras using different manipulation techniques. Then you will learn the ins and outs of Cycles render settings such as samples, denoising, clamping, color spaces, view transforms, white balance, and so on. In the final chapter, you will learn how to perform post processing on the rendered result using the compositor. Everything is done non-destructively via nodes. And then you will learn how to easily isolate pixels for compositing using Cryptomatte.

Besides the small exercises throughout the course, you will be given a final project at the end. Basically, you will create product renderings of a lunch set in three different styles: first, with a transparent background but with shadows, so it can fit into any color or background; second, with a 3D environment and an evening sky; and third, using the same 3D environment but now with a day or afternoon sky. So join now and take your Blender 3D skills to the next level. Have fun learning. Wassalamu'alaikum.
2. 00-02 Exercise files and conventions: Welcome to the course. Before moving on, there are several important things I need to mention about the course. This course is the third course in the Blender 4 essential series that I published on Skillshare. In the first course, we discussed 3D modeling in depth. Then in the second course, we focused on material creation, texturing, and UV mapping. As for this course, you will learn lighting, cameras, rendering, and post processing using the Cycles rendering engine. Although you can take this course directly, I strongly recommend that you take the previous two courses before taking this one, especially if you are very new to Blender, because most of the time, I assume that you already know the things that I explained in those courses. Just keep in mind that if you find something confusing and I don't explain it in detail, it might be that you missed out on lessons from the previous courses.

Next is about the exercise files. You can download all the exercise files for the course in the resources section of this lesson. In case there is a problem, as a backup, you can also download the files from the following link. Please pay attention to the capitalization of the letters, as this link is case sensitive. You can download the files one by one, but it will be easier if you just click this download button to download them all in one zip file. The text you see here depends on where you are or your language preference. It says "Download Semua" because I am in Indonesia. You will see the text "Download All" if you are in the US or UK or other English-speaking countries. As you can see, the files are named based on the lesson, with an additional chapter code in front. If a lesson has multiple exercise files, then I put them in a folder with the same name as the lesson.

Next is about the structure of the course. I have carefully crafted the curriculum so that everything is placed sequentially. Each lesson you take on one level will become the foundation of lessons in the next levels. Therefore, it is important that you take the course in order, step by step, not jumping around. If you take the course by jumping around, most likely you'll get confused at some point. The second thing I need to mention is that you need to practice. For each video, please try out the lesson yourself at least once. The course is not just about theories. Most of the lessons are practical skills. So again, you need to practice if you really want this online course to benefit you.

In this course, I'll be using a PC with a Windows 10 operating system. So every shortcut I mention in the videos will be for PC and Windows OS. If you are using a Linux OS, most likely you won't find any difference in terms of keyboard shortcuts. However, if you are a Mac user, you will find some differences. I believe most Mac users already understand that the Command key on Mac is often used to replace the Control key on PC, and the Option key on Mac is often used to replace the Alt key on PC. But the thing is, with Blender, I found that most of the Control shortcuts on PC mostly stay on the Control key on Mac, not the Command key, although there are some shortcuts that still use the Command key. Essentially, if you are using a Mac computer, you may need to check the menus, the Preferences window, or the official Blender online documentation for the keyboard shortcuts.

In this course, I will be using Blender version 4.4. So all the UI, features, and shortcuts are related to this version. If you are watching this video and have Blender version 5 or 6 or higher, you might find some differences here and there. In such a case, I recommend you check my course list, as I might have already released a new version of this course that is better suited for the version of Blender you are using.

There are at least two things that you need to have if you want to work in Blender comfortably. First, you need a standard mouse with a scroll wheel. Usually, if a mouse has a scroll wheel, you can press on the scroll wheel to activate the middle mouse button. We will use the scroll wheel and also the middle mouse button a lot for viewport navigation. You want to avoid using minimalist mouse products that do not have any scroll wheel or middle button. The second thing that you need is a full-size keyboard. What I mean by full size is that the keyboard should have a numpad area. This is important because a lot of Blender's navigation shortcuts are placed in the numpad area. Yes, there is an option in Blender's Preferences window to emulate the numpad keys, but that will be at the cost of overriding other important shortcuts related to 3D modeling. So again, you really want to invest in a decent full-size keyboard if you want to use Blender for the long term.

Throughout the course, I may display images and videos. Some of these contents are not made by me. Please note that I am using them merely as references or for inspiration. I never claim that these images or videos are made by me. If I can find the owner's name, I will credit him or her by putting their name on top of the content. Otherwise, I will display the image or video with the URL of where I found them. As for stock images or videos, if I don't specifically state that they are made by me, most likely the copyrights belong to the respective owners, okay?
3. 01-01 Background and Sky texture: Starting from this video, we will discuss the lighting techniques that exist in Blender. As a macro overview, we can divide lighting or light sources in Blender into three main categories: world, light objects, and materials. In this video and the next one, we will cover the first category, and that is the world lighting. The world lighting, sometimes referred to as environment lighting, is a specialized light source that affects the whole scene. You can access this lighting in two places. The first place is the Properties editor, inside the World tab. At a glance, the tab icon looks like the Material tab, but this icon is actually a globe symbol, not a sphere symbol. The second place is the Shader editor, that is, if you switch the mode to World. Essentially, what you see here are the same parameters as what you can access in the World panel. It's just that they are represented as nodes.

By default, the world lighting uses a node called Background. The node provides two basic parameters: color and strength. We can see the effect better if we are in rendered mode. Notice, as I change the color, the overall look of the scene changes. This is yellow. This is green. This is cyan, and so on. We can try changing the Strength parameter. As you may have guessed, this controls how strong the lighting is. Let's just set this to one for now. Essentially, the Background node floods the scene with single-colored light uniformly from all directions. Although the Background node is the default node, it is rare that you need to use the Background node by itself, unless you do want to have this uniform lighting effect on your project. In most cases, you either need to set up a Sky texture for a procedural workflow, an EXR or HDR texture for static image-based lighting, or a combination of both methods.

Let's see how we can create and use the Sky texture. To create a Sky texture, you can simply press Shift A in the Shader editor, type in "sky", then press Enter. And then you need to plug this into the Color slot of the Background node. If you connect the node correctly, besides the Shader editor, you can now also access the Sky texture parameters from the World panel. Out of the box, our scene now looks more interesting compared to the previous Background node. Instead of just a red or uniform color, we can now see a sky color on top, a haze color on the horizon, and a dark color at the bottom.

Although this node is called Sky texture, it can also generate a sunlight effect. By default, the sunlight effect is already turned on. In case it is turned off, you can turn it on by activating this Sun Disc option. This is off, or just the skylight. And this is on, where both the skylight and the sunlight are active. You can see the difference yourself, but to recap, a skylight is basically light coming from all directions. With only skylight, we have soft shadows in the scene, also known as ambient occlusion shadows. On the other hand, sunlight is one-directional light coming from a point at an infinite distance. So the light rays are basically parallel with each other. Besides the sun, you can also use the sunlight system to simulate the moon.

Okay, the Sun Size parameter controls the size of the sun, and eventually how soft the shadows generated by the sunlight are. Higher values mean a larger sun size and softer shadows, while lower values mean a smaller sun size and sharper shadows. The Sun Intensity parameter controls how strong the sunlight is. If you set this to zero, then this is like having only the skylight without any sunlight. If we set this to 0.1, then it is one tenth of the default intensity, and so on; you get the idea.

Now, you may be wondering, where is the sun's shape? We cannot see it anywhere in the sky. Well, Blender simulates the sun and sky lighting system just like in the real world. And just like in the real world, we are barely able to see the sun's shape as it is just too bright. We can, however, see the sun's shape during sunrise or sunset. That is when its intensity becomes very low. Or you can still see it at noon using a camera, that is, by setting the camera exposure to a very low number. In Blender, we can simulate the camera's low exposure setting by going to the Render settings panel and then scrolling all the way down to the bottom. There you can see the Color Management category. Here, you can access the Exposure setting. By default, it is set to zero. If you turn this down, usually when the value is below minus one, we will be able to see the sun shape. So this is one way to do it. Let me bring this back to zero for now.

Another way to see the sun shape in Blender is to simulate the sunrise and sunset times. To control the sun height, we can use this Sun Elevation parameter. It uses angle degrees as the unit. 90 degrees means that the sun disc is exactly at the top of our head, or in the positive Z axis direction. 45 degrees is at the perfect diagonal, and zero is at the horizon line. Basically, if you set the value near zero, we are simulating a sunset or sunrise, thus making the sun shape visible. Let's try changing the size again, just to see how it affects the visible shape of the sun. As you can see, smaller values lead to smaller sun shape sizes, while bigger values lead to larger sun shape sizes, right?

Now that we have the sun shape at the horizon, let's discuss the next parameter, and that is the Sun Rotation. Basically, this parameter controls where the sun is located relative to the center of the world. Zero means that the sun is exactly in the positive Y axis direction. Dragging this value up, or using positive values, will make the sun rotate this way, or clockwise. So if we set it to 90 degrees, for example, the sun will be located exactly in the positive X axis direction. Dragging the value down, or using negative values, will rotate the sun counterclockwise, so -90 degrees means that it will be in the negative X axis direction. So again, to recap: we can use the Sun Elevation parameter to control the sun height, and we can use the Sun Rotation parameter to control the sun's direction. For now, let's set the elevation to 45 degrees and the rotation to 30 degrees. All right.

The last four parameters control the sky color or the atmosphere color. The Altitude parameter controls the horizon position. Zero means that the center of our 3D world is located at sea level. If you want to simulate the atmosphere in highland regions, then you can start increasing the value. If, for example, you want to create a scene from a spaceship that is 40 kilometers above sea level, then you can type 40 then "km" for kilometers and then press Enter. So this is how the sky looks when viewed from a spaceship, 40 kilometers above the ground. Now, if we set this too high, such as 60 kilometers, everything looks black, as there is almost no atmosphere at this height. Let me set this back to zero. The Air parameter controls how many air particles exist in the atmosphere; in other words, the amount of pollution in the atmosphere. If you set this quite high, you'll get an orangish color. This is suitable if you want to simulate a Mars surface, for example, or if you want to create a post-apocalyptic world, like you often see in sci-fi movies such as Blade Runner. Next, the Dust parameter controls how much dust and humidity is in the atmosphere. If you set it quite high, it tends to tint the sky with a brownish color. The last one is the Ozone parameter. This parameter controls the thickness of the ozone layer. Visually, the higher you set this value, the bluer the atmosphere becomes. Okay, guys, so that is how you use the Sky texture to simulate the sunlight and skylight procedurally.
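If you like to script your scenes, here is a minimal sketch, not from the course files, of the same Sky texture setup done with Blender's Python API. It assumes Blender 4.x; the property names mirror the UI labels used in this lesson, but verify them via the Python tooltips in your version.

```python
import bpy
import math

world = bpy.context.scene.world
world.use_nodes = True
nodes = world.node_tree.nodes
links = world.node_tree.links

sky = nodes.new("ShaderNodeTexSky")
sky.sun_disc = True                   # the Sun Disc option
sky.sun_intensity = 1.0
sky.sun_elevation = math.radians(45)  # sun height
sky.sun_rotation = math.radians(30)   # sun direction
sky.altitude = 0.0                    # sea level
sky.air_density = 1.0                 # Air
sky.dust_density = 1.0                # Dust
sky.ozone_density = 1.0               # Ozone

background = nodes["Background"]
background.inputs["Strength"].default_value = 1.0
links.new(sky.outputs["Color"], background.inputs["Color"])

# Optional: lower the exposure to make the sun disc visible at noon.
bpy.context.scene.view_settings.exposure = -1.0
```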
4. 01-02 Environment texture: In this video, we are going to discuss the next method of world lighting, and that is using the Environment texture node. And then on top of that, the combination method. The Environment texture is like the Image texture we discussed in the previous material and UV mapping class. But unlike the regular Image texture, it is designed to be used together with the Background node as world lighting. Although you can use regular JPEG or PNG files for environment textures, you should use high dynamic range images such as EXR or HDR. For those who don't know what high dynamic range images are, they are images that have more than eight bits per channel. So they can contain more light information compared to ordinary image files. That is why they are better used as light sources.

A long time ago, you needed to purchase these kinds of images. Fortunately, nowadays you can download them for free at polyhaven.com or other similar websites. If you open polyhaven.com, you can go to the HDRIs section and just browse for the one that you like. Now, if you have a GPU with 8 gigabytes of VRAM like mine, I suggest not downloading files larger than 4K, as these textures can take a lot of VRAM when rendering. But if your GPU has larger VRAM, such as 16 gigabytes or 32 gigabytes, you can try using 8K or 16K resolutions. As for the format type, you can download either the HDR or EXR version, as Blender supports both file formats. You can browse online to understand the differences in detail between the two formats. Long story short, EXR is the newer file format, and so it offers more features than HDR, but at the cost of larger file sizes. Most of the time, I choose EXR if it is available. After you specify the resolution and the format, you can proceed to download the file by clicking on this button, right?

Back in Blender, let's unplug and move this Sky texture node aside for now. We will use this node again later, so we don't want to delete it. Okay? To create an Environment texture node in the Shader editor, you can press Shift A and then search for "environ", then hit Enter. Next, you need to plug this into the Color slot of the Background node. After that, you can add the Mapping and Texture Coordinate nodes manually. Okay? So that is one way to do it. If you have the Node Wrangler add-on installed, you can do this faster using the Ctrl T shortcut. So yes, it is the same shortcut that we usually use to create image textures. Let's just delete this node for now. All right. For this to work, you need to make sure that the Background node is active or selected, and then press Ctrl T. The Node Wrangler add-on detected the Background node, which is why it created the Environment texture and not the ordinary Image texture.

Notice that currently the scene looks pink. This is just Blender's way of telling us that there is a texture missing. To assign a texture file, simply click on this Open button, then browse to the folder where you saved the EXR and HDR files. Select the file you want to use. I'm going to pick this one called Citrus Orchard. This is one of my favorites that I often use in my projects. Click the Open Image button. And there you go. We are now using a high dynamic range image as the light source for our 3D scene. In most cases, environment textures produce more realistic results compared to the Sky texture node. This is because these images are commonly created by capturing real-world environments. And they are not only for outdoor scenes. You can also find many indoor textures at polyhaven.com or other websites. For example, I'm currently using an EXR file of a billiard hall from Poly Haven. By using indoor textures, you can quickly set up lighting for product rendering, for example.

Now, sometimes the environment texture is not facing or oriented in the direction that you want. To fix this, you can tweak the Z axis rotation slider. You want to avoid rotating using the X axis or the Y axis, as this will tilt the texture. Well, unless you have specific needs or reasons for doing this. Again, most of the time you only need to use the Z axis rotation, right? Let's change the texture file to an outdoor scene, for example, Limpopo Golf Course. As you can see, this texture produces strong sunlight and sharp shadows.

The last world lighting technique, which I often use, is the combination method. Essentially, we use both the Sky texture and the Environment texture together by joining them using a mix node. For this, you can use two different approaches. The first approach is to use the Mix Color node. Create a Mix Color node by pressing Shift A, then type "mix". Remember, it is the Mix Color node, not the Mix Shader node. We will discuss the Mix Shader method after this. Next, you need to plug the Sky texture color into the first color input slot and plug the Environment texture color into the second color input slot. Then we can plug the result into the Background node. In this condition, the Factor slider plays a very important role, as this value controls which color is the dominant color. If you slide this all the way to the left, or a zero value, now only the Sky texture is active and affecting the scene. The Environment texture has no effect at all. Vice versa, if you slide this all the way to the right, now only the Environment texture affects the scene, as the Sky texture is turned off. Usually, I use a value between 0.6 and 0.8, depending on the situation. Now, the problem with this method is that you cannot control the strength of the Environment texture independently from the Sky texture. The Sky texture has this Sun Intensity value, but the Environment texture does not have any. Yes, you can use the Strength value in the Background node, but this will affect everything, including the Sky texture, not just the Environment texture.

Another approach is to mix the two nodes, not at the color level, but at the shader level. For this, we can delete the Mix Color node and then duplicate the Background node by pressing Shift D. So yes, we now have two Background nodes. Connect the Sky texture to the first Background node and connect the Environment texture node to the second Background node. We can disconnect this node for now. Next, we can press Shift A and then type "mix". What we want to use now is the Mix Shader node, not the Mix Color node. As you may have guessed already, we need to plug these two slots, and then these two slots, and finally this slot, right? Just as before, we can use this Factor value to determine which shader is more dominant against the other. But unlike before, we now have a Strength value to control only the Environment texture node, independent from the Sky texture. All right.

One final tip that I want to mention is about texture preference. The reason why we want to use this combination method is because we want to use a real-world environment texture, but at the same time we want to control the sunlight position. The Sky texture already creates the sunlight and shadow, so you don't want the environment texture to create them as well. Otherwise, you will see two shadows like this. This looks fine for an indoor scene. But for daylight outdoor scenes, these double shadows just look strange. Let me change this to another EXR file that does not have strong sunlight, for example, this one called Belfast Open Field. As you can see, it looks more natural. So again, with the combination method, you want to use overcast or cloudy skies that create soft shadows, like this one or this one, and you want to avoid this one or this one, because they have strong sunlight and visible shadows.
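To recap the combination method, here is a rough bpy sketch of the node layout we just built by hand: two Background nodes joined by a Mix Shader. This is my own illustration, not a course file, and the EXR path is hypothetical; point it at your own overcast texture.

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nt = world.node_tree
nt.nodes.clear()

sky = nt.nodes.new("ShaderNodeTexSky")
env = nt.nodes.new("ShaderNodeTexEnvironment")
# Hypothetical path; use your own EXR or HDR file here.
env.image = bpy.data.images.load("//textures/belfast_open_field_4k.exr")

bg_sky = nt.nodes.new("ShaderNodeBackground")  # driven by the Sky texture
bg_env = nt.nodes.new("ShaderNodeBackground")  # driven by the environment texture
bg_env.inputs["Strength"].default_value = 1.0  # now independent from the sky

mix = nt.nodes.new("ShaderNodeMixShader")
mix.inputs["Fac"].default_value = 0.7          # 0 = sky only, 1 = environment only
out = nt.nodes.new("ShaderNodeOutputWorld")

nt.links.new(sky.outputs["Color"], bg_sky.inputs["Color"])
nt.links.new(env.outputs["Color"], bg_env.inputs["Color"])
nt.links.new(bg_sky.outputs["Background"], mix.inputs[1])
nt.links.new(bg_env.outputs["Background"], mix.inputs[2])
nt.links.new(mix.outputs["Shader"], out.inputs["Surface"])
```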
5. 01-03 Light objects: Starting with this video, we will discuss the light objects in Blender. If you want to follow along, you can open the file I provided in this lesson. I use a dark gray background color for the world lighting. This is so we can see the lighting effects from the light objects better. To create a new light object, you need to specify the location first using the 3D cursor. So hold Shift and then right-click at this location, for example, and then you can press Shift A. If you go to the Light submenu, you can see all four types of light objects that Blender provides. They are point, sun, spot, and area.

Let's create the point light for now. Essentially, the point light is a light type that emits light rays from a single point into space in all directions. The point light is suitable to simulate candles, campfires, torches, light bulbs, et cetera. Again, any light sources that emit light rays in all directions. To control the light parameters, you can go to the Properties panel and open the light data tab, the one that has a light bulb icon. Here, you can change the color of the light. You can make it red, yellow, green, et cetera. Then, using the Power value, you can control the intensity of the light. Besides emitting light rays from a single point, you can also make the point light emit light rays from a spherical surface. To do that, simply increase this Radius value. Using this slider, or typing the value directly, we can set a very precise value, but sometimes we just want to tweak it visually in the viewport. For this, you can use the gizmo instead. Notice that the point light object shows a circular gizmo in the viewport. You can use this gizmo by clicking and dragging it to control the Radius value. The higher the value, the larger the light sphere is, thus making the shadows generated by the light softer. This is a basic rule that you always need to remember, as it applies to all light sources. Again, the larger the surface of the light source, the softer the borders of the shadows will be. Vice versa, the smaller the light source size, the sharper the shadow borders will be, all right? Now, when you have quite a large light sphere like this, and the volume of the light sphere intersects with a mesh object, you can control whether you want to have a harsh light border or a soft light border. For this, you can use this Soft Falloff checkbox. If it is off, you will get a harsh or sharp light border. If it is on, Blender will smooth out the light borders that intersect the mesh surface. The next four parameters are specific to Cycles rendering. So you won't see these parameters, or you will see other parameters instead, if you are using the EEVEE rendering engine. We will discuss these later in another video.

For now, let's discuss the other light types. What is so great about light objects in Blender is that you can easily switch the type from one to another simply by selecting the options at the top. So this is the sun light type. This is the spot light type, and this is the area light type, okay? The sun light type is perhaps the most unique compared to the others. Why? Because its position does not affect the lighting. What affects the lighting condition is its orientation. So if you place this object inside a cube, or even below the ground, for example, it will still illuminate the whole scene. Essentially, with the sun light type, the light rays are coming from an infinite distance, not from the light position itself, and the light rays are all straight or parallel to one another. If you rotate the light object, now you can start to see different lighting conditions. Besides using the rotation tools or shortcut, you can also rotate the sun light object using its gizmo. It is quite small, but you can see a small yellow circle near the light object position. You can drag this circle around to set the rotation or the direction of the sunlight. What is great about this gizmo is that it automatically detects surfaces. What I mean is that if you drag the gizmo and then hover the mouse over a certain surface, Blender will automatically direct the sunlight to that point on the surface, which makes setting up lighting very convenient in Blender. And the good news is, this gizmo also exists on the other light types, not just the sun light type, right?

So again, to recap: for the sun light type, its position does not matter; only the rotation matters. Now, you may be wondering, what about scaling the light? Well, you also don't want to scale the sun light object, as it does not affect its intensity, unless you scale it to zero, which makes the lighting behave strangely. So you want to leave the scale value at its default. If what you want to do is control the softness of the shadow borders, then you should use the Angle value instead. Higher values mean softer shadow borders, while lower values mean sharper shadow borders.

At this point, we now know that in Blender, we can create the sunlight effect using three different methods: using the Sky texture node, using an EXR or HDR texture that has a strong sunlight effect, and the third, using the sun light object type. But as I mentioned before, you do not want to have multiple suns at the same time. So although you can have all of these at once, you should just pick one method. Personally, I prefer the Sky texture method, as I like to control all the environmental lighting in one place, which is the Shader editor, okay?
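As a side note for scripters, a sun light takes only a few lines of bpy. This is a sketch of my own, not a course file; as discussed above, only the rotation and the Angle value matter for a sun.

```python
import bpy
import math

# Create a sun light; its position is irrelevant, its rotation is not.
data = bpy.data.lights.new(name="Sun", type='SUN')
data.energy = 3.0              # sun strength
data.angle = math.radians(1)   # larger Angle = softer shadow borders

sun = bpy.data.objects.new("Sun", data)
bpy.context.collection.objects.link(sun)
sun.rotation_euler = (math.radians(45), 0.0, math.radians(30))
```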
The next light object type is the spot light. This type is almost like the point light, in that the light rays are projected from a single point in space, but it adds a cone-shaped constraint. So it only focuses the light rays in a certain direction. As the name suggests, this type is suitable for simulating spotlights. You can also use it for downlights, flashlights, car lights, and so on. Essentially, all lights that are constrained into a cone shape. As you can see, it has the Soft Falloff and Radius parameters, just like the point light. For now, let's set this to zero so we can see the cone shape effect better. To control the cone shape, you can open the Beam Shape category. We can use the Spot Size parameter to control the angle of the cone, and we can use this Blend value to make the shadow border softer. We can also visualize the cone by turning this checkbox on. If you prefer to control things using gizmos, the spot light type also provides several gizmos that you can use. You can drag from the center to resize the radius, but you can only do this if none of the transformation tools are active. Let's change the radius back to zero. You can drag this circle gizmo to control the light direction, just like before. It also detects mesh surfaces, which is nice. And if you zoom out far enough, you can actually drag this circle gizmo to determine the Blend value, right? So that is the spot light type.

The last one is the area type. This type is suitable for simulating light rays that are coming from an area. The area itself can be a large flat surface, an opening, or even a very long object. You can use this type to simulate windows, skylight roofs, rope lights, TV screens, and so on. By default, it uses a perfect square shape. That is why, if you try to use the gizmo to change its length, the other length, or the width, will also follow. If you need to have different width and length values, then you should use the rectangle shape instead. As you can see, this shape allows you to change the width and length independently. With this rectangle shape, you can easily adapt the area light to fit any window size, or you can also simulate a very long LED strip, for example. If you want to be more precise, you can use the size input fields instead of the gizmo. Say you want to create an LED strip 5 meters in length and one centimeter in width. All right. Next is the disc shape. Basically, this will use a perfect circle as the shape. Currently, the size is too big, as it reuses the last size input. Let's change the size to 40 centimeters for now. This size value is actually a diameter, not a radius, so the overall length and width of the light is now 40 centimeters. The last shape is the ellipse. This is basically an oval shape, so you can specify different values for the width and length.

Now, if you try to move the light but you end up dragging the gizmo, you may want to turn it off temporarily. For this, you may be thinking of opening the viewport Overlays panel and then deactivating the Extras checkbox. This may hide the gizmos, but not their functionalities. As you can see, if I move the mouse close enough to the gizmo border, it will still be activated. So this is not the option we are looking for. What we need to access instead is the viewport Gizmos panel. Here, you can turn off the checkbox called Active Object. Now you can move the light object without any distractions from the gizmo. Most of the time, I leave the gizmo option on. I only turn it off occasionally when the gizmo gets really annoying. All right.

The next parameter you want to look at is the Spread value in the Beam Shape category. Essentially, this controls how spread out versus how focused the light rays are. 180 degrees means that the light rays will spread from side to side, creating a 180-degree angle. This is the default and maximum value. If we set this to 90, then the light rays will be constrained to form a 90-degree angle from side to side. If you set this to zero, then the light rays will not spread out. Instead, they will be perfectly straight. This can be very useful if you want to simulate lasers. As we all know, lasers travel in a straight direction. Okay, guys, those are the four light object types that you can find in Blender. We will discuss more parameters and techniques related to light objects in the upcoming videos.
6. 01-04 Blackbody: In this video, we will discuss the Blackbody node for controlling light colors. In Blender, you can set light objects to have any color that you want. This gives us full creative control over the lighting. This is great, especially if we want to create stylized renderings or animations. However, this total freedom can be a bad thing if you aim for photorealistic rendering. Why? Well, in the real world, most lights do not produce all the possible colors that we can choose from the color picker. If you blindly choose random colors for your light objects, most likely you will end up with rendering results that are far from believable. In the real world, most lights produce a unique spectrum of light, which is measured in kelvin. The lower the value, the warmer or the hotter the color will be. These are colors like red, orange, and yellow. Vice versa, the larger the value, the cooler the color will be; in other words, more toward blue. Most people consider 6,500 kelvin as the center color or the neutral white. If you are simulating light objects and aiming for photorealistic results, then you need to confine your color selection to this spectrum only. To use these colors, you do not need to copy and paste from a chart, as Blender already provides a special node that can generate colors based on kelvin temperature, called the Blackbody node.

To use the Blackbody node, first you need to switch the light object into node mode. For this, simply click the button down here that says Use Nodes. By doing this, the light object now incorporates an Emission shader. We discussed the Emission shader briefly in the previous class. If you open the Shader editor and you have the object mode active, you can see the shader node structure here. Before we add the Blackbody node, there is one very important thing that I need to discuss first regarding the node mode of the light object. Currently, the light object has two color parameters and two intensity parameters. Please note that they are all active and affect each other. So if I choose a light blue color in this color field, and then choose an orange color in this color field, the resulting color is green, which is something that you may not expect. Also, both of the intensity values multiply one another. As you can imagine, these double parameters can be confusing to work with. If you use the node mode for a light object, I suggest that you set this color field to pure white. This way, you only need to control the color using the color field in the Emission shader. But for the intensity, I prefer to keep the Strength value of the Emission shader at one. This is so any value you set in the Power field will not be changed, as it is multiplied by one. So again, for controlling the light color, you use this color field, and for controlling the light intensity, you use this Power field. All right.

To add the Blackbody node in the Shader editor, you can press Shift A and then type in "blackbody", or just "black" for short, and then hit Enter. Next, you need to plug this into the Color input slot. Now we are using the kelvin temperature to control the color. If we input 6,500 here, we will get a white color. If we input 3,000, for example, we will get an orangish color, and if we input 8,000, we will get a bluish color. Let me delete the Blackbody node for now. Sometimes we do not want to bother opening the Shader editor and just want to work inside the Properties editor. For this, you need to click on this yellow circle and then choose Blackbody from the pop-up menu, and here is the color temperature field. As you can see, you'll get the same shader nodes as before in the Shader editor. So that is how you use the Blackbody node to control light colors. From now on, I will mostly be using the Blackbody node instead of the ordinary color picker.
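For reference, here is a hedged bpy sketch of the exact setup from this lesson: the light's own color kept pure white, a Blackbody node feeding the Emission shader's color, the node Strength left at one, and the intensity driven from the Power field.

```python
import bpy

light = bpy.context.object.data   # assumes the selected object is a light
light.color = (1.0, 1.0, 1.0)     # keep the light's own color pure white
light.energy = 40.0               # control intensity via the Power field
light.use_nodes = True            # the Use Nodes button
nt = light.node_tree

blackbody = nt.nodes.new("ShaderNodeBlackbody")
blackbody.inputs["Temperature"].default_value = 3000.0  # warm, orangish

emission = nt.nodes["Emission"]
emission.inputs["Strength"].default_value = 1.0  # leave at one, as discussed
nt.links.new(blackbody.outputs["Color"], emission.inputs["Color"])
```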
7. 01-05 Cycles related light settings: In this video, we will discuss the light object parameters that are related to the Cycles rendering engine. If you have Cycles active and you select a light object, you will see these parameters just below the main light parameters. The first parameter is the maximum bounces. Essentially, this determines the maximum number of bounces that the light is allowed to perform. So again, setting this value to 1024 does not mean that the light will bounce 1024 times. It will just cap the bounces at 1024. Now, you might be wondering, why do we need to cap the bounce value? Well, in the real world, light bounces around almost infinitely, or at least until it loses all of its energy. In computer graphics, however, you cannot have light bouncing around infinitely, as rendering would take forever to finish. What rendering engines do instead is use approximation. With just several light bounces, the rendering engine will try to approximate the final result as if the light were bouncing infinitely. Of course, the more bounces we set, the more accurate the approximation is, making the final result closer to reality. With Cycles, a minimum of four bounces is generally enough to get good-looking results. So you don't want to go lower than four.

Now, although the default value is 1024, which is the maximum number that you can get in Blender, the real number of light bounces is also determined globally by the Cycles render settings. Essentially, whichever is the smallest, the light object will use that value. I know that we haven't discussed the Cycles render settings yet, but just for an insight: you can open the Render tab, and then in the Light Paths category, you can see the Max Bounces subcategory. The most important light bounce parameter is Diffuse. This affects almost 90% of the lighting in your scene. As you can see, it is now set to four. This means that all the light objects are capped at four bounces only. Because of this, if we go back to the object parameters, it doesn't really matter if you set the maximum bounces value to 512, for example, or 32, or any number larger than four. But if I set this to 3 or 2, for example, you'll be able to see the difference. If we set this to zero, now the light rays cannot reach the monkey head object. Again, this is because the light rays need at least a couple of bounces to reach it. Yes, you can still see the monkey head, but this is due to the gray background lighting, not because of the light object. Okay? For now, let's just set this back to the default by hovering the mouse on top of the slider and then pressing the Backspace key on the keyboard, right?

The Cast Shadow checkbox is used to determine whether the light object produces shadows or not. The result may look strange and may not be that useful for architectural visualization, but you may need this option for visual effects, such as adding hidden lights or fill light effects, or adding a certain lighting mood to the scene, and the like. Next is the Multiple Importance Sample checkbox. Essentially, this option helps reduce noise in your rendering, especially if your scene contains large surfaces that emit light, or when you have strong glossy reflections. I don't know of any scenario where you would want to turn this option off, so just leave this checkbox on all the time. Okay? As for the caustics options, we will discuss them in more depth in the next lesson.

For now, let's discuss the Portal parameter. Note that you can only see this parameter if you set the light to the area type. Other light types do not have this parameter. So what exactly does this parameter do? Simply put, this will make the area light work only as an opening that channels the light samples from the background or the world shader. So you usually use portal lights at windows, doors, or any opening that connects an interior space to the outside world. You usually need this portal option for interior scenes, as it has almost zero benefit if you use it in exterior scenes. Please note that in portal mode, the area light does not produce its own light. That is why any color or intensity you specify on the light does not matter. They will just get ignored. Again, this is because the area light's role is now only to help reduce rendering noise by channeling light samples from the outside. You may get a slower rendering time when using the portal option, but in the end, because Cycles concentrates the samples, you get cleaner results while reducing the number of samples required. Personally, I prefer not to use the portal option and just have the area light emit its own light. Why? Because with this, I can have finer and more independent control over each of the lights coming through the windows, doors, or other openings in my interior scene.
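For scripters, here is a sketch of the same settings in Python. This assumes recent Cycles RNA names; these have moved between Blender releases, so confirm each name against the Python tooltip in your version before relying on it.

```python
import bpy

light = bpy.context.object.data  # the selected light's data
light.cycles.max_bounces = 1024  # a cap, not a target
light.cycles.use_multiple_importance_sampling = True  # leave this on
light.cycles.is_portal = False   # only meaningful for area lights

# The effective bounce count is also capped globally by the render settings:
scene = bpy.context.scene
scene.cycles.diffuse_bounces = 4  # the most visible bounce parameter; keep at 4+
```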
8. 01-06 Caustics: In this video, we will discuss how to create caustic effects in Cycles. So what exactly are caustics? Essentially, it is an optical phenomenon where the light rays become unevenly or randomly concentrated due to reflection or refraction off curved surfaces. You can easily spot caustics around swimming pools, or when you shine light through a transmissive object such as a glass or a bottle. But before we start anything, I need to remind you that Cycles is very bad at producing caustics. This is not a problem specific to Cycles, but a problem that all path-tracer-based rendering engines have. Basically, the way path tracers work is just not suited or not effective for creating caustics. The rendering time will be significantly longer, but the caustic effect generated is not great. Personally, I would avoid using Cycles and prefer to use other rendering engines if the project requires strong caustic effects. What you want to use instead are rendering engines based on a photon technique, or ones that can combine different techniques with the photon technique. An example of such a rendering engine is LuxCoreRender. LuxCoreRender is an open source renderer that is available for Blender. Another alternative is the Octane renderer. This is a paid rendering engine developed by a company called OTOY. These two rendering engines can generate caustic effects far better than Cycles at shorter rendering times.

Because of this limitation, Blender turns off the caustic effects by default. As you can see in the viewport, although I have a transmissive material on the glass object, the shadow is just a solid black color, as if the glass were made of a non-transparent material. If you want to turn on the caustics effect in Blender, you will have to tweak the settings in six different places. First, you need to access the global render settings, then the world settings, the light settings, the material settings, the settings of the object that emits the caustics, and the settings of the object that receives the effect. Again, even with all of these settings, you won't get great caustic effects.

Let's discuss each of the settings one by one. First, let's open the global Cycles render settings. You need to open the Light Paths category. Then, in the Caustics subcategory, you need to activate this Refractive option. As for the Reflective option, you can have it on or off. Basically, if this is on, you will get more caustic effects, as reflective surfaces will also contribute to creating the effects, of course at the expense of longer rendering time. For now, let's just turn both of these on. Now, if you have an EXR or a Sky texture in the world shader and you want them to also contribute to the caustic effects, then you need to go to the World tab, find the Settings category, and in the Surface subcategory, turn on Shadow Caustics. The next settings we need to tweak are the light object settings. Please note that caustic effects work best with smaller light areas or sharper shadows. So in our case, we can change the light type to a point light with a zero radius value. Then, the most important part is to turn on the Shadow Caustics option. Okay, so we have three settings done.

The next place we need to go is the material settings. To create a caustic effect, of course, you need to make the material transparent or reflective. We can do that by increasing the Transmission value. We discussed material settings in depth in the previous class, so I won't go into detail explaining these settings again. After we have the basic material set up, the next important setting that you need to turn on is in the Settings category. In the Surface subcategory, you can see an option that says Transparent Shadows. Make sure this is turned on. All right. The next place you need to go is the object properties of the object that generates the caustic effects. In our case, it is the glass object. In the Properties editor, open the Object tab, find the Shading category, and then in the Caustics subcategory, turn the Shadow Caustics option on. Finally, you need to select the object that will receive the caustics effect, which in our case is this box object. As before, you need to go to the Object tab, open the Shading category, then the Caustics subcategory. What you need to turn on this time is the Receive Shadow Caustics option. Finally, we have the caustic effect. As I mentioned earlier, Cycles' caustic effect is not that great, at least if you compare it with photon-based rendering engines.
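As a partial scripting sketch of the six switches above: I am confident about the two global toggles below; the remaining four live in the World, Light, Material, and Object tabs, and their Python names vary by version, so treat the commented names as assumptions and verify them via the UI tooltips.

```python
import bpy

scene = bpy.context.scene
# Render tab > Light Paths > Caustics
scene.cycles.caustics_refractive = True
scene.cycles.caustics_reflective = True  # optional; costs extra render time

# Assumed RNA name for the light's Shadow Caustics toggle; "Point" is a
# hypothetical light name, so verify both against your own scene.
bpy.data.lights["Point"].cycles.is_caustics_light = True

# The remaining switches (World > Settings > Shadow Caustics, Material >
# Settings > Transparent Shadows, and the per-object Cast/Receive Shadow
# Caustics checkboxes) are easiest to set from the UI, as shown above.
```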
9. 01-07 IES: In this lesson or video, we will discuss IES. In Blender, and in computer graphics software in general, by default all light objects produce an even or uniform pattern of light rays. Those light rays then become uneven when they go through refractive materials or bounce off non-flat reflective materials. We discussed this in depth before in the caustics lesson. That is why we can see caustic effects easily on glasses or around swimming pools and the like. But in the real world, although we may not see them clearly, caustic effects are actually more abundant. This is because the glass or reflective surfaces on lighting products themselves cause caustics. So unlike CG, most real-world lighting products generate caustic effects out of the box. For example, a round LED bulb generates a different lighting profile compared to an oval-shaped, incandescent conventional light bulb. Again, this is due to the caustic effects generated by the glass of each product. Some products even have reflective casings and irregular glass or transparent plastics in front of the filament or LED emitter. All these factors contribute to different and distinctive lighting profiles. If you want your rendering to be as realistic as possible, then you want to apply these irregularities in your lighting. But you do not want to model each of the glass parts and the reflective parts for every lamp product you want to use in your rendering; that would be too time consuming. And also, as we discussed earlier, Cycles is not that good either at generating accurate caustic effects. The best solution is to use an IES file.

So what exactly is IES? Well, in terms of the name, IES is derived from the words Illuminating Engineering Society, which is a lighting industry consortium founded in New York City on January 10, 1906. So, yes, the name does not really help to describe what IES actually is. In terms of file definition, an IES file is a file that describes how the light coming out of a certain lamp product is distributed or spread across 3D space. Some people also refer to these files as photometric files, light web files, or web light files. Almost all lamp manufacturers already measure the light distribution from their products, and they usually provide these files for free for us to download. These IES files can then be used by lighting engineers to simulate how the lighting product behaves when installed on different buildings, structures, vehicles, and other scenarios. As 3D artists, we may not need to simulate lighting conditions accurately or in too much detail. But these IES files are certainly helpful for us to create more realistic lighting, because they can prevent us from having to simulate caustic effects.

You can search the Internet to find a lot of free IES files. You can even find IES files directly from the manufacturers' websites, that is, if you need to simulate the lighting of a specific product. For example, on the Zaneen website, you can scroll down and download these technical files. This is basically a zip file with the IES file inside. Some lamp manufacturer websites have two different versions. One version is for general customers, and another is for resellers or partners. For example, if you open flos.com, by default you'll see this type of product page for general buyers. You won't see any link to download the IES file. But if you click on this link, "Visit professional space", you'll see a different page where you can find all the technical specifications about the product, one of which is the IES file. Please note that some manufacturers require you to create an account first before you can access their partner portal, right? For this lesson, I already provide you with two IES files, called art.ies and defined.ies. There are also PNG files that go along with them, so we can preview how the IES will look, even if you are not using any 3D software. Please note that I did not create these files myself. I downloaded them from a website called LeoMoon Studios.

Now, to use an IES file in Blender, first, you need to select the lamp object. Please note that although you can use any light type to generate an IES light profile, you should stick with the point light. We will discuss more on this later. And then you also need to make sure that the light object is in node mode. If not, then you need to turn on the Use Nodes option in this area. Next, inside the Shader editor, you can press Shift A, then Texture, and then choose IES Texture, or simply type "IES" then Enter. After that, you need to plug the node into the Strength input slot of the Emission shader. Let me delete this for now. A faster way to do this is by dragging from the Strength input slot, releasing the mouse, and then typing "IES" and then Enter. Okay. Next, because we already have the data inside an IES file, you need to choose the External option and then click the Browse button to select the file. Just open the folder where you saved the IES files. Feel free to use your own files if you want to. Remember, you need to select the IES file, not the PNG file. Again, the PNG files are only for previewing. If you select the PNG file, the result will not be correct. Click Accept, and now you can already see the result in the viewport.

Somehow, the result is a bit off. It is rotated, and the intensity is way too strong. Let's tackle the rotation first. Essentially, the profile looks rotated because the lamp object is rotated. In Blender, IES profiles are always projected in the minus Z axis direction of the lamp object. If you activate the local coordinate mode, we can see that indeed the Z axis is looking this way, not straight down. Just as a reminder, the lamp object was rotated because in the previous lesson, we used the other light types and then dragged this look-at gizmo around. So yes, you can direct an IES profile using this method. Just activate one of the other light types, move the look-at gizmo, and then go back to the point light type. But for now, let's say we want a straight top-down IES profile. To quickly reset an object's orientation in Blender, you can press Alt R. If you forget the shortcut, you can also open the Object menu, then choose Clear and then Rotation. As you can see, the shortcut for this command is Alt R. Another method that you can use is clicking on any of the rotation values in the Item panel on the right, then choosing to reset all to default values. This will zero out all the values, which makes the light object's Z axis straight. Okay? So that solves the rotation issue.

As for the intensity, first, you need to understand that most IES files already contain information about how strong the light is. So most of the time, you should set the Strength value here to one, and the Power value here to one watt also. This should set the intensity to its default, or in other words, follow the IES data. If later you want to increase or decrease the intensity, simply use either this value or this value. You do not want to use both values at the same time, as they multiply each other. Personally, I prefer to keep this value at one and just play around with the Power value in the light panel. I think I'll go with five watts for now.

The next thing you need to be aware of when using an IES profile is the radius value or the area size of the lamp object. If you want a nice, sharp-looking IES profile effect, you should always set the radius to zero, or at least a very low number. If you increase the radius value too high, the result will become too blurry, making it look like an ordinary light without the IES profile. That defeats the purpose of having the IES file in the first place. This is also the main reason why you want to stick with the point light type, as the other types have an area or radius by default. The only exception is using the spot light type, that is, if you want to clip the effect of the IES profile. For this, you also need to make sure that the radius is set to zero or a very low number. Then, to clip the effect of the profile, you can use the Spot Size parameter. The lower the number, the smaller the cone angle, making more of the IES profile clipped. If you want to see the complete IES profile, you should use the maximum number, which is 180. But even with this value, the point light type still shows a more complete profile compared to the spot light. So again, you should stick with the point light when using IES files, unless you want to clip the effect; in such a case, you can use the spot light.
10. 01-08 Material based lighting: In this video, we will discuss how to create
lighting using materials, just to recap what we
have learned so far. In blender, we can
light or scene using three different categories world light objects
and materials. In the world category, we can use just a simple
background color, sky texture,
environment texture or a combination of sky texture
and environment texture. As for the light objects, we can divide light
objects into four types, poin, sun, spot, and area. Finally, the third
lighting category which we are going to discuss
now is materials. Essentially, we use material as the light source
to light or scene. In this category, we can use either the principal BSDF
Sader or the Mission shader. We have discussed how to use the principal BSDF seeder as a light source in
the previous class. We even discussed how to have finer control of the lighting
using an emissive texture. So I'm not going to discuss
it again in this video. What I want to focus on
instead is how to use demision shader and
different scenarios on when to use both methods. When you want to
make a surface glow, but at the same time, you want to surface to have material properties
such as reflection, glossiness,
transmission, and so on, then you should use the
principle BSDF shader. Just for example,
you want to create a goose shader or a
holographic effect shader. For this, you can turn down the Alpha value of the shader. And then increase the
emission strength to make the material emit a bit
of light into the scene. Another example, let's say
we have this mirror model. We want to create an LED
effect around the mirror, but we want it to be subtle and look like the LED is
placed behind the glass, not in front of the glass. For this, we also need to use
the principal BSDF shader. First, we select the faces on the face loop by holding out and then clicking on
one of these edges. Create a new material slot, then sign it to
distracted phases. Next, create a new material. To make the material
look like a mirror, we can bring up the metallic
value all the way to one and then set
roughness value to zero. After that, we can increase the emission strength
to make it emit light. If you still want to see
a bit of refraction, you should avoid values
higher than one. So 0.9 or 0.8 should
do it. All right. Now, if you need to create
a light source material, but you don't need any
other material properties, then what you should use
instead is the Emission shader. Why? Because the Emission shader is cheaper on performance compared to a full-featured Principled BSDF shader. So how can we create an
Emission shader then? There are two ways to do this: via the properties editor and via the shader editor. For example, let's say we have a ceiling like this, and we want to add a hidden light in this area. First, select the face or faces that will emit the light, then create a new slot, assign the slot to the face, and then create a new material. By default, Blender always
creates a Principled BSDF shader. To convert this shader into an Emission shader, simply click here and then choose Emission. This is one way to do it. Let me change this back to the Principled BSDF shader. If you prefer using the shader editor, just delete the Principled BSDF node, then press Shift A, type "emission", press Enter, and then plug its output into the surface slot. After this, you can set the color using a Blackbody node. That is, if you prefer realistic light colors; let's say 4,000 kelvin, so it is a bit warm. And then you can
play around with the strength value to
determine the light intensity.
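As a quick aside, here is a minimal Python sketch of this same node setup, assuming the material name and the strength of 5 are placeholder values you would tune per scene:

```python
import bpy

# Build an Emission shader whose color comes from a Blackbody node.
mat = bpy.data.materials.new(name="HiddenLight")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()  # remove the default Principled BSDF

output = nodes.new("ShaderNodeOutputMaterial")
emission = nodes.new("ShaderNodeEmission")
blackbody = nodes.new("ShaderNodeBlackbody")

blackbody.inputs["Temperature"].default_value = 4000.0  # a bit warm
emission.inputs["Strength"].default_value = 5.0         # light intensity

links.new(blackbody.outputs["Color"], emission.inputs["Color"])
links.new(emission.outputs["Emission"], output.inputs["Surface"])
```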
At this point, you may be wondering: what if we want to create
a monitor or a TV screen? Can we do that also using
the Emission shader? The answer is yes. But because we will
be using a texture, you need to make
sure that the model already has a correct UV map. We discussed how
to set UV maps in depth in the previous
class, just as a reminder. Let's say we want to create a UV map for this TV model. Just select the face that forms the screen part. Then, because we will be using the project from view method, press 1 to view the scene straight from the front view, then open the UV menu and choose Project from View (Bounds). We want to use the bounds version so that the face makes use of all the available UV space. Then, just like before, we can create a new material slot and then a new material, and assign it to the face. Then change the shader type
to the Emission shader. Next, in the shader editor, select the Emission shader node, then press Ctrl T to create
the image texture nodes. Click the open button, and then browse for the image you want to use as the texture. Then click the
open Image button. Finally, you can use the strength value to
control its intensity. And here's the result. It is almost the same as what we created in
the previous class. But this time we use
an Emission shader instead of the Principled BSDF shader. Okay, guys, those are the different ways we can use materials as light sources in Blender.
11. 02-01 Camera creation and transformation: From this lesson forward, we will discuss the
Cycles render settings. But to perform rendering in Blender, you need to have at least one camera in the scene. Blender will just throw an error message if you try to render without any camera. Because of this, before discussing the render settings, we better discuss how to work with cameras first. Just like in the real world, in Blender, or 3D applications in general, a camera is a type of object that we use to define a view. To create a camera, we can use the usual object creation methods. We can open the Add menu and then choose Camera, or if you prefer the shortcut, you can press Shift A and then choose Camera. The new camera object will be placed at the 3D cursor location. Let's move this camera
so we can see it better.
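For reference, here is a minimal Python sketch of the same camera creation, assuming you run it from Blender's scripting workspace; the name is just an example:

```python
import bpy

# Create camera data and an object that wraps it, then link it
# to the active collection, just like the Add menu does.
cam_data = bpy.data.cameras.new(name="Camera")
cam_obj = bpy.data.objects.new(name="Camera", object_data=cam_data)
bpy.context.collection.objects.link(cam_obj)

# Place it at the 3D cursor, matching the Add menu behavior.
cam_obj.location = bpy.context.scene.cursor.location
```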
To see what the camera sees, or to activate the camera view, you can press the zero key on the numpad. In camera view mode, you will see this dark overlay on the viewport. This dark overlay indicates the area outside of the rendering result. So again, when you render, you will only get this
square bright area. The square shape is due to the resolution values
in the output format. Currently, I have the
shape of a perfect square because I assigned the same value to the X and Y fields. We will discuss more of these parameters later. In this view mode, you can freely scroll the mouse wheel to zoom in and out. You can also press Shift and the middle mouse button to pan the viewport. This will not take you out of the camera view. But if you try to rotate the view using the middle mouse button, you will go out of the camera view and back to the standard view mode. Activating the axis view modes, such as top, front, side, and so on, will also get you out of the camera view mode. Now if you are in a certain
view and then you press the zero key to activate the camera view, then if you want to go back to your previous view, you can do that by pressing the zero key again. If somehow you forget the shortcut, you can also activate the camera view by going to the View menu, Cameras, and then Active Camera. Or you can also click on this camera icon on the right-hand side of the 3D viewport, right? Let's discuss how to control the camera
object transformation. But before we do that, let's first divide the
viewport into two parts. While the mouse is on the right part, activate the camera
view by pressing zero. Okay. You can move a camera object using the move tool or
via the G shortcut. You can also rotate a
camera object either using the rotate tool
or via the R shortcut. As you can see,
moving and rotating the camera object will
affect the camera view. What will not affect the
camera view is scaling. Scaling a camera object will only change its
appearance in the scene, but it will not affect the
camera view in any way. So most of the time, you don't need to use
scale transformation on camera objects. Now, although using the global orientation is fine in most cases, sometimes you want to tweak the camera position or rotation in its local orientation. You can activate the local
orientation mode and then move or rotate the camera
object within this mode. However, I found
this method not that convenient as we have to switch the orientation
mode back and forth. A faster and easier way
to do this is by using the transformation
shortcuts right inside the camera view. For this approach, you do not need to set the
rotation mode to local. You can just leave it as global, but make sure you are inside
the camera view mode. Also, you need to make sure that the camera
object is selected. If you select another
object and then press G, for example, that object will move instead and not
the camera object. One easy trick to select
the camera object while in the camera view mode is by clicking on this
rectangular border. If the border is yellow, you know that the camera
object is selected. To move the camera on
the X and Y local axes, you can simply press G. Again, as a reminder, your mouse cursor needs to be inside this view, not this view, because this
view is the camera view. If you use the G
shortcut in camera view, the movement is locked to
the X and Y local axes. After that, you
can left click to confirm or right
click to cancel. If you want to only
move the camera up or down, after pressing G, you can press the Z key once. Essentially, the camera is now moving in the global Z axis direction. In this condition, if you press Z again, this will activate the local Z axis constraint. Now, you can only move the camera forward or backward. This type of camera movement is also known as dolly. And if you press Z again, you are now back to the initial movement mode where you are moving the camera in the X and Y local axes. All right? The same concept
applies to rotation. Basically, you can use the X, Y, and Z keys to constrain the
rotation globally or locally. If you press R once, you are rotating the camera around its local Z axis, also commonly known as roll. In this condition, if you press Z, you are now using the global Z axis to rotate the camera. This camera movement is often called pan. And finally, if you want to tilt the camera, after pressing R, you can press the X key twice to activate the local
X axis constraint. Again, this type of camera rotation is
commonly known as tilt.
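If you ever need these camera moves from a script, here is a small sketch of a dolly using Blender's Python API; the camera name and the one-meter step are just example values:

```python
import bpy
from mathutils import Vector

cam = bpy.data.objects["Camera"]  # example camera name

# A camera looks down its local -Z axis, so a dolly is a move
# along that axis transformed into world space.
forward = cam.matrix_world.to_quaternion() @ Vector((0.0, 0.0, -1.0))
cam.location += forward * 1.0  # dolly in by one meter
```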
After you understand all of these transformation techniques, you now know that this secondary viewport is actually not necessary. So let's join these
two back together. If you like to play games, especially first-person shooter games, you may already be familiar with the WASD keys for
performing navigation. In Blender, this type of navigation is called the fly/walk navigation.
navigation mode, you can hold the Shift key and then press the tilde key, which is the key just below the Escape key. On my computer, I have already assigned this command to a custom keyboard shortcut, which is the W key. I did this because I rely on the fly/walk navigation a lot, especially for navigating large scenes. While in this mode, you can use the W, A, S, and D letter keys to move around the 3D space and then use the mouse to look around. To move up and down, you can use the Q and E keys. While in this navigation mode, you can activate gravity and collision detection by
pressing the tab key. Just make sure that you have
an object below your feet. Otherwise, you will
fall down infinitely. If you want to cancel,
you can right click. This will bring you back
to where you were before. But if you happen
to like the view, you can click or press the left mouse button to confirm. Okay. Regardless of whether you know this fly/walk navigation or not, you may be wondering: what does this technique have to do with the camera object? Well, if you are in a camera view and you perform the fly/walk navigation, you will not go out of the camera view. Instead, you are controlling the camera object via the fly/walk navigation method. To prove this, let's press zero to activate the
camera view mode. Then, while in this mode, press Shift Tilde to activate the fly/walk navigation. In my case, I just
need to press W once. You can now move
around the scene while taking the camera
object with you. You can go forward or backward, left or right, up and down. And you can look
around with the mouse. As you can see, we
are doing all of this while never going out of
the camera view mode. If you like how the view looks, you can left-click to confirm. But if you wish to revert back to the previous view, you can right-click to cancel. Another approach to controlling
the camera position and rotation is to align it with
the current viewport view. For this, we can use the shortcut Ctrl Alt Numpad 0. For example, say we rotate and zoom the viewport freely, and at a certain point, we really like how the view looks. In this condition, we can set the camera to align to this view by pressing Ctrl Alt Numpad 0. As you can see, the camera just jumps and orients automatically to where we are looking at the scene. If you forget the shortcut, you can also access this command via the View menu, Align View, and then Align Active Camera to View.
12. 02-02 Resolution and the active camera: In this video, we will discuss the output resolution
settings and after that, how to work with multiple
cameras in Blender. In the properties editor, you can open the Output tab to access the
resolution settings. Up to this point, we already
know that the shape of the camera border is defined by these resolution
X and Y values. I don't know if you
consider this a benefit or not, but in Blender, rendering resolution is a global value that will affect all cameras in the scene. Some people do not like this concept and prefer to have different resolutions for different cameras. If you are one of these people, then you may be interested in using an official add-on called Per Camera Resolution. You can get it from the extensions repository. In this course, we will be using the default behavior, so global resolution values for all cameras. All right? To see the
resolution effect on camera, we can try different values. For example, we can set
the X value to 400 pixels, then the Y value to 300 pixels. Basically, we now have a
four by three image ratio, which is a landscape ratio. But if we swap the values, we set this one to 300, while this one to 400. Now we have three by four image
ratio, or a portrait ratio. Note that when Blender
performs rendering, it will take into account
the percentage value. Usually, the way we use
these parameters is that we set the actual
resolution target on the X and Y values. But before we do
the final render, we usually need to do a
series of test renderings. To make test or preview renderings, we can lower the
percentage value to 75% or 50% and so on. Yes, the image results
will be smaller, but the rendering
process runs faster. Only after we are satisfied
with the preview, we set this to 100% and
then do the final render. You can also input higher values than 100% if you want to. Indeed, it only maxes out at
100% if you drag the slider. But if you click on it and
then type 200, for example, now the actual rendering will
be at 600 by 800 pixels, 200% horizontally,
and 200% vertically. Essentially, this will
make the actual pixels generated four times
the previous amount, right?
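Here is a minimal Python sketch of these same output settings, useful if you want to script test renders; the values are just the examples used above:

```python
import bpy

scene = bpy.context.scene

# Target resolution for the final render.
scene.render.resolution_x = 300
scene.render.resolution_y = 400

# Render at 200%: the actual output becomes 600 by 800 pixels,
# four times the pixel count of the 100% render.
scene.render.resolution_percentage = 200
```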
Now, I know that these numbers are not a common video resolution. If you want to use a full
HD resolution, for example, you can type the
values manually, which are 1920 in the X field and 1080 in the Y field. An easier way to do this is
by using the preset feature. To access it, you can
click on this icon. It looks like a list. Sorry. Let me drag this down so the pop up menu does not
block the parameters. So this is the full
HD resolution. This is the HD resolution, and this is the
four k resolution. Notice that selecting some of these presets will change
not just the resolution, but also the aspect values and the frame rate value. These aspect X and Y values are not the image
resolution aspect ratio, but they are the
pixel aspect ratio. You see, in the old days when we still had large tube or CRT televisions, when we looked closely at each of the cells in those monitors, they were actually not a perfect square. Instead, they were squashed or elongated horizontally. So the actual size presented to the viewer is usually wider
than the resolution we set. Different countries use
different TV standards. In my country, Indonesia, we use the PAL standard. If you live in Europe and you are as old as I am, I believe you are already familiar with the PAL standard. But if you live in America, Canada, or Mexico, you may be more familiar with the NTSC standard. Nowadays, people are
shifting away from traditional TV to YouTube or other Internet
based video content. And also, the pixels that exist on modern LED monitors have a square aspect ratio. So most of the time, you want to leave these aspect values at one by one. That is, unless you have specific reasons to target older television devices. Now, if you often use certain resolutions
or frame rate values, you may want to consider saving those settings as
a custom preset. For example, suppose you
use 1080 pixels by 1080 pixels resolution with 30 FPS very often for creating Instagram reels. You can save these values as a preset by clicking on the preset icon again, and then click on this field that says New Preset, type the name of the preset, say, Instagram 1080p, and then click on the plus button. Blender automatically orders the presets alphabetically. So you can find the new preset just after the H letters. If you want to remove a preset, simply press this minus button. Okay? The rest of the
settings below are mostly for rendering and not
so much for camera settings. So we will discuss
them in later lessons. In a single scene, you can have as many cameras
as you need. You can add more cameras using any of the
object creation methods. Or you can duplicate the camera using the Shift D shortcut or the Alt D shortcut. Or you can also duplicate the camera using the Ctrl C and Ctrl V shortcuts, inside the viewport or inside the Outliner editor. All right? Now, although Blender allows you to have many
cameras like this, there can only be one active camera in the scene. You can tell that a camera is active by the solid triangle on top of it. In our case, this is
the active camera. Please note that
the active camera is different from
the selected camera. You can select this
one, for example, but that does not make
it an active camera. This camera is still
the active camera. You might be wondering, why is understanding the
difference crucial? Because when you press the Numpad zero key to
switch to the camera view, Blender will use
the active camera, not the selected camera. In the 3D viewport, you can set a camera object as the active camera by right-clicking on it and then choosing Set Active Camera. As you can see from the menu, the keyboard shortcut for this command is
Ctrl Numpad 0. Now this camera object has a
solid triangle on top of it, so we know that this
is the active camera. That is why if you press Numpad 0, Blender uses it as the view. Personally, I prefer to use the Outliner to select
and activate cameras. You can tell which one is the active camera by the small camera icon on the right side. If the icon has a dark color behind it, you can be sure that it is the active camera. So the left icon shows the selected camera, while the right icon shows which one is the active camera. If you want to set a certain camera as the active camera, simply click on the right icon. Notice that if you
are in camera view, switching the active camera will automatically
switch the view.
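For completeness, here is how the same active-camera switch looks in Python; this is a minimal sketch, with "Camera.001" as a stand-in for whichever camera you want to activate:

```python
import bpy

# The active camera is a property of the scene, independent
# of which objects are currently selected.
scene = bpy.context.scene
scene.camera = bpy.data.objects["Camera.001"]  # example camera name
```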
13. 02-03 Camera types and settings: In this video, we will discuss the
camera settings; that is, the settings that are unique to each of
the camera objects. To access these settings, first, you need to make
sure that the camera you want to control is selected. It does not have to
be the active camera. But if you want to preview the changes right
away in the viewport, you do want to make it
an active camera, right? In the properties editor, you can find a tab
for the camera data, the one that has
a camera symbol. The topmost parameter
is the camera type. There are three options
that you can choose from: perspective, orthographic, and panoramic. Notice how the parameters below change as we change the camera type. Let's discuss the perspective type first. Most of the time, this will be the camera type you want to use, as it simulates how common consumer cameras work in the real world. The most important parameter of the perspective camera is
the focal length parameter. This controls how wide the camera captures
its surroundings. If you have used cameras before, you may already be familiar with this lens millimeter value. The higher you go, the closer you zoom in on a subject, thus making the perspective effect almost gone and fitting fewer objects in the frame. Vice versa, the lower you go, the wider the viewing angle is, making more subjects visible at the cost of stronger perspective distortion. Personally, I like to use values between 20 millimeters
to 35 millimeters. Now, besides using
the lens millimeter value, Blender also allows us to use degrees for
the camera angle. For this, you need to change the parameter to field of view. As you can see, now we are using degrees instead of millimeters. If you set this to
60, for example, it means that what the camera is seeing forms a 60 degree angle. So the larger the value, the more subjects can be captured, but at the cost of stronger perspective distortion, while smaller values mean fewer subjects can be captured, with less perspective distortion. All right.
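As a quick sketch, here is how those two ways of specifying the camera angle look in Python; the 35 millimeter and 60 degree values are just the examples from above:

```python
import bpy
import math

cam_data = bpy.data.objects["Camera"].data  # example camera name

# Option 1: specify the lens in millimeters.
cam_data.lens_unit = 'MILLIMETERS'
cam_data.lens = 35.0

# Option 2: specify the viewing angle in degrees instead.
cam_data.lens_unit = 'FOV'
cam_data.angle = math.radians(60.0)  # the angle is stored in radians
```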
The next parameters are the shift values. We can use these values to add small offsets to the position
of the camera frame. You do not want to use large values on these two
parameters as that will make the rendering result look strange in terms of the
perspective effect. If you want to make big changes
to the camera position, then you should use the camera
transformation instead. The most common usage of these shift parameters
is when you need to create a two point
perspective effect. This is very common in the
architecture field, where we have vanishing points horizontally, at the left and right, but no vanishing
points vertically. In other words,
all vertical lines stay parallel to one another. If we switch to this
camera, for example, when we position the
camera quite low, while it is looking up, trying to capture the
whole subject, inevitably, you will get these
vertical distortions as the vanishing points kick in. To get rid of the vertical
perspective, first, we need to turn off the camera tilting. In Blender, as we discussed before, camera tilting is controlled by the local X axis. You do not want to set this to zero, as the camera will look straight down to the ground. What you want is to set this to 90 degrees. Then you also want to make sure that the Y rotation is set to zero. This is so the camera does not roll. For safety reasons, you can lock these two values to prevent
any accidental changes. Now, even if we are controlling the camera via the fly/walk navigation, the camera won't tilt or roll. Notice that currently
the subject is not really in the frame. If you only use the Z position like this, you are changing the vanishing points. This is not what we want. Imagine if you have to render a skyscraper building; you do not want to have the camera positioned at the middle of its height. This is where the Shift Y value can come in handy. Without changing the perspective
or moving the camera up, you can have the
subject in the frame. We can also use the
Shift X value if you need small adjustments in framing the object horizontally. And if the subject is still
not in the frame perfectly, you can move the camera
forward and backward. Alternatively, you can change the focal length or
the viewing angle. All right. Next are the
clip parameters. Essentially, we use these values to hide or cut off objects from rendering if they are too close or too far from
the camera object. If I change the clip end parameter, for example, and then drag the value up or down, you can see that all objects located farther away than the clip end value will get cut off or hidden. You can see the camera clipping effect in all the
viewport modes. Currently, most of
the back part of the house is missing
or being clipped. Let's set this to
100 meters for now, so we can see the back
part of the house again. Now, let's try turning
down the clip start value. As you can see, all
objects or surfaces that are closer than the clip
start value get cut off. This feature is interesting, as you can use this to
visualize the sections of a building quite easily, without having to use the Boolean modifier. Now let's discuss
the other types of cameras that Blender can offer. If you set the type option to orthographic, Blender will disable all perspective effects completely, making the rendering result flat, or look like 2D. I think we can see
the difference better with an
aerial view camera. So this is the perspective mode. And this is the
orthographic mode. As you can see, in
orthographic mode, Blender removes all
vanishing points, making all parallel lines
become truly parallel. This camera type can be useful when you need to render
isometric floor plans, for example. You can also
combine this mode with the clipping
parameters to create straight or top
down floor plans. Essentially, any rendering that looks like 2D. Now, in orthographic mode, you don't have the focal length or field of view parameters. What you have instead is an orthographic scale parameter. The more you increase this value, the larger the rendering frame will be. Please note that this number is actually in meters. That is, if your scene uses the metric system. So if I use ten, for example, the frame size is now 10 meters in width by 10 meters in height. Up to this point, you
may be wondering, what if we use a different
image aspect ratio? Say we set the output X value to be half of the output Y value. In this scenario, whichever side is the longest will be granted 10 meters in length. So the frame height is now 10 meters, while the frame width follows in proportion, which is 5 meters in this case, right? The last type of camera
is the panoramic type. Basically, with this type, you can render the scene as if you are using a 360 camera in the real world. Blender supports many different types of panoramic rendering. But please note that
you cannot preview the panoramic effect
if you are in a solid view mode or the
material preview mode. You need to use rendered
view mode to see the effect. So this is the fish eye mode. This is the mirror ball mode. And this is the
equirectangular mode. If this looks familiar, it is because most of the EXR or HDR textures available on the web use this equirectangular mapping mode. As you may have already guessed, mostly we use panoramic rendering results for other 3D applications such as VR, 360 videos, game environments, and so on.
14. 02-04 Sampling and Denoising: In this video, we will start discussing Cycles
render settings. As you may already know,
we can access rendering settings in the properties
editor inside the render tab. But before we do anything, just as a reminder, you should first set your rendering
device correctly. To do that, you can open the preferences window and
then open the System tab. If you are using an NVIDIA RTX graphics card, you want to make sure you choose OptiX and activate your graphics card from the list. Personally, I do not turn on my CPU, as it can actually slow things down. Now, if you own an older NVIDIA card prior to the RTX generation, then you can choose the CUDA option instead. If you have an AMD graphics card, you want to choose the HIP option. And if you use an Intel graphics card, then you want to choose the oneAPI option. After you set the device correctly, you also want to make sure that in the Cycles render settings, you have GPU Compute as the device. This makes sure that you have the optimum hardware
acceleration for the rendering process, okay? To perform an image render, you can open the Render menu and then choose Render Image, or you can simply press
F12 on the keyboard. After you run the render command or press F12, Blender will open
a floating window to show the rendering result. You can press the
X button to close the window and cancel
the rendering process. But if you wait until the rendering is finished and
then you close the window, you can still see
the rendering result in the Image Editor, or simply by opening the
rendering workspace. All right. As you can see, Cycles offers a lot of render settings
that you can tweak. In this video, we will focus
on the sampling category. Inside the sampling category, you can see two subcategories: viewport and render. Please note that you'll see a lot of these two categories, viewport and render, throughout the render settings. Essentially, the viewport category affects the rendering that you see in the 3D viewport when you set it to rendered view mode, while the render category affects the final rendering, or when you press F12. Usually, you want the
viewport settings to be lower in
quality but faster. While in render settings, you want higher quality, although it is slower. The two most
important parameters in Cycles that you need to know are the noise threshold
and the maximum samples. You see, Cycles is a path tracer rendering engine. In general, the way a path tracer works is like this: first, it analyzes the camera view and divides or maps it, based on the target resolution, into a grid of pixels. Then, to determine what color should be put on a pixel, the render engine shoots out a lot of rays from that particular pixel into the 3D space in all directions. These rays are called samples. The more samples a pixel
emits to the scene, the more accurate information
that can be gathered. And so the more
realistic the color assigned to the pixel. The maximum samples value determines the maximum number of samples that Cycles is allowed to use for each of the pixels. Again, the higher the value, the more realistic or accurate the rendering is, but at the cost of longer rendering time. The default value is 4,096, but usually you can get away with lower values. For exterior scenes, you can safely set this to 1,000. As for interior scenes, you can safely use 2,000 to 3,000 samples. Now, each pixel is not created equal.
pixels may require a lot of samples while others
require fewer samples. This is where the noise
threshold comes into play. By using this feature, Cycles can stop processing
more samples for a certain pixel if enough
information is gathered. So it does not matter how high you set this maximum
samples value. If a pixel has already had enough after sending 500 samples, for example, it will
stop at that point. So as you can see, this value plays a
very important role in speeding up rendering
time intelligently. If for some reason, you do not want to
cap the samples, you can set this value to zero. Alternatively, you can also
just turn off this checkbox. In this condition, each
pixel will emit the same amount of samples as the value you set in maximum samples. Just be aware that the rendering
time will be quite long if you decide to do this. Personally, I always keep this on. So how does this
value work then? Simply put, the lower the value, the less forgiving Cycles will be, and so the more samples are used. You get higher quality, but at the cost of longer rendering time. Vice versa, the higher the value, the more forgiving Cycles will be, meaning that each pixel can have fewer samples, which results in lower quality rendering but faster rendering time. There is no magic number
that works for all scenes, but to get you started, you can use values
between 0.01 and 0.1, with 0.01 being the highest quality and 0.1 being the
lowest quality, right?
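Here is a small Python sketch of the sampling settings discussed so far, using the example values from above (1,000 samples for an exterior scene and a mid-range noise threshold):

```python
import bpy

cycles = bpy.context.scene.cycles

cycles.device = 'GPU'              # use GPU Compute
cycles.samples = 1000              # maximum samples for the final render
cycles.use_adaptive_sampling = True
cycles.adaptive_threshold = 0.05   # noise threshold; lower means higher quality
cycles.use_denoising = True        # denoise the final render
```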
Besides limiting the number of samples, you can also limit
the sampling process by time using this field. If you set this to 10
seconds, for example, then Cycles will use whatever information it has already gathered after 10 seconds and finalize the rendering, or continue to the denoising process if you have denoising turned on. Personally, I almost
never use this feature, so I just leave
this field to zero. After the sampling process, the rendering result
usually still looks noisy. That is why we need to use the denoiser. Essentially, the denoiser is a program that will scan the rendering result and then try to remove the noise from it. Most denoisers use an AI model trained on a lot of images in many scenarios. Just for an example, here is an image that I rendered without any denoiser. And here is the image with the denoiser turned on. As you can see, the image before the denoising looks quite noisy, while the image after the denoising process looks smoother. Now, you may be wondering,
without a denoiser? Yes, we can. We can turn it off. To compensate for the noise, you can then set the max samples quite high and also set the noise threshold to a very low number. But in my experience, you will need about one to three hours of rendering time if you want the result to be completely free from noise. So in most cases, that is, if you want to render in just a couple of minutes, you should use the denoiser. Cycles supports two different denoiser engines. The first one is called OptiX, which is a denoiser
developed by NVIDIA. And the second one is called Open Image Denoise, which was developed by Intel. Before, Open Image Denoise always performed better than OptiX. But since version 4.4, NVIDIA upgraded OptiX, so now it performs better than Open Image Denoise. At the time of recording this video, you can only use OptiX if you have an NVIDIA RTX graphics card. So if you use other graphics cards, you can only use the Open Image Denoise engine. For the highest quality, you want to leave these
settings to their defaults. And if you want a faster denoising process, you may want to turn on this option. Again, this works on all graphics cards, including NVIDIA, AMD, and Intel graphics cards. The rest of the settings
in the sampling category, you should just leave
them to the defaults as they already work
best for most scenarios.
15. 02-05 Max bounces: In this video, we will discuss the maximum bounces parameters. To access these parameters, you need to open the Light Paths category inside the render tab. Let's first reset all these
parameters to the defaults, so we have the same
values to begin with. You can hover on
any of the fields and then press the Backspace key on the keyboard. This is doable if you only need to reset one or two values. But if you have a lot
of values like this, it would be better or faster
to use the preset feature. Simply click on this list
icon and then choose default. We've discussed value
presets before. So just consider this a reminder. Okay, as the name implies, the maximum bounces parameters
control the maximum number of bounces that the light rays can perform during rendering. The total value at the top will cap all the values below it. So if you set this to two, for example, then it does not matter
if you set any of these values to three
or four or even 100, all of them will
be capped at two. So again, it is just a faster way to cap
all the values globally. One value that will
not be affected by the total value is the
transparent maximum bounces. That is why the
transparent field is separated from the
rest of the fields. As you probably guessed already, in general, the more bounces you set, the more accurate the render result will be, but at the cost of longer rendering time. The most important maximum bounces value is the diffuse. Essentially, this controls how many times light bounces when it hits diffuse or rough surfaces. These bounces also contribute to the color bleeding effect: you can see a bit of green color on the floor object due to the
green color of this block. Notice that the
floor is actually just a plain gray if the
green block is hidden. If you set this value to zero, then the light does not bounce at all on rough surfaces. The only reason why we can still see the teapot under this green block is due to the world lighting and also the reflection of glossy bounces. If we turn off the world lighting and zero out the glossy bounces, now the teapot under the green block is completely black. All right, let me turn this back on and also set this
to the defaults. The next one is the glossy bounces. Essentially, this controls the number of visible reflections in the rendering result. Four means that we can see reflections as far as four bounces. In other words, we can see objects inside a reflection, which is inside a reflection, which is inside a reflection, which is inside a reflection. For example, we can see this teapot because the reflection ray bounces two times. The first is on this mirror, and the second is on this mirror. If we lower the glossy bounces value as we go down, less and less reflection bounces occur. By the time we reach one, we can only see direct reflections. Reflections that are inside other reflections will just be black. If we set this to zero, then no reflections will be visible, right? Next is the transmission bounces. In other words, the
number of times light bends or turns due to
refractive surfaces. If you have a refractive
surface such as a glass panel, when a light ray goes through it, Cycles does not count it as one transmission bounce, but two transmission bounces: one when it hits one side of the surface, and two when it leaves the other side of the surface. So if you have three glass panels stacked like this, you will need one, two, three, four, five, six transmission bounces. Just to prove this, if I set the transmission bounces to six, we can still see through them. But once we go lower, like five, four, and so on, fewer refractive panels can be seen through. By the time we reach zero, all refractive objects will just look like solid black blocks. Next is the volume bounces. Essentially, it defines how
many times light can bounce
object such as fog or smoke. By default, it is set to
zero because in most cases, fog or volumetric
objects already look nice without any
light scattering effects. To show you what I mean, I have already created
this cube object. I use a Principled Volume shader that is connected to the volume output. Let's go back to rendered view mode, and I will unhide an object with an emissive material, just so we can see the fog effect better. Okay? If you increase
the volume bounces value, the fog will become denser because more light is scattering around inside the volume. One scenario in which you can find the volume bounces useful is when you decide to use a Volume Scatter shader. This is a simpler version of the Principled Volume shader. Because it has fewer parameters, it can be rendered faster by Cycles compared to the Principled Volume shader. One important feature that is missing from this shader is the absorption color. And so if you use light blue for the fog color, for example, and you have the volume bounces set to zero, what you get in the rendering result is mostly the opposite color, which is orange, which is kind of strange. Well, these orange colors are not the fog color, but the shadow color of the fog. In this condition,
you do want to increase the volume bounces value. As you can see, as you go higher, more of the actual fog color shows up in the rendering. So again, if you use the Principled Volume shader, you may not need to increase this volume bounces value. But if you use the Volume Scatter shader, then you may want to increase the volume bounces value to get more realistic results, okay? The last parameter is the
transparent bounces. This is almost similar to the transmission bounces, but it works on shaders' alpha value rather than the transmission value. We have discussed the difference between alpha and transmission in the previous course. Essentially, unlike transmission, alpha does not generate refraction. The way we count transparent bounces is the same as how we count transmission bounces. So if we have three panels of transparent material like this, we do not count them as three bounces but six bounces: one, two, three, four, five, six. If you lower the transparent bounces value, after the value goes below six, we start to see some of the panels become solid or opaque, and when we reach zero, none of the transparent shaders are working.
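Here is a brief Python sketch of the bounce settings discussed in this lesson, using illustrative values rather than recommendations:

```python
import bpy

cycles = bpy.context.scene.cycles

cycles.max_bounces = 12             # Total: caps all the values below it
cycles.diffuse_bounces = 4          # light bounces on rough surfaces
cycles.glossy_bounces = 4           # visible reflection depth
cycles.transmission_bounces = 12    # two bounces per glass panel
cycles.volume_bounces = 0           # scattering inside volumetric objects
cycles.transparent_max_bounces = 8  # alpha transparency, not capped by Total
```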
16. 02-06 Clamping: In this video, we will cover the clamping
feature in Cycles. You can find the parameters inside the Light Paths category. You can see that there are two clamping parameters: direct light and indirect light. Essentially, these parameters will scale down higher intensity light values in your renderings. This can be useful to avoid fireflies and also to reduce burn-out or excessive white colors on the image. If you are not familiar with the term fireflies,
in computer graphics, fireflies are small
dots of noise that are very bright and very different from the other
noises surrounding them. Most of the time,
fireflies happen due to strong light hitting small
areas of reflective surfaces. So it is often caused by bounced lights or indirect lighting, and rarely do you get fireflies from direct lights. Please note that fireflies were mostly a problem in the past, when we didn't have denoisers. Since denoisers exist, I have almost never experienced fireflies, as they are intelligently removed by the denoiser. So nowadays, I just leave
these values to zero, which basically turns
off the feature. But because every
scene is different and has different challenges, there is no guarantee that your renderings will be free from fireflies, especially in the case where you don't want to use any denoisers. So let's see some charts
and render examples to give you a better understanding on how exactly this
clamping feature works. Imagine that this is the light intensity distribution
in your rendering. Without clamping, all
light intensities are possible to exist
in your rendering. We do this by setting the
clamping values to zero. Usually, fireflies happen
in these higher areas. By turning on the
clamping feature, we basically scale
down a portion of the highlights in the hope
of mitigating the fireflies, but at the cost of making the
rendering look a bit dull. Basically any value
larger than zero will turn on the
clamping feature. If you set the values quite low, such as five or ten, then the result will be
very dimmed or scaled down, as a large portion of the highlights will be gone from your rendering. If you set the values high, such as 50 or 100, for example, the result will be less
dim, because Cycles only scales down a small
portion of the highlights. But nonetheless, you
understand by now that clamping is not an
intelligent solution, as it will affect
all highlights, not just the fireflies. Let's see some rendering
result examples. What you see now is a
scene of a kitchen with a Japanese style or Wabisab
style to be precise. This is a design
project from one of my clients in California USA. I chose this scene because there are four strong
light sources from the cooking hood that shine on a reflective
induction cooktop, stainless steel pot,
and glass jars. Even the marble countertop
is also a bit reflective. Basically, it is a perfect
scenario for fireflies. Unfortunately, I do not have permission from my client to
share this file with you. But for the purpose of this lesson, I believe comparing the rendering results
should be enough. For the first rendering, I use ten for both direct
and indirect values. Notice how the lighting
looks very dull. In the second rendering,
I use 50 for both values. The rendering now
looks a bit more exciting as more
highlights are allowed. Next, I only use 50 for the indirect value and
zero for the direct value. The rendering now looks so
much better than before. Again, fireflies usually
happen in the indirect lights. So most of the time, you should turn off clamping
for the direct light. And finally, in the
last rendering, I set both values to zero, making the clamping feature
turned off completely. Now we get all these nice highlights on the curved reflective surfaces
without any reduction. Let's compare the four
renderings side by side. To recap: if you use a denoiser, you should always start by turning off the clamping feature completely. If you see some fireflies, try higher values first, such as 50, only on the indirect clamping value. If that still does not work, try lowering the values and also try using the direct clamping, and so on. These steps should resolve the fireflies problem in your rendering eventually.
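In Python, the recommended starting point described above could look like this minimal sketch:

```python
import bpy

cycles = bpy.context.scene.cycles

# Start with clamping off; the denoiser usually handles fireflies.
cycles.sample_clamp_direct = 0.0
cycles.sample_clamp_indirect = 0.0

# If fireflies remain, clamp only the indirect light first.
cycles.sample_clamp_indirect = 50.0
```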
17. 02-07 Fast GI Approximation: In this video, we will discuss the fast GI
approximation feature. You can find the parameters in the render tab inside the Light Paths category. For easy pronunciation, I will refer to this feature as FGIA, or just fast GI for short. By default, this feature is turned off. The main goal of FGIA is to speed up the rendering time by replacing or adding shading information to the final render using an AO, or ambient occlusion, technique. We learned about ambient occlusion in the previous class. Essentially, they are soft
shadows that you can see on surfaces when lights are coming from all
directions uniformly. There are three
important things that you need to know about FGIA. First, because it is
based on ambient occlusion, it does not care about the actual lighting condition of the scene. It does not care whether there is a sun object or not, or if there are light objects or not; it will just create shadings based on surface proximity. The second thing that you need to know is that it works on diffuse bounces. So this is the only maximum bounces value that will affect or correlate with the FGIA settings. The third thing that you need to know is that FGIA uses the average color of the environment to simulate the ambient occlusion. So if you have mostly blue color in your environment texture, like what I have here, then you should expect a bit of blue tinting in your rendering when you use FGIA. All right? If you turn on FGIA, you can see that there are
two methods available. Replace and add. Let's first discuss and
use the replace method. Essentially, the replace method will take over the shading calculation from the path tracer after a certain amount of diffuse bounces. For example, if you set this to four and then you set this to two, Cycles will calculate the light bounces on the diffuse surfaces using the path tracer up to two bounces only. After that, it will finish up the rendering using the ambient occlusion method. So it does not matter how high you set the max bounces value up here; Cycles will cut off the process based on the value you set down here. Please note that if you put zero here, it will turn off the fast GI effect. So you need at least one diffuse bounce from the path tracer before the ambient occlusion takes over. Let's see some rendering
results to better understand how this fast GI feature works. Let's set the diffuse maximum bounces to six and render the scene without the fast GI feature. This is the result that we get. You can see how the shadows and highlights look normal and realistic, as all of them are generated by the path tracer. This render took 1 minute and 10 seconds to complete. Let's name this render result as original for easy reference. Now we can try turning on the fast GI feature. Again, we will use the replace mode for now. We are only interested in the render setting, so we can ignore the viewport bounces value. Let's set this to one and then hit F12 to render. This is the result that we get. In this rendering, Cycles only calculates one diffuse bounce using the path tracer method and then finishes up with the AO method. It does look smooth and clean, but certainly looks very fake. As for the rendering
time, this one took 30 seconds to complete. So less than half of
the original rendering. Now, let's increase
the bounces value to two and render again. This is the result that we get. As you can see, it is slightly more realistic than before. This is because Cycles did two diffuse bounces before processing the ambient occlusion. But as we expected, the rendering time is a bit longer, which is currently at 49 seconds. Next, if you set the
bounces to three, this is the result that we get. Notice that after three
bounces and beyond, you will get good
enough results that are almost similar to
the original rendering. Of course, you can still spot differences if you compare the two renderings side by side. With only three bounces, the rendering time is
now at 58 seconds, about 12 seconds faster
than the original. For a single image, 12 seconds faster may not be a big deal. But if you are doing an
animation where you have to render hundreds or even thousands of frames, every fraction of a second that you can save per frame can add up significantly. Okay? So that is how
the bounces value works. Next is the AO factor. You can think of this value as the opacity level of the ambient occlusion effect. But please note that in Blender, AO or ambient occlusion is not applied to make images darker, but rather brighter. So instead of adding dark shadows on corners or crevices, Blender adds highlights on non-shadow areas. And so if you reduce the AO factor, the image will become darker. This is the result with the AO factor set at 0.5, and this is the result with the AO factor set at 0.1. Notice that the brightness level of this rendering is almost similar to the original rendering, minus all the small-detail color bleeding effects. With clever tweaking, you can
fake the rendering to make it look like the original but with a faster
rendering time. Just as an example, I set the render bounces to two for this rendering but set the AO factor to 0.3. It took 48 seconds to render this image, which is 22 seconds faster than the original. For most people, or in most cases, I believe this rendering quality is more than enough, especially if you add a bit of color correction in the post to make it a bit warm, just like the original image. All right. The AO distance value
is useful to tell Cycles how far it should try to detect surfaces. The higher the value, the more accurate the AO calculation will be, as it will detect and compare surfaces at a larger scale. For example, both of these renderings only use two bounces, but the left one uses 30 centimeters for the AO distance while the right one uses 100 meters. Now, if your scene
is quite dense, such as when you
have a lot of trees or other objects outside the
area you want to render, setting the AO distance value too high may increase
the rendering time. As a rule of thumb, I usually set the AO distance
value to about two to 4 meters larger than the maximum height or
length of the room. If the room's longest
distance is 10 meters, for example, then I set this value to about
12 meters, okay? Finally, let's discuss
the add method. What makes this method
different is that it does not take over or cut off the
diffuse bonds calculation. Instead, it will wait until all bond processes
are completed. Only then it will add the Ambien occlusion
shading to the rendering. That is why in this method, there is no amount of
bonss that we need to set. It will just use whatever
diffuse bonds we set up here. For example, if you want
to perform rendering that uses two diffuse bonses and
then use AO after that, you can set the diffuse max bones value to
two and then make sure you set the first GI mode
to add based on my tests. The AD method produces more accurate color bleeding
than the replace method. Just to prove this, left is rendered using
the replace method. And one is using the AD method. Both are said to only
have two Duffus boonss. As you can see, left one looks a bit pale in comparison
to the AD method. I'm not sure what
exactly is causing it, but I am guessing that because
the Ed method waits for the path tracer to finish its job before
taking any action. Therefore, the image still gets all the color bones quality
from the path racer. If you compare the
rendering time, we can see that
the ad method took a slightly longer time to render compared to
the replaced method. So there are always
some trade offs that you need to consider
when choosing these methods.
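For reference, the fast GI settings map to the following Python properties. This is a minimal sketch using the example values from this lesson:

```python
import bpy

cycles = bpy.context.scene.cycles

cycles.use_fast_gi = True
cycles.fast_gi_method = 'REPLACE'  # or 'ADD'
cycles.ao_bounces_render = 2       # path-traced diffuse bounces before AO takes over

world = bpy.context.scene.world
world.light_settings.ao_factor = 0.3  # opacity of the AO effect
world.light_settings.distance = 12.0  # AO distance in meters
```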
18. 02-08 Film and Performance: In this video, we will discuss the film and
performance categories in Cycles render settings. The first parameter
is the film exposure. Essentially, this controls
the brightness of the final render by simulating the sensitivity of
conventional camera films. The value ranges from 0 to 10, with
one as the default value. If you set this to zero, the resulting image will
be completely black. And if you set this to ten, the image will be very bright
with mostly overburned areas. Please note that in Cycles, this is not the only exposure
parameter that you can use. There are more
similar parameters in the color management category for controlling the overall brightness. Personally, I don't like using this parameter, as it affects image data directly. In other words, once you set
the value and hit render, you cannot change it anymore unless you redo the rendering. This is very different from the parameters that you can find in color
management category. We will discuss that category later in a different lesson. Next is the pixel filter. These parameters control the anti-aliasing effect applied by Blender to the rendering result. If you are not familiar with anti-aliasing, essentially, it is a method of blurring the image to make objects look smooth and not jagged or pixelated. Anti-aliasing is a very important technique in computer graphics because all objects that
we see on screen are represented by
a bunch of pixels. If you want to render
without any anti-aliasing, you can set the filter type to box. If the image resolution is high, it may be hard to see the aliasing effect, but if the image resolution is quite low or you zoom in close enough, you can see how the lines are jagged or pixelated. Again, this jagged effect is what we call aliasing. I know this is not a good example because currently some anti-aliasing effects still happen due to the denoiser. Sometimes you do want to have
aliasing in your rendering. For example, when you want to create pixel-style artwork. In such cases, you may also want to turn off the denoiser. Most of the time, you want to use anti-aliasing in your rendering. For this, you need to choose either the Gaussian type or the Blackman-Harris type. Please note that the Blackman-Harris algorithm is more recent and more advanced than the Gaussian algorithm. And so you always want to prefer Blackman-Harris over Gaussian. The width value here determines
the amount of blurring. Higher values mean softer edges, but at a high risk of losing more details
from the textures. Lower values mean sharper
or crispier edges and also sharper textures, but at the risk of getting more moire effects on the rendering. Moire effects are unintentional patterns that usually show up when you try to display images with repetitive lines or patterns. Moire effects may become even more noticeable in videos, especially if the resolution is quite low. Just for example, I rendered this image with a value of only 0.5 pixels. Notice this headboard area creates curvy patterns. You can see similar patterns on the sheer curtain and also on the blanket. Again, these patterns are what is called the moire effect. The default value for the
parameter is 1.5 pixels. This is the safe value to avoid moire. But personally, I like to start with one pixel so that the textures in my rendering look sharp. I only increase this value to 1.5 pixels if I see a noticeable moire effect in the rendering. The next important film
parameter is Transparent. Essentially, this will
remove all background or environmental colors
from the rendering by making them transparent. This is very useful in a case where you want to add your own background in Photoshop or Krita. Just make sure you save the image in RGBA mode and not just RGB, and you also need to use a file format that supports transparency, such as PNG or EXR. You will lose the transparency if you save the image in JPG format. We will discuss more about image rendering in another lesson. Now, if you have objects with transmissive materials in your rendering, such as glasses, bottles, or crystals,
and you want them to also become
transparent in render result, you need to turn this
transparent glass option on. And if you do turn this on, you also need to tweak this
roughness threshold value. Basically, only transmissive materials with a roughness value set below this value will become transparent. Larger than this value, they will be rendered as opaque pixels. As an example, if I render this scene with the film
transparent option turned off, the background will be filled with the environment texture. But if I turn on the transparent option, now the background is gone. This way, we can easily replace the background with an image using the compositor, or using external software such as Photoshop or Krita, for example. Now, if I set the roughness threshold value to 0.1, only this object becomes transparent, as the other materials have a roughness value larger than 0.1.
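Here is a minimal Python sketch of the film settings from this section; the 1.0 pixel filter width and the 0.1 threshold are the example values used above:

```python
import bpy

scene = bpy.context.scene
cycles = scene.cycles

cycles.pixel_filter_type = 'BLACKMAN_HARRIS'  # preferred over 'GAUSSIAN' or 'BOX'
cycles.filter_width = 1.0        # start sharp; raise to 1.5 if moire shows up

scene.render.film_transparent = True     # drop the background
cycles.film_transparent_glass = True     # let transmissive materials be transparent too
cycles.film_transparent_roughness = 0.1  # roughness threshold
```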
All right. Next, let's discuss the performance category. We will discuss compositing and the Compositor editor in a later lesson. But for now, if you have a fast GPU and you want the Compositor editor to perform faster, you may want to use the GPU instead of the CPU. Next, in the threads category, you can control how many CPU cores you want to utilize during rendering. Usually, you can just leave the setting on auto-detect to let Blender determine the best number. Next, in the memory category, you can specify whether Cycles renders
using tiles or not. Essentially, the tiling
option will force the rendering process not
to be executed all at once, but gradually using smaller
areas one at a time. The purpose of tiling is to prevent the rendering process from running out of memory. For example, let's say your graphics card only has 1 or 2 gigabytes of VRAM, and you need to render an image at 2048 by 2048 resolution because you have a lot of objects and textures in the scene. Every time you render, Blender crashes or displays a message saying the system is out of GPU memory. By turning on the tiling and then setting the size to 512, for example, instead of rendering the whole image at once, Cycles will render it one small portion at a time. After one area is done, it will process another area, and another area, and so on, until all areas of the image are rendered. Basically, the smaller the VRAM that your GPU has, the smaller the tile size you need to set to prevent Cycles from running out of memory. Please note that because Cycles needs time to initialize each tile, the more tiles you have, the more time you add to the overall render time. All right. Next is the persistent
data checkbox. Essentially, if this is on, Blender will reuse the data that is already in the memory for the next rendering. This is useful if you are using the same scene for multiple renderings, such as when you render from multiple angles or when you need to render animations. You may not see the difference in the first rendering, but at the second rendering and beyond, you will notice that the rendering time is shorter, as Blender skips the initialization processes and keeps using the geometry and texture data that already exists
in the memory. The last parameter is
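As a side note, the same Performance settings can be applied from Python. This is only a sketch, assuming a Blender 3.x/4.x Cycles build:

```python
import bpy

scene = bpy.context.scene
scene.render.threads_mode = 'AUTO'       # or 'FIXED' together with scene.render.threads
scene.cycles.use_auto_tile = True        # render the image in tiles to save memory
scene.cycles.tile_size = 512             # smaller tiles for GPUs with little VRAM
scene.render.use_persistent_data = True  # the Persistent Data checkbox
```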
The last parameter is the Viewport Pixel Size. This setting only takes effect if you use the rendered view mode in the 3D viewport. If you set this to 1X, Blender will render the viewport based on the actual size of the 3D viewport on the screen. If you set this to 2X, Blender will double the size of each pixel, or in other words, render the viewport at half of the viewport resolution, thus making the viewport render faster. For 4X, we halve the resolution again, and so on. The default value is Automatic. This means that it will follow the resolution
scale of the UI. We've discussed the
UI scale before. Just as a reminder, you can open the
preferences window and then open the interface tab, and here is the
resolution scale value.
19. 02-09 Color Spaces and Display Transform: In this lesson video and
several following videos, we will discuss the parameters inside the color
management category. But before we can use these
parameters correctly, we need to understand
the basic concepts of how color spaces and display transform work in Blender and in computer graphics in general. In the real world,
the range between dark and light is
very, very wide. The darkest value is when
there is no light at all. For example, if you are trapped inside a vacuum-tight box and drowned at the bottom of the ocean in the middle of a cloudy night, in this pitch-black condition, our eyes cannot see anything, as if we are blind. As for the brightest value, theoretically, it is
an infinite value, but in reality, for us
living on planet Earth, the brightest light that
we can see is the sun. There is no man made light that can beat the
brightness of the sun. In fact, if you stare at the sun too long, you may end up damaging your eyes or becoming blind at worst. So that is the lighting condition in the real world. In computer graphics, we call these light range definitions color spaces. All right. Now, if we compare the
real world color space to color spaces inside display
devices such as TVs, computer monitors, smartphone
screens, and so on, most display devices
use a color space called standard RGB, or sRGB for short. In the sRGB color space, each channel consists of only eight bits or eight slots of data. At the machine level, computers only know binary values. So each of these slots can be occupied by either zero or one. If we convert the binary values to the decimal system, the minimum number
will be at zero, and the maximum number
will be at 255. So again, for eight bit
per channel image data, each of the channels,
red, green, and blue can only have
a maximum value of 255. As you can imagine,
our display devices color space is too limited in comparison to the lighting
range in the real world. Because of this limitation, all technologies that try to capture real world
imagery need to compress the lighting information
so they can be viewed comfortably
on display devices. Without this compression, we can only see a small portion of the lighting conditions that happen in the real world. This light range compression process is what we call tone mapping, also known as view transform. As CG professionals
or digital artists, we need more flexibility
in our workflow. Working inside an eight bit per channel color space is too limiting and not
ideal for many cases. That is why higher bits per channel color
spaces were created. This is what we call HDR or high dynamic
range color spaces. Simply put, these color spaces use more bits per channel
than just eight bits, allowing us to work with
wider lighting information. The most common values are 16 bits per channel and
32 bits per channel. One important fact that
you need to know is that Cycles' rendering engine does not work in the sRGB color space, but in a high dynamic range color space with a maximum depth of 32 bits per channel. So in order to view the rendering result on the monitor screen, Blender needs to perform a view transform. This is what the color management
category is all about. Again, to recap, we use the
parameters in this category so we can see the rendering results nicely on display devices, or when we need to save them to sRGB image formats such as PNG or JPEG.
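To make the compression idea concrete, here is a small Python sketch of the standard sRGB transfer function, the curve used to encode a linear light value into a display value. Note that this is the generic sRGB formula, not Blender's exact view transform code:

```python
def linear_to_srgb(x: float) -> float:
    # Standard sRGB encode: linear light in, display value out (both 0..1)
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * (x ** (1.0 / 2.4)) - 0.055

# An 8-bit channel can only store 256 levels of the encoded value.
# A linear mid-gray of 0.5 ends up at roughly 188 out of 255:
print(round(linear_to_srgb(0.5) * 255))
```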
Before we continue to the next lesson, it is best to name things with the same names used by the Blender developers, as stated in the Blender documentation. For color spaces, the high dynamic range color space used by Cycles when rendering is called the linear color space. It is called linear
because light range can expand or contract depending on lighting complexity
of the scene. If, for example, you only have one light object in your scene
with a very low intensity, light range will
be quite narrow. But if you have
multiple light objects with different intensities, light range will become wider. The light range can
expand very wide until it reaches its peak at
32 bits per channel. Next, the target color space is called the image color space. It is called this way because Blender supports not only the sRGB color space, but also other display color spaces that may not be as common as sRGB. As for the tone mapping process, Blender calls it the view transform, but sometimes also calls
it display transform. Please remember these
terms as I will be referring to them quite often
in the upcoming lessons.
20. 02-10 Color management basics: We will continue discussing the color management category. We have discussed the
underlying concepts of color spaces and
display transform in the previous lesson. Now, we will cover the basic color management parameters. The first parameter is
the display device. Here, you can tell Blender what type of display
device you are using. Again, almost all
computer monitors or display devices on the market use the sRGB color space. So most likely, you don't
need to change the setting. But if you are using an
Apple product or computer, you may want to change the setting to Display P3. This is the standard color space used by most Apple devices. If somehow you prefer to connect your computer to a very old CRT television, then you may want to choose the Rec.1886 option. All right. Nowadays, there are already display devices that can output more than eight
bits per channel. These high end monitors are
what we call HDR displays. If you happen to use this kind of monitor, to really make use of its potential, you may want to use the Rec.2020 option. After that, you also need to go to the Display subcategory and turn on the High Dynamic Range option. Currently, I am just using a common or non-HDR monitor, which is why this option is disabled. I am also using a PC, not an Apple computer, so I just set the setting to sRGB. Blender has a video
editing editor called the Video Sequencer. By default, it uses the sRGB color space, which is enough for most cases. But if you want to edit videos
using the video sequencer, while your videos or images
are in HDR color spaces, you can also set the sequencer
to use HDR color spaces. As you can see, Blender supports many different color space standards. With wider color spaces,
color correction, cross fades, and
other operations in the sequencer can produce
different results. The next parameter is the
mode for the view transform, also known as tone mapping in other software. Remember that Cycles renders the scene in the linear HDR color space, independent of the target image color space. So this view transform is applied after the rendering, or on top of the rendering result, non-destructively. This is great because
we can see and change the effect without
having to redo the rendering. We can even see the effect while rendering is in progress. Just press F12 to render an image, and then open the rendering workspace. Notice that this
workspace also has the properties editor on
the right side by default. As you can see, you can
change the parameters in the color management category and see the result immediately in the image editor. Cycles provides different
modes for view transform. You can read the
documentation if you want to learn each of
them in more detail. For now, I will try to explain them as
briefly as possible. In most cases, you want
to use the AGX method, as this is currently the most advanced
view transform mode. It was designed to replace
its older version, which is called Filmic. So again, instead of Filmic, I suggest that you use AgX instead, okay? Now, if you wish to perform further color grading on
the rendering result inside video editing software such as DaVinci Resolve or Adobe Premiere Pro, then you may want to use the Filmic Log mode instead. Log stands for logarithmic. Long story short, this mode compresses the dynamic range of an image to try to capture as many shadows and highlights as possible, so that later, users have more flexibility in the color grading process. As you can see, the image looks very unsaturated and low in contrast. This is because it is not designed
for final output, but rather for
further refinement. If you remember the
Predator movies, the Predator has different viewing modes, one of which is the heat map. Well, you can create that type of visual in Blender using the False Color view transform mode. Just be aware that the denoiser usually screws up the result. So I suggest that you turn off the denoiser if you do want to output the heat map from the rendering. The Khronos PBR Neutral mode will try to preserve
the colors of the textures so that they
match the original colors. Mostly, you only need this view transform when the original texture color is important, such as when doing
product renderings. If the customers need to know the color of the product
or a certain part of it, they can pick the color directly from the product renderings. The last one is the Standard mode. Although it is named Standard, this is not the default mode, and you should not use it in common rendering scenarios. Essentially, this mode does not do any tone mapping besides applying the display device setting up here. This is useful when you are rendering non-
photorealistic results. For example, if you use
the Workbench engine and you want to output the exact colors you set in the color pickers. All right. After you pick the view transform mode, you can then choose the Look presets. The names of these presets should explain themselves. This is the high contrast preset, this is the medium preset, and this is the low
contrast preset. Please note that
different view transforms may have a different set of Look presets. For example, you can find Punchy or Greyscale presets in AgX, but you won't find these presets in the Filmic Log mode. The last two parameters are
the exposure and Gamma. We discussed exposure
before in the Film category. Basically, these parameters control the brightness of the rendering, just like the exposure
in the Film category. What makes these two parameters way better is that they work on the view transform level, not directly on the rendering result data. So you can perform a render and then, after that, adjust these two parameters without having to re-render the image. Most of the time, you only need to adjust the exposure level without the Gamma. The default exposure value is zero. The lower you set the value, the darker the image will be, while the higher you set the value, the brighter the image will be. The Gamma parameter below it is a bit different. Instead of zero, it uses one as the default value. Most of the time, you do not need to tweak this parameter. Unlike the exposure parameter, which moves the whole range of brightness values of the image, the Gamma parameter only moves the midtones. Notice that, as I slide
this value left and right, the bottom value or
the darkest point and the top value or the brightest
point stay the same. Only the colors between
the two are changing. The best way to use these two parameters
is that you want to focus on the exposure first until you find
the best spot possible. Only then, if needed, may you add a bit of adjustment using the Gamma parameter.
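Everything we just did in the color management category can also be set from Python. This is a sketch assuming Blender 4.x, where the exact enum strings for the view transform and the looks can differ between versions:

```python
import bpy

vs = bpy.context.scene.view_settings
vs.view_transform = 'AgX'               # or 'Filmic Log', 'Standard', ...
vs.look = 'AgX - Medium High Contrast'  # look presets depend on the view transform
vs.exposure = 0.5                       # non-destructive, applied at the view transform level
vs.gamma = 1.0                          # default value; only moves the midtones
```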
21. 02-11 Curves and White Balance: In this video, we will discuss the last two parameters in the color management category, which are Curves and White Balance. Just like the exposure and the gamma parameters, you can use the Curves feature to control the brightness of the rendering result. What makes it different is
that you have final control over which level of brightness you want to
increase or decrease. By default, the Curves feature is turned off, so you need to turn it
on to be able to use it. This diagonal line represents the brightness
level of the image. This is the lowest
value or black, and this is the brightest
value or white. Currently, we are
working in color mode. So this curve affects all
three channels: R, G, and B. You can control only
the red channel, the green channel, or the
blue channel if you want to. The way they work is
basically the same. So let's just focus
on color mode. The way we use this curve
is that we need to create control nodes by clicking on a curve line and then
move it up or down. Let's just call these
control node points for easier pronunciation. If you move this point up, then all of the midtones
will become brighter. If you move it down, then all of the midtones will become darker. Now, if you do this, this is basically the same as tweaking the gamma value up here. So if this is the only thing that you want to achieve, then using the gamma parameter would be enough. The true power of the
curves lies in having custom brightness distribution
utilizing multiple points. Say you want the
dark colors near black to be a bit
brighter for this, you can create a point near the black node and then
drag it up a little. If you want this area
to be darker instead, you can just drag it down. Another example, you want the areas near white to
be slightly brighter. For this, you can create a point near the white
node and then drag it up. If you drag it down,
it will become darker. You can create as many points as you like by
clicking on a curve. If you click on an
existing point, you are selecting that point. You can change the type
of the selected point by choosing one of these options. To remove a point, you need to select it first and then click on this X button. If you change your mind and want to reset the whole thing, you can simply click on the Reset button. So that is basically how
you can use the Curves. Next is the White Balance feature. Just like the Curves, this feature is also turned off by default, so you need to turn it on to be able to use it. Essentially, white balance
is a visual processing that you can use to neutralize the colors from being
affected by the lighting. Just to give you
a bit of context, if you go outside in
the early morning, you may not notice that
everything looks a bit bluish. As soon as you take a picture using a camera or smartphone, you realize that the
colors are indeed bluish or are mostly cool
in terms of temperature. Well, that is, if you turn off the white balance
filter in the camera. The same thing happens when you are inside a room where most of the
lamps are in warm colors. If you take a picture
in this condition, the result will be yellowish or orangish. Our eyes, or our brain to be precise, can adapt to these lighting changes quite well. So up to a certain level, we can still tell
that certain surfaces are actually white or gray, although they look
bluish or yellowish. When producing photos
or computer renderings, sometimes we want to help
the viewers overcome the lighting conditions and see the colors in their
natural state. This is what white
balance is all about. Basically, we are shifting the neutral color reference away from the standard 6,500 Kelvin color temperature value. You can do this manually by dragging or inputting the temperature value here. Please note that because we are not adding color temperature to the image, but specifying the white point, this value works in reverse. Values lower than 6,500 will make the image cooler, while values larger than 6,500 will make
the image warmer. Additionally, you can tint the white color reference by shifting the color
using the slider. Personally, I always prefer the easier and automatic way
to control white balance, which is by using the
eyedropper tool. With this tool active, all you need to do is select the surface area
in rendering that you consider to have a
neutral white color. In this case, we can use
the central ceiling area. As you can see, Blender automatically finds the best values for us, and so now the colors in the
image look more neutral. This is before,
and this is after. Using the eyedropper tool, be careful not to choose the
wrong white color reference. If you pick a warm color as the reference, for example, the overall image tone will become bluish or cool. Vice versa, if you select the blue sky outside, that will make the overall tone warm or orangish. So again, make sure the color that you pick is white.
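For completeness, here is how the same two features look from Python. The curve mapping property is long-standing, but the white balance properties only exist in recent Blender versions, so treat these names as assumptions and check your own build:

```python
import bpy

vs = bpy.context.scene.view_settings
vs.use_curve_mapping = True            # enable the Curves feature

# White balance (recent Blender 4.x builds; names may differ in yours):
vs.use_white_balance = True
vs.white_balance_temperature = 6000.0  # below 6,500 = cooler image, above = warmer
vs.white_balance_tint = 10.0           # tint the white reference
```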
22. 02-12 Image editor and Render Result: In this lesson video, we will discuss the image
editor and the render result. Up to this point, you should already know how to render an image in Blender. Just to recap what we have learned so far: to render an image, first, you need an active camera. You do not need to set the 3D viewport to the camera view, but this is certainly
helpful to make sure the view is correct
before we render it. For this, you can
press zero on the numpad. Next, in the Output tab, you can set the size of the
image you want to render. Then in Render tab, you can tweak the render
settings as needed. Finally, you can press F 12 to start the
rendering process. When rendering, Blender
opens up a floating window where you can see
the image going through different
rendering processes. You do not want to close this window as that will
cancel the rendering. If you want to go back to
Blender's main window, you can minimize
this window instead. The image that is
being rendered is also available in
rendering workspace. If you do want to cancel the rendering that is
still in progress, besides closing the
floating window, you can press this x
button in the status bar of the main window, or simply
press the escape key. If you wait until
rendering is completed, you can safely close
this floating window. Again, the rendering result will still be available in the rendering workspace. Notice that the largest UI area in the rendering workspace is an editor called the Image editor. As with any other editor, you can access it from any workspace by clicking on the top-left corner icon of any UI area and then choosing Image Editor. Notice that the shortcut for this editor is Shift F10. Let me switch back to the 3D viewport. As the name suggests, this editor is used
to view images. All images that are active in the scene, such as PBR textures, environment textures, and rendering results, can be viewed using this editor. Besides accessing active
images in the scene, you can also create
a new blank image and open an image that
exists on your computer. But for now, we won't be
discussing these two features. To open an active image, simply click on
this dropdown list. You can scroll the mouse to find the image that
you want to view. If you click on it, Blender will open it in the image editor. You can zoom and pan the editor like any other editor in Blender. Now, if your scene contains
hundreds of images, scrolling and trying
to find the image manually like this would
be too time consuming. Notice that, when you access the drop down list, by default, the text cursor is already inside the search
field at the bottom. So you can quickly type in the filename or just
part of the filename. For example, if I type in 'metal', that will filter the list to only show texture files containing the word metal. You can also type in the file extension, for example, 'EXR'. This will filter the list to only display the EXR files. I am sure you get the idea. Now, from all of these images, there is one image that is unique, called the Render Result. This one is unique because it is not actually an image file, but rather a special container for images rendered by Blender. By default, the render result is opened when you open the rendering workspace. But in case you
open other images in this editor to bring
the render result back, simply type in render
and then press Enter. Another thing that is
unique about render result is that it has eight
slots of images. These slots are
useful when you need to compare one rendering
result with another. Please note that when you perform image rendering, the result will be placed inside the active slot you set in the render result. Just to prove this, let's say we activate slot number two. To produce a different result from the previous render, let's make a new material for the wooden treads and make the color light blue or indigo. Okay. Now we can press F12 to perform the image rendering. Wait until it is done, and here is the result. Notice that Blender uses the second slot for this rendering, not the first one, because it was the one we activated before rendering. So again, this is the
first rendering we did earlier stored
in slot number one, and this is the second version
stored in slot number two. Please remember that
these rendering slots are stored in the system memory, not on your storage devices. In other words, if you do not save them and close the Blender file, they will be gone forever. It is not enough to use the Save command in the File menu, as that will only save the 3D scene, or the blend file. To save render results as image files on your computer, first, you need to select the slot you want to save, and then open the Image menu and then choose Save As. In the bottom field, you can set the name for the file, and on the right side, you can set the settings for
the file format. By default, these settings follow the settings you
have in the output panel. You are free to
change the settings for the image that you currently want to save. But if you have a lot of images that you want to save
using the same settings, you may want to
change the settings first in the output panel. This way, you don't have to keep changing the
settings again and again for every image you want to save. Because the video is already quite long, we will discuss the file format settings more in depth in the next video.
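By the way, the render result slots can also be saved from a script. A minimal sketch, with a hypothetical output path:

```python
import bpy

result = bpy.data.images.get("Render Result")
if result is not None:
    # save_render() uses the file format settings from the Output panel;
    # "//" makes the path relative to the blend file
    result.save_render(filepath="//renders/test_render.png")
```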
23. 02-13 Image file formats: In this lesson video, we are going to discuss different image file
formats that you may want to use to save
your rendering results. I'll be using the Blender file from the previous lesson. If you are in the image editor and open the Image menu, then choose Save As, you can find the settings for the image file format
at the right side. You can also access
the settings in the output tab in the
properties editor. If you click here, you
can see that there are many image file formats
supported by Blender. You can read online if you want to dig deeper into each of them. In this video, I will only cover the four most popular
image file formats, especially amongst CG artists. The first one is JPG, also known as JPEG. JPEG is the oldest file format compared to the
other three formats. It only supports eight
bits per channel. That is why you won't
find any options for the bit depth when
saving in JPEG format. When storing image data, JPEG compresses the file size by sacrificing the
image quality. These types of
compression methods are called lossy compression. In Blender, you can use the quality slider below to control the compression. Again, higher quality means lower compression and
larger file size. Lower quality means
higher compression and a smaller file size. Please note that JPEG does not support alpha transparency. That is why there is no RGBA option in the Color category. You can only find BW for grayscale colors and RGB for colors. The second most
popular format is PNG or sometimes
pronounced as 'ping'. PNG has many variants. You can use the common
eight bits per channel, or you can also use
16 bits per channel. So, yes, you can save
more color information from Cycles' rendering results using 16 bits per channel. In terms of data compression, PNG uses a lossless compression method, meaning that it always preserves the original image quality. The compression slider here only determines the file size
against the processing time. This does not
affect the quality, as the quality is
always at 100%. The higher the compression, the smaller the file
size is at the cost of longer time needed by blender or other software to write
and read the file. Vice versa, the lower
the compression, the larger the file size is, but software will be able to write and read the file faster. In terms of transparency, PNG supports alpha transparency. That is why you can find the RGBA option here. As a reminder, RGBA stands for red, green, blue, and alpha. The RGB channels
control the color. While the Alpha channel
controls the transparency. The third most popular image file format is OpenEXR, or just EXR for short. Unlike the previous two formats, this file format is designed to store high dynamic
range images. You can save the
image as 16 bits per channel or 32
bits per channel. With 32 bits per channel, you are basically able to store all the rendering result data produced by Cycles. In terms of compression, EXR is not constrained
to only one algorithm, but it is compatible with a wide range of
compression algorithms. Just be aware that
the ones that have the word 'lossy' mean that they will degrade the image quality, just like the JPEG compression algorithm. So this is something that you need to be aware of. Lastly, for transparency, as you can see, it can also contain alpha transparency. The last image format
we want to cover is WebP. WebP is the new kid on the block. The biggest advantage of WebP is its speed. It is relatively smaller and faster than both JPEG and PNG, making it an ideal image format for web pages. In terms of compression, it supports both lossless and lossy methods. If you set the quality to 100%, it will use the lossless
compression method, but if you set the
quality lower than 100%, it will use the lossy compression method. WebP also supports alpha transparency. However, WebP is not designed to store high dynamic range images, so it only supports eight bits per channel,
just like JPEG. The biggest disadvantage
of using WebP is its compatibility because the
file format is fairly new. There are still many programs that do not support it. But this condition is rapidly changing as
I'm making this video. Hopefully, by the time
you watch this video, all graphic software
in the world already supports the
WebP file format. Now, the question is, what is the basic strategy for using these
different file formats? Well, if you want to save the rendering result for backup purposes or for further editing in the sRGB color space, you should use the PNG file format. But if you want to further edit the rendering result in an HDR color space, then you should use the EXR format. If you need to
publish the image to the web or for your
client to preview, then you can use JPEG, as this is currently the most
widely used file format. In the near future, expect to use the WebP file format more often, as this might be the format to replace JPEG.
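The file format settings we just covered map directly to Python properties. A short sketch, assuming a recent Blender version:

```python
import bpy

settings = bpy.context.scene.render.image_settings
settings.file_format = 'PNG'   # also 'JPEG', 'OPEN_EXR', 'WEBP', ...
settings.color_mode = 'RGBA'   # 'BW', 'RGB', or 'RGBA' where supported
settings.color_depth = '16'    # PNG supports '8' and '16' bits per channel

# For an HDR workflow you could instead use, for example:
# settings.file_format = 'OPEN_EXR'
# settings.color_depth = '32'
# settings.exr_codec = 'ZIP'   # lossless; the DWAA/DWAB codecs are lossy
```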
24. 03-01 Compositor basics: In this lesson video, we are going to discuss
how to use the compositor. Essentially, the compositor
is a node based editor that we can use to perform post processing on
rendering results. So yes, in order to
use the compositor, you need to perform
render first. Otherwise, the compositor
has nothing to process. After you have a rendering, you can open the compositor just like any editor in Blender. You can switch a UI area to become a compositor by clicking on the top left corner icon and then choosing Compositor, or you can just open the workspace called Compositing. As you can see, the main editor of this workspace is the compositor. We won't be discussing animation, so we can minimize the dope sheet and the timeline areas at the bottom. Now, if you don't see any
node in the compositor, make sure that the Use Nodes option up here is turned on, and then you can press the Home key on your keyboard to zoom extend the default nodes. Regardless of the version of Blender you are using, you should be able to see at least two nodes: a Render Layers node and a Composite node. Because the compositor is
basically a node editor, just like the shader editor
or the geometry node editor, it works from left to right. In other words, the input
starts from the left side, then the processing, and then the output ends on the right side. The Render Layers node here is the representation of the render result, or the images generated by Cycles or the rendering engine, while the Composite node holds the compositing result. Currently, both display the same image, as there is no process between the two nodes. The image data just goes straight from the Render Layers node to the Composite node. To see anything different
in a composite node, we need to add some processing
nodes in the middle. Just for example, press Shift A and then choose Color and then Color Ramp. Place this in the middle. Blender will automatically chain the connectors or plug the slots for us. If you remember the
second course where we discussed material
and UV mapping, we used a color ramp node before inside the shader editor
in one of our projects. Well, the color ramp node in the compositor
works the same. Basically, this node will map the darkest color to
the color on the left, which is black and map the brightest color to
the color on the right, which is currently white. Anything in between will
just be gray scale colors. Now, the compositor
should display a different image than the
original rendering result. But how can we see the
compositing result then? Well, Blender provides many different ways to do this. If you go back to the rendering workspace, and stay in the image editor, that is, if you have the render result active, you can see a dropdown list. The View Layer option here is actually the Render Layers node inside the compositor, while the Composite option here is actually the Composite node in the compositor. That is why it is in grayscale, due to the color ramp processing. Just note that sometimes
longer to update. So don't be surprised if you make changes
in the compositor, but it does not do anything
in the rendering workspace. This is because Blender does not sync data between workspaces in real time. Again, it will update eventually, just not instantly. For now, if you want more instant feedback
to preview the changes, there are at least two
ways to go about it. The first method is to use the composite node together
with the image editor. Currently, it can only work if both editors are in
the same workspace. The second method is by using a different node called
the viewer node. With this special node, we can use the backdrop feature while inside the compositor. Let's discuss the first method, and then after that,
the second method. To keep using the composite
node and image editor, you need to split this area and then activate the image
editor in one of them. Make sure you have the render result active, and you also need to set the mode from View Layer to Composite. Now, any changes you
make in the compositor, will reflect instantly on the image editor as they are
both in the same workspace. Just for example, say we change the left
color stop to green, and the other color stop to yellow. As you can see, the changes happen
the changes happen immediately in the image
editor. All right. Let me minimize
this area for now. The second method to view the compositing result is
to use the Viewer node. To use this node, you can press Shift A, Output, and then choose Viewer. You need to plug in a node to be able to view it. The connection can be the same as with the Composite node. If you have the Backdrop option turned on, you can see the composite result directly in the background
of the compositor editor. So, again, this backdrop feature only works if you have
a viewer node active. Just to prove this, if
I disconnect the slot, the backdrop now only
shows a black color. Let's delete the
viewer node for now. A faster way to create the
viewer node, or to move it to other nodes to preview them, is by using the Shift Control click method. So if you hold Shift and Control and then click on a Color Ramp node, Blender will create a viewer node that connects to that Color Ramp node. But if you hold Shift Control and click on the Render Layers node, the viewer node will move and connect to
your original image, not the compositing
result image. Essentially, with this Shift
Control and click method, you can quickly review any
node in the compositor. If you have complex
compositing nodes, you may find it a
bit hard to see the backdrop as it is being
blocked by the nodes. Performing the usual
navigation methods such as middle mouse drag and scrolling only affects the
nodes, not the backdrop. To navigate the backdrop, the shortcuts are
a bit different. To zoom in and out, you can use the V key on your keyboard: V for zooming out, and Alt V for zooming in. As for panning the backdrop, you can hold the Alt key and then drag using the middle mouse button. Another way to navigate
the backdrop is by clicking on the viewer
node to make it active. While in this condition, you can move your mouse
cursor on the corner or the border of the backdrop image and
then drag it to resize it. You can also move the
image by dragging the middle point that
looks like an X symbol. Okay. Personally,
I prefer to use the Image Editor to preview both the composite
or viewer nodes. So let me first turn off the backdrop and let's bring
back the image editor. Currently, the image editor
shows the composite node, not the viewer node. To make it show the viewer node, simply click on the
top drop down list, type in viewer, and
then press Enter. Now what we are seeing in image editor is the viewer node, not the composite node. Up to this point, you
may be wondering. So what is the
difference between the composite node and
the viewer node then? They both seem to
do the same thing. Well, for image rendering, both nodes are equally capable. What I mean is that you can also save the image while
in the viewer node mode. Just open the Image menu and use the Save As command, the same as you would save the image in other modes. The real benefit of
using the composite node is when you need to save
multiple images such as hundreds or even
thousands of images during animation rendering. For each frame rendered, the image data will go through all the compositing processes until it reaches the Composite node. After that, Blender evaluates the file format settings you set in the Output tab and then saves the file in the folder you specify here. We won't be discussing animation or animation
rendering in this course, but at least you now know that the Viewer node's main usage is for quickly previewing
nodes in the compositor, while the Composite node's main usage is for saving the compositing result to multiple images
automatically.
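If you are curious, the node setup from this lesson can be rebuilt entirely from Python. A minimal sketch, assuming the default Render Layers and Composite nodes still exist in the tree:

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True                   # the Use Nodes checkbox
tree = scene.node_tree

layers = tree.nodes["Render Layers"]
composite = tree.nodes["Composite"]

ramp = tree.nodes.new("CompositorNodeValToRGB")   # the Color Ramp node
viewer = tree.nodes.new("CompositorNodeViewer")   # the Viewer node

tree.links.new(layers.outputs["Image"], ramp.inputs["Fac"])
tree.links.new(ramp.outputs["Image"], composite.inputs["Image"])
tree.links.new(ramp.outputs["Image"], viewer.inputs["Image"])
```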
25. 03-02 Compositing with nodes: In this lesson video, we will discuss how to create more complex compositing nodes inside the compositor. For now, we will use
only the viewer node. So we can delete
the Composite node. Please note that you can only delete the Composite node if you have already performed rendering, or already have the rendering result. If you delete the Composite node and then try to render again, that is, by pressing F12 or by clicking on this icon, you will get this error message that says there is no render output node in the scene. The reason for this error is that Blender tries to composite the render result, but cannot determine where the end, or the output, of it is. If you do want to perform a render only for the
render layers node, you can open the output tab, scroll all the way down to
the post processing category. Here, you'll see the
compositing option. If you turn this off, you basically are telling Blender that you only want to render and store the result in the Render Layers node. Now you can press
F 12 or click on a small icon to perform
an image render. Again, to recap, if you
want to render an image, you need the composite node. If you don't have or don't want to create a composite node, you need to turn off
the compositing option in the output tab, all right. The first node we want to discuss is the
color balance node. To remove a node without
breaking the connection, you can select it and then press Control X. So it is basically the same as the dissolve shortcut in mesh modeling. Let's create a Color Balance node by pressing Shift A, then typing in 'color balance', and then Enter. Place this node in the middle of the connection. Now, if your Color Balance node shows circular color pickers, that is because you are still using the default setting. If you prefer to use the square type as I have here, simply head over to the Preferences window, and then in the Interface tab, select the Square (SV + H) option. Okay? So how exactly does
this Color Balance node work? Well, the black color picker, or the one called Lift, is for controlling the shadow colors. The center one, or the Gamma color picker, is for controlling the midtone colors. The white color picker, or the one called Gain, is for controlling the highlight colors. If the color is set to white,
without any modifications. Let's say we want to make the dark colors to be
a bit more bluish, so we need to tweak
the Lift color picker. The way we use this is
we should start with only the colors on the top area because if you move
the picker down, the image will become darker. Next, you can find a
hue that you like. If you find the color picker is too small, you can just zoom in, or you can also click on a color box to open a
bigger color picker. Feel free to play around with these colors and see the difference in the render result. For example, we can make
the mid tone colors a bit green and make the highlight
colors a bit orange. And I think I want to make the shadow color to
be a bit purple. Finally, if you think that
the effect is too strong, you can reduce it globally using the factor
slider at the bottom. I'll go with 0.5 for now. The next common effect
that people often use is adding glow or bloom
effects to the rendering. For this, you can
use the glare node. You can press Shift A, and then type glare
and then Enter. Personally, I like the bloom effect to work on the original image rather than the color-graded image. So I place this before
the color balance node. You can try putting this after the color balance
if you want to. By default, the node is in Streaks mode, which explains why we have these strange streaks of light in the highlight areas. What we want now is the bloom effect. Essentially, the bloom effect will add a glow to bright colors, or materials or pixels that are bright. You can play around with
strength, saturation, and size. Be careful not to
overdo the effect, as it will make the
scene look foggy. I think I already
like how it looks, so I just leave most of these settings alone except for the tint value. Let's make it a bit yellowish. All right. So that is basically how you add a bloom effect using the compositor. As a reminder, you can hold Shift and Control and then
click on a node to preview it. This is the original. This
is after the bloom effect, and this is after the bloom
and color balance effects. After comparing all the nodes, I now think the result is a bit too dark. To brighten up the image, there are many different nodes that you can use in the compositor. If you press Shift A, Color, and then Adjust, here you can find common color adjustment effects such as Brightness/Contrast, Color Balance, which we already used, Exposure, Gamma, HSV, Curves, and so on. Feel free to try all of these nodes. For now, because the render result data is actually in an HDR color space, we can use the Exposure node for simple brightness
adjustment. Let's put this after
the color balance node, and let's increase
the value slightly. I think 0.5 is enough, but now the bloom effect
has become too strong. Let's reduce the strength
value to just 0.5. We can compare it again
with the original image. I think we already have
a nice looking result. The last effect we want to add is the vignette effect. If you don't know what a vignette is, it is the darkening effect that happens on the borders or corners of an image. Vignette is a French word, which explains the pronunciation. In the old days, photos usually had vignette effects due to the limitations of lens or camera technology. So back then, it was considered to be a problem. Nowadays, with current camera technology, we rarely see vignette effects on photos, but sometimes we intentionally add vignette effects for artistic purposes, such as when we need to draw people's attention toward the center of the image or other areas of the image. The thing about Blender is that it does not have a simple node that can create a vignette effect, at least not in the current version that I am using. However, we can still create the effect by combining
multiple nodes. First, we need a
node that can create black colors on corner
or border areas. For this, you can either use the box mask or
ellipse mask nodes. I'll use Ellipse mask for now. To see what this node
actually generates, we can hold Shift Control
and click on the node. As you can see, it creates a white ellipse shape on
a black background. Let's change the width and
the height value to 0.99, so the shape almost fills
up the entire image. Next, we need to
blur out the shape. For this, we can simply
use the blur node. Let's set the X and Y blur
values to 300 pixels. I believe this is
enough for our needs. Now, to merge or superimpose this ellipse image on top of the main image, we can use the Mix node. Press Shift A and then type 'mix'. What we want to use now is the Mix Color node. The way this node works is that the original image that we plug into the first slot will be covered with the second
image in the second slot. So we need to plug
the main image into the first slot and then the ellipse image
into the second slot. Don't forget to move the viewer node to the Mix node so we can preview the result. Currently, the second image just blocks the image below it. To make the white color transparent and the black color opaque, we need to change the blending mode to Multiply. Now, we can see the vignette effect
on our rendering result. If you think that the
effect is too strong, feel free to adjust its strength
using the factor slider. Let's make this 0.5
or perhaps 0.6. Alright, I think this looks better. Okay, guys. So that is basically how we can compose nodes inside the compositor to add non-destructive post-processing effects to our renderings.
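For reference, here is the vignette part of this lesson expressed as a Python sketch. The node names follow the classic API identifiers; in newer Blender versions the Mix node looks different in the UI, but the MixRGB identifier still works:

```python
import bpy

tree = bpy.context.scene.node_tree
nodes, links = tree.nodes, tree.links

mask = nodes.new("CompositorNodeEllipseMask")
mask.width = 0.99                  # almost fill the frame
mask.height = 0.99

blur = nodes.new("CompositorNodeBlur")
blur.size_x = 300                  # soften the ellipse edge
blur.size_y = 300

mix = nodes.new("CompositorNodeMixRGB")
mix.blend_type = 'MULTIPLY'        # white = unchanged, black = darkening
mix.inputs["Fac"].default_value = 0.5

layers = nodes["Render Layers"]
links.new(mask.outputs["Mask"], blur.inputs["Image"])
links.new(layers.outputs["Image"], mix.inputs[1])  # base image
links.new(blur.outputs["Image"], mix.inputs[2])    # blurred ellipse on top
```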
26. 03-03 Render passes and Cryptomatte: In this video, we will
discuss the render passes, and after that, the Cryptomatte feature. When you render using Cycles, you may be accustomed to
expecting the color or the RGB channels and sometimes also the transparency
or the alpha channel. But Cycles is actually able to provide more rendering data than just RGB and alpha. If you go to the properties editor and then open the View Layer tab, which is the one below the Output tab, what you see here are all
the different types of data that can be generated
by Cycles during rendering. If you zoom in closely to the Render Layers node in the compositor, notice that the slots available in the node are actually controlled by the checkboxes you set in the View Layer tab. You can check the Blender
documentation if you are curious to know each
of these render passes. For now, I will cover only
some of them briefly. The Combined option is the one responsible for generating the RGB and the alpha channels. So you may always want to have this option turned on. For now, let's turn on the Z option, the diffuse color, glossy direct, emission, ambient occlusion, Cryptomatte object, and Cryptomatte material. After that, we need to render the image again by pressing F12 or by clicking this small icon at the bottom of the Render Layers node. After the rendering is done, you can open the rendering
workspace or the image editor. Make sure you have the
render result active. Otherwise, you won't be able
to see the render passes. You can access the render passes via the third dropdown list at the top. Combined means you are looking at the final RGB or alpha result. If you choose Depth, this is the Z-depth data, or the Z option you activated in the View Layer tab. Essentially, it assigns a value to each rendered pixel based on how far it is from the camera. Black is the closest to the camera, and white is the furthest from the camera. This can be useful for depth-related effects such as lens blur or creating fog. The diffuse color pass shows the actual color or textures without any shading. The glossy direct pass shows the reflective colors from the first rays, or before they bounce. This is the emission color, which basically shows all the emissive materials, and this is the AO or the ambient occlusion pass. We've discussed AO before, so I'm sure you already
know what this is. The last ones are
the Cryptomatte data. The object has three passes, zero, one, and two, and the material has three passes also, zero, one, and two. At a glance, these passes look broken. They are not. It's just that the way we use the Cryptomatte data is not to view them like this using the image editor. In short, the object option provides data that we can use to select pixels based on the objects, while the material option lets us select pixels based on the materials. To give you a better context, let's first discuss what a matte is in computer graphics, and then what Cryptomatte is. The term matte in computer graphics refers to images that we use to select part of other images. Usually, matte images are single channel, so they can be visualized as black and white or grayscale colors, or as an alpha channel. In the old days, we created matte images manually. For example, if I want to add a different background
image to the window, I can apply an emissive
white material on a window glass and then make
all other objects black. Yes, you can easily crop the window area in Photoshop. But imagine a scenario
where we have complex objects such
as plant leaves, blocking the window area, and you are also
creating an animation. You do not want to trace
the selection manually for hundreds or even
thousands of images. By generating the matte image or images, we can easily extract the window area to make it transparent and then place a different background behind it. Okay? So that is basically what matte images are. Cryptomatte is the next evolution of traditional matte image creation. It was an open source project first developed by Jonah Friedman and Andy Jones back around 2016. Essentially, unlike traditional matte creation, Cryptomatte automatically generates matte areas for all objects and all materials in the scene. The word crypto is
used because it uses cryptographic
techniques for indexing and retrieving pixel
areas behind the scenes. But no, you do not need to know complex math to use Cryptomatte. It is as easy as picking pixels using a color picker. Let's see an example. Imagine that we want to add some lens flare or streak effects, but only on the spotlight objects. First, we need to create the Cryptomatte node. Then you can plug the main image into
the image input slot. Please note that the Cryptomatte node doesn't actually need this image input to work. It directly accesses the Cryptomatte data from the Render Layers, even if you don't plug anything into the input. The reason why we input the image is so that we don't need to use a plain white color for the matte color. It will use whatever color appears in the image as the matte color. Next, you want to make sure
you set this to render, and the active scene
name we have up here is the same as the one
displayed in this field. You do not want to use the
Image option unless you are using an external EXR file that contains Cryptomatte data. If you want to use the render result, you should always set this to Render. Next, we need to specify whether we want to use the objects or the materials as our basis for the matte area selection. Notice that although these four spotlight
objects are instances, they are still registered
as different objects. So if we use the object mode, we need to pick each
of the four objects. In this case, it will be easier if we use
the material mode. Next, we need to
pick a pixel that represents the material
we want to select. For this, you can activate the backdrop and then pick the pixel from the
backdrop image, or you can also just use the image editor. First, click on the plus icon and then click on the emissive area in the
middle of these spotlights. Any of them is fine as they are all using
the same material. If you do this correctly, the name of the material will show up or be listed
in this field. If you hold Shift and Control and then click on this node, you can see that we have successfully created the matte image. Now, if you want to preview different output slots using the Viewer node, you don't need to manually drag the connector. You can simply hold Shift and Control and keep clicking on the node to cycle through its outputs. As you can see, the Matte slot outputs a black and white image without transparency. The Pick slot outputs colors to easily visualize the areas that we want to select. Again, it is not a must that you activate this Pick mode to pick the pixel. You are free to pick
from the backdrop or from the image editor
using any view mode. The next step is to
add a Glare node. Previously, we used the Bloom mode. Now, what we want is the Streaks mode. Connect the image output to the image input and bring the viewer node to this node. I want the color to be a bit yellow and also six streaks to make it more interesting. And let's rotate the effect about 20 degrees. I think this is enough. We can revise it again later if we need to. The last step is to combine the streak effect with the main image. For this, we can create yet another Mix Color node. Unlike the vignette effect, now we want to brighten up the main image, so we use the Add blending mode. Next, plug the original image into the first slot, and then plug the streak effect into the second slot. Move the viewer node to this node, and here is the final result. Again, what is cool
about the compositor is that it is non
destructive and procedural. You can always go back to any nodes that you want
and make changes there. Just as an example, let's say you want the lamp shade material to also produce a streak effect. For this, simply click on the plus button and then click on the lamp shade material. Another example: you also want
to add the monitor screen. And so on, you will see all of the material names that
are active listed here. Of course, this looks
too exaggerated. To take out the last two
materials from the list, simply click on the minus icon and then click on
the monitor area. Let's do the same with the lamp shade material. Click here, and we are back to only having
spotlight material active.
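As with the other passes, the Cryptomatte setup can be scripted. A sketch assuming Blender 2.92 or newer, where the node identifier is CryptomatteV2; matte picking is normally done with the eyedropper, but the picked names simply end up in a text property (the material name below is hypothetical):

```python
import bpy

view_layer = bpy.context.view_layer
view_layer.use_pass_cryptomatte_object = True
view_layer.use_pass_cryptomatte_material = True

tree = bpy.context.scene.node_tree
crypto = tree.nodes.new("CompositorNodeCryptomatteV2")
crypto.source = 'RENDER'                 # read matte data from the render result
crypto.matte_id = "spotlight_material"   # hypothetical name; yours will differ
```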
27. 03-04 Project Furniture product rendering Part 1: In the following videos, we will do a series of
rendering projects. Basically, we are going to create product renderings of a lounge chair and an ottoman set called Cali Soft. This product was manufactured by Minotti, a well known Italian furniture company. I modeled this chair product a long time ago, so I don't think they still manufacture and sell this product. We already covered 3D modeling and materials in the
first two courses. As this course focuses on lighting and rendering, I am providing the 3D models complete with all the
materials and textures. This way, you can practice
lighting and rendering right away without having to create everything from
scratch. All right. Let's imagine that a
furniture company asks you to create a 3D rendering of one of their chair product lines, but they want a rendering that can easily be placed on different background colors. This way, their
marketing department has more flexibility when designing
the marketing campaigns. Notice that these renderings have transparent shadows that can work on different background colors. So how can we create
something like this easily and quickly in Blender? Well, the key to creating transparent shadows like this is to use the shadow catcher feature. But before we do that, let's first tackle the lighting, camera, and render settings. Currently, if we activate the rendered view mode, we can only see black like this. Again, this is because we do not have any light
sources inside the scene. Now, for studio lighting, you can find that a lot of people use at least three types of light: key light, fill light, and back light. You can read or watch
about this online to have a deeper understanding
of the concept and workflow. However, in this video, I'm going to do
something different. For a quick, beautiful rendering result, you can just use an EXR or HDR image that mimics studio lighting. You can find many of these images for free on Poly Haven or ambientCG and the like. You don't even have to find or use a studio-lighting-type HDR file. As long as it is an interior type, you can try it and see how it looks on your model. Personally, I prefer the ones that have neutral color tones, so that later, I don't have to do any white balance correction. But if you do like certain HDR or EXR files that lean warm or cool, feel free to try them also; just remember to apply white balance processing later, so that the color of the product that shows up in the final render does not stray too far from the original. For this lesson, I'll be using this HDR file
called Artist Workshop. In Blender, open
the shader editor. Make sure it is set
to the world mode, not the object mode. Press the home key if you
cannot see the nodes. If you already have the Node Wrangler add-on active, you can simply select the Background node and then press Control T. Currently, the scene is pink. This pink color is a way for Blender to tell us that
a texture is missing. Click the Open button and then browse to the folder in which you saved the HDR file. Click Open Image. And here is the result. As you can see, we can get nice looking lighting for product rendering quickly and easily, just by using an HDR file.
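The same world setup can be done in Python if you ever need to automate it. A minimal sketch with a hypothetical file path:

```python
import bpy

world = bpy.context.scene.world
world.use_nodes = True
nodes, links = world.node_tree.nodes, world.node_tree.links

env = nodes.new("ShaderNodeTexEnvironment")
env.image = bpy.data.images.load("//hdri/artist_workshop_2k.hdr")  # your own path here

links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])
```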
If somehow the light direction still looks off, you can try adjusting the Z rotation value. You can also adjust the X and Y rotation if necessary. Try not to make the main
light source hit the subject straight from the viewer's eye, as this will make the product look dull and the shape harder to read. What you want instead is to make the main light source slightly
off to the left or right. This way, the subject's shape is more defined and more interesting to see. In our case, I think 30 degrees is good for lighting from the right side. Or, if you want light coming from the left side, I think 170 is nice. Again, these are
not fixed numbers, as they may vary depending on your model and your HDR file. All right. Next, let's add a camera: press Shift A, then type 'camera' and then Enter. To make the active camera use our current viewpoint angle, you can hold Ctrl Alt and then press zero on the numpad. Usually, it does not exactly match how we
viewed the scene before, but it should be good enough
for the starting point. Next, you can use all the different techniques we discussed to control the camera. You can activate
the fly/walk mode using the Shift tilde (~) shortcut. This way, you can
control the camera using the WASD keys and your mouse. You can also use the
transformation tools or use the transformation
shortcuts, such as G and R, and also the
transformation fields on the right side panel. For this case, I don't want the camera to roll to the right or left. That is why I set a zero value for the Y rotation. Next, you can make the
camera look straight horizontally by setting
the X value to 90 degrees, but in this case, I do want to see more of the top
surfaces of the product. So I think values around 80
to 85 degrees work better. Finally, we can
adjust the height of the camera using
the z coordinate. I think 70 centimeters is fine
for our product rendering. Now, let's check the
render settings. For the final render output, we want a 4K square resolution. So we set both the width and height values to 4,096 pixels. But for doing test renderings, we want to set this to only 25% of the final render. Essentially, this will make Blender only render at 1024 square resolution. Later, after we are done with the test renderings, we can set this back to 100%. Okay. Next, we need to open the Render tab. For this scene,
light barely needs to bounce around walls to light our subject. So I don't think we need more than 512 samples. Later, for the final render, we can set the noise threshold to 0.01. As for now, for test renderings, we can set it to 0.04 or 0.05 to keep the rendering time low. For the denoiser, I am using OptiX, as my GPU is an NVIDIA RTX card. Next, set both clamping values to zero, so no highlight colors get cut off. We don't want to use any fast GI approximation for now, and we can leave all the
settings to their defaults. For the pixel filter, we can set this to one. And because we want to have
a transparent background, we need to turn this
transparent option on. Next, in a performance step, we can set the compositor
to use the GPU. Then turn on the persistent
data option so that we have faster rendering time for the second and beyond
test renderings. Finally, in color
management category, we can use the AgxFeld transform with the Be contrast reset. Again, I will need to use
white balance processing as our environment color is
already neutral. All right. Let's press F 12 to
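All of those settings can also be set from Python. A minimal sketch, assuming Cycles is the active render engine:

    import bpy

    scene = bpy.context.scene
    scene.render.resolution_x = 4096         # 4K square output
    scene.render.resolution_y = 4096
    scene.render.resolution_percentage = 25  # test renders at 1,024 px

    scene.cycles.samples = 512
    scene.cycles.adaptive_threshold = 0.05   # noise threshold; 0.01 for final
    scene.cycles.denoiser = 'OPTIX'          # needs an Nvidia RTX GPU
    scene.cycles.sample_clamp_direct = 0.0   # zero disables clamping
    scene.cycles.sample_clamp_indirect = 0.0
    scene.render.filter_size = 1.0           # pixel filter
    scene.render.film_transparent = True     # transparent background
    scene.render.use_persistent_data = True  # faster repeat renders
    scene.view_settings.view_transform = 'AgX'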
All right. Let's press F12 to see how it looks. So far, the rendering result looks nice, but we still don't have the contact shadow on the ground yet. For this, we first need to create a plane object to act as the shadow catcher. So press Shift A, then type 'plane', then Enter. Let's make this ten meters square just to play it safe. Every time we create a new object in Blender, the object gets this default material. As you can see, the color is white, or off-white to be precise. Personally, I don't mind this color. Just be aware that although this plane object later becomes transparent, its color will still show up in the reflections on glossy surfaces of the product, especially the ones that are facing downward. You can see this white or light gray color below the chair. If you mostly want to put the rendering against a white or bright-colored background later, then this is totally fine. But if you want to paste it on top of a dark-colored background later, you may want to change the material color of the plane object to a darker color. Or, my preferred way to solve this issue is simply to hide the floor object from the reflections. We'll see how we can do that in a moment. To turn on the shadow catcher feature, make sure you have the plane object selected. Then in the Object tab, in the Visibility category, you need to turn on the checkbox that says Shadow Catcher. Now the plane object becomes transparent, but any shadows that fall on its surface will be visible in the rendering result. To remove the white reflection I mentioned earlier, we can simply turn off the Glossy checkbox. Essentially, this makes the plane object invisible to the reflection calculation, and so now all reflections come from the HDR image.
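If you prefer to do the whole shadow-catcher setup in one go, this sketch creates the plane and flips both checkboxes from Python:

    import bpy

    bpy.ops.mesh.primitive_plane_add(size=10)  # ten meters square
    floor = bpy.context.active_object
    floor.name = "ShadowCatcher"

    floor.is_shadow_catcher = True  # Object > Visibility > Shadow Catcher
    floor.visible_glossy = False    # Ray Visibility > Glossy off: hidden in reflections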
Now that we have everything set up correctly, we can perform the final render. Increase the resolution back to 100% and also make the noise threshold smaller, such as 0.01. Press F12 and wait for it to finish. After the rendering is done, you can save the image via the Image menu using the Save As command. Make sure you save it in a format that supports an alpha channel, and that you also save it using the RGBA option.
If you want to preview how this rendering looks against different background colors, you may be thinking of using Photoshop or Krita. Those are valid options, but you can also do it right inside Blender using the compositor. For this, you only need one node, called Alpha Over. Essentially, the image or color in the first slot becomes the background, and the image or color in the second slot becomes the foreground. So in our case, we want to plug the rendering result into the second slot. You do not need to plug in the alpha channel, as the Alpha Over node detects the alpha automatically. Let's press Ctrl Shift and click on this node to preview it. With this setup, you can easily change the color in the first image slot to compare our product rendering against different background colors.
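Here is the same compositor setup as a small script, in case you want to rebuild it quickly; the dark gray test color is just an example.

    import bpy

    scene = bpy.context.scene
    scene.use_nodes = True
    nodes, links = scene.node_tree.nodes, scene.node_tree.links

    rl = nodes.get("Render Layers") or nodes.new("CompositorNodeRLayers")
    comp = nodes.get("Composite") or nodes.new("CompositorNodeComposite")

    over = nodes.new("CompositorNodeAlphaOver")
    over.inputs[1].default_value = (0.1, 0.1, 0.1, 1.0)  # first slot: background color

    links.new(rl.outputs["Image"], over.inputs[2])  # second slot: our rendering
    links.new(over.outputs["Image"], comp.inputs["Image"])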
28. 03-05 Project furniture product rendering Part 2: In this project lesson video, we will create two renderings. Basically, we will present the same lounge chair set we rendered before, but now inside a 3D environment. Again, because we are focusing on lighting and rendering, I already prepared all the 3D models and textures beforehand. I even set up the camera position and the environment texture, because at this point, I believe you already know how to do these things. As a disclaimer, I downloaded some of the assets in this scene. I downloaded the HDR file for the environment lighting, called Evening Sky 026A, from ambientCG.com. The textures for the 3D models are either from polyhaven.com or ambientCG.com. And the tree that you see outside, I also downloaded from ambientCG.com. All of them are free to download, so I'm sure I am not violating any commercial copyrights by providing them for you. Besides the chair products, you can see that I included a table lamp and a side table. I modeled them based on real products, just like the chair. The side table is a product by NV Italia called NIC, and the table lamp is a product by Arteriors Home called the Adler lamp. I modeled them for a project about ten years ago. You can still find the link for the NIC side table, but I don't think Arteriors still makes the Adler lamp.
If you are curious about the render settings: basically, I am just using a 2K size for both the width and height values, but I set the percentage to 50%. Later, you can set this to 100% when doing the final rendering. As for the rest of the render settings, I am using the same settings as in the previous lesson. The only thing that is different is that I increased the samples count to 1,024, as this scene is much more complex than the previous one. And as I said before, because we will do a series of fast test renderings, I am setting the noise threshold to 0.05. Later, for the final rendering, you can set this to 0.01, okay?
Now, if you activate the rendered mode, you can see that currently the scene is quite dark, and the only light source we have is the environment, or the world lighting. If we set this factor value all the way to one, we get evening lighting with a mostly bluish tone. But if you reduce the slider, the sunlight will appear. The light rays go through the window and the curtains, and shine on the chair product and the floor. As you can see, with this environment setup we can switch the lighting between day and night quite easily. Let's first focus on the evening lighting. So set this slider to one to remove the sunlight, and let's switch back to the material preview mode to see the objects better.
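I have not broken down the node layout of the prepared world file here, so take this as a rough idea only: a day/night switch like this is commonly built by mixing two Background shaders with a Mix Shader, where the factor decides which one dominates. A minimal sketch, with the texture inputs deliberately left unconnected:

    import bpy

    world = bpy.context.scene.world
    world.use_nodes = True
    nodes, links = world.node_tree.nodes, world.node_tree.links
    out = nodes["World Output"]

    sunny = nodes.new("ShaderNodeBackground")    # would be fed by a sunlit sky
    evening = nodes.new("ShaderNodeBackground")  # would be fed by the evening HDRI

    mix = nodes.new("ShaderNodeMixShader")
    mix.inputs["Fac"].default_value = 1.0  # 1.0 = full evening; lower it for sun

    links.new(sunny.outputs["Background"], mix.inputs[1])
    links.new(evening.outputs["Background"], mix.inputs[2])
    links.new(mix.outputs["Shader"], out.inputs["Surface"])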
The first thing we are going to add are the ceiling lights. For this, we can make use of the Emission shader. Select the ceiling object and then press the forward slash key on the numpad to isolate it from other objects. Go to face mode, hold Alt, and click on one of these edges so we have this face loop selected. Create a new material slot, assign the selected faces to the slot, and create a new material. Let's name this material glow.ceiling.center. For glowing materials, we could just use the Principled BSDF shader, but to make the rendering a bit faster, we can swap the shader type to the Emission shader. It has fewer options, and so less processing is needed by Cycles when rendering. For the color, we want to use a Blackbody node, so click on the color input slot and choose Blackbody. We can set the color temperature to 6,000 Kelvin. This will make the light slightly warm. Next, we can set the emission strength to 50. If you are wondering how I know this number: it is because I already tried different values before recording the video. In a real project scenario, you may need to try different values until you find the one that best fits your needs. Okay. Next, let's select this face, then press Ctrl L to select the whole structure. Create a new material slot and assign the faces to this slot, then create a new material. This one you can name glow.ceiling.hidden. As before, we can swap the shader type to the Emission shader and use the Blackbody node for the color input. Because this is an accent light, we can make it quite warm, such as 4,500 Kelvin, and then set the emission strength to 60. Again, I know this number because I have tried different values before.
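The same glow material can be built in a few lines of Python. A sketch for the first material; the second one only differs in name, temperature, and strength:

    import bpy

    mat = bpy.data.materials.new("glow.ceiling.center")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.remove(nodes["Principled BSDF"])  # swap to the cheaper Emission shader

    emit = nodes.new("ShaderNodeEmission")
    emit.inputs["Strength"].default_value = 50.0

    body = nodes.new("ShaderNodeBlackbody")
    body.inputs["Temperature"].default_value = 6000.0  # slightly warm

    links.new(body.outputs["Color"], emit.inputs["Color"])
    links.new(emit.outputs["Emission"], nodes["Material Output"].inputs["Surface"])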
After you are done, you can press Tab to go back to object mode and then press the forward slash key on the numpad again to exit the isolation mode. Press zero if you want to activate the camera view, activate the rendered view mode, or you can also press F12 to create a preview rendering. And here is the current lighting condition. The next step is to create two light objects: one for the table lamp, and another one for simulating a spotlight or a downlight to create sharp shadows of the products on the floor. Now, when I need to place light objects precisely in the scene, there are two methods that I usually use. The first method is using the Selection to Active snap command, and the second one is using the 3D cursor. To use the first method, the target object we want to align to needs to have its origin point correctly positioned. This method will not do us any good if the target object has its origin point located somewhere else. For our table lamp model, notice that the origin point is already positioned at the center pole, where the light bulb should be. At least it is already correct in terms of the X and Y coordinates; we can easily move the light object up on the Z coordinate later.
For this method, the 3D cursor can be placed anywhere. It does not have to be near the target object. Press Shift A, then type 'point', then Enter. Currently, the point light object is selected and located at the 3D cursor location. It is okay if it is not directly visible in our view. Be careful not to click on anything, as that will change the selection. In this condition, hold Shift and click on the target object, and then press Shift S to open the snap pie menu. Next, you need to use Selection to Active. As you can see, the point light object just moved to the origin location of the table lamp object. We can move it up until it is roughly at the center of the shade. For the light settings, let's set the radius bigger so it better simulates an LED light bulb; I think 6 centimeters should be enough. We also want to use the Blackbody node to control the color, so click on this Use Nodes button. Add the Blackbody node, set it to 4,500 Kelvin to make it warm, and set the emission strength to 15. We want to fully control the light intensity using this strength slider, so we should set the light's power to just one watt.
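Scripting sidesteps the snap menu entirely, because you can copy the target's origin directly. A sketch of this lamp light; the object name and the height offset into the shade are assumptions:

    import bpy

    data = bpy.data.lights.new("LampBulb", type='POINT')
    data.energy = 1.0             # 1 W; the node strength drives intensity
    data.shadow_soft_size = 0.06  # 6 cm radius, like an LED bulb

    data.use_nodes = True
    nodes, links = data.node_tree.nodes, data.node_tree.links
    body = nodes.new("ShaderNodeBlackbody")
    body.inputs["Temperature"].default_value = 4500.0
    nodes["Emission"].inputs["Strength"].default_value = 15.0
    links.new(body.outputs["Color"], nodes["Emission"].inputs["Color"])

    light = bpy.data.objects.new("LampBulb", data)
    bpy.context.collection.objects.link(light)

    lamp = bpy.data.objects["TableLamp"]   # hypothetical object name
    light.location = lamp.location.copy()  # same idea as Snap > Selection to Active
    light.location.z += 0.45               # raise into the shade (assumed offset)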
Okay, let's preview the current lighting condition. Because the video duration is already quite long, we will continue the project in the next lesson.
29. 03-06 Project furniture product rendering Part 3: In this lesson video, we will finish up the product rendering project that we started in the previous lesson. I explained before that there are two methods I usually use for placing light objects. The first is using the Selection to Active snap command, and the second one is using the 3D cursor. We have covered the first method; now we are going to use the second method. Let's say we want to create a spotlight just above this point on the left armrest. Hold Shift and right-click at that location to move the 3D cursor. Now, you could create a spotlight object for this, but because we will be using an IES file, it is better to use a point light. So press Shift A, then type 'point', then Enter. Currently, the point light object is located at the armrest of the chair. We want it to be higher, so let's make the Z coordinate larger, like 220 centimeters. The light will be off screen, so we don't really need to place it exactly on the ceiling. Let's set the lighting parameters. Change the power field to one watt and set the radius value to zero, as we want sharp shadows cast from this light object. Use a Blackbody node for controlling the color; I think 5,000 Kelvin is enough for this light. To make the light more realistic and interesting, you can use an IES file to control its light distribution. So click on the strength slot and choose IES Texture. Choose the External type, and then for the file, you can use the one I provided. Finally, for the strength, we can set this to two.
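In Python, the IES setup looks like the sketch below. The file name and the light's X/Y position are placeholders; substitute the IES file from the exercise files.

    import bpy

    data = bpy.data.lights.new("DownLight", type='POINT')
    data.energy = 1.0            # 1 W
    data.shadow_soft_size = 0.0  # zero radius for sharp shadows

    data.use_nodes = True
    nodes, links = data.node_tree.nodes, data.node_tree.links

    ies = nodes.new("ShaderNodeTexIES")
    ies.mode = 'EXTERNAL'
    ies.filepath = "//downlight.ies"  # hypothetical file name
    ies.inputs["Strength"].default_value = 2.0

    body = nodes.new("ShaderNodeBlackbody")
    body.inputs["Temperature"].default_value = 5000.0
    links.new(body.outputs["Color"], nodes["Emission"].inputs["Color"])
    links.new(ies.outputs["Fac"], nodes["Emission"].inputs["Strength"])

    obj = bpy.data.objects.new("DownLight", data)
    bpy.context.collection.objects.link(obj)
    obj.location = (0.4, -0.3, 2.2)  # above the armrest (X/Y assumed), Z = 220 cm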
Before moving on, I need to mention that although we have been using only the Properties editor, everything we have done here is reflected in the shader nodes automatically. The resulting nodes may not be tidy, but they function the same. Essentially, it is up to you whether you want to use the shader node editor or just the Properties editor.
All right, let's go back to the camera view and render the scene by pressing F12. Notice that now we have these nice sharp shadows on the floor. The rendering result looks nice, but I want to add a more bluish tone outside the room. For this, we can add an area light. Shift and right-click on a window glass near the center mullion, then press Shift A, type 'area', and then Enter. Move this area light so it is outside the balcony, and also move it up to make it look like the light is coming from the sky. Let's change the type to Rectangle so we can set different values for the width and height. Let's make the width four meters and the height two meters. Go back to our camera view by pressing zero, and then drag the light's target handle and place it roughly near the foot of the ottoman. Okay? Now, for this area light, we want to use a custom blue color that does not exist in the blackbody spectrum. So for this light, we do not need to use the nodes; simply change the color up here to a strong blue color, and let's set the intensity to around 100 watts. Now we have this strong blue color just outside the window. But notice that currently the shape of the area light is visible. Why is that? Well, this is because, by default, Blender does not hide light objects when they are viewed through transmissive materials. In our case, the window glasses are transmissive; if we hide the window glasses, the light object becomes invisible. We can fix this easily by selecting the area light, and then, in the Properties editor, opening the Object tab. In the visibility settings, under the Ray Visibility category, you need to turn off the Transmission option. Now the shape of the area light does not show up, even when it is behind a glass material.
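The area light and its ray-visibility fix can be scripted too. A sketch; the exact location and the tilt toward the ottoman are assumptions:

    import bpy, math

    data = bpy.data.lights.new("SkyFill", type='AREA')
    data.shape = 'RECTANGLE'
    data.size = 4.0                # width: four meters
    data.size_y = 2.0              # height: two meters
    data.energy = 100.0            # watts
    data.color = (0.15, 0.3, 1.0)  # a strong custom blue

    light = bpy.data.objects.new("SkyFill", data)
    bpy.context.collection.objects.link(light)
    light.location = (2.5, 1.5, 3.0)            # outside the balcony (assumed)
    light.rotation_euler[0] = math.radians(60)  # tilt roughly toward the ottoman

    light.visible_transmission = False  # hide its shape behind the window glass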
Let's render a preview again and see the result. After looking at the rendering, I think I want to make the left bar and the right bar emissive. You won't see the right bar directly, but it will still contribute lighting to the right surface of the chair. So select the bars object and isolate it. Go to face mode, press A twice just to make sure nothing is selected. Then hover over the left bar and press L on the keyboard; select the right bar also using the L shortcut. Then create a new material slot and assign the faces to this slot. Now, to save time, instead of creating a new material, let's use the previous glow material and duplicate it by pressing this button. Name this material glow.side.bars. For the color, let's make it 3,000 Kelvin so it emits a strong yellow color, but turn down the strength to only ten. Let's go back out of the isolation mode and then do a preview render. Now, I think the blue area light is too strong. Let's change its power to 50 watts and do another render preview. Okay, I think the night scene is done. Next, let's work on the day scene. For this, we can open the node setup for the world lighting and simply decrease the factor value in the Mix Shader node. Note that if you set it too low, the sunlight becomes very bright. I prefer dimmer sunlight, so I think 0.6 to 0.7 is enough.
Next, we want to make the lighting tell more of a story about the surroundings. Essentially, we want to add some shadows on the floor and the chair. This will give viewers the impression that the room is surrounded by large trees. For this effect, you could just copy the 3D tree we have outside and place it near the balcony. That is one way to go about it. But in this video, I want to show you another approach, and that is using gobo textures. Essentially, we create a plane object with a tree silhouette texture to block the light. Note that this technique of blocking light to create fake shadows is very common in photography. The term gobo actually stands for the words 'go between optics'. Photographers usually use small disks made of steel or glass to contain the shadow pattern, and then place them inside the lighting fixture. Again, the term gobo, or 'go between optics', describes the location of the disk within the light path of the fixture. When using Blender, or 3D in general, we can apply gobo textures in several different ways. In this video, I'll be showing you what I think is the easiest. We will be using this image as the texture. Notice that it is in PNG format and has an alpha channel. You can have colors in the image, but the important part of it is the alpha information, not the color. If you are wondering, I created this image in Blender from a 3D tree model.
Okay. First, Shift and right-click on the balcony wall to place the 3D cursor. Next, you could create an ordinary plane object and set all the shading and scaling manually. But we don't want to do that, as Blender already provides a special feature to create a plane object based on an image. You can find it in the Add menu, under Image, and then Mesh Plane. Notice that Blender does not create the plane right away. Instead, it prompts us to pick the image file, such as the gobo texture file I provided, and then click the Import Image button.
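If you want to script this step, recent Blender builds expose the Mesh Plane importer as the operator below (older versions shipped the same feature as the 'Import Images as Planes' add-on with a different operator name). The texture file name here is a placeholder:

    import bpy, math

    bpy.ops.image.import_as_mesh_planes(filepath="//tree_gobo.png")  # assumed name

    gobo = bpy.context.active_object
    gobo.rotation_euler[2] = math.radians(90)  # same as R, Z, 90
    gobo.scale = (4.0, 4.0, 4.0)  # then scale/position until the shadows read well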
As you can see, we now have a plane object with the gobo texture applied to it. The nice thing about this method is that the plane object automatically has the correct aspect ratio, and the alpha channel is already set up to affect the opacity. It would take quite some time if we had to do all of that manually. Let's rotate the plane object by pressing R, then Z, then 90, then Enter. After that, you can try scaling it and then moving it so it blocks the light that is going into the room. If you are in camera view mode, you can make use of the transformation shortcuts such as S and G, or use the transformation fields in the right side panel. If you need more shadows, you can duplicate the plane object using the duplicate shortcut, and just position or scale the new plane objects until you really like how the shadows look. After several minutes, I finally settled on this formation. And that's it, we are done.
Now we can perform the final render. As a reminder, don't forget to increase the percentage value of the output size to 100%, and also set the noise threshold to 0.01. All right. Here is the final render for the evening version, and here is the final render for the day version.