Guerilla Photogrammetry - Fast and Easy Realism

Hamza Meo, Limitless


Lessons in This Class

1. Introduction (1:22)
2. Mesh Generation - Reality Capture (12:05)
3. Mesh Cleanup - Blender (29:47)
4. Texture Optimisation - Blender (32:30)


84 Students


About This Class

To achieve Realism in CGI, there are two main secrets: Lighting and Texturing.

In this class, we will learn a fast and easy workflow to quickly make high quality and realistic assets for your 3D renders.

We will achieve this by photogrammetry, BUT not how we conventionally do it.

We will cover:
1. Quick acquisition of data for photoscanning in a matter of 2-3 minutes instead of hours
2. Processing this information in Reality Capture
3. Correcting the mesh artifacts and blobbiness in Blender
4. Converting an extremely high poly mesh to low poly while preserving every detail in the texture
5. Optimising the model for use in your 3D scenes
6. Free Assets!

We will talk about how to quickly make a realistic asset for your scenes within real-life limitations. You come across something during your evening walk and wish it could be a part of your next CG scene. Take out your phone and START SCANNING!

With this run-and-gun scanning, the artifacts that arise will be taken care of in Blender.

In the end, you will have a beautiful asset for your asset library.

HAPPY SCANNING!

Software required: Reality Capture or Meshroom (free), and Blender

Meet Your Teacher


Hamza Meo

Limitless


Hi, I am Hamza, a medical doctor and a 3D artist. I absolutely love 3D and work extensively in Blender, DaVinci Resolve, and After Effects. With 9+ years of experience, I love teaching; it feels like a way to give back to the amazing 3D community.

Happy Learning!


Level: All Levels



Transcripts

1. Introduction: When it comes to realism, there are really only two main things that you need to worry about: lighting and texturing. For lighting, there is a separate miniseries that you can have a look at, but here we'll see how real-life textures just propel your renders to absolute realism. When you talk about models with real-life textures, there is no better way to achieve them than by using photogrammetry. We'll make the Scorpion APC using photogrammetry and also see how to correct the nuances that arise during the process. During scanning, you don't always have ideal conditions. This three-part series is all about making the best possible assets in less than ideal conditions. We'll talk about how we can quickly gather the data for photoscanning using video from your phone in a matter of minutes, process it to be ready for photoscanning, and, importantly, how to process these extremely dense meshes in Blender to make them ready for use in your scenes. Stick around and you'll get a free high-quality 3D model as well, which I've made for you using the same technique. We'll be using Reality Capture, and this technique is guerilla photogrammetry.

2. Mesh Generation - Reality Capture: All right. We'll use our phone to shoot our video in overcast conditions at 4K 30 FPS, which most devices these days are capable of. We will move slowly around the subject to reduce motion blur and capture every part of it that we can. This is the Scorpion APC. We'll move around it in an onion-peel fashion, which means we circle the object in the first pass and then move subsequently closer with the next passes, making sure to get closer to individual parts of the object. We can see that we have a real-life limitation here: we cannot reach the top of the object, and this becomes problematic when you do not have texture data to project. Later in the process, we will see how we mitigate this using texture painting during the processing part in Blender. We'll take three round passes, and this will mostly get the job done.

We'll bring this file into After Effects and speed it up so that we have around 600 frames for an object the size of a vehicle. With about 600 frames, we ensure that we have adequate overlap between frames, which will help with aligning the images in Reality Capture. In the next step of the process, we will bring up the shadows and decrease the highlights so that the image is flat and the lighting is neutralized. You can shoot log footage as well, which will give you some more wiggle room to play with. We will then export it out as a JPEG sequence.
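As an aside on this frame-extraction step: if you would rather not go through After Effects, a roughly 600-image JPEG sequence can be pulled straight from the clip with ffmpeg. This is a minimal sketch, assuming ffmpeg is installed and on your PATH; the file name, frame counts, and output folder are placeholders, not part of the original workflow.

```python
import os
import subprocess

clip = "scan_walkaround.mp4"    # placeholder: your 4K 30 FPS phone clip
total_frames = 3000             # placeholder: how many frames the raw clip has
target = 600                    # roughly 600 stills for a vehicle-sized object
step = max(1, total_frames // target)

os.makedirs("frames", exist_ok=True)

# Keep every `step`-th frame and write it out as a JPEG sequence.
subprocess.run([
    "ffmpeg", "-i", clip,
    "-vf", f"select='not(mod(n,{step}))'",  # drop all but every step-th frame
    "-vsync", "vfr",                        # one output image per kept frame
    "-q:v", "2",                            # high JPEG quality
    "frames/frame_%05d.jpg",
], check=True)
```

This only replaces the speed-up and JPEG export; the shadow and highlight flattening described above is still done in your editing or grading tool of choice.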
We've now converted the video into frames, 617 in total. We'll select them all and then put them into Reality Capture. There we go. The first step is to align the images. Setting the image downscale factor to 1, we click Align. We wait and we wait. All the images have aligned and we've got ourselves a sweet point cloud. This is so fascinating for me, when all of it comes together quite nicely, fairly early on in the process. We'll now examine it to see how the point cloud has come together and see if anything is lacking. We'll define the ground plane now to align it with one of the axes. We switch to the top view and then rotate it to align it with one of the axes. This will help define the area for mesh generation, which is the next step. We'll see how it looks from different angles and then go on to set the reconstruction region. We'll clear the current region and define a new one, setting the rectangular boundaries as close to our subject as possible. There we go. Then we just inspect it in perspective view to see how everything binds together and adjust the height. See how it's looking: top view, side view, make minor adjustments if needed. Back side again, some minor adjustments, and we'll make the bottom margin a bit tighter as well. In the photogrammetry process, inspection at every step of the way is paramount, so you'll see us inspecting our model from time to time to see how we're doing at that particular step. Everything seems to be on point, and we can go to the next step of the process, which is generating the mesh from this defined area.

For mesh generation, an image downscale factor of 2 is a good balance between quality and manageability, and reconstruction in Normal detail is how you should be doing it. We'll wait for it to calculate the mesh for us. Perfect. Now the mesh is calculated inside the region that we defined, so we'll just go over and see how the mesh is looking. Just observe it from every direction. See what parts have been reconstructed well, what parts are lacking, how the mesh is looking. See if any nooks and crannies have been missed, any holes in the mesh, any part that didn't generate as you intended it to. In objects like these, oftentimes it's the underside of the model that is under-generated, because we mostly cannot physically go underneath it and capture the necessary data. We'll see how we address that in the Blender processing part later on.

But now we will get rid of the extra geometry by selecting it and filtering it out. Using the rectangular selection tool as the first pass, we crudely go over all of the geometry to broadly select all of the extra bits, and then we'll refine it later on with the lasso tool, which is a bit more precise for micro adjustments in the selection. Pressing Ctrl while you select new geometry keeps the previous selection while enabling you to select new parts. By pressing Ctrl and dragging, we select all the extra bits that we can. This part of the process is important because we do not want to be projecting textures onto areas that we simply won't be using later on. So it's a good practice to save ourselves from unneeded, unnecessary texture and mesh information, and it will also save you memory down the line.

So we'll just keep selecting crudely as much as we can, inspecting the model as we go, which is always a good practice, and making sure we're not leaving any areas behind. Just selecting as much as we can using the rectangle tool. Go to the other side, select that as well. Now, generally the selection process is fast and simple, but because this is a tracked vehicle, we will need to spend a tiny bit of extra time. I switch from the rectangle to the lasso tool quite early on here; let's see how far we can take the rectangle selection tool. Another tip is to double-click with your left mouse button while you're selecting, which moves the rotation pivot of the model. This will enable you to rotate around any specific area of the mesh that you intend to. We'll go underneath and see if we can select some parts from down there. It's always great to have as little as possible to clean up in Blender. Reality Capture handles these extremely high poly meshes really well, but if you do it in Blender, it will bog you down a bit. So ideally, clean up as much as possible in Reality Capture. Now our first selection pass is complete and we'll do the filtering.
Now we've crudely gotten rid of that extra geometry, but it still needs that micro cleanup before texturing. We'll select the lasso tool and then start the micro cleanup process. We're setting up the rotation pivot by double-clicking the left mouse button, like I said earlier, and now we'll trace out the object as closely as we possibly can. This process does leave a bit of jaggedness, and we will fix those jagged edges when we go into Blender. This is a bit of a tricky selection because we're making our way through the teeth of these tracks; the lasso selection tool is extremely good in situations like these. The blue triangles that you see underneath are non-manifold geometry that has arisen as a result of our previous pass. They are not a part of our mesh, and they do not form a closed mesh on their own either, so we would want to get rid of them sooner or later. We can see that some of them are being selected because they are getting in the way of our selection. Let's quickly select this, making sure that we select as many of these stray triangles as possible.

We'll go underneath the track once more. Thankfully, not in person. We keep selecting as closely as possible. Once again, remember to keep pressing Ctrl on your keyboard while you select. We'll go as far back as we can in this view while tracing the model, and then come back to complete the selection. Let's select this left-over island here. There we go. We'll go to the other side to check that we're not taking any part of the mesh that we want to keep. That looks good, so now we can move on to the other side. Let's trace this track. Now, I can understand that this can be a bit boring, but believe me, the time spent here will save you hassle later on in the process. So take your sweet time, put on some music, and just get on with it.

Isolating the ground touching the side of the tracks is relatively easier, because we have a defining margin between the tracks and the ground, so we will just trace that easily. It's helpful to follow these natural lines in order to prevent a sharp cutoff when you develop the model. You'll notice that while selecting, I'm constantly changing my rotation pivot point. This ensures that we are only rotating around the point of interest. Inspecting our selection as we go, and moving on to the inner side to clean up what we can at this stage. What we can't clean up, on the other hand, we'll just handle with Booleans in Blender. Now we go on to the back side, selecting this ill-defined topology as much as we can. This is pretty much what we can do at this point without wasting any more time, so we will go on to the next step. Just give it a quick look and we can go on to filter the selection. It takes a second or two to process, and now we have our mostly cleaned-up model here.

Let's see how we're doing in terms of topology. Give it some time, and it shows that we have a few defects in our topology. Let's now fix that. These topology issues arise as a result of our cleanup process, but fortunately, Reality Capture has this important and very handy feature where it can detect bad topology for the most part and then try and fix it. The software tries to get rid of non-manifold geometry and fix any inconsistencies with the geometry. It's a good automatic first pass to help you down the line. Now, the model that we have is 5.3 million tris, which is arguably quite large, so we'll try and simplify it to around 500,000 tris, which is a bit more manageable.
If you have a low-end machine, you can use this 500K model, but I would recommend using the roughly 5-million-tri model, because the texture that gets projected onto it will be much crisper and better quality. Later we'll go over how to transfer that high-quality texture onto a model version with just 20K tris out of the 5.3 million that we've made here. In the next part, we'll actually be working with the 5.3-million-tri model. However, before we do that, we'll need to unwrap the model for the texturing part of the process. The unwrapping looks fairly dense, which is always great to see, and then we'll click Texturing. Now the textures have been projected based on the UV unwrapping that we did. We'll again quickly scan over the model and see where the discrepancies between the texture and the mesh are. It looks fairly fine except for the parts where there was no data to generate texture from. We couldn't capture the top of the vehicle, as we had real-world constraints, and this is what this series is all about: making lemonade out of lemons. After texturing, just move around your model to see what areas would need to be worked on, to get an idea of where you'll need to focus in post. In the next part, we'll head over to Blender and make it into a usable asset. If you've got any questions or suggestions, be sure to let me know and we'll sort it out for you. Until then, farewell.

3. Mesh Cleanup - Blender: In the last part, we exported our model from Reality Capture. In this part, we'll see how we treat this model within Blender to make it production ready. We've imported the model here in Blender. Let's see how it generally looks and check whether there's any stray geometry around it. We'll see the model in rendered view as well and look for any discrepancies between texture and geometry, what things we need to work on, and just get a general overview of how it looks. We'll go into solid mode again and rename this to UHP, which stands for ultra high poly. Let's turn on the statistics to see the number of polys that we're dealing with: 5.3 million, roughly. Let's now move the model to a more centralized position and rotate it to align with the views in Blender, so the front of the vehicle faces the front view in Blender. Then move it to a centralized position from the side view as well as the front view, nudge it down a little to ground level, and see if everything looks fine. Then we'll set the origin to the 3D cursor. This ensures that it will scale and rotate from the ground level where this vehicle will be sitting.

We've started adding a cube underneath the vehicle, where we couldn't reach earlier and the geometry was not constructed the way we wanted it to be, so we'll knock out this degenerate geometry. The way we knock it out is by using Booleans, so this box that we're making is essentially a Boolean cutter. We'll scale the box so that it encloses the geometry that we do not want, but at the same time be careful that it does not include the geometry that we want to keep. We do want to keep the inside teeth of the tracks, so we'll make sure the box does not cover them before executing the Boolean operation. We see that the box covers what we wanted it to cover, select the box and then the vehicle as active, and press Ctrl+Minus. This cuts out anything within the box that we did not need. Before applying, we check the mesh once more, and then we click Apply.
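For anyone who prefers to script this cleanup, here is a minimal bpy sketch of the same Boolean-cutter step. It assumes the scan object is named "UHP" and the cube is named "Cutter"; both names are placeholders, and the Ctrl+Minus shortcut used in the video likely comes from the Bool Tool add-on, which this sketch does not rely on.

```python
import bpy

scan = bpy.data.objects["UHP"]       # the ultra high poly photoscan (placeholder name)
cutter = bpy.data.objects["Cutter"]  # the box enclosing the geometry to remove

# Add a Boolean modifier set to Difference, using the box as the cutter.
mod = scan.modifiers.new(name="CutUnderside", type='BOOLEAN')
mod.operation = 'DIFFERENCE'
mod.object = cutter

# Apply the modifier so the cutter's volume is carved out of the scan...
bpy.context.view_layer.objects.active = scan
bpy.ops.object.modifier_apply(modifier=mod.name)

# ...and delete the cutter, which has done its job.
bpy.data.objects.remove(cutter, do_unlink=True)
```

The loose islands left behind are still easiest to clean up interactively, exactly as described next: L to select the main body, Ctrl+I to invert, X to delete.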
The modifier is now applied, and this box has done its job, so we don't need it anymore and we'll just delete it. We keep on inspecting the model every step of the way. Now, the second step is to get rid of the geometry that is not attached to the model. We go to Edit Mode, deselect everything, press L over the model to select the bulk of it, and see what parts have not been selected. These are the islands of geometry that we do not need. Once that is done, we press Ctrl+I to invert the selection, and let's also select the stray vertex that's left by the Boolean operation. We press X and delete the vertices. We'll see what other extra geometry we need to delete: geometry that was left attached to the model, or geometry that did not come within the Boolean operation or the invert-selection method. With a little more inspection, we find that these parts underneath do not form part of the model and are not necessary; they were left behind by the reconstruction bounding box. We'll just take our time, select all of them, there we go, and delete them. The other side does not have stray triangles like these, so that should be fine. Let's inspect our model once again, shifting into Object Mode, and see if the model is ready for the next stage of the process. Everything seems to be in order, so we'll go to the next stage, which is sculpting.

In the sculpting process, we will get rid of these coarse surface features, which are not originally part of the object but arise as artifacts from the photogrammetry process. A good question is why we should get rid of these imperfections, especially if the model looks fine when we saw it in rendered view. Well, the answer is that when we bake the normal map in a subsequent part of the process, this artificial bumpiness will become a part of the normal map. Because these are artifacts, we do not want them to become part of our normal map. However, we need to be careful not to over-smooth and overdo this, because we then run the risk of losing mesh detail, just as we did here. We'll switch to the flatten brush, which will give us some of the form back, especially when you're dealing with flat surfaces.

I mostly tend to switch between the smooth brush and the flatten brush: the smooth brush for meshes which have more organic curves, and the flatten brush when the models are more faceted and hard-surface. An important difference that you should and must know when using the flatten and smooth brushes is how the two affect the textures. When you use the smooth brush, it sort of warps the textures, because it's averaging the height of the mesh detail within its circle of influence. The flatten brush, however, does not cause any warping of textures, because it just flattens everything into one plane. I therefore primarily use and like the flatten brush, but if something's not working out the way it should, or I don't want it to be faceted, then I'll tend to use the smooth brush. Another thing the smooth brush does is shrink everything together, sort of slimming it. You can avoid the slimming by using the flatten brush, or, if you have to use the smooth brush, you can switch to the inflate brush to make up for that shrinking. You'll also notice that while I'm using the brushes, I'm not using them at 100% of their strength; I'm using them at about 35-45%, depending on the influence that I want from them. This part will be fairly sculpting heavy.
So if you don't enjoy watching coarse parts of the mesh being smoothed out, you can skip to the last third of the video and carry on from there. In the last third of the video, we will be baking texture maps from this ultra high poly mesh to a high poly mesh. For those who want to stay, we will be talking through the process and our thought process behind whatever we are doing. A good way to conduct this smoothing process in sculpting is to visually select one part of the object and then make your way from there successively to the adjoining areas. Currently, we're flattening out the back side of the vehicle using the flatten brush. I generally do a coarse pass over the intended area and then zoom into smaller areas which need a bit more finesse by hand. So this bottom back plate is done, and we move on to its top adjacent area. Switching from the smooth to the flatten brush, we move carefully in order to not flatten the areas that should not be flattened, like this curved ridge right there. In this way, we're making the model more defined, keeping the ridges and eliminating the blobby, elevated bits. You don't need to be very, very precise with it. This process can be very forgiving, so you don't have to break your back over it.

Another advantage of this flattening process is that at a later stage, when you are decimating the mesh, the resultant mesh that comes out after decimation will have fewer polys in these flattened areas than if we had not flattened the surfaces. The Decimate modifier assigns more vertices and more geometry to the areas of the model which are more curved, in order to preserve that curvature. The areas that are flat will not get that much geometry, because they don't need that much geometry to hold themselves together. If you do not flatten the areas which are supposed to be flat, the subtle changes in height will be picked up as curvature by the Decimate modifier, and these areas will end up gathering more geometry. Ultimately, the geometry distribution does not end up where it should, and this gives us a low-quality result after decimation, which we obviously do not want.

Just taking care of the back here. You'll see that while we flatten, we do get rid of some of that detail, but that's not worrisome, because we will get that detail back from the texture and the normal map that we generate from it. We keep flattening the facets, keeping in mind the plane they lie in, and flatten this bit as well. Then, going up top, we will take care not to dissolve these X-shaped ridges, because they do give a fair bit of detail to the back side. We keep avoiding those ridges and flattening out the areas between them. If you're not good at sculpting, or even if you've not tried it yet in Blender, please don't be intimidated by the process. It's straightforward. In this process we use a very limited number of brushes, so you don't really need to know a ton about everything in sculpting except the few brushes that we're using. Anyway, like at every step in photogrammetry, we are moving around the model quite a lot to see how it's looking at that particular stage. We continue doing that; I want to drill this into you, because it is very important at every step. We are done with the back of the model, so we'll move on to the side over here, align to the plane, and start flattening, and flatten a bit more. Once we're done with a local area, just give it a little look to see how it's looking. Then this little area, we can flatten out as well.
After that is done, we will compare it to the other side, then go back to the other side and start working on these tracks. Now, importantly, we'll use the smooth brush to smooth out these wheels, because we don't want the curvature of the wheels to get flattened by the flatten brush. You see the raised and depressed area in the central part of the wheel; we do not want to flatten that. In the sprocket, same thing again. Now, when we're doing the outer rim of the wheels, we do need them to be flat, so we switch to the flatten brush, decrease the size, and carefully outline that circumferential area of each wheel, so as to give it a little bit more definition. We've adjusted the strength to 50% because we only want to give one pass over the area, in order to not introduce any more facets while we're using the flatten brush. In these uncommon situations, you do need to be relatively more careful while you're tracing out the topology, because there is a chance of messing it up, especially with circular shapes like these when treated with these sculpting brushes. At about the 2 o'clock position on this last wheel here, we see that there is a dent, and we do seem to be struggling to get it out. The flatten brush flattens out the topology based on the viewing angle that you're viewing the model from; you should align the viewing angle with the plane of the surface that you are flattening. We've gone out of the norm here and shifted our viewing angle to fix that one key bit of topology, and it does seem to do a relatively fair job. This sprocket is relatively straightforward: just flattening out between the pins here, here, just around here, and then we'll complete it all around.

Let's examine where we are going next. The topology underneath the tracks, which we cut away with Filter Selection in Reality Capture, is a bit jagged, and we left it to fix in Blender. Now is the time to do it. We use the smooth brush to smooth out all of these saw-shaped jagged edges. You don't have to be precise, because these will be touching the ground anyway, but just for the sake of clarity and cleanliness, we'll quickly take care of them. This smoothing out of vertices will also help if you want to select all these border vertices and give them a face, but we won't give it a face here. So we'll just tidy up any uneven, unkempt edges, and then we'll shift to the other side. We'll pick the front of the vehicle as our next site of action.

While we are flattening this side of the mudguard, we see that the flatten brush is creating an artifact, so it is a good time and a good example to switch from the flatten brush to the smooth brush. We do that, and that takes care of it. Same brush at the front with 20% strength so that the textures don't get warped, and we'll cover this front hole quickly. No rocket science here. We go underneath, because oftentimes we tend to miss that, and then we'll cover this little island here as well. You can see that with 20% strength on the smooth brush, we are retaining some of that bumpy detail, and that will become part of the normal map that we bake later on. Some level of bumpiness is desirable to keep detail at the mesh level as well. In effect, the normal map drives the micro details, and this macro bumpiness drives the macro non-uniformities. If some part of the mesh is particularly malformed, to the point of no rescue, and you know you can't salvage it by mesh manipulation, it's easier to just get rid of it outright.
So delete that part of the mesh using Booleans, and then you can model that part yourself and place it exactly in the position of the part that was deleted. Join the newly modeled part with the original mesh by selecting both and pressing Ctrl+J. Then, when you bake the textures onto the lower poly mesh, your original textures will be baked onto the new geometry that you made. We won't be needing that technique here, because we've obtained fairly fine geometry, but that's a neat trick to have up your sleeve. This part here has a texture of straight grill lines. We'll be careful not to use the smooth brush, to prevent those straight lines from curving. As we go further and further up, we have less and less texture detail because of that real-life limitation we had during the capturing process, so we're not too concerned about avoiding the smooth brush here. But the flatten brush still gives more defined edges, so we'll continue to use that, flattening out this top part a bit, keeping in mind all the ridges that we encounter, preserving those, and then nicely moving on to this front. You can see how the flatten brush defines these edges quite nicely and gives back that angular nature of the transition between the two facets. Taking a step back, having an overview, and taking care of those tracks on the right side. Well, the left side of the vehicle, but right side for simplicity, forgive me. There we'll continue working our way up.

Remember when I told you that the smooth brush has the effect of shrinking the mesh? So we'll be gentle when we use the smooth brush here on the front of the barrel. Since this is a flat area and bumps on it do not really make any sense, we'll just pick the flatten brush and flatten it out completely. We switch to the smooth brush again to go over the rest of the barrel, especially the underside and the back of it, gently covering what's left, making sure we do not become too aggressive with it. We already have the strength set at about 20%. Slowly but surely, we see that the model is transforming from being bumpy and lumpy to a more planar, well-defined model. As we go through this sculpting pass, the edges of the model become more defined, and this will help when we light the model up for rendering. The light will blend in nicely around the edges, giving great-looking edge highlights. If we put the model in a ray-traced engine without this sculpting pass, these bumps would cast some amount of shadow, and this would affect the texture integrity of the model. The extent of sculpting and treatment can vary depending on how clean the data initially provided to Reality Capture was, but here I deliberately selected this model to teach you how to optimize suboptimal meshes and still get great results.

So, we know we could not reach this top part of the object when we captured the video. Inherently, this part of the mesh is relatively blobbier, but just by using the smooth and flatten brushes, it nicely defines the geometry and reinforces the creases between the facets. While we are alternating between the same tools that we have used to fix the other parts of the mesh, I want to take out some time and speak to you from a broader perspective about why you should even bother collecting these photoscans. You see, you should not be fixated on creating a single model or a scene or some animation; what you should be striving for is creating systems. When you're creating an animated or 3D scene,
an asset repository or library is essential to increase the productivity of your workflow. Every major studio has their own asset library in one way or another, whether it's made in-house or bought from somewhere online. So in your everyday life, whenever you come across an object that particularly catches your attention, whether it's a hard-surface object, a nature object, or an everyday object in your home, anything that makes you go, hmm, I should make this a part of my 3D scene: take out your phone, scan it, and make it a part of your asset library. Over time, these assets will accumulate, and before you know it, you will have an entire repository that you can use as drag-and-drop objects, and you'll have your very own personal asset library. And believe me, there's something about scanning your own assets that's so satisfying, it's almost addicting. To see assets that you have captured from real life within your 3D scenes is a joy on another level. This way you can quickly create your 3D scenes by just dropping in the models that you've scanned, throwing in an HDRI, adding any additional lights if you want, setting the composition, and calling it a day.

However, be mindful that your job is to be a grand collector and not a grand hoarder. What's the difference? Well, collectors collect only the things that they absolutely, really, really like. Hoarders, on the other hand, will put their hands on anything and everything. You want to make your asset library a high-quality place to get your assets from, not a garbage bin. Photoscanning will also give you the opportunity to step away from your screens and go out in the open, into nature, into museums, wherever. Whenever you want to take a time out, or if you're feeling like it, take your phone out and start scanning. It only takes two or three minutes if you do the scanning with the video method, and it yields phenomenal results. You'll start to casually scan whenever you go out, but then you'll find yourself taking time out, going out especially to photoscan that one object. When all of it comes together, I promise you, it's all worth it.

Moving over to the other side, you'll notice that we did not go very aggressively over the grill part of the object, because we do want to preserve this surface information. Had we used the flatten brush, it would have just wiped out that grill surface detail. Just like we treated the wheels on the other side, we'll treat the wheels here in the same way: using the smooth brush to smooth out the curved parts of the wheels, smoothing out the area between them, gently going over this sprocket as well, and then switching to the flatten brush, like we did previously, to flatten out the circumferential area of the wheels. We're just giving one pass with the flatten brush over the circumferential area, but if you feel that your mesh is particularly blobby, you can use multiple passes of the flatten brush to optimize that area. I do recommend not using the flatten brush at 100% strength, because that gives an all-or-nothing approach. If you keep the strength at about 20-40%, you'll be able to preserve some surface detail, and if needed, you can do a second pass, which will add on to that 20 or 40%. Now these left tracks are done. We keep eyeballing, and we identify that this track needs a bit of smoothing, so we do that. Then, using the smooth brush, we quickly fix the jagged edges underneath the tracks, giving them a quick smooth pass on the outside.
There we go, and we move on to the inside, just quickly going over that area, because it won't be very apparent in the final renders. Now we eyeball the model again and quickly inspect what areas need some revision. This is a good time to look at the deeply seated areas of the model which are likely to be overlooked. We identify this deeply seated area, which is the transition from the body of the model to the barrel. These areas can be relatively more under-generated, because if you're not careful during the capturing process, you might end up capturing less than ideal information in these deep areas. A good way to mitigate this under-generation is, during the capture, to gather enough data for the software to process. You can address this by moving closer to these areas and honing in on them. During the capturing process, be sure to capture some amount of parallax so that the software has enough data to generate a viable mesh. At this point, we're pretty much done with the sculpting stage of the process. Before moving on to the next step, we'll quickly check that everything looks fine.

The next stage is to set up the nodes for the texture maps, but before we do that, we'll switch to rendered view to see how the textures are looking at this stage and check for any warping introduced by sculpting and the use of the smooth brush. All looks fine, and now we'll start to set up the nodes for our texture maps. We'll zoom into the model in an area where we can see the texture details, so let's select this front part. Over in the window on the right side, we press Shift+F3 to switch it to the Shader Editor. In the Shader Editor, let's remove the prepopulated nodes that the shader already has, except the base color. We'll use this base color to make a normal map first. We'll do this by using a Bump node: we introduce the Bump node, plug the base color into Height, and plug Normal into Normal. We'll keep the Strength at 100% and use the Distance as the driving factor for the bump. 0.35 is a good number to start with, and you can dial it down according to your model. We'll toggle the strength to see how it looks with and without the bump, and to be honest, the bump gives much more detail than you would get otherwise with just the texture.

Next we'll move on to making the roughness map. Let's introduce a Color Ramp, plug the base color into the Color Ramp, invert the colors, and plug the color output into Roughness. Now, if we switch to the output of the Color Ramp, we'll see this distribution: the areas in black will be shiny and the areas in white will be absolutely rough. At this point, we'll dial in the roughness of the shiny areas. We do this by bringing the black a bit closer to white. It still has some gradient, but it's generally rough. We could also bake a metallic map, but since this model will be 100% metallic anyway, we can just pull Metallic up to 100% in the Principled shader. Seeing how the model looks from every direction with this simple node setup, we can see that the Bump node is doing its magic really well. We can also see that on top of the model, where we did not have access to shoot the video earlier, we do not have good texture detail; we'll fix that in the next part. After inspecting the model with this node setup, it looks a bit contrasty, so to begin with, we'll introduce a Brightness/Contrast node to decrease the contrast. One stop down looks about right for baking. We'll switch to solid mode.
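If you prefer to build this node setup with a script, here is a minimal bpy sketch of the same idea. It assumes the active object's material still has its default Principled BSDF and an Image Texture node holding the Reality Capture diffuse; the node names are Blender defaults and may differ in your file.

```python
import bpy

mat = bpy.context.object.active_material
nodes, links = mat.node_tree.nodes, mat.node_tree.links

principled = nodes["Principled BSDF"]      # default name, adjust if renamed
base_color = nodes["Image Texture"]        # the Reality Capture diffuse map

# Bump node: drive the normals from the diffuse, Distance ~0.35 as a start.
bump = nodes.new("ShaderNodeBump")
bump.inputs["Strength"].default_value = 1.0
bump.inputs["Distance"].default_value = 0.35
links.new(base_color.outputs["Color"], bump.inputs["Height"])
links.new(bump.outputs["Normal"], principled.inputs["Normal"])

# Color Ramp: inverted diffuse drives roughness (dark diffuse -> shinier).
ramp = nodes.new("ShaderNodeValToRGB")
ramp.color_ramp.elements[0].color = (1.0, 1.0, 1.0, 1.0)   # inverted ramp...
ramp.color_ramp.elements[1].color = (0.3, 0.3, 0.3, 1.0)   # ...black end lifted toward grey
links.new(base_color.outputs["Color"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], principled.inputs["Roughness"])

# The vehicle is metal, so push Metallic all the way up.
principled.inputs["Metallic"].default_value = 1.0
```

The Brightness/Contrast node used above to tame the contrast before baking can be added the same way with `nodes.new("ShaderNodeBrightContrast")` and wired between the image and the Principled base color.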
Duplicate the mesh with Shift+D, right-clicking to drop the duplicate at the same position as the original. Rename it to high poly and hide the ultra high poly mesh from the viewport for now. We'll put a Decimate modifier on it, aiming for about 100 to 150K tris from the original 5.3 million. If you divide 150,000 by 5.3 million, the answer comes to about 0.03. Setting this, we apply it, and we immediately see that a lot of micro surface detail has been lost. However, the triangle count has come down significantly, and we'll get those surface details back with the normal map. Now, because this decimated model has new topology, we will unwrap it for better texture baking, in Edit Mode, with Smart UV Project. And now we have an automatic unwrap. Arguably, the packing of the UVs could be a bit better, but it works fine as it is. Inspecting the model one last time before we commit to baking, let's give it a new material into which we can plug the texture maps that will be generated after baking. We'll call it Scorpion High Poly. Now we'll unhide our ultra high poly mesh and make sure it lies on top of our high poly mesh.

For baking, we'll use an add-on called SimpleBake. I cannot recommend this add-on highly enough: it generates all the texture maps in one click without you having to set everything up on your own. We'll select PBR bake. In the settings, I've made a preset for photogrammetry, but you don't need to do that; you can just follow along as I'm doing. Select the ultra high poly as the bake object, with "bake selected objects to target object" checked, and select the target object as high poly. We're baking the diffuse, roughness, and normal maps, so make sure these are checked and everything is set up at the back end, which it is. We don't need any other passes, so we'll skim over that. In terms of texture resolution, we're baking at 4K, so 4096 by 4096, and we'll set the export path to a subfolder called Textures within the parent folder for organization; this will be in a subfolder called High Poly 4K. Set the calculations to happen in the foreground, so it uses all the resources that we currently have, and then we click Bake.

It takes some time, and our texture maps are baked. Time to import them and place them within our high poly shader. To import all of the texture maps at once, press Ctrl+Shift, select all of the textures, and hit OK, and all of the textures will be imported and plugged into the shader. To see how it's looking, we'll switch to Material Preview. It looks a bit washed out because we do not have Metallic turned all the way up; when we do that, we have all of our textures with the details of the ultra high poly mesh. We can see that the normal map has preserved all of those micro texture details that were within the ultra high poly mesh and projected them onto the lower poly version. In Material Preview the bump looks exaggerated in Eevee, so we'll switch to Cycles to see how the textures really look in natural light. Seen from all angles, it looks like it's coming together really well. Fantastic. In the next and final part, we'll see how we fix the blurry and inadequate textures on top of the vehicle. I will introduce you to some more tricks along the way, so stick around and enjoy the process. Until then, farewell.
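Before moving on, here is a minimal bpy sketch of the duplicate-decimate-unwrap step from this lesson, including the 150,000 / 5,300,000 ≈ 0.03 ratio worked out above. It assumes the duplicated high poly object is the active object; the numbers are placeholders to tune, and SimpleBake itself is driven from its own panel rather than from this script.

```python
import bpy

hp = bpy.context.active_object      # the duplicated scan, renamed "high poly"

# Aim for ~150k tris out of ~5.3M: 150_000 / 5_300_000 ≈ 0.028
source_tris = 5_300_000
target_tris = 150_000

dec = hp.modifiers.new(name="Decimate", type='DECIMATE')
dec.ratio = target_tris / source_tris
bpy.ops.object.modifier_apply(modifier=dec.name)

# The decimated topology is new, so give it a fresh unwrap before baking.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project(island_margin=0.001)
bpy.ops.object.mode_set(mode='OBJECT')
```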
4. Texture Optimisation - Blender: Welcome back. In the last part, we worked on optimizing the model and baked texture maps from the ultra high poly to a high poly model. In this part, we're going to repair the textures in the areas where they weren't generated in the first place, because of the restricted access we had while shooting the video of the object. We'll then save those new texture maps and bake them onto a low poly model, so that the low poly model has almost the same texture detail as the ultra high poly one. Stay with me till the end and I'm sure you'll pick up a few tricks here and there. Let's begin.

This is where we left off. We'll do some organization before we begin: we'll delete this ultra high poly, as we don't need it, we'll delete this cutter collection as well, and this default collection is not needed either. Nice and simple, just our high poly model. We'll switch to Material Preview and go to Texture Paint. This workspace is where we'll do the bulk of our work. Let's switch to flat lighting, because we only need the diffuse map without lighting affecting it. Let's now identify some problem areas in terms of texture. This back part needs some doing, the side looks all right, and the top in particular needs a lot of work to bring it closer to the rest of the texture quality. From the panel on the left side, we'll use the fourth tool, the clone stamp tool, as our main driver of texture painting. This paints areas by sampling the part of the texture where the 3D cursor is placed. We left-click to designate the area that we'll be sampling from, and then we start painting. It's as simple as that, at least on the surface. We'll keep changing the position of our 3D cursor so that we're sampling the part of the texture that lies in the vicinity of the area being painted on.

If you've worked with the clone stamp tool in Photoshop, this should be a walk in the park for you, but with a twist, because we're working in 3D. There's one important thing that you need to keep in mind about the sampling site while you are texture painting: the clone stamp tool clones the area under the 3D cursor based on the perspective that you're looking at it from. If the perspective you're viewing the sample area from is skewed a little, the area that you paint on will inherit the same skewed textures. That's something you might want sometimes, but sometimes you don't, so it's an important thing to keep in mind, and it's the reason we'll mostly be painting in the orthographic views: top, bottom, left, right, front, and back. An extension of the same thing is that you need to keep the area under the 3D cursor within your view; only then will you be able to sample it. Another thing that needs mentioning is that the sample area needs to be on the same model. Cloning textures won't work if you have your 3D cursor placed on a different object. This means that you cannot clone a texture from one model on top of a texture on another model. If you don't get it, don't worry, because we will see the practicality of this shortly.

Essentially, what we're doing here is copying textures from areas which are crisp and then painting those texture patterns onto the areas which are not as crisp as we would like. There are a few tips and tricks that we'll go over during the painting process in order to achieve a fairly good result. It also comes down to skill and practice, so you'll get better as you get the hang of it. Like we did in the first part, during sculpting, we chose one part of the model and then progressively moved to the adjacent parts. We'll do the same here.
We've selected the back side of the model and we'll work our way forward. In the first pass, we'll have the strength of our brush at 100%, and we'll place the 3D cursor over an area whose texture closely resembles the area that we're painting on. Now, we keep moving the 3D cursor to prevent it from looking homogeneous, and when painting over the same area, we use different places as our samples. This mixes up textures sourced from different places, and the result has a much more heterogeneous look to it. Now, as I said earlier, the sampling sources the textures from the view angle that you're looking at, and there's only so much texture from the same viewing angle that you can clone before it starts distorting. Sometimes you want to clone a texture from the other side of the model that you can't see in the same view. The solution is to go to Edit Mode, select the entire mesh, and duplicate the entire thing. Now you can rotate this duplicate to any side. We'll put the right side up to source the textures from that side, see if it aligns well, and check that the texture that we're about to source is in the same viewing angle. We'll go back to Object Mode, and then you can place your 3D cursor on top of the duplicate and source the texture from it.

This part will again need continuous examination of the model to see where we can source the textures from and what textures will be appropriate where. Once the texture is cloned, we'll see the general pattern of how the texture is looking at that point. So we got rid of that originally cloned line, because it didn't make sense at this point. Once the area has got a general texture from a different area of the model, we can use the adjacent cloned texture to refine and outline it, like we're doing here. The slightly blurred outline is good in the sense that it will help us blend in the adjacent textures. The storage bin on the APC should have the same texture as the storage bin on the other side, so we'll source our texture from the other bin and try to match it as best as we can, changing the source to the middle part and then continuing our painting, just like so, on the edges. You have to be a bit careful not to bleed onto the other side. We'll zoom in, keeping the cursor in our view, and carefully paint the boundaries. That is done; we examine the area and see what other area needs doing.

Adding some texture here. Now, this area is more homogeneous than we need it to be. First we'll define the boundaries and then we'll sample a texture that introduces some variation, like so. We can always experiment with different places for sourcing the texture, and during this refining phase, we'll turn down the strength a bit. We do that in order to mix the new textures with the old ones, but when we're painting a new area, we'll set the strength back up to 100%. Here we're trying to paint the top of the storage bin by sampling the side of it, and as we can see, we can't get a convincing result painting over that. We'll try different sampling sites, but that does not seem to be working in our favor. When that happens, you can partially paint it and leave it for the time being, then come back to it later, once the adjacent parts are painted, so that we can source textures from there. So we've done a somewhat passable job here, but we'll move on to the next area, which is the central part of the turret.
Let's inspect the model and identify the area where we can get the texture from. For this part, we'll use the light brown back side as the source, and for that we will move our duplicate mesh near the area that we're going to paint on, place our 3D cursor there in the same view, and paint a part of it. We'll then move the cursor to a different area and use that as a source to add in more texture. After we've cloned this texture, we can source a different area for the crevices. We can try different areas to keep it heterogeneous. You don't have to be perfect, but it should make a little bit of sense where you're painting and what you're painting. This area at the front of the turret does not have good textures either, so we'll take care of that as well: sourcing from the adjacent areas, mixing and matching, using different brush strengths, and cloning textures from different parts of the mesh by duplicating it. These are some of the things that you will end up using quite a lot in this workflow. You want to be sourcing texture that is not flat-looking but has some sort of detail in it. Here we're trying to make a smooth transition from one color to the other, painting at low strength as we approach the meeting point and bringing the strength to 50% in order to mix these textures. These are extra features here that we don't need in this place, so we'll get rid of them, using the texture on the front plate as a source to paint on the top and sampling nearby areas to get rid of these dark, unwanted areas. Mixing the textures from surrounding areas and moving on to the next part, again using the front plate as the source, cloning the texture onto the top of the turret and the side.

Inspecting the model for where we need to paint next, the crevice between the front plates needs a bit of tidying up. We'll use the bottom plate as a source and paint the top one, roughly following the painted areas: the black area with black, and the brown area with the brown source. Tidying that up and leaving behind some imperfections. This area shows some grime deposition, so we'll leave it and just tidy up from the top. The texture on the right-side balance weight is a bit blurry, so we'll sample the texture around it and paint on top, again trying combinations, mixing and matching until it looks just right. We'll save the work on the texture that we've already done via Image > Save. Very importantly, whenever you work in texture painting, be sure to save your textures periodically. Oftentimes it gets overlooked, and then you've put in half an hour of work and it gets lost, and it's painful. Make a habit of pressing Alt+S while painting; whatever you do gets saved as you go.

There is another crevice, between the grill and the body, which did not receive any texture information because we could not get to it during our data acquisition. Now is a good time to address the textures of this area. We can clone texture from the front of the vehicle and quickly go over it. We're not too bothered with areas that are within crevices and within the confines between two surfaces, because these aren't areas that get seen very often in renders. Nonetheless, we should make an effort to make the texture look as good as it can. Here at the front, these radiator lines could be made better. We can clone from the bottom part, which is straight, and we'll do the same on the engine intake grill as well, being a bit careful there. All done.
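On the point above about saving your textures periodically: the tiny bpy sketch below saves every image that has unsaved paint strokes, which is roughly what the Alt+S habit does for the image you are currently painting. It assumes the painted images have already been saved once, so they have a file path to write back to.

```python
import bpy

# Save every image datablock that has unsaved edits and a known file path.
for img in bpy.data.images:
    if img.is_dirty and img.filepath:
        img.save()

# Or, in a single operator call:
# bpy.ops.image.save_all_modified()
```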
You can also go a step further and paint using an image as a stencil, but this would require having an accurate image of the model from real life, and getting a suitable image might be tricky, so we'll stick to our trusty texture painting. Now we'll paint the front right of the vehicle and see where we can source the texture from. Let's move the duplicate mesh aside to give us some working space. There we go, and we'll identify a place where we can source the texture from, again filling in those crevices where the texture fidelity isn't that good. For starters, the surrounding areas are a good place to source textures from; if that doesn't work, we can look for other places. You'll notice that we're not spending a ton of time perfecting a single area, because this process is all about churning out good, usable assets in the shortest amount of time possible while tackling the real-life limitations that we can have during the acquisition process.

We'll inspect again which areas need our attention. This front bit seems to be a bit off and we can't seem to paint over it; we'll take care of that shortly. For now, let's paint out the flag so that the vehicle can be used universally, again using the surrounding areas as the source. Whenever you see that the painting is a bit splotchy, you can again sample different areas to mix it up. Here we will draw a continuation of the bright streak of paint that you see coming in from underneath, and on the left side, the darker paint can be given a continuation. Once we've painted the area from different regions, we'll zoom out a bit, decrease the strength, and give it one or two gentle strokes. There we go.

Now let's address this unpaintable area. There are a few triangular areas on the left side of the model as well. If we go to Edit Mode and select this part of the mesh, we can see that the mesh is actually intersecting; moving it out of the way solves the problem. On the left side of the model, the geometry is actually missing, and we can see through these triangular areas. What we can do is create new geometry for this. We'll switch to vertex mode, select these vertices, and press F; that gives it a face. We'll do the same thing for the other big triangular mesh defect: select the vertices and press F to give it a face. There are a couple of stray triangles here as well; we'll give them a face too.
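As a scripted aside on this fill-a-face step (select the vertices around a hole and press F), the sketch below does the same thing from Blender's Python console using bmesh's contextual_create, which mirrors the F key. It assumes the repaired mesh is in Edit Mode with the hole's vertices selected; the UV placement that follows is still easier to do by hand in the UV editor, as in the video.

```python
import bmesh
import bpy

obj = bpy.context.edit_object           # the photoscanned mesh, in Edit Mode
bm = bmesh.from_edit_mesh(obj.data)

selected_verts = [v for v in bm.verts if v.select]
if len(selected_verts) >= 3:
    # contextual_create is the bmesh equivalent of pressing F:
    # it builds a face (or edge) from the selected geometry.
    bmesh.ops.contextual_create(bm, geom=selected_verts)

bmesh.update_edit_mesh(obj.data)
```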
Then let's try and texture paint on them. We switch to Texture Paint mode, put the 3D cursor on the source, but we still can't paint over those faces. The reason is that this new geometry does not have any UVs for us to paint on, so we'll make UVs for it. We'll identify and select the newly created faces by going into face select mode, selecting one face, Shift+selecting the second, and Shift+selecting the third. Press U and project. Now we have the UVs of these triangles without affecting the UVs of the rest of the mesh. We'll scale them down to match the texel density of similar tris in the mesh, and then move them to an area in the UV space where the textures correspond to those present in the area surrounding these triangles. We can see that the texels within this triangular area are still too large. We can either scale them further down, or find a different area in the UV space, one which is relatively blurry, to match the surroundings, but we resort to scaling them down even further to match the texel density. At this point, this seems just about right. We'll then move them to the side, over the brown area, so that they show up with a brown texture, rotating them a bit to match the grain of the texture, and deselecting. If you don't look at it too closely, that looks all right. Alternatively, what you can also do is place those triangular UVs in the black area of the texture map, where there are no other UVs. Once placed there, you'll then be able to paint on top of them, just as we were doing before. Either way, this is to give you multiple ways of troubleshooting your mesh and of handling problems related to the mesh and textures.

What we're essentially doing is stealing texture from one part of the mesh and painting it onto another part, and this reminds me of a lovely book that I recommend you all read. It's called Steal Like an Artist. It's a small book that tells you, in palatable chapters, how to think creatively and sustainably with an artistic mindset. One of my favorite ideas in the book is productive procrastination: the idea that you're procrastinating on something, but you're working on one of your side projects, something that you're really passionate about, something you're really interested in. In our case that's photoscanning, and it comes with the added benefit of building your asset library. You can be working on multiple productive-procrastination projects at one time, so if you get bored of one project, you can hop between projects, and in the end you'll have your library. There's another idea for artists who are struggling with idea generation. If you struggle to generate ideas for your 3D renders, know that every idea is just a mash-up of previously existing ideas. What this means is that you can copy ideas from different renders, which is basically what referencing is, and then add your own touch to them. The direct derivative of this concept is that you should have an inspiration folder on your hard drive: whatever 3D renders, animations, or models you come across, screenshot them, download them, whatever, keep them saved. This way you'll have a bank of references, and what's good is that this bank is tailored to your own likings, an inspiration folder of whatever you came across in the past and particularly liked. This will also help develop your niche. In 3D, you will find yourself gradually gravitating towards certain rendering styles, animations, certain cinematic styles, color grades, and you'll want to replicate that look in your own renders. That's how your niche slowly but surely starts developing. You'll add your own touch to it, and lo and behold, the most original and authentic render of its kind. On top of that bank of inspiration and references, you will be a massive idea-generation machine.

Now, coming back to our model, we're painting the front part of the vehicle, just behind the radiators. The texture does feel a bit homogeneous, but remember, this is our first pass and we're just crudely populating the texture to cover every blurred texture area. Once these blurry texture parts are covered with some sort of detail, we can then move on to introducing some variation in the texture. It can be a different color, a different texture pattern, whatever breaks the texture up. In this instance, we're adding a different color that breaks the texture up, adds some variation, and has a similar degree of roughness and grime in it. We've sampled this from the back side of the vehicle.
And this reiterates that no matter the size of the texture source, if it looks like it belongs to that part of the object, it will work. Notice again that we've decreased the strength of the brush to mix the textures up. On the top side of the radiators, the outlining can be made a bit better, so we'll sample the bottom side and just clone it to the top: that's one side, that's the other, and it makes it a bit more defined. The area above the engine intake is not defined either; we'll put our 3D cursor there and clone this as well. Inspecting again, like we do every now and then, we see there is a streak that we can just get rid of right here. We'll also tidy up the area around the black part here to paint out the spilling textures. To add a bit more detail to the front part, we'll identify the areas in the duplicate mesh we can grab texture data from. We'll look around; the gunner's hatch here has just the right amount of detail that we can add to the front part. That is done. The texture at this point might look much less detailed than you want it to, but keep in mind that this is only the diffuse texture; when you add in the normal map and roughness map, they will give that micro detail to the texture and it will all come together really nicely. We're now looking for an area we can clone texture from for the generator compartment in the front. You can expect that painting in the crevices can be a bit tricky, especially between two acutely angled facets, just as we have here between the turret and the body of the vehicle. While painting in these crevices, keep in mind that you should not paint with bright colors, because dust and grime naturally tend to accumulate in the interface between two surfaces, and we don't want to get rid of that, since it gives us that natural environmental erosion and aging. In summary, when painting in crevices, sample dark textures. At this point we're mostly done with the first and second passes of our texture painting. In the third, brief pass, we'll just sample a few areas randomly and lightly dab textures on here and there, in order to further break up the textures and have less uniformity. Be sure to be gentle in this pass, because you don't want to undo the hard work you've put into the first and second passes. You can easily get carried away by trying to add in more detail when in reality you're taking detail away. If during this pass you see that the textures are getting blurred again, you can always press Ctrl Z and revert to the previous state. Make sure you keep saving the image by pressing Alt S or going to Image and Save. We'll paint the mouth of the main gun black, clicking instead of dragging to sample just one area under the 3D cursor; we don't need much variation in this area because that's just a cover at the end of the gun. We'll now re-examine how our model looks at this stage. We'll keep the camo patch on the generator area, because that just gives another level of variation. At this point, the duplicate mesh has done its job, so we'll get rid of it: X, delete the vertices. We go back to Object Mode and give it a final look to see if there's any area that needs some work. Looking around, we can identify a small patch on the top that looks a bit blurred, so we'll shift back to texture paint mode and paint over it using the surrounding areas as the source texture, just briefly going over them.
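The "keep saving the image" reminder matters because painted textures live only in memory until they are written to disk. As an optional safety net, here is a minimal bpy sketch that saves every modified, unsaved image in the file; the output folder is a placeholder and assumes the .blend has already been saved.

```python
import bpy
import os

# Placeholder output folder next to the saved .blend file
out_dir = bpy.path.abspath("//textures_painted")
os.makedirs(out_dir, exist_ok=True)

for img in bpy.data.images:
    if img.is_dirty and img.has_data:          # painted on, but not yet saved
        if img.filepath:                       # already lives on disk: overwrite it
            img.save()
        else:                                  # created inside Blender: give it a file
            img.filepath_raw = os.path.join(out_dir, img.name + ".png")
            img.file_format = 'PNG'
            img.save()
        print("Saved", img.name)
```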
We're not too bothered at this stage; giving it another look, it seems our texture is now ready. Be sure to save the texture with Alt S. This is how the material looks in Material Preview mode with all texture maps combined; the preview exaggerates the normal map, or bump, a bit. We'll shift to Cycles and see how the textures are looking there, looking around the model to identify any anomalies in shading, modeling, topology or texturing, anything at all that we can repair at this stage. Now, if you remember, the underside of the model was left open due to the Boolean cleanup operation, and now is a good time to address that as well. Before we prepare the model for export, we'll add in a plane and bring it down to match the bottom plane of the APC. We'll reposition it so that it sits in the center, refining the position a bit. Once that is done, we can scale it up to match the bottom part of the vehicle, making sure that it does not cross the confines of where it should be. We'll take a look at how it looks from underneath and adjust the size a bit if necessary. There we go. Then, from the side view, we move it to the center and scale it on the Y axis, again refining it a bit, just like so. We won't scale it all the way, because we will add an area of transition as the next step. For that, in Edit Mode, let's go to edge select mode and select the edge. Press Alt Z for X-ray view to see through what we're doing, and extrude that edge a bit higher and a bit angled to meet the body of the vehicle. We'll keep switching between perspective view and right orthographic view in order to calibrate the exact position of this edge. Let's switch back from X-ray view. We can bring it down a bit on the Z axis to match the body of the model. The edges are showing up on the side, so we'll scale it down on the X axis, and that takes care of it. This all looks good, so now we'll go to the back side, select the edge, and bring it a little closer to the back. Once that looks good, we'll extrude an edge from here as well and move it up on the Z axis so that it lines up correctly with the body. We look for any gaps, which we do see here, so we'll move it back a little on the Y axis; the edges are peeking out again, so similarly we'll scale down the edge on the X axis. To give a smooth transition, we'll press Ctrl B and bevel both edges, Ctrl B again, and scroll for the number of segments; four will do just fine. Go back to Object Mode, select the plane, select our APC, and press Ctrl J to join it all together. We'll now go to Edit Mode again, press L to select the plane, and go to UV Editing. Normally we would project it and place it somewhere in the UV space for it to inherit textures, like we did with the triangles, but here, by default, it looks how we want it to, so we won't bother meddling with its UVs, also because the underside of the model won't really be visible in the final renders; it's just there to aesthetically complete the model. As a final step before we convert the model to low poly, we'll look around one last time for any inconsistencies in the model, including the mesh, the shading, the textures, and all the maps. When that is done, we'll duplicate the mesh and move it to the low poly collection. There we go: hide the high poly collection and rename the duplicate to denote that it's the low poly. Add in the Decimate modifier and then give it a new material, which will be based on the new UV unwrap of the decimated mesh. We'll name it Scorpion Low Poly.
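If you would rather script this duplicate-and-decimate setup, here is a minimal bpy sketch of the same idea; the object and collection names are placeholders, and the 0.15 ratio is simply the value used in the next step.

```python
import bpy

hi = bpy.data.objects["Scorpion_HighPoly"]           # assumed name of the scan

# Make a real duplicate (separate mesh data, not a linked copy) and rename it
low = hi.copy()
low.data = hi.data.copy()
low.name = "Scorpion_LowPoly"

# Put the duplicate into its own "LowPoly" collection, creating it if needed
coll = bpy.data.collections.get("LowPoly") or bpy.data.collections.new("LowPoly")
if coll.name not in {c.name for c in bpy.context.scene.collection.children}:
    bpy.context.scene.collection.children.link(coll)
coll.objects.link(low)

# Add the Decimate modifier at the ratio used in the class (~23.6k tris from 5.3M)
dec = low.modifiers.new(name="Decimate", type='DECIMATE')
dec.ratio = 0.15

# Give the low poly its own material, ready for the new baked texture set
mat = bpy.data.materials.new("Scorpion Low Poly")
mat.use_nodes = True
low.data.materials.clear()
low.data.materials.append(mat)
```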
Now we'll proceed to decimate the mesh. We'll aim for a poly count of 20 to 25 thousand tris, and a ratio of 0.15 brings us to that level: roughly 23,600 faces, which is right in the ballpark. Just for the sake of comparison, we'll give it the previous material to see how it's holding up under this decimation. We can see that there are certain areas where the texture just cannot hold up to the decimated geometry; see, the line here on the storage compartment is skewed. We'll fix these issues with baking. We apply the Decimate modifier, and in Edit Mode we can see how simplified the mesh has now become from the original 5.3 million tris. We could unwrap this again for a bit more texel density, or we can just go ahead and bake our textures now, and we'll go with the second option. We set up our simple bake by turning on both instances of the model, with bake selected objects to target object turned on. We select the target object as the low poly, set the texture resolution to 2K for this low poly version for good usability, and edit the export path to low poly 2K. All three maps are selected and we click Bake. It takes a little time to bake the 4K textures down to 2K. We'll now select our low poly mesh and give it the material that we intended to. Disable the high poly model from view, press Ctrl Shift T with the Principled shader selected, and select all our baked texture maps. Let's go to rendered view to see how it's looking. Immediately you can see that the line is restored to normal, the other small texture discrepancies are gone as well, and the model is holding up really well. The model is made of metal, so metallic goes all the way up; we tinker with specular a bit, but it looks fine as is, so we'll just leave it be. We'll save our hard work, and congratulations, we have successfully brought a real life object into 3D while facing real life limitations. You can see that it renders fast, and it's just a delight to see. You can make it a part of your asset library, where it will live forever for any project you want it to be a part of: your very own Scorpion APC. To make it presentable, we'll enable both the high poly and low poly meshes, align them side by side so we can see them both together, and set up a front view so it's all nice and tidy, just like that. We'll import the text that we wrote for our free Ferret model, copy it, paste it here, and edit it: the Scorpion was developed in the 1970s for advanced reconnaissance. It's also a light tank if you want to add that, but we'll stick to vehicle here. Make sure that the high poly and low poly are correctly set up, and set the view angle so it greets the viewer nicely when they open the file. Be sure to grab the free resources; I've provided the video data for you to practice with. When you do, share your renders with me, I'd be absolutely thrilled to see them. With this, thank you so much for tagging along. I wish you well, and farewell.
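The class does this bake with an add-on's "bake selected objects to target object" option; purely as a rough equivalent, here is a minimal sketch of the same high-to-low projection using Blender's native Cycles bake, so the object names, image name, resolution, and output path below are assumptions rather than the add-on's settings. It bakes only the base color; roughness and normal would be separate passes.

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.bake_type = 'DIFFUSE'
scene.render.bake.use_pass_direct = False        # colour only, no baked lighting
scene.render.bake.use_pass_indirect = False
scene.render.bake.use_selected_to_active = True  # project high poly onto low poly
scene.render.bake.cage_extrusion = 0.02          # small offset to catch the surface

hi = bpy.data.objects["Scorpion_HighPoly"]       # placeholder names
low = bpy.data.objects["Scorpion_LowPoly"]

# The low poly material needs an active Image Texture node as the bake target
img = bpy.data.images.new("Scorpion_LowPoly_BaseColor", 2048, 2048)
nodes = low.data.materials[0].node_tree.nodes
tex = nodes.new('ShaderNodeTexImage')
tex.image = img
nodes.active = tex

# Select high poly plus low poly, make the low poly active, then bake and save
bpy.ops.object.select_all(action='DESELECT')
hi.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low
bpy.ops.object.bake(type='DIFFUSE')

img.filepath_raw = bpy.path.abspath("//low_poly_2k_basecolor.png")
img.file_format = 'PNG'
img.save()
```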