Digitize Your World: Free Photogrammetry with 3D Zephyr | Nikolaus Frier | Skillshare


Digitize Your World: Free Photogrammetry with 3D Zephyr

Nikolaus Frier, 3D Modeler, Engineer, Maker

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more


Lessons in This Class

10 Lessons (31m)
  • 1. Introduction (1:38)
  • 2. Go Get the Software! (1:48)
  • 3. Requirements: Squashing & Confirming (2:42)
  • 4. Let's Get Started: Navigation (3:02)
  • 5. Getting Photos (4:36)
  • 6. Your First Reconstruction (2:03)
  • 7. If at First You Don't Succeed... (5:23)
  • 8. The Dense Point Cloud (6:34)
  • 9. Your Mesh! (1:36)
  • 10. Recap (1:30)

Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

119 Students

1 Project

About This Class

What Will You Get Out of This?

This class teaches the basics of both photogrammetry and 3DF Zephyr (a free software package), focusing on a first-time user's perspective and going step by step through what you will see while working on your first project. You'll also learn some of the basic terminology associated with photogrammetry, what kind of equipment you'll actually need, and some best practices for taking your photos.

Who's This Class for?

This class is for beginners with 3DF Zephyr, showing a step-by-step process of how pictures are combined to become a 3D model of a real-world object.

Whether you're a traditional artist, digital modeler, or even a game designer, I think this class, and photogrammetry as a whole, will interest you.

Meet Your Teacher

Nikolaus Frier

3D Modeler, Engineer, Maker


Hi there! My name is Nik. I'm a mechanical engineer who graduated from the University of Missouri - Columbia (AKA Mizzou for college sports fans). In my time there I was a TA for a 3D printing class, in which I learned a lot of useful skills when it comes to teaching and preparing classes. I hope to use this knowledge to share some of my passions.

I have many passions, and I really enjoy learning and sharing them as well. That's what I hope to do here on Skillshare!


Class Ratings

Expectations Met?
  • Exceeded!: 0%
  • Yes: 0%
  • Somewhat: 0%
  • Not really: 0%


Transcripts

1. Introduction: Hi. My name is Nikolaus Frier. I'm a mechanical engineer by trade and an entrepreneur and maker by choice. In this class, I'm going to get you up and running with a photogrammetry software called 3DF Zephyr. I'm gonna show you how to actually navigate the program, how to start your first project, and what the principles behind taking photos for photogrammetry are, as well as step you through the rest of the way through your first reconstruction. That will get you started, so you understand enough to poke around on your own, with enough knowledge to ask your instructor questions about what you don't know. Let's get started. What is photogrammetry? Photogrammetry is the use of photography in surveying and mapping to measure distances between objects. But in this class, the definition of photogrammetry is more like: we're using photos taken around an object to recreate a 3D model of that object. What can you use photogrammetry for? You can use it for reconstructing models to print on your 3D printer, you can use it to give you a good base to start 3D modeling your own objects, and you can also use it for architectural studies as well as game design. As you can tell, in this class we're going to be focusing on 3DF Zephyr. It is a photogrammetry software with a free option, and it is one of the more widely available ones. I chose 3DF Zephyr myself because, at the time, it was the most accessible to me with the limited resources that I had, which points to one of the great things about 3DF Zephyr: it has some of the lowest minimum requirements to get started.

2. Go Get the Software!: Let's talk about getting the software. I don't want to go too in depth about how to actually go to the website, download it, and install it, but I do want to briefly mention that there's an alternative method, rather than just going to 3Dflow's website and downloading 3DF Zephyr. If you have Steam on your computer, you can actually go to the Steam store, as you can see on my screen, and download it there. Note, though, that it's gonna offer you to buy it for $199; to get the free version, which is also available on their website, you need to scroll down a little bit until you get to the download demo. Going to the Steam store might be a more viable option for some people if, one, you already have Steam, and two, you don't want to have to worry about updating the software every time, or every couple of times, that you open it. But with that said, now I want to talk to you about the different tiers that 3DF Zephyr offers. If you're wanting to go through Steam, you really only have two options: the free demo and the 3DF Zephyr Lite version. Note that going through Steam, 3DF Zephyr Lite will actually cost you an extra $50 or so. That said, let's talk through the differences. With 3DF Zephyr Free, you can get the full 3D reconstruction, but you're limited to 50 photos, and you can only ever use a single NVIDIA graphics card. With 3DF Zephyr Lite, you're allowed up to 500 photos, and you can use dual NVIDIA GPUs. With 3DF Zephyr Pro and 3DF Zephyr Aerial, you have unlimited images and unlimited NVIDIA GPU support, but those are a lot more costly. So I think most people are probably gonna want to stick with the free version, and if you're actively using it, then you can move up to 3DF Zephyr Lite like I have.
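Before importing, it can be handy to count the photos in a folder against the tier limits described above. Here's a minimal, hypothetical Python sketch; nothing like this ships with 3DF Zephyr, and the folder name is a placeholder:

```python
from pathlib import Path

# Photo limits per tier as described in the lesson; None means unlimited.
TIER_PHOTO_LIMITS = {"free": 50, "lite": 500, "pro": None, "aerial": None}

def check_photo_count(folder: str, tier: str = "free") -> None:
    exts = {".jpg", ".jpeg", ".png", ".tif", ".tiff"}
    photos = [p for p in Path(folder).iterdir() if p.suffix.lower() in exts]
    limit = TIER_PHOTO_LIMITS[tier]
    if limit is None or len(photos) <= limit:
        print(f"{len(photos)} photos: OK for the {tier} tier.")
    else:
        print(f"{len(photos)} photos: remove {len(photos) - limit} "
              f"to fit the {tier} tier's {limit}-photo limit.")

check_photo_count("my_photos", tier="free")  # "my_photos" is a placeholder path
```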
3. Requirements: Squashing & Confirming: So what is actually recommended by 3DF Zephyr and 3Dflow? They recommend that you have 16 gigabytes of RAM, a quad-core processor, and an NVIDIA graphics card. And the rest of the photogrammetry community recommends that you have a DSLR camera to get the best quality reconstructions. But all of this can be very daunting and can cost you quite a bit of money, especially for something that you're just hoping to use free software for. What can you actually get by with? You can get by with four gigabytes of RAM, a dual-core CPU, no graphics card whatsoever, and a phone camera, if your camera is from 2015 or newer. What are you giving up with the RAM difference? Technically, 3DF Zephyr does say you need a minimum of eight gigabytes of RAM. This isn't true: I can use it, and have used it, with four gigabytes of RAM, so no worries there. The main thing that you're giving up here is speed, but you're also giving up the quality of the photos you can use in your reconstruction, because the software uses RAM to temporarily store and process data. If you have really large images, you're quickly going to hit your RAM quota of four gigabytes; with 16 gigabytes, you can use higher quality images, you can use more of them, and it will take less time. With the dual-core versus quad-core CPUs, what do you give up? Again, it's mostly time, but it is important to note that some CPUs are better than others, because 3DF Zephyr uses specific technology in some CPUs for a kind of pre-processing photo recognition. So if you don't have a new enough or good enough CPU, it's going to take you even more time.

What are you giving up with onboard graphics, or even an AMD graphics card, versus an NVIDIA graphics card? Well, simply put, 3DF Zephyr uses a technology that only NVIDIA graphics cards have (CUDA) to speed up and get better reconstructions. So if you don't have an NVIDIA graphics card, it's gonna cost you time. It's a recurring trend: yes, you can use lower-spec or un-recommended items, but you're going to sacrifice your time, so just be aware of that. If you're starting out and you don't have all of these recommended things, it could take you quite a bit of time to get your reconstructions. With a phone camera versus a DSLR camera, you're giving up quality of reconstruction, because when the quality of the images is lower, the software has fewer points and less definition to actually recreate your object. But don't worry: you can still get very good reconstructions with just a simple phone camera.
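To make the RAM point concrete, here's a rough back-of-the-envelope sketch. This is my own illustration, not 3DF Zephyr's actual memory model (the real pipeline is more complicated), but the arithmetic explains the trend: an uncompressed RGB photo needs roughly width × height × 3 bytes in memory.

```python
# Rough illustration of why large photos eat RAM (not Zephyr's real memory model).
def batch_ram_gb(width: int, height: int, num_photos: int) -> float:
    bytes_per_photo = width * height * 3  # one byte each for R, G, B
    return bytes_per_photo * num_photos / 1024**3

# 50 photos from a 12 MP phone camera (4000 x 3000):
print(f"{batch_ram_gb(4000, 3000, 50):.1f} GB uncompressed")   # ~1.7 GB
# 50 photos from a 45 MP DSLR (8192 x 5464):
print(f"{batch_ram_gb(8192, 5464, 50):.1f} GB uncompressed")   # ~6.3 GB
```

The second batch alone would blow past a 4 GB machine, which is why smaller phone photos are the safer starting point on minimal hardware.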
4. Let's Get Started: Navigation: Let's get started with 3DF Zephyr. You have a couple of options when it comes to manipulating the screen with your mouse. Your right click will allow you to rotate around a certain point; depending on where your mouse is when you right click, that's going to be the point that it tries to rotate from. Your scroll wheel, like normal, is your zoom in and zoom out. If you click on it, however, it will let you pan; so click in the scroll wheel and drag. And then your right-click options are going to be your menus: the cameras let you look around, and the camera rendering will help you figure out some of the different colors and such. I don't really use it all that often, though. The bounding box will be useful once you start getting into your first project.

For the main menus, we have File. This is how you can open, save as, and import things; you know, the generic file options. You have Tools; the main part of the Tools menu that I use is the selection. Once we get our project started, we'll actually go back into the selection and use it to help tidy up our reconstruction. You can also use Options; this is where you can enable some of the different presets and make sure things are showing in the colors and everything that you prefer. I don't really mess with them too much, though; I just use the defaults, because I'm easy and I don't like to play around too much with that. You have your Export, which is where you'll actually get your final 3D model from when we're all done. And you have your Workflow; this is probably the most important menu, because this is how you're going to start and work through your project, or reconstruction, all the way through. You have New Project; that's getting it started. It's going to create a sparse point cloud, kind of like a rough idea of what your 3D model will be, and it's just trying to make sure all the cameras are lined up properly. You have your dense point cloud, which is going to populate a lot more points around your cameras, or your sparse point cloud, to really fill out a 3D model. Mesh extraction is where it's going to connect all of those points from the dense point cloud to get you a model, and the textured mesh generation is going to actually apply the color to your mesh so that you can use it for architectural studies and game design. Note that in the free version, you will have to go all the way through to the textured mesh generation in order to get your model, and you can convert it a couple of ways with a couple of different free software options if you just want to use it for 3D printing. So let's go ahead and get started and click New Project. We'll click next, and it prompts you for pictures. Yep, that's right; we need to go start taking pictures. Before we do that, though, let's talk about some of the best practices for capturing images.
5. Getting Photos: Now on to capturing your photos. I'm sure you're quite capable of taking photos, but I want to go over some key points that you need to understand when taking photos for photogrammetry. We're going to talk about object theory, background theory, lighting theory, and finally photo-taking theory. So, for object theory, you want to make sure you select an object that has a great amount of detail and has some different colors and texture to it. This yellow, all-the-same-texture cup is not going to turn out well; it's gonna be an amorphous blob of yellow in your reconstruction, and you won't even be able to tell that it's a cup. So avoid something that is smooth and all the same color. You also want to avoid things that are shiny; this PS4 controller is extra bad, because it's mostly the same color, it's mostly smooth, but it's also shiny. As you can see while I rotate it, light is actually bouncing off at different spots. This is going to confuse the photogrammetry software and really mess it up. Lastly, you want to avoid transparent objects; the software won't be able to tell what's in front and what's behind, it's going to get extra confused, and it's really not gonna pick up any of that detail. So now that you know what not to capture, here's an example that I think will turn out really well and will be the sample for this class. This is a wooden statue of a lion that's kind of like those 2D pop-out puzzles, and it has a bunch of different texture. The wood itself has slightly different grains, and it's all slightly different colors, meaning that the software will be able to pick up very identifiable and unique points on it.

Next, we're going to talk about background theory. There are two main things with background theory. One is you don't want something that's all the same background color. Some photogrammetry people will recommend this, but I haven't personally had luck with it, so I don't suggest that you try it. You want something that has a little bit of detail and a little bit of texture, so that you kind of have a backup plan for the photogrammetry software to still pick up on and be able to align your photo with the rest. Secondly, with background theory, you need to make sure that you have a static background; that means non-moving. You don't want to have, say, a plate in one shot, and then, as you rotate, you move that plate each time; it's really going to mess up your shots. Same with having people walk around in the background, or cars. You want a nice, static, consistent background that's not going to confuse the software.

Moving on to lighting theory: you want to make sure that you have a lot of light and that it's even. A lot of light will ensure that there is not one part of your model that's really hidden in shadow, and even lighting will make sure that you don't have a bunch of small or harsh shadows across your object. Both of these things will result in the software getting confused, because it won't be able to tell what the actual color of the object is. So shadows are bad.

Finally, photo theory. One of the things that 3DF Zephyr boasts is that you can take photos in any position, at any spot. I haven't had much luck with that, so I recommend that you go ahead and make sure you take all of your photos in an even and clear linear path. So what does that mean? It means that your current photo shares a lot of the same portions of your object with your next photo, and so on and so forth. You don't want to start at the front of the object, then take your next photo at the back, and keep bouncing around; that's probably going to turn into a poor reconstruction. So make sure all of your photos are in a line, and that each photo contains a similar portion of your object to the previous and the next picture. Also with photo theory: you want to try to avoid taking shots that are looking up. You always want to try to maintain even or downward photos. This will help ensure that your background is always going to be the same behind your object. If you start pointing up, you're going to introduce more lighting issues, and you're also going to add more background that the software has to try to figure out and connect. Now that you know how to take photos, go ahead and take some. Remember that the free version only allows up to 50 photos per reconstruction, but be sure to take a couple extra, just so that, if some of them are bad or not getting recognized, you have some backup. So go ahead and find an object, take your pictures, and update your class project down below with one of the pictures that really highlights what object you're actually reconstructing. See you in the next one.
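One way to plan an "even, clear linear path" is to think in angles: a 50-photo budget spread over one full ring around the object means a shot every 360 / 50 = 7.2 degrees, which gives heavy overlap between neighbors. The little planner below is my own sketch of that arithmetic, not anything from the class or the software:

```python
# Hypothetical capture planner: spread a photo budget over one or more
# rings (heights) around the object so neighboring shots overlap heavily.
def capture_plan(total_photos: int, rings: int = 1) -> None:
    per_ring = total_photos // rings
    step = 360 / per_ring
    for ring in range(1, rings + 1):
        print(f"Ring {ring}: {per_ring} photos, one every {step:.1f} degrees")

capture_plan(50, rings=1)  # Ring 1: 50 photos, one every 7.2 degrees
capture_plan(50, rings=2)  # two heights: 25 photos per ring, every 14.4 degrees
```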
6. Your First Reconstruction: Awesome. So now we have all of our photos for photogrammetry. Let's go ahead and add them in. Click the plus button and navigate to wherever you have your photos saved. A quick method to do this is Ctrl+A; that will let you select all the photos at once. Or you can go through and kind of highlight all of them individually. So go ahead and click open, click next... oh, wait. I had us take extra photos, just in case some of them didn't work, so it's telling us here that there is a maximum allowed of 50 pictures. Just go back through, and you can either be really careful with which ones you deselect, or you can be kind of fast and just get rid of enough to make sure that you can go on to the next stage (see the small sketch after this lesson for one way to thin a photo set down evenly). This next stage is just it trying to auto-adjust and use online calibrations for the cameras. As you can see, it tells you exactly what camera was used for each photo. Just go ahead and hit next. For your project wizard, you'll probably want to choose something like close range; at least for me, because I was very close to the object. Urban would be something like an outside picture, and then, obviously, if you're trying to capture a human, you'd do the human body. I'm going to stick with close range for this category preset, and then I will also stick with default. You might want to experiment with fast, default, and deep to try and see which setting gets you the best reconstruction in the most efficient time, but 3DF Zephyr rightly recommends that you try default for most use cases. So you'll click next, and then finally you'll click the run button, and now this will pop up, telling you the status of your reconstruction. Be warned that this could take several hours, so don't plan on doing anything else with your computer. Maybe start it late at night, before you go to bed, and let it run through.
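If you took a few extra shots as suggested, you'll need to drop down to 50 before importing. A quick way to do this evenly, so you don't cut a contiguous chunk out of your path around the object, is to thin the set by index. A minimal sketch, assuming your photos sort in capture order; the folder names are placeholders:

```python
import shutil
from pathlib import Path

def thin_photo_set(src: str, dst: str, keep: int = 50) -> None:
    """Copy an evenly spaced subset of photos so the capture path stays intact."""
    photos = sorted(Path(src).glob("*.jpg"))  # assumes capture-order filenames
    if len(photos) <= keep:
        picked = photos
    else:
        # Spread the kept indices across the whole set instead of
        # chopping photos off one end of the walk around the object.
        picked = [photos[round(i * (len(photos) - 1) / (keep - 1))]
                  for i in range(keep)]
    Path(dst).mkdir(exist_ok=True)
    for p in picked:
        shutil.copy(p, Path(dst) / p.name)

thin_photo_set("all_photos", "import_me", keep=50)  # e.g. 54 photos -> 50
```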
7. If at First You Don't Succeed...: Turns out that this object didn't go so smoothly. This was with the default setting, and, as you can see, it says "no" in the reconstructed column for a lot of the photos. This might happen to you, too, so here are some things you can do to figure out what went wrong. I didn't close that project wizard; down below, you should see the camera navigation. These are the cameras that actually got recognized and then aligned. Try to understand what parts of these photos are good and why they got reconstructed. So, looking at our actual sparse point cloud of what got recognized, it looks like it really only picked up my background platform, which was in all of the images; and, as you can see up here where my mouse is, there are not really a lot of points that got picked up on the actual lion model itself. So what you can do is try to retake photos with better lighting, try to make sure that your object takes up more of your camera's frame, or give up on this object and try a different one. But there's actually one more thing I want to try first. If you go back to Workflow, you can click on New Project, then next. It's gonna ask you if you want to save; in this case I don't, because it turned out really poorly, so I'll click no. And I'm actually going to import those photos again, and... oh, that's right, I picked too many photos. I know that I picked 54, so I'm just selecting four to remove. Next. So I'm actually going to try a different category and preset. I think I'll go urban this time, and I think I will actually just stick with the default preset, because it was actually pretty fast on my computer. The different presets just try to handle the points in the photos differently, so playing around with them might turn an image set that turns out really poorly in one category into a really good image set for another. So don't be too discouraged if your first couple of attempts turn out badly. It is a work in progress for you to figure out what the best settings are for your images.

All right. So, after trying a couple of different variations between category and preset, I've come up with something that got 40 out of my 50 photos, and that ended up being close range and deep. This really didn't take me that long, and I was kind of surprised, as most of the other reconstructions I've done with just 50 photos took a significant amount of time. But then again, I am also using a relatively new and good machine, so your mileage may vary. But as you can still see, I am missing 10 photos. A way that you can check which photos got missed: if you come back to the project tree on the left after you finish reconstructing, you should be on the sparse point cloud tab. All you have to do is click on the cameras tab, and it will show you all of the cameras that got reconstructed and, further down, the cameras that did not get reconstructed. What you can do here is observe which photos didn't get taken in and try to learn from them (the snippet after this lesson shows one way to list them). You can also try to add more images: by observing the name, removing that photo from your image set, and taking a new picture of your object. Note, though, that your object has to be in the exact same position that it was in before, which is why I had you take some extra pictures, just in case. So what I could do is rerun this image set, removing this picture, this picture, this one, and one more, because I have four extra photos, and try to add in the four photos that I did not have before, and see if I get a better result. I'm not going to do that; I think 40 out of 50 photos will get me a pretty good reconstruction. The other thing you can do, by noticing which images did not get aligned properly, is look at them and think about why they didn't work. And I want you to share one of the pictures that did not get oriented for you in your project, and describe why you think that image didn't work. If you really have no idea, ask some questions, and hopefully one of your other classmates can answer them, if not me.

Once you have your sparse point cloud, we need to do the next step in the reconstruction. We'll go to Workflow, then dense point cloud generation, which gives you some information about the wizard. I'm going to stick with close range, and, because deep worked for me last time, I'm gonna make sure I select high details this time. These are not the same exact presets, because you are doing a different operation, but they are still categorized in similar fashions. So you have the categories of aerial, close range, human body, and urban again, and then for the presets you have three: fast, default, and high, where with the first workflow you had fast, default, and deep. So I'm just doing high details to try to capture all of the high details in my model. Click next and run, wait for this guy to go, and then we'll be back again.
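If you jot down which camera names show up as reconstructed in the cameras tab, a couple of lines of set arithmetic will list the photos that need retaking. A minimal sketch; the folder and the reconstructed names below are placeholders you'd fill in by hand (Zephyr itself isn't being scripted here):

```python
from pathlib import Path

# All photos you imported, and the names the cameras tab shows as
# reconstructed (copied by hand; these filenames are hypothetical).
all_photos = {p.name for p in Path("import_me").glob("*.jpg")}
reconstructed = {"lion_01.jpg", "lion_02.jpg", "lion_05.jpg"}

missing = sorted(all_photos - reconstructed)
print(f"{len(missing)} photos did not align:")
for name in missing:
    print(" ", name)
```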
8. The Dense Point Cloud: I have my dense point cloud; it finally finished. As you can see, it looks like there are some issues around the face of the lion statue, and part of that is because of the number of pictures I took of the highest-detailed part of the model: of the 10 photos that did not get captured, seven were actually face shots of the lion, so it's clear that that part would be less detailed. Some people might call this a failure, but I don't; I consider it a learning process. Luckily, it happened during the class, so I can help point out that, for the more detailed areas, you want to try to focus in and get more pictures of that detailed area. Don't be discouraged, though, if you have a really detailed object and you're thinking, "Oh, there's no way I could capture it with such a small number of photos"; you can only do 50 photos at a time, but let's say I wanted to recapture this lion. Next time, I'd focus my next 50 photos around just the face and the front end of the lion, and then, later, I can actually combine the mesh that I'll get from this reconstruction with the mesh from the reconstruction where I just focused on the face, and I can get a complete model. So try to think outside the box and use your tools resourcefully if you don't have access to the paid versions of the software.

But moving on to how to get your mesh from the dense point cloud. Before you extract your mesh, you want to make sure you clean up some of these extra areas, like these blankets that are shown in the background, and the way we'll do this is by going up to the Tools menu, under the selection category. This is one of my most-used menus of them all, and that's because it allows you to clean up your dense point cloud. The three main features I use from the selection are manual selection, which allows you to highlight specific areas by hand; select by color, which allows you to pick points based off color; and invert selection, which will allow you to manually select just the area that you need and then, by inverting, make sure that you get rid of all of the overhanging or really far-out points that got inserted into your reconstruction. Let's go ahead and start with manual selection. We're going to do Tools, selection, manual selection. On this window, it's important to note that you have a bunch of different options for manual selecting: you can use a box, a lasso, an ellipse, a polygon, or a rectangle. But you also have different modes: add and remove. When add is selected, whatever you select with the tools will be added to your selection, while with remove, you will remove it from the selection. The remove function is very useful when you're using the select-by-color tool, but since we're just trying to do some manual selection here, we're going to make sure add is selected. I like the lasso best, as I can get more intricate shapes with it, and I'm just gonna quickly highlight roughly around my object. It will take a second for your computer to actually select all the points; it will highlight them in red, meaning that they're selected. If you close out of this window, it won't automatically deselect those points, which is good. So we'll go back up to Tools, selection, invert selection. Now this is going to select all the points that were not previously selected and deselect all the points that were. This is gonna allow us to easily remove all these tiny blips of points, and the blanket, from the background. So now you can just hit delete; alternatively, you can go back up to Tools, selection, and click delete selected items. I think that's most of what I want to do for cleaning up the dense point cloud. Except, you know, you'll notice some of these extra floating bits kind of around. I don't want to mess with the face too much, because that is all actual detail that's there; it just didn't get captured very well. But around here, by the back leg, you can see some floating bits, and I want to actually get rid of those so that we don't have any weird mesh features.
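The select-then-invert-then-delete pattern is easy to picture as boolean masks over an array of points. Here's a tiny conceptual illustration in NumPy; this is my own sketch of the idea, not anything exposed by 3DF Zephyr:

```python
import numpy as np

# Toy point cloud: 1000 points with x, y, z coordinates.
rng = np.random.default_rng(0)
points = rng.uniform(-1.0, 1.0, size=(1000, 3))

# "Manual selection": mark points inside a box roughly around the object.
selected = np.all(np.abs(points) < 0.5, axis=1)

# "Invert selection": everything that was NOT previously selected.
inverted = ~selected

# "Delete selected items": drop the inverted set, keeping only the object.
cleaned = points[~inverted]
print(f"{len(points)} points -> {len(cleaned)} after cleanup")
```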
So, manual selection: I'm going to go back to the lasso, just highlighting these extra points and hitting delete once they're selected to get rid of them. I think I'll also clean up a little bit more of this platform that I captured my lion on. Manual selection again; this time, I'll just try the ellipse, and you can tell that I'm not very familiar with these other ones, just because I don't use them. Like I said before, the lasso is probably my favorite tool, just because it gives me the most freedom in what I select with it. So I'm just trying to clean up the platform a little more, nothing too fancy. Once it's all selected, I'm hitting delete. Rotating around is the left-click button, holding and dragging; zooming, once again, is the scroll wheel; and panning from side to side is when you click in your scroll wheel. All right, so I think this dense point cloud is cleaned up enough to move on to the next workflow, which is mesh extraction. The first option is which dense point cloud you're going to be using for the mesh; with the free version, you should really only have one dense point cloud to select, so I wouldn't worry too much about ever changing anything on this menu. The next one is, again, just your usual wizard for your presets. I'm sticking with close range, and I'm going to stick with high details again. Next, run. This next process should be a lot faster than the rest; it's just connecting the dots into triangles to form the mesh. Once this is done, we'll actually have to do one more step before you can export it, and that will be another workflow setting. So I'll see you in a second.

Now that you have your mesh, we can go ahead and move on to the last and final reconstruction. The reason why you need to do this reconstruction for the textured mesh is because, with the free version, you're only allowed to export textured meshes; you're not allowed to just export the mesh. So we have to do this one last step. Go to Workflow, textured mesh generation, all cameras, next. I would just leave these on default, unless you know what you're doing and you specifically want something different in the textured mesh. Next, and run. When this guy is finished, you can go straight up to Export and export the mesh.

9. Your Mesh!: Now that your textured mesh has been finished, you can just click finish. It looks pretty much the same as the regular mesh, except now it's going to have textures on it when you export it for certain applications; if you don't care about that, that's fine. Just go up to Export (it's right next to Workflow), then export textured mesh. On the mesh export window, you want to click textured mesh, and it should auto-populate for you. The export format you do need to change: I would personally change it to OBJ/MTL, especially if you're going to want to eventually 3D print this. Your slicing software for 3D printers can recognize OBJs, and so can Meshmixer, which is a convenient tool for editing things to get 3D-print-ready models. I wouldn't really worry about anything else on the menu; just change your export format, leave everything else the same, hit export, select where you're gonna save it, and you have it. The textured mesh can be opened through 3DF Zephyr or other programs, like I mentioned with Meshmixer. You can also open it with Blender; these are just two free software options for 3D modeling and other stuff. So now that you have your textured mesh, go ahead and take a picture, or a screenshot, or another image, and share it in your project down below so we can see what kind of results we're getting. If, like me, you got some weird issues, maybe take some screenshots of those and ask for help, or try to outline why they happened. This will help you really get to understand what went wrong and what you could do better next time.
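If you'd rather do the format conversion with a script instead of Meshmixer, the open-source trimesh library (my suggestion, not something from the class) can load the exported OBJ, drop any small floating fragments that survived the point cloud cleanup, and write an STL for slicing. A minimal sketch, assuming `pip install trimesh` and a placeholder filename:

```python
import trimesh

# Load the OBJ/MTL exported from 3DF Zephyr ("lion.obj" is a placeholder name).
mesh = trimesh.load("lion.obj", force="mesh")

# Split into connected components and keep the largest one, discarding
# small disconnected bits left over from the reconstruction.
parts = mesh.split(only_watertight=False)
largest = max(parts, key=lambda m: len(m.faces))

# STL carries no texture, but slicers for 3D printing don't need one.
largest.export("lion.stl")
print(f"kept {len(largest.faces)} of {len(mesh.faces)} faces")
```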
10. Recap: So that was my class on 3DF Zephyr, the free photogrammetry software. Don't forget to save your projects; that way, you don't have to redo all of your reconstruction and processing time on your computer. I hope you've been following along and updating your project as you get more material for your first reconstruction. I just wanted to talk over a few key terms that we mentioned: we have the sparse point cloud, the dense point cloud, the mesh, and the textured mesh. The sparse point cloud is just the first rough draft of your reconstruction, where it has connected all of your images together. The dense point cloud is where it's adding more detail from the aligned photos. The mesh is where it's connecting the dots, and the textured mesh is where it's applying the colors and the mesh together into an exportable format for the free edition. I also wanted to recap the best strategies for taking photos of an object: the object needs to be detailed, you need to take pictures in an even light setting, and your pictures need to be very similar and in a linear path around the object. I hope you enjoyed this class; better yet, I hope you've actually gotten used to the 3DF Zephyr software and are able to work on your own to get reconstructions. Feel free to update your project down below with extra reconstructions that you do past this first initial class; I would love to see them. I hope you all have a good time reconstructing models with images in the future. See ya!