Computer Vision, Machine Learning with Core ML, Swift in iOS | DevTechie Interactive | Skillshare

Computer Vision, Machine Learning with Core ML, Swift in iOS

DevTechie Interactive, Learn new everyday


18 Lessons (4h 11m)
    • 1. CoreML Vision Intro (1:40)
    • 2. Vision Intro and Overview (20:27)
    • 3. Face detection Part 1 (1:17)
    • 4. Face detection Part 2 (19:37)
    • 5. Face detection Part 3 (24:20)
    • 6. CoreML Vision Face Cropper Part 1 (2:25)
    • 7. CoreML Vision Face Cropper Part 2 (26:58)
    • 8. Face Landmarks and Contour Detection on Image Demo (0:37)
    • 9. Face Landmarks and Contour Detection on Image Part 1 (31:07)
    • 10. Face Landmarks and Contour Detection on Image Part 2 (38:53)
    • 11. Face Landmarks and Contour Detection on Image Part 3 (29:35)
    • 12. Face Landmark on Image Demo (2:26)
    • 13. Face Landmark on Image Part 1 (15:48)
    • 14. Face Landmark on Image Part 2 (8:04)
    • 15. Face Landmark on Image Part 3 (3:20)
    • 16. Face Landmark on Image Part 4 (3:30)
    • 17. CoreML Vision Text Detection Demo (2:18)
    • 18. CoreML Vision Text Detection (18:10)

Community Generated

The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected.

39 Students
-- Projects

About This Class

Learn about Core ML and computer vision to create intelligent apps.

Self-driving cars were thought to be a distant dream just a few decades ago. Thanks to recent progress in various fields of computer science, that dream is now becoming a reality. Computer vision plays a central role in giving these vehicles the capabilities they need to operate not only under standard conditions but also in the most unexpected situations.

Machine learning is everywhere these days. We live in a world where machine learning and artificial intelligence are no longer obscure mathematics or science fiction; they have become a crucial part of our lives. Netflix, Amazon, Siri, Pandora, Google, Prisma: the list goes on and on. And it's not just entertainment and media; it reaches from the post office to healthcare, and from traffic control to security. Close analysis suggests that virtually every moment of our lives is touched by machine learning at some point.

With the continuous evolution of machine learning (ML) and computer vision (CV), humankind will achieve new successes at tasks once thought unimaginable. This is a perfect time to get involved in the ML and CV world. The field keeps evolving and growing more capable, solving some of the most difficult tasks for us and making our lives better.

The goal of this course is to present the concrete building blocks needed to establish a foundation in computer vision and machine learning. We will look at various real-world problems and discuss how computer vision solves them with different algorithms. From there we will explore the Vision framework introduced in iOS 11 and see how it lets us handle those same tasks through a simple application programming interface (API).

You will not only learn various techniques for image analysis, but also how to combine them to create intelligent apps that can see with your device's camera and present their perception of the world around them.

So let's get going, build something amazing, and be part of the future.

Meet Your Teacher


DevTechie Interactive

Learn new everyday

Teacher

Class Ratings

Expectations Met?
  • Exceeded! 0%
  • Yes 0%
  • Somewhat 0%
  • Not really 0%
Reviews Archive

In October 2018, we updated our review system to improve the way we collect feedback. Below are the reviews written before that update.


Transcripts

1. CoreML Vision Intro: Hello, everyone. My name is Anoop, and I'm very excited to present this new video series on computer vision and deep learning with Core ML and the Vision framework. Computer vision is all around us, from self-driving cars to home security and automation, and machines are getting smarter, learning new ways not only to interact with you but to keep you safe. Delivery drones that bring food and day-to-day items to you use computer vision to make sense of the physical world. In this course we will learn about the fundamentals of computer vision that are now natively available on the iOS platform, so you don't need any libraries like OpenCV to perform the common tasks. We'll look at techniques like face detection, face landmark detection, object classification, object tracking, neural style transfer, barcode detection, and text detection, and we will create a very cool text recognition app of our own. I hope you're as excited as I am. Please feel free to post any suggestions you might have, and enjoy the course. Thank you.

2. Vision Intro and Overview: Hello, everyone. My name is Anoop, and today we are going to look at computer vision and machine learning with Core ML and the iOS Vision framework. The first question we need to answer: what is computer vision? Computer vision is the field of computer science that deals with how computers gain understanding from images and videos, so everything related to images and videos falls into this category. Some of the examples under the computer vision umbrella are face detection, face recognition, image and video captioning, object classification, object tracking, and generating art with artistic style transfer, also known as neural style transfer. Let's take a look at each of these in a little more detail.

Face detection is a technique to find human faces in an image or a video. Below are the general steps you take to detect faces in an image. Detection is usually done using the Viola-Jones face detection algorithm, which is one of the most commonly used. You train this algorithm first, and that takes a while because you have to provide many different examples, but detection with the trained algorithm is really fast. The next step is normalization of the faces: the detected faces may have different brightness, contrast, and sizes, as you can see in the example image, so they need to be processed and scaled to the same size and the same level of exposure before they can be fed into the machine learning algorithm for feature detection. For that reason, all the detected faces are scaled to the same size, and their exposure differences are compensated using histogram equalization. Then comes the step of collecting features for each detected face; this is where you find specific facial features such as the centers of the eyes, the nose, the lips, and so on. Once we have all of these, we can feed the features we have collected from the faces into a machine learning algorithm.
Then we can use an algorithm like a support vector machine (SVM), a neural network, or a nearest-neighbor classifier to find matching faces, and that is the next technique, on the next slide: face recognition. So what is face recognition? As you can see in this picture, we are identifying that this person is Marie and she's a staff member, and the person behind her is a contractor; this is done using the technique called face recognition. Face recognition is the process of identifying and verifying a person from an image or video. Facebook has been doing it for a while and has developed a system that is about 98% accurate at identifying a person in an image. If you recall, in the old days when you uploaded a picture to Facebook, it would ask you to tag the people who were in the picture. When you did that, you were not only letting your friends know that you had uploaded their picture along with yours; you were also helping Facebook label that data. Facebook then used that data to train its machine learning algorithms and created a mapping between each person and their name, so that later, the moment you uploaded a photo, Facebook could start auto-tagging those images and identifying the people in them. That's the power of machine learning, and it was done using face recognition. The face recognition process is very simple: first you detect the faces, as we discussed on the previous slide; next you analyze each face, extracting features like the eyes, nose, and lips, which you can divide into different categories; the third and final step is to pass these features to a machine learning algorithm, which compares those unique features to the faces it has already been trained on. That's how it's able to identify that this face belongs to Marie and that one doesn't.

The third technique is image and video captioning. Image and video captioning is the process of generating a textual description of an image, and it's done by performing image or video analysis. It uses both neural networks and natural language processing to generate captions. The common techniques are a convolutional neural network (CNN), which is the neural network used for image recognition and feature extraction, and a recurrent neural network (RNN), which is used for natural language processing and text generation. In general, for an image captioning model to work, you need a rather large number of training examples so it can generate a good caption, like in the image below. As you can see, there's a photo of what looks like a farmers market, and the model, using a deep convolutional network for the image and a language-generating recurrent network for the text, predicts captions such as "a group of people shopping at an outdoor market" and "there are many vegetables at the fruit stand." Those are two different captions that it generated, and as you can see,
it has done a fairly good job at identifying those. Some of these techniques are used to help people who are visually impaired: they can point the camera at a location and get a caption describing the scene and identifying what the things in it are, so there are really good use cases for all of these techniques, and specifically for this one.

Next comes object classification. Object classification is the process of classifying an object in an image or in a video frame. As you can see in the picture, there are several things in the scene: a street, some grass, trees behind, and a cat, and the machine learning algorithm was able to identify the cat and draw a rectangle around it. The output of such a classification is a label like cat, car, person, or table, whatever the model has been trained on and can recognize. The steps for object classification are: first, you pre-process, normalizing the contrast and brightness of the image, and you extract features out of the image. If the image has a lot of extra information, as this one does, there are different techniques you can run: you could find the dominant object, basically the biggest or most prominent object in the scene, or you can apply edge detection, which simply removes everything and leaves only the edges, identifying the object by the shape the edges form after all the detail is removed. Then comes the step where you use learning algorithms like a support vector machine or a neural network to identify the image based on what they have been trained on.

Next comes object tracking. Object tracking is the process of locating a moving object over time in a video feed or a camera feed. It has uses in security surveillance, video communication and compression, traffic control, and augmented reality. If you notice in this example, we have a self-driving (or semi-self-driving) car which needs to keep track of the car right in front of it. This is a perfect use case for object tracking, because you want to know whether the car ahead of you is coming closer, moving farther away, or changing to a different lane. There are various techniques used for object tracking, such as optical-flow tracking, mean-shift tracking, contour tracking, Kalman filtering, and particle filtering. All of these techniques are active research areas and have practical uses, but they're not in scope for this course, so I'm not going to discuss them here; you can always Google them if you need more detail.

Now, generating art, also known as artistic style transfer or neural style transfer, is the process of migrating the content of one image into a different style, and the best way to understand it is by visualizing it.
If you look at this picture, we have a photo of what looks like a lake with some houses, and then an image of a style, which is a famous painting, van Gogh's Starry Night. You combine these two to generate an output image, which is the photo painted in van Gogh's style, as if van Gogh had painted it. That is what's called neural style transfer. There's a really famous app called Prisma, which has been around for a while, doing exactly the same thing: transferring your selfies or photographs into famous art styles. We'll actually be looking at one of those examples in a future lesson.

And then come Vision and Core ML. Here's the good news about all of the techniques we have just seen: you don't have to implement them yourself, because Vision and Core ML combined take care of them for you, and in this video course we're going to explore all of them over time. So what is the Vision framework? The Vision framework was introduced in iOS 11, and Apple announced it as a way to solve common computer vision problems. Vision lets you perform computer vision tasks without knowing the internal workings of complex computer vision algorithms. With Vision you can do many different things: identify faces, find facial features and face contours, and do barcode detection, rectangle detection, and text detection. Text detection I would like to highlight: it's just the text detection piece, not the recognition piece. Using this technique you can point the camera and find the areas where text is written, but you will have to provide further implementation to do the actual text recognition, and that is one of the things we're going to build: not only a text detection app but a text recognition app, in other words optical character recognition (OCR), which you might have heard of. In the past, for doing any of these you would need OpenCV or a similar library, but now you can do it within iOS itself.

Now, what can the Vision framework do? It can find faces in an image and give you a bounding rectangle for each detected face. It can find facial contours and detailed features like the locations of the eyes, nose, and mouth, basically giving you points to draw lines around those features. It can find rectangular objects like street signs. It can detect regions of an image that contain text, as mentioned in the text detection discussion. It can detect and recognize barcodes. It can track the movement of an object, which is object tracking. It can do object classification with the help of external Core ML models, classifying what the dominant object in an image is. And last but not least, you can hand images to a Core ML model and perform analysis and classification on them, so you can do many different things with Core ML and the Vision framework combined. Vision framework usage is very simple.
It is divided into three steps, and it starts with the request types. Different image analyses need different request types: for face rectangle detection you create a VNDetectFaceRectanglesRequest; for detecting facial landmarks you create a VNDetectFaceLandmarksRequest; you create a VNCoreMLRequest for Core ML image processing, and a VNDetectTextRectanglesRequest for text detection. The second thing you do is create a request handler. Request handlers are the objects that process one or more requests pertaining to an image: we use VNImageRequestHandler for single-image processing, and VNSequenceRequestHandler for handling a sequence of images. Then come the observations. These are the results, wrapped inside the request, that we get back in the completion handler, and they carry the information produced by the analysis, such as bounding boxes. We're going to work through all of this with practical examples as well.

This is the common signature of what it looks like in the Vision framework. You create a request, such as a VNDetectFaceRectanglesRequest, and it gives you a completion handler with the request and an error. You unwrap the observations from the request's results as the matching observation type, for example VNFaceObservation, and then you process them. Once you have created the request, you create a handler, supplying an image and options, and then you call the handler's perform method with your requests, which does all the work for you and hands the observations back.
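Here is a minimal Swift sketch of that request-handler-observation pattern, using the face rectangles request as the example. The function name and the image parameter are placeholders for illustration, not code from the course project.

```swift
import UIKit
import Vision

// Minimal sketch of the Vision pattern: request -> handler -> observations.
func detectFaceRectangles(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    // 1. Create a request with a completion handler that unwraps the observations.
    let request = VNDetectFaceRectanglesRequest { request, _ in
        guard let observations = request.results as? [VNFaceObservation] else { return }
        for observation in observations {
            // boundingBox is normalized (0...1) with its origin at the bottom left.
            print("Found a face at \(observation.boundingBox)")
        }
    }

    // 2. Create a request handler for the image.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])

    // 3. Perform the request; results arrive in the completion handler above.
    do {
        try handler.perform([request])
    } catch {
        print(error.localizedDescription)
    }
}
```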
In the next video we're going to start with a demo for each of these categories, and we'll be creating apps for them one by one. I hope you're enjoying this video, and thanks again for watching. See you in the next video.

3. Face detection Part 1: Hello and welcome back. Today we are going to build a new app that uses the Vision framework to identify faces on the screen, basically recognizing the face regions using Vision. As you can see in the demo, we tap on the screen and the Vision framework finds most of the faces that are there. Let's take a look at another example: I tap on the screen, and as you can see it does a really good job finding faces, even when they are different sizes and looking in different directions, and this is definitely far better than what we used to have with Core Image's face detector. So this is the demo, and we are going to build this app from scratch. Let's get started.

4. Face detection Part 2: All right. First, I have this blank project open; I literally just created it. What we're going to do is build the UI and then connect it to the code. I'll search for a collection view in the object library, drag it onto the view controller, and constrain it to full screen, because we basically want it to cover the screen. I'll give the cell a reuse identifier, make sure the flow layout's scroll direction is horizontal, and check that paging is enabled. So that's it for the UI setup; now let's go to the view controller and write the code. I need an outlet to the collection view, so let's create an @IBOutlet for it. Then we need a data source, which is a collection, an array of UIImage, and we'll initialize it with some images. You're going to find all of these images in the resources folder for the course, so I'll import them into the asset catalog from Finder and then fill the array with the bundled photos ("xmen", "starwars", and so on), a bunch of pictures with faces in them. The next thing we do is assign the view controller as the delegate and data source for our collection view: collectionView.delegate = self and collectionView.dataSource = self. Once we do that it's going to start complaining, so we create an extension on the view controller stating that it conforms to UICollectionViewDelegate and UICollectionViewDataSource. We'll then get another error saying it does not conform to the protocol, because there are two methods we need to implement: the first is numberOfItemsInSection, where we return data.count, and the second is cellForItemAt, where for now we return a temporary cell just to remove the error. The reason the cell is temporary is that we're going to need our own UICollectionViewCell class, so let's create a new file and call it CustomCell. In this custom cell we're going to put the logic to show the images and, later, to do the face detection. The class is called CustomCell and inherits from UICollectionViewCell, and it's going to have a few properties. First, a detectedFaces variable, which is a collection of UIViews that stores all the face boxes we have detected. Next, we create an image variable of type UIImage and put an observer on it: in didSet we write guard let image = image else { return }, so that when the image is set we can respond in the property observer.
Now let's go back to this class and set things up: we're going to set up an image view and also clean up the detected faces array. First, let's create a UIImageView: let photoImageView = UIImageView(). We set its contentMode to .scaleAspectFit, and we set translatesAutoresizingMaskIntoConstraints to false; this particular property needs to be false so we can work with Auto Layout. Then, in the didSet, we assign photoImageView.image = image. One more thing we want to do when a new image is set is clean up the old faces that we detected, so let's create a function called cleanFacesDetected. What we do in this function is detectedFaces.forEach and, for each view, call removeFromSuperview(); after that we also remove all the elements from the detectedFaces array, so removeAll() clears its contents. The next thing we do is call cleanFacesDetected right here in didSet. Next, we need to set up the photo view to show the image, so let's create an initializer: override init(frame:), calling super.init(frame:), and we're also required to create required init?(coder:) because we're subclassing; it's not really needed for our case, and most of the time I've seen people just put a fatalError in it, but I'll implement it properly. Both of these initializers call a setup function, setupViews, so setup runs however the cell is initialized. In setupViews, what we need to do is add this image view to the main view, so we call addSubview and then lay it out with constraints: NSLayoutConstraint.activate, using the anchor-based system to pin the photo view to the edges of the cell. We constrain photoImageView.leadingAnchor to self's leadingAnchor, its trailingAnchor to self's trailingAnchor, and the same for the top and bottom anchors, so the image view stays pinned to the cell on all sides. That's basically all the constraints set up. The next thing we do is add a gesture recognizer so we can actually tap the cell: we create a UITapGestureRecognizer with self as the target and a selector, so let's create an @objc function called handleTap and pass it as the selector. At the end, what we want handleTap to do is clean up the old face boxes first and then call the face detection; we'll leave the detection call commented out for now and add it here once it's written.
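A condensed sketch of the cell as assembled so far, following the names used in the walkthrough (CustomCell, detectedFaces, photoImageView, handleTap); the detection call is left as a comment because it's written in the next lesson.

```swift
import UIKit

// Sketch of the custom cell built in this lesson; names follow the walkthrough.
class CustomCell: UICollectionViewCell {

    var detectedFaces = [UIView]()   // face boxes currently drawn on the cell

    var image: UIImage? {
        didSet {
            guard let image = image else { return }
            cleanFacesDetected()
            photoImageView.image = image
        }
    }

    private let photoImageView: UIImageView = {
        let imageView = UIImageView()
        imageView.contentMode = .scaleAspectFit
        imageView.translatesAutoresizingMaskIntoConstraints = false
        return imageView
    }()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupViews()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setupViews()
    }

    private func setupViews() {
        addSubview(photoImageView)
        NSLayoutConstraint.activate([
            photoImageView.leadingAnchor.constraint(equalTo: leadingAnchor),
            photoImageView.trailingAnchor.constraint(equalTo: trailingAnchor),
            photoImageView.topAnchor.constraint(equalTo: topAnchor),
            photoImageView.bottomAnchor.constraint(equalTo: bottomAnchor)
        ])
        addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    func cleanFacesDetected() {
        detectedFaces.forEach { $0.removeFromSuperview() }
        detectedFaces.removeAll()
    }

    @objc private func handleTap() {
        cleanFacesDetected()
        // detectFaces() -- added in the next lesson
    }
}
```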
Now it looks like we have done everything we need to do related to setting up the cell, so let's finish setting up the view controller. First, go to the storyboard, set the cell's class to CustomCell, and make sure the reuse identifier is set to "cell". Back in the view controller, in cellForItemAt, we cast the dequeued cell as CustomCell and say cell.image = data[indexPath.item]. One last thing is to set the size of the cell. The cell size can be set through the flow layout delegate, and there are two methods we implement: one for the size and another for the spacing between cells. The first is sizeForItemAt, where we return a CGSize whose width is self.collectionView.frame.width and whose height is self.collectionView.frame.height, so each cell fills the collection view. The second is minimumLineSpacingForSectionAt, which removes the intermediate spacing between cells; we return 0 for it. Let's build and run and see what we've got so far. OK, it looks like we're crashing, and the reason is that we forgot to connect the outlet, so select the view controller, go to the outlet section, and drag a connection from the outlet to the collection view in the storyboard. Run again, and this time it works: we have our custom cell, the paging is working fine, and our images are loading correctly. In the next video we're going to implement the face detection and then draw a box around each face. I hope you guys enjoyed this video, and I'll see you in the next one.
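Before moving on to detection, here is a short sketch of the data source and flow layout methods described in this lesson. It assumes a view controller class named ViewController with the data array and collection view outlet from the walkthrough.

```swift
import UIKit

// Sketch of the view controller side from this lesson; "data" holds the bundled photos.
extension ViewController: UICollectionViewDataSource, UICollectionViewDelegateFlowLayout {

    func collectionView(_ collectionView: UICollectionView,
                        numberOfItemsInSection section: Int) -> Int {
        return data.count
    }

    func collectionView(_ collectionView: UICollectionView,
                        cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "cell",
                                                      for: indexPath) as! CustomCell
        cell.image = data[indexPath.item]
        return cell
    }

    // Each cell fills the collection view so paging shows one photo at a time.
    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        return CGSize(width: collectionView.frame.width,
                      height: collectionView.frame.height)
    }

    // No gap between pages.
    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        minimumLineSpacingForSectionAt section: Int) -> CGFloat {
        return 0
    }
}
```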
5. Face detection Part 3: Hello and welcome back. In this video we are going to continue from our UI and do the actual face detection part. Remember that in the last video we implemented the collection view where we can see the images and page through them; in this video we're going to make those images work with the Vision framework to detect faces on them. So let's get started. Back in the project, we're going to go to our custom cell class and create an extension. I'm actually going to create a new file for it, call it CustomCell+Detection, import UIKit and Vision, and make it an extension of CustomCell. We're going to write all the face-detection-related logic in this extension; it just makes the code easier to manage. So let's create a function, and I'm going to call it detectFaces. In this function we create a request of type VNDetectFaceRectanglesRequest, and we give it a completion handler, which returns two things, the request and an error, and we write our logic to handle them there. The first thing we do is set self.detectedFaces back to an empty array. Then we iterate over all the request's responses with request.results?.forEach, and for each result we run a check with guard let: we make sure the result is of type VNFaceObservation, because each result we iterate over should be a VNFaceObservation; if it isn't, we return early. Otherwise we handle the face detection, so let's create a function called handleFaceDetection that takes a VNFaceObservation as a parameter; separating it out is basically an organizational step, since that's where we'll draw the UI for each face. We'll come back to it, but for now, in the loop we call handleFaceDetection and pass it the observation. Once the request is created, the next thing to do is perform it on the image, so let's create a function called performImageRequest, and this function takes a request of type VNDetectFaceRectanglesRequest. Here, what we want to do is launch a background thread and run our Vision task on it, so we say DispatchQueue.global(qos: .userInitiated).async and write our code inside that. The code is straightforward: guard let cgImage = self.image?.cgImage else { return }, so first we get the CGImage out of the image we currently have. Then we create a request handler: let requestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:]); we supply our CGImage, and the options are going to be blank. Once we have that, we can perform the request, wrapped in do/try/catch: in the catch we print error.localizedDescription (the error object is implicitly created, but you can name it explicitly with catch let error if you prefer), and in the do block we say try requestHandler.perform([request]), passing an array containing the request we received as a parameter. The request is then performed, and its results come back through the completion handler we wrote above.
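A sketch of the detection half of the extension as described so far; handleFaceDetection is only a stub here because the box drawing is built in the next part of the lesson.

```swift
import UIKit
import Vision

// Sketch of the detection logic added to the cell in this lesson.
extension CustomCell {

    func detectFaces() {
        let request = VNDetectFaceRectanglesRequest { [weak self] request, _ in
            guard let self = self else { return }
            self.detectedFaces = []
            request.results?.forEach { result in
                guard let observation = result as? VNFaceObservation else { return }
                self.handleFaceDetection(observation)
            }
        }
        performImageRequest(request)
    }

    private func performImageRequest(_ request: VNDetectFaceRectanglesRequest) {
        // Vision work runs off the main thread.
        DispatchQueue.global(qos: .userInitiated).async { [weak self] in
            guard let cgImage = self?.image?.cgImage else { return }
            let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
            do {
                try handler.perform([request])
            } catch {
                print(error.localizedDescription)
            }
        }
    }

    func handleFaceDetection(_ observation: VNFaceObservation) {
        // Box drawing is added in the next part of the lesson.
    }
}
```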
Now let's go back to handleFaceDetection and start making the changes on the UI. The first thing we do, since we're on a background thread, is move back to the main thread to perform the UI changes, so we say DispatchQueue.main.async. Inside it we take the bounding box: each observation returns a bounding box for its face, so we say observation.boundingBox. The next thing we do is create a face box around it, and for that let's write a helper function: I'll create a function down here and call it createAnimatedFaceBox, which takes an image of type UIImage and a rect of type CGRect and returns a UIView. The first thing it does is get the scaled height for the image, so let's create another function, getScaledHeight, which expects a UIImage and returns a CGFloat. This is simply a division: self.frame.size.width divided by image.size.width, multiplied by image.size.height; that gives us the image's height when it's scaled to the same width as the screen, and we return it. So we say let imageScaledHeight = getScaledHeight(image). The reason we want this is that the bounding box comes back in a normalized coordinate space, while our image has been scaled to the screen with aspect fit, so we need the right scale to draw the box at the correct location. First let's create the flip transform: let flip = CGAffineTransform(scaleX: 1, y: -1); the minus one on y flips the coordinate space, because Vision's origin is at the bottom left while UIKit's is at the top left. Then we chain translatedBy(x: 0, y: ...) onto it, where the y translation is the negative of the scaled height plus the letterbox offset: we take self.frame.height, subtract the scaled height, divide that by two (the empty space above the aspect-fit image), and add it to the scaled height, so the translation is -(imageScaledHeight + (self.frame.height - imageScaledHeight) / 2). That shifts the flipped coordinates down to where the image is actually drawn. Then we create a scale transform as well: let scale = CGAffineTransform.identity.scaledBy(x:y:), starting from the identity matrix; the x scale is self.frame.width and the y scale is imageScaledHeight, because the bounding box values run from 0 to 1. Now we create a rect out of these properties, the convertedRect, which is the bounding box converted to screen coordinates, by applying the transforms to the rect.
We apply the scale transform first and then the flip transform, so convertedRect = rect.applying(scale).applying(flip). Once we have the converted rect, we create a view out of it; let's call it faceBox, a UIView, and start setting some properties on its layer: faceBox.layer.borderColor is white, and we add a border width and a corner radius, so faceBox.layer.borderWidth = 2 and faceBox.layer.cornerRadius = 8. Its frame is going to be the convertedRect we just computed, so faceBox.frame = convertedRect. Next we apply a background color: we want a white color that essentially just shows the border but has some transparency inside, so the background color is white with an alpha component of 0.3.
Um, now, remember, in the view controller, the actually, um uh are in the customs out we had Yeah, this place where we wanted to call the paced action. So we're gonna say self not detect. He says, Okay, it's run and, uh, take it up. Right. So here we have our have running and step on it. And as you can see, it can actually do you agree your job that you go cool. All right, So he goes Is, uh you have created your very first complete project into him and Beechman framework to detect faces. Now, in the next video now are going to be creating a every single app put in that, um we're going to be cropping faces out off out of the out of these photographs and, ah, reason we're gonna be doing that because for future projects, we're gonna use that FBI. Basically, we're gonna design maybe A and read, do you think? Yeah, for two to do some. So the image analysis on the faces using corn. So I hope you guys enjoyed this for you. Um, and, uh, see you guys in next video. 6. CoreML Vision Face Cropper Part 1: Hello, everyone. Welcome back. And that way you're gonna be looking at this new demo. So, uh, in this, uh, in the fact, what we're gonna be doing is gonna be again using the face detection from the vision framework. But along with that, what we're gonna be doing is we're gonna be cropping how individual faces out, Um, and that we're gonna be writing the FBI, which is gonna be a leader utilized in some of our abs to do further image classifications . So here is the demo. So this is, uh this is our input file. And it's a one image which has, like, multiple places, as you can see off, like, you know, different folks with different angles and someone classism without classes, something here. A moustache is in all races and color since everything So So, yeah, this is that still gonna be using other input and that this shows basically and here's our output where our, um, our algorithm from the vision framework was able to, um, extract the face part out successfully, and it has listed them individually. And, uh, you I collection of yourselves. So, um and you can see like, it's it's really a very, very close to like, You know, being being perfect. And uh huh. This is actually ah, really good, because, uh, Iran, if Leslie you wanna do Ah, classification off. Like you know less than you want to find out if this image is off a male or a male. Hey, can just supply this part and and get the results out, which optimizes the performance off the app. Really, really good. So that's why the supply is important. And, uh, we are going to be creating it from scratch. So, uh, yeah, I hope you enjoy artist M. And, uh, we will see you in the next video to get started building this. Thank you. 7. CoreML Vision Face Cropper Part 2: Hello and welcome back. And uh huh. Now we're going to start building that face scrub FBI. Um, so basically, I have a very basic app created, and this is just fresh out of what we create from finally, um, project. And I have that. And it's one project, so it's literally that nothing in there. So, um, it's got started, and I'm gonna frag a collection of you, um, and drop it on the U controller, and we are going to be up, basically make it cover the whole screen. Thanks. So and, uh, I think they're gonna do is we're gonna give it itself and identify. Okay. Now, um, what we can do from here, we can also set, um, the floor layout and vertical scrolling and everything is cared. We need to set the south size. So we're gonna say we need 100 by 100. And, uh, minimum spacing is going to be zero. 
In the next video we're going to create a very similar app, but in it we're going to crop the faces out of the photographs. The reason we're going to do that is that for future projects we're going to use that API: we'll feed the cropped faces into Core ML to do further image analysis on them. I hope you guys enjoyed this video, and I'll see you in the next one.

6. CoreML Vision Face Cropper Part 1: Hello, everyone, welcome back. In this video we're going to look at a new demo. In this app we're again using face detection from the Vision framework, but along with that we're going to crop the individual faces out, and we're going to write an API that will later be used in some of our apps to do further image classification. Here is the demo. This is our input file: a single image that has multiple faces, different folks at different angles, some with glasses, some without, some with mustaches, of all races and skin colors. And here is the output, where our Vision-based code was able to extract the face regions successfully and list them individually in a collection view. You can see it's really very close to perfect. This is genuinely useful because if, let's say, you want to classify whether a face is male or female, you can supply just the cropped part and get the result, which really helps the performance of the app. That's why this API is important, and we are going to build it from scratch. I hope you enjoyed the demo, and we'll see you in the next video to get started building it. Thank you.

7. CoreML Vision Face Cropper Part 2: Hello and welcome back. Now we're going to start building that face crop API. I have a very basic app created, fresh out of the single-view project template, so there is literally nothing in it. Let's get started: I'm going to drag a collection view and drop it on the view controller, and we're going to make it cover the whole screen. The next thing we do is give the cell an identifier. From here we can also check the flow layout and vertical scrolling; we need to set the cell size, so we'll say 100 by 100, and the minimum spacing is going to be zero. Next we need an image view, so drag one out and put it inside the cell, pin it to all sides of the cell, and set its content mode to aspect fit. The next thing we need is an example with faces to work with, and I'm going to use the same example I showed you in the demo, so I'll include this file, which you'll find in the resources folder, in the asset catalog. It's the same image with a whole bunch of people in it, and the reason it's a good image to work with is that it has faces from various angles and of various kinds, so it really tests the performance and accuracy of the Vision framework. Next, let's go back to our view controller and clean it up. We create an @IBOutlet for the collection view, and another variable called faces, which is of type [UIImage], initialized as an empty array. We make the view controller the delegate and data source for the collection view, so collectionView.delegate = self and collectionView.dataSource = self, and we create a function called getFacesFromPicture; it's empty for now, but we're going to fill it in. Next, we conform to the protocols, so we create an extension on the view controller for UICollectionViewDelegate and UICollectionViewDataSource, and we're also going to conform to UICollectionViewDelegateFlowLayout. If you noticed, in the last project we actually separated these into different extensions; you can follow either approach, one protocol per extension or all in one, there's no right or wrong choice, so this time I'm just going to go with all in one. For the number of items, numberOfItemsInSection returns self.faces.count. In cellForItemAt, we dequeue a reusable cell with the reuse identifier we defined and return it. Once we have conformed to these, the next thing we need to do is define the size so it shows three images in one row, which is easy to accomplish: in sizeForItemAt we say let width = self.collectionView.frame.width divided by the number of items we want per row, which is three, and return a CGSize with that width and the same value for the height, because we want the cells to be square. The cell itself is very simple, so let's create it really quickly: a new Cocoa Touch file subclassing UICollectionViewCell, call it CustomCell, and it's just going to have an @IBOutlet weak var imageView: UIImageView!. Now let's go back to our storyboard.
First we connect our collection view, dragging the outlet from the view controller to the collection view, and next we assign the cell's class to CustomCell and connect the outlet for the image view. Now, back in the view controller, in cellForItemAt we cast the cell as CustomCell and say cell.imageView.image = self.faces[indexPath.item]. That gives us a face from the faces array, which right now is empty, but we're going to populate it. To do that, we create a new file, a completely separate one, so we can extract out the part that does the face detection and cropping, and we call it FaceCropper. It imports UIKit and Vision. We create an enum called Result with three cases: success; failed, which takes a String; and error, which also takes a String. Now we create a class called FaceCropper with a few private variables: private var visionRequests, which is an array of VNRequest; private var faces, which is an array of UIImage and will hold all the cropped faces; a result of type Result; and one more, the image that is going to be passed in. If you want to count how many faces there are, we add a computed property, facesCount, that returns faces.count; that gives us the count of the faces that have been detected in the image. Then we create an initializer that expects an image, and in it we say self.image = image, the image that was passed in. Now comes the actual meaty part. We write a function called crop and provide a completion handler; the completion handler is @escaping, returns Void, and has two parameters: an array of UIImage, which can be nil, and a Result. That's the format of our completion handler. Then we create a face request, a VNDetectFaceRectanglesRequest, just like last time, and it has a completion handler supplying the request and the error. Inside that handler we pull the observations back: guard let observations = request.results as? [VNFaceObservation]; because we're making a VNDetectFaceRectanglesRequest, what we expect to get back is VNFaceObservation, so make sure that's what we got. If we don't, then in the else branch we set self.result = .error with the error's localized description, call the completion handler, passing nil for the images and our result object, and return from there.
There was an extra pair of parentheses in there, which is why it was complaining, so remove those. The next thing we do is check the incoming observations: if observations.count is greater than zero, we have some observations, so we say self.result = .success, just to note that we have found some results from the face observations. Then we say observations.forEach, where each element is a single face observation. Next we compute the width and height of the rectangle we're going to crop; for that we first need the image's width and height, and the reason we're getting them is that the bounding box is normalized, so we need the image dimensions to get the exact position where the face was found. We get them from the image's size, and then we compute: boxWidth = faceObservation.boundingBox.width times the image width, and boxHeight = faceObservation.boundingBox.height times the image height. Next, we need not only the width and height but also the origin, so we compute boxOriginX = faceObservation.boundingBox.origin.x times the image width, and for the y we take faceObservation.boundingBox.origin.y, subtract it from one in order to flip it, multiply that by the image height, and subtract the box height from the result; that gives us the origin y. Now let's create the face rect: a CGRect whose values are boxOriginX, boxOriginY, boxWidth, and boxHeight. Then we get a CGImage out of the image and apply cropping to it; there's a built-in method, cropping(to:), so we crop the face rect out of it, and then we append to self.faces, creating a UIImage with the initializer that takes a CGImage and supplying the cropped CGImage we just received, unwrapped. Once we have done that, at the end of the forEach we call completion, supplying self.faces, because this time we have found faces, and self.result, which is basically .success. In the else branch, when observations.count is less than or equal to zero, we say self.result = .failed("face not found") and call the completion handler with nil for the faces and self.result. So that's the part where we create the request and define its completion handler.
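A sketch of the cropping math just described: Vision's normalized, bottom-left-origin bounding box is converted into a pixel-space CGRect with a top-left origin and used to crop the face out of the CGImage. Names follow the walkthrough; the only difference is that it reads the dimensions from the CGImage itself, since that is the space cropping(to:) works in.

```swift
import UIKit
import Vision

// Convert a normalized VNFaceObservation bounding box into a pixel-space rect
// (top-left origin) and crop that face out of the source image.
func cropFace(from image: UIImage, observation: VNFaceObservation) -> UIImage? {
    guard let cgImage = image.cgImage else { return nil }

    // Pixel dimensions of the underlying CGImage.
    let imageWidth = CGFloat(cgImage.width)
    let imageHeight = CGFloat(cgImage.height)
    let box = observation.boundingBox

    let boxWidth = box.width * imageWidth
    let boxHeight = box.height * imageHeight
    let boxOriginX = box.origin.x * imageWidth
    // Flip the y axis: Vision measures from the bottom, CGImage crops from the top.
    let boxOriginY = (1 - box.origin.y) * imageHeight - boxHeight

    let faceRect = CGRect(x: boxOriginX, y: boxOriginY, width: boxWidth, height: boxHeight)
    guard let croppedCGImage = cgImage.cropping(to: faceRect) else { return nil }
    return UIImage(cgImage: croppedCGImage)
}
```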
So that's the request. The main thing we still need to do is actually create a VNImageRequestHandler, supply it the original image, and then perform the Vision request, so let's do that next. We say self.requests = [faceRequest]; you could supply the face request directly, but we'll keep it in the requests array. Then let imageRequestHandler = VNImageRequestHandler with the cgImage overload, passing self.image.cgImage, and the options are going to be empty in this case since we're not supplying any options. Once we have that ready, we wrap it in a do and catch, printing error.localizedDescription in the catch, and say try imageRequestHandler.perform(self.requests). So this is basically our API. Now, back in the view controller, in the getFacesFromPicture method we created earlier, we call FaceCropper(image:) and supply our image, then call its crop method; that gives us back the face images and a result. We put a switch statement on the result. For case .success we say self.faces = faceImages, unwrapped, and then we call reloadData on the main thread, so DispatchQueue.main.async and inside it self.collectionView.reloadData(); that refreshes our collection view. For case .error(let error) we print the error, and for case .failed we print the failed message, whatever reason we failed with. Let's build. Okay, I think we should be done here, so let's run and see if there are any issues. Yeah, it took a little while, and the reason is that there are so many faces in this image that it takes a little time to find them all, but it came back with the results, and they are exactly the kind of results we expected. So I would ask you, as a challenge, to add some kind of waiting indicator to the app, just to let the user know that processing is going on. You could also create an image selection screen where you let the user pick an image and then perform this face detection and cropping on that user-chosen image. I'd encourage you to try it; if you have any questions you can always post them in the messages and I will reply back to you. So that's it: we have created our face cropper API, and we'll reuse it in current and future demos as needed. Thanks again for watching this video. I'll see you in the next one, where we'll tackle a new Vision technique and create a new app. Thanks.
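Before moving on to the landmark lessons, here is a sketch pulling the cropper pieces together, plus a rough idea of how the view controller might call it. The crop(completion:) and getFacesFromPicture names follow my reading of the transcript, and the code leans on the FaceCropper and cropFace sketches above, assumed to live in the same file:

```swift
import UIKit
import Vision

extension FaceCropper {

    // Runs face-rectangle detection, crops every detected face and reports
    // the result through the completion handler.
    func crop(completion: @escaping ([UIImage]?, Result) -> Void) {
        let faceRequest = VNDetectFaceRectanglesRequest { [weak self] request, error in
            guard let self = self else { return }
            guard let observations = request.results as? [VNFaceObservation] else {
                self.result = .error(error?.localizedDescription ?? "unknown error")
                completion(nil, self.result)
                return
            }
            if observations.count > 0 {
                self.result = .success
                observations.forEach { observation in
                    if let face = cropFace(in: self.image, for: observation) {
                        self.faces.append(face)
                    }
                }
                completion(self.faces, self.result)
            } else {
                self.result = .failed("face not found")
                completion(nil, self.result)
            }
        }

        requests = [faceRequest]
        guard let cgImage = image.cgImage else {
            completion(nil, .failed("image has no CGImage backing"))
            return
        }
        let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        do {
            try imageRequestHandler.perform(requests)
        } catch {
            print(error.localizedDescription)
        }
    }
}

// Rough idea of the call site in the view controller:
// FaceCropper(image: picture).crop { faceImages, result in
//     switch result {
//     case .success:
//         self.faces = faceImages ?? []
//         DispatchQueue.main.async { self.collectionView.reloadData() }
//     case .error(let message), .failed(let message):
//         print(message)
//     }
// }
```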
8. Face Landmarks and Contour Detection on Image Demo: Hello, everyone. This is the demo of the app that we're going to create today. We're basically going to do face landmark detection on video frames: we'll run a video, create a video preview layer, and then run face landmark detection on it. I hope you're excited; I'll see you in the next video and we'll build it together. Thank you. 9. Face Landmarks and Contour Detection on Image Part 1: Hello and welcome back. As discussed in the last video, we're going to be working on face landmark detection, so let's get started. I have a blank project open; I literally put nothing into it, I just created it, so we have an empty view controller and an empty storyboard, basically empty code. Let's clean up the file a little bit: I'll delete the boilerplate to make it a bit cleaner and make some extra space so we can work right here in the middle of the screen. The first thing we're going to do is create a preview layer. This is going to be an AVFoundation AVCaptureVideoPreviewLayer that projects the camera feed onto the screen. For that we need a UIView, so drag a UIView onto the screen and constrain it to cover the entire screen. Once that's done, I would normally assign a class to it, but we don't have that class created yet, so let's first create the preview class and write some logic in it. Call it PreviewView. This PreviewView is going to inherit from UIView; it imports UIKit, Vision, and AVFoundation, and we create class PreviewView: UIView. There are some properties we need, so let's start creating them. The first thing we need is a shape layer: basically, every shape you saw in the demo has its own underlying layer, and that's what we're going to keep track of here. So: private var maskLayer, like a mask that we put on the face, and it's going to be an array of CAShapeLayer, initialized right there. The next thing is a video preview layer, so we create videoPreviewLayer of type AVCaptureVideoPreviewLayer; this is the actual layer that shows the camera feed. We define it as a computed property that returns this view's layer cast as AVCaptureVideoPreviewLayer, because the default layer of this view is going to be an AVCaptureVideoPreviewLayer. Next we need an AV session, so let's create session. The entire camera feed we see is part of an AVCaptureSession, and if you want to turn that feed on or off you call session.startRunning() or session.stopRunning(); that's what we'll do here. So we create var session of optional type AVCaptureSession and give it a getter and a setter; in the getter we return videoPreviewLayer.session.
For the setter we say videoPreviewLayer.session = newValue. newValue is the default parameter that gets supplied when you set a property, so when you assign the session, the incoming value ends up in that newValue parameter and you can get hold of it just by referring to newValue. Alright, let's go back. Next, we override the layer class: override class var layerClass: AnyClass, and we return AVCaptureVideoPreviewLayer.self, so the backing layer for this particular view is always an AVCaptureVideoPreviewLayer. Now we'll start creating the layers. Each layer is going to contain the lines and the rectangle drawn around the face and its features. So we write a private function, createLayer, which accepts a parameter of type CGRect and returns a CAShapeLayer. We create a mask of type CAShapeLayer, which is going to be the return value, and set some properties on it: mask.frame is the rectangle we were supplied with, mask.cornerRadius is 10, mask.opacity is 0.75, mask.borderColor is UIColor.red.cgColor, and mask.borderWidth is set to 2.0. Then we append this mask to the array of masks we created earlier, call layer.insertSublayer(mask, at: 1) on the view's default layer, and return the mask. This mask is essentially the rectangle box around the face when a face is detected, the rectangle we're going to draw around the face. Let's make some room. Now we create a function called drawFaceboundingBox, and it expects a face observation of type VNFaceObservation. What's going to happen is that we'll be supplied an observation, a VNFaceObservation, which has a boundingBox representing the area of the face; that bounding box is the rectangle surrounding the face that was detected. We need that rect to draw in our layer, so we call createLayer and supply the face observation's rect. But if you do just that, there's a problem with coordinate spaces.
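Here is a sketch of PreviewView as built so far. The exact border width and opacity values are my reading of the audio, so treat them as assumptions:

```swift
import UIKit
import AVFoundation
import Vision

class PreviewView: UIView {

    // One CAShapeLayer per drawn shape (face boxes, landmark paths).
    private var maskLayer = [CAShapeLayer]()

    // The view's backing layer, typed as a capture preview layer.
    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        return layer as! AVCaptureVideoPreviewLayer
    }

    // Convenience accessor so callers can simply assign a session to the view.
    var session: AVCaptureSession? {
        get { return videoPreviewLayer.session }
        set { videoPreviewLayer.session = newValue }
    }

    // Make AVCaptureVideoPreviewLayer the backing layer class of this view.
    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }

    // Creates a red, rounded rectangle layer in the given rect and
    // registers it so it can be removed before the next frame is drawn.
    func createLayer(in rect: CGRect) -> CAShapeLayer {
        let mask = CAShapeLayer()
        mask.frame = rect
        mask.cornerRadius = 10
        mask.opacity = 0.75              // value assumed from the audio
        mask.borderColor = UIColor.red.cgColor
        mask.borderWidth = 2.0           // value assumed from the audio
        maskLayer.append(mask)
        layer.insertSublayer(mask, at: 1)
        return mask
    }
}
```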
When images are captured, they're in a different coordinate space: the camera delivers them flipped, with the origin starting from the lower-left corner, while iOS draws on screen from the top-left corner, where this point up here is (0, 0). So we have to normalize these points. Let's create a function that normalizes the dimensions, call it normalizedDimensions, which takes a faceObservation of type VNFaceObservation and returns a CGRect. We're going to apply a transform that has a scale, and then another transform that flips and translates by the height of the frame. So: let translate = CGAffineTransform.identity. Identity is basically an identity matrix that represents the empty state, no translation, no transformation being applied; that's what identity means, and we start from identity because it gives us a blank slate. We scale it by frame.width and frame.height; that scales the normalized size we got from Vision up to the size of the visible frame. Then let transform = CGAffineTransform(scaleX: 1, y: -1), which flips the coordinates, translated by zero and the negative of the frame height. Now we bind these together and return faceObservation.boundingBox.applying(translate).applying(transform). That creates a normalized rectangle, so back in drawFaceboundingBox we call normalizedDimensions, supply our face observation, and it returns the rectangle that fits right on the screen where the face is; then we create the layer with it. So we have created our bounding box, or face bounding box, which is the more appropriate name. That's one part done. Let's also create a draw function for landmarks, drawFaceWithLandmarks, taking a VNFaceObservation. Inside it we get the face bounds from normalizedDimensions with the face observation, and then create a face layer: faceLayer = createLayer with the face bounds. Now we're going to start drawing those landmarks. For that, let's first create a helper method that draws all the points. Call the helper drawFaceLandmark; it draws on a target layer, so we supply a CALayer, a faceLandmarkRegion of type VNFaceLandmarkRegion2D, and a closePath flag. closePath means that after we draw the last point, for example for an eye, we connect the last point back to the first point, just to close the shape; that's why it's there, and by default it is true.
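A sketch of the normalization and bounding-box drawing just described, assuming it lives in the same file as the PreviewView sketch above so createLayer is reachable:

```swift
import UIKit
import Vision

extension PreviewView {

    // Maps a normalized, bottom-left-origin Vision bounding box into
    // this view's top-left-origin coordinate space.
    func normalizedDimensions(for faceObservation: VNFaceObservation) -> CGRect {
        let translate = CGAffineTransform.identity
            .scaledBy(x: frame.width, y: frame.height)
        let transform = CGAffineTransform(scaleX: 1, y: -1)
            .translatedBy(x: 0, y: -frame.height)
        return faceObservation.boundingBox
            .applying(translate)
            .applying(transform)
    }

    // Draws the red rectangle around a detected face.
    func drawFaceboundingBox(face: VNFaceObservation) {
        let faceBounds = normalizedDimensions(for: face)
        _ = createLayer(in: faceBounds)
    }
}
```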
Inside the helper, we first take a rect from the target layer's frame, and we create a points array of CGPoint for the points we're going to draw. Then we say for i in 0..<faceLandmarkRegion.pointCount; that gives us the count of however many points were found for that particular landmark (for example, the nose region gives us all the points for the nose). Then let point = faceLandmarkRegion.normalizedPoints[i]; from the normalized points we grab the entry at index i, and we append it to our points array. Now that we have the points gathered, we need to create the actual layer that takes all these points and draws them. So let's create another function, drawPointsOnLayer, which takes a CGRect, landmarkPoints (where we pass our array of points), and a closePath parameter that defaults to false, and it returns a CAShapeLayer. Basically, what we're doing in this step is creating a layer, drawing onto it all the points we found for that face feature, and then returning that layer; we then add that layer to the target layer that was passed in. That's how we're able to draw multiple features on the same layer and, likewise, multiple faces if there's more than one face in the picture. So: let linePath = UIBezierPath(). First we do linePath.move(to:) with landmarkPoints.first, which moves the pointer to the first landmark point, and then we loop through the rest of the points: for point in landmarkPoints.dropFirst(). What dropFirst does is return the collection without the first element; because we already used the first element for move(to:), we start from the second point in the array, and that's what dropFirst gives us, a new collection that this for loop executes over. Inside we say linePath.addLine(to: point), and if closePath is set, then after the loop we also say linePath.addLine(to:) with landmarkPoints.first again. So now our path is ready; let's create the shape layer. let lineLayer = CAShapeLayer(); lineLayer.path = linePath.cgPath; lineLayer.fillColor = nil, because we want it to be transparent; lineLayer.opacity = 1.0, we want it fully opaque; lineLayer.strokeColor = UIColor.blue.cgColor, so the features are drawn in blue; and lineLayer.lineWidth gets a very small value, 0.02, because the path is still in normalized point space and will be scaled up, so anything thicker would be drawn far too heavy. Then we return this lineLayer. Back in our loop where we gathered all the points, the next thing we do is say let landmarkLayer = drawPointsOnLayer, and we pass the rect that we have, the points we just populated, and whatever was supplied for closePath.
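A sketch of that drawPointsOnLayer helper; the 0.02 line width is my best reading of the audio, so treat the exact value as an assumption:

```swift
import UIKit

extension PreviewView {

    // Builds a CAShapeLayer that strokes a path through the given
    // (still normalized) landmark points.
    func drawPointsOnLayer(rect: CGRect,
                           landmarkPoints: [CGPoint],
                           closePath: Bool = false) -> CAShapeLayer {
        guard let firstPoint = landmarkPoints.first else { return CAShapeLayer() }

        let linePath = UIBezierPath()
        linePath.move(to: firstPoint)
        // The first point was consumed by move(to:), so skip it here.
        for point in landmarkPoints.dropFirst() {
            linePath.addLine(to: point)
        }
        if closePath {
            linePath.addLine(to: firstPoint)
        }

        let lineLayer = CAShapeLayer()
        lineLayer.path = linePath.cgPath
        lineLayer.fillColor = nil                    // transparent fill
        lineLayer.opacity = 1.0
        lineLayer.strokeColor = UIColor.blue.cgColor
        lineLayer.lineWidth = 0.02                   // path is still normalized; value assumed
        return lineLayer
    }
}
```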
Now, again, we need to transform this, so we say landmarkLayer.transform = CATransform3DMakeAffineTransform, and the affine transform is CGAffineTransform.identity scaled by rect.width and negative rect.height, then translated by zero and negative one. Once our layer is scaled and positioned, we add it to the target layer: targetLayer.insertSublayer(landmarkLayer, at: 1). Now let's create one more function, a helper that removes all the old masks. We call it removeMask: for each mask in maskLayer we call mask.removeFromSuperlayer() to remove all the existing layers, and then maskLayer.removeAll() to make the array empty. So what we have now is this helper class that lets us create all these landmarks and the rectangles around the face. One last thing is left: if you remember, in drawFaceWithLandmarks we only created the face bounds and the face layer, so now we should actually draw some face landmarks using this helper. We call drawFaceLandmark, using the version with the closePath parameter where we need it: the target layer is faceLayer, the region is faceObservation.landmarks?.nose, unwrapped, and closePath is false. The next one is noseCrest; we don't need to supply a closePath value there, it takes the default. After that comes medianLine; the median line is basically a line that goes through the middle of the nose. If you remember the demo, it runs down the middle of the face, sort of the face's median. Next the left eye, then the left pupil, then the left eyebrow. Now we move to the right side: copy those three and change them to rightEye, rightPupil, and rightEyebrow. Then we have the lips, two of them actually, so copy again and use innerLips and outerLips. We also want to draw the contour of the face that goes from your chin to your ear, so we add faceContour, and for this one we don't want to close the path, so closePath is false. Alright, those are all of our helper methods. Now let's go to the storyboard, select our view, and assign its class to PreviewView. Cool. So this is it for this video. In the next one we're going to work on the actual video feed part: we'll take the video feed, show it on the screen, and also use this preview class to perform face rectangle detection and face feature detection. I hope you enjoyed this video, and I'll see you in the next one.
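Summing up the drawing side of PreviewView before the controller work starts, here is a sketch of drawFaceLandmark, removeMask, and drawFaceWithLandmarks, building on the earlier PreviewView sketches and assumed to live in the same file:

```swift
import UIKit
import Vision

extension PreviewView {

    // Collects a landmark region's normalized points, turns them into a
    // stroked layer and attaches that layer to the face layer.
    func drawFaceLandmark(on targetLayer: CALayer,
                          faceLandmarkRegion: VNFaceLandmarkRegion2D?,
                          closePath: Bool = true) {
        guard let region = faceLandmarkRegion else { return }
        let rect = targetLayer.frame

        var points = [CGPoint]()
        for i in 0..<region.pointCount {
            points.append(region.normalizedPoints[i])
        }

        let landmarkLayer = drawPointsOnLayer(rect: rect,
                                              landmarkPoints: points,
                                              closePath: closePath)
        // Scale the normalized path up to the face rect and flip it vertically.
        landmarkLayer.transform = CATransform3DMakeAffineTransform(
            CGAffineTransform.identity
                .scaledBy(x: rect.width, y: -rect.height)
                .translatedBy(x: 0, y: -1)
        )
        targetLayer.insertSublayer(landmarkLayer, at: 1)
    }

    // Removes everything drawn for the previous frame.
    func removeMask() {
        for mask in maskLayer {
            mask.removeFromSuperlayer()
        }
        maskLayer.removeAll()
    }

    // Draws the bounding box plus every landmark region for one face.
    func drawFaceWithLandmarks(face: VNFaceObservation) {
        let faceBounds = normalizedDimensions(for: face)
        let faceLayer = createLayer(in: faceBounds)

        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.nose, closePath: false)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.noseCrest)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.medianLine)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.leftEye)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.leftPupil)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.leftEyebrow)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.rightEye)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.rightPupil)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.rightEyebrow)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.innerLips)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.outerLips)
        drawFaceLandmark(on: faceLayer, faceLandmarkRegion: face.landmarks?.faceContour, closePath: false)
    }
}
```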
10. Face Landmarks and Contour Detection on Image Part 2: Alright, hello and welcome back. In the last video we created this PreviewView and some helper methods to do not only our face rectangle detection and drawing, but also our face landmark drawing. So today we're going to start implementing the view controller; let's get started. The first thing I want to do is create the IBOutlet: @IBOutlet weak var previewView: PreviewView!, which just holds the reference to our PreviewView. Before we forget, let's connect it to the view in the storyboard. Connected? Awesome. Next we're going to create a whole bunch of variables; these are mostly related to capturing and showing the camera feed in the view. Create a variable for the device position of type AVCaptureDevice.Position. The reason it isn't resolving is that we haven't imported anything yet, so import AVFoundation and import Vision. We set the position to .back, the back camera; the other option is the front camera. Then we need a session, which again is an AVCaptureSession, initialized here. Next, a variable to keep the state of the session: var isSessionRunning = false. Then we create a session queue, sessionQueue, a DispatchQueue with a label, call it "session queue" or anything you want, really; the label is just something you can see while debugging to know which queue things are happening on. Next we create var setupResult of type SessionSetupResult, and you can see nothing like that exists yet. It's a very simple enum that keeps the state of our session: has it been successfully configured, is it not authorized by the user to use the camera, or did configuration somehow fail, for example if the user did not complete the authorization process. So let's create an extension for this view controller and define enum SessionSetupResult with three cases: success, notAuthorized, and configurationFailed. We use it here: var setupResult: SessionSetupResult = .success. Next we create a device input, var videoDeviceInput: AVCaptureDeviceInput!. Then we declare the video device output, which is an AVCaptureVideoDataOutput. We also need a queue for it: videoDataOutputQueue, a DispatchQueue with the label "video data output queue". Now let's create an array to hold all of our Vision-related requests, all the requests we want to perform, which in this case means face rectangle detection and face feature detection: var requests = [VNRequest](). Next we create a face detection request, var faceDetectionRequest: VNRequest!; this is what we're going to set as the request. In viewDidLoad, the first thing we do is take our previewView and assign the current session to it: previewView.session = session. Then faceDetectionRequest = VNDetectFaceLandmarksRequest with a completion handler. The reason we're using the face landmarks request is that with face landmarks you also get the face rectangle detection along with it.
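Here is a sketch of the view controller state described so far. The class name ViewController and the placement of the enum inside the class are my assumptions, and the handleFaceLandmarks stub is only a placeholder that a later lesson's sketch fills in:

```swift
import UIKit
import AVFoundation
import Vision

class ViewController: UIViewController {

    @IBOutlet weak var previewView: PreviewView!

    // Tracks how session configuration went.
    enum SessionSetupResult {
        case success
        case notAuthorized
        case configurationFailed
    }

    var devicePosition: AVCaptureDevice.Position = .back

    let session = AVCaptureSession()
    var isSessionRunning = false
    let sessionQueue = DispatchQueue(label: "session queue")
    var setupResult: SessionSetupResult = .success

    var videoDeviceInput: AVCaptureDeviceInput!
    let videoDataOutput = AVCaptureVideoDataOutput()
    let videoDataOutputQueue = DispatchQueue(label: "video data output queue")

    // Vision requests performed on every frame.
    var requests = [VNRequest]()
    var faceDetectionRequest: VNRequest!

    override func viewDidLoad() {
        super.viewDidLoad()
        previewView.session = session
        // A landmarks request also reports the face rectangles.
        faceDetectionRequest = VNDetectFaceLandmarksRequest(completionHandler: self.handleFaceLandmarks)
        // setupVision(), the permission check and configureSession()
        // get wired up in the following lessons' sketches.
    }

    // Placeholder; a later lesson's sketch provides the real handler.
    func handleFaceLandmarks(request: VNRequest, error: Error?) { }
}
```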
Okay, so let's create that completion handler. You have two choices: you can write it inline, but I'm going to show you another way. We create a separate function, func handleFaceLandmarks. How do you know what it should take? The completion handler requires these two things, so the function takes a request of type VNRequest and an error of type Error, the error being optional. Then, back where we create the request, we can say the completion handler is self.handleFaceLandmarks, and Vision will use it just like an inline closure. So this is how you can supply your own delegate, I mean your own completion handler, as a function, because essentially a closure is a function. Next, we also need to set up our Vision side, so let's declare func setupVision(). And once we have Vision set up, we also need to set up our camera, that is, ask the user for permission, so declare a function for the camera permission check as well, and call both of these from viewDidLoad. Now let's do the camera permission check first. We switch on AVCaptureDevice.authorizationStatus(for: .video) and supply the cases. For case .authorized, the user has already authorized us to use the camera, so we don't need to do anything and we just break out of there. Then we check for .notDetermined; for this one we say sessionQueue.suspend(), pausing the session queue, and then AVCaptureDevice.requestAccess(for: .video) with a completion handler that tells us whether the user granted access. Based on that, if access is not granted we say self.setupResult = .notAuthorized; this line was complaining earlier because we hadn't included everything yet. Once we're past that, we say self.sessionQueue.resume(). The remaining cases are handled by the default: we simply say setupResult = .notAuthorized, because essentially all the other states amount to more or less that.
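A sketch of that permission check; the method name checkCameraAuthorization is mine (the course just refers to it as setting up the camera), and it builds on the view controller sketch above:

```swift
import AVFoundation

extension ViewController {

    // Verifies (or requests) camera permission before the session is configured.
    func checkCameraAuthorization() {
        switch AVCaptureDevice.authorizationStatus(for: .video) {
        case .authorized:
            break                                    // already allowed
        case .notDetermined:
            // Pause session setup until the user answers the permission prompt.
            sessionQueue.suspend()
            AVCaptureDevice.requestAccess(for: .video) { granted in
                if !granted {
                    self.setupResult = .notAuthorized
                }
                self.sessionQueue.resume()
            }
        default:
            setupResult = .notAuthorized             // denied or restricted
        }
    }
}
```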
Once we have our authorization sorted, the app can basically create a session, go through the configuration process, and then start running that session. So we say sessionQueue.async, and inside it we call a method, typically named configureSession, so declare func configureSession(), which configures our AVCaptureSession, and in the queue block say self.configureSession(). Now we need to set up the session for capture. First we check: if setupResult is not equal to .success, we return from here; otherwise we proceed. Then session.beginConfiguration() and session.sessionPreset = .high; the session preset basically determines the quality at which your photo or video is captured. Now what we do is find the devices, pick the appropriate camera, and add it as an input, and we wrap this in a do and catch, printing the error in the catch. We say var defaultVideoDevice: AVCaptureDevice?. Then we do some checks. First, if the device supports the dual camera, we use that: if let dualCameraDevice = AVCaptureDevice.default with the device type .builtInDualCamera, the media type video, and the position .back, then defaultVideoDevice = dualCameraDevice; nothing else to do here. Else, if we have a back camera device, again AVCaptureDevice.default, this time with the device type .builtInWideAngleCamera, media type video, position .back, then defaultVideoDevice = backCameraDevice. The last one is the front camera: AVCaptureDevice.default with .builtInWideAngleCamera, media type video, position .front, and in that case defaultVideoDevice = frontCameraDevice. Let's build. It failed with some errors because the if-let chain and the do-catch braces got tangled; there were some extra braces and a missing one around the do-catch block, so let's straighten those out. Alright, that resolved it. Cool. Next we add the input device, which is basically where the input comes from: let videoDeviceInput = try AVCaptureDeviceInput(device:), passing defaultVideoDevice, unwrapped. Then we check if session.canAddInput(videoDeviceInput), the first check being whether we can add this input to the session, and if we can, we do just that: session.addInput(videoDeviceInput). Next we assign self.videoDeviceInput = videoDeviceInput; the local variable we just created gets assigned to the property we declared way up above, so the one on the right refers to the local one and the one on the left refers to the stored property.
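Here is a sketch of the first half of configureSession as described so far. Where exactly the course adds the output and commits the configuration is covered in the next lesson, so the placeholder comment near the end is mine:

```swift
import AVFoundation

extension ViewController {

    // Preset, camera selection and input; the output half follows later.
    func configureSession() {
        guard setupResult == .success else { return }

        session.beginConfiguration()
        session.sessionPreset = .high

        do {
            var defaultVideoDevice: AVCaptureDevice?

            if let dualCameraDevice = AVCaptureDevice.default(.builtInDualCamera,
                                                              for: .video, position: .back) {
                defaultVideoDevice = dualCameraDevice
            } else if let backCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                                     for: .video, position: .back) {
                defaultVideoDevice = backCameraDevice
            } else if let frontCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                                      for: .video, position: .front) {
                defaultVideoDevice = frontCameraDevice
            }

            let videoDeviceInput = try AVCaptureDeviceInput(device: defaultVideoDevice!)
            if session.canAddInput(videoDeviceInput) {
                session.addInput(videoDeviceInput)
                self.videoDeviceInput = videoDeviceInput
                // The initial preview orientation is applied on the main queue;
                // see the orientation sketch below.
            } else {
                print("Could not add video device input to the session")
                setupResult = .configurationFailed
                session.commitConfiguration()
                return
            }
        } catch {
            print("Could not create video device input: \(error)")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }

        // The video data output gets added here in the next lesson's sketch, then:
        session.commitConfiguration()
    }
}
```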
Moving forward, what we do now is check the orientation of the device, basically the current orientation the user is holding the phone in. For that we dispatch to the main thread, because we're on a background queue right now, so DispatchQueue.main.async. Let's check the status bar orientation; this is the way to check which orientation you're in: UIApplication.shared.statusBarOrientation. Then we create a variable called initialVideoOrientation of type AVCaptureVideoOrientation and initialize it with .portrait, and we check: if statusBarOrientation is not equal to .unknown, then we want to say if let videoOrientation = statusBarOrientation.videoOrientation. As you can see, we don't have a videoOrientation property for this, and in order to support video orientation we'll have to take our status bar orientation and convert it into a video orientation. If you Option-click, statusBarOrientation is a UIInterfaceOrientation, while the video orientation is an AVCaptureVideoOrientation, which indicates the current video orientation. So there are two different types and we have to create some kind of mapping between them. Let's create an extension on UIInterfaceOrientation (the status bar orientation is of that type, so we can attach the extension to it) and define a computed property called videoOrientation of type optional AVCaptureVideoOrientation. Inside, we switch on self, the interface orientation: for case .portrait, if the interface is in portrait mode, we return .portrait; for .portraitUpsideDown we return .portraitUpsideDown; for .landscapeLeft we return .landscapeLeft; for .landscapeRight we return .landscapeRight; and in the default case we return nil. The reason we define it as an optional is so we can put it in an if let: if the orientation is found and is not nil, we move inside that if-let block. Inside, we set initialVideoOrientation, which we had defaulted to .portrait, to videoOrientation. Cool. Then, once we're done with this if, we say self.previewView.videoPreviewLayer.connection?.videoOrientation = initialVideoOrientation; this is where we use our initialVideoOrientation. All of this is just to support different orientations; the code we're writing here isn't related to Vision, it's related to AVFoundation, so if you're creating your own custom camera or some other app that uses the camera, you can use this code as-is.
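A sketch of that mapping extension, plus (as comments) where the lesson places the initial-orientation code inside configureSession:

```swift
import UIKit
import AVFoundation

// Maps the status bar orientation onto the capture video orientation.
extension UIInterfaceOrientation {
    var videoOrientation: AVCaptureVideoOrientation? {
        switch self {
        case .portrait:           return .portrait
        case .portraitUpsideDown: return .portraitUpsideDown
        case .landscapeLeft:      return .landscapeLeft
        case .landscapeRight:     return .landscapeRight
        default:                  return nil
        }
    }
}

// Inside configureSession(), right after the input was added:
// DispatchQueue.main.async {
//     let statusBarOrientation = UIApplication.shared.statusBarOrientation
//     var initialVideoOrientation: AVCaptureVideoOrientation = .portrait
//     if statusBarOrientation != .unknown,
//        let videoOrientation = statusBarOrientation.videoOrientation {
//         initialVideoOrientation = videoOrientation
//     }
//     self.previewView.videoPreviewLayer.connection?.videoOrientation = initialVideoOrientation
// }
```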
Next: we're basically done with the input, and outside that if block we should put an else block saying that if we cannot add the input, we print something like "Could not add video device input to the session", set setupResult = .configurationFailed, call session.commitConfiguration(), because we started with beginConfiguration and should commit so it takes effect, and return from here. That's our else block. For the catch we print the error, also set setupResult to .configurationFailed, commit the configuration, and return. Once we're out of this do-catch block, we've added our input. The next thing we want to do is add a video output. We already declared videoDataOutput as an AVCaptureVideoDataOutput, so now we set videoDataOutput.videoSettings; this is basically what kind of video gets projected onto the output. Before, we were taking the input in; now we're setting what we pass on to the output stream. The settings dictionary uses kCVPixelBufferPixelFormatTypeKey, cast as String, mapped to the pixel format kCVPixelFormatType_32BGRA; that's the format we pass on to the output stream. Now we say: if session.canAddOutput(videoDataOutput), then session.addOutput(videoDataOutput), then videoDataOutput.alwaysDiscardsLateVideoFrames = true, because we don't want to show or process late frames, frames that weren't able to make it onto the output stream in time, and then videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue), where the queue is the video data output queue on which this stream is supplied. The reason Xcode is complaining is that we don't conform to that delegate yet; we'll do it in a minute. In the else block we print that we could not add the output, set setupResult = .configurationFailed, commit the configuration, and return out of it. After all of that we say session.commitConfiguration(). If we've made it here, we have successfully finished setting up the input and the output. Let's create an extension on the view controller to make that error go away; the compiler is saying we don't conform to that particular delegate, which is where we're going to receive our output stream and basically our image data, so let's conform to it. That seems like a good stopping point for today. Next time we'll continue with some orientation-related configuration: we're going to set up the EXIF data, the orientation information about the image that we can attach, which we're actually going to need for our face detection. I hope you guys enjoyed this video, and I'll see you in the next one.
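The output half, sketched as its own small helper. Splitting it out into addVideoDataOutput() is my refactor; in the course this code sits inline in configureSession before commitConfiguration(), and it assumes the delegate conformance that a later lesson's sketch adds:

```swift
import AVFoundation

extension ViewController {

    // Adds the video data output; call between beginConfiguration()
    // and commitConfiguration().
    func addVideoDataOutput() {
        videoDataOutput.videoSettings =
            [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32BGRA)]

        if session.canAddOutput(videoDataOutput) {
            session.addOutput(videoDataOutput)
            // Skip frames that arrive too late to be processed in time.
            videoDataOutput.alwaysDiscardsLateVideoFrames = true
            videoDataOutput.setSampleBufferDelegate(self, queue: videoDataOutputQueue)
        } else {
            print("Could not add video data output to the session")
            setupResult = .configurationFailed
            session.commitConfiguration()
            return
        }
    }
}
```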
11. Face Landmarks and Contour Detection on Image Part 3: Alright. We ran the app and found that the program crashes, because we forgot to add the camera usage description in the Info.plist, and you can see Xcode complaining about exactly that. So let's copy that key, the camera usage description, go into the Info.plist, add it, and give it a description, something like "Camera access for face detection", because this string is shown to the user. That should satisfy the requirement, so let's build and run again and see. Hello and welcome back. In this video we're going to continue the implementation for our video feed, starting with the EXIF orientation function, so let's create that: exifOrientation(from deviceOrientation:), returning a UInt32. What we're doing here is taking the device's orientation and creating EXIF orientation data out of it, which we need for our video feed, basically to tell the request handler what the orientation of a particular image is; we'll include that EXIF information with each frame. First we create an enum backed by UInt32 with the EXIF cases: topLeft, topRight, bottomRight, bottomLeft, leftTop, rightTop, rightBottom, and leftBottom, which is the last one. Once we have this enum, we declare var exifOrientation and switch on UIDevice.current.orientation, determining our EXIF orientation based on the device's orientation. For .portraitUpsideDown the EXIF orientation is .leftBottom. For .landscapeLeft it depends on the camera position: if the device position is .front then it's .bottomRight, otherwise it's .topLeft. The reason is that if you're using the front camera, the image tends to be mirrored, and we have to adjust the image orientation accordingly. For .landscapeRight we say: if devicePosition is .front then it's .topLeft, otherwise .bottomRight. In all the other cases, handled by the default, the EXIF orientation is .rightTop, so a portrait frame from the camera gets the right-top orientation. Then we return exifOrientation.rawValue. Let's build and make sure everything is correct. Okay, it builds successfully. We've already handled configuring the session, so let's go up to the top: we have set up the authorization and we have configured the session. Now let's take stock. We haven't had a chance to set up Vision yet, so let's set up Vision: in setupVision the only thing we really need to do is say self.requests = [faceDetectionRequest], taking the face detection request we initialized up there.
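A sketch of that EXIF helper. The enum spelling and the exact case chosen per device orientation are my reading of the audio (they follow the usual EXIF numbering 1 through 8), so double-check them against the course source:

```swift
import UIKit
import AVFoundation

extension ViewController {

    // EXIF orientation values as expected by CGImagePropertyOrientation.
    enum ExifOrientation: UInt32 {
        case topLeft = 1
        case topRight
        case bottomRight
        case bottomLeft
        case leftTop
        case rightTop
        case rightBottom
        case leftBottom
    }

    // Translates the physical device orientation (and camera position)
    // into an EXIF orientation raw value for the request handler.
    func exifOrientation(from deviceOrientation: UIDeviceOrientation) -> UInt32 {
        let exifOrientation: ExifOrientation
        switch deviceOrientation {
        case .portraitUpsideDown:
            exifOrientation = .leftBottom
        case .landscapeLeft:
            // The front camera delivers mirrored frames, so flip the mapping.
            exifOrientation = devicePosition == .front ? .bottomRight : .topLeft
        case .landscapeRight:
            exifOrientation = devicePosition == .front ? .topLeft : .bottomRight
        default:
            exifOrientation = .rightTop
        }
        return exifOrientation.rawValue
    }
}
```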
We pass that request in right there in setupVision. Okay, so let's make sure of where we are: we have the camera authorization set up, we have configureSession being called after the authorization, and we have the EXIF orientation helper. Now let's wire up viewDidLoad and viewWillAppear. viewDidLoad is already set: the session is assigned, the request is initialized, Vision is set up, and our camera authorization is set up (I'm actually going to move that call down a little bit). Let's create viewWillAppear. The first thing we do is check whether we successfully configured our session, and if we did, we start running it. So: super.viewWillAppear(animated), then sessionQueue.async, and in there we write a switch statement on self.setupResult, because that's where we store what happened during session configuration; in configureSession we set all the states. We initialize it with .success, assuming configuration will succeed, and then we override it based on whatever goes wrong, so if nothing failed it stays .success. For case .success we simply say self.session.startRunning() and self.isSessionRunning = self.session.isRunning; we set this flag based on whatever the session's state is. That's it for success. Then case .notAuthorized: for this one we show an alert, basically telling the user what happened. If we're not authorized, the message is going to be "Please allow camera access to do face detection". We create a UIAlertController using the initializer that takes title, message, and preferredStyle: the title is "Face Detection", the message is the message we just created, and the preferred style is .alert. Next we add an action to the alert: a UIAlertAction whose title is "OK", whose style is .cancel, and whose handler is nil. Then another addAction: a UIAlertAction with the title "Open Settings", and this is where we open the settings page for this app within the Settings app on the iPhone. In its handler we say UIApplication.shared.open. Since that parameter takes a URL, we create a URL with the initializer that takes a string and pass UIApplication.openSettingsURLString, which is the URL string for this app's page in Settings, and we unwrap it. For the options we supply an empty dictionary, and the completion handler is nil.
What this line of code does is open the settings page that belongs to our app inside the Settings app, so the user can go there directly; if they haven't allowed the camera, which is why we're presenting this alert, they can turn camera access on or off right from that screen. Then we present it: self.present with the alert controller, animated true, and a nil completion. Because all of this shows up in the UI, we need to move to the main thread, so we wrap it in DispatchQueue.main.async and put the code in there. Now case .configurationFailed, one level down. We do something similar here, so copy and paste, but we present a simpler message: "Unable to capture video due to configuration issues". The reason we call this configurationFailed is that we set it when an error was thrown or something like that, which is why the message says unable to capture due to configuration issues. Again it's a UIAlertController titled "Face Detection" with the .alert style, and it just has the OK action; we take the Open Settings action out, and then we present this alert controller for that state. Now let's handle viewWillDisappear, because we want to stop our session there. If self.setupResult == .success, meaning we were able to configure our session and the session was available, then session.stopRunning() and self.isSessionRunning = session.isRunning. At the end we call super.viewWillDisappear(animated) so the system can handle whatever other things it wants to handle. We also need to handle transitions, for when the user changes orientation, so we override viewWillTransition(to:with:). The first thing we do there is call super and let the system figure out whatever it wants to do with the new size and coordinator. Then we check for the connection: if let videoPreviewLayerConnection = previewView.videoPreviewLayer.connection. If there's a connection available, we say let deviceOrientation = UIDevice.current.orientation to get the orientation, and then we want to guard that we actually have a new video orientation from it and that the device orientation is portrait or landscape; otherwise we return from here. But if you look, UIDeviceOrientation only gives us things like isLandscape and isPortrait; we don't yet have a way to transform the device orientation into a video orientation, so we'll need another extension for that.
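Collecting the lifecycle handling just described into one place, here is a sketch. I pull the bodies into two plain helpers, startSession() and stopSession(), called from viewWillAppear and viewWillDisappear respectively; in the course the code lives directly inside those overrides:

```swift
import UIKit
import AVFoundation

extension ViewController {

    // Call from viewWillAppear(_:) after super.viewWillAppear(animated).
    func startSession() {
        sessionQueue.async {
            switch self.setupResult {
            case .success:
                self.session.startRunning()
                self.isSessionRunning = self.session.isRunning

            case .notAuthorized:
                DispatchQueue.main.async {
                    let message = "Please allow camera access to do face detection"
                    let alertController = UIAlertController(title: "Face Detection",
                                                            message: message,
                                                            preferredStyle: .alert)
                    alertController.addAction(UIAlertAction(title: "OK", style: .cancel, handler: nil))
                    alertController.addAction(UIAlertAction(title: "Open Settings", style: .default) { _ in
                        // Jump straight to this app's page in the Settings app.
                        UIApplication.shared.open(URL(string: UIApplication.openSettingsURLString)!,
                                                  options: [:],
                                                  completionHandler: nil)
                    })
                    self.present(alertController, animated: true, completion: nil)
                }

            case .configurationFailed:
                DispatchQueue.main.async {
                    let alertController = UIAlertController(
                        title: "Face Detection",
                        message: "Unable to capture video due to configuration issues",
                        preferredStyle: .alert)
                    alertController.addAction(UIAlertAction(title: "OK", style: .cancel, handler: nil))
                    self.present(alertController, animated: true, completion: nil)
                }
            }
        }
    }

    // Call from viewWillDisappear(_:) before super.viewWillDisappear(animated).
    func stopSession() {
        if setupResult == .success {
            session.stopRunning()
            isSessionRunning = session.isRunning
        }
    }
}
```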
So let's create another extension, this time on the device-orientation side, UIDeviceOrientation, and paste the mapping code we wrote for the interface orientation there, then adjust it. The change we need: portrait maps to portrait, portraitUpsideDown stays portraitUpsideDown, and the landscape cases get flipped, so if the device is landscapeRight the video orientation is landscapeLeft, and if it's landscapeLeft the video orientation is landscapeRight. Okay, set. Now go back to viewWillTransition. Once we have the video orientation, we write the guard: guard let newVideoOrientation = deviceOrientation.videoOrientation, and, separated by a comma, deviceOrientation.isPortrait || deviceOrientation.isLandscape; if either of those fails we return from there. And then we set videoPreviewLayerConnection.videoOrientation = newVideoOrientation. So it looks like we've set up everything we need to handle the orientations. I think we're missing something... ah, there was an extra parenthesis, that's what was going on. Let's build and see if that resolves it. Okay, cool, it's building successfully now. One thing I want to do is take these two methods and move them into their own extension, so go to the bottom, create a new extension on the view controller, and paste them there. Now, we've already set up Vision, so let's handle the faces and the face landmarks; those are the two things left. First handleFaces: this is something we can switch back and forth with the landmarks handler, but I want to include it for the sake of completeness, and the request signature is exactly the same, so we can copy it. In this one we say DispatchQueue.main.async, and inside we write guard let results = request.results as? [VNFaceObservation] else return, just to make sure we actually got face observations back as the result of this request. Then we say self.previewView.removeMask() to remove any existing layers we created, and for each face in the results we call self.previewView.drawFaceboundingBox(face:). Now something similar for handleFaceLandmarks: again make sure you're on the main thread, guard that request.results casts to [VNFaceObservation], else return, then self.previewView.removeMask(), and for each face in the results call self.previewView.drawFaceWithLandmarks(face:). Now we need to supply our requests with image data, and that happens in the delegate method where we receive all the image data. So in the AVCaptureVideoDataOutputSampleBufferDelegate extension we implement captureOutput, where we get the output, the sample buffer, and the connection. We say: guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer), and this is also where we set the EXIF orientation, so, as the second condition, let exifOrientation = CGImagePropertyOrientation(rawValue:) with self.exifOrientation(from: UIDevice.current.orientation), which gets us the actual orientation; otherwise we return. And watch the punctuation here: the guard conditions are separated by a comma, and earlier I had an extra parenthesis in there.
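The device-orientation mapping as a sketch, with a small helper showing where the lesson uses it; wrapping the viewWillTransition body in updatePreviewOrientation() is my packaging, not the course's:

```swift
import UIKit
import AVFoundation

// Maps the physical device orientation onto the capture video orientation.
// Note that the landscape cases are deliberately flipped.
extension UIDeviceOrientation {
    var videoOrientation: AVCaptureVideoOrientation? {
        switch self {
        case .portrait:           return .portrait
        case .portraitUpsideDown: return .portraitUpsideDown
        case .landscapeLeft:      return .landscapeRight
        case .landscapeRight:     return .landscapeLeft
        default:                  return nil
        }
    }
}

extension ViewController {
    // Call from viewWillTransition(to:with:) after calling super.
    func updatePreviewOrientation() {
        guard let videoPreviewLayerConnection = previewView.videoPreviewLayer.connection else { return }
        let deviceOrientation = UIDevice.current.orientation
        guard let newVideoOrientation = deviceOrientation.videoOrientation,
              deviceOrientation.isPortrait || deviceOrientation.isLandscape else { return }
        videoPreviewLayerConnection.videoOrientation = newVideoOrientation
    }
}
```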
Okay, let's format this code. Alright, so once we have that, what we do next is create request options: var requestOptions of type [VNImageOption: Any], initialized empty. Then we try to get the camera intrinsic data: if let cameraIntrinsicData = CMGetAttachment, where the target for the attachment is the sample buffer in this case, the key is kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, the camera intrinsic matrix, and the last parameter, attachmentModeOut, is nil. If we get it, we set requestOptions = [.cameraIntrinsics: cameraIntrinsicData]. Now we create the image request handler, a VNImageRequestHandler, choosing the overload that expects a CVPixelBuffer; we pass our pixel buffer, the orientation, which is the EXIF orientation we computed, and the request options. Then, in a do and catch, we say try imageRequestHandler.perform, performing the requests we collected, and the catch simply prints the error. Cool. That looks like everything, so let's build and run and see it in action. Alright, here we have the demo running, and as you can see it detects not only the face rectangle but also the face landmarks, and it works on smaller faces and slightly slanted faces too. So this is it. Thanks again for joining me in creating this; I hope you enjoyed the video, and I'll see you in the next one. Thank you. 12. Face Landmark on Image Demo: Hello everyone, hope you are all doing great. Today we're going to look at a new technique in the Vision framework, introduced with iOS 11, where an image is processed to detect not only faces but face landmarks like the nose, mouth, and eyes, and this is the demo of the app we're going to create. We'll work on a still image for this project, and in the next project within the same series we'll work on the video feed, so we can do the same landmark detection on video frames. Face detection and landmark detection used to be done with the classic Viola-Jones style algorithms; now, with deep learning so advanced and so popular, Apple has moved away from that and uses deep-learning-based algorithms instead. If you want to compare the difference, you can put Core Image's face detection next to the Vision framework's face detection: Vision detects faces that are small or at different distances, even faces in the background. In a scene with a cluttered background and several subtle faces, it was able to detect a face even behind the main subject. It works really well, and it's all based on deep learning. So that's the app we're creating today. Alright, I'll see you in the next video and we'll get started. Thank you.
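Before moving on to the still-image lessons, here is a sketch that wraps up the video-feed side: the two Vision handlers and the capture delegate described in the last two videos. It supersedes the handleFaceLandmarks placeholder from the earlier view-controller sketch and assumes the other sketches in this series are in scope:

```swift
import UIKit
import AVFoundation
import Vision

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate {

    // Draws only the bounding boxes; kept for completeness, since the
    // request can be switched between this handler and the landmarks one.
    func handleFaces(request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results as? [VNFaceObservation] else { return }
            self.previewView.removeMask()
            for face in results {
                self.previewView.drawFaceboundingBox(face: face)
            }
        }
    }

    func handleFaceLandmarks(request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let landmarks = request.results as? [VNFaceObservation] else { return }
            self.previewView.removeMask()
            for face in landmarks {
                self.previewView.drawFaceWithLandmarks(face: face)
            }
        }
    }

    // Called for every captured frame on the video data output queue.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              let exifOrientation = CGImagePropertyOrientation(
                  rawValue: self.exifOrientation(from: UIDevice.current.orientation)) else { return }

        var requestOptions: [VNImageOption: Any] = [:]
        if let cameraIntrinsicData = CMGetAttachment(sampleBuffer,
                                                     key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                                     attachmentModeOut: nil) {
            requestOptions = [.cameraIntrinsics: cameraIntrinsicData]
        }

        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                        orientation: exifOrientation,
                                                        options: requestOptions)
        do {
            try imageRequestHandler.perform(requests)
        } catch {
            print(error)
        }
    }
}
```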
13. Face Landmark on Image Part 1: Hello and welcome back, and let's dive right in. The first thing we're going to do is create a new helper class; this helper class is going to be our detector, so we'll call it FaceDetector. It imports two libraries, UIKit and Vision, and then we create class FaceDetector. We're going to need two functions. One is a function that takes points and draws those points on the image; it's a helper function, so let's create it first. Then we'll create our main method, which is the one called from outside, from view controllers. Here again we're doing an API-style class design, where we expose only the method that needs to be exposed to external callers. So let's create private func drawOnImage, and its parameters are: source, the source image, of type UIImage; boundingBox, the bounding box we need to draw; and faceLandmarkRegions, an array of VNFaceLandmarkRegion2D. All of this is supplied to us, and all we need to do is draw these points onto the image. The first thing we do is call UIGraphicsBeginImageContextWithOptions; whatever we draw inside this image context basically gets written onto the image. We pass source.size, opaque is false, and scale is 1. Then we grab the context with UIGraphicsGetCurrentContext(), and we apply a translation, because this image is in a different coordinate space and we're going to convert it into the iOS coordinate space: translateBy x zero, y source.size.height. Then, after unwrapping the context so we don't have to scatter question marks everywhere, context.scaleBy x 1, y negative 1, which flips the y axis; this puts the image into the iOS context where the (0, 0) point starts from the top left instead of from the bottom left. Then we say context.setBlendMode, which is how we want to blend the drawing with the image, and we pick CGBlendMode.colorBurn. Then context.setLineJoin(.round) and context.setLineCap(.round). Next we turn on anti-aliasing: context.setShouldAntialias(true) and context.setAllowsAntialiasing(true). Now we compute the scaled rectangle width: rectWidth = source.size.width * boundingBox.size.width, and then the rect height, which is source.size.height * boundingBox.size.height. Next, let's put together our rect: let rect = CGRect, taking the overload with x, y, width, and height, starting from (0, 0) with source.size.width and source.size.height; this is the rectangle where the image is going to be drawn.
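A small sketch of just that flipped-context setup. Pulling it into a makeFlippedContext(for:) helper is my packaging; in the course these calls sit inline at the top of drawOnImage:

```swift
import UIKit

// Prepares an image context whose coordinates are flipped so the
// bottom-left-origin math used in the lesson can be applied directly.
func makeFlippedContext(for source: UIImage) -> CGContext? {
    UIGraphicsBeginImageContextWithOptions(source.size, false, 1)
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    context.translateBy(x: 0, y: source.size.height)
    context.scaleBy(x: 1, y: -1)                 // flip the y axis

    context.setBlendMode(.colorBurn)
    context.setLineJoin(.round)
    context.setLineCap(.round)
    context.setShouldAntialias(true)
    context.setAllowsAntialiasing(true)
    return context
}
```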
So we call context.draw, passing source.cgImage and in: rect. Once the image is drawn within that rect — the full-size rect we just created — the source image is on the context, and we can draw our landmarks and features on top of it. First, let's draw the face rect. We call context.addRect, which takes a CGRect, so let's build that rect: x is boundingBox.origin.x times source.size.width, so it scales to the image size; y is boundingBox.origin.y times source.size.height; the width is rectWidth and the height is rectHeight. That way we just plug in the width and height we already computed and we're done. Now we draw the path — this is what actually strokes the rectangle — with context.drawPath(using: .stroke). Next, let's draw the features; make some space at the bottom. We create a color, UIColor.green, and call setStroke() on it, and then we set context.setLineWidth(2.0). Then, for each faceRegion in the faceLandmarkRegions we were given, we compute the points. We create a points array of type [CGPoint], initialized empty, and loop: for i in 0..<faceRegion.pointCount — so for every point that was detected — we take the point and turn it into a CGPoint using CGFloat(point.x) and CGFloat(point.y). Where does this point come from? The face region has a property called normalizedPoints, which holds all of the detected points in a normalized coordinate space relative to the face's bounding box — which is exactly why we map them against the bounding box in a moment. So we create a constant called point and read normalizedPoints at index i; that's our point. That normalized point is what gets converted into a CGPoint, and then we append it to our points array. Now let's map the points. We call points.map, and what we're doing is mapping them into the image's point space, relative to the current bounding box's size. For the x we say boundingBox.origin.y times source.size.width plus $0.x times rectWidth — $0 is the first value provided by the closure — and that gives us the x of the point. Then we do the same for the y: boundingBox.origin.y times source.size.height plus $0.y times rectHeight.
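Continuing the drawOnImage sketch from above, the rectangle math and the per-region point handling might look roughly like this. Again this is an approximation, and note that the x mapping below uses boundingBox.origin.x — in the recording a typo with origin.y sneaks in at this spot and is fixed in Part 4. The final addLines/stroke calls run a step ahead into the next part.

```swift
// Scale the normalized bounding box into the image's pixel space.
let rectWidth = source.size.width * boundingBox.size.width
let rectHeight = source.size.height * boundingBox.size.height

// Draw the original image first; the overlays go on top of it.
let rect = CGRect(x: 0, y: 0, width: source.size.width, height: source.size.height)
context.draw(source.cgImage!, in: rect)

// Stroke the face rectangle.
context.addRect(CGRect(x: boundingBox.origin.x * source.size.width,
                       y: boundingBox.origin.y * source.size.height,
                       width: rectWidth,
                       height: rectHeight))
context.drawPath(using: CGPathDrawingMode.stroke)

// Draw the landmark features in green.
let fillColor = UIColor.green
fillColor.setStroke()
context.setLineWidth(2.0)

for faceRegion in faceLandmarkRegions {
    // The region's points are normalized relative to the face's bounding box.
    var points = [CGPoint]()
    for i in 0..<faceRegion.pointCount {
        let point = faceRegion.normalizedPoints[i]
        points.append(CGPoint(x: CGFloat(point.x), y: CGFloat(point.y)))
    }

    // Map each normalized point into the image's pixel space.
    let mappedPoints = points.map {
        CGPoint(x: boundingBox.origin.x * source.size.width + $0.x * rectWidth,
                y: boundingBox.origin.y * source.size.height + $0.y * rectHeight)
    }

    // Connect the mapped points and stroke the path.
    context.addLines(between: mappedPoints)
    context.drawPath(using: CGPathDrawingMode.stroke)
}
```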
Now that we have all the points, we can simply add lines between the mapped points — context.addLines(between: mappedPoints) takes care of drawing all the lines — and then we stroke them with context.drawPath(using: CGPathDrawingMode.stroke). Once that's done, our drawing is on the image, but it's still in the image context — still in memory — and we haven't extracted it yet. So let's take the image out: the face image is UIGraphicsGetImageFromCurrentImageContext(), unwrapped, and then we call UIGraphicsEndImageContext() to close the context. The last step is to return this faceImage — and let's make sure the function declares a return type, so it returns a UIImage. Now let's create a function for finding landmarks. This is going to be our public function, the one a view controller or anyone else using our class can call. It takes the image plus a completion handler — an @escaping closure that hands back a UIImage — and the function itself returns Void. And that's it. I think this is a good stopping point. In the next video we'll fill in this function by creating the landmarks request first, then the request handler, and then executing that request handler. After that, we'll connect all the dots together and make it work with the view controller. I hope you enjoyed this video, and I'll see you in the next one.
14. Face Landmark on Image Part 2: All right, welcome back, and let's continue from where we left off last time. First, we create a variable called resultImage and store our input image in it. Now let's create the detection request: detectFaceRequest, which is a VNDetectFaceLandmarksRequest with a completion handler that hands us back a request and an error. Inside it, the first thing is a guard: guard let observations = request.results as? [VNFaceObservation] — and the other thing we check is that the error parameter is nil. If either of these fails, we return from here. Next, let's print how many faces were found, something like "found \(observations.count) faces on the image". Now let's iterate over our observations — for face in observations — and get the landmarks. First a guard: guard let landmarks = face.landmarks, so we can unwrap the face's landmarks; if there are none, we continue, and the loop moves on to the next observation. If we did find landmarks, first grab the bounding box, which is face.boundingBox. We also need an array of VNFaceLandmarkRegion2D: every landmark region we find gets put into this one array, so we can pass it directly to our helper method and have all the features drawn in one go. So let's create a variable called landmarkRegions of type [VNFaceLandmarkRegion2D], initialized empty. And now we're going to start checking for the individual features and populating landmarkRegions.
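Before diving into the individual features, here's a rough sketch of where findLandmarks stands at this point. The parameter names and the (UIImage) -> Void completion signature are assumptions drawn from the narration; the feature appends, the drawOnImage call, and the request handler come next.

```swift
// Partial sketch: the request and its completion handler, as built so far.
func findLandmarks(on image: UIImage, completion: @escaping (UIImage) -> Void) {
    var resultImage = image

    let detectFaceRequest = VNDetectFaceLandmarksRequest { request, error in
        guard let observations = request.results as? [VNFaceObservation],
              error == nil else { return }

        print("Found \(observations.count) faces in the image")

        for face in observations {
            guard let landmarks = face.landmarks else { continue }
            let boundingBox = face.boundingBox
            var landmarkRegions = [VNFaceLandmarkRegion2D]()

            // ... append the individual features (face contour, eyes, nose, lips, ...)
            //     and call drawOnImage(source:boundingBox:faceLandmarkRegions:) here ...
        }
        // completion(resultImage) is called once the loop finishes (next part).
    }
    // The VNImageRequestHandler that executes detectFaceRequest is added at the end of this part.
}
```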
All right. So: if let faceContour — if we found a face contour among the landmarks — then we append it: landmarkRegions.append(faceContour). Then if let leftEye, we say landmarkRegions.append(leftEye). Copy this and do the same thing for the right eye. Then we detect the nose, and let's also detect the nose crest. Next we can also show the median line — the line that runs down the center of the face — so if we detected it, we show that as well. And lastly the lips: outer lips and inner lips. There we go, cool. Once we've extracted all the features, we assign the result image by calling our helper method, supplying the source image, the bounding box we stored from the face observation, and our landmarkRegions. Then, at the end of the for loop, we call our completion and hand back the result image. That's our request's completion handler done. Now let's create a request handler, because without one none of this code will ever execute: a VNImageRequestHandler with cgImage: image.cgImage and the options left blank, followed by a do/catch where the catch simply prints the error. Inside the do we call requestHandler.perform, supplying our detectFaceRequest in an array. That should do it — that should make our landmark detection work. Let's build to make sure. Ah, we need a try here. Cool, it builds; just making sure everything compiles. Looks like it's working — awesome. In the next video we're going to connect things up and see how to use this API from the view controller to do some face landmark detection. Thanks for watching, and I'll see you in the next one.
15. Face Landmark on Image Part 3: Hello and welcome back. Let's continue. In the last video we created the FaceDetector, and in this video we're going to work in the view controller to actually use it. Before we start, let me drag an image into the asset catalog — use any image that has multiple faces. The included project has this "giants" image, so you can use that, but please feel free to use your own asset. Once we have that, the view controller is going to be very, very simple. Before we change the view controller, let's set up the storyboard: drag out an image view, make it essentially full screen, set its content mode to aspect fit, and name the outlet imageView. Then we go back to our view controller. Here we first create a function marked @objc, because we're going to call it from a selector — it's a selector method called handleTap. Then we say self.view.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap))). And the handler itself is going to be simple.
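Here's roughly what that view-controller wiring might look like. The imageView outlet matches the storyboard step above, while the faceDetector property name is an assumption; the handleTap body is filled in next.

```swift
import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!

    // Assumed property holding the helper class built in the previous videos.
    let faceDetector = FaceDetector()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Run the detection whenever the user taps anywhere on screen.
        self.view.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap)))
    }

    @objc func handleTap() {
        // Filled in below: grab the image, run the detector off the main
        // thread, and push the result back to the image view.
    }
}
```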
Inside handleTap we say let sourceImage = self.imageView.image, then DispatchQueue.global().async, and on that background thread we call our faceDetector's findLandmarks with the source image. In the completion we set self.imageView.image to whatever we got back. That should take care of handling the image — let's make sure we didn't miss anything. The last thing we need to do is set the image view's image property to the "giants" image, and then build and run.
16. Face Landmark on Image Part 4: Right. While running this I found a couple of issues, so let me show you. The first issue is this runtime error: we forgot the dispatch back to the main thread where we update the UI — you're not supposed to update any UI elements from a background thread. So that's the first thing; let's fix it — it's an easy fix — and run again. Now, there was another issue I noticed; I don't know if you did. If I tap on the image, at this point it shows only one face and one set of landmarks, and if you look closely, the face rectangle and the landmarks are quite far apart. Why is that? Well, it turns out we made a typo here: this should be x and not y. That fixes the alignment issue. Let's tap one more time — and there we go; as you can see, it's now drawing the face landmarks and the face rectangle perfectly. But there's still something off: it doesn't detect any of the other faces. What's going on? We found five faces in the image, yet none of the others are highlighted. The reason, I found, is another typo of sorts: sourceImage is the image we're passing into the helper, which is always the original image. Because each iteration of the loop detects one face and draws its landmarks, we should really be supplying the resultImage instead, because resultImage contains the landmarks drawn in the previous iterations. If we fix that and rerun, it should render correctly. Let's take a look — I tap on the image, wait for it to come back, and there we go: now, as you can see, it has found all the faces and drawn all the landmarks. So this was a good exercise in debugging the small mistakes and typos we made, and it brings us to the end of this project. I hope you enjoyed this video, and I'll see you in the next one. Thank you.
17. CoreML Vision Text Detection Demo: Hello everyone, this is Anoop. Today we're going to start a new app where we detect text in an image. I have this image open — it's the sample for the app we'll build in this section. It's an image that contains some text, and if you look, there's text with double spacing, all kinds of characters, some extra spacing between characters, and it's justified and so on. So let's see how text detection with the new Vision framework reacts to this and whether it can find all the characters in the image. I'm going to tap on the screen, and there you go — as you can see, it did a pretty good job finding most of the text.
It missed this line — the dash and the comma here — and it definitely missed the question marks down at the bottom. But again, it's pretty accurate for an image like this, and a lot depends on factors such as how big the image is and what resolution it has. This text detection technology also works with handwriting, and it supports various languages as well. I hope this demo gets you interested, and we'll move forward into creating the app. In the next video we're going to build it out from scratch. Thank you.
18. CoreML Vision Text Detection: Hello and welcome back. So this is the demo we're going to build, and I have a template app open — it's just a File > New > Project, with literally nothing in it; everything is empty. The only thing I've populated is the asset catalog, with these two images. One contains the text we saw in the demo, and the other is this driver's license of McLovin, which we'll use for testing because it has lots of different text variations, backgrounds, and a whole bunch of things going on — a good way to see how accurately the Vision text detection can pick text out of an image like this. You'll find both images in the attached project, so let's get started. First, I'm going to drag out an image view, pin it so it's full screen — let's add those constraints first — and set its content mode to aspect fit. Then let's create an outlet: Control-drag and call it imageView. Once that's done, we go back to the view controller; I'll clean it up a bit and make some room. Now I'll add a new file — a new Swift file — called TextDetector. It imports UIKit (and Vision, just as before), following the same API format: we're going to create a function that acts as the API and can be called by the view controller to perform the detection. So we say class TextDetector, and first we create a private helper function: private func drawOverlay(on image:), which takes an image of type UIImage and observations of type [VNRectangleObservation], and returns a UIImage. Let's build it out. This time we'll try a different variation: UIGraphicsBeginImageContext with image.size, and its matching UIGraphicsEndImageContext — and in between we write our functionality. This one is going to be super easy: all we need to do is draw rectangles. As you can see, we receive VNRectangleObservations — the detector supplies us rectangles — and we just need to scale them, apply a transform, and draw them onto the image.
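Here's a sketch of the skeleton we're starting from, reconstructed from the narration; the transform and the per-box stroking get filled in over the next steps.

```swift
import UIKit
import Vision

class TextDetector {

    // Helper: strokes one rectangle per detected character box onto a copy of the image.
    private func drawOverlay(on image: UIImage,
                             observations: [VNRectangleObservation]) -> UIImage {
        UIGraphicsBeginImageContext(image.size)

        // ... context setup, coordinate transform, and per-box stroking go here (next step) ...

        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result ?? image
    }
}
```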
So let's create the context — that is, get the current context with UIGraphicsGetCurrentContext(). Then we create the transform: var transform = CGAffineTransform.identity, then transform = transform.scaledBy(x: image.size.width, y: -image.size.height), and then transform = transform.translatedBy(x: 0, y: -1). That flips the coordinate space for the image. Next, context.setLineWidth(2) — two points, or whatever you want to call it — and set the stroke color to UIColor.red.cgColor. Then: for rect in observations — for every rectangle observation we received — we create a UIBezierPath with rect: rect.boundingBox.applying(transform) and stroke it. Lastly, the result image is UIGraphicsGetImageFromCurrentImageContext(); we end the image context and return the result image, unwrapped. That's our helper function. Now let's create the function that accepts an image and provides a completion handler — this is where we perform the request and the image analysis. So: func detectText(on image: UIImage, completion: @escaping (UIImage) -> Void) — that's our completion block. Inside, we create a request, a VNDetectTextRectanglesRequest, whose completion block gives us a request and an error. We say guard let observations = request.results as? [VNTextObservation], error == nil, else return from here. Then we set resultImage = image, and for each observation in observations we say resultImage = self.drawOverlay(on: resultImage, observations: observation.characterBoxes). When we're done, we call completion with the resultImage. Now let's create the request handler: a VNImageRequestHandler with the image's cgImage and the options left blank. Then do { try requestHandler.perform } — supplying our request — and catch any error and print it out. All right, a couple of compiler fixes: there's an exclamation mark needed to unwrap an optional, and this one needs to change to a var. With that fixed, let's build — cool. Now, that's the part that does the text detection, so let's move into the view controller. In the view controller we add a tap gesture recognizer: self.view.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(handleTap))), and we provide an @objc function, handleTap, which is what gets called. Inside handleTap we say let inputImage = self.imageView.image, then DispatchQueue.global(qos: .userInitiated) — the quality-of-service class is userInitiated — .async, and inside that we call textDetector.detectText(on: inputImage) with a completion handler that gives us the image back. Then we need to get back onto the main thread.
So let's dispatch to the main queue first and say self.imageView.image = image. Then let's pick the right simulator, run it, and check it out. We have the driver's license image loaded, I'm going to tap, and — oops — we're getting an error. It's finding nil somewhere; let's check it out. For some reason it's not finding character boxes on this image. So let's unwrap this properly: we say if let charBoxes = observation.characterBoxes, then we perform the process, and instead of passing observation.characterBoxes directly we pass charBoxes. Build and run again. Hmm, it looks like it did not find any of the text, which is very strange. Let's try a different image — the sample text one. Still nothing... oh, of course. The reason it's not finding any character boxes is that we missed something: on the request — because this request is of type text rectangles, there are two levels of results — we need to turn character-box reporting on, so we say request.reportCharacterBoxes = true. Let's run it again, and hopefully this time we find the characters. And we do find the characters — but it seems like it has somehow stamped over our image. Let's debug what's going on. We have observations of type VNTextObservation, we set up the result image, and for each observation we take its character boxes and supply the resultImage, updating resultImage and returning it; and we're setting request.reportCharacterBoxes = true — that's all good. Ah, I think I found it: after beginning the graphics context we have to draw our image into the context first, so you can actually see it — we never drew the image onto the graphics context, which is why it wasn't showing. Hopefully that fixes the issue. And yes, that was the issue — there we go. Now let's load the other image and see if it works too. Run it again, and here we go — let's tap on the image. Perfect. It was able to find a lot of the text in this image; as you can see, it has highlighted all of it. There are some things it couldn't find, like the number portion and the height and weight fields, but it did a pretty good job — it was able to identify this whole block of text, which has a nicely contrasted background. So this is it. I hope you enjoyed this video, and I will see you in the next one. Thank you.
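To round out this last project, here's roughly what the finished TextDetector looks like with both fixes applied — reportCharacterBoxes switched on, and the source image drawn into the context before the boxes. It's a sketch reconstructed from the walkthrough, not the verbatim course code.

```swift
import UIKit
import Vision

class TextDetector {

    func detectText(on image: UIImage, completion: @escaping (UIImage) -> Void) {
        let request = VNDetectTextRectanglesRequest { request, error in
            guard let observations = request.results as? [VNTextObservation],
                  error == nil else { return }

            var resultImage = image
            for observation in observations {
                // Fix #1 (part one): characterBoxes is optional, so unwrap it safely.
                if let charBoxes = observation.characterBoxes {
                    resultImage = self.drawOverlay(on: resultImage, observations: charBoxes)
                }
            }
            completion(resultImage)
        }
        // Fix #1 (part two): ask Vision for per-character boxes, not just text regions.
        request.reportCharacterBoxes = true

        let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!, options: [:])
        do {
            try requestHandler.perform([request])
        } catch {
            print(error)
        }
    }

    private func drawOverlay(on image: UIImage,
                             observations: [VNRectangleObservation]) -> UIImage {
        UIGraphicsBeginImageContext(image.size)

        // Fix #2: draw the source image into the context first,
        // otherwise only the red boxes end up in the output.
        image.draw(in: CGRect(origin: .zero, size: image.size))

        guard let context = UIGraphicsGetCurrentContext() else { return image }

        // Map Vision's normalized, bottom-left-origin boxes into the image's pixel space.
        var transform = CGAffineTransform.identity
        transform = transform.scaledBy(x: image.size.width, y: -image.size.height)
        transform = transform.translatedBy(x: 0, y: -1)

        context.setLineWidth(2)
        context.setStrokeColor(UIColor.red.cgColor)

        for rect in observations {
            let path = UIBezierPath(rect: rect.boundingBox.applying(transform))
            path.stroke()
        }

        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result ?? image
    }
}
```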