Transcripts
1. Introduction: Hi, everyone. I'm Alex Baker, and I'm thrilled to welcome you to this course. As a full-stack developer, mobile engineer, instructor, and AI enthusiast, I have the privilege of working with several big tech companies, where I specialize in technologies like React, React Native, Angular, and Python. Today we are embarking on an exciting journey together. We'll be building an object recognition application from the ground up, combining the power of React for our front end with FastAPI and Python for our back end. But this isn't just another web development course. We are going to dive deep into the fascinating world of AI and machine learning, applying these concepts in real-world scenarios. Whether you are an enthusiast eager to build your first intelligent application, an aspiring developer looking to break into tech, or an experienced programmer wanting to expand your skill set with React or AI integration, this course is designed for you. We'll work with cutting-edge technologies like FastAPI, Python, and TensorFlow, giving you hands-on experience in building modern AI-powered applications. Get ready to transform from a spectator into a creator of AI technology. By the end of this course, you'll have the skills to build applications that don't just process data: they see and understand the world around them. Let's turn your AI aspirations into reality.
2. AI, Machine Learning and Deep Learning: Welcome, everyone. Before we start with hands-on coding, we will cover a little bit of theory about AI, machine learning, and deep learning. Today we are embarking on an exciting journey through the world of artificial intelligence, machine learning, and deep learning. These are transformative technologies that are reshaping the world we see today, and new features built on them keep appearing across our devices and changing our lives.

So what is AI, or artificial intelligence? Artificial intelligence is a branch of computer science that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence. But what does that really mean? Imagine a computer system that can understand natural language, recognize objects in images, make decisions, and even learn from its experiences. That's the essence of AI in general: creating machines that mimic the cognitive functions we associate with human minds, such as learning and problem solving. AI systems can range from simple rule-based programs to complex systems that can adapt and evolve. The goal is to create machines that can perceive their environment, reason about what they perceive, and take actions that maximize their chances of success at a given goal.

Now, what types of AI can we find? When we talk about AI, it's important to understand that there are different types or levels, so let's explore the two main categories: weak (or narrow) AI and strong (or general) AI. Weak or narrow AI is the type of AI we interact with in our daily lives. It's designed to perform a narrow task or a specific set of tasks. Examples include voice assistants like Siri, Alexa, or Google Assistant; recommendation systems like the ones we see on Netflix or Amazon; image recognition software; and spam filters in email. These systems are excellent at their specific tasks but cannot perform outside their programmed domains. They don't have general intelligence or consciousness.

That brings us to strong AI, or general AI, which is more of a theoretical concept at this point. Strong AI refers to a machine with consciousness, sentience, and a mind, with the ability to apply intelligence to any problem rather than just one specific task. This type of AI would be able to reason, solve puzzles, make judgments under uncertainty, plan, learn, communicate in natural language, and integrate all of these skills towards a common goal. While we're not there yet, this is the ultimate goal of AI researchers. It's the kind of AI we see in movies like The Terminator, The Matrix, and so on.

Now let's learn what machine learning is. Machine learning is a subset of AI, and it's where things really get interesting from our perspective. In traditional programming, we provide the computer with a set of explicit instructions to solve a problem. With machine learning, we take a different approach: instead of writing explicit instructions, we give the machine a large amount of data and let it learn to solve the problem itself. How does this work? We feed the machine large datasets; the machine analyses them and identifies patterns; based on these patterns, the machine creates its own rules or algorithms; and it can then apply those rules to new, unseen data to make predictions. For example, instead of programming a computer with all the rules for what makes an email spam, we can show it thousands of examples of spam and non-spam emails. The machine learning algorithm will learn to identify the characteristics of spam email and can then apply that knowledge to new emails it hasn't seen before. The key here is that machine learning systems improve their performance with experience: the more data they process, the better they become at the task. This ability to learn and improve from experience without being explicitly programmed is what sets machine learning apart.

Inside machine learning, there are several types of learning. We have unsupervised learning, where the data comes unlabeled, and it's the machine's task to identify structure in those sets of data and label them. Then we have supervised learning, where we feed the machine data that is already labeled, which makes it easier for the machine to learn from those datasets. And we have reinforcement learning, where the machine learns through rewards and punishments as it works through the data and produces outputs.

Now we can move on to the third part of our picture of AI: deep learning. Deep learning takes machine learning to the next level. It's a subset of machine learning inspired by the structure and function of the brain, especially the interconnection of many neurons. Deep learning uses structures called artificial neural networks, software systems designed to mimic the way the neurons in our brain connect and communicate. Just as our brains can identify patterns and make sense of complex data, deep learning algorithms can do something very similar. The "deep" in deep learning refers to the number of layers in these neural networks: while a simple neural network might have only one or two layers, deep learning systems can have hundreds. Each layer in a deep neural network processes the data it receives, extracts features, and passes the processed information to the next layer. As the data moves through these layers, the network can learn increasingly abstract and complex features of the data. This makes deep learning particularly good at image and speech recognition, natural language processing, translating between languages, generating realistic images and videos, and even creating art and music.

As we see here on the screen, there are also different kinds of neural networks. Let's dive a little deeper into them, as they are the foundation of deep learning. There are several types of neural networks, each designed for specific types of tasks. First, the feedforward neural network: these are the simplest type of artificial neural network. Information moves in only one direction, from the input layer through the hidden layers to the result on the output layer. They are used for straightforward tasks like classification and regression. We also have the recurrent neural network: these networks have connections that form cycles, allowing information to persist. They are particularly good at processing sequential data like text or time series, and they are used in tasks like language translation and speech recognition. Then we have the convolutional neural networks: these are designed to process data with a grid-like topology, such as images. They use a mathematical operation called convolution in place of general matrix multiplication in at least one of their layers, and they are the go-to choice for almost any image analysis task. Each of these networks has its strengths and weaknesses, and they are chosen based on the specific problem at hand.

And now, how do we put all of these parts together to create an application? Now that we understand the individual components, let's see how AI, machine learning, and deep learning work together in a real-world scenario: the object detection system in a self-driving car. The AI part is the overarching system that enables the car to perceive its environment, make decisions, and control its movement. This includes not just detecting objects, but also planning routes, following traffic rules, and interacting with other vehicles and pedestrians. Then comes the machine learning part: the underlying engine that allows the AI to learn from vast amounts of driving data. This includes learning to recognize different types of objects like cars, pedestrians, and traffic signs, understanding road conditions, and adapting to different driving scenarios. The deep learning architecture, in this case CNNs (convolutional neural networks), handles the complex task of object detection in images and video streams. The CNN processes the visual input from the car's cameras, identifying and localizing objects in real time. In this scenario, the AI provides the overall framework and decision-making process, the machine learning allows the system to improve its performance over time as it encounters more and more complex scenarios, and the deep learning provides the powerful pattern recognition capabilities needed for real-time object detection and classification.

So what are the real-world applications of these technologies that we can already encounter or expect in the near future? In AI applications, we have chatbots, speech recognition, self-driving cars, virtual assistants like Siri, Alexa, and Google Assistant, smart home devices, and predictive maintenance in manufacturing as well. In machine learning, we have email filtering, fraud detection, recommendation systems like those of Netflix, Amazon, and Spotify, and weather forecasting. In deep learning, we have technologies like face recognition, autonomous driving, medical diagnosis, natural language processing, and so on. The benefits of these technologies can have a lot of impact on healthcare, environmental protection, education, accessibility, scientific research, business and the economy, and transportation. As we've seen, AI, machine learning, and deep learning are not just shaping our future; they are actively improving our present. These technologies are solving problems and creating opportunities that were unimaginable just a few years ago. However, it's important to remember that with great power comes great responsibility. As these technologies become more integrated into our lives, we must also consider their ethical implications and ensure that they are developed and used in ways that benefit all of humanity. The future is exciting for sure, and it's built with lines of code, neural networks, and data. Whether you are a student, a professional, or simply a curious individual, I encourage you to keep learning about these fascinating fields, because who knows, maybe you will develop the next breakthrough. Thank you for joining me on this journey through AI, machine learning, and deep learning. And now we can move on to the most interesting part: hands-on coding.
3. Convolutional Neural Networks: We now come to the type of neural network we will use in this course to detect images: the convolutional neural network that we met before, and now we will get a deeper understanding of what it is. We can start with a definition: convolutional neural networks are specialized types of neural network designed specifically for processing visual data. They have become the backbone of numerous applications, playing a crucial role in tasks such as object detection, image classification, facial recognition, and much more. But what makes CNNs so special? Well, they are inspired by biological processes in the visual cortex of animals. Just as our brains can quickly recognize patterns and objects in what we see, CNNs can be trained to do exactly the same with digital images and videos.

Convolutional neural networks have some key features, so let's break them down. First, we have the convolutional layers. They are the core building blocks of a CNN. They apply a series of filters to the input image, each designed to detect specific features like edges, textures, or more complex patterns. As we go deeper into the network, these features become more and more abstract and sophisticated. We then also have pooling layers. After a convolution, we often use these pooling layers. They reduce the spatial dimensions of the data, making the network more computationally efficient and helping it focus on the most important features. Common types include max pooling and average pooling. Then we come to the fully connected layers, typically found at the end of the network. These layers connect every neuron in the previous layer to every neuron in the next, and they are used to make the final classification or prediction based on the features extracted in the layers before.
How all of this works comes down to the math behind it. Here we have an example of a neural network in this image. One crucial component is the activation function. In many CNNs, we use what's called the ReLU function, as you can see there, which stands for rectified linear unit, and its formula is the simple mathematical expression that serves as our activation function in this case. It means that, for any input x, if x is negative the function returns zero, and if x is positive it returns x itself. This simple function introduces non-linearity into our network, allowing it to learn more complex patterns. The beauty of ReLU is that it is computationally efficient and helps to mitigate the vanishing gradients that can occur in deep networks.
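Written out, the ReLU activation function described here is:

$$f(x) = \max(0, x)$$

that is, $f(x) = 0$ for $x < 0$, and $f(x) = x$ for $x \ge 0$.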
Now for the structure and reasoning of this deep neural network: we understand the components, so let's look at how they come together in a typical CNN for real-time detection, the kind we're going to use in our application. For our real-time object detection, we have the input: this is where we feed in our data, and we'll use a pre-trained model in this case. Then comes the first convolutional layer, which applies filters to detect low-level features; in our application, we'll use the TensorFlow framework for this. We then use a pooling layer to reduce the spatial dimensions. Next comes another convolutional layer that detects higher-level features, and then pooling again, as we see in the image, to reduce dimensions and focus on key features. At the end, we have the fully connected layer that makes our classification and detection, and then we see the result on the output layer: the object classes and their locations in the image. This structure allows the network to progressively learn more complex features, from simple edges to complex objects.
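To make that pipeline concrete, here is a minimal sketch of the layer stack just described, written with TensorFlow's Keras API. This is an illustration only, not the model we will use in the course (we will rely on a pre-trained model); the filter counts, input size, and ten-class output are assumptions.

```python
import tensorflow as tf

# Convolution -> pooling -> convolution -> pooling -> fully connected -> output,
# exactly the progression described above.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),                 # a small RGB input image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # low-level features (edges, textures)
    tf.keras.layers.MaxPooling2D(),                    # reduce spatial dimensions
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # higher-level features
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),      # fully connected layer
    tf.keras.layers.Dense(10, activation="softmax"),   # output: 10 class probabilities
])
```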
When it comes to implementing CNNs for real-time object detection, we often use pre-trained models. These are CNNs that have already been trained on large datasets and can recognize a wide variety of objects. We can then fine-tune them to fit our specific case, saving a significant amount of time and computational resources. One popular framework for this, as we mentioned before, is TensorFlow. It provides a high-level API that makes it easier to build, train, and deploy neural networks, including our convolutional neural network. The applications of these CNNs are vast: we find these networks in autonomous vehicles, medical imaging, face recognition, content moderation, and augmented reality as well. In conclusion, convolutional neural networks have revolutionized the field of computer vision, enabling machines to see and understand the world in ways that were once the stuff of science fiction. As we continue to refine these models and develop new architectures, the possibilities are really exciting. Whether you are a researcher, a developer, or simply someone interested in this field, CNNs are definitely a technology to watch. They are not just shaping computer vision; they are improving the future itself. With this, we have a better understanding of how we're going to build our application.
4. Installing VSCode: In this course, I will use Visual Studio Code to edit the text, so it will be our IDE. Feel free to use the one you are more used to; if you don't have any, I advise you to install this one so you can code along with me. If you don't know it, Visual Studio Code is an IDE, or text and code editor, from the Microsoft ecosystem, and it gives us access to a lot of extensions supported by the community and also by big companies. This IDE supports many languages. Again, the extensions are a vital part of this IDE and of software development, and it also has an integrated terminal, which is very handy, and Git support. To install Visual Studio Code, you simply click this button. In my case it shows the download for Mac, because I'm running a Mac environment, but depending on what operating system you have, it will appear differently for you. So just click on it, and once it's downloaded, we will continue. Once it has finished downloading, go to your downloads folder. It may be a zip file, in which case you just need to extract it, and depending on your operating system you'll have an executable file, or a DMG file for Mac. Double-click it and proceed with the installation. One thing to notice: there is a little tick box to add it to the PATH. Please check it, because it allows you to open the editor from the terminal with a simple command. Once it's installed and open, you can see the IDE here, and we will go further into it next. For now, we have our IDE installed and we can proceed.
5. VSCode Extensions: Now that we have our editor, we will install some extensions. Extensions basically give us a better workflow: they help us catch bugs, work with Git, and even add syntax highlighting. So it's really up to you, but in this course we will work with Python and FastAPI, and we will install extensions that support them. In our extensions panel, we will need these Python packages to help us code better and more easily, with good support. There's Pylance, which provides the IntelliSense that can predict what we want in the code, flag errors, and suggest options to make the code better. We also need the Python extension itself for the language, the debugger to debug our code, and the environment manager, so we can have virtual environments to run our Python code. We will also need linting, so Pylint, to lint our code, check how we format it, and keep it consistent across a whole team. These are the packages we need, and we can install all of them with only one: the Python extension pack, which installs all the packages we just saw. So just proceed with the installation of this one; it will install the others with it, and it will ask you to reload. Then just reload Visual Studio Code, and we'll have the necessary packages.
6. Best Way to take course: In this lecture, we will learn how to take the most advantage of this course. In software, especially when we are learning, we will come across many roadblocks, even when following along with the videos. This happens because of bad typing, instructor mistakes, or, more frequently, because software and packages are always changing. So what should you do if you get stuck and cannot proceed with the course anymore? First of all, don't panic; it's normal. This happens to everyone. As I cannot answer all questions, make sure to use Skillshare's discussion tab, where you can find students with the same problem as you and have a thread with all your questions and answers, so we can help you proceed further with the course. As software changes all the time, I will make sure to leave the resources in the resources section on Skillshare, where you can download the requirements file, the package.json, and the correct versions of all the packages that you need for this course. I will also leave the images there, so you can download the images we will use in this course. If you continue to get stuck, use Google. Googling the problem will often return many results, and you can learn from that search as well. Many students will have the same error; many people in software will have the same error, and that can help you solve it. Read documentation: many platforms, packages, and programming languages provide comprehensive documentation or tutorials on how to install them and proceed. So if you get stuck, please go through the documentation and follow the steps in it, as it might help you continue with the course without getting stuck again. You can also ask AI. These tools are here to help, for sure, and they can provide a fast solution to the problem with further explanation. They will help you solve the problem while also explaining what happened and what you were doing wrong. If you follow these recommendations, it will make your experience smoother and improve its overall quality. With all this clarified, we can proceed further.
7. Resources: In this course, we will use some resources, and to find them, you simply need to scroll down on the course page and find them here. The zip folder will contain links, images, and the other resources we're going to need throughout the course. So feel free to download it and follow along with the video.
8. FastAPI and Python Setup: In this lecture, we are going to explore some exciting technologies that form the foundation of our AI object recognition app. Specifically, we'll be discussing Python and FastAPI and how these powerful tools enable us to build a robust and efficient backend for our AI application. Let's start with Python. Many of you are already familiar with this versatile programming language, but let's recap why it is so crucial for our project. Python is often called the language of AI, for good reasons. First, its syntax is clear and intuitive, making it easier to write and understand complex algorithms. This is particularly important when we are dealing with AI and machine learning concepts. Python also boasts a rich ecosystem of libraries and frameworks specifically designed for AI and machine learning. This means we have a wealth of tools at our disposal for building sophisticated AI applications. Moreover, Python's large and active community ensures that we have plenty of resources, tutorials, and support available as we develop our app. But Python isn't just for AI. Its versatility allows us to use it for web development as well, plus data analysis, automation, and much more. This makes it a one-stop shop for many of our development needs, including the backend of our AI object recognition app.

Now let's talk a little bit more about FastAPI. FastAPI is a modern, high-performance web framework for building APIs with Python. But what does that mean, and why is it important for our project? First and foremost, FastAPI lives up to its name: it's fast. In fact, it's one of the fastest Python frameworks available. This speed is crucial when we are dealing with real-time object recognition, as we need our app to respond quickly to user inputs. FastAPI is also incredibly easy to use. If you are comfortable with Python, you'll find that you can quickly get up to speed with FastAPI. It uses standard Python type hints, which not only make your code easier to understand but also provide automatic data validation. Another great feature of FastAPI is automatic documentation generation. This means that as we build our API, FastAPI automatically creates interactive documentation for it; we will see that later in the course. This is incredibly helpful for testing our API and for any future developers who might work on our project. FastAPI also supports asynchronous programming, which allows our application to handle multiple requests efficiently, and this is vital for creating a responsive AI application that can serve multiple users simultaneously.

So how do Python and FastAPI come together in the application we will build from the ground up? Let's break it down. We use Python to handle the core logic of our application, and this includes processing images, running our AI model for object recognition, and managing the overall flow of data through our system. FastAPI, on the other hand, provides the framework for our web API. It allows us to create endpoints that our front end can communicate with. For example, we might have an endpoint that accepts an uploaded image and returns a list of recognized objects in that image. FastAPI helps us efficiently handle all these incoming requests and outgoing responses. It takes care of things like parsing the incoming image data and formatting our AI model's results into a response that our front end can understand.
develop and interact quickly, which is crucial in
the fast paced world of AI development. In conclusion, PTN
and FASEPI form a powerful duo that enables
us to build a robust, efficient and scalable back end, for our AI object
recognition app. PTN provides us tools and libraries that we need
for the I development, while FASEPI offers high
performance framework for exposing our AI
capabilities to the world. As we progress
through this course, you'll gain ends
on experience with both PyTon and FAST API. Seeing first and all
these technologies come together to create
something truly exciting. Remember, the skills you
are learning here are just applicable for
our specific project. The combination of PyTon
and fast API is using various AI and machine learning applications
across industries. You are building a
foundation skill set that will serve you well in
many feature projects. In our upcoming sessions, we will start writing some code and see these
concepts in action. Get ready to bring your AI objective recognition
act to life.
9. Install Python on MacOS: Now we will install Python in a macOS environment. If you don't have macOS, you can skip this lecture and go to the Windows lecture, where we install Python on a Windows machine. On macOS, we have several options to install Python. The first one we will use is to install Python through Homebrew. Brew is a package manager, or package installer, for macOS. We first need to open our terminal and enter the install command. Don't worry, I will leave all of these commands in the resources so you can copy them easily. So once you have the command, you just paste it here and press Enter. It will request your password, so just enter the password and press Enter to install brew. Now that we have brew installed, we can proceed with the installation of Python. I will just clear my terminal. We just need to type "brew install python" and press Enter, and after that, Python will install. If everything went okay, you should see something like this.

Another way to install Python, and the most common and advised way to install the language, is through the official website and its downloads page. Simply go to Downloads (don't worry, I will leave all these links in the resources) and find the version you need. In this case, if you press this button, it will download the latest version for your operating system. It will just be a package to install: you download it, install it, and the package will do everything for you.

The third way to install Python is through pyenv. Pyenv allows us to install specific versions of Python, and we can also install it through brew, so we can just run "brew install pyenv". After it is installed, we can check the list of available versions with "pyenv install --list". If we run it, it will give us the list of Python versions available to install, so we just press Enter. As you can see, there are many Python versions to install, but in any case, you usually choose the latest one, and we install it. After we have Python installed, I can just clear my terminal and check which Python we have. If everything went okay, you should be able to type "python3", because Python 3 is the latest and the one we're going to use in this course, with the version flag, to see the version we have installed. Press Enter, and as you can see, currently my system has Python 3.12.0 installed. You should have something around this version as well. With this, we see that everything went okay and our system has Python installed.
10. Install Python on Windows: Now we will proceed with the installation of Python on a Windows machine. If you are not on a Windows machine, you can skip this lecture and go to the macOS part, where we install Python on a Mac. But if you do have a Windows machine and will follow this course on Windows, you can proceed with this video. The most recommended way to install Python on a Windows machine is through the official website. I will leave a direct link to the website in the resources. Basically, you just need to go to the Python website's downloads page, and there you will have the correct version for your machine. You just press the download button, download the package, and then proceed with the installation. After the download, you click on the downloaded icon and you will see a window like this. It's important to note that we need to check this checkbox here, which asks to add Python to the PATH; after it is checked, you can just press "Install Now". Also, here we can disable the path length limit: some systems don't allow many characters in a path, which can cause trouble on our system, so we can just disable it. And then we have a successful setup. After the installation, we can open our command prompt to check the version: we just type "python" and then the version flag. As we can see, we have successfully installed Python on our Windows machine. Another way to install Python is to open the Microsoft Store and type "Python" in the search bar; there we can see Python 3 in the latest stable versions, and then we just press "Get", and it will install Python for us. Note that this is the easiest way to install Python, but the least customizable.
11. Installing And Running FastAPI: Now we will create our FastAPI server. The first thing we will do is create a new folder where we'll keep our application. I can just create a new folder here; you can create your folders the way you want, and the names don't matter either. I'll call it "my-learning-app". Now that we have our folder, we can open it and, again, create a new folder inside, which we'll call "server". And we have our folders created. Next, we need to open this with Visual Studio Code. If you have Visual Studio Code at hand in your dock or taskbar, you can just open it like this; if not, we will open it through the terminal. So let's open a terminal. In my terminal, I will navigate to my desktop, where I have my folder called "my-learning-app", and inside of that folder I have, again, another folder called "server". Now I can open it in Visual Studio Code with the simple command "code", a space, and then a dot. And there we have our Visual Studio Code, opened inside the correct folder. If for some reason the "code ." command does not open your Visual Studio Code, we can fix that very easily. Open your Visual Studio Code any other way (just click on the application wherever you have it on your system), and once it opens, bring up the command palette: on Mac it's Command Shift P, and on Windows it should be Control Shift P. Then search for "Shell Command", and you will already see the entry "Install 'code' command in PATH". Just click it; you may have to restart your terminal, and then that command will be available to you.
Now, inside of our server folder, we will create a file, and our file will be named "main.py" (.py for Python). Inside our main.py, we will add the first route for FastAPI. First we import the package needed for FastAPI: we just say "from fastapi import FastAPI". Then we have to use it, so we store it in a constant: "app" will be FastAPI with parentheses, because it's a class we are instantiating. Now we will add the first route. Our first route will be a GET route, just as an example and demo of FastAPI. We say "@app", because we are referencing our FastAPI instance, then ".get", and inside the parentheses our root route, which will be just a slash. Then, under our GET decorator, we will have an async function: async marks an asynchronous function that can run without blocking the rest of the code. "def" is how we declare a function in Python, and we name it "root", then parentheses and a colon. Inside, we have the return, and we'll just return a simple message: a "message" key inside quotes, returning "Hello World". What is important here is that in Python we always need this indentation for the code to run. If we leave the body unindented, it will cause an error, because this indented block is really necessary.
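Put together, the file we just dictated looks like this:

```python
# main.py — our first FastAPI route.
from fastapi import FastAPI

app = FastAPI()  # FastAPI is a class, so we instantiate it


@app.get("/")  # the root route
async def root():
    # The indented block below the "def" line is mandatory in Python.
    return {"message": "Hello World"}
```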
Now, for our app to run, we need to create a virtual environment. A virtual environment allows us to run Python inside a closed environment. For that, we just need to go into our project again; I can open a terminal inside Visual Studio Code. Here, I will type "python3 -m venv .venv": python3, then "-m venv" for the virtual environment module, and then ".venv" as the folder name. And why do we create a virtual environment? A virtual environment allows us to run specific versions of Python, and even of our packages, without being influenced by the outside system. We create a sort of encapsulation, an environment that runs that version of Python without messing with or conflicting with the system outside. So here I'll just press Enter, and you can see it creates folders: the Python cache and the virtual environment, where we have the application running and, later on, the packages. We still need to activate our virtual environment. To activate it, we can just say "source .venv/bin/activate": if you are on a Mac the folder is "bin", and if you are on Windows it is "Scripts". Because we have a bin folder (you can check here), we say bin, and then we activate the virtual environment. Just press Enter, and as you can see, we are now running inside the virtual environment. This virtual environment will always run the same version of Python and the installed packages, without needing to change whenever something changes in our outside system, which is exactly what we want here. Now we can finally run our application. In this case, we just say "fastapi dev" and then the file we want to run. We want to run main.py, so we type "fastapi dev main.py", press Enter, and the application is running. From here, you can navigate to this route, and that's what our root route returns, at the slash in this case. And if you notice, we also have the API docs here: FastAPI creates documentation for us immediately, and we can check all our routes, test them, and read the documentation just by going to that URL. So again, if I go to the same host and then "/docs", we will have a Swagger UI for FastAPI with our route. For now we only have the root, but further down the course we'll have more routes here. Here in the docs, you can try out the route: if I just execute it, we get the message that we set here.
12. Another Example Route: Let's now look at a more complex route that we can use in our application. First of all, I will paste in a simple object. In this object, you can see that we have three items, each with a name and a description. This will be a kind of item store with names and descriptions, so that we can navigate to a page containing this information. I will leave this object in the resources, so you can just copy and paste it into your project and don't have to type all of it. Now, the first thing we need to do is create another route, as we did before: we write "@app", again referencing our FastAPI app, then ".get" with open parentheses, and we want our route to be named "items". Then a slash, and then, for each item, we will have the item ID; we will pass this as an argument to get each one of these items. Then, again, we need to define our function. We can name it "get_item", and we will pass in our item ID, which in this case is an integer; here we are declaring the type, since it will always be an integer. We close the signature, and then we can create a variable named "item" that will be the result of looking the ID up in our items object: items.get with the item ID. So we pass that item ID into our function, and then we get the matching entry for that integer from our object. Then we do a little bit of error handling: if item is None, that is, if it does not exist, we can just return a simple error saying "Item not found". The status for this would be 404, which is the usual "not found" error in app development. But if everything goes as expected, we simply return the correct item.
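Assembled, the route looks like this. The three entries in the items object below are placeholders; the real names and descriptions are in the resources file.

```python
# Example data store; copy the real version from the course resources.
items = {
    1: {"name": "Item One", "description": "Description of item one"},
    2: {"name": "Item Two", "description": "Description of item two"},
    3: {"name": "Item Three", "description": "Description of item three"},
}


@app.get("/items/{item_id}")
def get_item(item_id: int):  # the path parameter is typed as an integer
    item = items.get(item_id)
    if item is None:
        # 404 is the conventional "not found" status in app development
        return {"error": "Item not found"}
    return item
```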
Now we can save and go back to our browser. Here, as we see, we have the first route that we created, and we just need to navigate to "items". If we go to items, we get "not found", because we need to pass the integer, as we saw in our function. So, another slash: let's get item number two. And it's still not found, because I forgot to add a slash here: it should be the root, then slash "items", and then the item ID. If we go back to our browser, we just need to refresh, and here we have it: our item two from our list. We can check the other two as well: item one is correct, and so is item three. Now, at localhost:8000/docs we have our FastAPI Swagger UI again, and here we have the new route. We can just open it here: in the parameters we see that our route only takes the item ID, and the item ID can be one, two, or three, as we set before, and we can try it out. A box opens for us to test, and we can enter item two again. If we press execute, we hit the items route and get the correct item. Let's try it again: clear, then item one, execute, and we get item one, the correct one. Now, if we request item four, like we tested in our browser before, we can execute again and we get the error message.
13. Running App With Uvicorn: Now we will install Uvicorn in our application. Uvicorn is an ASGI (Asynchronous Server Gateway Interface) server that is designed to be very fast, serving Python applications with optimal performance. Now you may ask: why do we need this? We already have our server running with plain FastAPI. Well, Uvicorn offers us more benefits because of its speed. FastAPI is built on the ASGI standard, and Uvicorn fully supports that standard, ensuring seamless integration. Uvicorn also has asynchronous support, using Python's async features to allow efficient handling of concurrent requests. Uvicorn is also production-ready and can handle heavy loads, making it suitable for both development and deployment, and it allows us to easily configure its settings. So the benefits for web applications with Uvicorn are improved response times, scalability, and readiness for modern Python. Now, to install Uvicorn in our application, we simply go back to Visual Studio Code. We can stop our server with Control C; once it's stopped, I'll clear the terminal so you can see things better. Then we can just use pip, the package manager of Python, with which we install our Python packages: "pip install uvicorn". Press Enter and let the package install. Finally, now that we've installed Uvicorn, we can run the application with it. In this case, we just say "uvicorn main:app" (main, colon, app) and then "--reload". This reload flag will make the application reload whenever changes happen. We can also pass more options to Uvicorn; let's say we want to host it on a different port. We just say "uvicorn main:app" with the "--port" option: currently we are running on port 8000, and we could say 8080 instead, and then our application will run on that port. But for now, we will just run it normally with the reload flag. Okay, so now our application is running with Uvicorn. Let's see it in a browser. We have our docs and our normal application running, and let's try items/1. It seems everything went smoothly, and now we have our application running with Uvicorn.
14. Installing Packages with Requirements: In the previous lecture, we saw how to install a package using pip; we use pip to install packages in Python. But imagine the following: we have a big team, and the entire team should use the same packages without installing them one by one manually. Ideally they should also be the same versions, so the application runs the same for everybody. How can we do that? In Python (and thus FastAPI) projects, we can add a requirements text file, and then the application can install all requirements through that file. So here, in our sidebar, we create a new file and call it "requirements.txt". We save it, and then we have a text file where we can add the packages. Let's say, for example, our FastAPI, followed by the package version; in this case we have 0.115.0. Don't worry, I will leave this file in the resources, and the preferred way is to copy the content of that file and paste it here, because the versions might be different for you at the time you are watching this course. Like that, we ensure that you run the application exactly as I do. Here we pinned the version number, but say we wanted to use the latest version instead: in that case, we just list it as "fastapi" alone, and pip will get the most recent package. For now, we will use the current versions. So I'll put fastapi at 0.115.0. We will also use tensorflow, at version 2.16.1 (you can skip this typing and just copy the file we mentioned before). We also need numpy 1.26.4. We will also use opencv-python 4.10.0.84, and then the uvicorn version we installed before, 0.30.6, and then we save the file.
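The finished file, with the versions dictated above (remember to prefer the copy in the resources, since these versions may have moved on):

```text
fastapi==0.115.0
tensorflow==2.16.1
numpy==1.26.4
opencv-python==4.10.0.84
uvicorn==0.30.6
```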
Now, how do we install everything at once? Again in our terminal, we stop the server with Control C, and then we can just say "pip install -r requirements.txt", where "-r" points pip at our requirements file. If we press Enter, it will install all the packages. So let's just press Enter, and there you have it: all the packages are installed, and then we can just run "uvicorn main:app --reload" again. As you can see, everything went smoothly, and now all our packages are installed.
15. What Is React And TypeScript: Imagine that you are building a house. You want it to be sturdy, efficient, and easy to modify if you need to add a room or change the layout. In the world of web development, React and TypeScript are like the advanced tools and blueprint set that help you build that house. Let's start with React. Think of React as a master carpenter who specializes in creating reusable components. Instead of building each room from scratch every time, React allows you to create modular pieces, like pre-built walls, windows, or even entire rooms, that you can use over and over again. This not only saves time but also ensures consistency throughout your house, or web application. React makes your website interactive and dynamic. It's like having a house where the lights automatically turn on when you enter a room, or where the temperature adjusts to whoever is home. In a React application, when you click a button or enter some text, the page can instantly update without needing to reload entirely. This creates a smooth experience for users. Now let's talk about TypeScript. If React is our master carpenter, TypeScript is like having a super smart assistant who double-checks everything before it's built. TypeScript is a programming language that builds upon JavaScript, adding an extra layer of safety and clarity to your code. Imagine you are trying to fit a square peg into a round hole: it just doesn't work, right? TypeScript prevents these kinds of mistakes in your code before they happen. It's like having a guide that ensures you are using the right type of material for each part of your house. If you accidentally tried to use a window where a door should be, TypeScript will let you know before you even start building.
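As a tiny illustration (not code from our project), this is the kind of mistake TypeScript catches before the code ever runs:

```typescript
// The parameter types document exactly what this function accepts.
function area(width: number, height: number): number {
  return width * height;
}

area(3, 4);        // OK
// area("3", 4);   // compile-time error: a string is not a number
```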
This might sound complicated, but it actually makes your life easier in the long run. With TypeScript, you catch errors earlier, your code becomes more self-explanatory, and working in teams becomes smoother, because everyone can understand the blueprint more clearly. When you combine React and TypeScript, you get the best of both worlds. You have the power to build dynamic, interactive web applications with reusable components, while also having the safety net that catches potential errors and makes your code more robust. As students diving into web development, think of learning React and TypeScript as gaining superpowers. React gives you the ability to create flexible, efficient user interfaces, while TypeScript provides you with X-ray vision to spot and prevent issues before they occur. Together, they equip you with the tools to build modern, reliable web applications that can stand the test of time and scale as your projects grow. So in this course, we will use both React and TypeScript to build our front-end application.
16. Installing Nodejs: Now we will install Node.js. But what is Node.js? Node.js is an open-source, cross-platform JavaScript runtime environment that lets developers run JavaScript outside the browser. Installing Node.js is very easy: you can just go to nodejs.org, and the download button will appear for you. It already matches the version for your system: if you are running a Windows environment, the download will be for Windows, and if you are running a Mac, it will be for macOS. You simply need to download it; make sure you are using the LTS version, and for our project here, we will always use a version greater than 20, so at the time of this course it should be fine. Just download Node.js wherever suits you, run the file, and install it. After you complete the installation of Node.js, you can open a terminal to check the installation; you might have to restart your system first, but that rarely happens. In our terminal, we can check the version of Node.js by simply writing "node --version", and here you can see I'm running the latest one, which has advanced features we don't really need here. In your case, you can just use version 20, the LTS, which is common and better for the usual user. So check your version; you can also check it using the shortcut "node -v", which does the same thing. And there you can see your version of Node.js.
17. Create First React App With Vite: Now we can start creating our front end, and to create it, we're going to use Vite. Vite is a tool that allows us to easily create applications with React, Vue, and so on, even vanilla JavaScript applications. The framework gives us fast scaffolding of applications as well as fast building and rendering. To create an application, we go back to our terminal. We are currently in our server folder, so we go one level down with "cd ..", and then we are in our project folder. There we can just run "npm create vite@latest", with @latest so we install the latest version. You just press Enter, and then we say yes. For the project name, we're going to use "object-recognition-app"; the package name can remain the same. Then we have all the options we can build our application with: in this case, you're going to choose React (use the arrow keys to navigate), and then we're going to use TypeScript as well, and our application is scaffolded. Now we can navigate to the application folder, so "cd object-recognition-app", Enter, and then we're going to open it with our Visual Studio Code: "code", and then dot. Now that we have opened our application with Visual Studio Code, we need to install our packages, so we open a terminal inside Visual Studio Code. If for some reason the option to open the terminal does not appear in your menus, you can press Shift Control P on Windows or Command Shift P on Mac, and in the dropdown just type "Toggle Terminal". There we have the option to open the terminal; press Enter, and a new terminal will appear for you, already inside our project folder. We can just run "npm install" to install the packages, or "npm i" for short. So let's install the packages and let them run. Now that the packages are installed, we can run our application. In the case of Vite, we just run "npm run dev" to start development mode, as we can see here in our package.json: it shows how to run the application in dev mode, how to build it, how to run linting, and even how to preview the build. So let's close this and run "npm run dev". After it builds, you see the local address where the application is running. Let's just copy it and go to our browser, and there we see our Vite and React application running.
18. Image Control Component and Style: Now, for the main front end, we can go back to Visual Studio Code. Inside our src folder, we have App.tsx, which is our entry point, and we're not going to need some of the things in here. I will just close the terminal, or push it down a bit, and we don't need this template, so we can delete everything inside the return. Also, as you see, some imported packages are no longer used, so we can delete them as well, and for now we leave it like this. Next, we will create a new folder here and call it "components". Inside our components folder, because our application is very simple on the front-end side, we can just create one component: create a new file and call it "ImageControl.tsx" (.tsx for TypeScript). And because we also need CSS, we create another new file and call it "index.css". Now, back in our component, ImageControl.tsx, we need to create our template. For that, first we import our CSS file: "import './index.css'". Then we create our component as a constant: our component will be just "ImageControl", and as a functional component, we create a function for it that returns something, which for now can be just a div. Then, because our ImageControl is not used yet, we need to export it to be used in App.tsx: "export default ImageControl". I save it, and then, in the return of our App.tsx, we can use the component: "<ImageControl />". As you can see here, my Visual Studio Code already offers to import ImageControl from components/ImageControl, and just like that we have imported ImageControl from our components folder. With that, App.tsx is done for now. We can go back to our main component, because if you go back to the browser, you'll see that it is empty: we are not rendering anything yet. Again, because it's a simple app, I will just paste some CSS for our application. I will leave this CSS file in the resources, so you can simply copy and paste it here; it's just standard CSS for our containers and our image container. I save it, and then we have access to those classes inside our component. First, we will create our containers, so we can start seeing something in the app. We use this div, and because we're going to use more than one container and more than one div in the template, we need brackets: I open the brackets, cut this div, and paste it inside, and this div will have the class name "container" from our CSS file. Then, inside that div, we will have another div that is our inner container, so its class name will be "inner-container". For now, we can just save it. I had a typo here: it has to be "container", otherwise the class will not work. Then I can just put a simple p tag here to see our application running. Back in the browser, we see it already running, with the text appearing on our screen in our application, which means everything is correct.
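Here is the component as it stands at the end of this lecture, plus the matching App.tsx. The p-tag text and the exact class-name spellings ("container", "inner-container") are reconstructed from the narration, so treat them as approximations and follow the CSS file in the resources.

```tsx
// components/ImageControl.tsx
import "./index.css";

const ImageControl = () => {
  return (
    <div className="container">
      <div className="inner-container">
        <p>Hello</p> {/* temporary tag, just to see the component render */}
      </div>
    </div>
  );
};

export default ImageControl;
```

```tsx
// App.tsx — our entry point, rendering the component.
import ImageControl from "./components/ImageControl";

function App() {
  return <ImageControl />;
}

export default App;
```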
19. Setting our State Variables: First things first: we need to set up our state variables so we can use them inside the logic of our component. First, we will need an image state variable, so we say "const [image, setImage]". This image will be a string, since that's how the image comes from our backend, and we assign "useState" with parentheses. Now we need to import useState so we can use it: we leave the "import './index.css'" below, and on top we say "import { useState } from 'react'". Now we can save, and we need to set the type of our image. Right now our image has no type and no value when the component starts, so we want this image to have the type string or null: in the beginning we have no image, so it can be null, and we set the initial state to null. We also need a state variable to hold the file that we're going to upload to the front end and then send to the back end, so we can get the prediction and the results. So again: const, and we have "selectedFile" with "setSelectedFile", and again useState. In this case, we will need a special type, File, which holds all of a file's properties, and we also want to allow null, so "File | null"; we close the brackets and set the initial state to null, because in the beginning we don't have any selected file. To continue with our front end, we also need to set three more variables that are going to change later, but we'll set them now so you can see how they work. We will need a prediction, but this one doesn't need to be a state variable yet: we can just set a normal const named "prediction" that is an array, initialized to an empty array. We also need "isLoading", for when our application is loading; in this case it's a boolean, and we set it to false. The last one will be "error": also a boolean, so we set it to false as well. These three variables will later move into our hook, where we'll handle the requests and they will be set from there, but for now we will just define them like this so they can be used in our template.
20. Prediction And Image Boxes Template: Continuing now, we create our basic front end, where we have the button to upload the image, a box where the image will render, and also the prediction. We just delete this p tag here, and then we open curly braces and check if we don't have an image. So if image is null, undefined, or empty, we say !image, then open a p tag and say, please upload your image. Let's just save it and see what happens on the front end. And we have a message saying, Please upload your image. So far, so good.

Continuing here: if we do have an image, we come again with curly braces and say image exists, so image &&, and we open braces and a div, and this div will have the class name image-container from our CSS file. Then we open the image tag with its source set to our image variable. Here you can see it already appears highlighted, which means it's in use. For the alt we can just say something like uploaded, the class name will be image, our CSS class, and then we close the tag. If we save, nothing will appear on the front end, because we still don't have an image.

Continuing inside of the same container, we will have our prediction. Again, curly braces: if the prediction exists, we will take the first item of our array, and you will see why later, but that's why we set it as an array; in this case it's empty. Again we open braces and a div with the class name prediction-box. Our prediction box will contain the information coming from the prediction itself. Close the div, and inside we have a p tag with the class name category-text, and the category text will show the text coming from the prediction. In this case, we open the curly braces again, and this will come from the back end; it will be rendered from our request. We take the prediction at index zero; this is still an error because we still don't have a defined type there, but it's fine: prediction[0].category, and we want everything in capital letters, so we say toUpperCase. And for these errors to go away in your application, we can just go back to our constant and set it as an array of any, so the any type is used here. This triggers a warning with TypeScript and a linter that doesn't allow any types, but this will be changed later; it doesn't matter for now. Like that, we don't have any error here.

Continuing further down, we open another p tag that will have the class name category-accuracy; it will show the accuracy of the prediction that is attached to the image. We close it, and below we open curly braces again and say prediction, again index zero, the first element of our array, and we want the score, times 100, because we want to show it as a percentage number, and then we say toFixed(1), because it comes back as a long decimal and we want to turn it into a percentage value. Then, outside of the curly braces, we write the percent sign and the word accuracy. We will see later exactly how this works.

Now you are probably wondering what's happening in our front end, so let's go and see. Still nothing, because we don't have any image to show. Now I notice the container sits a little bit to the right, and this is because we set the width, so we can go back, and in our index.css we just delete the width and save it. Now it's exactly in the middle. Going back to our image control, we need to set the container for the error. So after this div that holds our prediction, and after the curly braces of our prediction as well, we give some space here, and we say error: again, curly braces, then error &&, then a p tag, and this p will have the class name error. We close it, and inside we put the error that will come later from our request. For now it's just false; it shouldn't be false, it should be a string instead, so we set a string saying "no error", just to tidy this up a little. After that, we will have our input to upload the image.
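Put together, the JSX at this stage could look something like the sketch below (class names follow the walkthrough; treat the exact markup as an approximation):

```tsx
// Inside ImageControl's return:
return (
  <div className="inner-container">
    {!image && <p>Please upload your image</p>}

    {image && (
      <div className="image-container">
        <img src={image} alt="uploaded" className="image" />
      </div>
    )}

    {prediction && prediction[0] && (
      <div className="prediction-box">
        <p className="category-text">{prediction[0].category.toUpperCase()}</p>
        <p className="category-accuracy">
          {(prediction[0].score * 100).toFixed(1)}% accuracy
        </p>
      </div>
    )}

    {/* error is temporarily a string; later it becomes string | null */}
    {error && <p className="error">{error}</p>}
  </div>
);
```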
21. Image Upload Input: So now for our input, where we'll upload the image directly from our system to the application. After the error here, we will open an input tag, so input. The type of that input will be file, because we will upload a file from the system; in this case, an image file. We say onChange, which will receive the change event of our input; for now, we just set an empty function so we don't have any errors. Then we say accept image: in this case we want all kinds of images, so we just put image/* and all image types will be accepted. Then we close the input tag.

Right below, we want a button where we'll proceed with the submission of the image. So button, and we say onClick; again, because we don't have the click function yet, we'll just create an empty function. Then we say disabled: this button will be disabled when we don't have a selected file, or if it is loading. So if we don't have any file, or the request is loading, the button will be disabled. Then we close the button, and inside the button we have the text we need, and the text will change depending on the state of the application and the upload of the image. So again, curly braces: if it's loading, it will say Uploading. If it's not loading and we have an image, we want the button to say Identify Image, so we can get the prediction. But if not, we want the button to say Upload Image, and the functions will be different.

Now, if we save and go to our browser, we can see we already have something on the screen. Here we have the button to choose the file, so our input, and if you click, it opens a window where we can pick our images. The Upload Image button is disabled because we don't have any image. Our test error text is showing here because we set it to a string that exists; if you put null instead, nothing appears, since later the value needs to be a string or null.
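A sketch of this input and button (the empty handlers are placeholders, exactly as in the walkthrough; the real handlers come later):

```tsx
<>
  {/* Below the error paragraph, inside the same container */}
  <input type="file" accept="image/*" onChange={() => {}} />
  <button
    onClick={() => {}} /* placeholder; real handler added later */
    disabled={!selectedFile || isLoading}
  >
    {isLoading ? "Uploading..." : image ? "Identify Image" : "Upload Image"}
  </button>
</>
```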
22. Explaining Tensorflow SSDMobileNetV2 and COCO DataSet: Now we're going to take a short look at the packages, or the learning models, we're going to use in our application. First of all, TensorFlow is developed by Google, and it is a powerful open source software library for machine learning and artificial intelligence. It's the foundation that enables computers to learn from data and make decisions. Engineers and researchers use TensorFlow to build and train neural networks for tasks like image recognition, natural language processing, and, in our case, object detection.

The SSD MobileNet V2 model we use here is a specific neural network architecture designed for object detection. It's a clever combination of two concepts: SSD, the Single Shot Detector, and MobileNet V2. SSD allows a network to detect multiple objects in an image in one forward pass, making it fast and efficient. MobileNet V2 is a lightweight network structure that's optimized for mobile devices as well as browsers, ensuring the model can run quickly even on smartphones, embedded systems, and slower computers.

Then COCO, Common Objects in Context: the COCO dataset is a large collection of images that serves as a textbook for training object detection models. It contains over 330,000 images with more than 2.5 million labeled instances of objects across 80 categories. These categories range from common items like cars and dogs to more specific objects like traffic lights and tennis rackets.

When combined, these three components create a powerful system for real-world object detection. TensorFlow provides the tools and environment to build and train the SSD MobileNet V2 model using the COCO dataset. The result is a fast, efficient object detector that can quickly identify and localize multiple objects in images or video streams, even on mobile devices. This technology has numerous practical applications. It's used in autonomous vehicles to identify road signs and pedestrians, in security systems to detect suspicious objects or behaviors, in retail for inventory management, and even in augmented reality apps to recognize and interact with real-world objects. The speed and efficiency of SSD MobileNet V2, combined with the comprehensive training from the COCO dataset, make it particularly useful for real-time applications where quick, accurate object detection is crucial.
23. Adding MobileNetV2 SSD COCO Model DataSet: So now we can start adding the prediction model and the dataset to our application. We're going to use this TensorFlow learning model with the COCO dataset. I'll leave this link in the description; you simply need to navigate to this GitHub page, and then we're going to use this exact model here: SSD MobileNet V2 COCO. We just click the link, and you'll probably get a security warning, but you can continue to the site. After you download the model, you have a zip file like this; we'll just extract it, and then we have the folder that we want. Simply copy the folder into our server project, so into the server folder that we created before for our back end. Then here we just need to edit the folder name, remove the creation date, and leave it ending in coco. Now we can save and go back to our application, where we have our code for our server running, so our back end. We can see here that we already have our model in the project, and next we still need to load it into our application.
24. Loading PreTrained Model Into App: So back to our code, the first thing we do is delete part of the code. We can just leave the root route for testing, delete everything else, and save, and then we'll go on to import the model that we just added to our project.

First things first, we need to prepare the environment for it. We will need to import the operating system module, os, so we can set environment variables. After that, we use the os module we imported, then dot environ, and here we open the square brackets and set the TensorFlow flag, in capitals: TF_ENABLE_ONEDNN_OPTS, and we set it to the string "0". This configures the environment before we run TensorFlow later. Then we have to import TensorFlow as well, and for that we do it up here: we just say import tensorflow as tf. Remember to install TensorFlow first; we added an explanation in the previous video.

Now that we have TensorFlow imported, we need to set the variables so we can use this model in our project. Right after we set our environment, we point to our model first. We just create a constant, in this case with capital letters: MODEL_DIR. Here we want to reference the saved_model folder inside the MobileNet folder that we downloaded before. So we open quotes, and it should be in lowercase: ssd_mobilenet_v2_coco/saved_model. Please make sure this path is exactly as you've written it here; remember we deleted the date, but if you left the date, you need to include the rest here. Then we load the model with TensorFlow: we say model, and then tf.saved_model.load, and we pass MODEL_DIR, the directory of our model. This loads the pre-trained model, coming from TensorFlow and MobileNet V2, into this variable. Then we need to get the model's inference signature, so we can use serving_default in this case; there are several options, but here we're going to use the default. So we just say infer, our variable, then model.signatures, open square brackets, and we say "serving_default". Save it. Now we have loaded our model into a variable in our application, and next we're going to see how we're going to use it.
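Putting those steps together, the top of the server file could look roughly like this (a sketch; the folder name assumes you renamed it as described, so adjust the path if yours differs):

```python
import os

# Must be set before TensorFlow is used
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

import tensorflow as tf
from fastapi import FastAPI

app = FastAPI()

# Path to the extracted SavedModel folder (renamed to drop the date)
MODEL_DIR = "ssd_mobilenet_v2_coco/saved_model"

# Load the pre-trained SSD MobileNet V2 model once at startup
model = tf.saved_model.load(MODEL_DIR)
infer = model.signatures["serving_default"]

@app.get("/")
def root():
    return {"message": "Server running"}
```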
25. Run Inference Function: And now we are ready to create our first function, our run inference function. This function is designed to perform the object detection on a given image, using the pre-loaded TensorFlow model that we added here. The function takes care of preparing the image in the correct format for the model, then runs the inference and returns the results. The results hold the objects we need, such as bounding boxes, class labels, and the confidence scores that we're going to use in our front end. This function serves as a bridge between the raw image that we will upload and the object detection model that we added here.

To start, we create a new Python function with def, and we call it run_inference. This function will take an image as an argument, and the type of this image we want to be np.ndarray. Now we just need to import NumPy: back in our imports, we open a new line and say import numpy as np. Now the error here is gone; we have NumPy, and the editor shows everything about it, even the documentation, if you want more information. This function will return a dictionary of string to any for now: we want a Dict, open square brackets, and we add str as the key type, and then Any as the value type. Then we close the function signature. Now, you see we have these two errors because both of them are not defined, so we need to go back to our imports and import them from typing. So from typing we import Dict and Any. Here I have a typo; it's typing, so now it's correct, and the errors are gone.

Continuing inside our function, be careful: again, we need the indentation in Python. We set our first variable, input_tensor, which takes tf.convert_to_tensor, and we pass our image to it. This is basically the preparation, or the conversion, of the NumPy array, this image here, into a TensorFlow tensor, which is the required input for the object detection model. And here I forgot a letter in input_tensor, so let's fix that. Next, we reassign input_tensor to our input tensor from before, this one, and we open the square brackets again and say tf.newaxis, then an ellipsis (...), and our second line is done. What does this mean? This adds an extra dimension to the tensor, creating a batch of size one. Many models expect the inputs to be in batches, even if it's just one image, so we need to create this batch to pass to the input of our model. Then, for the detections variable, we call infer with the input tensor we created. What does this mean? This infer comes from our model signature, from our pre-loaded and pre-trained model. The model will run, process the image, and return its detections. So after we take care of the image and handle it properly, we pass it to our model, which will handle the detection. Here again I forgot the s in detections, because there is more than one. Then we basically just need to return these detections. Now we are ready to use this function in the route.
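Assembled, the function looks roughly like this (it relies on the module-level infer signature loaded in the previous lesson):

```python
from typing import Dict, Any

import numpy as np
import tensorflow as tf


def run_inference(image: np.ndarray) -> Dict[str, Any]:
    # Convert the NumPy image to a TensorFlow tensor
    input_tensor = tf.convert_to_tensor(image)
    # Add a batch dimension: the model expects a batch, even of size one
    input_tensor = input_tensor[tf.newaxis, ...]
    # Run the pre-trained model's serving_default signature
    detections = infer(input_tensor)
    return detections
```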
26. Predict Route: So now we are ready to create our route to get the predictions in our front end. For that, and because it's going to be a POST request, we define it right after where we define our FastAPI app. We can just say @app, from FastAPI, and then post, so it will be a POST request, and the name for our route can just be /predict. So the route is defined; now we need the functionality itself. We're going to make it an async function, to be asynchronous, as we do throughout the course, so async def, and we call it predict. Now, what does our function need to take; which kind of file? We need a file, so the image that we're going to upload on the front end, and then the type this file should be. In this case, it should be an UploadFile that equals File with an ellipsis (...), meaning the file is required, and we want to return a JSONResponse, so we can then handle it in the front end. But then again, we see some errors here, which means we didn't import these yet, so let's go back to the top. From fastapi we also need File and UploadFile; both of these come from the fastapi import. Then we also need JSONResponse, because it's the only one missing here, and for that we just open another line and say from fastapi.responses import JSONResponse, and that's it for our imports here.

Continuing with our function, we first need a variable to await our file contents. We can call the variable contents, because it will hold all the bytes of the file, and we say await file.read(), the function that reads our file, so the UploadFile type. This allows the file to be uploaded and read asynchronously. Then we need to convert the bytes that we upload into an image, converting through a NumPy array and then to an RGB image. How do we do that? First, down here, we create a variable called image, and it will be equal to cv2.imdecode; cv2 provides the decoding for the image, so we say imdecode. cv2, which is OpenCV, is mainly used here for image loading, color space conversion, and potentially resizing. So here we are decoding the image, converting it to the exact type, size, and colors that we want when it's uploaded. In this case, cv2 is giving an error: we need to import it, so back to the top, new line, and we say import cv2. And here we need to clean this up a little bit: let's group the from imports together and the single imports together as well, same with os, to be cleaner and more specific, and now we have clean, more organized code.

Going back to our function, the error is gone. Inside of our cv2.imdecode, we open another set of brackets, and from NumPy we use frombuffer to convert our contents to NumPy unsigned 8-bit integers, np.uint8. Continuing here, we put a comma, and again cv2, and we say cv2.IMREAD_COLOR, so we read the color as well, because the image comes with colors, and the model needs proper RGB colors to make the detection easier. With that, we fix those colors: we need to convert from BGR to RGB. We set a new variable, image_rgb, which is what we want to get; again from cv2, we say cv2.cvtColor, a function coming from cv2, and we pass the image that we decoded here, then a comma, and cv2.COLOR_BGR2RGB, because OpenCV loads images as BGR and we need RGB. So we make sure the colors, before we send the image to the model, come out as RGB.

Then we have the image set, the colors and everything converted and ready to go, and we need to run the detection. We create a new variable, detections, and we call the function we wrote before, run_inference, which converts the image to a tensor and gets the result, and we pass our properly converted image, image_rgb. Then we are going to format the results. These results of the detection, from this variable here, still need to be properly formatted and converted into an object so they can be sent to the front end. First we set the number of detections. The number, of course, is an integer, and we say detections.pop with "num_detections". This extracts the number of detections, and then we convert the detection results into NumPy arrays, which we do now. So again detections, the same variable, and here we open curly braces for a dictionary, where every entry has a key and a value. For the value, we open a bracket and take index zero, then a colon up to the number of detections, this one here, then close it, a dot, and .numpy(). And this is for each key and value in detections.items(). So what happened here? For each key and each value in the detections, we slice out the first num_detections entries, convert them to NumPy, and set the key and value in the new dictionary.

Okay, then we need to make the values returnable as lists. We have this dictionary with all the keys that come from the detections, and we make a loop to convert the detection results into lists: we say for key in detections, then detections[key] = detections[key].tolist(). After we loop through the keys in our detections, each detection value in the dictionary is a plain list. Then we need to map the detected classes to human-readable labels, which we're going to do with our label map. For now, we just add the results variable; be careful with the indentation. Our results will be returned as an array, so let's leave it as an empty list for now, and next we're going to add the label map.
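A sketch of the route as it stands at the end of this step (app, run_inference, and the model are defined earlier in the file; the results list stays empty until the label map is added):

```python
import cv2
import numpy as np
from fastapi import File, UploadFile
from fastapi.responses import JSONResponse

@app.post("/predict")
async def predict(file: UploadFile = File(...)) -> JSONResponse:
    # Read the uploaded file asynchronously
    contents = await file.read()
    # Decode the raw bytes into an OpenCV image (BGR channel order)
    image = cv2.imdecode(np.frombuffer(contents, np.uint8), cv2.IMREAD_COLOR)
    # The model expects RGB, so convert from OpenCV's BGR
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    detections = run_inference(image_rgb)

    # Keep only the first num_detections entries and convert to NumPy
    num_detections = int(detections.pop("num_detections"))
    detections = {key: value[0, :num_detections].numpy()
                  for key, value in detections.items()}
    # JSON can't hold NumPy arrays, so convert each value to a plain list
    for key in detections:
        detections[key] = detections[key].tolist()

    results = []  # filled in after we add the label map
    return JSONResponse(content=results)
```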
27. Label_Map: Now, we're going to need a label map in our code, so that the predictions from our model come with an ID, and that ID maps to a label with the name of the object that will be rendered on the front end. Because we are using SSD MobileNet and the COCO dataset, they already provide us with a label map as a Python file that we can add to our application. If you go to this URL, which I will leave in the resources, you can see there is already a label map provided, with an ID number and then a name, the label of the object itself. This particular label map can be quite big; it can have hundreds or thousands of labels, but I shortened it, and now we can add it to our application. We go to Visual Studio Code, and there, in the main folder, we create a new file and call it label_map.py. Then I just paste the label map; I will leave this file in the description as well, so you just need to copy and paste, and then we have our label map here. But to use it in our main file, we need to import it: we go up to our imports section and say from label_map import label_map. Basically, what we are doing here is importing this object into our main application. Now that we have our label map imported into our main file, we can continue with our route.
28. Returning Results from Predict Route: Now we can continue with our predict route. We stopped here when we created our results array. Right before the results array is returned, we're going to loop through the number of detections: we say for i in range(num_detections), so we loop through the entire range of the number of detections that we get here. We continue the function and check the detection score at this index, because, if you remember, several detections will come back from one image, and we only want to take advantage of the most valuable detections, the ones with a higher score, because they are the closest match to the object that we want. So if detection_scores at our index i is equal to or higher than 0.5, that is a detection that interests us.

Then, if we have that score and that detection, we say that class_id will be an integer, and we take the value at the same index, from detection_classes at our loop index. Basically, we are converting our class ID to an integer. Then we can set the category as well. The category will come from our label map, the labels from the file we added before: from our label map, we get the entry for class_id, or "unknown" as a fallback. So the category is looked up with the class ID of the detection that we turned into an integer, and it will match one of the indexes there. Then we also need the box of the detection: we say box, then detections, and here we want detection_boxes; be careful with the spelling here, then we want the index as well.

And finally, we can fill our results: we call our results array and append to it, so we can then return this array and have the data that we need on the front end. Here we open an object with the box, which is just the box we created before; we also have the category, again the same category from here, and then the score, because we need the score to render on the front end. And because we didn't add the score yet, we add it here: I had an error here, I can see it says boxes; instead of boxes it should be scores, so detection_scores, and we open square brackets again and pass the index. Now everything should be okay with this loop.

Then we want to see the results somehow. We go back here, two indentation levels out, and we print our results: we say print, final results, for testing purposes, so we can see the results on our console. Then we need to return the data already formatted for our front end: because the function should return a JSONResponse, we say return JSONResponse, and our content will be the results, so here, our array that we filled through these steps. We can save it, and next we can test it in our console.
29. Testing Our Route: Now we can go and test our route and our server. I downloaded a few images for us to test, and I'll leave them in the resources, and we have to run our application with uvicorn. But when I ran the application with uvicorn, I got this error. If you have this error, please follow these steps to solve it: you need to install python-multipart. First, I stop the server, and then I say pip install python-multipart. This is required because we are using form data in our route. Then we can use uvicorn again: we say uvicorn main:app --reload. Now we can check it in the browser: we go to the URL where we have our application, and to the /docs page. There we can already test whether our predict route is working or not. We just open it here, and, as you can see, it is multipart/form-data; that's why we needed that pip installation, because our route was not working precisely because of this POST method.

We say Try it out, and here we can choose a file. Please choose a file of your preference, but keep in mind the object has to be in the label map, so you can just use the images I left in the resources. We're going to use this image of a boat; we upload it, and then we say Execute. The execute went okay, everything works fine, and we have our response. As you can see, there are several boxes, but in this case we're going to take the one with the highest score. It identified the category of the item as a boat, as we expected, and with 98% accuracy. On the front end, we will round this up to a percentage; here we have a decimal number, 0.98, meaning 98%. We can try with another image: we just reset it, choose, say, a cat, press Execute, and our prediction is a cat. You can also see the results here on your console, on your terminal. The last two that we predicted, the boat and the cat, came out with 99% and 98% scores, so the images are clear enough for our model to detect them.
30. Use Upload Image Hook: Now that we have our back end, our server, set up, we can go back to our front end and handle the connection. First things first, we will create a React hook. A React hook allows us to keep a piece of code, a function, or any data in a single file that can be used, or hooked, around our application. To create it, we come here to the sidebar where we have our files, create a new folder, and call it hooks. Inside of this hooks folder, we create a file whose name starts with use, because React hooks always have to have the use prefix, and we call it useImageUploadHook; this is a .ts file, so a TypeScript file.

Now, inside the React hook, we first import what we need: we will need useState from react, and we will also need axios. Axios will handle all our HTTP requests through the application. If you don't know what axios is, it handles HTTP requests, like GET or POST, and it is a helpful package that makes them simpler. Then we create the hook with export, to be used throughout the application, and const, and we call it useImageUpload. useImageUpload gets one argument, and this argument is going to be the string, the URL, that we're going to pass in from wherever we want; we'll call it uploadUrl, and the type is string. Then we have an arrow function and return everything from here.

First, we need some state. We create the first state variable and call it prediction; the prediction is what we're going to return to our component. These results will come from the back end, be processed here, and then we can use them anywhere. We say useState and just call it. Then we also need a loading indicator: we create a boolean called isLoading, with setIsLoading, and we use useState again; in this case, we want it to be of type boolean, and we start with false, so our loading state is false in the beginning. Then const again; we also need one state variable for the error. So we say error: if this application hits any error, we'll set it here, with setError. And because it will be an error message, we type it as string, but if we have no error, we have null. So we start the life cycle of this hook with this state as null, so no error in the beginning.

Then we create our function inside here: we need a function for uploading our image. We say const uploadImage, which will be an async function that takes file as an argument. Here we upload our image, and the type will be File. Inside, first we need a FormData, because we will act as a form when we upload our image; as we saw before, the endpoint expects a form, and we set it as new FormData(). Then we append the image to our form data: formData.append with the file we passed as an argument. Okay, then, because we are uploading the image, we need to set loading to true. While we are uploading the image, we have to show loading as true, so we could later attach a spinner or a loading indicator to our application. So loading is true while we are uploading the image, and then setError to null, because we still have no error here.

Then we have a try and catch block. With a try and catch block, we try to get the information that we need, and we catch any error if one exists. To create it, we write try, and back here we write catch, where we receive any error if it exists, and then open another block. But we are missing some parts of our application here, like, for example, types. We want our hook to return a specific type, and we also want our prediction to be a specific type. First, I have a typo here, so it's prediction; and for the types, we're going to create them now.
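So far, the hook skeleton looks roughly like this (a sketch; the types and the try/catch body are filled in over the next two lessons):

```ts
import { useState } from "react";
import axios from "axios";

export const useImageUpload = (uploadUrl: string) => {
  const [prediction, setPrediction] = useState(null); // typed in the next lesson
  const [isLoading, setIsLoading] = useState<boolean>(false);
  const [error, setError] = useState<string | null>(null);

  const uploadImage = async (file: File) => {
    const formData = new FormData();
    formData.append("file", file); // must match the FastAPI parameter name
    setIsLoading(true);
    setError(null);
    try {
      // request goes here (next lessons)
    } catch (err) {
      // error handling goes here (next lessons)
    }
  };
  // return values added later
};
```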
31. Result Types: Now, for our types, we could either create the types here or, to make the application more organized, create a new folder, which is what we'll do, and we call the folder types. Inside of the types folder, we create another file called types.ts, and here we can start creating the types that we want to use. First, we're going to create the Prediction type that we're going to send to our main component. For that, we say export, so we can export this type, then type, and we'll call it Prediction. Prediction equals open curly braces, and then we say category, because we need a category, so the label, as you remember, and we type it as a string. Then we also need to show the score: we have our score, and our score will be a number, if you remember that 0.98 that we're going to send along as well.

After that, we also need to type what we return from our hook. We want the exact shape that our hook returns, namely UseImageUploadResult. We say, again, export type UseImageUploadResult, and that will contain uploadImage, which in this case is a function, like the one we have, that takes file with the type File and returns a Promise of void, so we can return whatever promise we want there. Then we also have the prediction; if you remember, the prediction comes as an array, so here we use the type we just created, Prediction. But because it's an array, we want an array of Predictions, and it can also be null; it can hold nothing if there's an error or if there are no matches found, so we have to allow null as well. Then we have isLoading, which is a boolean, and then the error, which, if you remember, is a string or null.

Now we can go back to our hook and set what we need here. In the first state variable, we set the type to Prediction: we open the angle brackets here, and we say it's an array of Predictions that can also be null. The first state, when the component renders, will be null: when the component renders, we have nothing yet, so we set this to null. Now we need to import our types so we can use them in the application: in our imports section, we say import, open curly braces, and import both of our types, so Prediction and UseImageUploadResult. By mistake I imported it twice, so just delete this one. Now, here, we want to use UseImageUploadResult as the return type of our hook, what it will return at the end. We get this error because the hook needs to return something and we are still not returning anything; this error will disappear after we complete this block of code. We will continue next.
32. Returning Data from Hook: Now, here in our try and catch block, we still need to make some calls so we can get the exact results to then use in the main component. First of all, we make the POST request with the axios that we imported: we say const response equals, and because it's an asynchronous operation, we say await, and then axios.post. We want a POST because we're going to send the file, and what comes back to our main component will be the prediction, an array of predictions. Here we open the parentheses for what we're going to send: first, the URL, which we still don't have in the application, but we'll set that later; then the form data, so the formData that we set here; and again a comma, and we open the headers with curly braces. For the headers, we say headers, curly braces, and here we set the content type: open quotes, Content-Type, and in this case, if you remember from our tests on the server, it is multipart form data, so we say multipart/form-data.

Okay, so then, after the response variable, or the response of the request, we give another line and set a console.log, so we can see the response. We say server response, because if something happens and the data is not coming, we can check on the console whether it's really arriving or not, then response.data. Then we need to check that everything is okay: we say if Array.isArray with the response data, so if this is an array, and since it's an array, it has a length, and if that length is bigger than zero, which means it's populated, we can say the data exists and set the prediction to response.data. If not, so else, we set the error, because the condition doesn't hold and we don't have any data; we set the error to say "Unexpected response format from server", meaning something went wrong on the server. If everything goes correctly, our application receives this data and our prediction is populated. But remember, here we are still in the loading state, because it's still true; we will reset it later so the data shows.

Now, continuing to our catch: if we have an error, first we set a console.error, and, like I said, we write an error message about uploading the image, and then we log which kind of error it was, so this error also shows as a console error. Then we open an if, and this is because we are using axios: we say axios.isAxiosError. Axios will detect whether this is an axios error, and if it is, we set a more detailed error. Here, maybe give it more space because we need it, and we open a backtick; backticks here, different from quotes, and we say failed to upload image, then open a dollar sign and curly braces, and we pass the error message, error.message, that comes from here. Then we continue: a colon and then the status, which is just useful information in case of errors, so error.response, with a question mark because it's optional, so that if it doesn't exist it doesn't continue, then status. We close the curly braces, another dollar sign, and we add the response detail in the same way, or just an empty string. Then, in our if block, we also need the else: if none of those axios cases is true, but we still have an error, we just say "Failed to upload image. Please try again." So if it's not an error from the server or an axios error, we set this generic error message.

Then we want to finish the block, and we say finally here: after our try and our catch, we finalize, and in finally, whatever happens, we call setIsLoading with false. So now we are not loading anymore; we are either showing an error or the actual data. Save, but then how do we extract this? We need to return something, so we return every piece that we created: we say return, open curly braces, and we return uploadImage, so the main component can call the function, the prediction that we need to send as well, isLoading so the component can check the loading state, and then error. Now, if we save, we still have an error around the caught error, because it's of type unknown; to fix it easily, we just cast it here, and the error disappears. But I have a warning about any here: this is from my linting, because ideally, in big applications or production applications, we should never use any, or only in very specific cases, because it can break the application in the long term or during continuous development. I usually try to avoid the use of any, but in this case, we can just use it. Now, save it, and we are ready to test this part in our application.
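A sketch of the finished try/catch and the hook's return (the exact error-message wording is an approximation of what is described above):

```ts
try {
  const response = await axios.post(uploadUrl, formData, {
    headers: { "Content-Type": "multipart/form-data" },
  });
  console.log("Server response:", response.data);

  if (Array.isArray(response.data) && response.data.length > 0) {
    setPrediction(response.data);
  } else {
    setError("Unexpected response format from server");
  }
} catch (err) {
  console.error("Error uploading image:", err);
  if (axios.isAxiosError(err)) {
    // Include the message, status, and any detail the server sent back
    setError(
      `Failed to upload image: ${err.message}` +
        ` (status: ${err.response?.status ?? "unknown"})` +
        ` ${err.response?.data?.detail ?? ""}`
    );
  } else {
    setError("Failed to upload image. Please try again.");
  }
} finally {
  setIsLoading(false); // done loading: show either the error or the data
}

// ...and at the end of the hook:
return { uploadImage, prediction, isLoading, error };
```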
33. Using Hook In Image Control: So the first thing we should do is go back; we can close all of this and go back to our ImageControl.tsx. If you remember, we created these placeholder variables here that we said we would replace, and we have to change that now. First, we say const, and we destructure the same things that we are returning from our hook: we have uploadImage, we will need prediction, isLoading, and the error. These are already in use in our front end here, and they will now come from the hook with the correct data. Then we need to call the hook, so we say useImageUpload, and if I just press tab, it will import the hook for me: import useImageUpload from our hooks folder and the correct file. Now I have an error here, and why is that? Because, if you remember, our hook useImageUpload requires an argument, the upload URL, meaning we need to pass our key. For now, we just set it as an empty string, but we still need to add the key for our server.
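In ImageControl, the change could look like this (the import path assumes the hooks folder sits next to the components folder; adjust it to your structure):

```tsx
import useImageUpload from "../hooks/useImageUploadHook";

// Inside ImageControl, replacing the placeholder constants:
const { uploadImage, prediction, isLoading, error } = useImageUpload("");
// "" is a stand-in; the real server URL is wired up in the next lesson
```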
34. API Key: Now, we need the key for our server, or rather the URL of our server, so we can use it here. For that, and to keep it separate so it's not sitting here in the component, we create a new folder called keys, and inside we create a new file called apiKeys.ts. Please note this is a very simple way to keep the keys out of the component, but as a matter of security, you should not use it like this; use an environment variable or a secrets file instead, so the user will not have access to it through dev tools or some other methods. For that, we just create a const: we say export const, and we call it apiKey, equals, and then, because it's a string, we open quotes. If you remember, it is http://127.0.0.1, and we are running the server on port 8000. If you changed the port, please use the port that you are using. We also set the TypeScript type, and because it's a string, we set it as string.

Now we can use this apiKey in our application. We go back to our image control, and first we need to import it: we say import, open curly braces, and we import our constant apiKey from our keys folder and the apiKeys file. Then, here, where the hook expects the upload URL, we pass what we want: first, we open the backticks; the backticks allow us to use variables inside of the string, and we open the dollar sign and curly braces, and then we have our apiKey. But because our apiKey, the base URL, just ends with a port, we also need to append /predict, because that's the POST route that we created before in our application. So now we save it, and we can continue.
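Together, that could look like this sketch (the port assumes uvicorn's default of 8000; change it if yours differs, and, as noted above, prefer an environment variable in a real app):

```ts
// keys/apiKeys.ts — simple constant for the course; not a secure pattern
export const apiKey: string = "http://127.0.0.1:8000";

// ImageControl.tsx
import { apiKey } from "../keys/apiKeys";

const { uploadImage, prediction, isLoading, error } =
  useImageUpload(`${apiKey}/predict`);
```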
35. Handle Upload and Handle Image: We can already use these returned values from our hook, but we still need to create functions to handle the image change, or the upload of the image. The first thing we're going to do is add a new function here, and we call it handleImageChange. This handles the event that fires when we change the image, or choose a new one, and its argument will be an event. This event will have the type ChangeEvent of an HTML input element, because we are choosing an image from our computer through this input here. So we say ChangeEvent, the type that comes from React, then open the angle brackets, and inside we have HTMLInputElement. And because we are using this type from React, we need to import it: after the useState, we add ChangeEvent to the import here. The error is gone, and we are ready to continue with our function.

It's going to be an arrow function, and then curly braces. First we set a variable for the file, so const file, and this is event.target.files, with optional chaining because it can be empty, and we want the first item of this array. Then we check that everything is correct: if we have a file, we open curly braces again, and we say setSelectedFile; remember our state variable here, the one with the type File. So we call setSelectedFile with the file, the one that was chosen, and then we set the image to a URL. We use another interface in this case for the URL, and we say URL.createObjectURL. When we choose an image, this creates a URL for it, so we can display it immediately. Then we save, and I have an error here because URL should be in capital letters. We save it, and our handleImageChange is working, but we need to use it in our template. Back here in our input, we have the onChange attribute, and we just delete the empty function and call our function, handleImageChange. I'm also seeing an error here, and this is because I forgot the parentheses: toUpperCase is a method and needs parentheses, so just add them as I do.

Now we need to handle the upload. For that, when we want to send the file to our server, we say const handleUpload equals an async arrow function again, and then: if there is a selected file, we await the uploadImage call with that selected file. uploadImage is the function in our hook, the one we created to handle that part. While this is uploading, we can await it; if everything goes correctly, it will send us the data. Else, we just log a console error saying that no file is selected. So now we can save. But before we test it, we need to add this function somewhere in our template, because that's what's going to trigger the event. If you remember, we have this button with an empty onClick here, and that's where our function goes: we say handleUpload. Now we can save it and test our application.
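Both handlers could look like this sketch (they rely on the state setters and the hook's uploadImage from the earlier lessons):

```tsx
import { useState, ChangeEvent } from "react";

// Inside ImageControl:
const handleImageChange = (event: ChangeEvent<HTMLInputElement>) => {
  const file = event.target.files?.[0];
  if (file) {
    setSelectedFile(file);
    // Create a local object URL so the chosen image shows immediately
    setImage(URL.createObjectURL(file));
  }
};

const handleUpload = async () => {
  if (selectedFile) {
    await uploadImage(selectedFile);
  } else {
    console.error("No file selected");
  }
};

// ...and wired into the template:
// <input type="file" accept="image/*" onChange={handleImageChange} />
// <button onClick={handleUpload} disabled={!selectedFile || isLoading}>
```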
36. Testing Image Upload: So, in our application, if you don't have it running, we just need npm run dev to start it, then go to localhost, and on our localhost there's an error. This is because we didn't install axios. So let's stop our dev server, and then we say npm install axios. Okay, now we can try again, npm run dev, back to our browser, and everything is correct. Now we can upload the image: just press Choose File here, we have our images folder and we can open this one; the change event changed the image correctly here, and then we can press Identify Image, and we get a network error. And why do we have the network error? Because these applications talk over HTTP on different origins, and we need to set up, or handle, cross-origin requests so they are accepted by the application. So let's do that next.
37. Allow CORS: In our server, in our main Python file, we need to add something else to handle the cross-origin requests of our application. First, we need to import the parts that we're going to use: here, back in the imports, we say from fastapi.middleware.cors, for cross-origin, we import CORSMiddleware. Then we go back to right after our app = FastAPI(), open a few lines, and we say app.add_middleware, from FastAPI, and open the parentheses. We pass CORSMiddleware, and then we say allow_origins, which is going to be an array. On the next line, we say allow_credentials equals True. Then we say allow_methods, and these methods are going to be, again, an array, but in this case we open quotes and put a star, to allow them all; then again a comma, and then allow_headers, same thing, so an array, quotes, and a star to allow all of them. Then we just add the commas here, and it should be ready to go.

Here, in allow_origins, we set the strings of our front end, the URLs that we are using, and we can use several. For example, http://localhost, because our applications are running on localhost; we can set this localhost, or we can even use multiple localhost entries here; for example, again quotes, and say http://localhost:4200. So if we have, say, a React application here and an Angular application there, both can be used, and these localhost origins will not cause problems anymore. So I save it, and then we can test it again.
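Put together, the middleware setup could look like this sketch (the allowed origins must match the host and port your front end actually runs on; 5173 is Vite's default and 4200 is Angular's, used here as examples):

```python
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

app.add_middleware(
    CORSMiddleware,
    # Front-end origins allowed to call this API (example ports)
    allow_origins=["http://localhost:5173", "http://localhost:4200"],
    allow_credentials=True,
    allow_methods=["*"],  # allow all HTTP methods
    allow_headers=["*"],  # allow all request headers
)
```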
38. Getting the Results into screen: Now, back on our front end, we can just refresh the page and try again. Open the images again; let me upload the plane again, and then we press Identify Image. As you can see, it shows Uploading, and here we have the label that it detected, an airplane, and the percentage. The percentage we set up here: if you remember that strange number from before, that long number with decimals, we are handling it here with toFixed to show it as a percentage number. We can try another, say a zebra, and you see the image changed, but we still need to press Identify. The zebra is actually a very easy image to identify, because of the stripes. Then we can try more, for example, a bus, also at 98% accuracy. So now, feel free to upload more images and try them. Keep in mind that the model is still learning and will make mistakes: it will give different labels and accuracy, or even say that the object doesn't exist. Feel free to try to make a more complex application, either on the back end, or even try to improve this front end to fit your own needs.
39. Splitting Into Smaller Components: One of the strengths of React is that it's component based. Everything should live in smaller components, to keep the code clean and the project more understandable, and so the components can also be reused in other components. So here is how we do this. First, we're going to split these two pieces into smaller components. We can start with the image component, and for that we go again to our components folder, create a new file, and call it ImageComponent.tsx. Here we set const, call it ImageComponent; it's a functional component, so our function, and then we return something, and then we export default ImageComponent. Then we simply copy this div and everything inside it, cut it, and inside of the return, we paste the image content. We have an error here, and we'll see later how to fix it, when we pass props as well, so we can save it for now. Then, here, we can import our newly created component: we just say ImageComponent and close the tag. Let's see here: as you can see, we import this ImageComponent from ImageComponent.

We can also split the prediction box, where we have our score and categories, into another component. Again, we come here and we say PredictionComponent.tsx, and then we do the same: const PredictionComponent, again a functional component, and then we return what is inside of this prediction box here. Copy the div and everything inside it, cut it, and move it to the PredictionComponent. We still have the error; we'll look at it later, as I mentioned, and then we have the export default PredictionComponent, and save. Then, in our image control, we call the component again: in this case, it's the PredictionComponent from PredictionComponent; don't forget to import it here. And now, to get rid of these errors, we need to pass our props, and we'll have a look at that next.
40. React Props: So here we see both of these errors that happen in our components, and that's because we don't have props, so this object here does not exist. To fix that, we simply need to pass the props, as I mentioned. Props are a way to pass data from parent to child: one component passes data to another component, so it can render dynamically. Depending on whether you want to reuse the component or not, you can pass different data into it. And here is how we can do that.

First, we will type this component as a functional component: here we have a colon and then FC, for functional component, and we open the angle brackets here. In this tag we need to pass our type: if you remember, in our types we have this Prediction type that contains category and score, which we used to render on the screen. So here we can just import that type: import Prediction from types. This other error is because we need to import the FC type from the React library: we say import, open curly braces, FC from react. And as you can see, the errors are gone. Now we also need our props, which we're going to use as an argument, and for that we open curly braces; remember, inside the type we have the category and the score, so we can just destructure category and score. Like this, the error should be gone, but for now it's still here: we just need to delete this prediction reference here, which we don't use, and simply use category and score. Now all the errors are gone, and we save it. Then, where it's used in our image control, as you can see, it immediately shows an error, and that's because it's missing those props. What we need here: first, we need a category, so we open a curly brace and pass the prediction, first element of the array, and category. We do the same for the score: again, curly braces, prediction[0], and score. If we just pass it wrongly, it gives an error because it expects a number; once the score is a number, all the errors are gone.

Now let's do the same for the image component. In the image component, because we don't have a type in this case, we'll do it differently. First, again, make it a functional component, and import FC from the React library. Then here, if you remember, this image is a string, and when we don't have a named type we can declare it inline: curly braces, and we say imageUrl, which is a string, so we are expecting a string, and then pass it here as an argument as well. Then we also need to change the src here to imageUrl, and all the errors are gone. Then, in our image control, again, we have the same issue, and we need to pass this image from our logic to the image component: we say imageUrl equals image, and all the errors are gone. Now we can test it in our application: back in the browser, we choose a file, I just load an image here, and we can try: no errors, everything is working as expected, and there we have it. So this is how we pass data from parent to child components in a React application.
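The two extracted components could end up looking like this sketch (two separate files, shown together; the import paths assume the types folder sits next to the components folder):

```tsx
// PredictionComponent.tsx
import { FC } from "react";
import { Prediction } from "../types/types";

const PredictionComponent: FC<Prediction> = ({ category, score }) => (
  <div className="prediction-box">
    <p className="category-text">{category.toUpperCase()}</p>
    <p className="category-accuracy">{(score * 100).toFixed(1)}% accuracy</p>
  </div>
);

export default PredictionComponent;

// ImageComponent.tsx — inline prop type instead of a named one
import { FC } from "react";

const ImageComponent: FC<{ imageUrl: string }> = ({ imageUrl }) => (
  <div className="image-container">
    <img src={imageUrl} alt="uploaded" className="image" />
  </div>
);

export default ImageComponent;

// Usage in ImageControl:
// <ImageComponent imageUrl={image} />
// <PredictionComponent category={prediction[0].category} score={prediction[0].score} />
```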
41. Use Cases And Limitations: In today's world, object recognition apps have become incredibly useful tools, transforming how we interact with our surroundings. Imagine walking down a busy street, smartphone in hand: with a quick scan, your device can identify potential hazards, helping you navigate safely. Or picture yourself in a supermarket, where the same app assists in managing inventory, making restocking a breeze for store owners. These apps aren't just for business, though. They are opening up new worlds for those with visual impairments, describing the environment in real time. Nature enthusiasts find them invaluable for identifying flora and fauna on their hikes. In classrooms, these apps are making learning more interactive, turning everyday objects into educational opportunities.

But it's not all smooth sailing. While these apps are clever, they are not infallible. On a cloudy day, or in a dimly lit room, they might struggle to accurately identify objects. And while they are great at recognizing common items, they might scratch their digital heads at more obscure or specialized objects. There's also the question of battery life: on mobile devices, running these apps continuously can drain your phone faster than you might expect, potentially leaving you with a dead battery at an inopportune moment, or even drain the battery of your computer. And in our privacy-conscious world, the constant use of cameras raises some eyebrows; people might feel uncomfortable knowing their surroundings are being constantly analyzed. Moreover, these apps see objects but don't fully understand them. They can tell you there's a chair and a table, but they won't understand that, together, they make a dining set. They're like a tourist in a foreign land, recognizing individual words but missing the full context of the conversation.

As we continue to integrate these apps into our daily lives, we're discovering both their potential and their limitations. They are incredibly useful tools, but not without their quirks and challenges. As the technology evolves, we are learning to balance its benefits with its drawbacks, finding the right place for these digital eyes in our analogue world. So, with that said, I cannot wait to see what you come up with, and your ideas for creating these kinds of apps.