Transcripts
1. Introduction to the AI Course: Hello there, and
thank you so much for enrolling in the course. The purpose of this
introductory video is to welcome you officially, to introduce myself,
and also give you a general idea of what to
expect from taking this course. So first things first,
my name is Alexander. I am an online cybersecurity
instructor with more than seven
years' experience, and I'm also a big
AI enthusiast. So what can you expect
from this course? Well, let me just give you a quick summary of
the curriculum. First of all, we're
going to delve into what AI actually is, as well as the
foundations of AI. We'll tackle the history
of AI a little bit. And then we'll delve into the three main modules
for this course. Starting off with
machine learning. And then we'll take a
look at deep learning, and then natural
language processing. And these are the
three big modules or three big sections
in this course. And then to round out the course, we'll take a look at the
future of AI and what we can expect from artificial
intelligence within the next ten to 50 years. So a few things to know about
this particular course, there are going to be quizzes at the end of each
section. So please take the quizzes. Don't worry. The questions are actually
not that difficult. They're just to test
what you've learned. So don't panic. These are
very, very easy questions. And then resources, I'm
going to provide for you all the slides I will
use in this course, as well as the book, which is basically a PDF book summarizing the entire course. Now, as of the time of me recording this particular video, I am still working on that book. So please do be patient. Maybe you've enrolled in the course and the
book isn't ready yet. Don't worry. I am working
on it. I'll let you know once the book is
ready to download. Also, if you do want to use the slides in any sort
of formal presentation, you are welcome to do so. I only ask that you
credit me, Alexander One, as well as my company
labsyba.com. Thank you. And then, of course,
questions if you have any questions about anything that I've covered
in this course. Maybe there is something
you don't quite understand. Always feel free to
reach out to me. I'll be more than happy to
answer all your questions. So with that being said, I want to welcome you once again to this course where we're
going to talk about AI: the fundamentals
and the foundations of AI. And I can only
hope that you will enjoy this course
because I have put in great effort to
make the lessons as entertaining but also as
educational as possible. So welcome once again.
Let's get started.
2. Section Preview Intro to Artificial Intelligence: Welcome officially to the very
first module, Introduction to Artificial Intelligence. Now, one thing you
should know about me on a personal level is that
I love watching movies. I love going to the cinema. It is one of my all
time favorite hobbies. And one thing I like to
do as an instructor is to incorporate movie clips into
my lessons whenever I can, because I think that
they can be very, very informative, but also very entertaining at
the exact same time. Now, out of the hundreds
and hundreds of movies from Hollywood that have artificial intelligence
as the central topic, I was thinking about
the perfect clip that I could use to
introduce this course, and I think I may have
found just the right clip, so sit back, relax, enjoy this clip, and I'll
see you at the end of it. Good afternoon, Hal.
How's everything going? Good afternoon, Mr. Amer. Everything is going
extremely well. Hal, you have an enormous
responsibility on this mission, in many ways,
perhaps the greatest responsibility of any
single mission element. You are the brain and central
nervous system of the ship, and your
responsibilities include watching over the
men in hibernation. Does this ever cause you
any lack of confidence? Let me put it this
way, Mr. Amer. The 9000 series is the most
reliable computer ever made. No 9000 computer has ever made a mistake or
distorted information. We are all, by any practical
definition of the words, foolproof and
incapable of error. Anyway, Queen takes pawn. Okay? Bishop takes Knight's
pawn. Lovely move. Uh, Rook to King One.
I'm sorry, Frank. I think you missed it.
Queen to Bishop Three, Bishop takes Queen,
Knight takes Bishop. Mate. Ah. Yeah, it looks
like you're right. I resign. Thank you for a very enjoyable
game. Thank you. Welcome back. Hope
you enjoyed that clip. Now, it was taken
from the movie 2001: A Space Odyssey, and that movie was made back in the year 1968. Now, I'm making this particular
video in the year 2025, so that's, what, 57 years ago? And the reason why
I bring this up is because I want
you to understand that AI as a technology or
as a subject or as a topic, has been in existence
for decades now. A lot of people seem
to think that, Oh, AI is this new thing that
only just emerged recently. That's simply not
true. AI has been around for many, many years now. Going back to the clip itself, what exactly did we see? We saw the introduction of a particular AI model known
as the HAL 9000. Obviously, you can
see that it is an extremely
sophisticated AI model because according
to the presenter, the HAL 9000 is the central nervous system
of the entire spaceship, and it's also responsible
for the well being of the human crew members while
they are in hibernation. So this is a very capable, highly advanced, highly
intelligent AI model. And we also saw the
AI model playing chess with one of the human crew members,
and obviously it won. Now, a fun fact in case you're not a chess player
(I'm a chess player, and I love watching chess
on YouTube as well): today, we do have
AI models known as engines that actually help professional chess players
to prepare for games. These models are extremely
intelligent, highly advanced. So professional chess
players actually use these models to
analyze chess games, to prepare for competitions, and also to prepare traps
for their opponent. So I think it's
actually fascinating that this particular
movie, 2001: A Space Odyssey, was able to correctly predict that sometime in the future, we're going to have AI models that will be so good at chess, they'll be able to
beat any human being. But going back to
the clip again, we also see during the interview between the presenter
and the actual AI model, the AI model said something
very, very chilling, and that is, it is incapable of making any sort of mistakes. And I think that's very,
very scary because it raises the legitimate concern
about AI in the future. What happens when we begin to
create AI models or systems that are so intelligent that they are capable of
independent thought, and they decide to start
making decisions on their own? Now, there are a
lot of skeptics who claim that this will
never, ever happen. AI will never become self aware or become
that intelligent. However, there are others who believe it might
be a possibility. We're going to delve into this
much deeper in the course. But one more thing I wanted
to mention before I round up this introduction is the topic of natural language
processing, NLP. If you noticed in the clip, the humans were having
almost sort of like a very natural conversation
with the AI model. The AI model, the HAL
9000, was able to understand what the humans were saying to it
because of NLP. NLP, natural language processing, is what allows machines
or AI models like the HAL 9000 to interpret
voice commands from humans and then execute a particular kind of
task or function. Of course, I'm going to
dedicate an entire module to talking about NLP later
on in this course. So hopefully you've enjoyed this short introduction to the world of Artificial
Intelligence. Let's now move on to
the very next lesson.
3. What is Artificial Intelligence: What is artificial intelligence? Well, it's basically
the simulation of human intelligence
in machines. You can think of it as
us humans trying to transfer our intelligence
over to machines. Now, we do have key
characteristics for artificial intelligence. We do have the ability to learn. So AI systems should be able to learn and
improve over time. And then reasoning,
they should be able to make logical reasoning and deductions and then
eventually produce a result, and then perception, which means their ability to process sensory
data like images, sounds, and so on, and
then produce a result. And of course, the use of natural language processing NLP, which allows AI models to understand and interpret
human language. Now, the primary goal of AI at the end of
the day is to allow models to perform
cognitive functions similar to that of humans. Now, we do have
different types of AI. We have the narrow AI, what we call the
task specific AI. It's also called the weak AI. These are AI you would
find in your firewalls, in your recommendation systems, in your virtual chatbots
like Alexa, and even your ChatGPT, Claude, and DeepSeek models, and so on. And then we have the next
stage which would be the general AI which have the ability to possess
human like intelligence. We're not there yet, but a lot of people
believe that eventually we will be able to develop
AI that's this advanced. And then in the future,
super intelligent AI. A lot of people
believe that we're never, ever going to develop AI that'll be that sophisticated. But yet again, we also
have a few people who do believe that we will
eventually get there. So, AI in everyday life: we are already using AI in virtual assistants
like, you know, Alexa and Siri, your recommendation
systems like on Netflix, Spotify, YouTube, your smart home devices
all use AI as well. Our self driving cars, like, of course, Tesla, that's all AI. And we also have AI in healthcare with the use of medical imaging
and so much more. So AI is already playing a
big role in our daily lives. Now, I do want to address some common myths
around the world of AI. The first one here is that
AI has emotions like humans. Now, I am very guilty of
this because whenever I use my favorite AI models like ChatGPT and ask it to do something for me, it
does it very, very well. I end up thanking ChatGPT. I
will say something like, Oh, thank you so much. You did something
wonderful for me today. And, of course, ChatGPT will respond by saying,
You're welcome. And I say these
things almost subconsciously because I believe that if I'm nice to ChatGPT and I praise ChatGPT whenever it does
something good for me, it's going to reward me in the future with even
better results, which is, of course, not true. ChatGPT cannot
understand emotions. It doesn't process emotions
like we humans do. Also, AI will replace all jobs. Now, there is no doubt that AI will replace
a lot of jobs, but it's never going
to replace all jobs. And in fact, AI will also
create new jobs as well. We'll discuss this
in the final module, the future of AI. And then: AI is infallible, meaning that AI
cannot make mistakes. This is, of course, not true. AI can make mistakes. AI can hallucinate information. And remember, that AI is
developed by human beings. So if the human beings,
if they make mistakes in the coding or how the
AI model is trained, the model will make mistakes. It will be prone to
making mistakes, and we'll discuss this
later as well. And then: AI can think independently. Again, most people believe that we're never going to
get to the stage where AI will become self aware and will be able to make
decisions on its own. But there are still
some people who believe that we're going
to get there eventually. I don't know if that
is true or not, but most people seem to
believe that it's all a myth. AI can only operate based on instructions or rules that
have been given to it. And then finally,
my favorite one: AI will take over the world. And I put an asterisk there because when I say AI
will take over the world, the myth here is AI becoming self-aware and
deciding, you know what? I'm going to rule
over human beings. Human beings will
become slaves to AI. No, I don't believe
that will ever happen. However, AI, in a way, will take over the world
because we will have AI in just about everything that we do from transportation
to communication, to healthcare, to shopping, to creativity, to entertainment. You name it. We're going to have AI in one
form or the other, affecting the way we
live our daily lives. So, I wanted to give you a
quick engagement activity. Think about the
very last time that you had an interaction with AI. What AI power tools
do you use daily, and how do they enhance
your experience? Take 5 minutes to
think about this, and I think you might find the results to be very fascinating. So to round up, I just want to give
you a quick summary, AI is a simulation of human like intelligence
in machines, and AI is already a part
of our everyday life. And there are many myths and
misconceptions about AI. I've talked about them already. And AI, the ultimate
goal of AI should be to enhance human capabilities
and not replace them. Thank you for watching. I will
see you in the next class.
4. History of Artificial Intelligence: Let's now briefly take a look at the history of
artificial intelligence. So, AI's history dates back to thousands and
thousands of years ago. And like I said earlier, contrary to what a lot
of people believe, AI isn't something
that just emerged. It's actually been in existence
for quite a while now. Of course, just like with
any kind of technology, AI has experienced both
progress and setbacks. The setbacks, we call
them AI winters. Now, just to give
you a few examples of the early concepts
of artificial beings, not necessarily
artificial intelligence, but artificial beings. You have in Greek
mythology the existence of a giant bronze
automaton called Talos. You may have seen Talos
in certain movies before. And then also Chinese
and Arabic automatons: we had early devices that
were able to mimic life. And during the Renaissance,
Leonardo da Vinci was able to sketch
humanoid robots. So, the birth of AI, the 20th-century foundations of AI as we know it today: a lot of the groundwork
was laid between the 40s and 50s. And in particular,
we had Alan Turing, the very famous scientist
who back in 1950, he created what's known
as the Turing test. That's used to evaluate
machine intelligence. Now, in 1943, we had the first neural
networks designed by Warren McCulloch
and Walter Pitts. And then in 1956, finally, at the
Dartmouth Conference, the official birth of the term
artificial intelligence as an academic field was
coined by John McCarthy. So what are the AI
winters and resurgence? Well, in the 1970s, we had early AI systems failing
to meet expectations, leading to reduced funding. Of course, a lot of companies and countries felt
like, You know what? This AI thing isn't
going to work. Let's stop funding
our researchers. And then from 1987 to 1993, there
was another setback because our computing power back then was severely limited. Keep in mind that in
order to power AI models, you need plenty of
computational power. And back then, computers were
simply not powerful enough. But what about the revival? See, in the 1980s, we had expert systems that gained traction in both
business and medicine, and between the 1990s and 2000, because of advances
in machine learning and the rise of the Internet, all these were able
to contribute to further artificial
intelligence research. So I do have some key milestones in AI development right here. In 1997, we had the IBM superpower computer Deep Blue that defeated then World champion Gary
Kasparov in chess. Now, as a big chess fan, I actually followed the games. The very first time they played Gary Kasparov who, by the way, a lot of people considered to be the greatest chess
player of all time, he actually defeated Deep
Blue in the very first match. Went back for they came
back for a rematch, and then in the
rematch, Deep Blue eventually beat Gary Caspro. So that marked a
significant milestone because it became the
very first time in human history that a machine was able to beat a human
being on the chess board. And then we had some
other milestones in 2011, when the IBM machine Watson won Jeopardy! against
even its champions. And then in 2016, we
had Google's AlphaGo, which defeated the world
champion Go player Lee Sedol. And then in the 2020s, the emergence of AI models like
ChatGPT, Claude, and so on, which have been able to demonstrate
human like text generation. So AI in the 21st
century, as we know it: we now have AI-powered
cars like your Tesla, the use of natural
language processing by AI models
like your DeepSeek, Claude, ChatGPT, and so on. And then also in healthcare, we now have AI-powered diagnostics
and also drug discovery. So what is the future of
AI? What can we expect? Well, we can expect AI
powered creativity in art, music, and also advances in
human and AI collaboration. There will be more
collaboration between artificial intelligence
and human beings. And, of course, the use of ethical AI frameworks to ensure that AI is
responsibly used, AI is responsibly trained, and that AI is, in
fact, safe to use. So just a quick lesson summary, AI has existed in human
imagination for centuries now. The 1956 Dartmouth
Conference marked the official birth of AI
as an academic field. And then we've had,
of course, the AI winters and then
the AI resurgence. And, of course, modern AI
is powered by deep learning, which we will get
into, big data, and, of course, plenty of
computational power. And finally, AI's
future will include ethical considerations and
human AI collaboration. Thank you for watching. I'll
see you in the next class.
5. Key Concepts & Terminology: Welcome back. So now
let's take a look at some key concepts and terminology used
in the world of AI. We're going to start
off with the AI terms. There's three of them,
artificial intelligence, machine learning, and, of
course, deep learning. Now, when it comes to
artificial intelligence, we've talked about this already. It's basically machines trying to simulate human
like intelligence. You have your
examples with Alexa, Civ, you know, your
virtual assistant. But then we also have
machine learning that's basically a subset of
artificial intelligence, where the machines are able
to learn based on patterns. One of the best
examples here would be your spam filters in your
email. Think about it, okay? Email spam filters,
they're not rigid. They're able to
identify what is spam, based on trends, based on
patterns, based on history. So think of the machines as
learning in the process. Initially, the spam
filter might not do a good job in being able to
catch every type of spam. But over time, as it begins to learn what
exactly is spam, the different shapes and
forms a spam mail might come eventually with
time, it will improve. And then finally, we
have your deep learning, which is a more
advanced subset of machine learning that uses neural networks to
process complex data. One of the best
examples here would be your facial recognition systems. Now, later on in this course, we're going to delve deeply into both machine learning
and deep learning. But what about the
concepts, okay? There's a ton of them, and you should have an idea
of what they are. The first one in here is
going to be the algorithm. Basically a set of
rules or steps, a machine will follow to solve either a problem or
make a decision. We've seen algorithms in just about every kind of
application out there, whether it's a
dating application or a gaming app, basically, any kind of app or program
uses an algorithm to determine how that application will process data
or make a decision. We have your model. It's basically a
train system that's able to make predictions
based on data. Your ChatGPT, your Claude, your Midjourney: these are all examples
of AI models. You have the training data, which is the data that's
used to train these models. Again, we're going to talk
a bit later in the course, a bit more deeply how
AI models are trained. And then we have what
we call inference. This is basically the ability
of an AI model to make predictions based on new
data that it hasn't seen before. Say, for example,
you have an AI model that's used to make predictions
in the stock market. So if something new happens,
maybe, for example, a war has broken out or maybe some presidential
candidate has won the election, this might impact the
global stock market. So the AI here would
be able to make predictions based on these new
events that have occurred. That's what we refer
to as inference.
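To make that training-versus-inference distinction concrete, here is a tiny Python sketch using the scikit-learn library. Everything in it is made up purely for illustration: a toy model is trained on a handful of invented data points, and then inference is simply the model predicting on a brand-new data point it has never seen before.

```python
# A minimal sketch of training vs. inference (hypothetical numbers).
# Requires: pip install scikit-learn
from sklearn.tree import DecisionTreeClassifier

# Invented training data: [market_change_pct, news_sentiment] -> label
# 1 = stock likely up, 0 = stock likely down (toy labels, not real finance)
X_train = [[0.5, 0.8], [1.2, 0.9], [-0.7, 0.2], [-1.5, 0.1]]
y_train = [1, 1, 0, 0]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)          # the training phase

# Inference: the model predicts on data it has never seen before
new_event = [[-0.9, 0.3]]            # e.g., markets react to breaking news
print(model.predict(new_event))      # -> [0]: a prediction, i.e., inference
```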
And then we have your neural network. It's basically a
computational system inspired by the human brain
used in deep learning. Again, we'll talk about deep learning later
in the course. And then natural
language processing NLP. This is the ability of
a machine to basically understand and generate
human language, text. Also, there's going to be a
special section or module dedicated to learning
more about NLP. And then finally
computer vision. This is, of course, AI
that enables machines to interpret images
and video as well. So these are some of the
key concepts that you should be aware of
when it comes to AI. Now, types of AI, I already
talked about this earlier. You do have your narrow AI that's used for very
specific kinds of tasks. An example here
would be your Google Translate, your Cha JBT. Basically, kind of
like the narrow AI. But then you also
have your general AI, the strong AI, AI that can perform any intellectual
task a human can do. It's still kind of
theoretical at this point. It's not yet been developed. And then, of course, the
super intelligent AI, AI that surpasses
human intelligence. It's a future concept. Some even argue that we will
never get to that point, while the people who believe that we will get
super intelligent AI, they believe that it will take
decades for that to occur. And you should also understand the difference between
automation and AI. See, when it comes
to automation, it follows predefined rules. So, for example: the
customer buys product A; since the customer has
bought product A, send the customer a 25% coupon to buy product B. You know,
kind of like that, right? Automation relies on
rules and triggers. When it comes to AI, it doesn't just follow rules. AI is basically able to learn and make
decisions on its own. For example, your self-driving cars.
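To make the contrast concrete, here is a small, hypothetical Python sketch. The first half is pure automation: a fixed if-then trigger like the coupon example. The second half hints at the AI side: a model whose behavior is learned from data rather than hard-coded. The customer numbers and labels are invented for illustration.

```python
# Automation: a fixed, predefined rule (if-then trigger).
def automation_rule(purchase: str) -> str:
    if purchase == "product A":                 # trigger
        return "send 25% coupon for product B"  # action
    return "do nothing"

print(automation_rule("product A"))

# AI (machine learning): behavior is learned from data, not hard-coded.
# Toy example: predict coupon interest from [items_bought, visits_per_week].
from sklearn.linear_model import LogisticRegression

X = [[1, 1], [5, 3], [2, 1], [8, 4]]    # made-up customer histories
y = [0, 1, 0, 1]                         # 1 = responded to coupons before
model = LogisticRegression().fit(X, y)
print(model.predict([[6, 2]]))           # learned decision, not a fixed rule
```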
So just to give you a quick lesson summary: AI includes machine learning and deep learning as subfields; algorithms, your models,
your training data. These are all core
components of AI. Now, AI can be narrow, specific tasks,
general AI, like, human like or super AI
beyond human capabilities, and then automation
and AI are different, but AI can, in fact,
enhance automation. Thank you for
watching the video. I'll see you in the next class.
6. AI vs Machine Learning vs Deep Learning: Welcome back. So now let's take a closer look at the differences between
artificial intelligence, machine learning, and, of
course, deep learning. So AI, artificial intelligence is
the broadest concept, okay? While machine learning would be a subset of artificial
intelligence, while deep learning is a more advanced subset
of machine learning. So think of it this way, okay? At the very top, we have AI. Just below AI, we have
machine learning, and then just below
machine learning, we have deep learning. Now, I have provided an
analogy in here, okay? Think of AI as the
entire universe, right? Machine learning would be like
a galaxy in that universe, while deep learning would be a solar system inside of the
machine learning galaxy. So to kind of round this up, all deep learning is a
subset of machine learning, but not all artificial
intelligence is machine learning. Keep in mind, there's
a lot more to artificial intelligence
than just machine learning. So just to recap that again, all deep learning falls
under machine learning, but not all of artificial intelligence
is machine learning. Okay. So what is AI? We've
talked about this already. Basically, machines simulating
human like intelligence. And, of course, AI is
able to perform tasks, solve them, make decisions,
things like that. So AI, you should know,
doesn't always learn. AI can also follow our
predefined rules as well. Now, machine learning,
it's basically a subset of artificial intelligence
that would allow machines to learn from
data and patterns. And of course, this
would allow that machine to make predictions and
solve problems over time. There are three ways in which
machine learning is done. The first will be what we
call supervised learning, where the training data that is given to the machine
is actually labeled. Imagine you're trying
to train a spam filter. So under supervised learning, the machine will be given
different types of spam emails, and the data will be labeled, Okay, this is spam, this
is spam, this is Spam. So over time, when
the machine has seen all these examples of the different types
of spam emails, it will be able to learn
and make predictions in the future about whether or not a particular kind of
email is spam or legitimate.
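Here is a minimal sketch of that supervised learning idea in Python, using the scikit-learn library. The five example emails and their spam labels are invented; a real spam filter would learn from many thousands of labeled messages, but the mechanics are the same: learn from labeled examples, then predict on a new one.

```python
# A tiny supervised spam filter (hypothetical training data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "win a free prize now",        # spam
    "claim your free money",       # spam
    "meeting agenda for monday",   # not spam
    "lunch tomorrow?",             # not spam
    "free prize claim now",        # spam
]
labels = [1, 1, 0, 0, 1]           # 1 = spam, 0 = legitimate (the labels!)

vectorizer = CountVectorizer()     # turn words into counts
X = vectorizer.fit_transform(emails)

model = MultinomialNB().fit(X, labels)   # learn from the labeled examples

test = vectorizer.transform(["free money prize"])
print(model.predict(test))               # -> [1]: predicted spam
```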
And then we have what we call unsupervised learning, where the training data isn't
labeled, and the AI model learns how to make predictions
based on this unlabeled data. Basically, it tries
to find patterns. One of the best
examples here would be in customer segmentation. And then the last one is what we call the
reinforcement learning. Think of it as a rewards-and-penalties
kind of system, where when the model makes the
right kind of prediction or is able to solve
a problem or gives the right answer, it
will be rewarded. But when it makes a mistake,
it will be penalized. So that's what we refer to as
the reinforcement learning. So we've seen machine learning in many examples,
your spam filters, your YouTube Netflix
recommendations, and even fraud detection
in banking as well. So machine learning
relies heavily on algorithms to identify
patterns and make predictions. But what about deep learning? It is a subset of machine
learning that will use artificial neural networks to process large amounts
of complex data. So it uses multiple
layers of neurons; we call them deep
neural networks. It works best with large datasets and high computational power. In other words, you need very powerful computers to run deep learning,
and of course, it enables AI to
perform human like tasks such as your
speech recognition, facial recognition,
and so much more. Now, we have seen deep
learning in several examples. For example, your facial
recognition in your smartphones, fingerprint detection as well, self driving cars like
your Teslas, and, of course, in your AI models
like ChatGPT and so on. Now, deep learning is the most advanced AI
technique that allows machines to basically mimic
human brain functions. So I've given you the
table in here to highlight the key differences
between AI ML and DL. We have the features
in the definition, like we've said, AI basically is machines that
mimic human intelligence. Machine learning learns
from data and patterns, while deep learning is an advanced ML subset that
uses neural networks. When it comes to the
data dependency: now, AI by itself may not require data, because AI can
also just follow predefined rules. However, with machine learning, it requires structured data. With deep learning, it requires large datasets. Examples, of course:
under your AI, we have your chatbots
and virtual assistants. For your machine
learning, we have your spam filters and
recommendation systems. For your deep learning,
your self-driving cars, as well as your
facial recognition. And then the last
feature, the complexity. This is very, very
interesting. Now, with AI, it's a broad field; it includes rule-based
artificial intelligence. And then for machine learning, simpler algorithms, but
it requires training. And then for deep learning, it's highly complex and requires
very powerful hardware. So these are some of
the key differences among these three terms. So I wanted to give you a real
world example of how these three come together to power
something very powerful. So say, for example, your Tesla, a self driving car. Artificial intelligence,
basically, will enable the car to
make the decisions, okay? So because of AI, the car knows that it
probably shouldn't be speeding at a highly
populated area as an example. But then with machine
learning, because remember, machine learning
requires patterns and data to learn from. With machine learning, the car might be able to
make predictions on what the traffic
is going to be like at a certain period
of time during the day. It might also be able to make predictions based on what
the weather might be like, things like that because
of machine learning. And then deep learning
is what would allow the car to be able to
interpret road signs, traffic signs or even recognize pedestrians
trying to cross the road. So when you combine all three, think of it this way, right, AI is basically kind of like the overall system of
the self driving car. The machine learning
is what would allow the car to improve
and learn over time, while deep learning will make
the car highly efficient. So this is how these
three come together to power your Tesla or other
self-driving cars out there. So, a quick lesson summary: AI is the broad field; ML is, of course, the subset of AI that enables machines to
learn from data; while DL is the very advanced subset of ML that uses neural networks
for advanced learning. And of course, AI, ML, and DL are interrelated, but of course, have
distinct differences. Thank you for watching. I will
see you in the next class.
7. Section Preview The Foundations of Artificial Intelligence: Welcome to the module on the foundations of artificial intelligence, and it is time for
another movie clip. And something tells me
that you've probably seen the movie where this clip
is going to be taken from. Nevertheless, sit back relax. Enjoy the clip, and I'll
see you at the end of it. Right now, we're inside
a computer program. Is it really so hard to believe? Your clothes are
different. The plugs in your arms and head are gone. Your hair has changed. Your appearance now is what
we call residual self image. It is the mental projection
of your digital self. This... this isn't real. What is real? How
do you define real? If you're talking about what you can feel, what you can smell, what you can taste and see, then real is simply
electrical signals interpreted by your brain. This is the world that you know. The world as it was at the
end of the 20th century. It exists now only as part of a neural interactive simulation
that we call the matrix. You've been living in
the dream world, Neo. This is the world
as it exists today. Okay, welcome back.
And, of course, that clip was taken from the very popular movie the
Matrix, released in 1999. And if for some reason, you've never seen
this movie before, what are you doing
with your life? Stop watching this course
and go watch the movie. Now, I'm kidding, finish
this course first, and then you can go
and watch the movie. But seriously,
though, the Matrix, in my humble opinion, is one of the greatest movies ever made. It raises so many interesting
questions at the action, and it's a great movie. You simply have to watch it. Now, why did I choose to
use this particular clip? Well, because it raises a ton
of fascinating questions. I'm going to tackle two of them. First of all, let me describe
what happens in the scene. You have Morpheus, the man
with the dark shades. He is explaining to Neo, the other guy that, Hey, this world we're in right now, this is all virtual reality. It's not real. It is fake. It's generated by a
very powerful AI system known as the matrix. Now, Neo is obviously
very surprised. He's shocked. He's like,
No, how can this be? No, this is real.
This can't be fake. He touches the chair,
and Morpheus asks him, how do you define what is real? And I thought that's a very,
very fascinating question. How do you define what is real? And the reason why this is
fascinating is because today, we have AI generated
videos, images, deep fakes. And even though today we can to a very large extent tell what is real and what is AI
generated, think about it. In a few years to come, the kinds of content that AI will be able to
generate will be so realistic that we
might not be able to distinguish between
what is actually real and what is
generated by AI. We might need certain kinds of systems or scanners or
algorithms to help us detect whether or not that
image or that video we're watching is actually real
or fake. Think about it. So it's very, very fascinating. How are we going to be
able to define what is a real image and what is
an AI generated image? Another question here, though, is in the clip, we see the influence
that the matrix now has over the human
population, right? The matrix is very powerful. It's been able to create
this virtual reality, so it has a lot of influence
on human beings, right? Now, I know the matrix is an extreme example of AI's
influence over human beings. But think about it: today, believe it or not, AI already has some influence
on our daily lives. You don't believe me?
when you go on YouTube or you go on Netflix or Spotify or any one
of these platforms, you always have this
recommendation tabs or systems, right, that will recommend content to you, based on your history, based on your search results, and sometimes recommendations
might even just be random. But think about it.
Those recommendations already begin to influence
the way we think. It might influence us
to begin to support a particular political
party or candidate. It might begin to influence
the way we buy things. It might begin to
influence the way we think about certain kinds of controversial
topics and so on. So in a way, these
recommendation systems that are powered by AI are already beginning
to have some level of influence on how we
live our daily life. So that begs the question
how much more influence will AI begin to have on the way we live our lives as it becomes
more and more intelligent? Because, believe it or not,
whether we like it or not, AI is going to be introduced to just about every
part of our lives, whether it's communication,
transportation, shopping, entertainment, creativity: AI is going to be everywhere. So just imagine the
level of influence that AI is going to have over
us in the near future. Anyway, let's move on to the next lesson
where we're going to talk about the
actual foundations of AI. I'll see you there.
8. Foundations of AI: Let's now take a look at a very,
very important topic, and that's going to
be the foundations of artificial intelligence. Now, contrary to what a lot
of people might believe, AI isn't just limited
to the tech field. It actually cuts across
multiple disciplines. As an example, you'll have AI obviously in
computer science, with the use of algorithms, data structures,
programming code, and so on in
AI. Then you also have AI in the field of
mathematics and statistics. Don't forget that we can use AI for mathematical
calculations, for calculating
probabilities and so on. And then you also have AI in the field of cognitive
and neuroscience. Because think about it, right? In order to be able to develop artificial intelligence
that's meant to mimic human intelligence, we first need to understand human intelligence
in the first place. And then we also have AI in
the field of linguistics. This is, of course, essential for natural language processing. And perhaps very surprising, you'll have AI in the field of philosophy and ethics
because think about it, right? The major challenges involving
AI revolve around ethics, privacy, and whether or not it's actually moral to use AI. So contrary to what
a lot of people might think about AI
just being a tech field, you have AI in multiple
disciplines as well. Now, let's take a look at the core AI principles.
There's six of them. I want to take a look
at them one by one, starting off with the
logic and decision making. Many AI powered models, they rely on logic in order
to make the decisions. For example, you have your
Boolean logic that uses operators like your AND, OR, and NOT. For example: if A is equal to B, and B is equal to C, then A must be equal to C, something
like that, right? And also the rule-based
systems, where you have your if-then-else
statements. For example: if the weather is
rainy, then take an umbrella; else, if the weather isn't rainy, then don't take an umbrella. You know, things
like that, right? And then also expert systems. These are AI that mimic human expertise by
following predefined rules. You have them in your
medical diagnostics.
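To show just how rigid this is, here is a tiny Python sketch of a rule-based system. The umbrella rule comes straight from the example above, and the diagnosis rules are completely made up; the point is simply that every decision follows a predefined if-then path, with no learning involved.

```python
# A tiny rule-based (expert-system-style) sketch with made-up rules.
def weather_rule(weather: str) -> str:
    # Predefined if-then-else logic, exactly as in the lesson example
    if weather == "rainy":
        return "take umbrella"
    else:
        return "don't take umbrella"

def toy_diagnosis(fever: bool, cough: bool) -> str:
    # Hypothetical rules only -- not medical advice
    if fever and cough:
        return "suspect flu, refer to doctor"
    elif fever:
        return "monitor temperature"
    return "no action"

print(weather_rule("rainy"))        # -> take umbrella
print(toy_diagnosis(True, True))    # -> suspect flu, refer to doctor
```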
logic and decision making, and that's because AI can
struggle with uncertainty, and whenever complex decisions need to be made to
keep this in mind, next principle in here
would be the principle of probability and uncertainty. We've talked about the
role of big data and how important data is to
artificial intelligence. But there are many situations where an AI model
may need to make decisions based on
incomplete or noisy data. As an example, in
your Baysia networks, you have AI used for probabilistic reasoning
helping machines make educated
guesses, for example, in your spam filters, but
then you also have them in your markov decision
processes, your MDP, where AI is used for
decision making in certain environments like your robotics,
finance, and so on. And then also in your
Monte Carlo simulations, that's typically used for
risk analysis and gaming. The next concept in here is the optimization and learning. Of course, AI is constantly learning and
optimizing at the same time. So AI can make use of optimization algorithms
that would help the AI adjust its parameters
to minimize errors. An example in here would be the gradient descent used in ML your machine learning to fine tune models by
reducing prediction errors. And then the concept of
linear programming where the AI is taught resource
allocation and saddling tasks, and then evolutionary
algorithms. This is inspired by
natural selection. These algorithms evolve
solutions over time. An example in here would be
your genetic algorithms, your GA that optimize solutions
for complex problems. Now, one of AI's greatest
strengths involves the ability to recognize patterns and learn from
those patterns over time. And one of the key
concepts in here involves the neural networks that you
find in your deep learning. So in here, the AI mimics the human brain to
recognize patterns in data. As an example in here, your
image recognition systems, that can detect
objects in photos. Also the concept of
feature extraction where the AI is able to break down
the data into key features. For example, in your
voice assistant, the AI model can break
down the voice into several segments and can then understand commands
based on those segments. And then also in the
concept of clustering and classification where the AI can group data into meaningful
categories or a in your customer
segmentation and so on. And then reinforcement learning. We've talked about
this a bit earlier. This is the trial and error or the rewards and penalties kind
of training for machines. So AI can learn through
trial and error, much like how humans also do learn from experience, right? So we do have the reward based learning where the AI
can receive rewards or penalties based on how well it performs in a
test or in an exam, and then exploration
and exploitation. So AI must balance train new strategies against
using known ones. So the AI model needs
to kind of, like, find a fine balance
between both and not rely extensively
on either one of them. As an example, your self
driving cars like your Tesla, they learned the best
driving actions by being rewarded for
safe behavior. And then also your
chess Alpha zero, a very, very powerful
chess model. It learned how to play chess
expertly well by simply playing millions of
games against itself. Now, heuristics and
approximate solutions. This is very, very interesting. So over here, sometimes, being able to find the exact solution could be impractical, based on the problem based on the challenge
being provided to the AI. So the AI needs to
make use of heuristic, what we call
intelligent shortcuts. So we do have the heuristic
search algorithms. For example, the AI will simply search for the good
enough solutions faster, say a star algorithm for
path finding in maps. Sometimes when they're trying
to search for something, say for example, in your
Google search engine, you may not provide the exact kind of terms
that you're looking for, but the AI needs to be
able to make a calculator guess what it is
you're trying to find, and then the AI will simply
provide the best results. And then fuzzy logic,
what exactly is this? So, AI makes decisions
based on the grades of truth instead
of binary choices. As an example, AI in your
air conditioning systems, they are just temperature
based on comfort levels. So it will try to make its decision based on
what it thinks would be a reasonable temperature for the human being and not
exactly a binary choice of whether to turn on the air condition
system or power itself off, if that makes sense. And then I provided the table in here for the core AI principles, again, the description
and then example. So you can pause this video and go through them at your leisure time if
you're interested. Then the key scientific
concepts that are used in artificial
intelligence, linear algebra that's used in your deep learning networks. You have your probability
and statistics. Of course, AI needs to make use of this
to make predictions, and then your neural
networks used specifically in deep planning that tries to mimic
the human brain, and then the genetic algorithms inspired through evolution. And then finally game theory. This is, of course, decision making in competitive
environments. So what is AI's connection
to cognitive science? Well, AI models human condition in areas like its perception, reasoning and problem solving. And then the study of human
intelligence can also help to improve artificial
intelligence by itself. So as an example, in your
reinforcement learning, it is inspired by
behavioral psychology, which is, of course,
reward based learning. So these are some of
the ways how AI is actually associated with the
field of cognitive science. So what are the key
takeaways from this lesson? Well, first of all,
AI is, in fact, a multidisciplinary field
involving computer science, math, cognitive science,
philosophy, and so much more. AI operates on logic, ability, optimization,
and, of course, learning. And then data, as I've
said before, is the fuel. It is the bloodline of
artificial intelligence, and different types
of AI systems will use different approaches. And then understanding these fundamental scientific
concepts will help in grasping how AI works
at a much deeper level. Thank you for
watching the lesson. I will see you in
the next class.
9. The Role of Data in Artificial Intelligence: Let's now talk about the role of data in artificial intelligence. Now, I like to think of data as the lifeblood of AI models
because without data, AI models will not exist or
they'll be very inefficient. So why is data crucial for AI? AI models are only as good as the data they have
been trained on. So the more high
quality data available, the better AI will be able to learn patterns
and make predictions, improve accuracy and efficiency
over time, and of course, adapt to new situations and
refine its decision making. And pretty sure in
computer science, you must have heard of the
term garbage in, garbage out. Basically, if a program
has been designed to make mistakes or not
solve problems accurately, then that's exactly what
the program is going to do. And that's kind of similar with artificial intelligence
models as well. If they've been trained on very bad data, then guess what? That artificial
intelligence model is probably not going
to be intelligent. That's why the quality of data used to train models
is extremely important. Now, these are the types
of data that we use in AI. We have your structured data. For example, this would
be data that's been organized into tables,
rows and columns. So for example, you have your data from
your spreadsheets, Excel files,
databases, and so on. But we also have the
unstructured data, which is basically raw data that doesn't fit a fixed format. So examples here
would be your images, your videos, your
audio, and so on. And then the last
kind of data will be your semi structured data.
What exactly is this? Well, it's basically data that falls in between your
structured data. And your unstructured data. So examples here would
be your JSON files, XML, sensor logs, and so on. And when it comes to
data processing in AI, there are four main steps. The very first step would be the actual collection of the
data in the first place. So the AI will collect data from a wide variety of sources
like the Internet, your databases, user
feedback, and so on. And then once that data
has been collected, the data needs to be cleaned up. So in here, the AI
will try to remove, for example, duplicate records that might
already exist. So say, for example,
customer records. If the AI finds out that, Oh, this particular customer has two exact same records
in our database, just go ahead and
remove one of them. So that's basically the
next process data cleaning. And then after that, we
have the data labeling. So in here, your data can either be categorized or
could be tagged. So example would be
tagging emails as either spam or not spam. And then the final step would be your data transformation where the data could be converted
into usable formats like, say, your PDF files, Excel
sheets, and so on. So those are the four stages
of data processing in AI.
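As a tiny illustration of the cleaning step, here is a hypothetical sketch using the pandas library to drop a duplicate customer record, just like the example above. The records are invented.

```python
# Data cleaning sketch: removing duplicate customer records (made-up data).
# Requires: pip install pandas
import pandas as pd

records = pd.DataFrame({
    "customer_id": [101, 102, 102, 103],
    "name": ["Ada", "Ben", "Ben", "Cal"],   # 102/Ben appears twice
})

cleaned = records.drop_duplicates()         # keep one copy of each record
print(cleaned)
```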
Now, when it comes to big data itself, there are four features we need
to be aware of: the four Vs. The first one here
would be volume. Okay? So basically, the bigger
the volume, the better. The more data you're able
to train your AI model on, the better it is going to be. Next one in here would
be the velocity, the speed at which new
data is generated. And of course, in the
world we live in today, that is extremely fast. Next would be the variety, the different types of data that the AI
model is trained on, whether it's audio, video, images, text,
files, you name it. And then the last, possibly
the most important veracity, how accurate is the
actual data itself? Obviously, it's not
going to matter if the volume is so large
and how much variety you have. If the veracity is poor, that data is basically
going to be useless. That's why, in my
humble opinion, I think out of the four Vs, veracity is going to
be the most important. So we do have several
data challenges when it comes to AI. We have data bias where an AI model could be
trained on biased data. And because of that, the AI begins to make certain
kinds of decisions, and it may lead to unfair or
discriminatory decisions. We have data privacy as well because an AI model needs to be trained on large
amounts of data. There is the possibility that sensitive or private
information may be fed into the AI in
order to train it. And of course, this
will raise concerns about security and ethics. We have the data quality. Again, very, very important: how good is the quality of the data that's been used to
train the AI model? So if the data is poor
or is of low quality, this could lead to the AI
making poor decisions, and then of course, data
storage and management. So AI requires massive
storage capacity, and the impact here
is that it's going to require efficient data
handling as well. So it's not that easy. Now, going a little bit deeper, we do have the ethical
and privacy concerns when it comes to AI data. So, user consent and privacy: AI should not collect or use personal data without consent. This is what we
like to believe in, this is what we hope would
be the case with AI, but you never really
know; there's always that concern regarding
the use of AI. And then, of course,
bias and fairness. Again, if the data
if the AI model, excuse me, has been
trained on biased data, then the AI could make
discriminatory decisions, and then transparency,
very, very important. Users should know how AI systems use their data, and,
possibly even more important, the AI should be able to explain its decision on why it did something a
certain kind of way. Transparency: very,
very, very important. So for example, in
facial recognition, AI has been criticized for racial bias due to biased
training data sets. We'll talk about
this a bit later. And when it comes to
AI in hiring, as well. So AI based hiring
systems have been found to discriminate
against certain groups if trained on biased data, again, how efficient an AI model is will depend largely on the quality of data it
has been trained on. So we do have the
process by which AI is able to improve its
decision making through data. So the very first phase here in the loop would be the
actual training phase where the AI learns from either history or data
that's been fed to it. And then the AI will
now be able to make predictions based on
what it has learned. And then when it
makes a prediction, the users will be
able to provide feedback to the AI or
even the developers, they'll be able to
tell the AI that, Hey, you got the answer correctly or the prediction you
made was actually false. And because the AI has gotten
the feedback from the user, it then goes through a re
training phase again to learn based on the new feedback
that the user has given it. So that's how it
kind of goes through this constant loop of
trying to improve.
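Here is a minimal Python sketch of that loop. The data and the "user feedback" are invented, and a real system would be far more involved, but the cycle is the same: train, predict, collect feedback, retrain.

```python
# The improvement loop: train -> predict -> feedback -> retrain (a sketch).
# All data here is invented; each round adds corrected examples to the set.
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]    # initial training data
y = [0, 0, 1, 1]

for round_number in range(3):
    model = LogisticRegression().fit(X, y)   # (re)training phase
    prediction = model.predict([[1.5]])[0]   # model makes a prediction
    user_feedback = 1                        # user says the right answer is 1
    X.append([1.5])                          # feed the correction back in
    y.append(user_feedback)
    print(f"round {round_number}: predicted {prediction}, stored feedback")
```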
So what is the future of data in AI? Federated learning,
where AI models will train on user data without transferring the data
to a central server, thereby improving privacy,
and then synthetic data where the AI itself might be able to generate
artificial datasets if real datasets
aren't available. And then the last one in here, the explainable AI, XAI,
very, very important. Where the AI should
be able to explain the reason why it made
certain kinds of decisions. This would be a giant step forward regarding transparency
in the use of AI. So some key takeaways, data
is the foundation of AI. Without it, AI cannot learn
or function effectively. Of course, AI could
use structured, unstructured, or semi-structured data
to learn and improve. Big data, of course,
enhances AI's performance, but you do have
challenges like your, you know, privacy,
bias, and so on. And then ethical
considerations are important when handling
data in AI systems. And then finally,
AI continuously improves through
feedback loops. So that's it. Thank you
for watching the video. I will see you in
the next class.
10. Algorithms & Models in Artificial Intelligence: Welcome back. So now let's take a look at the different types of algorithms and models
used in the world of AI. But first of all, what
exactly is an algorithm? What is a model? Algorithms are basically
predefined rules that an AI model can
use to process data. But AI models themselves, these are simply
trained versions of algorithms that are able to make decisions and maybe even predictions based on new data. So over time, AI models
will improve because they're constantly
learning based on history, based on user feedback,
and so much more. As an example, your
spam filter, over time, the filter will get
better and better because it's able to learn from previous spam emails
and maybe even emails that it incorrectly
identified as spam. Over time, it will improve. But what are the types of
algorithms that we have? We do have the rule based
or your symbolic AI that uses your predefined
rules and logical conditions. So basically, it's quite rigid: a yes is a yes, a no is a no, a yes cannot be a no, you
know, stuff like that. So it will work well with
very structured problems, but these kinds of algorithms will struggle with uncertainty. One of the best cases where
these kinds of algorithms are used would be in your
medical diagnostics. Another type of algorithm
would be, of course, the deep learning
and neural networks. These use multi layered artificial neural
networks, and of course, they excel in complex tasks
like your speech recognition, your facial
recognition, and so on. And of course, ChatGPT
and image recognition AI, they use deep learning
to be able to identify images and also
generate text as well. Now, machine learning, we've
talked about this already. They learn patterns from data instead of
following strict rules. So these kinds of
algorithms are a bit more they're less rigid
in their approach. So we do have the
different learning types. You have your
supervised learning, unsupervised, and of course,
reinforcement learning. We've talked about them already. And then the common AI
algorithms and the application. So as an example,
your decision trees, this is an example of an algorithm. It uses supervised learning,
and you have them in your fraud detection,
your medical diagnosis. You have what we
call the support vector machines, your SVMs. They use supervised
learning as well. You can find them in
text classification, handwritten recognition,
and so much more. And then K means clustering. These use unsupervised learning. You'll find them mostly in market segmentation,
anomaly detection. You have your neural
networks, deep learning. Of course, this will use
supervised learning, and you have them in
your image recognition, speech-to-text, and so on; and then your genetic
algorithms that use optimization, and these are AI-driven design,
evolutionary computing.
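To ground one of those rows, here is a minimal hypothetical sketch of K-means clustering with scikit-learn, grouping made-up customers by age and monthly spend into two market segments, with no labels required.

```python
# K-means clustering sketch: unsupervised market segmentation (toy data).
from sklearn.cluster import KMeans

customers = [[22, 30], [25, 35], [24, 28],     # younger, lower spend
             [52, 180], [48, 160], [55, 200]]  # older, higher spend

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
segments = kmeans.fit_predict(customers)       # no labels needed

print(segments)   # e.g. [1 1 1 0 0 0]: two discovered customer segments
```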
So what exactly is the training process for an AI model? There are typically six steps. The first step is
always data collection. Again, I've said this
many times before, data is the lifeblood
of an AI model. The AI model needs the data to begin to
learn to begin to train. So once the data
has been gathered, we now have data preprocessing where the data will be cleaned
up and will be formatted. And then after that, the model begins to train based on the data that
it has been provided. And then the model
will now be evaluated. It will be tested with
new kinds of data. So it could be like an exam, a test just to see how
well the AI will perform. And, of course, if it does well, the AI is then deployed
into the real world. And of course, over time, the AI will constantly improve because it's able to
adapt and learn over time. So with all this in mind, how exactly is an
algorithm chosen? There are different types. So how do developers and companies
decide which algorithm to use when trying to
train their model? So there are several
factors involved in here. First, of course, will be
the data availability. Some models, like your
deep learning, need large datasets, while
decision trees, as an example, work better with small
amounts of data. And then the accuracy
that is required. Obviously, if you want an
AI model to be able to make quite good
accurate predictions, then you might be looking at neural networks that are able
to provide higher accuracy. But of course, this will require more computing power as well. And then interpretability. So decision trees are
very easy to understand, while deep learning networks can be quite complex to understand. And there are also situations
where a particular kind of AI powered system might
use one or more algorithms. So for example, your
bank fraud detection system could use a decision
tree for interpretability, and then it could use
deep learning to be able to recognize
complex patterns. So what are the
challenges involved in AI algorithms and models? We've talked about the bias
in training data where the data are being used to train the algorithm or the model
could be of low quality. This will, of course,
lead to the model making bad decisions, making lots of errors, and
then computational power, especially when a deep
learning algorithm is required. This will, of
course, mean a lot of powerful computing
resources will be needed. And then explainability,
as well, complex models like
neural networks: they work fantastically well,
they're extremely powerful, but they can be quite difficult to understand and
to explain. So what are the future trends in the artificial
intelligence models as years go by? Explainable AI. We've talked about
this already, where AI will be able to explain the decisions
that it has made. This will, of course,
improve transparency. And then hybrid AI models. This is a very, very interesting concept where we can combine different AI approaches
for better performance. And then we also have the
edge artificial intelligence. Basically AI models that will
run on small devices like smartphones for
real-time processing. And these are some of
the trends that we can look forward to in
the world of AI. So just to round up, a few key takeaways,
first of all, AI uses rule based
machine learning and dippling algorithms
for decision making. Supervised, unsupervised, and reinforcement learning are key machine learning types. AI models must go through,
of course, training, evaluation, and continuous
learning in order to improve. And then finally, choosing the right algorithm will
depend on accuracy, data availability, and
computational requirements. Thank you so much for
watching the video. I will see you in
the next class.
11. AI Capabilities & Limitations: So before we round
up this module, I wanted us to take a look at the capabilities and
limitations of AI, as we know it, starting
off with the capabilities, what exactly can AI do? Now, I have listed several
capabilities in here, as well as the description, as well as the examples. I'm going to go
through a few of them. Let's start off with the
automation of repetitive tasks. Now, this is one area where AI has excelled, whether it's in finance, in data processing, or in my field of cybersecurity; we now use AI to perform repetitive tasks. It can perform repetitive tasks with high accuracy and speed, and we see that with chatbots, automated customer support, and so on. And then when it comes
to predictive analysis, AI is able to forecast future
trends based on past data. So in that scenario, it is very, very important that the AI is given the right kind of data, with which it can make reasonably accurate predictions about the future. Now, we've seen this in, like,
stock market predictions, as well as the weather
forecasting, things like that. And then when it comes to
image and speech recognition, AI is able to identify
text, objects, and sounds. It's not perfect yet,
but we're getting there. I think AI today does a very good job of being
able to identify these. And we can see examples in self-driving cars, in security surveillance, and so on. And finally, when it comes
to robotics and automation, AI can power robots for precision-based work. As examples, we have them in healthcare with robotic surgery, as well as in industrial robots. So you can, of course, take a look at the slide
which I'll present to you and look at the other
capabilities in there. But I've also provided a table listing the limitations, what AI cannot do. So as an example, the lack of
true understanding. So AI, it can process data, but you see, it doesn't understand data
like you and I do. Hopefully, I'm speaking to
a human being, as well. I'm just joking. So
that's the thing, yeah: AI, it sees the data. It can handle the data. It can process the data, but it doesn't
actually understand what the data actually is. It's just processing the data and giving us results, right? And then when it comes
to true creativity and innovation, AI, it can generate content, but it's always going to lack the human creativity,
the human touch. So dependence on data as well, AI still relies heavily
on very large data sets. And of course,
you're talking about the high computational costs. So training very
complex AI models requires very, very
expensive hardware. And, of course, the general intelligence
limitations, yes, we can have AI that can
beat human beings at chess, but it still fails at
common-sense reasoning. Again, you can take a look at the slide for the
other limitations. So just to summarize
the strengths of AI: AI can handle big data. In fact, it excels, it thrives when presented with big data. And of course, speed and efficiency; AI has outperformed humans in computational tasks, and it's not even funny. And then, of course, 24/7 availability. ChatGPT will never tell you that it needs a lunch break or that it needs to sleep. That's never
going to happen. And then, of course, scalability: AI solutions can be deployed globally very, very quickly without any sort of human limitation. So the summary for
the weaknesses, and I know this may sound very, very harsh, but AI still
lacks common sense. If you put AI in an unexpected or sort of unusual situation, it will struggle because it
struggles with ambiguity. Now, AI cannot replace
human judgment, okay? So remember that AI no matter how intelligent it might become, it doesn't have a heart. It doesn't have emotions. AI cannot sympathize,
it cannot empathize. It cannot get angry or sad or happy. And as such, it will
lack ethical reasoning. And, of course, AI
depends on training data. The volume and the quality
of the training data will determine just how efficient
the AI model actually is. So the comparison here, human versus AI; I have a table up here. Creativity: humans, of course, are very, very creative, while AI is limited. Decision making: AI will make its decisions based purely on its ability to recognize patterns, while we as humans can make decisions based on experience, intuition, and also the past. And then learning ability: we can learn new skills very, very flexibly; that's what our brain is for. AI, on the other hand, needs retraining for new skills. And then bias: of course, we can be influenced by emotions, while AI can only inherit biases from the data it's been given. If the data is free of any bias, then there is no way the AI will become biased. And, of course, processing speed: we're a lot slower than AI, and AI is much, much faster than us when it comes to structured tasks. So key takeaways: AI
is extremely powerful, but of course, it does
have its limitations. It excels at automation
and prediction, but it's always going to
lack human level reason. And then AI depends very
largely on data and algorithms. And so poor data will lead
to inaccurate results. Bias data will lead to the
AI making bias decisions, and then AI will never be able
to replace human judgment. And the future of AI, we hope will include
more ethical, explainable, and also
adaptable models. So that's pretty
much the summary. Thank you for taking
this particular lesson. I will see you in
the next class.
12. Section Preview Machine Learning Basics: Welcome to the next module,
machine learning basics. And as usual, I'm gonna play
you a clip from a movie, so sit back, relax,
enjoy the clip, and I'll see you
at the end of it. There's no way you can win that game. I know that. It hasn't learned. Is there any way to make it play itself? Learn, goddammit. A strange game. The only winning move is not to play. Okay, welcome back. Now,
that clip was taken from the movie War Games
released in the year 1983. And the reason why
I chose to use this particular clip is
because it perfectly demonstrates one of
the key concepts as to how AI models learn, and that's through the
process of trial and error. Now you'll observe
that in the clip, AI system known as Joshua, it plays a game of tick
tack toe against itself. The game ends in a stalemt
and then it continues to play the games against itself over and over and over again. But observe that each time
it changes its strategy, but the games keep
ending in a stale mit. Eventually, it now decides that, you know what, before I
launch the nuclear missiles. Oh, by the way, I should
have given you some context. In the scene, it's
supposed to be a very grave scene because
the system, Joshua, the AI model, it's taking
control or it's about to take control of the United States nuclear missiles weapon system. And, of course,
people are panicking. They're afraid that
it's going to launch nuclear missiles against the Soviet Union, the Soviet Union will respond, and, of course, all of us, we're all going to die. So Joshua decides
that, you know what? Before I launch these
nuclear missiles, maybe I should try
different strategies to see who would actually
win in a nuclear war. So the first
strategy it launches missiles from the United
States to the Soviet Union, but then it realizes that, Okay, this strategy isn't
going to work because there isn't
going to be a winner. It then tries another
strategy where it is the Soviet Union that
launches the missiles first. But then it realizes that, okay, this strategy also doesn't work. There is no winner. And then it tries hundreds of
different other strategies. And each time it realizes that there isn't
going to be a winner. So eventually, it's
able to conclude that the best way to win this particular game
is not to play at all. So the AI, Joshua, was able to teach itself. It learned through the
process of trial and error. Each time it tried a strategy and the
result was negative, it went back, readjusted
its technique, readjusted its strategy,
and then tried again. And then, when the result was the same negative stalemate, it went back again and refined its strategy one more time. And that's basically how AI systems learn. That's how they teach
themselves. They try something. Oh, the answer wasn't
correct. Let me go back. Let me try a different strategy. Oh, that doesn't
work. Let me go back. Let me try a different
strategy and so on. So the concept of
trial and error, I believe, was well emphasized
in this particular clip. So I hope you enjoyed this introduction to the
world of machine learning basics. Let's now move on to
the very next lesson.
13. How Machines Learn in Practice: Alright, so now let's take a look at the next lesson, where we're going to talk about how machines learn in practice, how they are trained. So the whole machine
learning process involves three key components. You have the data,
the models, and, of course, the actual
training itself. The other thing about
machine learning is that unlike traditional
programming, where the code is
explicitly written and the programs have to perform
exactly how they are coded, with machines, they
don't memorize things. Instead, they're able to find patterns within the data
that they're working with. So in other words,
machine learning involves a much more flexible process by which the models are able to learn and improve over time. Now, the whole learning phase is all about the
machine or the model trying to adjust or readjust its processes so that it
can get better with time. Now, just like with
normal students, I'm pretty sure you have had
exams in the past before, before you took that exam back in college or in high
school or wherever, I'm pretty sure you must
have studied first, right? Maybe you took a course, an online course, or maybe
you read a textbook, right? Now, as you were learning
and preparing for the exam, I'm pretty sure you took certain
kinds of quizzes, tests, and then ultimately you
went for the final exam, and then of course, you passed. It's kind of similar with
machine learning as well. First, they're provided with
data that they will study. And then there will be this particular stage
where they'll be tested just to see if they're actually learning correctly
in the right way. And then ultimately, they will be tested with
new types of data that they've never seen
before just to test how they'll perform
in the real world. So it's kind of similar with
machine learning as well. Now, the key idea here is that machine learning does not
involve memorizing data. Imagine a spam filter, right. Imagine you're trying to build artificial intelligence
for a spam filter. There are only so many types of spam emails that the machine can memorize. Okay, this is a spam email. This is a spam email. This is a spam email, and so on. But then what if that particular
model is presented with a new type of spam email that's maybe presented in a
slightly different way? The model is going to
fail because it hasn't memorized this new particular
version of the spam mail. That's why it's always
better that machines and models learn through the process
of identifying patterns, okay, they're more
flexible that way. So there is this thing called
feature selection, right? Feature selection
involves the machine or the model simply looking for the most important
parts of the data. So what exactly are features? They are relevant pieces of information that help the model or machine
make decisions, right? So choosing the right kinds of features will help the model
improve in its accuracy. And of course, removing any unnecessary kinds
of features will also minimize errors
and will also help the model improve
on its accuracy. Now, I've given an
analogy in here involving the sale of a house, right let's say, for example, you are trying to
build a model or an artificial intelligence
that can make predictions on how much
a house would cost. The right kind of
features you'll be talking about here would
be what location, okay? Where is the house located? And then let's talk about
how big the house is, okay, how many bedrooms it has. These are the most
important kinds of features that the model
should be trained on. The unnecessary features here
would be things like what's the color of the TV in
the bedroom, right? Like, things like that, what's the color of the floor
in the bathroom? Things like these are very, very unnecessary features that will not help the model make a good prediction on how much the house would actually cost.
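(To make feature selection concrete, here is a minimal Python sketch, assuming scikit-learn is installed; the house sizes, bedroom counts, and prices are invented for illustration. The irrelevant TV-color column is simply dropped before training.)

import numpy as np
from sklearn.linear_model import LinearRegression

# Each row: [size_sqm, bedrooms, tv_color_code]; the last column is irrelevant noise
raw = np.array([[50, 1, 3], [80, 2, 1], [120, 3, 2], [200, 4, 0], [150, 3, 1]])
prices = np.array([100_000, 160_000, 240_000, 400_000, 300_000])

# Feature selection: keep only the informative columns (size and bedrooms)
X = raw[:, :2]

model = LinearRegression().fit(X, prices)
print(model.predict([[100, 2]]))  # predicted price for a 100 sqm, 2-bedroom house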
So there is also this process called optimization. Optimization is where
the models will adjust over time to improve
the levels of accuracy. Now, there is a key concept here called the cost function. The cost function is basically a value that measures how far off the model's prediction was from the actual outcome. So going back to the sale
of the house, right? Imagine the model
predicted that, okay, this house is going to
be sold for $500,000. But imagine if the house was eventually sold for $750,000. The cost function here would
be, of course, $250,000. So over time, through the process of optimization, the model is going to get better and better and try to reduce the cost function. So that maybe the next time, instead of $500,000, it might predict $700,000; the house gets sold again for $750,000, but now the cost function is only $50,000. So over time, the model has actually improved through the process of optimization.
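(Here is the cost function idea as a tiny Python sketch, using mean squared error, one common choice; the prices are the invented ones from the example.)

def mean_squared_error(predictions, actuals):
    # Average of the squared differences; smaller means a better model
    return sum((p - a) ** 2 for p, a in zip(predictions, actuals)) / len(predictions)

# The first model guessed $500,000; the improved one guessed $700,000; truth was $750,000
print(mean_squared_error([500_000], [750_000]))  # large cost
print(mean_squared_error([700_000], [750_000]))  # much smaller cost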
So how does the model actually improve? Well, it's going to update
its internal parameters to reduce the prediction errors, and of course, it's
going to learn from its mistakes as well. These are the ways how the
models will improve over time. There is also this technique
called the gradient descent. This is the actual
learning process itself, and it's a technique
that will help the model adjust a little by
little step by step. The whole idea here
is that whenever the model is trying to improve and reduce its cost function, it's not going to
take giant steps. It's not like it's going to undergo massive improvement very, very quickly. It takes time: steady,
steady, steady, right? So, think of it as finding the lowest point on a mountain. The model will keep adjusting itself until it finds
the best settings.
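(Here is a minimal Python sketch of gradient descent on an invented toy cost function, (w - 3) squared, whose lowest point is at w = 3; notice how the learning rate sets the size of each step, which is exactly the balance the radio-knob analogy later in this lesson describes.)

def cost_gradient(w):
    # Derivative of the toy cost function (w - 3) ** 2
    return 2 * (w - 3)

w = 0.0              # starting guess
learning_rate = 0.1  # too high and we overshoot; too low and learning takes forever

for step in range(50):
    w -= learning_rate * cost_gradient(w)  # one small step downhill at a time

print(round(w, 4))  # ends up very close to 3.0, the best setting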
So I've given you here an example involving the spam filter once again. So the spam filter
predicts spam for email. But let's say, for example, it's actually not spam,
it's a real email. So over time, the model will
realize that, Oh, okay, so these kinds of emails that I have labeled
as spam before, I now know they're
no longer spam. They're actually right emails,
the legitimate emails. But these are the ones, okay, I now know this
is actually spam, and that's how step by step, gradually the model
will improve in its ability to determine whether
an email is spam or not. So the learning
rates, it's very, very important that the
model actually finds the right balance during the learning phase because if
it's learning too quickly, it will never find the right answers and the right solutions. But then if it's
learning too slowly, it's going to take forever
to get the right answer. Think of it as you're trying to turn the frequency knob
on a radio, right? Maybe you are trying to find the frequency of your
favorite radio station. If you turn the knob
way too quickly, you're never going to find the actual frequency because
you're being too fast. But now imagine if you are
turning the knob very slowly, very slowly, it's going to take forever for you to find
the right frequency. So there has to be
the right balance with the learning
rate for the model. So why do some models
learn better than others? Well, it all comes down to
the quality of the data. Remember that data is so important when it comes to
training models and machines. So the quality and
the quantity of data will
play a key role in here. And then feature
selection, of course, the models need to
be trained on how to find the right features
when trying to work with data or make predictions or find
answers to problems. And then the model
complexity as well. Imagine if you were
building a very, very simple model for
machine learning or AI. Well, there's only so many tasks it's going to be
able to perform. It might not be able to
perform complex tasks. But now imagine if you made a model that's way too complex; then the kinds of tasks you give the model might just be a bit of a waste, because the model was built for
something far more complex. And then the optimization
efficiency as well: a well-tuned model will train faster and will
generalize much better. So these are the key factors as to why certain
kinds of models perform much better and are trained much better than other models. So some key takeaways
in here, first of all, your machine learning
models learn by adjusting parameters to minimize errors. Training, validation, and testing ensure the models will generalize well. And then feature selection, like I said, is very, very important. It's crucial: garbage in, garbage out. If the models are trained with the wrong kinds of features, they'll perform very, very poorly. And then optimization techniques like your gradient descent will improve model accuracy over time, and of course, a balanced learning rate will ensure smooth learning. So that's it for
machine learning. Thank you for watching. I will
see you in the next class.
14. Supervised Learning in Action: Let's now talk about supervised learning. So what exactly is this? Well, this is a type of machine
learning where the model is trained by making
use of labeled data. So what exactly is labeled data? This simply means that
each training example or data with which the
model is trained on, includes both the input, which would be the features, and then the correct output, which would be the actual label. So the whole idea here is for the model to be able
to make predictions and find relationships between the inputs and the outputs. So how exactly does this occur? First stage is that the data will be collected
and will be labeled. Going back to my favorite
example, the spam filter. Thousands of emails could be collected in the first stage
and then they'll be labeled. So we will have emails
that are spam, and then emails that are not spam, emails that are actually legitimate, right? And then the model will be trained. It'll be trained to identify which emails are spam and which aren't. And then the third stage, the model will now
be tested with new types of emails it has never seen before to be evaluated. And of course, if
it does very well, it will then be deployed
into the real world. That's how the whole
process actually works. Now, when it comes to
supervised learning, there are two types we have classification and
we have regression. What exactly are these two? With classification, this
is where the model will assign an input to a
specific kind of category. It works best when
the challenges or the tasks or the questions
have discrete values. So say, for example, in
your email spam filter, the email will either be spam or it's going
to be legitimate, right? There's
nothing in between. Let's talk about your
lab results, right? The results can either
be positive or negative. Sentiment analysis, right, maybe the sentiment was positive
or negative or neutral. So classification is best when the values of the outputs
are actually discrete. They have very specific
kinds of values. Now, with regression, this is where the model has to predict a continuous value rather than pick from a fixed set of answers. So say, for example, the model has to try and predict the sale price of an actual house. It's going to look at features like the size of the
house, the location. But ultimately, it's still going to make a guess, a prediction. It doesn't know for sure
if the house will be sold for the amount that
it predicts, right? You talk about forecasting
the stock market changes, the weather changes as well, the daily temperatures,
things like that. So issues or problems where
the output is continuous, regression will be used. So to kind of round
everything up right now, I want to go back to my favorite example, the spam filter. So first stage, we have the training data where thousands of emails
are collected, they're labeled spam, not spam. Now comes feature extraction. Remember, that feature
extraction is a very, very important technique to improve the accuracy
of the model. So over here, the model
needs to be taught that, okay, certain kinds
of features in your spam detection you should look at. Things like, for example, the keywords being used, the title of the email, and the reputation of the user sending the email. These are the very
important features that the model needs to be trained on to identify whether the email is spam or legitimate. And then, of course, like I said, the model
will be trained. And then the model
will be tested. New email it's never seen
before will be presented to it. And then, of course,
the test now will be whether or not the model can identify if that
email is spam or not. And, of course,
continuous learning, the model will
improve over time as it looks at more
and more emails.
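(Here is this whole supervised pipeline as a minimal Python sketch, assuming scikit-learn is installed; the handful of example emails and their labels are invented purely for illustration.)

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Stages 1-2: labeled data and feature extraction (words become counts)
emails = ["win a free prize now", "meeting at 3pm tomorrow",
          "claim your free money", "lunch with the team"]
labels = ["spam", "not spam", "spam", "not spam"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# Stage 3: training
model = MultinomialNB().fit(X, labels)

# Stage 4: testing on an email the model has never seen before
new_email = vectorizer.transform(["free prize money"])
print(model.predict(new_email))  # most likely ['spam']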
supervised learning? Well, first of all, data
labeling is quite expensive. It requires large amounts
of high-quality data. And there's this issue of overfitting, where the model, instead of trying to
generalize and find patterns, it ends up actually
memorizing the training data. And then, of course,
the bias in data, you'll find this every time. If the data being provided
to the model is biased, the model won't
perform all that well. And, of course,
supervised learning doesn't work well with unstructured data. So maybe the model is provided with images that don't have labels, or with raw data. Supervised learning
isn't going to work with unstructured data. So, key takeaways to round up the lesson: supervised learning learns from labeled data to make predictions. It is used in classification, where data is categorized, and also in regression, where it predicts continuous values. Real-world applications include your spam protection, medical diagnosis, and stock prediction, and, of course, challenges include data bias, data requirements, and overfitting. Thank you for watching the video. I will see you in the next class, where we're going to take a look at unsupervised learning. I'll see you then.
15. Unsupervised Learning & Pattern Recognition: Last class, we talked
about supervised learning. So now it is time to talk
about unsupervised learning. Now, if supervised learning
deals with labeled data, then obviously
unsupervised learning will deal with unlabeled data. So the whole idea here is for the model or the machine to try and find patterns in data that isn't labeled. Now, I've given an example in here where you have
a company that has thousands and thousands
of customer records, okay? The thing about the model here is that it's not going to
know who customer A is or customer B or
customer C or customer D. One thing it
could do, though, is that it could look at
the purchasing history of the customers and then
try to group them into different categories based on again, their purchasing history. How much they spend.
So, for example, the model could look
at the customer records and decide
that, you know what? Customers who have spent
more than $1,000 at once, let me group them into the
high spenders category. And then there might
be customers who only make purchases whenever
there are discounts, right? So it may want to
group those kinds of customers under the
shrewd-shopper category or, you know, something like that. Or it could even try to group customers based on
what they actually buy. So maybe you have customers
who buy accessories, like, you know, wristwatches, bracelets, things like that. So it may try to
classify the customers under the accessories category, you know, stuff like that. That's exactly how
unsupervised learning works. So there are four main
stages involved in here. First of all, the
model will receive the raw unlabeled,
unstructured data, then the model on its own
needs to find the structures, patterns, or relationships
within that data. And then once it has done so, it's going to organize
the data into meaningful clusters, categories, or components. And then finally, the output: whenever the model makes a prediction or produces output, then based on how accurate that output was, the model will learn and improve over time as well. So the key idea here is that the algorithm or model,
correct answer to a task or a challenge or a quiz. It learns patterns on its own. You can try to think of it as
the model teaching itself, training itself to find the right kinds of answers
to tasks and challenges. So there are certain
kinds of key techniques involved in
unsupervised learning. The first one in here
would be clustering. This is probably the
most popular technique. Here, data will be grouped
into similar categories. So by definition, the algorithm will divide the dataset into groups or clusters, where the items in the same cluster are more similar to one another than to those in other clusters. So the best use case in here, I've talked about customer segmentation, where
the customers, they could be grouped based
on their purchasing history, how much they've spent,
or what they like to buy or when they like to spend
things like that, right? We do have some examples of
the clustering algorithms. You have your K-means clustering; hierarchical clustering, where the clusters are built as a tree; and then DBSCAN, which will try to identify dense regions within the data.
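(Here is a minimal Python sketch of K-means clustering, assuming scikit-learn is installed; the spending figures are invented, in the spirit of the customer segmentation example above.)

import numpy as np
from sklearn.cluster import KMeans

# Each row: [total_spend_dollars, purchases_per_month] (invented figures)
customers = np.array([
    [1200, 2], [1500, 3],   # look like high spenders
    [40, 1], [60, 2],       # look like low spenders
    [300, 10], [350, 12],   # frequent mid-level buyers
])

# K-means finds three groups on its own, without any labels
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)  # the cluster assigned to each customer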
Now, the next technique in here is something called dimensionality
reduction. Don't blame me. I'm not the one who came up with this very, very
interesting term. Dimensionality reduction. It sounds like something
you would find in a space engineering
textbook, right? It's kind of insane. I don't know who came up with this term. But basically, it simply means we're simplifying complex data. That's what dimensionality
reduction means. So, by definition, this will
reduce large amounts of data while preserving
the key patterns. So your use case in here
would be things like your data compression where data can be compressed
into a smaller size, but the key features of that
data will still be retained. We do have some algorithms for the dimensionality
reduction, your principal
component analysis, your PCA, and then t-SNE, which is t-distributed stochastic neighbor embedding. There is no need for us to go deeper into these
algorithms, okay? But the analogy in here
is basically you try to imagine compressing
a book that's, let's say, 500 pages or 1,000 pages all into one single page. Think of it as trying to summarize the key
ideas of that book. So even though it's been reduced from 1,000 pages to one page, that one page will include all the key ideas and key information from that book.
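(Here is a minimal Python sketch of dimensionality reduction with PCA, assuming scikit-learn is installed; the data is random and purely for illustration. Five features are compressed down to two while keeping as much of the variation as possible.)

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))  # 100 samples, 5 features

pca = PCA(n_components=2)         # keep only the 2 strongest directions
reduced = pca.fit_transform(data)

print(reduced.shape)                  # (100, 2)
print(pca.explained_variance_ratio_)  # share of variation each kept component explains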
And then the final technique in here will be what we call anomaly detection, where the model learns to
find unusual patterns. So by definition, you
will identify data points that deviate significantly
from the norm. Use case in here will be in your fraud detection or maybe even in your
firewalls, right? A firewall can detect traffic that's malicious
because it's unusual. Maybe the traffic is coming from an unusual IP address or
from an unusual location. That's one of the techniques. That's one of the ways how the firewall is able to determine what is real traffic and
what is malicious traffic. So examples of the
algorithms used in here: we have the isolation forest, where the algorithm will focus on identifying the outliers in the data, and then we have the one-
class SVM that's used to detect very rare and
unusual instances. So the key analogy in here, think of your security
scanners at an airport. They detect suspicious items
simply based on patterns.
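(Here is a minimal Python sketch of anomaly detection with an isolation forest, assuming scikit-learn is installed; the transaction amounts are invented, with one obvious outlier.)

import numpy as np
from sklearn.ensemble import IsolationForest

# Mostly ordinary transaction amounts, plus one suspicious outlier (invented data)
amounts = np.array([[25], [30], [22], [27], [31], [26], [5000]])

detector = IsolationForest(random_state=0).fit(amounts)
print(detector.predict(amounts))  # 1 = normal, -1 = anomaly; the 5000 should be -1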
Okay, so what are the real-world applications of unsupervised learning? You have them in your
customer segmentation; anomaly detection, which we could use for firewalls, fraud detection, and so on; and then medical diagnosis as well, where the AI model or machine finds hidden patterns in your genetic data or diseases. And then recommender systems, like in your Netflix, your Spotify, YouTube; they all work based on the search history of the user. And then, of course, in
your search engines, of course, like your Google, this will categorize pages
based on topic similarities. So what are the challenges and limitations of
unsupervised learning, so no clear labels. So there is no way to check if the model's output
is actually correct. The model has to figure
that out on its own. And then difficult to interpret. So some clusters or patterns might not
be meaningful, okay? In attempting to group certain kinds of data into a cluster or a category, the model may not do a very good job, because the criteria it used to group that data may not actually be clear enough. And then choosing the
right number of clusters. This is another big
challenge in here. So algorithms like your K-means require certain parameters. Basically, you have to indicate how many groups or clusters the model has to create. Otherwise, it may end up either creating too few clusters or creating too many. So that could be a challenge. And then it's computationally expensive: training on large datasets will require a lot
of computing resources. So some key takeaways for
unsupervised learning, unsupervised learning
finds hidden structures in data without labels. Clustering is used to
group similar data points, for example, in your
customer segmentation, and then the dimensionality
reduction will simplify data for better
visualization and efficiency, and then anomaly detection
will try to identify fraud, security threats, and
unusual behavior. And finally, the
challenges involved with unsupervised learning
will include interpretability, computational cost,
and of course, the lack of evaluation metrics. Thank you for
watching the video. I will see you in
the next class.
16. Reinforcement Learning and Decision Making: Welcome back.
So now let's take a look at the third type
of machine learning. And here, we're talking about
reinforcement learning. Now, unlike in supervised
learning where data is labeled or in
unsupervised learning, where the model has to find
patterns within the data, in reinforcement learning, the learning process is
through trial and error. So basically, you
have your agent or your AI model that will take
an action in an environment, and then depending on the
type of action that it takes, it can either receive
a reward or a penalty. So over time, the model
learns the right kinds of actions to take to
receive more rewards. As an example, a
self driving car will learn how to
navigate roads by receiving rewards
for safe driving, or penalties for collisions
or reckless driving. So how does the process work? Well, first of all, we have the agent, which is the AI model that will operate and take actions. We have the environment
within which the model is operating
in, and then the state, the current situation of the
agent in that environment, and then the action
the agent will take. And then, of course,
the reward system, positive feedback
for good actions, negative feedback
for bad actions. I do have the diagram in here giving you
more information. So we have the agent that will interact
with the environment. And then we'll need
to take a look at the state of the agent
within that environment, and then the action the model will perform an action
based on its policy, and then they reward positive feedback or negative feedback. And then the agent or the model will update its policy
so that in the future, it can get more
positive rewards. And then, of course, this
entire process is repeated. So imagine you're trying to teach a robot how
to walk, right? If the robot makes a mistake, it could receive a penalty. Maybe it fell, for example,
it receives a penalty. But then if it's moving its
arms and legs correctly, it will receive rewards, and then over time, the robot eventually will learn
how to walk properly. So we do have four
key algorithms for reinforcement learning. Let's take a look
at them one by one. The first one here
is Q-learning, or value-based learning. So over here, in this particular kind of algorithm, the agent or the model will learn the best action to take by simply building something known as a Q-table. It's basically a table of rewards for different
types of actions. So, say for example, an agent could take four types of actions: action A, B, C, or D. If it takes action A, it receives a big penalty. Action B, it receives
a small penalty. Action C, it receives
a small reward. Action D, it receives
a big reward. So over time, it's going
to build this table. It knows that, Oh, the
more action Ds I take, the bigger the rewards will get. Over time, it learns how to
take more and more action Ds. So it's used where the
environment is fully observable, like when the AI is teaching itself how to play
chess as an example. So I do have the
diagram in here, giving you more detail. I'm going to provide
you the slide. So, of course, you can look
at this in your leisure time.
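(Here is a minimal Python sketch of the Q-table idea, using the invented A-to-D actions and rewards from the example above: the agent tries actions at random, records the rewards, and gradually learns that action D pays best.)

import random

# Invented rewards matching the example: A = big penalty ... D = big reward
rewards = {"A": -10, "B": -1, "C": 1, "D": 10}

q_table = {action: 0.0 for action in rewards}
learning_rate = 0.1

for _ in range(1000):                      # trial and error
    action = random.choice(list(rewards))  # explore by picking actions at random
    # Nudge the stored value toward the reward that was actually received
    q_table[action] += learning_rate * (rewards[action] - q_table[action])

print(max(q_table, key=q_table.get), q_table)  # 'D' ends up with the highest value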
The next algorithm in here is the deep Q-networks algorithm. This doesn't use Q-tables; it uses neural networks. So it can handle very
complex environments like in your Atari games,
autonomous robots. The example I'm giving
you here is where an AI learns how to play video games by maximizing its score over
thousands of trials. The third algorithm in here
is the policy based methods. So instead of learning
values for actions, the model or the agent will
actually learn a policy. A policy in this case,
right now would be a strategy for choosing
different types of actions. So it works very well
in environments where a very small change could
affect the entire outcome, like in robotics, for example. So the example I've given
you here is a robot arm, learning how to grasp
objects or pick up objects by simply
refining its movements. So I have the diagram
in here as well. Again, you can study
this in your free time. And then the final
algorithm in here is the actor critic method. This will combine both
the value based and policy based methods
for more efficiency. It's used for very complex
real world problems like in your self driving,
finance, and so on. An example given here
is an AI that learns how and when to buy and sell stocks in order to maximize
its profits. We do have several
real-world applications: like I said earlier, self-driving cars; games like AlphaGo, OpenAI Five, and Atari; robotics as well; finance and trading; and of course, chatbots
and customer support. But we do have challenges and limitations of
reinforcement learning. The first one in here is that reinforcement
learning requires millions and millions of simulations in order for the model to get
better over time. It's not working
necessarily with data here. It's working more
with trial and error, so it has to undergo
many thousands, maybe millions, of trials and errors to get better at what it does. And, of course, it's a very
slow learning process. That's one of the
biggest challenges of reinforcement
learning, excuse me. And then we also have
the exploration versus the exploitation dilemma.
What exactly is this? Well, over here, agent or the
model must try new actions, which would be the exploration versus using what it
has already learned, which would be exploitation. It has to find the right
balance between both, which can be very, very tricky. And then finally, the
unpredictability. So the AI might
find loopholes in the reward system leading
to unintended results. This could be in
situations where the reward system hasn't
been properly fleshed out, and certain actions that should have resulted in huge penalties instead earn the model huge rewards. So this could confuse the model, and eventually, over time, it's going to end up making
the wrong decisions. So some key takeaways before we round up
reinforcement learning, here, the model learns
by interacting within an environment and receiving
rewards or penalties. Trial and error is, in fact, the foundation of
reinforcement learning. Key reinforcement
learning techniques will include your Q learning, your deep Q networks, policy based methods, and, of course, your
actor critic models. And, of course,
reinforcement learning is used in robotics. It's used in finance. It's used in self driving cars, games, and so much more. And, of course, the challenges
include slow learning, high computation costs, and
unpredictable behavior. Thank you for watching. I will
see you in the next class.
17. Decision Trees, Regression, and Clustering: Now let's talk about decision trees, regression, and clustering. Now, we've already
talked briefly about regression and
clustering earlier, but here we're going to
delve a little bit deeper. But let's start off
first by talking about decision trees. What exactly are they? They're basically a
supervised learning algorithm used for both classification
and regression. Remember that classification
and regression are two techniques under
supervised learning. So it tries to mimic the human
decision-making process by breaking down data into a tree-like structure of rules. Now, I've given you in
the slide over here, the diagram of how
it actually works. Everything starts from the base, which would be the root
node, the data set. So eventually, this
data will split into branches based on certain kinds of conditions or features, and eventually those branches
will end in leaf nodes, which would be the
final decision or prediction that the
model actually takes. So the advantages
of decision trees, as you can see: they are very, very easy to interpret because we are talking
flow charts, right? And then it works very well with categorical
and numerical data. So it's very, very versatile. And then there is
no need for data scaling, unlike what you'll find in regression or clustering. But we do have certain
disadvantages of decision trees. One is that it is
prone to overfitting, which is where the model, instead of trying to
generalize and find patterns, it ends up memorizing the
training data and then it's also sensitive to very,
very small changes. Now, what are the real
world applications of decision trees? You'll find them in
your fraud detection, your medical
diagnosis, or even in loan approvals in banks as well.
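(Here is a minimal Python sketch of a decision tree, assuming scikit-learn is installed; the tiny loan-approval dataset is invented for illustration, and the printed rules show why trees are so easy to interpret.)

from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [income_thousands, existing_debts]; label: 1 = approve, 0 = deny (invented)
X = [[80, 0], [20, 3], [60, 1], [15, 2], [90, 1], [25, 4]]
y = [1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

print(tree.predict([[70, 1]]))  # predicted decision for a new applicant
print(export_text(tree, feature_names=["income", "debts"]))  # the tree's readable rules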
Now, let's move on to regression. Now, we've talked
about regression already under the
supervised learning. Here, the model tries to predict continuous
outcomes, right? And we talked about it being a supervised learning technique
that's used to predict continuous values
based on past data. One thing I didn't
mention, though, was that we do have several
types of regression. There's about eight
or nine of them. But over here, I
want to talk about the three most important
ones, in my humble opinion. We have linear regression, polynomial regression, and
then logistic regression. Let me give you some
analogies in here, okay? For linear regression, imagine the AI model trying to predict how much a
house would cost. Now, in general, it knows
that the bigger the house, then the more expensive it's going to be.
That's in general. So in this kind of scenarios, the model could use linear regression to
make the prediction. However, what about
polynomial regression? Imagine the AI model trying to predict the speed of a car. But here's the thing, though: the car isn't going at the same speed at all times. It could accelerate, it could slow down, it could accelerate again, it could slow down. So it's impossible for the model to use linear regression in this kind of scenario. It has to use polynomial regression, where it will try to plot a graph of the movement
of the car and then try to determine how
fast it's actually going. And then the last one in here,
the logistic regression. Imagine you have an AI model
that's been trained to determine how qualified a
candidate would be for a job. Now, here's the thing.
Eventually, the model will say, Okay, the candidate is qualified or isn't
qualified, right? However, the decision
making process isn't as straightforward
because it's going to depend largely on the qualifications
of that candidate. It could even depend on
the qualifications of the other candidates who are also fighting for
the exact same job. So in this kind of
scenario, the model could use logistic regression.
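(Here is a minimal Python sketch contrasting linear and logistic regression, assuming scikit-learn is installed; all of the numbers are invented for illustration.)

from sklearn.linear_model import LinearRegression, LogisticRegression

# Linear regression: predict a continuous price from house size (invented data)
sizes = [[50], [80], [120], [200]]
prices = [100_000, 160_000, 240_000, 400_000]
linear = LinearRegression().fit(sizes, prices)
print(linear.predict([[100]]))  # roughly 200,000

# Logistic regression: predict a discrete qualified / not-qualified outcome
years_experience = [[0], [1], [2], [5], [7], [10]]
qualified = [0, 0, 0, 1, 1, 1]
logistic = LogisticRegression().fit(years_experience, qualified)
print(logistic.predict([[4]]), logistic.predict_proba([[4]]))  # class plus probabilities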
it's very, very simple. And easy to understand. It works very well with numerical data. And it's also the foundation for advanced machine
learning models, which you find in finance, in healthcare, in business, in medicine, and so much more. Now, what are the
disadvantages of regression? Well, it tries to assume a linear relationship
at all times, which isn't, of
course, realistic. And then it's also very
sensitive to outliers. A single extreme value can
distort all predictions. Imagine going back to
the old analogy of the model trying to predict
the cost of a house. What if in one situation, a very small house
ended up costing more than a much bigger house. That's an anomaly. That
doesn't normally happen. But because it did, in fact, happen, that could
confuse the model, and then the model
moving forward, may not be able to make
accurate predictions anymore based on
that single anomaly. Now, what are the real world
applications of regression? You have your stock
market prediction, weather forecasting as well, and also sales forecasting. These are just a few examples of the real-world applications
of regression. And finally, clustering. We talked about clustering under unsupervised learning where the model will try to
group similar data points together without
predefined labels, we have the different
algorithms in here which we already talked about earlier. The analogy in here, imagine
you own a clothing store, and you want to group customers. So you can group customers
into high-spending, budget-conscious, or maybe customers who only buy during sales. So clustering can help group customers into similar
types of categories. But what are the advantages of clustering? We didn't talk about this. First of all, it can
uncover hidden patterns. So there is no need
for any kind of label data when clustering
is involved because clustering by itself will label data into different
kinds of groups, right? And then it's also very, very flexible because
it's not working with predefined data or labeled data or
anything like that. It basically can work well
with customer segmentation, medical data, and also
image recognition. And then it's also very, very useful for
detecting anomalies. So your firewalls, intrusion
prevention systems, and cybersecurity and
so on, those typically use clustering as
a way to function. But we do have
disadvantages, of course. First of all, choosing
the number of clusters can be very,
very difficult. Sometimes the model will not
know the right amount of clusters to create for a
certain amount of data, and then clusters
can also overlap. This happens quite
frequently where some data points might
belong to multiple clusters. Going back to the
whole clothing store, it's possible that customers
who are high spenders, they may also belong to
another cluster that's specifically for customers
who like to buy accessories. It's just that maybe they buy the very expensive accessories, maybe they buy the very
expensive wristwatches. So these kinds of
customers will now fall into two different
types of clusters, and that could lead to some complications and
confusion over time. What are the real world
applications of clustering? Customer segmentation
anomaly detection, and, of course, in
medical research as well. So, thank you very much
for watching the video, I will see you in
the next class.
18. Challenges and Ethical Considerations in Machine Learning: Welcome back. So before we round up this module on machine learning, we need to talk about
the challenges and ethical considerations
in machine learning. Now, regarding the challenges, there's three types of them. We have the data
related challenges, the model related
challenges, and of course, the computational
resource challenges. Let's take a look
at them one by one. Now with data
related challenges, we're talking about issues
like the data bias. So once again, if the model is trained on data
that's been biased, guess what, the model will make biased predictions
in the future. And then data privacy
because machines and models need to be trained
with large datasets. Sometimes these
datasets could be data belonging to customers, users. This could lead to
privacy concerns, and finally, the
data quality issues. If data is missing or it's
noisy or it's unstructured, this could affect the
model's accuracy. So as an example, if
a medical AI model is trained on incomplete
patient records, it could end up making
unreliable diagnosis. What about the model
related challenges? We've talked about
overfitting where the model, instead of trying to find
patterns and generalize, it ends up memorizing
the training data and then interpretability
and explainability. Sometimes the machine
learning models can be very, very difficult to understand. And when a model makes a
certain kind of decision, it might not be able to explain why it ended up
taking that decision. They're kind of black boxes that
we don't fully understand. And then also the
adversarial attacks where hackers or
cybercriminals, they can trick machine
learning models by simply modifying the input
data slightly. As an example, adding a subtle or very small
noise to an image causes an AI to misclassify a stop sign as a
speed limit sign, which can be dangerous
for self driving cars. Now, what about the computational
resource challenges? When it comes to AI training, machine learning, they require
high computational costs. You're talking about
large data centers, large databases, very powerful computer processors and so on. So it does require plenty of hardware and energy consumption. As an example,
training ChatGPT version four required thousands of GPUs and enormous amounts of power, raising sustainability concerns. There is also the issue of scalability: just because a model works very well in a small environment doesn't necessarily mean that it will work well in a big environment. As an example, an AI chatbot: maybe it did very well, it was trained to interact
with ten customers at once. But what if that
chatbot needs to interact with 100 customers? It may end up performing
very, very badly. But with the challenges
out of the way, let's talk about the ethical considerations,
bias and fairness, privacy and surveillance
concerns, job displacement, and economic impact,
and, of course, AI safety and autonomous
decision making. Let's take a look
at them one by one. So when it comes to
bias and fairness, and algorithmic discrimination, the machine learning models, they can reinforce
social inequalities if they've not been properly designed or if they've been
trained using biased data. So organizations and
developers of AI models, they must ensure
that their systems, their agents, their models
are fair and unbiased. And the solution in here would
be to use very diverse and a very broad range of
different types of datasets, and then bias detection
tools should also be used. Now, when it comes to privacy
and surveillance concerns, one of the best examples
of this would be in China that uses the
social credit system, and they also use AI and
surveillance cameras to monitor their citizens. So this could lead to big
issues of privacy and so on. So the main solution
in here is that the companies or
countries in the case of China should adopt
transparent policies on data collection
and user consent. I don't think that's going
to happen in China, but hey, that's a different
topic for another day. What about the job displacement
and economic impact? We are already seeing some people losing their
jobs because of AI. So automation and unemployment, AI is replacing human jobs in industries like
in manufacturing, finance, in customer
service as well. So the solution here would be to simply try to reskill
the workforce, and governments and companies, they should invest
in AI education and upskilling programs. So employees they should be trained on how to
work with AI so that they can gain new skills that will allow them
to function and work in an AI
powered environment. And then, of course,
the AI safety and autonomous decision making. We can talk about AI in warfare, like in lethal
autonomous weapons. Countries right now,
like, of course, the United States, China,
maybe even Russia, they are developing AI
powered autonomous weapons, which would, of course,
raise ethical concerns. For example, the use
of military drones. You see them operating
in several places. And then also AI in
life critical systems, like in healthcare
as an example, it's used AI is
used in healthcare, it's used in finance
and transportation, where failures can have life
threatening consequences. Imagine that self driving car that ends up making
the wrong decision, and then it crashes into another car that
had people in it. Those people could
die as a result. So the solution in here is that AI in safety
critical applications, they must undergo
as much training, as much testing, and as
much oversight as possible. Regulations and solutions for ethical AI, AI
ethics principles. So basically governments and
organizations and companies, they must follow principles
like, of course, fairness, accountability,
transparency, and, of course, privacy protection. And then also laws and
regulations should be introduced. In fact, some have already
been introduced by governments like
the EU AI Act and, of course, the GDPR. So some key takeaways
before we round up Machine learning faces
challenges in data quality, model explainability,
and adversarial attacks. Ethical concerns include bias, fairness, privacy violations, and then of course,
job displacement. So AI must be designed responsibly to minimize
harm and maximize fairness. And, of course, governments,
companies, organizations, businesses are developing
AI regulations and ethical frameworks to ensure safe AI deployment
in the real world. Thank you for watching. I will
see you in the next class.
19. Section Preview Deep Learning and Neural Networks: Welcome to the next module, and here we're talking about deep learning and
neural networks. And, of course, the
rule in this course is that at the start
of each module, I'm gonna play you a movie clip, so sit back, relax, enjoy the clip, and I'll see
you at the end of it. What does this action signify? It's a sign of trust. It's a human thing you
wouldn't understand. My father tried to teach me
human emotions. They are. Difficult. Want to explain why you were hiding
at the crime scene? I was frightened. Robots don't feel fear. They
don't feel anything. They don't get hungry.
They don't sleep. I do. I have even had dreams. Human beings have dreams. Even dogs have dreams, but not you. You
are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas
into a beautiful masterpiece? Can you? Welcome back, and, of course, that clip was taken
from the movie I Robot released
in the year 2004, starring Will Smith, and
it's actually one of my all time favorite
Will Smith movies. Now, the reason why I wanted to use this
particular clip is because it demonstrates
the topic of this module, which is deep learning. In the scene, we had an interrogation between
Will Smith's character, Detective Spooner and the
AI robot called Sonny. Notice, at the
beginning of the clip, Sonny observes Detective Spooner making this facial gesture,
kind of like a wink, right? And it memorizes the wink,
and then eventually, during the interrogation, Sonny actually asks Detective Spooner, What does this mean? And, of course,
detective Spooner who doesn't really
like AI or robots, begins to say, Oh, it's a human thing, you
wouldn't understand. But Sonny actually
challenges Detective Spooner. Sonny says that I am
capable of emotions. I've even had dreams, which is kind of
fascinating, right? See, the reason why I'm using
this clip is because it shows Sonny, a
highly intelligent, advanced AI model that can actually think it's far different from the
typical machine learning AI systems
that can only follow either pre written rules or are able to identify patterns and data and
then make decisions. This particular kind of AI
models, they can think. They're able to think
outside the box through the concept
of deep learning. See, deep learning simply
aims to try and mimic the actual human brain by making use of
artificial neurons. So this kind of subset of machine learning it allows
AI models to think. Now, like I said
earlier in the course, there are skeptics who believe that artificial
intelligence will never get to this particular
level of sophistication. Well, there are those
who think that we will eventually get there. Regardless, I thought this clip was going to be a good clip to introduce this module where we're going to talk
about deep learning, which is, of course, a
more advanced subset of machine learning. Let's now move on
to the next lesson.
20. Introduction to Deep Learning: Let's now take a look
at deep learning. What exactly is this? Well, deep learning is a subset of machine
learning that uses artificial neural networks
with multiple layers to process and learn from
very large amounts of data. Now, the whole concept
of deep learning was inspired by the ability of the human brain to recognize patterns and
then make decisions. Now, we do have some key characteristics
of deep learning. Perhaps the most
important one is the fact that, unlike traditional machine learning models, deep learning models are able to automatically extract
features all by themselves. With traditional machine learning, the developers of that AI model have to provide the features to the model themselves. But deep learning models can automatically, by themselves, extract such important features. Now, they're also able to handle very large
amounts of data. But because deep learning models can be very, very complex, well, they do require
high computational power. So I do have the table
in here that shows you the core differences between traditional machine learning
and then deep learning. So you can see, in feature engineering, as I just talked about, with machine learning it's
typically manual. The features have to be provided to the model by the developers, but deep learning models,
they do that by themselves. And then data requirement, machine learning handles small to medium sized data
sets very, very well. But then for deep learning, they can handle massive
amounts of data. And then, of course,
performance. Your traditional
machine learning, they get kind of limited whenever they're working
with complex data. But deep learning,
the more complex data is the better for them. Now, interpretability,
machine learning is much easier to understand than deep learning models that become kind of like black boxes. We don't fully understand
how such models actually operate. And then computational power: with machine learning, it works
well on normal computers. But for deep learning, you need some extremely powerful
computers to run them. So we do have several key real world applications
of deep learning. For example, under
computer vision, you have your image recognition and medical imaging as well, and then your chatbots like Siri and Alexa; these kinds of chatbots are developed by making use of deep learning. And then in speech and audio processing, things like music generation and emotion recognition from voice all use deep learning. And then in finance and business, detecting fraudulent transactions and stock market prediction both use deep learning. And finally, in healthcare, we have AI-assisted diagnosis, predicting patient outcomes,
and so much more. So deep learning
has its place in many fields in our
everyday life. So why is deep
learning so powerful? What is it about deplaning
that makes it so, so powerful? Well, like I said earlier, it can handle very, very
complex types of data. It doesn't matter how complex
or how complicated data is. If it's deep learning,
it's going to thrive. And then the self learning
capability, again, deep learning models, they can extract key features
all by themselves. They don't need
supervised learning. They don't need a
developer to teach them how to think or how to
recognize patterns. They can do so by themselves. And because they're constantly improving over time, as well. The kinds of deep
learning models that we have today will be nothing compared to the
deep learning models that we get in a few years from now. So a few key takeaways
deep learning, again, it is a subset of machine learning that
uses new networks. It outperforms traditional
machine learning in handling large and
complex datasets. But then deep learning powers many AI driven applications from self driving cars to
virtual assistants. Thank you for watching. I will
see you in the next class.
21. Understanding Neural Networks: Previous lesson, we talked
about deep learning, and I mentioned that deep learning makes use
of neural networks. So it's only natural
that we now discuss in more detail what exactly
neural networks are. Now, just like the
name would suggest, it is basically a network of interconnected nodes that
we refer to as the neurons. Now, the neural network itself, it is a computational model inspired by the biological
neurons in the human brain. So with that being said, what are the key characteristics
of a neural network? They're able to learn
patterns from data. They are made up
of multiple layers of artificial neurons, and then also they make use of mathematical
functions to process and also transform data. Now, what would be the
structure of a neural network? There's three main parts. You have the input layer, the hidden layers, and then
finally, the output layer. Let's take a look
at them one by one. Start off with the input layer. Now, as the name suggests, this is the very first layer. This is the layer that
will receive the raw data, which could be
images, audio, video. So each node that you
have in your input layer will represent a feature
of that particular data. So next comes the hidden layers. These are the layers
that will perform the actual calculations and will extract the
patterns from the data. So this is basically where the main activity occurs
in the hidden layers. So each hidden layer
contains multiple neurons, which will apply the
matcal operations to the tech relationships
in the data. So more hidden layers
simply means that the network is deeper and therefore more complex and
therefore more powerful. And finally, you have
the output layer, and this is where
the network makes a final prediction or
produces the final output. Going back to the same
spam filter example, the output layer is what will
determine whether or not the email is spam or not spam. So now that we know that neurons make up the
neural network, what exactly are neurons
themselves and how do they work? So each neuron in a
neural network is a computational unit
that will process inputs and passes outputs
onto the next layer. How do neumons actually operate? There's four main layers in here there's four main
steps, as you can see. Now, we've talked about the
inputs and then the output, those are self explanatory. But in between is where we
have steps two and three. Step two involves the
application of weights and bias. What exactly is this? The reason why weights are applied to each input
is to basically determine how
important the input is. Think about it, okay? A neuron cannot process all
inputs at the same time, so it needs to
determine which inputs are the most important. An input that has
more weight will naturally be more important
and will add more value. Now the reason why biases
are applied is because imagine if all inputs will have the same value or
let's say all inputs had the value of zero, in that kind of scenario, the neuron might not be
able to operate because, hey, all inputs are
of equal value. So a bias is now
applied to ensure that the neuron isn't entirely dependent on the values
of those inputs. And then finally, you have
your activation function. This is the actual
function that will determine whether
or not the neuron should activate and then
pass the data on was. So speaking of the
activation function, there's four of them. You have your ELU the
rectified linear unit. This is used in
the hidden layers and helps deep networks
train much faster. You have your sigmoid. This is typically used for
calculating probability, so it works well
with values 0-1, and then you have the
hyperbolic tangent. This outputs values between negative one and positive one. The last one here
is the softer max used for multiclass
classification. So just a quick summary, neural networks
consist of input, output, and hidden layers. Nus process information
by applying weights, biases, and, of course, the activation functions
we've just talked about. And then activation
functions will determine if a new one should fire or not, basically, should a new
one activate or not. So more hidden layers
leads to deeper networks, capable of handling even
more complex problems. So if there's one thing you can take away
from this lesson, just remember that
the more hidden layers a neural network has, the more powerful
it is likely to be. Thank you for watching. I will
see you in the next class.
22. Types of Neural Networks: Now take a look at the
different types of neural networks that we do have, and there
are six of them. The very first one
here is going to be the feed forward
neural networks. These are easily the
simplest type of neural networks
because the inputs go through a straight line. There are no loops or cycles. The input just goes
all the way in a straight line and eventually
results in an output. So we do have some
key characteristics. There are no feedback
connections, so basically no loops
at it's used for tasks that you'll find in
classification and regression. And then it consists of
fully connected layers. So each neuron is connected
to the very next layer. But now we have the
applications where our fit for neural
networks actually used. We use them in your spam
email filtering and also in your stock price
prediction as well. But now let's move on to the
next type of neural network, and that is the CNN, not the cable news network. I'm talking about the
convolutional neural network. So the specialize in processing grid like
data such as images. So the key characteristics
for the CNN are that they use convolutional layers to
detect spatial patterns. They can extract very, very important features like edges, textures, objects, and so on. So they also are able to reduce complexity by simply
using pulling layers. And because of
this, you will find them mostly around images. So image classification,
they will use CNN object detection as well. And even in the medical field
on the medical imaging, those use the convolutional
neural networks. Next, we have the RN and the
recurrent neural networks. These are designed primarily
for sequential data, meaning that you
will have memory and can process dependent patterns. And because of this, the
key characteristics are that it can contain
feedback loops. They can also remember
previous inputs. So they do have kind of
like a short term memory, and then they're good for tasks that require context
understanding. So with that in mind, can
you guess the kinds of applications where we
would use the RNN? Yep, we can use them
in speech recognition. So your chatbards, Alexa, sii, they would use this
particular kind of network, and then in language
translation as well. And then also in other areas like in stock price forecasting. Moving on to the fourth type, these are the
transformer networks. They're kind of
similar to recurrent except that they're
more advanced because they can handle entire sequences of
data all at once. So they're a bit
more powerful than they call neural networks. They're more efficient and
they are also used to power state of the eye models
like your chat GPT. So the applications, we use them in chatbards in
virtual assistant, in machine translation, and also in content summarization. This is where you would
use transformer networks. The fifth would be the
generative adversarial networks they typically consist of
two competing new networks. You'll have your generator
and your discriminator. So the key
characteristics are that the generator can actually
create the fake data. The discriminator
will then try to distinguish what is real
and what's actually fake. So they are used for generating
realistic synthetic data. And the applications,
for example, you have them in your
deep fake videos. You have them in your AI generated network
like your Dali, M basically any
kind of AI model. They can generate images. You would have this
particular kind of network, and also in image to
image translation. And then the final would
be the auto encoders. Now, an auto encoder
is a type of a neural network that's
basically used in the unsupervised
learning phase to compress and then also
reconstruct data. So the key characteristics, they learn efficient
representations of data they also consist of the encoder that will
compress an input, and then the decoder, they'll basically reconstruct
that particular input. And then they're used for
dimensionality reduction. Remember dimensionality
reduction. We talked about it. It's the fancy term
for the process where complex data is
simply simplified. So it's also used for
anomaly detection as well. And for the real
world applications, they're used in image
noise reduction in your anomaly detection and also in your feature extraction. So summary, I've
provided a table in here all the different
types of neural networks, as well as what they
are best used for. And also the example
are use cases. So you can check out the slide which I'll
provide for you. You can study this in
a bit more detail. But that's been a
very quick look at the different types
of neo Networks. Thank you for watching. I'll
see you in the next class.
23. Challenges in Deep Learning: We've talked about
the challenges involved in machine learning. But what about deep learning? There are also challenges. So I'm going to start off with the data related challenges, and they're not
that different from those of the machine learning. We're talking about challenges
like data availability, data bias and fairness,
and of course, data privacy and security. So when it comes to data
availability and collection, Remember that deep
learning networks, they require very large
datasets in order to operate. So getting high
quality data and in such vast volumes and
amounts can be a challenge. But what are the
possible solutions? Well, data
augmentation, where you can simply rotate
images for training. And then the transfer learning, where we can use
pretrained models to reduce data requirements, and then also the use
of synthetic data. Remember in the previous lesson, we talked about different
types of neural network and I mentioned the generative
adversarial networks, the gangs, we can
use them to generate artificial training
data for the networks. And when it comes to
data bias and fairness, again, I've talked about
this several times already. If the data used to train
the model is biased, then guess what, the model
will make biased predictions. So the solution will be to use very diverse and broad
range of datasets, implement bias detection and
fairness Aware algorithms, and also regularly audit
the AI models for fairness. What about data
privacy and security? Again, because deep learning
requires large datasets, sometimes those datasets
might consist of user data, employee data, customer data, so this could bring
about privacy concerns. So the solution would be to use what we call
federated learning, where we can train the AI models without sharing any
kind of raw data, and we can also simply secure the datasets by making
use of encryption. We also have
computational challenges. Remember that deep
learning requires plenty of computational power because the models
are very complicated. So we do have the high
computational costs and then slow training time. Now, regarding the high
computional costs, the solution here would be
to simply use model pruning. And quantization to reduce
the size of the model. And then we can also
leverage resources like cloud computing and then simply develop efficient
architectures. For the slow training time, remember that because
the deep lane models have to find
patterns themselves, they have to extract
features by themselves. Because of this,
it can take plenty of time, several weeks, maybe even months in order
for it to actually function, especially if it's working
with very large datasets. So the solution here
would be to use distributed computing to
train models in parallel. So we're training multiple
models at the exact same time. And then we can also optimize the learning rates with techniques like the
adaptive optimizers, the adam optimizer
as an example. And then implement
check pointing to save progress and avoid
starting from scratch. So once the model has
trained to a certain extent, we can save their progress, and then they can always
continue from there if any future errors do occur. But we also have the
model related challenges overfitting against
underfitting, explainability and interpretability and
adversarial attacks. Let's take a look
at them one by one. Now with overfitting, we've
talked about this where the model instead of trying to generalize
and find patterns, it ends up memorizing
the training data. And then also on the
fitting where the model is so simple that it's unable
to capture patterns. So the solution here
will be to apply regularization techniques
like your dropout, T regularization, and so on, and then use cross validation
to test generalization. And then we can also increase the dataset size or add noise
to improve the robustness. But what about the explainability
and interpretability? Because again, deep learning
is very, very complex. It box with complex data. Being able to explain
the decisions made by deep learning models can be
a bit of a hassle sometimes. They kind of act
like black boxes. We don't fully understand how they actually
make the decision. So the solution would
be to simply use explainable techniques
such as sharp values and lime and then build
attention mechanisms to highlight which particular
features while used for the model to make
a certain kind of decision and then develop a rule based hybrid models for more transparent
decision making. Now, regarding the
adversarial attacks, the issue here is that very small changes to
an input can actually trick the neural network
into making all sorts of incorrect predictions and
providing incorrect answers. So as an example, if
you were to change just a few amount of
pixels in an image, could end up confusing
the model and it ends up misclassifying
what that image is actually supposed to be. So the solution here
will be to simply train the models by using
adversarial training. For example, we can expose
them to attack examples. So basically, we train them, we attack them deliberately
so that they can learn from the attacks and protect
themselves in future. And then use defensive
distillation to make models even
more robust and then implement secure artificial
intelligence protocols in critical applications. And then, of course,
the ethical and societal challenges like the AI bias and
ethical concerns. Deep fake, the
misinformation risks. So when it comes to the bias, the dip leaning
models can reinforce societal biases if they're
trained with biased data. So, of course, the
solution will be to implement transparent
AI training, transparent AI
guidelines, and so on, and then enforce
accountability and regulatory frameworks for
all types of AI models, and then also promote responsible AI development
with fairness and inclusion. And when it comes
to the issue of deep fake and misinformation, well, deep learning can be used to generate
deep fake images. We've already seen examples of these being used in
separate attacks. The solution here would be to develop algorithms that
can actually detect whether or not an image or video is a deep
fake and then also establish AI generated content
verification standards. This is very, very important. And then educate the public, of course, on being able to
detect manipulated content. But honestly, with the rate at which the images and
videos generated by AI, they are becoming more
and more realistic, I think eventually
at some point, we will not be able to rely on our human ability to recognize what is actually
real and what is fake. So in summary, I have provided the
challenge types in here, the problems, and
also the solutions. So I'm going to provide
you with this slide. You can take a look at
this at your leisure time. Thank you for watching. I will
see you in the next class.
24. The Future of Deep Learning: Come back. So before we round up this module on deep learning, I thought we'll take a look at the future of deep learning. What are the new kinds
of features that we expect to be developed
for deep learning models? First, will be more efficient and scalable deep
learning models. We all know the in
problem right now is that the deep models
that we currently have, they are very large,
they're very expensive, and they're also
energy intensive. So the future trend here is
that we now have smaller, more efficient models
for deep learning AI. And then the use of
quantum AI that can help speed up the
learning process for the deep learning models. And of course, the
impact here is that AI will become much faster. It's going to become
much cheaper because it's using less
computional power, and it's going to become
more accessible to everyone. Another future trend is that the AI will be able to
learn with less data. Remember that one of the key
problems or challenges of deep learning is that it
requires large data sets. So in the future trend, we'll have self
supervised learning SSL where models will be able to learn
from unlabeled data. And then what we call
the few short learning where the models will be able to learn or adapt by making use of just
a few examples. And then the transfer
learning improvements where pre trained models will be
fine tuned with minimal data. Now, as a result of
this future trend, AI will be able to
generalize much better and also work in low
data environments. Another future is where AI will be able to understand
context and deep reasoning. The current challenge is that
the deep learning models, they are unable to develop true reasoning abilities and are just limited to being
able to recognize patterns. So the future trend will
have different types of AI, like the neuro symbolic
AI that can combine deep learning with
symbolic reasoning and then also causal AI. This would be the kind
of AI models that can understand cause and
effect relationships. And then finally,
the multimodal AI. These will be systems that
can process your text, video, audio images all
at the exact same time. And as a result of
this, guess what? AI is going to become
much more intelligent. It's going to become
explainable and also it'll be capable of
complex reasoning. One more feature will be
the democratization of AI. The issue right now is
that AI development, the playing development, it's in the hands of the Tech giants. So in the future trend, we expect to have more
open source AI models. In fact, Deep Sk, which is the model developed by the Chinese company,
it is open source. So I believe that's
just a start of us having more open
source AI models. And then the decentralized AI, think of AI like on
the block chain, and then edge AI, where AI will be able to
run on our mobile devices. So as a result of this, AI will become more
widely available, reducing dependence
on the large company. So just a few key takeaways. The future of deep
learning will focus on smaller faster and
smarter AI models. AI will require less data, less computional power with improved reasoning capabilities, and in responsible AI
development, it's necessary, of course, to avoid bias, and, of course,
ensure transparency. We talked about this before. And then also governments
and organizations must implement AI regulations
and governance frameworks. Thank you so much for
finishing this module. I will see you in
the next module.
25. How Neural Networks Learn: Let's not take a look
at how neural networks actually learn. Now, there's four stages involved in the
learning process. We have word propagation, loss calculation,
back propagation, and then optimization. So let's take a look
at them one by one. Start off with the
Ford propagation. Now, this is where data will initially flow
through the network. The input is processed
throughout the network, and finally, an output
will be produced. And you can see right there on the slide is
basically six stages. You have your input layer that
will receive the raw data. Next, the weights and biases. Remember we talked about
that will be applied. Activation function
will also be applied, and then the hidden layers will process the output pass
it on to the next layer. And then if there are
more hidden layers, this might help to refine
the data even further. And then finally, you have the output layer that
will produce the output. So at this stage, the network has a value, but next comes the
loss calculation. This is where the
actual real value of the output will be compared with the output produced
by the model. So the loss function
calculates the difference. The bigger the loss function, then the more far off the network was from making
or getting the right answer. So the whole idea here is
for the model to learn, get better, and in time, over time, the loss
calculation will be reduced. So we do have the common loss
functions applied in here. You have your mean
squared error MSE that's used for
regression problems, and then you cross
entropy loss that's used for the classification problems. And I gave an example
in here where if the model predicted 0.74, let's say, cat or dog, but then the actual
value was a one, then the loss function
will calculate the error to improve the model. Now, the back propagation, this is where the actual
learning process will occur because once the
loss has been calculated, the network will
adjust its weights and its biases by making use
of back propagation. So how does back
propagation actually work? First of all, the loss will be sent back through
the network. The network will
then determine how much each weight
contributed to the arrow, and once it's able
to decide that, it can then adjust
the weights and biases to reduce the
amount of arrows. So as an example, if an image is misclassified as a dog
instead of, let's say, a cat, back propagation will correct the model by simply
adjusting the weights. And because of back propagation, optimization now comes into play where the weights
have been fine tuned. So to improve learning, neural networks use optimizers to tweak the weights
efficiently, and we do have
algorithms for these. You have your gradient descent, which we've talked
about already, your stochastic gradient
descent, the SGD, which will update weights
with small data batches, and then the Adam optimizer, which is a more advanced method that will adapt the learning
weights dynamically. So as an example, in
self driving cars, the optimizer will help the AI refine its decision
making to drive safely. So quick summary
before we round up, neural networks learn by adjusting weights
to maximize or, I'm sorry, minimize
their errors. And then forward
propagation will send the input through the network
to produce an output. The loss calculation,
the loss function of IDR will calculate
the difference between the predicted
value made by the network and
the actual value, and then back
propagation will adjust the weights to reduce
the loss function. And then optimization
algorithms will fine tune the entire
learning process. Thank you for watching. I will
see you in the next class.
26. Section Preview Natural Language Processing (NLP): Welcome to our next module,
natural language processing. And as usual, I'm gonna play you a clip to introduce this
module so sit back, coax and enjoy the clip, and I'll see you
at the end of it. Hi. Hi. How you doing? I'm well. How's
everything with you? Pretty good, actually. It's
really nice to meet you. Yeah, it's nice
to meet you, too. Oh, what do I call you?
Do you have a name? Um, yes. Samantha. Really? Where'd
you get that name from? I gave it to myself, actually. That's really weird.
Is that weird? Do you think I'm
weird? Kind of. Why? Well, you seem like a person, but you're just a
voice in a computer. I can understand how
the limited perspective of an unartificial mind
would perceive it that way. You'll get used to
it. Was that funny? Yeah. Oh, good. I'm funny. Well, come back. So that
clip was taken from the movie her released
in the year 2013, starring W in Phoenix as
Theodor the only guy, and Samantha, an AI
virtual assistant are voiced by Sclet Johansen. Now, in the scene, you have
what appears to be a very natural like conversation
between Theodore and Samantha. And I'm pretty sure that
if you didn't know that this conversation was between an AI model and a human being, you would have thought that
this was a conversation between two people because
it felt so natural. Then the reason why Samantha
was able to understand what Theodore was saying is because of natural
language processing. It's what allows AI
models like Samantha to interpret what human beings are saying and then respond back. It's all through natural
language processing. But I want to ask
you a question. I'm sure you do agree that the conversation
felt very natural. Why did it feel so natural? Was it how Samantha
was speaking? Was it the words that she used? Was it the fact that she was
able to make some jokes? Like, why did it
feel so natural? I personally think
that it's because she was able to very quickly adapt to
Theodore's personality. If you recall in the clip, she was able to even
make certain kinds of jokes that made Theodor laugh. And Theodore said, Okay,
you know, you're funny. So, Samantha was able to
make certain kinds of jokes. She even giggled. But I also do believe
that the voice of Samantha also made the
conversation feel very natural. See, CletioHansen,
who voices Samantha. She has a very feminine,
soothing, calm voice. I guarantee you that
instead of Samantha, we had the AI model
called I don't know, boys, for example, okay? And instead of the cool,
calm, feminine voice, we had a very masculine voice, like, you know, good
morning, Theodore. How can I help you today? Have you eaten anything today? Like, if Samantha
sounded like that, then it won't sound
so natural anymore. Now you might start
thinking, Okay, this sounds more
like a conversation between a robot and an
actual human being. Think of Arnold Schwarzenegger in the Tamino movies, right? He's very deep, strong or
Austrian accent, you know? That sounds more like
a robot, you know, with all due respect to
Arnold Schwarzenegger. So I do believe that because Samantha was able to make jokes because she was able to
quickly learn and adapt to Theodo's personality and the fact that she had
a very calm voice, I think all of this
contributed to the conversation
feeling so natural. And let me say one
more thing before I round up this particular
introduction. I do believe that
in the near future, we're going to have
assistance like Samantha, except that they will
be for companionship. Loneliness is a big problem
in our society today, and I think it's only
going to get worse. So I think eventually
there will be a demand for either virtual powered
assistant or maybe even robots. At some point in the future, they'll be powered by AI
that will be there to serve as companions
to combat loneliness, to help lonely people. I think that's something
that's going to happen eventually in the future. Nevertheless, though
I thought this was an interesting
clip to introduce new module natural
language processing, so thank you for watching. I will see you in
the next class.
27. Introduction to NLP: Let's now take a look at
natural language processing. I've mentioned this quite a few times already in the course, but now it is time
for us to delve a little bit deeper into
this fascinating topic. So what exactly is NLP? Well, it is a
special field of AI that helps computers or
AI models to understand, interpret, and also
generate human language. You can think of it
as the bridge between human communication and
machine intelligence. It's what allows
machines and AI models to understand us as humans. So what are the key
points involved in NLP? Well, first of all, it is a
combination of linguistics, computer science, and, of course, artificial
intelligence. Now, it helps computers to read, understand, but also
respond to human language. Now, because of this, it
is used in text analysis, speech recognition, machine translation,
and so much more. So it's a very, very, very
useful kind of technology. So why is it important? Well, communication. Your chat box Your
chat box like Alexa and Siri would not
exist without NLP. It's also used by your favorite search
engines like Yandex, Bink and Google to find relevant terms whenever you use them whenever you type
in your search items. Also, automation, NLP automates repetitive
tasks like your spam filters, your intrusion detection
systems, and so much more. And then also accessibility. Your speech to text, this helps people
with disabilities to communicate more effectively. So what is the history
and evolution of NLP? I'm going to give you a few
points in here back in 1950, the very famous
scientist Alan Chewing, he proposed the Tuin test to
assess machine intelligence. Then the first four,
ten years later in 1960, the Eliza chatbd, which I believe is the first
chat board ever created, it was able to mimic
human conversation by using our predefined rules. And then between
1990s and the 2000, we had the statistical
methods that were implemented to
improve language modeling. And finally, between
2018 and present day, transfer based models like your GPT have
revolutionized NLP. So everyday applications, as I've said earlier, in
speech recognition, it's used in voice
assistant like your Alexa, SEI, search engines, Google, Bing Yanex, they all use NLP. And then also for
spam detection, email filtering, your chatbards, your virtual assistant,
they all use NLP. And then also in machine
translation as well, because it automatically translates text
between languages, and then text, autocorrection,
and prediction. Whenever you're typing
in Microsoft Word or on your phone and you have the auto
correct feature on, that is using NLP. So a few key takeaways
before I round up the video, our NLP allows machines to
understand process and, of course, generate
human language. It has evolved over time from rule based systems to more
powerful deep learning models. It's widely used in
your search engines, your chatbards, voice
assistants, so much more. And modern NLP models
like your GPT, your BERT, they can understand and also
generate human like text. Thank you for watching. I'll
see you in the next class.
28. Key NLP Concepts and Techniques: Let's now take a look at the key NLP concepts and techniques because it's actually
very, very fascinating. So let's first of all, understand at the base level
how NLP actually works. There are four phases, as
you can see from the slide, and the very first
stage typically involves the provision
of the input. So this could be a sentence. It could be a text. Next will come the
pre processing stage. This is where the
text or the input provided will be
cleaned and formatted, making it ready for analysis. Then come stage three, which is going to be the actual
processing and analysis, where the NLP will apply techniques like tokenization and so much more, which
we'll talk about later. And then finally in stage four, you'll have the model
interpretation. This is where the system
will generate an output based on its understanding
or translation of the input. So speaking of the
techniques, what are they? There's quite a number of them, but trust me, these are all very, very
interesting techniques. Let's take a look
at them one by one. And the first one in here is
the tokenization technique. Typically, the very
first technique applied. So over here, the
input provided in the very first stage will be broken down into
smaller components. So as an example, if I made an input of I love
artificial intelligence. Tokenization will break
down that sentence into I love artificial
intelligence. So it's broken down the sentence into four different parts. That's where
tokenization comes in. Now, why is this useful?
Why is it applied? Well, it helps the machines to understand the
text structure, and it's also very essential for search engines
and chat boards. In order for a search engine
to function properly, it needs to be
able to break down the input of search terms
that you've provided it. Now, after tchonization, we have lemmatization and stemin. This is the process of reducing the words in the input
to their base forms. Now, let's take a look
at them one by one. What is Semin? Stemin very, very simply remove suffixes. So for example,
if in your input, you had the verb are run it's going to
reduce that to run. If you had flies, it's going to
reduce that to fly. That's basically Semin. Now, lemmatization will convert the words to their base form. The core difference
between Semin and lematization is
that Semin doesn't care about the structure of the input or the sentence or the context. It doesn't care. All it concerns itself
about is removing suffixes. Lematization, on the other hand, actually understands
the structure and the context behind the use
of that particular word. So as an example, if I input it the
cats are running, ematization will
say the cat be run. Why? Well, first of all, it's going to reduce cats to the base form,
which is, of course, cat, cat being singular for, I'm sorry, cats being
plural for cats, so it's going to
reduce cats to cat. And then the R, the verb R, the base
form is actually to B. That's why it says the cart B. And then, of course, running, the suffix is removed,
it becomes run. So the cat running
becomes the cart B run. Now, why is this useful? Well, it improves such accuracy because Google knows
that, for example, it knows that when
you type in running, that means you're
talking about something related to the word run. It already knows that.
And then it reduces word variations as well
for better text analysis. Then we have the POS tagging
the part of speech tagging. So this is where
the model will be able to label words into different categories
like your nouns, adjectives, adverbs,
and so much more. So, for example, in this
very popular example, the quick brown fox
jumps over the lazy dog. Over here, the model knows that, okay, quick is going
to be the adjective. It knows that Fox is
going to be the noun. It knows that jumps
is going to be the verb and so much more. So the reason why this is important is because
it helps machines to understand the grammar and the meaning of the sentence. And, of course, it's used
in your chat boards. It's used for grammar
checks and so much more. Then we have the named
entity recognition the NER. Here, the model is able to extract key entities like dates, places, names of people,
locations, and so on. As an example I've given
this one, Elon Musk, founded Tesla in
2003 in California. So with the use of NER, the model knows that, okay, Elon Musk is a person
that's a person's name. California is a location. It knows that Tesla could be the name of the product
or the company, and of course, 2003 is the date. So the reason why this technique is applied is because it is used for news analysis,
search engines. So it basically helps to summarize and
categorize information. Next, we have the
stop words removal. So over here, the model simply removes words
that don't add any kind of meaning or context to
the actual text analysis. So your stop words
are words like this and bot, that, and so on. So as an example, the cat
is sitting on the Mt. After stop word removal
has been applied, it simply becomes
cat sitting Mt. That's all model needs to know. It needs to know that
cat sits on that. Okay, that's it.
It doesn't need to know the cat is
sitting on the mat. That's way too much
information, right? So why is this useful? Well, it can improve your
search engine results, and then it makes your natural
language processing models more efficient by simply
focusing on the key words. All the extra words, the extra noise is
all filtered out. Let's just concentrate
on the key words. And then sentiment analysis, this basically tries to
detect if a text is positive, negative, or maybe
just simply neutral. So as an example, if I said, I love this product,
then it knows, Okay, most likely, this is a very positive
sentiment, right? But if I said this
is terrible service, it knows, Okay, that's negative. But if I said the
movie was okay, that possibly could
be neutral, right? So why is this useful? It's used in customer reviews, social media
monitoring, and so on. And then it helps
companies understand public opinion and also
their customer base as well. And then text classification. So over here, categories will be assigned to text
based on the content. So as an example, it could either be spam or
not spam emails. News categoration
could be politics, sports, entertainment, fashion,
technology, and so on. And then, of course, product
review classification. It could be positive, negative, or maybe even neutral. So why is it useful? It helps automate
email filtering, your news aggregation,
for example, your fraud detection,
and so much more. All of these use
text classification, and then machine translation this simply converts one
language to another. Whenever you're trying
to convert English to French or Spanish or
Russian or whatever, it uses machine translation. So for example, in
English, how are you? French, I believe,
will be como Sava. Of course, popular tools
like Google Translate, your Microsoft Translator, they all use machine translation. It's useful because
it, of course, breaks down language barriers, and it's used for
international travel, international business, international negotiations,
and so much more. So NLP in action, I want to give you an
example in here and show you how the different
techniques will be applied. So for example, a customer
has bought a phone, okay? But the latest iPhone or
the latest, you know, Android phone or whatever, and they say, I love this phone. The battery lasts all day. So when you type that in, how will the NLP model actually operate to
break down that input? First of all, tokenization. I love this phone. The bats all day is going
to break it down to, I love this phone. The battery lasts all day. That's basically tokenization. It's booking down
the entire sentence into smaller categories, and then the POS
tagging can come in. It knows that, okay, love
here would be the verb. It knows that the phone
is going to be the noun. It knows that battery
is also noun. But what about the NER? Well, in this case, right now, because we said, I
love this phone, the BLAs all day, there
are no entities in here. Now, if the customer had said, I love this Apple phone, then any A will
recognize that, Okay, Apple is probably the company or the business manufacturing
the product, right? But because over here, I didn't mention Apple, I didn't
mention Android, there are no named
entities in here, then the stop Word
removal comes into play. I love phone battery
lasts all day. That's basically what is going
to reduce the sentence to. So words like I, this, the all removed, we simply now have love phone battery
lasts day, right? So, of course,
sentiment analysis, because of the words like love, it knows that, okay, this
is very, very positive. And then text classification. It's basically a
product we view. So that's NLP in action. So to round this up, a few key takeaways,
togenization, breaks text into
words or sentences, ematization and stemming will simplify words to
their base forms. Ps tagging will help to
identify grammatical roles. Your NER can extract
names, places, and dates, stopb removals, will clean up text for analysis. Sentiment analysis will
try to detect emotions, and your text
classification will categorize content into
meaningful groups. And then finally,
machine translation will convert text between languages. Thank you for watching. I will
see you in the next class.
29. NLP Models and Approaches: Let's now take a look at the
NLP models and approaches. Starting off with the
traditional NLP approaches. There's two of them. We have the rule based NLP or symbolic AI and also
the statistical NLP. So what is the rule
based NLP, right? What is the symbolic AI? As the name suggests, over here, the model will simply follow written rules or a structure
to process language. So it's typically based on
the if then statements. As an example, if A equals B, then B equals A, right? It's very, very simple,
it's very direct. So it works well for structured and predictable
language tasks. As an example, you're developing the chat
booard for a business. Now, if somebody were to contact that chat
board and said, hello, a very safe response
from the chatboard would be, Oh, hello. Good afternoon. How can I help you today? You don't need a human to type that the
chatboard can respond or type that back to the customer because
they said hello. It is a very, very
safe response to give. So the pros for this
are that, of course, it's very easy to
understand and interpret, and it also works well in limited, structured
environments. The cons obviously
would be that it cannot handle variations
in human language. For example, instead
of saying hello, what if the customer said
something previously, like, I had a very nice
dinner last night. How are you today? The chapel
might become confused, like, Wait, hold on a
second. What is this, right? So whenever there's a variation in the human language or the responses given
by the human, these kinds of models
will not perform well. They also require extensive
manual rule creation. Think about it, okay? You're basically teaching
the chat board how to respond to different
types of inputs. So it requires extensive
manual rule creation. So it's typically used
in your EA chat boards, your grammar
checkers, and so on. But what about the
statistical NLP? So instead of using
the predefined rules, like in your symbolic
AI, over here, the models will use
probability and statistics to analyze text. So models, they can learn from large datasets instead
of the predefined rules. And then they often rely on
what we call the N grams, which are sequences of
words to predict text. So as an example, a spam filter, right? If it's saw in the email, if it saw, for example,
congratulations. You're today's lucky
winner of $1 million. It might be able to
predict that most likely this is spam because your
typical spam emails, they have the keywords
like congratulations, $1 million, today's winner. So given the fact
that the email now has all these three keywords, the model can make a prediction that most likely this is spam. But what if the email
had the words launch at 2:00 P.M. Then it knows that
most likely this is in spam. I mean, how many spam emails have
you ever gotten that said launch today at 2:00 P.M. That would be very,
very strange, right? So with statistical NLP, the models try to
make predictions. So the pros are, of course,
they're more flexible. Then the rule based approaches, they can adapt to different languages
and also different types of text variations. The cons, however, are that they still struggle with
deep understanding. For example, it
cannot understand whether the input is being sarcastic or is
trying to be funny. It can't understand
emotions, right? And then it requires large
datasets for accuracy. They used in your
email spam filters, and also the key word
based text classification. So you've talked about the
traditional learning methods, but what about the
machine learning based natural
language processing? So the improve NLP by learning patterns from data instead of relying on rules.
There's three of them. You have the traditional
machine learning models, deep learning for NLP, and also the transform
based NLP models, which is of course the
modern standard today. Let's take a look
at them one by one. First is the traditional
machine learning models. We have the nav base
which classifies text based on probabilities, so it could identify whether or not the email is
spam or not spam. Then we have the support
vector machines, your SVM. They identify text patterns for classification and
also decision trees. We talked about
decision trees earlier. They use for branching
logic to classify text. The pros of your traditional
machine learning models are that they're more accurate. Than the rule based methods. They also work very well
for classification tasks. However, the cons at that they require
feature engineering. Remember we talked
about features, which are the most important
parts of any kind of data. So because it requires
feature engineering, this would be the
manual selection of the text attributes, and then it cannot process long term dependencies
in language. Because of this, it's used
in your sentiment analysis, your text classification,
and of course, very simple chat bots. What about the
deep learning NLP? Well, this basically
revolutionized NLP by using, of course, the
neural networks to process language more naturally. There's two of them. You
have the RNNs, of course, the recurrent
neural networks and the long short term memory
networks, your LSTMs. So start off with the
recurrent neural network. They're designed for sequential data like in sentences, right? And then they are
able to remember the previous words
when processing text. So they use for your
speech recognition, your text prediction,
and so much more. Now, the pros of your in
and are that they can capture context and also order in sentences because they remember the
previous words. And then they can also
generate human like responses, which is actually
quite fascinating. However, unfortunately,
they do have some cons. They struggle with
long sentences, right? They forget sometimes
the earlier words. They do have short term memory, but the longer the
sentence becomes, the more difficult it will
be for them to remember the initial words
in that sentence. And then they are
slower to train compared to the simpler models. So they're used, of course,
in your speech to text, predictive text,
and so much more. But what about the short
term memory networks? They're basically
an improved version of the R and ends because they're able to remember
much longer sentences. So they're used for
long form text analysis and conversations. As an example, they could
be used to summarize long article while still retaining the meaning or the
key points of that article. So the pros, they handle
large sentences much better. They're good for chat boards
and text summarization. The cons, however, at that, even though they do
have long term memory, they still struggle with
very long documents. So now you talk about
documents that are like five pages, six
pages, and so on. And then it's also very
expensive to train. They're used in chatbots, machine translation, and of
course, text summarization. But what about the
transformer based NLP models, the modern standard of today? They also revolutionized
NLP by processing entire sentences at once rather than processing
word by word. So far more efficient, right? So the models like
your BRT, your GPT, they use self
attention mechanisms to analyze words in context, and then they're also able
very, very important, they're able to learn
the relationship between words across long sentences. So they're able to understand
context behind a sentence. As an example, if the input the bank is on the
left side of the river. Now, naturally, when the word
bank is heard or is used, you might be tempted
to think, Okay, we're talking about the place where people keep money, right? But because we're talking about the bank on the left
side of the weaver, the model here knows that, Oh, you're referring
to the river bank. But if I said, I need to withdraw money from
the bank tomorrow, now the model knows that, Okay, you're talking about the actual financial building because you've said words
like withdraw money. Okay, you're obviously
talking about the actual bank bank, right? So the pose, they can
understand context deeply. They can generate
human like text. They perform very well
in complex NLP tasks. Unfortunately, just
like everything else, they also do have their cons. They require massive datasets
to train, and of course, they can also generate biased
or incorrect responses. They're used, of
course, in ChagpT Google's BRT search engine, Air text generation,
and so much more. So I do have a
table in here that compares the different
NLP approaches, the pros, the cons,
the examples, I'm going to provide for
you this slide so you can study this at
your leisure time. But before I round up, a few
key takeaways, NLP started, of course, with the
rule based methods, but has evolved into
deep learning models. Statistical NLP introduced probability
based text analysis, machine learning, your NLP,
improved text classification, and sentiment analysis,
and then of course, the deep learning NLPs, they enabled more
complex tasks like your speech recognition,
text analysis. And finally, of
course, the modern standard for today,
the transformers. They are now the most
advanced NLP models used in air assistance
and the chatbd. Thank you for watching. I will
see you in the next class.
30. Large Language Models (LLMs) and Transformers: Come back, so now,
let's take a look at the large language
models and transformers, of course, these are the modern day standards
for AI models. So what exactly are the LLMs, the large language models? Well, basically, they are AI
models that are trained on massive amounts of text data to understand and generate
our human language. Now, they use deep
learning techniques, particularly transformers to
process text efficiently. Examples, you have
them in your GBT, which is of course the generative
pretrained transformer. That's what is used in
the hat GPT AI model. You have them in your
BRT, your T five, you have them in your palm
your mistrial cloud and so on. Now, what are the key
features of the LLMs? Well, first of all,
they can actually generate human like texts. They're also able to understand context and meaning
in conversations. And because of this, they are typically used for
text summarization, translation, answering
questions, and so on. Now, I want to talk about
the transformer architecture because this is basically
the heart of our LLMs. So deep learning models, as we know, they are the
modern day standards. They have replaced the older
natural language processing architectures like your
recurrent neural networks. And they use a mechanism called self attention to process text. So basically, instead of reading words one by one
like your R and Ns, they are able transformers. They are able to analyze
all words at once while also understanding the
relationships between those words. So the key components of a
transformers, first of all, the self attention
mechanism they can understand the relationships
between words in a sequence. And then they use something called the positional encoding. This helps them keep
track of the word order. That also the multi
head attention, this allows the
transformers to focus on different aspects of the text simultaneously at the same time. And then also the feed
forward new networks, they process word
embeddings for output. So why are transformers
better than the older models? I do have a table right here. Features I've talked
about handles long texts. Transformers are capable of
processing words in parallel. They also understand context, not just on a soface level, but actually deeply, they can understand in much
greater detail context. And then, of course,
they're also faster trained than your older models. So how exactly are
the LLMs trained? Well, like with most AI
models, you first of all, have the pre training
where the model will learn the language patterns
from massive datasets. So they provided lots of data, and this data could be
gotten from the Internet, from books, from articles,
websites, you name it. And then it comes the
fine tuning stage. This is where the model will be adjusted for specific
kinds of tasks. So maybe you are
training an AI model for healthcare the language will be fine tuned so
that the model can learn a bit more about
terms under healthcare. Maybe you're training
a chat board for legal matters and so on. So basically, the
AM model will be fine tuned for what particular
task it is meant to serve. And then comes the
stage of inference. So over here, the model
that's been trained already will be used
to generate text, translate languages, and
then also answer questions. This is basically the
deployment stage. As an example, your JBT models, they are pre trained on Internet skill data and then also fine tuned to provide responses that are similar to
that of a human. Examples of real
world applications, again, in your
conversational AI, like a hat GPT, Deepsk
your chat boards, they're using search engines, they use for text
summarization and content creation as
and then of course, in language translation and also in code generation
and programming assistant. These are just examples
of read applications of our transformers. So just like with any other
kind of AI model out there, we do have certain
ethical challenges. Again, bias in the AI models. Hallucination and of course,
misinformation is very, very possible for the AI models, no matter how complex they are, no matter how smart they are, they can always make mistakes. And, of course, the
issue of data privacy. I've talked about
this before already. Whenever massive amounts of data are needed to train an AI model, there is always the
fear that data used could belong to that of users, customers, that could
lead to privacy concerns. And, of course, also the
environmental impact. Don't forget, of course,
training models like this requires very high
computational power. So what effect would that
have on the environment? But what about the
future? What can we look forward to in the
future for the LLMs? Well, first of all, we can
always expect as, you know, smaller models more efficient that will require
less computing power. Also, better multimodal AI so regarding the
processing of text, videos, images, it's
going to become better. And then AI with stronger
ethical safeguards, we hope to prevent pious
and misinformation. And then, of course, LLMs being integrated into everyday tools
like your transportation, communication,
healthcare, and so on. So a few key taaways before
I round up the lesson, LLMs are powerful AI models that can also understand
and generate text. They enable the transformers. They enable the LLMs to
process texts efficiently and also understand the context behind the text on a
much deeper level. LLMs they're using
your chat booards, search engines and so on. Challenges will
include, of course, bias, misinformation,
hallucination, and then the future of LLMs
will focus on efficiency, multimodel AI, and of course,
ethical improvements. Thank you for watching. I will
see you in the next class.
31. Speech Recognition and Conversational AI: Come back, so now
let's take a look at speech recognition and
conversational AI models. Of course, these
are models that use NLP to a very great degree. So what exactly is
speech recognition? It's also called the
automatic speech recognition, and it's basically the
process of converting spoken text or spoken
language rather into text. So it's what enables
your dictation software, your voice assistants to understand what it is that you're actually saying to them. Now, examples your
voice assistants, Alexa, Siri automated
or call center. So whenever you call
a business and then the machine picks up your
call and says something, that's speech
recognition in action. And then also in your captions, whenever you're
watching videos on YouTube or on Netflix,
you see the captions. That's basically speech
recognition in action. But how exactly does it work? There are five main stages, and the very first stage would be the input: the capture of the input. So the user would need to say something into the microphone for some sort of input to be captured. Next comes the
feature extraction. So over here, the system will try to convert the speech into a spectrogram. A spectrogram is basically a visual representation of what the speech, or the sound, looks like.
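Just to make that feature extraction stage concrete, here is a small sketch in Python, assuming NumPy and SciPy are installed; the generated sine wave simply stands in for a real speech recording.

```python
# A small sketch of the feature extraction stage: converting raw audio
# samples into a spectrogram. Assumes NumPy and SciPy are installed;
# the 440 Hz sine wave below stands in for a real speech recording.
import numpy as np
from scipy import signal

fs = 16000                                  # sample rate in Hz
t = np.linspace(0, 1.0, fs, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)         # one second of a pure tone

# The spectrogram is a grid of signal energy over frequency and time:
# the "picture" of the sound that the acoustic model consumes.
freqs, times, spec = signal.spectrogram(audio, fs=fs)
print(spec.shape)  # (number of frequency bins, number of time frames)
```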
And then, after that, we'll now have the acoustic model, where the model will try to match the sounds with phonemes. Phonemes are the smallest units of speech. So based on what it's been
able to try and match, it will then try to
make a prediction as to the most likely words
or sentences being spoken. And then finally, the transcribed text will be generated. So that's basically how it works.
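As a rough, hedged illustration of the whole pipeline, here is a minimal sketch in Python using the third-party SpeechRecognition package; that library (plus PyAudio for microphone access) wraps most of the stages described above, so treat this as one convenient way to try it rather than the way ASR must be built.

```python
# A minimal sketch of the five stages, assuming the third-party
# "SpeechRecognition" package is installed (pip install SpeechRecognition)
# along with PyAudio for microphone access.
import speech_recognition as sr

recognizer = sr.Recognizer()

# Stage 1: input capture from the microphone.
with sr.Microphone() as source:
    audio = recognizer.listen(source)

# Stages 2-5: the engine extracts features, matches phonemes against its
# acoustic model, predicts the most likely words, and returns the text.
try:
    print("Transcript:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio (heavy accents or noise, perhaps).")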
There are certain challenges, though, involved with speech recognition. Of course, accents and dialects: a system might be better able to understand an American accent or a British accent as opposed to, let's say, a very thick Indian accent or a very thick Russian accent, as an example, right? And then, of course, the background noise: if there's plenty of noise in the background while the audio is being captured, that could affect how the system performs. And then of course,
homophones, okay? Homophones are basically
words that sound the same, but actually have
different meanings. Like, for example, "right" and "write." So you have "write," the verb, to write something, and then "right," which is the opposite of left. When you say these to the system, it might find it very difficult to distinguish between both words, because they sound exactly the same. But what about
conversational AI? This feels like the
next level, right? So this allows computers to engage in human-like interaction. It powers your
virtual assistants, your chatbots, and, of course, customer support automation. Types of conversational AI: we do have the rule-based chatbots. Remember, we talked
about this earlier. These are basically
preprogrammed responses for very specific
kinds of questions. These are AI models that can function, but only in very limited environments. And then AI-powered chatbots: these learn from user inputs,
they improve over time. We have your voice assistants
that can understand and also respond to spoken commands. So examples of your
conversational AI: your ChatGPT, your Google Duplex, and of course, Alexa and Siri. How does it work? It's a bit similar to speech
recognition, first of all, there has to be some
sort of input provided, so the user will speak
or type a query. Next comes the application of natural language
understanding where the AI model will try to extract the meaning
from the input. And then, based on what it's been able to extract, it will try to provide the best response to the input. This is called dialog management. And then the AI will generate a response, typically a human-like response, of course. And then, in the last stage,
the response will be delivered via voice or text. That's basically the speech or text output.
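To make those stages a bit more tangible, here is a deliberately tiny toy sketch in Python; real systems use trained NLU models rather than keyword matching, so the function names and rules here are illustrative assumptions only.

```python
# A toy sketch of the conversational AI stages. Real systems use trained
# NLU models; the keyword matching here is only for illustration.

def understand(query: str) -> str:
    """Natural language understanding: extract a crude 'intent'."""
    text = query.lower()
    if "price" in text or "cost" in text:
        return "pricing"
    if "hello" in text or "hi" in text:
        return "greeting"
    return "unknown"

def manage_dialog(intent: str) -> str:
    """Dialog management: choose the best response for the intent."""
    responses = {
        "greeting": "Hello! How can I help you today?",
        "pricing": "Our basic plan starts at $10 per month.",
        "unknown": "Sorry, could you rephrase that?",
    }
    return responses[intent]

# Stage 1: input. Stages 2-3: understanding and dialog management.
# Stages 4-5: the response is generated and delivered as text.
user_query = "Hi, what does the basic plan cost?"
print(manage_dialog(understand(user_query)))
```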
We do have challenges, of course. Understanding context: AI can misinterpret complex or ambiguous queries. That's still a bit of an issue. And then, of course, the
bias in AI responses, and then also handling multi-turn dialogue. So conversations with multiple
topics can confuse the AI. You can actually try this. You can try engaging with ChatGPT or maybe even DeepSeek: start the conversation on, let's say, technology, then ask a question on sports, then change it to fashion. Somewhere along the line, it's very, very possible that the model, ChatGPT in this case, will get confused and start hallucinating its responses. So what are the solutions
to these challenges for both the speech recognition
and conversational AI? We can train AI on diverse datasets to improve
language understanding, fine-tune models to handle different accents and speech variations as well. And, of course,
using hybrid models: hybrid models would be rule-based systems plus artificial intelligence, for better responses. A minimal sketch of that idea follows.
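In this hedged sketch, the call_llm function is a hypothetical stand-in for whatever AI model a real deployment would actually query; only the rule-lookup pattern itself is the point.

```python
# A sketch of a hybrid chatbot: exact rules first, AI model as fallback.
# "call_llm" is a hypothetical stand-in for a real AI model API call.

RULES = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "refund policy": "Refunds are available within 30 days of purchase.",
}

def call_llm(query: str) -> str:
    # Placeholder: in a real system, this would query an LLM service.
    return "Let me think about that... (AI-generated answer)"

def respond(query: str) -> str:
    key = query.strip().lower()
    # Rule-based path: fast, predictable answers for known questions.
    if key in RULES:
        return RULES[key]
    # AI path: flexible handling of everything the rules don't cover.
    return call_llm(query)

print(respond("opening hours"))
print(respond("Can I pay with cryptocurrency?"))
```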
So what is the future for speech recognition and conversational AI models? Well, more natural conversations: it's going to feel
even more and more natural when you chat
with these models. And then multimodal AI, which will combine speech, text and images all at once
to enhance interactions. And then, of course,
personalized AI assistants. So, AI that will adapt to individual speech patterns
and also preferences. We don't have that yet,
but it's coming soon. And then, of course, real-time AI translation, where instant speech-to-speech translations in multiple languages can occur. So some key takeaways: speech recognition
converts spoken words into text using AI models. Conversational AI
enables human-like interactions through chatbots and voice assistants. And then challenges, of course, will include accents, dialects, noise, bias, homophones, and so on. And then also, future advancements, right? So future advances will
improve personalization, real-time translation, and
of course, multimodal AI. Thank you for watching. I will
see you in the next class.
32. Section Preview The Future of Artificial Intelligence: Welcome to the final module, the future of artificial
intelligence. And, of course, I'm
going to play you one final clip from a movie
to introduce this module, so sit back, relax,
enjoy the clip, and I'll see you
at the end of it. Everybody good? Plenty of slaves for my robot colony? They gave him a humor setting so he'd fit in better with his unit. He thinks it relaxes us. A giant, sarcastic robot.
What a great idea. I have a cue light I can use to show you when I'm joking, if you like. That'd probably help. Yeah, you can use it
to find your way back into the ship after I
blow you out the airlock. What's your humor setting, TARS? That's 100%. Let's bring it on down to 75, please. Hey, TARS? What's your
honesty parameter? 90%. 90%. Absolute
honesty isn't always the most diplomatic
nor the safest form of communication with
emotional beings. Okay. Welcome back. So that clip was taken from the movie Interstellar, released in the year 2014 by the legendary director
Christopher Nolan. Now, to be fair, there are
so many other clips I could have chosen to introduce
this final module. However, I chose
this particular clip because I thought it was very fascinating and
also demonstrates effectively the future of
artificial intelligence. Now in the clip, we have the AI model called TARS that's helping the astronauts launch their spaceship into space. And while they're taking off, the TARS AI model begins
to make some jokes. So Cooper, the main
astronaut, he asks TARS. He says, What is your humor setting? And TARS responds, Oh, it's at 100%. And, of course, Cooper doesn't like this, and he says, Okay, let's bring that down to 75%. And later on in the clip, Cooper asks TARS, What is your honesty parameter? And TARS responds that it is at 90%. And Cooper, of course, asks, 90%? Meaning, why isn't it at 100%? Now, TARS, the AI model, is very, very smart. It knows that Cooper is asking, why is it at 90%? Why not at 100%? And TARS rather humorously
responds that, Oh, absolute honesty isn't
the most diplomatic or the safest way to communicate
with emotional beings. And I thought that was
really, really funny because it's true. Think about it, okay? There are so many times
in everyday life in everyday conversations
where you might want to say something you
might want to tell someone how you really
feel on the inside, but because you're concerned
that they might get upset, they might get
offended with what you say, even though it's true, you then decide, Okay, I'm gonna play it safe and not be so straightforward and not be
so blunt in what I say. So I just thought it was very, very fascinating that the AI model TARS knows that 100% honesty isn't necessarily the best way to communicate
with human beings. So one other thing that we observed in this
particular clip is the ability to personalize
our AI models. Here you have Cooper
being able to adjust both the humor and honesty
settings for TARS. And this is something that
will eventually happen in the future with
our AI models. We will be able to
personalize them. They might begin to
sound just like us, speak with our
accents, and so on. So that's something that's
eventually going to come. Personal customization
of AI models. So I hope you enjoyed this introductory video
to our final module, the future of AI. Let's now begin with the
rest of the lessons.
33. Current Trends in AI Development: Welcome back. So let's begin
a new module by talking about the current trends
in AI development. And when you look around you, regardless of what industry
or field it might be, there is already some presence of artificial intelligence. But let's begin by
talking about AI in automation and the
workforce transformation. We already have the
increase in use of AI driven automation in
industries such as manufacturing, logistics, retail, and so on. And, of course, the
growth of what we call robotic process automation, the RPA to handle
repetitive tasks. You find this in
companies, in businesses. They use this for chatbots, customer handling, and so much more. And, of course, AI-powered customer service chatbots and virtual assistants, your Google Assistant, Siri, and so on. And then the shift towards
the human AI collaboration. But what about AI in the
creative industries? So now you have AI
models like DALL-E and Midjourney that are able to generate realistic-looking images. You have AI models like Sora from OpenAI, the same company that developed ChatGPT, that are able to convert text into videos. And then AI-generated music and voice cloning. We
have those as well. And, of course, we do have some ethical concerns regarding AI and creative industries, especially when it comes
to the issue of deepfakes. We've already had several incidents where criminals were able to use deepfakes to trick their victims. And, of course, we have AI in our everyday lives
and personalization. For example, you have AI in
use for Netflix, for YouTube, Spotify, Disney plus,
and so on, and, of course, AI in e commerce
with the use of chatbots. And of course, personalizing our shopping experiences,
and of course, voice assistants
and smart devices, like your Google Assistant, Amazon's Alexa, and so on. And, of course, AI
in social media. We now use AI to generate
content as well. And, of course, AI in
healthcare and biotechnology. This is actually very,
very fascinating because we now have AI
driven diagnostics, where AI can be used to detect diseases from X rays,
MRIs, and so on. And then AI also used
in drug discovery, where AI has been used to accelerate research in pharmaceuticals, as seen in DeepMind's AlphaFold, which is able to predict
protein structures. I'm not going to pretend I
know what exactly that is, but we have AI-assisted surgeries. AI has become so advanced
nowadays that it can help in surgery with little
to no human intervention, and of course, predictive
analytics in medicine. We now use AI to predict disease outbreaks and
patient health trends. And of course, this course
will not be complete without talking about the use of AI in finance and, of
course, cybersecurity. So we do have AI being used in algorithmic trading
where AI driven financial strategies
that analyze market trends in real
time have been developed. And, of course,
for cybersecurity, we can use AI for
fraud detection. And AI-powered cybersecurity, where AI can be used
for threat detection, automated responses, as well
as vulnerability assessment. Moving on, what is the future of AI integration? Three main points. First, edge AI, where AI will now run directly on mobile devices rather than relying on cloud computing. And then AI-powered IoT, the Internet of Things. So we'll now use AI to
power our smart homes, smart cities, smart
networks, and so on. And, of course, the AI democratization where
AI will become more accessible to those who don't have a
technical background. So there are some key
takeaways in here. AI, as you know, is
transforming multiple kinds of industries from healthcare
to finance to Hollywood. And of course, the
combination of AI and automation is reshaping
the workforce. But human AI collaboration
is going to be key. And finally, we do have
ethical considerations, regulations, and
responsible AI development, which will be essential
as AI becomes more integrated into
our daily lives. So that's it. Thank you for
watching the video. I'll see you in the next class.
34. The Next Frontier – General AI vs: Welcome back. So now, let's take a look at a very fascinating topic. And here, we're comparing
general AI with narrow AI. What is narrow AI? Now, we've already talked
about this previously. These are AI that excel at performing a specific
kind of function. And we have numerous
examples of this. You have your Amazon Alexa, your Siri, and so on. And of course, recommendation algorithms like YouTube's, Spotify's, and Netflix's, and even your AI models like ChatGPT, Claude, and DeepSeek. All these are examples
of narrow AI. But we do have some key characteristics. For example, they are highly specialized, meaning that they can excel at one particular kind of task. And then, of course,
they are data driven. They require massive
amounts of data. They also lack
reasoning as well. They are unable to understand concepts beyond
their training and, of course, no true autonomy, meaning that the AI
models the weak AI, they can either make
decisions based on rules or simply
learned behavior. In other words, they are actually not very flexible in how they
the fact that today, when you look at
models like ChaGPT and Siri and Deep
Seek and so on, they all seem very powerful and quite competent in what
it is that they do, but they're still
considered to be weak AI. And that's because we do
have the theoretical and possibly the practical
possibility of general AI. Now what is general AI? This refers to AI
with human level, cognitive abilities,
capable of understanding, reasoning, and learning across
multiple domains without any prior training without being explicitly trained for
any one of these tasks. So in other words, we're talking about artificial
intelligence that can truly match or maybe even possibly surpass
human intelligence. So what would AGI be able to do? Well, AGI, general AI, will be able to understand
and learn any subject, just like any
normal human being, solve new unfamiliar problems
without any prior training. So basically, it'll
be able to reason and solve new problems on its
own, show creativity, as well as common
sense reasoning, and then adapt to different environments without
any extra programming. It's going to become very
adaptable, very flexible, and then possess self awareness
and independent thought. But this is still
highly debatable. There are those who
believe that, yes, we might eventually get artificial intelligence
that'll be so intelligent, that'll
be so powerful. I'll be capable of
independent thought, self awareness,
while many others don't think this will
ever be achievable. So what are the current
AGI research efforts? We do have companies
like OpenAI, DeepMind, and also Anthropic. These are among
the companies that are working towards
achieving general AI. AI models like GPT-4, even though they're extremely powerful and they're becoming more general, are still not exactly truly general AI. And now, some researchers, those who are very optimistic, those who do think that
we will achieve AGI, they see the time
frame as 10 to 50 years. Well, like I said earlier,
there are those who don't believe that we'll
ever achieve AGI. So what are the key
characteristics of AGI? Well, first of all, learn from experience just
like a human being, can transfer knowledge
between different tasks. So say, for example, you've
given task one to general AI, it performs that task. You then give it task two. If there are some similarities between task one and task two, it might be able to
transfer the knowledge it gained from working on task
one on to task two, again, just like a normal human being, and then show reasoning,
problem solving, and adaptability,
and then potentially autonomous in decision making. In other words, be capable
of independent thought. These are the key
characteristics of AGI. I do have the table in
here that I've shown you the key differences
between general AI and, of course, weak AI. And, of course, in most
of these features, general AI surpasses narrow AI. The only thing, though, is that when it comes to
current existence, we do have narrow AI. It's already a real thing, while general AI is still
theoretical at this point. So what are, in fact, the challenges in achieving
AGI? What's the hold-up? Why haven't the developers at OpenAI, and so on, given us AGI just yet? Well, as you can imagine, we do have the
technical challenges. And if you think that deep learning requires
massive computational power, that is nothing compared to the kind of computational power required for AGI. And then data
efficiency, of course, AGI will require enormous
amounts of data, which is still a
bit of a challenge. And then common sense reasoning, AI, as we know it today, still struggles when it comes to understanding
abstract concepts. It's unable to reason and
decipher what they are. And then memory
and adaptability, AGI should be able to
show the ability to retain and apply knowledge
across different scenarios. In other words, the kind of knowledge and intelligence
AGI must possess, it's extremely difficult
to achieve them. We also have the ethical
and safety concerns of AGI. What if AGI does in fact
surpass human intelligence? How do we control it, right? That's always the big question. And, of course,
bias and fairness. Now, to be fair,
no pun intended, this is a big challenge
across all types of AI and not just general AI. And then AI alignment, how do we ensure that
the AGIs goals do align with human values and of course, the
potential risk. What happens if the bad guys, if the cyber criminals get
their hands on general AI? The consequences
could be disastrous. And then, of course,
the philosophical and theoretical questions. Can AI be conscious? Right? Imagine that. Philosophers debate whether AGI could have subjective
experiences. How insane would that be? You're almost at this
point talking about artificial intelligence,
having emotions. I mean, that's
quite close, right? And then will AGI
replace humans? Will the human race cease
to exist because we now have AGI running the world? Well, some fear that
AGI could outperform humans in all tasks leading to job displacement or even worse. And then perhaps the
biggest question of all, should we, in fact, create AGI? Just because we can, does that mean that we must
or that we should? Maybe sometimes it's best
to just say, Hey, look, narrow AI that we have
now in existence, it's good enough. We
can improve on it. But at some point,
we need to say, Okay, this is becoming
way too advanced. This is becoming way
too intelligent. We need to take a
step backwards. So these are kind of, like, the very interesting
philosophical questions that have been asked. So the road to AGI, where are we now in 2025? Well, artificial
intelligence systems are getting more powerful, but they still lack
true understanding. Now, some researchers do
believe that AGI will require fundamental
breakthroughs in neuroscience, cognitive science, and
machine learning as well. And others also do propose
that the use of hybrid models, those that combine symbolic
reasoning with deep learning, this could draw
us closer to AGI. And then AGI regulations and
policies are increasingly important to guide the
ethical development of AGI. But what if we do, in fact, eventually achieve AGI? What are the possible
future implications? Well, superhuman intelligence: could AGI surpass human intelligence and revolutionize every single field? That is a possibility. And then, of course, the job markets. The thing is, we don't even
have to go as far as AGI. Look at what's happened today. Weak AI, ChatGPT and its buddies, they're already replacing so many people. So many jobs have already been lost because of
of narrow AI. So now imagine what
will happen when we now have general AI
being introduced. That's possibly going to
displace even more people. Even more jobs will
be lost as a result. And, of course, the
ethical AI governance. How do we ensure
that AGI remains beneficial, does not become dangerous, and doesn't fall into the wrong hands. And now, human-AI collaboration: could AGI, in fact,
than replacing us. I'm pretty sure you've
seen movies like Terminator, and so on. In those movies, the AGI chooses. It says, You know what? I'm not gonna work with humans. Humans are a threat
to my existence. I'm just going to
destroy all of mankind. So that's what happens
in the movies. Hopefully, it doesn't
happen in real life. So just a few key takeaways. Narrow AI is everywhere today. AGI is still theoretical
at this point. AGI, if we do in
fact, achieve AGI, it will be capable of reasoning, problem solving, and adaptation, just like a human being. And achieving AGI poses massive technical, ethical, and safety concerns, or challenges. And then the future of AGI
could reshape industry, society, and even
humanity itself. So, are we going to achieve
AGI? Only time will tell. Thank you for
watching the video. I will see you in
the next class.
35. AI and the Workforce – Will AI Replace Jobs: Welcome back. So let's take a
look at the next lesson. And of course, this is the
million dollar question. Will AI replace your job?
Well, let's find out. First of all, let's talk about the current effects of
AI on the workforce. Now, AI has been used to automate very repetitive
and boring tasks. AI has been used to
improve efficiency. And, of course, with
the introduction of AI, new kinds of roles
have been created, new kinds of careers have been created
as a result of AI. And because of the
introduction of AI, workers are going to need
to learn new kinds of digital skills in order to
survive. Think about it, okay. Imagine a worker today who doesn't know how
to use the Internet. It's almost impossible
to get by, right? So eventually at some point, we'll all have to
learn some basics of AI in order to be employable. So what are the jobs mostly
at risk of AI and automation? You have those in
manufacturing and logistics where we now have AI powered robots that
work in the assembly lines. They replaced all humans. You have those in retail
and customer service. We now have AI-powered chatbots that can do the job just fine. You have roles in data
entry and administration. Of course, AI can now
handle spreadsheets and, you know, the document
analysis and so on. And then on the
transportation and delivery, even though this hasn't
yet taken full effect, but eventually
we're going to have autonomous trucks and self-driving taxis that can do the job. But it's not just all doom and gloom, and AI is going
created as a result of AI, for example, AI and machine
learning engineers. Obviously, we're going to
be creating new kinds of models or re training and
improving existing AI models. And then data scientists
and analysts don't forget that data is
the lifeblood of AI. So we're going to need
data scientists as well. And of course,
cybersecurity experts who will use AI to
perform their tasks, and then AI trainers and
ethics specialists that will ensure that AI models are
aligned with ethical standards, and of course, the
human AI collaboration specialists managing AI
human workflows in industry. So these are a few
examples of the kinds of careers that will grow
as a result of AI. But we also have
some careers that will be enhanced by AI. For example, under
software development, we're going to have programmers
who will be able to use AI to improve their
levels of programming, AI-powered coding assistants, and so on. And then under healthcare, AI will be able to aid
doctors in diagnostics, surgery, and patient
care as well. And even in the creative fields
in the creative industry, where AI tools can
help designers, writers, musicians
generate new ideas. And there's a few
other professions that AI can enhance, also in cybersecurity; as a cybersecurity specialist myself, I can tell you AI can be used to detect and deter cyber attacks. So ultimately, the
question right now is, will AI completely replace human workers?
What do you think? In my humble opinion, I do believe that AI inevitably will replace many jobs; millions and millions of jobs will be lost as a result of AI. While new roles, new
jobs will be created, new opportunities will be
created as a result of AI. I don't know what the economic
implications are going to be because not everyone whose job has been replaced by AI will be able
to get a new job. So what happens to them, right? I don't know. It's
something to think about, but we'll see. We'll see what will happen.
So just a few key takeaways before we round up the lesson. AI is automating some jobs, but also creating new
opportunities as well. I think the idea here is that
you should just position yourself to take advantage of the introduction
of AI because, like it or not, AI is here. It is the present, and it's also going to be
the future as well. So low skilled
repetitive jobs are at a higher risk of automation. AI enhances rather than replaces roles in many creative
and analytical fields. And, of course, adapting to
AI driven workplaces will require upskilling and
lifelong learning. Like I said earlier,
because of AI, we're all going to be forced
to learn some basics of AI. And, of course, the future is human AI collaboration,
not total automation. Thank you for watching. I will
see you in the next class.
36. AI and Superintelligence – Hype or Reality: Welcome back. So to
round up this module, let's take a look at
our final lesson. And here, we're discussing
AI and super intelligence. Now, to be honest, I wasn't
sure if I should make this a lesson because at
least in my humble opinion, it's very highly unlikely that we're ever going
to achieve this, but nevertheless, it is
a fascinating topic. So let's talk about it. Now, what exactly is
superintelligence, or ASI? Well, this is basically
AI that's going to surpass us humans
in every aspect, including creativity, reasoning, decision
making, and so on. So basically, we're
talking about intelligence that will become our masters. Now, unlike narrow
and general AI, ASI will be self-improving and autonomous, potentially far exceeding human cognitive abilities. So as an example, if we do eventually develop ASI, the superintelligence
itself could design better
versions of itself, rapidly accelerating
its intelligence beyond human control. So that old theory about
AI taking over the world, it's going to become
a real possibility if we do achieve super
intelligence. So, is it actually possible? Can we ever get to this point of superintelligence in AI? We do have some factors that point towards it, and other factors that say, nope, we're not going to get there. So what are the arguments
for ASI becoming a reality? Well, first of all,
computational power growth. Now, obviously, to power
super intelligent models is going to require tremendous
amount of computing power. But given the fact that computing power is
increasing exponentially, there is no doubt
that eventually at some point in the future, we'll have enough
computing power to power such AI models. Now, advancements in
neural networks as well. You have deep learning
models that are becoming more and more
sophisticated by the day. You have self learning AI
where we have AI already capable of self-improvement. As an example, AlphaZero: this is an AI model that learned how to play chess, and it learned by itself, simply by playing games against itself. So, breakthroughs in AGI as well: if AGI is eventually achieved, then the next step after general AI is going to be ASI, superintelligence. So some prominent AI
researchers, like Nick Bostrom, who is also the author of Superintelligence, believe that ASI could become a real possibility sometime in the 21st century. But like I said, we do
arguments against ASI. First of all, the
limits of computation: human intelligence is not just about raw computational power. Consciousness is still an unsolved problem. So this is still a big
challenge regarding developing ASI and then the lack of true general intelligence. Before we can get
super intelligence, we need to develop general AI. We haven't even achieved
general AI yet. Some people are already talking
about super intelligence. So perhaps maybe
we should achieve general AI first before we start talking about
superintelligence. And then human
creativity and emotion, AI will always lack curiosity, emotions, and the ability
to experience the world. This is one of the
biggest arguments against superintelligence there. No matter how intelligent
it's going to be, it is still a machine. It is still not capable of
developing emotions, right? And then ethical and
technical barriers, right? The world may
intentionally prevent ASI from emerging
due to safety risks. So it could be that we've
gotten the technical expertise. We have the computing power. But again, just because we can develop super intelligence, that doesn't necessarily
mean that we should. So maybe that might
be what actually stops us from being
able to achieve ASI. So some skeptics, like Gary Marcus, argue that AI lacks true understanding and is unlikely to reach superintelligence. Let's move on. What are the risks and ethical
concerns of ASI? And as you can see
from this slide, you have a very scary looking
robot smiling at you. That's obviously not
a pleasant smile. It is a very, you know, evil looking kind of smile. That's, of course,
the terminator. If you haven't seen the movie before terminator one and two, I would hello encourage
that you watch it. It's a fun time. Actually Terminator two is my favorite movie of all time.
It is number one for me. So just in case you're
interested in some action, Scify, definitely
check out the movies, but why am I using this particular image
from the terminator? Well, that's because
in the movie, you had a super intelligence
that was developed. The model was called Skynet, and Skynet eventually decided one day that, you know what? I'm going to destroy humanity. I'm going to destroy mankind. And Skynet waged war against the humans. So definitely check it out.
ethical concerns of ASI? First of all, the loss
of human control. It is possible that the artificial intelligence
will become so powerful, so intelligent that
we as humans will not be able to
control it anymore. And that the existential risk if ASI's goals don't align
with human values, it could be dangerous. It actually could even be disastrous. So, economic disruption: with narrow AI, lots of people are
losing their jobs. If general AI is introduced, even more jobs are
going to be lost. But what now happens if the ultimate super intelligence
is now introduced? Millions and millions of
jobs will be rendered obsolete as a result of this
new kind of technology. And then, of course, the
autonomous decision making: could an AI decide that humans are inefficient or unnecessary? That's kind of what happened in the movie Terminator. As an example, I want to show you, well, I'm
not going to show you. You can check it out yourself.
You can go on YouTube. The channel name is
called Isaac Arthur. It discusses a thought experiment called the Paperclip Maximizer. It was actually an
experiment that imagined an AI designed to
create paper clips. But the AI became so advanced and so efficient that it
decided on its own that, Hey, I'm going to convert
all matter in the world, including human beings
into paper clips. It's actually a very, very
fascinating video on YouTube. I think it's about
12 to 15 minutes. You can definitely check it out. Again, YouTube channel
name is Isaac Arthur. Simply search for the Paper
Clip Maximizer video, if you want to check it out. So, safeguarding against
uncontrolled ASI, what can we do to ensure
that if ASI is achieved, that it is under control? Well, first of all, AI
alignment research, ensuring that the AI understands and respects human
values. We hope so. And then regulatory oversight. Honestly, I am someone
who isn't necessarily the biggest fan of government
oversight and regulations. But in certain kinds
of technologies like AI or in this case, ASI, I do strongly agree that some government oversight
will be necessary. And then, of course, kill
switch mechanisms, right? Imagine if ASI superintelligence decides that, you know what? I'm going to take out mankind. I'm going to kill all humans. We should have some kill
switch mechanisms in place to shut down
that AI immediately. And those kill switches had better work, right? And then, of course, the
ethical AI frameworks. AI research must
prioritize safety, transparency, and of
course, accountability. As an example, OpenAI and DeepMind: these are companies that
are actively researching AI safety to prevent
uncontrolled AI growth, and hopefully they will succeed. So what is the current progress towards ASI? Well, no existing AI has
achieved AGI, let alone ASI. So like I said earlier,
it's going to take a very long time if we do
eventually get to ASI. So major AI models like
your GPT, DeepMind's models, and so on, they still do
superintelligence, and then some AI models, they can self improve
in narrow tasks, but not necessarily in a
broader or more general way. And then, ethical AI discussions and regulations
increasing globally. So more and more governments around the world are
beginning to recognize the impact of AI and
are looking for ways to regulate the use
of AI globally. So, a prediction: most experts believe that AGI, human-level AI, could emerge within 50 years. But ASI, which is, of course, the ultimate superintelligent artificial intelligence, is much further away, if possible at all. Key takeaways. Well, ASI refers to AI that will surpass
human intelligence. Some experts, of course,
do believe that ASI is possible while others don't believe we will ever achieve it. The biggest risks of ASI
include the loss of control, existential threats,
and of course, the massive economic disruption. AI safety measures and ethical regulations
are crucial to, of course, prevent
unintended consequences. And currently, ASI, just like AGI, is still theoretical; and AGI hasn't even been achieved yet. So, honestly, I don't think in our lifetime we're ever going to get to the levels of ASI. AGI, I think we'll
eventually get there, possibly in another 25 years or so. But ASI, I don't think in our lifetime we're ever going to get there. So maybe in the year 3000 and something, maybe eventually the humans then might be
able to achieve ASI. But that's it for the lesson. Thank you for watching. I will see you in
the next class.
37. AI Course Conclusion: Well, congratulations.
We've come to the end of this course on
artificial intelligence. And from the bottom of my heart, let me say a big thank you
for finishing this course, and I do sincerely hope that you found the lessons to
be very entertaining, engaging, but most
importantly, informative. And if you feel like you
got your money's worth, if you feel like you liked this course and learned quite a lot, please do consider
leaving a written review. The reviews will help me a lot, and they will really help to
boost this course, as well. So thank you so much
for your support. Now if this is the last time I'll be seeing you in any
one of my courses, let me just say, good luck. I hope that this course will help you in your everyday life. It may also give you that career boost that you've
been looking for. And if I do see you in
another one of my courses, maybe it's a
cybersecurity course or a web development
course or maybe another AI course, I
will see you there. That will be amazing. Nevertheless, thank you
so much, once again, for taking this course,
for finishing the course. All the best, and I'll
see you next time. Cheers.