Transcripts
1. Course Introduction: Hello everyone. Welcome to this course. I am your instructor, and thank you for making the decision to learn about artificial intelligence governance and cybersecurity. Now, AI is one of the most exciting fields around, and the market is filled with opportunities and jobs. You hear about it on the news, on social media, pretty much everywhere. More and more people want to learn about this new technology, and the demand for AI professionals is booming. Now, AI is a massive field and I could spend ages just talking about it. But the focus of this particular course is AI governance and cybersecurity, which is a topic a lot of people are interested in. It brings together two of the hottest fields around today: artificial intelligence and cybersecurity. And I honestly believe there is not enough material out there about this, or if there is, it's scattered all over the place. You will find courses about AI. You will find courses about cybersecurity. But not enough courses teach you how to secure AI systems, and hopefully this one fills that gap. If you have no idea what AI is and how it works, no worries: I will walk you through the basics before we delve into the governance and cybersecurity sections. If you really just want to learn the basics of AI and are not really interested in governance and cybersecurity, I have another course which teaches the basics of AI, covering both the theory and the practice, if you want to look at just that topic.
Who is this course for, guys? I would say pretty much everyone, because I think everybody should know about AI, given how it's impacting the world and the society around us. It's having a massive impact on us, right? On our jobs and on society, so you should definitely know about it, and you would definitely appreciate this course. If you are a risk management or governance specialist, you would definitely appreciate it because you are already in that field, right? You're already assessing and understanding risks. If you're in cybersecurity, you would want to know about AI risks and how to mitigate them. If you're in the AI field itself, like a data scientist or a machine learning engineer, and you want to learn about the potential risks and understand what they are, then this course is definitely for you. Sometimes you get tunnel vision and you don't see the big picture of the technology you're working on. Lastly, like I said, anybody who's interested in AI: there are no prerequisites for this course. You don't need to have a PhD in mathematics or know Python programming or anything like that. No, this course is pretty much for anyone who wants to know about AI and the risks around it.
What are the topics we will be touching on, guys? These are the topics: I will touch briefly on AI and the impact it's having on human beings and society, and why. We will do a quick overview of machine learning. It's important to know because that's where most of the risks are coming from, and it's the most popular sub-field of artificial intelligence. Then we actually get into the meat of the course: we will see why governance and risk management are so important, and how to go about creating an AI governance framework. Then we'll get a bit more technical and start understanding the cyber risks that are unique to AI systems, and how to create a cybersecurity framework that is customized for artificial intelligence.
Now that you know a little bit about the course, here's a bit more about me, so you know who's teaching you. I've been in the InfoSec field for about two decades. I'm currently based in the UK, where I moved recently after spending a decade in the UAE. I'm a published writer, because I've always loved teaching and creating awareness about new and exciting technologies. I have a YouTube channel called The Cloud Security Guy, which is focused specifically on cloud security, AI risks, and general career advice. So please do visit and subscribe there if you're interested. So yeah, that's pretty much about me, guys.
One more thing for this course: the project. I want you to apply the knowledge you get in this course, and I want you to create a threat model for an AI-based application, because I firmly believe that knowledge not applied is forgotten. You will learn how to create a threat model in the coming sections, and I want you to use that knowledge and create a threat model of an AI system. It can be any AI-enabled system in common use; I would recommend self-driving vehicles, since we hear a lot about those. Take the principles you learn in this course, apply them, and create a threat and risk assessment of self-driving vehicles. Share it with me so I can give you my feedback. That pretty much wraps it up, guys. I hope I gave you a good introduction to this course. Let's get started learning about AI and how to govern and secure it, and I will see you in the next section.
2. AI Overview: Hi guys, welcome to this section. Before we jump into governance and risk management, I wanted to have a quick refresher, because it's helpful to know what you are securing and what you are governing before you actually start. If you're already an AI expert and you already know what AI and machine learning are, you don't need a refresher, so by all means skip this section. But I would always recommend refreshing the basic concepts, because sometimes you can get a bit tied up in the details and forget the big picture. What is artificial intelligence? John McCarthy, the person who is called the father of AI, organized a very famous conference in 1956 called the Dartmouth Conference. There he coined the term artificial intelligence, and he defined it as the science and engineering of making intelligent machines. Basically, what does that mean? It means the science of computer systems being able to perform tasks that humans do, like speech recognition, vision recognition, intelligent decisions, and language processing.
Why do we need to do that, you might ask? Okay, well, why not leave it to human beings? Why would I want a machine to start doing this stuff? It sounds a bit scary. But honestly, guys, the amount of data generated today by both humans and machines far outpaces human beings' ability to absorb it and make decisions based on it. Artificial intelligence is pretty much forming the basis for all future computer learning; it is the future of all complex decision-making. And we'll see why it's really not feasible to have human beings keep on doing this. You have already interacted with AI. If you've been to a website and you've seen a chatbot pop up and start talking to you, that's a very basic form of AI: you're communicating with a specialized form of AI, and if it is not able to answer your question, then you get handed over to an actual human being. Just to give you another idea: if you're using Netflix and you like how Netflix customizes the movies it recommends to you, that's machine learning. Or take a social media platform with billions of users at any given time. Like I said, it's not feasible for human beings to do all this work. How do you think such a platform regulates hate speech or inappropriate material? Can you imagine the cost of paying human beings to monitor all of this? That's why they rely on AI to detect and remove hate speech from their platform.
Every time you use Alexa or Siri, those voice-based assistants, and Alexa or Siri is not able to understand what you're saying, it uses that data and starts learning based on it. So this is AI in action. AI and machine learning are pretty much responsible for the explosive growth of voice-based digital assistants, because they keep on learning from what they hear, and based on that they improve more and more. So why has AI suddenly become such a big deal? You hear it from everybody: AI is coming, it's changing everything, all that.
Why is AI really becoming such a big deal? Well, just to put it into context, AI is being called the Fourth Industrial Revolution. What does that mean? Well, we had previous industrial revolutions that changed how human beings were living, and they had massive impact. We had steam, science, and digital technologies: the first three industrial revolutions, which gave us modern society. If you want to really go back: people found that when you heat things up, you get energy and you get steam, and that led to the advent of the steam engine. Steam was powering everything, from agricultural production to manufacturing. People used to live on farms, and when steam-based manufacturing happened, people started to move from the farms to the cities, and more specifically to the factories. But factory life was very difficult, right? Factory laborers were cheap and plentiful, working long, long hours in very unsafe conditions. Then what happened? Automation and mass production came, most notably the assembly line: people sitting there putting parts together, as you would have seen. Mass production started happening, automation, and that was the Second Industrial Revolution. After that, the fact that you're sitting here and talking to me on a computer over the internet, that's the digital revolution, right? You're enjoying the cloud, the internet, and handheld digital devices. All of those are basically the Third Industrial Revolution, because that's when we moved from analog to digital. So that gives you context; now you understand why AI is being called the Fourth Industrial Revolution: because of AI, more and more work is being offloaded onto computers, and they are making decisions and being given more and more important things to do. All of these industrial revolutions represented very big changes, really at a society level. Life went from the farm to the factory, and people started automating more and more. That's how important things like electricity and mass production were, and that's how big AI is.
And why has it all happened now, you might ask? Why only in the last couple of years? For two very simple reasons: increasing computing power and increasing data capacity. Now, AI needs a lot of computing power; it simply was not feasible earlier. Like I said, John McCarthy talked about it in 1956, but back then we did not have that computing power. Now, with things like cloud computing, computers have become so powerful that they are able to process all that data. The second thing is data. We simply did not have that much data before, and machine learning requires a huge amount of data. That's why these two things were needed. Now we have zettabytes of data available, plus the cost of storage has dropped. These are the two things which have really powered AI and led to the Fourth Industrial Revolution. I hope you now understand why AI is having such a big impact, and what AI is. Now let us go to the main thing, which is machine learning, which I'll talk about in the next section.
3. Machine Learning Overview: Hi everyone. Welcome to this section, which is about machine learning, guys. Machine learning is pretty much the engine that drives AI. It's defined as the ability of machines to learn from data: you basically teach a computer to do something without programming it to do so. It's currently the most developed and most promising sub-field of AI for industries and governments, and it's the most commonly used sub-field of AI in our daily lives. What is machine learning, guys? Like I said, simply put: computer programs are not smart in the actual sense of the word. They have a set of hard-coded instructions; they take data and produce output, and they cannot go outside of that. Take a calculator, for example. When calculators first came, they seemed like magic, right? You put numbers in and it gives you answers. But at the end of the day, it is just taking input, processing it, and giving you output, nothing more. In machine learning, what happens is the machine takes data and learns from it. The data is fed into an algorithm to create a program, and it basically learns based on that data.
So, just to give you an example, this is how machine learning works. Basically, you give a computer lots and lots of data, called training data. Then you give it an algorithm to understand that data. What does the machine do? It takes the data and the algorithm, and it builds a model, which it will use to predict something that hasn't happened yet. Now what do you do? You feed it actual data and you see what it does: does it make a correct prediction or a wrong prediction? If it's correct, wonderful: you feed it more data so the accuracy goes up. If it's wrong, then you go back and retrain it: more training data, more training runs. You basically retrain the algorithm multiple times until the desired output is reached. What's happening? The machine is basically learning on its own, and the results become more and more accurate over time. This is how different machine learning is from a normal computer system.
Just to give you a clearer picture: this is how traditional computer programs used to work, correct? You have input, you put it into an algorithm, and an output comes out. That is how computer programs used to work. Machine learning is like this: you already have the input and the output, and you give them to the machine learning algorithm. It takes the input and looks at the output, and based on that, it creates a model, which it will then use to make future decisions about new input coming in. It learns by itself. This is how different it is from the normal programming which used to happen.
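If you want to see this loop in actual code, here is a minimal sketch. The course doesn't prescribe any tooling, so the scikit-learn library and the synthetic dataset below are my own assumptions, purely to illustrate the train, predict, retrain idea:

```python
# A minimal sketch of the train -> predict -> retrain loop described above.
# scikit-learn and the synthetic data are assumptions; the course does not
# prescribe any particular library.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Training data: examples (X) with known answers (y)
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Feed the training data into an algorithm to build a model
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

# Show the model data it has never seen and check its predictions
predictions = model.predict(X_test)
print(f"Accuracy: {accuracy_score(y_test, predictions):.0%}")
# If the accuracy is too low, you go back: more data, more training,
# or a different algorithm, until the desired output is reached.
```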
Okay guys, now we've covered this topic, and I hope this was a good refresher for you. We had an overview of AI and why it's become so prominent nowadays, and we had an overview of machine learning also: we learned how machine learning works and how it makes its decisions. Believe it or not, you now have the foundational knowledge you need to understand AI governance and risk management. I hope that was useful, guys, and I will see you in the next section.
4. Need for AI Governance: Hi guys, welcome to this section. Here we start getting into the real meat of the course, which is the governance and risk management aspects of AI. The first question is: why, guys? Why do you think AI needs to be governed and risk assessed, right? If you work in a company like I have, you would know that most companies already have risk management departments and governance frameworks in place. Why do we need something different for AI? Well, the simple answer is that AI introduces certain new types of risks which were not present before. AI is a disruptor, a disruptive technology, and unlike most disruptors, it has to be approached, and its risks mitigated, in a slightly different way.
Let's take a look. When you ask people what the key risks in AI systems are, a lot of people talk about AI, like Elon Musk, Bill Gates and that lot, and how it can impact us negatively, right? If we talk about the risks, I've mentioned a few of them here. We have biases in artificial intelligence models, and security compromises. I'm going to be talking about those two in full sections of their own, so I'm not going to spend too much time on them here. But if you're talking about privacy, you have these facial recognition technologies. Many countries are implementing facial recognition based on AI, and they can store that data, so a lot of privacy risks come in. You have things like deepfakes; you would have seen them: go to YouTube and search for deepfakes and you will see shockingly accurate videos of people like Tom Cruise or Morgan Freeman. You will not believe how accurate they are, and that really scares people: how will we know what is real and what is not, right? Then there are autonomous machines, and I'll talk more about those: basically, things which work completely without any human intervention whatsoever, and people get scared that one of these machines will start taking over the world or something like that. And a much more practical one: job disruption. Like I said, AI is going to take over a lot of the work which humans used to do. A lot of things are going to get automated: mundane tasks which a human really does not need to do are going to get outsourced to AI, and a lot of jobs will go away, 100%, without any doubt. But a lot of new jobs will get created also. That's why it's so important to invest in your future and to invest in AI. The last risk might be a little bit of a weird one, but yes: the end of the world. People who see movies like The Terminator or The Matrix think that machines are going to take over the world. Thankfully we haven't reached that point yet, but it is still something a lot of people do talk about: that machines are going to become sentient, right, and start taking over the world and everything. But honestly, the real problems are so much more practical. Let us take a few examples. When we talk about the risks, here's something that happened a few years back, in 2016.
Tay was a bot which Microsoft described as an experiment in conversational understanding. The more you chatted with this bot (it was called Tay), the smarter it would get, and it would engage people in casual and playful conversations. It was designed to learn from interactions with real people on Twitter. Unfortunately, some people decided to feed this system offensive information, and Microsoft actually had to apologize for it. This chatbot was aimed at 18-to-24-year-olds on social media, and it was targeted by a coordinated attack by a subset of people who started feeding it really offensive content. Within 24 hours, it had to be deactivated, and Microsoft had to issue an apology for the hurtful tweets it had put out. I gave you this example so you really see how AI can go completely out of control. These people exploited a vulnerability that was there: Microsoft said they were simply not prepared for this type of thing, where you could feed the bot inappropriate content and it would start repeating it. They said they would keep on refining it. But this is just a simple example of what happens when you do not take into account what could possibly go wrong.
Okay, that was a slightly harmless example. Let's look at something which is way more scary, guys, which is autonomous weapons. What are autonomous weapons? Basically, weapons that select and engage targets without human intervention. Like armed helicopters that can search for and eliminate targets meeting certain criteria, with nobody pulling the trigger. Unfortunately, AI has reached a point where the deployment of such systems is practical within a few years, not decades. These things have been described as the next revolution in warfare, a very scary thing, and many arguments have been made. People have said: okay, these things can go out and no human casualties will happen on our side, right? But what if somebody is able to manipulate or disrupt one of them, take it over, and start harming people? That's why it's so dangerous, and that's why over 30,000 AI and robotics researchers and other people signed an open letter on the subject in 2015, saying: we do not want this to happen, please do not invest in this research. This shows you some of the scariest aspects of what can happen, and how it could become an AI arms race. That's why it's so important to have regulations, and so important to have governance of AI in place. I just wanted to show you these: a slightly humorous example and a more scary example, two opposite extremes. Now let's take a look at how AI can actually negatively impact people today: the cases of AI biases and prejudices, which I'll go into in the next section.
5. Bias in AI models: Hi guys. In this section, we're going to talk about AI prejudices and biases. In the previous section, we saw a funny example and a worst-case scenario: the funny one was the Microsoft bot, and autonomous weapons were the worst-case scenario. Now let's take a look at real-life examples of AI prejudices and biases. Believe it or not, models can be biased against a particular gender, race, or age if the data is not properly vetted before being put into the model. Because humans are biased, right? Humans, consciously or unconsciously, might be biased towards a particular race or color or something like that, and that can feed into the data being used to train machine learning models. It can actually lead to wrong decisions being made, which can impact your health, your employment, everything. As organizations are increasingly replacing human decision-making with algorithms, they may assume that these algorithms are not biased. But like I said, these algorithms reflect the real world, which means they can unintentionally carry forward these biases, and incorrect results can actually ruin somebody's life.
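Before the real-world cases, here is a tiny sketch of the mechanism in code: when one group is nearly absent from the training data, the model quietly serves that group worse. The two "groups", the synthetic data, and the scikit-learn library are all my assumptions, just to show the effect:

```python
# A tiny illustration of how skewed training data produces a biased model.
# The groups and data are synthetic; scikit-learn is an assumed choice.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Two groups whose data follows slightly different patterns
Xa, ya = make_classification(n_samples=2000, n_features=8, random_state=0)
Xb, yb = make_classification(n_samples=2000, n_features=8, random_state=1)

# Training set: 95% group A, only 5% group B -- the skew is the problem
X_train = np.vstack([Xa[:1900], Xb[:100]])
y_train = np.concatenate([ya[:1900], yb[:100]])
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model serves the under-represented group noticeably worse
print("accuracy for group A:", model.score(Xa[1900:], ya[1900:]))
print("accuracy for group B:", model.score(Xb[1900:], yb[1900:]))
```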
So let's take a look at this news article, about a study first published in Science magazine. It found that a healthcare algorithm, which was used on around 200 million people in the US, was biased against a particular race when determining who needed more medical care. Unfortunately, it was prioritizing white people above other people, and because of this singling out, it was actually denying care to people who might actually have needed medical attention. Because of the data that was fed into it, millions of black people were affected by the issue in this particular healthcare algorithm. That's why it's so important to make sure this does not happen: it can have actual, real-life effects on people.
Now let's look at an example in detail: COMPAS. I don't know if you're familiar with this, because it was in the news for quite a while. It's called the Correctional Offender Management Profiling for Alternative Sanctions, and it was a machine learning system used in courts in the United States. What it would do is predict whether somebody would commit a crime again or not. When people were given jail sentences, it would give them a rating based on how much of a chance there was that this person would re-offend, and the judge would actually use this rating to assign jail time, fines, and so on. What was happening is that people of a particular race were seen as almost twice as likely as white people to be labeled high-risk, despite the fact that they did not re-offend and had committed very small, comparatively harmless crimes. And the opposite result came out for white people: they were rated low-risk despite the fact that they had criminal histories and a high probability of re-offending. That's why it was so dangerous in practice. It was taking into account many things in the data, like age and employment, and based on that it was assigning this score. Unfortunately, people of a particular race were incorrectly labeled as likely to commit a future crime at twice the rate of white people. The company denied it, but the results speak for themselves, as you can see. Let's take a look at one pair of cases, because the contrast is striking. Brisha Borden had committed petty theft, plus some minor misdemeanors when she was a juvenile. The other person, Vernon Prater, was a much more seasoned criminal: he had served jail time for armed robbery and other charges. But according to COMPAS, his score was low-risk and Brisha Borden's was high-risk. Two years later, COMPAS was proven to have made the wrong prediction, because Borden did not commit any new crimes, while Prater, on the other hand, was serving an eight-year sentence for grand theft.
Now I hope you understand (and you can take a look at other examples also) just how AI can actually perpetuate existing unfairness and biases, even unintentionally, due to the data that is fed into it. In the next section, we're going to take a look at what principles we can put in place to stop this from happening. This brings us to the end of the whole risks section. We looked at the risks which are present in AI, some examples of AI going wrong, the dangers and the worst-case scenarios, and the case study of bias. So you can see there are all types of risks in AI, and they are completely different from the risks which we are normally used to. Now let's take a look at what measures and controls we can put in place to make sure that these AI systems do not carry these risks, and how to mitigate them, in the next section. Thank you.
6. AI regulations: Hi guys. Welcome to this section. Now that we have a good foundational understanding of AI and the risks and problems it can cause, let's take a high-level look at how to create a governance framework for AI. Basically, we want a control framework, a risk management framework for AI, to be put in place. How do we go about that? This whole section is going to focus on that. The first step, guys, is regulations and standards. Now, nobody likes regulations, because regulations look like red tape: people have to fill out forms and comply with hundreds of requirements, and nobody likes that. But actually, AI must be regulated, to protect ourselves and to let us use the technology without manipulation and bias. We talked about AI being biased in the last section, and the best way to make sure that it is not biased is to make sure rules and regulations are there. The sad fact is that companies usually focus more on profits, and these things will not be given appropriate priority otherwise. We're going to take a look at why regulations matter first of all, because that is central to everything, and then we're going to take a look at the regulatory landscape for AI, including the most important AI regulation currently on the way.
Like I said, the need for regulations: AI needs regulations to protect itself and its users from internal and external misuse. Governments and companies are using AI to make quick decisions that can have a huge impact on your health and your life; it can make a huge amount of difference. And you saw how wrong, unfair decisions can happen: we saw people being deprived of medical care, people being given extra jail time, all of these. If you have regulations, you have accountability and human rights. A regulation sets out minimum standards of treatment that everybody can expect, and it says that everybody has the right to a remedy if those standards are not met. Then you can actually have regulators who make sure the standards are present, and anybody that breaks those standards is held accountable. That's why it's so important.
The funny thing is, there is currently no legislation specifically designed to regulate AI; AI is being regulated by existing regulations like data protection and consumer protection laws, and those were not passed to regulate AI. Governments are working hard and fast on it, but no AI-specific legislation has been properly passed yet. China has put strategies in place, and in the US the White House has issued principles for the regulation of AI; most countries are focusing on this. I want to focus on the regulation that is expected to have the most impact around the world on this particular technology. Globally, like I said, a huge amount of work is being done, but the most ambitious proposal so far is from the European Union, guys: the draft EU AI Act, released last year in April 2021. It is the world's first comprehensive proposal for regulating AI, and it's going to have a huge impact, believe me, on the development of AI and on how companies, from small-scale start-ups to large tech giants, can use AI. It is very interesting: it takes a risk-based approach. It doesn't ban AI outright, and it doesn't say all AI is good. It takes a risk-based approach, and it makes it illegal to use AI for certain unacceptable purposes, like certain uses of facial recognition, and for what is called social scoring, where you rank people based on their trustworthiness, in a system that can exploit people. That's one reason this regulation is so important. But why do you think I'm focusing on this more than all the other regulations which are there, and what makes this one special? Well, simply put, guys, EU regulations usually end up setting the standard for the rest of the world. It's nothing official, but usually that's what happens. Anybody who has worked in data privacy knows that when the EU released the GDPR, almost all the other regulations in the world followed: the other governments pretty much just tailored the GDPR to their particular environment. That's why it's so important: the EU AI regulation is going to set the tone for the rest of the world. Any company that works in the EU, and even many outside it, as we'll see, will be affected. That's why it's so important to really know about this.
Another interesting part of it is the scope. It has an extraterritorial scope, like the GDPR: it extends outside the borders of the EU. Any provider who puts an AI system on the EU market is, of course, definitely in scope. But if you are a provider or user located outside the EU, and your system's outputs are being used in the EU, then again you'll be in scope. A very broad scope, so yes, your systems can very much potentially get pulled into it.
So it's in the pipeline, and the most important thing we want to look at is this one: how does it categorize risks? Instead of opting for a blanket, complete ban, or for completely allowing everything, it uses a risk-based approach built on a few tiers, like unacceptable risk, high risk, and low risk. The bigger the risk, the more restrictions and controls are put on top of it, and the more obligations on the company: making sure how transparent the algorithm is, and reporting what datasets are being used. Unacceptable-risk systems are simply banned, so we don't even have to think about those. The main focus of this regulation is on high-risk AI systems, and these will be subject to significant technical, monitoring, and compliance obligations. If you're in the low-risk tier, you just have to be transparent about it: you just have to inform users.
What are the high-risk systems we're talking about? These can be transport systems, which can put people's health and life at risk; educational systems that may determine who has access to education, like exam scoring; robotic surgery; employment, like screening applicants for work, which can have an impact on who is hired or not; credit scoring; law enforcement; migration: all these things. This is where the high-risk category falls, and where the conformity assessment comes in, which is what I'm talking about next.
What is the conformity assessment? Just to understand this: a conformity assessment is basically, you could say, a kind of audit. High-risk systems will need to undergo a conformity assessment. Basically, what happens is the system goes through significant assessments: its precision, its technical documentation, and the quality of the system are evaluated to confirm it is complying with the regulation. What happens if it passes? Then you get a certification from the EU. It's similar to medical device registration, which already exists in the EU. It can be self-done, a self-assessment; but for some systems which are more sensitive, you need an expert third party, a completely independent regulator, to come in. Let's take the example of biometrics: if you have an AI system used for biometric identification, a third party will have to come in. The regulation goes into more detail here, but just so you understand: even after you pass the conformity assessment, if changes happen to the system, the assessment will have to happen again. It's very powerful; you can see it's like an audit of the entire ecosystem: how it's working, what the rules are, and everything you would have to put into place. So I hope that makes you understand what sort of regulatory framework is being planned for AI systems.
Now that you understand the regulations which are in place and coming in, let us look at the governance framework for AI in more detail. I'll see you in the next section, guys.
7. AI Governance Framework: Hi guys. In this class, I'm going to take a look at the AI governance framework. Now, we talked about AI regulations in the previous class: we talked about how these laws are coming, which will mandate controls to be put on AI. The thing is, comprehensive and enforceable regulation is going to emerge, but it's going to take some time, and in the meantime, companies can't just sit and wait for these things to come. So companies are bound to put governance frameworks in place themselves, and a lot of companies are already working on that, especially in countries where a lot of work has been happening on AI. Companies should be proactive, and they need to have a framework in place to mitigate the unique risks which artificial intelligence introduces. Before you start on the AI journey, make sure you have these things in place.
What are we talking about? If you look at it from a very high level, the AI governance framework, regardless of what sector you're in and regardless of what technology you're using (this is technology agnostic, algorithm agnostic), has four general parts. One is the policy. This sets down the tone for how AI will be controlled in your organization: what are the general principles you have, how it will be controlled, what things are in place. Next, you need to form a committee. This will be people from the data teams, from the technology teams, from your security teams, from your risk management teams, so that the framework is properly controlling AI. And obviously they make decisions: there are go/no-go decisions being made on AI initiatives. Moving a little bit below that, you have an AI risk management framework. This will identify what the critical risks are, which ones are the AI-unique ones, and how we treat all those risks, be it cybersecurity, be it integrity, be it bias; all these things will be covered in the AI risk management framework. And lastly, principles: these will apply across the company, basically to make sure that AI is working properly. There are four trust principles: integrity, explainability, fairness, and resilience, and we'll talk more about these. They basically help you make sure that you are properly governing AI across the organization.
I'll go into more detail on this, but this is basically a high-level benchmark, a skeletal framework, for how to implement governance. If you feel this is too high level and you think, okay, I need more details: how do I really put governance into my organization? The good news is you don't need to build things from scratch. In 2019, Singapore released the first edition of the Model AI Governance Framework, basically putting it out there for adoption and feedback. It provides readily implementable guidance on how to implement AI governance; it's like an excellent template. It goes into very good detail: you can literally take the principles which are there and put them into your organization to create an AI governance framework. It's a very good template. It focuses on two guiding principles: AI should be explainable, transparent, and fair (the same principles we talked about earlier), and human-centric: it should put human interests first, before profit and everything else. That is where the focus should be. I would definitely recommend you look this up on Google; you will find it if you're serious about implementing AI governance within your organization.
don't feel your system is judging them properly
or these biases, this can have a huge issue for your customers replication if I accompanies repetition
and the market rate, companies are simply, it could be subject to major findings. You could be subject
to your application being damaged and industry. All of these things
will come into place. So trust is imperative
for how do we create cross wealthy
therefore, principles. Put the experts,
integrity, explainability, fairness, resilience,
what is integrity? We're talking about
algorithm integrity. Making sure that nobody tampers with algorithm
or the data. How that can happen.
We'll look in the future. We're looking at
the future class. Explainability. Do you know how the AI
is making its decision? Is it like a black box? Nobody has any idea how
the ear is working, how, what's the
logic is being used? Not it needs to be completely
transferred in here. Fairness. We talked about fairness
already, right? Like they should not be biased. If you're making decisions
about a particular society, it should reflect all the races. Ethnicities indices ID that training data should not
have just like 90% when ethnic group and all
the other groups are excluded because that would
be completely not acceptable. And the last is resilience
attribute technically robust. You need to have
controls in place. The ear should be able
to deflect attacks, it should be able to recover. And we'll look into more
detail of these things. So these are the four
basic principles guys, you need to have in place. That covers the
That covers the governance framework. I hope that was useful, and I hope you now have a good idea of how to create an AI governance framework and how to practically go about it. What did we learn here? We learned about AI regulations and standards and how governments are rising to the challenge, the trust principles and how to embed trust in your AI applications, and how to create an overarching governance framework to make sure that the AI applications you have are safe and trustworthy. That pretty much concludes the governance part of our course. I hope you now understand, at a high level, what the risks of AI are and how to create a governance framework regardless of whatever sector you're in. Now we're going to go into the next section, where we go into more detail about technical security. We've talked at a high level; now we'll look into what sort of security risks are present within AI applications. And I will see you in the next class, guys. Thank you.
8. Cyber security risks in AI: Hi guys, and welcome. This is quite possibly the most important section of the course, which is cybersecurity risks in AI systems. Now we've got the foundation about governance and risk management and what we have to do; now let's really take a look at cybersecurity and the risks to AI systems. If you really look at it, I usually say there are three ways AI and security risk can intersect: AI can cause a risk unintentionally; it can be maliciously used, acting as an enabler for cybercriminals; or the AI system itself can get compromised. This is a very, very new area, and unfortunately not a lot of people are doing work on this from the cybersecurity professionals' perspective. If you ask a normal cybersecurity guy right now, in 2022, how to secure an AI system, they will approach it the traditional way they approach securing any software or hardware system: how to configure and harden the system, do penetration testing, check how the system is configured and who has access, and all that. But what they don't realize is that there are certain risks which are very unique to AI systems. And that's the whole purpose of this particular section: to raise awareness about the unique security risks that exist in machine learning. By its very nature, AI does not obey the same rules as traditional software. AI systems and machine learning algorithms rely on rules which are grounded in the analysis of data, or large collections of data, and if you mess with this data, you can actually change the behavior of the system. What is happening is that, as AI is used more and more to automate decision-making across sectors, we end up exposing these systems to cyber attacks which can take advantage of the flaws and vulnerabilities of AI. And you really need to know this to properly mitigate these attacks.
If you want to read more about the security risks of AI and how attacks can happen, there is a very excellent paper I would recommend anybody to go and read: the Malicious AI Report (maliciousaireport.com). What did they say? This report was written by 26 authors from 14 institutions, spanning academia, civil society, and industry, following a two-day workshop, I think in February 2017. You can go over this report in your own time; it's an excellent report. But they said certain things which I found very interesting. They said that as AI capabilities become more and more powerful and widespread, the threat landscape is going to change. Existing threats are going to expand, because the cost of attacks will go down with the use of AI: ordinarily you would be paying people to do those things, and now you can offload them to AI. New threats will come up which we had no idea about and would otherwise not expect. And the character of existing threats will change: something that was happening in a particular way will completely change; malware and DDoS attacks will change to accommodate AI. That's why, guys, understanding this is so important; this is what this paper is saying. Now, when we talk about the security risks which apply to AI, there are two categories: one is the risks which are not unique to AI, and the other is the risks which are unique to AI. In the first category, the AI is technically just another system being attacked; in the second, the AI is being manipulated, or it is being used to attack something else. If we talk about the risks that are not unique to AI, we are talking about the security of the underlying infrastructure, right? How the data is being secured, how the data is being stored, whether systems are configured properly, whether internet access is properly configured: the standard things cybersecurity professionals already know. And the other part is data security: how is the data being transported, and are the datasets secured? As somebody who's been working in cloud security for the past couple of years, this is another area where I feel knowledge is lacking very much. This is why I've made this course, guys: to empower people to know about these things. The lack of knowledge about AI security is very severe: you have AI professionals, you have security professionals, but you do not have enough people who bridge the two.
you need to AI where we can talk about
poisoning attacks and what does data poisoning
will see into more detail. But basically
remember what I said. The machine-learning
algorithms uses data to which decisions? What if I could mess
with this data? What if I could really
change the data? Certainly, it'll actually impact the decisions which the machine learning
algorithm is making. Speaking about the
machine-learning models, what if I contaminated? What if that model is like a commercial model
is being pulled from a repository somewhere, I can go and put it
back to the right. Or I can maybe put a new
machine learning model, which is very good, but it has a backdoor inside.
It's like a Trojan. New vulnerabilities
are coming in because companies
want to use a fast. They don't usually build the
models from scratch, right? They actually buy it
commercially off from some open-source mighty
available network there. These are the new
types of physics that you will see coming in
because of the way. Let's take a look at,
remember, we did a while back, the machine learning algorithm. Now, let's take a
look at it from the security perspective and
you see some AI specific. Now, when a machine
learning model has been trained on data, this data can be
actually poisonous, polluted by an attacker. The training and
surface only done. You would think that
how could this happen? Well, a lot of time
this beta is does training data is not something which our company
business from scratch, but it's actually available
open on open source, like it's completely available. Or they bite commercially
because they don't have the time and energy
to do it themselves. But because many people,
they outsource it. And then the guide. So did you get these pre-trained data model is already there? What if I go there and
I pollute the data? What if I had
changed the labels? And you understand
the decision instead, the basic training
itself could be wrong. Okay, So we move on
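Here is a minimal sketch of what that label-flipping attack does to a model's accuracy. The scikit-learn library, the synthetic data, and the 30% flip rate are my assumptions for illustration:

```python
# A minimal sketch of training-data poisoning via label flipping.
# Library, data, and flip rate are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 30% of the training data
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```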
Okay, so we move on to the next phase, which is the training of the model: you're training a machine learning algorithm, possibly on the wrong data. So what happens? Like I showed you, these models are usually computationally very intensive: they require huge amounts of data and GPU time for training. As a result, many companies outsource the training to the cloud, and they rely on pre-trained models, models which are already trained and just downloaded from the internet. What can I do as an attacker? I can simply go and inject a malicious backdoor within the model. When you download this model, you'll have a backdoor in it, and you won't know about it, right? Maybe it's a facial recognition model, and I put a backdoor there so that my face will not be recognized. Or maybe it's a self-driving car, right? You have those self-driving cars, and instead of stopping at a stop sign, I change the model to ignore stop signs. What's going to happen? You can imagine the impact that will have. So you can have incorrect training data and incorrect models right from the start. That's why it's so critical: if threat actors access the training data or the model, they can actually manipulate that information. And what happens next? The production data. You are running the model on more and more data, right? A lot of the time this data is being handled by data scientists, and they're not trained in security. This part is not unique to AI, but this production data can be breached. I hope you understand now: when we're talking about machine learning and the decisions it is making, you can have things like data poisoning; you can have things like model poisoning, where there is a backdoor which only the attacker is aware of; and you can have data breaches happening, because of the immense amount of data which has been pumped into it.
So that was more from the training perspective; but what happens across the whole lifecycle of a model? This is your traditional machine learning model lifecycle, a simplified view, but let's look at it in the context of the whole life cycle of a model. Like I said earlier, because AI needs so much data and so much computational power to train algorithms, what most companies currently do is use models that were pre-trained by large corporations and modify them slightly. For example, there are popular image recognition models, like those from Microsoft, and these models are put in a model zoo: it's like a repository. What can happen? An attacker can simply go and modify the models in the repository, and that will poison the well for anybody else who downloads them, right? The next step is data poisoning, which I already discussed with you: somebody can go and poison the data which is being used to train the model, so that it makes incorrect decisions. Next is model testing: you're testing the model, you'll have a database, and you can have a data breach here. Next step: optimizing or fine-tuning the model, making sure it's making the right decisions. You can have a data breach here also. The second big one is model compromise. What happens here, in the model compromise, is that the attacker is not manipulating the algorithm or anything; he's exploiting software vulnerabilities. If you've worked on applications, these are like traditional application vulnerabilities: they can be used to manipulate the software that is there and access the internal workings of the machine learning model. These are traditional weaknesses, so you need to make sure your traditional security configurations are there for your components and everything. Okay, so now the model goes live. You can have things like model evasion. What is model evasion? Let's take the example of an image recognition model. What I can do is show that image recognition model a picture of a cat, and by just changing a little bit, a few pixels, the model will no longer be able to classify it as a cat. Changes which are indistinguishable to a human being completely change the working of the model. So what attackers do is keep testing it and testing it: they want to see how to evade that model, and what they need to do to make sure it's not working properly.
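To make the evasion idea concrete, here is a tiny sketch against a linear classifier: the attacker computes a small nudge, spread across the features, that is just enough to push the decision score over the boundary, the same idea as changing a few pixels in an image. The library and numbers are my assumptions, not from the course:

```python
# A minimal sketch of model evasion (an adversarial example) on a linear
# classifier. scikit-learn and the setup are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=30, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()
score = model.decision_function([x])[0]
print("original prediction:", model.predict([x])[0])

# Nudge every feature slightly against the model's weights -- just enough
# to push the decision score past zero (the boundary).
w = model.coef_[0]
eps = 1.1 * abs(score) / np.sum(np.abs(w))   # tiny per-feature change
x_adv = x - np.sign(score) * eps * np.sign(w)

print("perturbed prediction:", model.predict([x_adv])[0])
print("largest change to any feature:", eps)
```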
After that, what happens? Model extraction. So what is model and data extraction? Attackers can keep querying the model and look at what comes back, the responses which the model is sending, and they can actually use that to recreate the model. So you can have your intellectual property getting stolen. The attacker sends frequent queries, keeps probing that model, trying to understand how the model is working and what results are coming out, and gradually he builds a picture of that model. Why does this happen? Because the model is giving out too much data, and based on that, he is able to extract the data and the model logic.
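Here is a minimal sketch of that extraction loop: the attacker sees only the deployed model's answers, yet trains a local surrogate that largely agrees with it. The models and query counts below are illustrative assumptions:

```python
# A minimal sketch of model extraction: an attacker who can only query the
# victim model's predictions trains a local copy that mimics it.
# scikit-learn and the numbers are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=15, random_state=2)
victim = RandomForestClassifier(random_state=2).fit(X, y)  # the deployed model

# Attacker sends many queries and records only the returned labels
rng = np.random.default_rng(2)
queries = rng.normal(size=(5000, 15))
stolen_labels = victim.predict(queries)

# Surrogate trained purely on query/response pairs
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# How often does the copy agree with the victim on fresh inputs?
test = rng.normal(size=(1000, 15))
agreement = np.mean(surrogate.predict(test) == victim.predict(test))
print(f"surrogate agrees with victim on {agreement:.0%} of inputs")
```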
this model is built, the software liabilities,
they can be compromised, leading to an compromise of
the internal model also, I hope you understood guys. I hope this was good. I was able to explain to you within the life
cycle of a model, what are the types of
threats that can happen? And you can see a lot
of these things are completely ignored by cybersecurity
professionals Nowadays, they don't realize these
things can happen. So that's why it's so important
for you to understand. Now that you've understood it. In the next section,
we're gonna talk about like creating a
cybersecurity framework. What are the things
you need to do to make sure that your AI is
secured properly.
9. Cyber security framework: Hi guys. Okay, so now we've almost reached our last full class, which is creating a cybersecurity framework for AI systems. Now that we understand the risks which are there, right, how do we secure against them? I hate to tell you this, but there is no unique strategy for applying security controls to protect AI and machine learning algorithms. You take what you're doing right now, tailor it a little bit, and carefully choose controls specifically for AI. The first step is pretty simple: regularly assess the regulations and laws that the AI application has to comply with. This goes back to the regulations we were talking about: the GDPR, the draft EU AI regulation, and all that, right? Because those set the benchmark, and they set the tone for all the other things which are going to happen. Next, you need to maintain an inventory of AI systems: if you don't even know what systems are being used in your organization, you will not be able to secure them, right? It's such a basic step that you won't believe how easily it gets missed out. Then you'll create an AI and machine learning security baseline, and we'll see in the coming section how to do that. This is based on the risks: you'll need to make sure those controls are there. And you need to update your existing security processes to incorporate AI and machine learning technicalities. You need to make sure that if you have security testing happening, it is covering AI and machine learning. If you have, say, penetration testing happening, is it covering AI? Is it testing whether the data can be contaminated or not, and whether supply chain attacks can happen? Lastly, and of course this is the thing I really, really want to focus on: awareness about AI risks. It is so important to educate your cybersecurity professionals and your data scientists about the types of attacks targeting machine learning algorithms, because once you educate them, awareness slowly gets created, and then you will be able to mitigate these risks properly. But currently there's a huge gap in the market, unfortunately. So just to recap: look at the laws, maintain an inventory, create a baseline (I'll show you how), update your existing security processes so your security reviews cover your AI and machine learning systems too, and of course, create awareness. Now let us look at the security controls that should be there.
I want to go through the controls based on the risks we discussed. The first one is the most common attack, which I talked about: data poisoning. Like I told you (I've put resources in the description that you can look at), the attacker poisons the data so that the machine learning's decision-making is compromised, because it has been trained on bad data. What do you need to do? The security control here is to make sure that the data used to train your model has checks and balances around it: you need to be able to verify the integrity of the data, and control who can commit to this data and who can modify it.
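As a concrete example of that control, here is a minimal sketch of an integrity check: record SHA-256 hashes of your vetted datasets (and model files) once, then verify them before every training run or deployment. The manifest file and paths here are hypothetical:

```python
# A minimal sketch of an integrity check for training data and model files.
# The manifest format and paths are hypothetical illustrations.
import hashlib
import json
from pathlib import Path

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest_path: str) -> None:
    # manifest.json maps artifact path -> hash recorded when it was vetted
    manifest = json.loads(Path(manifest_path).read_text())
    for artifact, expected in manifest.items():
        if sha256_of(artifact) != expected:
            raise RuntimeError(f"{artifact} was modified since vetting!")
    print("all artifacts match their vetted hashes")

# verify("manifest.json")  # run before training or loading a model
```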
Okay, the next step is model poisoning, in which, like I told you, somebody can inject some sort of malicious behavior, like a backdoor, into the machine learning model. It's especially risky because most companies do not build models from scratch; they rely on publicly available ones, so supply chain attacks apply. Don't use models directly from the internet without checking them; use models for which the threats have actually been identified and where security controls exist. Especially if you're working on something high risk, I would definitely tell you not to use things which are just publicly available. Next is data leakage, in which the attacker is able to access the live data that is being fed into the system, in the fine-tuning phase or in production. You want to make sure your data pipeline is secured against unauthorized access, and if you get data from third parties, make sure its integrity is checked too: again, the supply chain angle comes in here. Then what about the model compromise risk? Like I told you, somebody can compromise the libraries: most software today is built on open-source software libraries. You need to make sure those are properly security tested, and you need to have some sort of monitoring in place, so that if some fluctuation happens within the machine learning model, if some changes are happening, you have defined metrics and you can quickly identify that anomalies are happening.
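Here is a minimal sketch of that kind of metric: track the model's recent prediction rate against a known baseline and raise an alert when it drifts too far. The class name, window size, and threshold are all hypothetical choices, just to show the idea:

```python
# A minimal sketch of output monitoring: flag sudden shifts in the model's
# prediction rate versus a baseline. All names and numbers are hypothetical.
from collections import deque

class PredictionMonitor:
    """Alert when the positive-prediction rate drifts from the baseline."""
    def __init__(self, baseline_rate: float, window: int = 500, tolerance: float = 0.15):
        self.baseline = baseline_rate
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, prediction: int) -> bool:
        self.recent.append(prediction)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance  # True = anomaly

monitor = PredictionMonitor(baseline_rate=0.3)
for p in [1] * 500:  # suppose the model suddenly predicts "1" for everything
    alert = monitor.record(p)
print("drift alert:", alert)
```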
Next, model evasion. What is model evasion? This is another one of the most common attacks: the attacker finds a way to trick the model, basically to trick it into making a wrong decision by subtly changing the input. Like I told you: if it's an image recognition model, maybe if I just change a few pixels, then it won't classify the image properly. What do you need to do? You need to actually include this kind of adversarial data when you're testing the model: throw all sorts of wrong data at it and see how the model reacts. If you've tested it, you can make sure it behaves; make it part of your testing suite. Then what is model and data extraction, guys? This I already told you, right? Somebody can try to steal the model's data and its logic; the two are pretty much the same kind of attack. What does the attacker do? He keeps sending the model more and more queries, and he watches what sort of output is coming back. Based on that, he can understand how your model is working and what data went into it, and reconstruct the model, or reconstruct the data. The control here is pretty much the same idea: you need to control how much detail your model is giving out. You need to make sure the output is properly sanitized, that you're not giving out too much data or too many variables. You need to really look at it from a risk perspective and limit the amount of information that is going out, because you should look at it through the eyes of an attacker and ask how it can be maliciously used.
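As a sketch of that control, here is a hypothetical wrapper around a prediction endpoint that returns only the final label (no confidence scores an attacker could learn from) and throttles clients who query too often. The class, names, and quota are illustrative, not from the course:

```python
# A minimal sketch of limiting model output to blunt extraction attacks:
# return only the top label and cap query volume per client.
# The wrapper and quota numbers are hypothetical illustrations.
import numpy as np

class HardenedEndpoint:
    def __init__(self, model, max_queries_per_client: int = 1000):
        self.model = model
        self.quota = {}
        self.max_queries = max_queries_per_client

    def predict(self, client_id: str, x):
        self.quota[client_id] = self.quota.get(client_id, 0) + 1
        if self.quota[client_id] > self.max_queries:
            raise PermissionError("query quota exceeded")  # throttle probing
        # Return only the label, not the probability vector an attacker
        # could use to reconstruct the decision boundary.
        return int(self.model.predict(np.atleast_2d(x))[0])

# Usage (illustrative): wrap any trained classifier
# endpoint = HardenedEndpoint(trained_model)
# label = endpoint.predict("client-42", sample)
```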
Guys, now that you know the risks, these are the controls I've put up, and you can take a look at them. You will do a risk assessment of your model and data management: for example, if you have a high-risk model, you don't want to just take it from the internet. You'll create your risk assessment sheet based on the controls which I've told you about. You'll do model verification: you'll check the integrity; is this model properly vetted, and are other companies using it? If it's completely out of the blue, nobody is using it, and no customer is known, don't use that model. Then you will make sure the controls are there for data verification: that the data is vetted and the controls are in place for your data, like doing complete due diligence on the vendor if it's coming from outside. Then systems security: you'll make sure those controls are there, that the data is again being verified, and that your models and the software components the model is built on are secure. Test it with adversarial testing, like I told you; make sure the output coming out is properly sanitized; and check the component security, like the software libraries. I hope you understood, guys, what I was trying to say. I know it's a lot to take in, but this is just to develop that mindset within you, to understand the particular security risks which are there. If you want to do a real deep dive, there are resources available from ENISA and from Microsoft; you can go to those links, download them, and really dig in. I hope this course has helped you to understand what you need to do and how to go about it, and that I have created the motivation within you to go and look at those resources. Okay, so finally, guys, we're wrapping it up. This was the last full class. What we learned: now you understand the cybersecurity risks that are there, the unique risks which AI systems can pose; how to do threat modeling and what you need to look at; how to perform a proper analysis of an AI system; and what unique controls you need to create and put in place from the perspective of AI. Okay guys, let's move on to our last class. I hope you enjoyed this, and let's say our goodbyes in the coming section. Thank you.
10. Way forward: Congratulations guys, we've reached the end
of this masterclass. And I sincerely hope now
you have an appreciation for the new environment that you and how much AI will change
the threat landscape is like an irreducibly technology and disrupt all disrupters or changing things for
the good and the bad. Also, you need to make sure, but awareness is like knowledge
is power is the sensor. Now, I hope I've
undo that knowledge. Like I said, How to build on what you've learned
for the project. I told you, you need to create a threat model of an AI
system or you can have researched it gives off
bias and understand how it happened because this
will really empower you to understand this lesson. Unless you apply them,
you will forget it. I hope this was
useful to you guys. Please do leave me a review and feedback whether it's
positive or negative. And I would appreciate your feedback on this if
you'd like to follow me. I'm there on YouTube and
the cloud security guy, that's my channel's name. And that's pretty much it, guys. Thank you so much for
taking my class and I wish you all the best in your AI
machine learning journey, and I hope to see you
in future courses also. Thank you.