Transcripts
1. 1 Intro: Hello, hello everybody. Welcome to my new
course which is the AI Regulations and
Frameworks Crash Course. Now the whole point of this
course is to educate you on the different type
of AI frameworks and regulations
that have come out. And to really give you
a running head start to understanding all
the different types of regulations which
are coming out, all the frameworks
are coming out. Sometimes it seems there's
a new regulation coming out every literally every
month we hear about it. Because EI is developing
so fast, right? The EI regulations are
developing so fast. So we need these sort of
control and regulations. So this is the whole
point of this course. The goals of this
course are very simple, which is to make you understand the global EI, regulatory
landscape, right? All the frameworks
that are there, why they are needed,
what is the big deal? And why do we need so many
AI frameworks and standards? And then what we're
going to do is we're going to take the
most important, the most critical AI regulations and frameworks that are present. And we're going to
deep dive into them and I'm going to give you some practical
implementation tips. So I don't, I'm not
going to just reach out, read out the standard, right? Because you don't
need me for that. I mean, you can do it
yourself, honestly speaking. Just download it and read it. No, I'm going to give you
practical implementation steps. How, what we're going to
do is we're going to take case studies and examples of these speculations
and standards. And then see how a company would go about implementing
these standards. So that is the whole
point of this course, to give you a running head start on the AA regulations
and governance, which is extremely
critical in today's world. So why should you
be listening to me? Just a brief introduction. My name is Tamurislahal. I'm a multi award winning
cybersecurity leader. I've won a few awards
here and there. I've been in cybersecurity
for the past 21 years. And last couple of years, I've focused more on creating courses and content and writing, giving back to the industry. I also help people get
into cybersecurity. I have a Youtube channel
called The Cloud Security Guy. I wish I had like
millions of followers, but it's a very small channel. But basically I talk about cloud security
in the eye there. I've also written a
bestselling author and course creator on
multiple platforms. You can check that these
are few of my books. I've written books on EI,
governance and cybersecurity. Write for beginners, Zero
trust for beginners. I don't like writing very
complicated, deep dive books. I always feel that let people
get a running start into a particular technology and then let them find
their way. Right? I also write on
media and Linked, I like I said a Youtubechannel.
I'm there on Twitter. I also have a free
newsletter which I basically share things
about cybersecurity career. Why is cloud security AI? So these are the things
basically do I just wanted to show like I do
know a little bit about AI when I'm
talking about it. So first of all, why do we need AI regulations? And firms like these are
basically guidelines, standards frameworks designed
to mitigate AI risks. And at the same time drive innovation and increase
trust in AI systems, like what is the big deal? Well, first of all, ethical considerations. What
does that mean? Well, AI systems can
make decisions that can impact human lives
and society, right? If AI is making
decision like giving you a bank loan and it
makes an unfair decision, it makes an incorrect
decision, right? That can actually
impact your livelihood, your family, right? What if AI is being used for law enforcement and it
makes a wrong decision, that can lead to that
person getting arrested. So again, these are not
small things, right? And that is why you need AI regulations to provide
guidelines to ensure that AI systems are designed and used in a way that respect
human rights and values. We also have like
safety and security, To think about AI systems, particularly in areas
like healthcare, transportation defense. They need to be reliable
and safe, right? And regulations
will make sure that AI systems go through
rigorous testing and validation to make
sure that failures are not there that could lead to harm or even people
dying. Right? And lastly, which is my
particular specialty, AI systems can be vulnerable to security threats
like data breaches, manipulation, and that's why you need cybersecurity
controls also. So all of these things where AI is being used
in critical areas, like hospitals, in
criminal systems, financial areas, they
can have a huge impact. That's why you need
regulations to set down best practices and frameworks
and standards for AI. That is where the whole
thing comes from now. So what is the problem,
unfortunately, that I've seen and why I made this course a
crash course, right? And I've deliberately
made it a crash course. I could have made it
like a ten hour course because I know people
get bored, honestly. The problem with
learning AI frameworks is it's important to
acknowledge this fact, that discussion around
AI frameworks and regulations that can sometimes seem a bit dry and
overly complex. Because the way the
topic is, right, you have this legal language with advanced
technical concepts. It makes it very unaccessible
to regular people. The whole point of this
course is to break down all this complex stuff
and make it accessible. In a way, the whole
point I've made this course has made it
accessible to everybody, right? And the other problem is
navigating the landscape like one of the
biggest challenges is too many
frameworks are there. It's like navigating
like a big jungle. You have so many frameworks. How do I know which is the most important, which applies to me? And this can lead to
confusion, making it hard for. Developers, cyber
security people, governance people, companies to decide which framework
is relevant from them. And the other thing is
practical implementation. The other problem I've seen is the gap between these frameworks and practically applying it. So guidelines offer very
high level principles, like sometimes you
get too deep into the technical details like how the machine learning algorithm
should be configured. So there's a missing
link, right? So practical, actionable advice is what I
want to give you on how to implement
these regulations in your regular companies, in your day to day
operations to help you understand how
to proceed, right? And making sure that you're
able to apply this knowledge. So who is this course for? This is course for anybody I, professionals, whether you're in risk, legal and compliance, cybersecurity,
whether you're a CIO, CTO, anybody interested in AIS. You don't need to have
a deep knowledge of AI. Just a basic understanding will suffice. And why
should you care? I think I don't need to
spend too much time on this. You already know AI is impacting every sector,
every standard, right? And companies they
want these sort of industry standards and benchmarks so that they can
make the very best AI system, which is competitive
and trustworthy. And they want to make
sure that they are following EI best
practices, right. When you get an ISO
standard for your company, you're telling people,
look, I am ISO certified, same way you want for AI. And lastly, why you should care. Ai is having a massive impact
on the job industry also. And that is what I
wanted to talk about. There's a huge demand for
AI scan governance skills. This is from Gartner by 2026 that AI models that can
operationalize AI transparency, trust, and security will achieve a 50% improvement in like adoption and business
goals and user acceptance. People will accept them, right? They will want to have
these sort of AI systems. You want that public trust, so it will drive innovation. Additionally, you have
studies showing how generative AI is changing
the workforce, right? Like a lot of people
worry about jobs going, they don't realize
how many new jobs are going to like created. There will be job losses, undoubtedly anybody, because AI will take over a
lot of activities. But new jobs are also
going to be created. And by 2030, they estimate like it's
going to have a massive, massive impact on the economy. And companies obviously are going to be looking for
people with AI skills. So by taking this course, you are actually
investing in your future. You're making sure you have
the relevant skills, okay? And you can see that 96% of companies will be looking
for workers with AI skills. And AI is not just technical. A lot of people get confused, they think I need to know
data sciences or statistics. No, you can have this
knowledge which is EI, Governance and AI
frameworks, right? So this is what I want
you to do in this course. I want you to understand the standards and the
frameworks that are there, Understand the common concepts, go over the case studies, and then apply the knowledge
that you've learned. So how will you apply
this knowledge? We will have a project. I'll give you an
assignment to do. I want you to take
that and apply it to understand fully how this
project will be working. So I hope this got
you motivated. I hope you now understand the massive new world in
which you're living in. And this is another area
where AI governance and risk, where you can really shine and you can make your own mark. I hope you are excited. Whether you're an AI
risk or cybersecurity, this is a great way to
invest in your future. So let's move ahead and I'll see you in the
next lesson. Thank you.
2. 2 AI Risks: Hello everybody. Welcome,
welcome to the next module which is about And I Risks. Like a brief introduction. And like I said before, what we're going to cover here is briefly I'm going to tell you about AI and the types of
AI risks and the ville, examples of AI failures feel you know about
it, you can skip it. I would recommend not skipping
it because it gives you that framework and context about this course
and why we need it. Because I really want
you to understand the background about
why these frameworks have come about. What is AI? Before we deep dive into this, I want you to understand
very crucial thing, You do not need to be a
technical person for AI. It helps to have
that background. But no, I know many people who are completely non
technical because AI is so massive for understanding AA
regulations and frameworks. You just need to have a
high level understanding of AI and that will
work, honestly speaking. So you don't get
intimidated by AI. Okay? A lot of
people get worried. You can deep dive as
much as you want, but just a high level
understanding will also work for you.
So what is AI? Ai is a branch of
computer science that focuses on creating systems that performs functions usually associated with
human intelligence, such as reasoning, learning,
and self improvement. That's the boring
book definition. Simply put, AI systems. Ai is a branch of
computer science that computer systems
behave like humans. How? By making decisions like reasoning and
understanding information. How do we humans work? We take a lot of
information we learn from it and we use that to
improve ourselves. That is exactly how
AI works, right? Regular programs,
like your calculator, it follows a fixed
set of instructions and does not change unless
you manually update it. Whereas AI systems,
they can improve. So they can actually use the
existing information and the previous information
and use it to make themselves more
intelligent over time. And AI can make decisions
based on data analysis. It can assess complex
situations, right? And it can mimic how human
beings are making decisions. Whereas, like I said, computers, they just have a predefined, your regular applications, they have a predefined
set of instructions. So it can handle ambiguity and the greatest thing
about it can automate. It can do things
by itself, right? Like self driving cars. You don't need human
beings to come in and like drive
the car, right? It will operate by itself. These human like capabilities
are what AI is like. How AI is really changing
the world since 2022. I would think that Chip came out even though I existed
long before that. I was talking about
AI before that. Chagpt and generative, it has really made AA accessible
to regular folks. That's why there's so much
interest around the topic. Now, how does AI work
without getting in too much into machine
learning and how supervised and unsupervised
machine learning work? How deep learning works? Simply put, AI takes in
a lot of data, right? And then it uses an algorithm to find patterns in this data. And then it uses
these patterns to make decisions about future
things that will happen. So if you want to break it down, it takes a vast amount of data from different
sources, right? It can be numbers or
unstructured data, like images and it
finds patterns, it sees correlations where
human beings might not. So this can take, this is where machine learning
comes in, right? Like the name says it.
Machine is learning it, taking all the data
and learning about it. If you feed it a lot
of facial images, it will find patterns
and then it will be able to recognize other people. Right. Based on this and these patterns and insights
that are taken for the data. That's why I said it makes
decisions about the future. For example, an AI might
predict customer behavior. It can diagnose
medical conditions. Or it can optimize like vehicle routes,
logistic routes, right? The more data it gets,
the more improves, the more it improves,
the better it gets. All this is like a
constant cycle, right? This is how AI works
in a nutshell. And of course we already
talked about this. I'm not going to spend too
much time, but of course, AI is having a massive, massive impact by
2030, 15 trillion. It says it can
contribute honestly. And of course, there will be
a massive job disruption, so people focus more
on the job losses. There will be job
losses absolutely, because jobs which
human beings can do, a lot of AI will be able to do, but new jobs will be created. Okay. That is what
people forget the amount of jobs that
will be removed. You forget that the amount of new jobs that are
getting created. Nobody was talking
about AI governance, AI security, like a few
years back in 2021. I remember I was
talking about it. Nobody was paying interest
because simply put, nobody cared about that much. But now AI has changed
the world, right? The disruption, it has happened. So that's why new industries
are being created. So don't get worried
or scared about AI, embrace it and learn from it. And understand the
potential that is the industry is there and
of course what is the risk? So AI is being adopted by
critical industries, right? You have things like
healthcare, law enforcement. And the biggest thing
is AI is not perfect. It can make wrong or
biased decisions. If you feed it
wrong information, it can develop a bias fight. Which like I said earlier, it can have a massive impact
to people and societies. You can even trick it. You can like manipulate
or trick it. And it can be attacked
by cyber attacks it. This is where we
see the huge impact that can happen with AI, right? What if AI accidentally arrests the wrong
person or AI makes the wrong medical diagnosis
or AI does not give a person a loan
when he desperately needs that or medical treatment. Right? We're talking about real life impact
and like I said, it can be manipulated.
Also, it can be tricked. Like if you have a facial
recognition system, you can actually
create a fake picture and fool the AI system. And of course, we
have new types of cyberattacks which are
coming up, all these things. It's like a new
world which has come out because of AI, right? And so these are a few examples. I mean you'll find millions
of examples on the Internet. This poor person, he got
misidentified by an, A system and he was
actually arrested. So you can imagine how much of an impact this could happen, how much traumatized
this poor person would have been because
of facial recognition. Ai based, it was being used
and it led to his arrest. Right again, self driving cars were having this swissing
more and more introduction. I'm in the UK and you see like the self driving cars
are being tested out. But unfortunately they have made mistakes and they
have led to human lives, right? Accidents have happened. So you can imagine
as self driving cars become more and more popular, the types of attacks
that can happen, the types of incidents
that can happen, and this can happen to lead to like a real impact
to people's lives. So this is why we
need to make sure that we can mitigate AI risks. And this is where
I, regulations come in, right, And unfortunate. The sad fact is a
lot of companies, especially those in
the private sector, they do care more
about profits, right? Their priority is not to put in regulations and controls
and governance. That's why you need
that regulation to come in and say,
hey wait, stop. You cannot go ahead without putting these
controls in place. And regulations over
AA is still lacking. They're still catching
up. The rate at which is developed is far faster
than any regulation. And even companies
who are responsible, who are ethical, they said no, we need to make sure
we mitigate the risk. They want that guidance, right? So that's why these
regulations are coming out. And of course, standardization, you want to make sure you
have a common standard across the board over which these controls
can be applied. So this is where EI
regulations have come up and why we
need AI regulations. So I hope this was a quick lesson about
the overview of AI, the impacts of AI, the
impact of AI risk, and why we need all these frameworks which are coming out. Because nobody likes regulations
and frameworks, right? People think that this
is just to like you, slow down innovation,
slow down development. But now I've shown you why the impact can happen if you don't put in these regulations. So now we've set the background. In the next lesson, we're
going to talk about the AI regulations
and frameworks that are there at
the high level. And then we're going
to start deep diving into specific regulations. Thank you and I'll see
you in the next lesson.
3. 3 AI Frameworks: Hello, hello everybody.
Welcome to this module which is about AI
regulations and frameworks. A global perspective
or a global view. We've talked about AI, right? We've talked about AI
and the damage that can happen if AI is not properly
controlled and regulated, and controls are not there. In this module, what
we're going to do is we're going to have
an introduction to AI frameworks and regulations
and really take a step back and look at the
global landscape that is there from
AI frameworks. And what is happening at a
regional and national level. And what are the challenges that are faced when looking at this massive list of AI frameworks and regulations
that are there, right? And we're going to,
before we deep dive, we need this lesson
before we deep dive into the individual frameworks
and start looking at it. Okay, so first of all, let's take a look at
what AI frameworks, standards, and regulations are. These are a set of guidelines and legal requirements, right? The whole point of this is to govern how AI is designed,
developed, deployed, right? We want to mitigate the risks. They want to make sure that
AI systems are the ethical, that they're not
being unfair, right? They are safe. They're
causing damage to human life. They are transparent,
meaning that you can understand why they're
making these decisions. And they're actually
causing benefit to society. And you want to mitigate
potential security risks, vulnerabilities, and damage
that can happen to society. The whole point of creating any framework be whichever
country you're talking about, that is always the goal, right? And this is how they help.
Like I talked about before, that companies, they
focus on profit, right? They want to make the
most amount of money. And sometimes they can
go overboard on this and deploy AI without proper
controls being there in place. So if you have a regulation, a mandatory regulation
that is there, that will find them
like the EU one, right? So they will be forced to make sure these
controls are there. And even if it's
not a mandatory, maybe it's just a framework. An optional framework, right? What will happen is they will have to follow this because
customers will divine. Customers will not
buy their systems. They will say, no, we want
the ISO certification here on your system. Is your system certified to the Nist framework
and all that. These will ensure
that companies follow good standards when
developing EI systems. And of course, they make
accountability and traceability, so they make sure that why is
the AI behaving like this? That those things are present. They protect the AI system
against manipulation. If somebody tries to attack
the AI system or trick it because they follow these standards, the
controls will be there. Same against cyber
attacks, right? You have new types of
cyber attacks happening. How do you make sure that
the system is protected? By enforcing a minimum
baseline, a minimum standard. And they make the
risk assessments, they make companies adopt
a risk based approach. Not all systems
are equal, right? Some might be very,
very sensitive, like those which are
working in hospitals. And some might be
just a simple like facial recognition app you
have on your mobile phone, right? All of them. They force companies to
adopt a risk based approach. And lastly, international
cooperation, which is very important. So you don't want to have these different silos happening. You have international
cooperation happening across countries. Everybody is working
together to mitigate this. It sounds like a utopia. I know a lot of you might be thinking this will never happen. But unfortunately,
the risk of AI is so dangerous, these
are being forced. Okay, like you will see
a level of harmony. The best example I can
give you is the GDPR. The EU released their general
data protection regulation for data privacy if you're
not familiar with it, but because it was such
a stringent standard and so well followed literally
every country in the world, they have implemented their
own data privacy framework, but they've taken elements from GDPR because it has become
the global benchmark. And same like this, if you take a high level look at
this, there are many, many frameworks, but honestly
the most important and which impact every other
framework that is out there. First of all, at the top,
you want to start with the OECD principles on AI. And I link these frameworks and standards so you
can take a look. So basically, OECD, the Organization for Economic Cooperation and Development. Yeah, It is an international
organization that works to build better
policies for better lives, okay? They don't mandate. But what they have done
is they have released certain principles on AI, right? And what has happened is at most of the countries
in the world, sorry, they have taken
these principles and are using them to
make their own standards. So the national level strategies you have in the big
countries like US, UK, the Middle East
Vision, Pac vision, all of them have taken
in, but most of them, they are consistent with
the core principles for AI as defined by the OECD
and endorsed by them, okay? Usually it's pretty
much the same. The human rights sustainability, transparency, strong risk
management. You'll see these. You'll see what do you call co tenants being repeated
again and again, and most of the countries, they are following these principles. Apart from that, we
have now the ISO 4201, if you're not familiar, that is an AI management
system standard that has come out by the ISO. And that is like an
international standard. Soon, companies will be certifying to this
standard, right? Just like you have the
ISO 2701 for security IS 9,001 for quality
management. Same thing. You will have this
and you also have the Nist AI Risk
Management Framework, which is an AI risk management
framework created by Nist. It is like a tech agnostic, vendor agnostic, but it's a very, very powerful framework. A lot of, if you're
in cyber security, you will be familiar
that most companies, they have adopted ISO 2701 and the Nist Saba
Security framework, right, as an
overarching framework because of just how well
these standards are made. Same thing you can
expect with AI. And apart from that,
at the regional level, the biggest change
that's going to happen is the EU AI Act. The EU AI regulation
which is coming out, which takes a risk based
approach to these regulations. And we're going to deep dive
into all of them and this is going to impact the national level strategies at
all the regions. So these are, this
is like a Holistic, like a very high level view. So the OECD, like I said, they have said that they make global policies
for countries. I think they've
five of these value based principles and
what has happened is they have become like
a global benchmark that help companies and governments take that approach towards creating their
own national strategies. I would definitely recommend
taking a look at them. Most of the countries that
you see making strategies, they are influenced
by these only, right? And these are the
same tenants, right? To make sure that AI is working towards the well being
of human beings. They are fair,
they're transparent, you have proper
security and safety, and accountability. They
are very high level. But they have helped
companies to start forming their own policies
so that they understand. Right? Okay, apart from that, you have the EU AI regulation which is coming out
within the EU region. This takes like a
risk based approach. This is going to have
a massive impact. We're going to deep dive into this regulation because
of high important cases. This is the first comprehensive, like you can say
regulation or law. And this is going
to be mandatory for any company that works. They are similar
to the GDP which became the global framework
for data privacy. This is going to become
the global framework for AI regulations. Even countries that
do not follow, which this law is
not applicable to, they will take some
principles from there. So it is definitely recommended to really understand this. I'm going to deep
dive into this. The problem with EU laws
are they are very boring. I mean, you fall asleep
when you're reading them. So I'm going to take you
through it step by step. And mostly you
need to understand how the risk based
approach works. Apart from that, we have ISO
4201 which is the first, the proper I management
standard Like you have the information security
management systems, ISMS's. Now you have for AI and
AI management system which is like the global, it creates like an umbrella. Best practices, right? And it's a very
comprehensive standard. It's optional. It's not
mandated right now. But companies that
really want to, what do you call get
a competitive edge? They will want to certify
against the standard, just like you have
companies today, They get certified to ISO 2701 or 9,001 And they have this as a competitive
advantage, right? They show to customers, look, we are certified because we
are following good practices. Companies that are building AI, they will want to take
a look at the standard. Apart from that, you have the Nist AI Risk
Management Framework. Again, this is an
optional framework, but believe me when I say most companies that are
serious about risk management, they are going to be
adopting this framework just like Nist release the
cybersecurity framework. And that has become the de
facto industrial standard because of how well it is. The thing about Nist
is when they release standards like hundreds
and thousands of people review them and they
really go under high scrutiny to like
really polish them. So this is going to become another major standard and we're going to deep dive
into this also. And we also have
the executive Order by the Biden government
which came out, I think it came out 38
October last year to 2023. Right? And this is, again, promoting FN, trustworthy
AI development. This is an executive order. It cannot create new laws
or regulations by its own, but what it does is it triggers the beginning of such processes. And it's a big,
massive advancement in the accountability of how AI is developed and
deployed across companies. And it has a huge
amount of regulations, not regulations, sorry,
recommendations, standards. And it's going to
impact a lot of people. Nist has been very much involved within the
development of these frameworks. The AI Risk
management framework, which I talked about, it's
referenced multiple times. And what happens is
these executive orders come out and they give a high
level mandate that look, this is how it's going
to be. Laws come out. And public sector starts
implementing them. And because the public
sector is following them, the private sector follows
and it becomes like a unofficial mandate
across the board, right? So these are just a few of the standards I
wanted to talk about. The challenges are, like I, like I said before, there
are too many of them. It gives you a headache. So that's why I've chosen
the most important. And believe me, if you get a
good understanding of these, the one I'm going to
be talking about, you have pretty much
covered most of them. Because all of them draw
from the same concepts. Okay. And the other
challenge is harmonization. I'm trying to understand where the commonalities are and
which we'll discuss about. The other one is optional
versus mandatory. So things like Nist and ISO, they're not mandatory, right? The EUA regulation is it, once it becomes into
force, it is mandatory. So you need to understand
what is mandatory, what is optional, right? And other option is sector specific across sectors or
sometimes some regulations. They are specific to a particular sector like the energy industry or
the water industry. And some are across the board, so we want to take a look at
that and understand them. And lastly, most important, balancing innovation
and regulation. If you put too much regulations, the pace at which AI
is being innovated, it'll go back or most companies will try to move out
of that region, right? So you want to balance it,
you want to put in controls, but we want to make sure that it doesn't stop innovation
from happening. And the last, the challenge,
which is always there, that AI is very, very fast, it develops extremely quickly. Regulations are always trying to catch up with
this development. So that's why these
AI regulations sometimes at a quite
high level, right? They want to make sure that
they cover everything, so they don't want to be
specific about any technology or any to be too prescriptive. Otherwise they will get out
of date very, very quickly. So I hope this was
useful to you. We took a high level look at AI frameworks and regulations, what they are needed
and key regulations, and what are some challenges which come about
when adopting them. Now what we're going to do
is we're going to deep dive into a particular regulation. And start deep diving it, looking at its components and how we can get
them implemented. Thank you and I'll see
you the next lesson.
4. 4 EU Act 1 : Hello everybody. Welcome,
welcome to this lesson. And now we're going to
kick off a deep dive. And we're going to kick
off a deep dive with possibly the most
important regulation which is coming out shortly, which is the European Union's Artificial Intelligence Act. This will probably be
the first ever law dedicated to artificial
intelligence. It's known as the AI Act. When passed into law, it's going to set
the global benchmark or the standard
for AI regulation. And other countries, other jurisdictions are going to build their own
national laws, but they're going to
take from the EU AI Act. And this is, we've
seen this historically also with like things
like the GDPR. And I'm going to show you and why that is
usually happened, that the they sent like a
global benchmark, right? And while the legislation
as of Feb 2024, it still needs to be
formally passed by each legislative body business if you're operating in the EU. Now you have a much
clearer picture of what you need to do to be
compliant with the EUAI Act. Once it gets into force, right? It's usually, it's
expected it'll be passed into law in early 2024. These things that
take a lot of time, but the framework is there and now we know what
they will be basically enforcing the EU basically it wants to ensure that
the AI is ethically and safely used and they're
very serious about upholding the fundamental
laws and rights of people. Right? They've taken like a proactive approach
to governing AI technologies based on risk. And that's what we're
going to take a look at. This is what we're
going to cover. We're going to take a look
at the EAA overview of this, what the risk based approach is, specifically what
the high risk AI is, because that is where
the most of the time is going to go and how
to implement ticket. We're going to take
a fictional company and actually see step by step, what you need to do to
implement the EA Act. I'm going to focus on the
critical provisions of the act. The act itself,
like most of the U, it's like very boring to read. You'll fall asleep if you
start reading it, honestly, unless you're a legal guy, somebody in legal or compliance
who likes to read this. So what I've taken is I've deliberately focused
on the key areas. The areas where the most amount
of effort will be needed, which is the risk based
approach of the EAI systems, and that is what's going to have a massive impact. Like
a brief overview. Now, this is the first
ever proper law, as you can say, dedicated to AI. And it has the potential, and I would say it
is going to become a global benchmark
for AI regulation. Similar to how the general
data protection regulation is set a global standard
for data privacy. And if you're working
in privacy, you know, whatever country you are
in one way or the other, your law has been
impacted by GDPR. Similarly, the EUA regulation is going to set the global
benchmark for AI. Why? Because first of all, it's the one of the most comprehensive
frameworks focused specifically
on regulating AI. Okay? What it does, it focuses on the
risks which are there. And it makes sure it provides a very detailed
regulatory approach. It's going to become a
model for other countries. And most importantly, it
takes a risk based approach. So it classifies systems
based on what risk they have, so from minimal risk
to unacceptable risk. And this gives you a
balanced approach, right? Because it doesn't say no. Every single AI system is
going to be like this. Now, an AI system that is, say, working in a hospital
or law enforcement, is not the same as a simple chatbot on your website, right? Answering questions,
of course not. You don't want to apply
the same controls there. But the EU is a massive market. It is a significant global
economic player, right? Companies operating
in or selling to the EU market will need to comply with the
EAI requirements. And this compliance pressure
will lead to businesses worldwide adopt
similar standards which are influenced by the EIA. That's why I've started this. So definitely it's
going to become a global benchmark
and it's going to influence a lot of
countries and businesses. And of course, like the GDPR, the penalties are quite severe. I think 2% to 7% of the
global annual turnover. And it depends, of course,
there's a lot of levy there, but the fines can
be quite severe. If you ever have any doubt, do take a look at how
the GDPF sometimes find companies and you'll
get a better idea of how potentially
companies that are not compliant with this going forward or they take
it too leniently, how they can get
impacted with this. So who is getting impacted? Basically, companies who develop or create AI systems, right? Like software technology, software developers
or technology firms. Maybe you're deploying them. So you're deploying and using
AI systems irrespective of what industry you're in.
Public bodies, right? So government bodies and agencies that are deploying AI systems for public services, regulatory and
supervisory bodies. Authorities that are responsible for monitoring and ensuring compliance with the EUA act like data protection agencies. Right? And consumers of
course, the general public. And like I said, it's going to have a global impact even if you are a non EU organization, but you're offering
services to EU citizens, then it'll become applicable. That's why I said it's going
to have a global impact and companies who are
even outside the EU. They will be like forcing
compliance with this. That's why it's so important
to understand this. Now, the key point, the
most important point of the EAI Act is it classifies EI systems into four
levels of risk. Based on that level of risk, you have to put in controls. The higher the risk, the more stringent the controls
you have to implement. Now first of all is
the no minimal risk. These are the majority of AI systems which have
no risk and they won't face any regulations like a simple chat bot which is not collecting any data
or anything, right? The other one is
the limited risks. The AI systems
with limited risk, they will have
transparency obligations. What does that mean? They have to disclose that this is content is
AI generated, right? And you have to
inform, like I said, chat pot or maybe something which is using
generative AI, you know. So you have to make sure
that those things are there. High risks are also
there high risk. I systems will be
authorized with specific requirements and
obligations for market access. This is where the majority
of the effort will go in. And lastly is unacceptable
risk that is simply banned. This particular type of AI
systems will not be allowed. And they've given a very
detailed definition like AI systems which attempt
to manipulate emotion, or they're regulating
like they're actually monitoring emotions
in workplaces, educational institutions. Or they're using
facial recognition for remote biometric
identification. They've given some
exceptions for biometrics, but you know, the EU is very, very strict when it
comes to data privacy. Then I think no jurisdiction
is more strict than the EU. So they've taken this
risk based approach. And so if you look at it, this is from the US, This is
what it looks like, right? If you start from the
bottom, minimal risk, no obligation, limited risk
transparency is there. You have to take a look and get, you have to disclose
that this is AI based. High risk is a
conformity assessment which is like an audit. You can say that
you have to enforce these level of controls
before you're allowed. Unacceptable risk is
simply banned, sorry, if your AI application is like this and these are
the things you're doing, sorry, you're completely banned. And what is unacceptable
or prohibited? So basically anything, certain actions they've
defined, right? What does it mean that AI
systems subliminal techniques, what does that mean?
Let me translate it. Ai systems that use sneaky
subconscious methods to change a person's behavior drastically are
completely off limits. You know, they're
trying to influence your behavior, influence
your emotions. Biometric that use sensitive characteristics
like political, religious, philosophical beliefs. No, they're
not allowed. Biometrics are allowed with exceptions or are similarly
untargeted scripting of facial images from
Internet or CCD footage to generate facial
recognition databases. Social scoring, the most AI which is being used to exploit the vulnerabilities of people, right, because of their
age or disability, they will be
completely prohibited. They're not joking
about, on this, EU is very, very
sensitive about privacy. That's why real time, remote
biometric identification in like public spaces unless I think they've given
certain exceptions or that but they've completely
prohibited those AI systems. High risk is high risk are
those systems that are allowed but they might pose
significant risks to health or safety, fundamental rights and freedoms. They will include AI systems being used in critical
infrastructure or your education. Employment essential like public and government
services, right? And they require
strict compliance, strict mandatory
requirements before you can put those on the market. What are those requirements? You need to do a proper
risk assessment. They have to fill out a
proper risk assessment, impact assessment document
what they're doing, a high data quality standards, so the data which is
being used by the AI. They need to make
sure that data, data has not been compromised, or poison, or
influenced in any way. They need to make detailed documentation and
record keeping. If you've ever been
in a GDPR audit, you know the amount
of documentation, you have to make it
available, right? And of course, transparency. So you have to provide
clear information about the AI systems capabilities,
what it's being used for. And this is if you're
in a high risk system. This is from the EU website. This is like, these are the
conditions under steps. So if you developed a
high risk I system, they would have to
undergo this assessment. It gets registered in the
EU database and into sign. Basically, you get the CE
marking the European Union. Once it's life, it's been
in the database on changes. If there's any major changes, you need to go back to step two. Get like the approval done. High risk systems,
they are very, very serious about this. They're not like joking
about it, right? Lastly, the limited or the minimal risk like we
talked about here, like simple things like
chat bout a generative AI, which is not taking data, right? You want to make sure that
you have transparency there. Limited or minimal
risk is pretty much, they have the same minimal risk. Like you don't need
limited risks, you need to be disclosed. The fact that you're using like the output is AI generated. So this is how the EAI act looks like from
1,000 feet of view. How do you prepare for it?
Right? So like I said, don't try to memorize the standard unless
you're a lawyer. Even if you're a lawyer,
I wouldn't recommend memorizing it because that's not the point of the
EUAI act, right? So while the act is still
about to be passed, you can start being
proactive about it. So you can start inventing
and classifying your systems, implementing an AI
governance framework. So to make sure that
those processes and a chart is there. Doing a gap analysis, prioritizing and finding out the risks which
are there, right? And then doing training
and communication. And instead of me just
talking about this blindly, let's take a case study, right? Let's take a company which
is about to implement the EU AI Act and
see how they would go about it so that
you get a better idea. That wraps this lesson up. We've seen what the EUAI Act is, how it works, the
risk based approach, and how high risk AI
systems are treated, and now the requirements also
by the risk based approach. So let's take an example
of a company and see how the EUAI act will
look like in action. Thank you and I'll see
you in the next lesson.
5. 5 EU Act 2: Hello, everybody. Welcome,
welcome to this lesson, which is now we've talked
about the EAI Act. And like I mentioned,
the UAA Act can become very dry to read. If you're just
reading the theory, you won't understand,
you'll forget about it. I want to show you how
theoretically a company would go about implementing
the EAI Act, okay? And there are multiple
ways of doing it, but we're going to take
a simplistic example so that you get grasp
the key concepts. And remember that's
what I always focus on, that's why I made this course. Please do not try to memorize these standards because you
will forget about them, okay? If you do a case study or
you enforce it yourself, then that knowledge
you will never, ever forget. And
that's most important. I think I showed you this slide in the previous lesson also. But this is how a
typical company would prepare for the EAI Act. They would inventory and
classify. What does that mean? They would review
existing AI systems and use cases and they
would categorize them that, hey, do I have any
high risk systems that require compliance
to the EA Act? You can use questionnaire, maybe use some automated system which finds out the
purpose of this. If you already have an
inventory, you can use that. Then they would go
about implementing an AI governance framework. Because AI does not
exist in a vacuum, it needs a proper
governance framework. You need committees. You need an Act chart. You
need processes in place to actually properly
implement the EUA Act. And then you will
do a gap analysis. So we do a proper gap analysis against the requirements.
What do you already have? If you have a hier system, how many parts of the EAA
Act am I compliant with? Right? So what happens is
you will get an action plan. You would know what
the risks are, what the issues are,
and then you would prioritize and manage
those AI risks, right? You would start enforcing
them, trying to fix them. Maybe you don't have the
proper documentation, maybe you don't have the proper AI security
controls there. And lastly, you
would start doing the training and communication because there's a lot of things. Maybe your developers would
need to be made aware of, Your senior management would
need to be made aware of. So these are the key steps. The core steps you would
have to think about when you're enforcing the
EU AI act in the future. So let's take a case study, right? A simple case study. Maybe you have a global
technology firm that specializes in data
analytics, AI systems, right? And there they have to align their operations
with the EUA. Act management is concerned Km, and this looks serious. What happens if we
get fined, Right? Because they have a
very broad portfolio of AI driven products
and services. They've really made a lot
of money with the AI boom, but now they realize
regulation is coming and they are
quite intelligent here. They've proactively decided
to adjust their practices to comply with the new AI regulatory landscape
in the European Union. They've hired you like you're thinking about how
to go about it, right? So the first step,
like I mentioned, inventory and classify
your AI systems, okay, to understand
the implications. So you need to first find out if I have AI models in
use or in development, or about to procure. Which might fall under the
highest category, right? Many services they used, like existing models they
bought from somewhere. Or maybe they've
built them in house. So you need to have this
sort of model inventory. Maybe you can do it manually,
spread questionnaires. Or maybe you have
a automated system which can quickly find out
what are the AI systems. Supposing you're
running in the cloud, you would know how many AI services you're running, right? So there are many ways
of going about it. It's no hard and fast rule, but the first step always
is inventory and classify. Find out what you have and
what needs to be done, right? And so supposing
data sphere finds like a system that
might be a high risk, let's take an example. They have an AI driven
diagnostic tool which assists doctors in identifying
rare diseases based on patient data. And given the
potential impact on patient health and the nature
of medical diagnostics, this AI system would be fall on the high risk under the EAI Act. What will they have to
do? They will have to do a detailed risk assessment of
the diagnostic tool, right? They would evaluate
its impact to affect patient safety and the
accuracy of medical diagnosis. They would do like enhance
their compliance to make sure that MI being compliant with
the EOAI Act or not, right? They would do an ethical
and regulatory review. What is the implication for
patient privacy and concern? Do I have permission
from the patient? Do they know this is
an EUAI act, right? That, sorry, this
is an AI system which is making decisions. They would enhance
data governance, data privacy and security
around these controls, right? Transparency and disclosure. Are they providing clear
information to doctors and patients about how the AI tool
is making its diagnostics, what data it's using, and then deployment
monitoring, right? So is it being deployed in the proper controlled
clinical environment with monitoring in place? And do you have
documentation and reporting that informs the
auditors what is being used? And are we engaging with the healthcare owners like
hospitals to get feedback? Because by taking these steps, they will not only ensure that their diagnostic tool
complies with the EAI Act, but they will also
show that they are upholding high standards. These are the steps
they will need to take to make sure they're
compliant with the EAI Act. Next step would be enhancing the AI governance
framework, right? Maybe they don't have a
proper governance right now. They don't have anybody
dedicated to AI. They have no policies,
they have no frameworks. How will they make sure that who decides whether an AI system is compliant or
not and stops it? What are the checklist? Is
somebody auditing them? Is somebody implementing
these controls? They need to hire
people. They need to change their policies
and procedures. So this is where your AI
governance will come into place. It might fall under the
overall risk management or it might be a new function. But these are the decisions
you need to take place. And then of course,
gap analysis, you will get a long
list of items, of things you need to fix. So you would have to do
a proper gap analysis, Find out what are the
areas we need to work on. These policies are missing. These actions are missing. These persons need to be hired. These AI risk systems
need to be fixed. So these are the things
you would have to think about and get cracking on. Have a proper tracking
happening, right? And of course, system evaluation,
you need to make sure, okay, systems alive, once
you've gotten the conformity, how do you know make sure that if any change is happening, you would have to find
out what are the changes. Do we need to do a
reconformity audit? All those sort of things
have to be there. And later on when we talk about the photo one standard that enforces an AI
management system, right? This is where the EU act and the ISO one which is coming out. They can complement each
other and help each other. And AI risk management
of course, right? So this will form
under the governance. So you need to make sure that your risk management processes have been enhanced and they know what are the proper ways of finding out the AI
risk I rates are very, very different, right, from
traditional AI attacks. Ai security issues. They're
all very different. Do you have a proper way of
identifying these risks? And this is where the Nist AI risk management
framework will come up, which we'll discuss
in the next lessons. So all these things you
need to think about. Lastly, and the part which everybody hates the most,
which is documentation. Like I said, if you've ever
done a proper GDPR audit, you will know how much
documentation needs to be done, and this drives everybody crazy. Start creating that
repository to make sure that all AI systems are properly
documented, what you've done. So if anybody comes
for the audit, you have this complete
documentation ready showing them what
the processes were there. And of course,
employee training on AI ethics and compliance. The workforce needs
to be informed. You would need to like educate your developers, your
data scientists, your AI risk management people,
your senior management, to make sure they understand what new obligations
are coming out. This will promote a proper
risk aware culture so that people are understanding of the new environment which
is coming out, right? So that people just
don't go ahead and buy AI systems without
informing you, without informing the
risk management teams. And lastly, stakeholder
communication so that everybody knows, do the hospital, are
the hospitals aware? They have to inform us
if any issue happens. Are we getting feedback
from the hospitals, from doctors, if any
issue is happening? Do we know if any
incident happens? How to inform the EU
legislative authorities that some incident has happened? Proper documentation, proper job description
needs to be there. Processes need to be there. This is how you would go
about at a high level, enforcing the EA act. I hope this gave
you a better idea. Now, instead of me
reading out line by line, these are the clauses
I showed you how a company would go about
actually enforcing this. Use it as a guiding point. I'm sure you could probably
add more things here, but this is just to give you a high level view of how
to enforce the EUAI Act. More and more
companies will start implementing it and we'll
find out new things. But this gives you
a very great way to get started
with the EUAI Act. I hope this was useful to you and I'll see you
in the next lesson.
6. 6 NIST 1 : Hello everybody. Welcome,
welcome to the lesson. And in this lesson,
we're going to deep dive into a new AI framework, which is the Nist AI Risk
Management Framework. This is not a law, this
is not like a regulation, like the EU AI Act,
which we talked about. No, this is a framework, but it is extremely
important and I'll show you why and why it is going to have as big an impact as the EU AI act like
we talked about. It's going to have
a global impact. This is also going to have
another global impact. And like I said, this is a, this has been developed
specifically to address AI risks. If you remember, when we were
discussing the EU AI Act, they talked about creating an AI risk management framework. Well, that sounds very nice,
but how do you go about it? Right? How do you create a
framework for assessing, identifying, and
managing AI risks? So this is where
Nist have relied this framework which is an absolutely
excellent framework. And we're going to deep dive
into this in this lesson. So first of all, we're going
to cover who is it like, why should we listen to them? The introduction to the
framework and overview, the components and
the benefits of it. And like I said,
like I always do, we're going to do a case study, we're going to take
a company that is implementing this framework. And what are the high level
steps they would have to do. So hopefully, you can take that lessons and imply it within your own
environment also. So first of all, what is Nist? The National Institute of
Standards and Technology. It's a noun, regulatory agency of the US Department
of Commerce. Their mission is to
promote innovation and industrial competitiveness and the various publications. Okay, and if you're
in cybersecurity T, you would best know them for the Nist Cybersecurity
Framework, or the CSF, which
is basically like a set of guidelines
and best practices. The framework was
created between like industry and government experts. I think it came out over
ten years ago in 2014. And initially, it was aimed at like the critical
infrastructure, right? However, it was such
an excellent document, the guidelines and
everything its use has spread beyond critical
infrastructure. It has been widely adopted by companies apart from
critical infrastructure in the US, and globally. And pretty much every
sector is using this in one way or the other if
you're in cybersecurity. Believe me, you
have heard of Nist. And Nist CSF, it looks
like this, right? And they've recently
updated it to, I think 2.0 So they
used to have the sort of high level co functions. Initially it used to be five, which is like identify, protect, detect,
respond, recover, and now they've released a
new one which is Govern. And it's extremely powerful
because it is flexible and can be adapted to the specific needs of any company, right? So you can take the best practices and principles and use them to improve your company. You might be thinking, why am I telling you about the NIST Cybersecurity Framework and showing you this? Because then you'll be able to appreciate the AI Risk Management Framework, because it's built in a similar fashion, right? So like I mentioned, they've released this framework completely freely, because they also recognize the amount of risks and threats that are possible with AI. So this gave birth to the NIST AI Risk Management Framework. And like the EU AI Act, it's not just talking about the negative things. They want to maximize innovation, maximize the positive aspects of AI. It gives you a structured approach to identify, assess, and mitigate the risks that come with AI systems. And AI is so technical that you need to have some sort of a framework. So what does the framework look like? Like I said, it's a set of guidelines, right? It's not a law; it's completely optional. If you want to adopt it, you can; if you don't, nobody can force you. Right? But first of all, the most important thing is that it's tech agnostic. It's designed to be applicable regardless of what technology you're using. So it can be applied to AI built with any technology, on-prem or cloud. This allows the framework to remain relevant, and it's vendor agnostic too: it's not geared towards any particular cloud platform. So it's both vendor
agnostic and tech agnostic. And it defines what a
trustworthy AI system is. So it gives you very clear criteria: what is an AI that can be trusted? I'll show you what the criteria are. Things like fairness,
reliability, transparency. And it gives you clear guidelines: if you're developing a system and you meet these criteria, your AI will fall into something called trustworthy. And it's divided into two parts. First, they give you foundational information. These are the basic principles of AI risk management: why you need it, and what the foundational concepts are. And then the core is the action-oriented part, and I'm going to show you what that means. The core is where you put in the processes to mitigate risk. And this is what it looks like; this is what the core looks like. Like I said, this is the action-oriented part; this is how you implement AI risk management within your company. It details key functions like establishing a governance structure and mapping, measuring, and managing AI risk. And you can see it looks very familiar next to the Cybersecurity Framework. So if I just go a few slides back: this is the NIST AI RMF, and this is the Cybersecurity Framework. That was the point of showing you this. It takes a very broad but very detailed approach to implementing a risk management framework which will help you to identify and mitigate AI risks
within your environment. So we talked about how NIST tells you what a trustworthy AI system looks like: something which your company can put out, where customers can trust that it's not going to get compromised and that it's going to treat their data fairly. So these are the aspects they talk about. Valid and reliable: the system doesn't degrade, and the outputs it gives can be relied upon. Safe: it doesn't put people in harm's way. Secure, right? It's not going to get compromised tomorrow. Resilient. Accountable and transparent: it tells you how it's reaching its decisions. Explainable: the documentation is there. It protects privacy. And it's fair: it doesn't get biased against a particular race, a particular community, or a particular gender. The framework takes all of these characteristics into account.
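To make these criteria a bit more tangible, here is a minimal sketch of how a team could track them per system. This is purely illustrative: the characteristic names follow NIST's list, but the checklist format and the example values are my own assumptions, not anything the framework mandates.

```python
# Illustrative only: a simple trustworthiness checklist for one AI system.
# The review results below are made-up example values.
TRUSTWORTHY_CHARACTERISTICS = [
    "valid and reliable", "safe", "secure and resilient",
    "accountable and transparent", "explainable", "privacy-enhanced", "fair",
]

def report_gaps(review: dict[str, bool]) -> list[str]:
    """Return the characteristics the review did not confirm."""
    return [c for c in TRUSTWORTHY_CHARACTERISTICS if not review.get(c, False)]

review = {c: True for c in TRUSTWORTHY_CHARACTERISTICS}
review["fair"] = False  # e.g. a bias test has not been run yet

print(f"Gaps for loan-approval-model: {report_gaps(review)}")
```

A real assessment would of course be evidence-based rather than a list of booleans, but a simple artifact like this is a useful starting point for tracking the characteristics.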
And this is how NIST defines the harm that AI can cause. Harm can happen to people: people who get a wrong diagnosis, or who get hit by a self-driving car. Harm can happen to an organization: its reputation can get damaged, and its business operations can get impacted. And there can even be harm to an ecosystem. This is something people forget: think of the global financial ecosystem, or the environment. As more and more things move towards AI, those ecosystems are also put at risk. The framework takes all of these into account. So what are the benefits of the NIST AI Risk
management framework? First of all, it's an excellent way of putting in an effective risk management framework for identifying and mitigating risks. It gives you trustworthiness; we talked about how it tells you what a trustworthy AI system is, right? And it's going to make it easier to be in compliance with best practices. If you've implemented the NIST AI RMF, and we're going to talk about this, it'll be easier to meet the requirements of the EU AI Act also, right? And it shows you how to create proper collaboration within your own company. All of these things, when you think about it, make it easier to meet the compliance requirements of other regulations as well. So if you compare it
with the EU AI Act, the NIST framework is completely optional, right? It encourages voluntary adoption, whereas the EU approach is mandatory. It says, no, you have to do it. But the NIST AI Risk Management Framework can serve as a practical guide for companies on how to implement the principles and requirements in the EU AI Act, especially where not much detail is provided, right? How to implement a risk management framework, how to go about it, right? So this is a high-level overview of the NIST AI Risk Management Framework. This is what we talked about: what it is, what the key components are, the foundational part and the core, and what trustworthy AI is. So like I've said before, instead of me just babbling on and trying to make you memorize it, let's take a look at how to implement the NIST AI Risk Management Framework within a company. Thank you and I'll see
you in the next lesson.
7. 7 NIST 2: Hello everybody. Welcome, welcome to this lesson, which is a continuation of the previous lesson. In this one, we're going to implement the NIST AI Risk Management Framework and see how the steps would be taken. This is similar to what we did previously. At a high level, these are the steps you would go through: understanding the framework, assessing your current level of AI usage, then implementing the core of the risk management framework, and making sure that you have training and a cultural change within your company when it comes to adopting it. And if this seems very similar to the EU AI Act, yes, that's correct. You'll see a lot of consistency within the steps, because that is where the overlap happens, and it becomes easier for you to do, right? So these are the
steps you would do. You would have an educational initiative, right? You would make sure that people are trained on the NIST AI Risk Management Framework, the benefits, and how you would govern, map, measure, and manage AI risk. Now, I'm going to talk about this in detail, but you would do this through training and workshops, to make sure that people understand how this works. Before you go about it, you would do an assessment, right? Hey, how many AI systems do I have? You would circulate questionnaires, look at your AI systems, and do a stakeholder consultation, right? And maybe you'll find out some risks at this initial stage. This is how you would go about it: usually, you look at existing documentation, create surveys and questionnaires, and look at your AI systems, just to get an inventory of where you stand as of today with regards to your AI usage.
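To give you a flavour of what such an inventory could look like, here is a minimal sketch in Python. The record fields and the example system are assumptions I've made up for illustration; the framework doesn't prescribe a format.

```python
# Illustrative only: one way to record an AI system inventory entry.
# Field names and the example system are made-up assumptions.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    owner: str             # team accountable for the system
    purpose: str           # business use of the system
    data_used: list[str]   # categories of data the system touches
    deployment: str        # e.g. "on-prem" or "cloud"

inventory = [
    AISystem(
        name="support-chatbot",
        owner="customer-service",
        purpose="Answer routine customer inquiries",
        data_used=["chat transcripts", "account metadata"],
        deployment="cloud",
    ),
]

for system in inventory:
    print(f"{system.name} (owner: {system.owner}): {system.purpose}")
```

Even a simple spreadsheet with these columns works; the point is that you cannot manage risks on systems you haven't catalogued.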
So once you have that inventory, this is where you would start implementing the core of the NIST AI Risk Management Framework. The first part is Govern. Govern is your overall framework, right? That's why it's in the
middle of the diagram. And you would go about creating policies: developing or updating policies relating to AI management, ethical AI usage, data governance, and AI risk management. And you would align your organization. What does that mean? AI governance should not be an isolated function, right? AI risk management should fall within your overall risk management, and you should have a proper org chart. You would create proper accountability: who is responsible for making go or no-go decisions when it comes to AI? And you would have transparency and ethics policies, so that any AI system that gets created has to document what it's doing and how it's reaching its decisions. And again, stakeholders: you would reach out to legal, and reach out to your customers, to find out how AI is being used and what things are happening. So this would be the first part, which is Govern. The next part is mapping. In the mapping
part, you would go about identifying the risks which are there. So you have the inventory, and you've created the framework. Now you would do a comprehensive identification of the potential risks and benefits associated with each AI system, right? And this could be, for example: maybe the AI is vulnerable to security risks, maybe it has bias, maybe the data is not being cleaned. You would reach out to stakeholders: IT staff, data scientists, external parties, right? And then you would have to consider the context in which the AI operates, right? What's the regulatory environment, what are the market dynamics, what are the social impacts, as you would say. And you would, of course, document all these findings. So this is, you would say, the identification part, and now you would do
the measurement. So you've identified the risks; you would now quantify or qualify them using some sort of risk analysis, to find out what the key areas are: what are the critical risks, what are the high risks? Use your own internal risk scoring systems, right? And you would also establish performance metrics for your AI systems. What are the performance metrics for accuracy, fairness, and transparency? What is acceptable, and what is unacceptable?
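As a toy example of what an internal scoring system could look like, here is a small likelihood-times-impact calculation. The 1-to-5 scales and the banding thresholds are assumptions I've invented; the framework deliberately leaves the scoring method up to you.

```python
# Illustrative only: a simple likelihood x impact risk score.
# The 1-5 scales and banding thresholds are made-up assumptions.
def risk_rating(likelihood: int, impact: int) -> str:
    """Rate a risk given 1-5 likelihood and 1-5 impact scores."""
    score = likelihood * impact
    if score >= 15:
        return f"critical ({score})"
    if score >= 8:
        return f"high ({score})"
    if score >= 4:
        return f"medium ({score})"
    return f"low ({score})"

# Example: biased training data is quite likely (4) and very damaging (5).
print(risk_rating(likelihood=4, impact=5))  # critical (20)
```

The exact numbers matter less than applying the same scale consistently, so that risks across different AI systems can be compared.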
You would also look at implementing ongoing monitoring of your AI systems, and you would put in a feedback loop. So you would create some sort of mechanism to get insights from the risk management process. This loop will inform you, and it will help you to adjust your AI systems. What if an AI system is running and suddenly its security degrades? Or its resilience degrades? How would you find out about it? Right? You would do monitoring, and you would be getting feedback from the users. This would enable you to have a constant feedback loop.
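Here is a minimal sketch of what one such monitoring check could look like, assuming you log a fairness metric after each evaluation run. The metric, the threshold, and the sample history are all invented for illustration.

```python
# Illustrative only: alert when a monitored metric falls below an
# agreed floor. The 0.80 floor and the history are made-up values.
FAIRNESS_FLOOR = 0.80

def check_metric(name: str, history: list[float], floor: float) -> None:
    latest = history[-1]
    if latest < floor:
        # In a real system this would open an alert or a ticket.
        print(f"ALERT: {name} dropped to {latest:.2f} (floor {floor:.2f})")
    else:
        print(f"OK: {name} at {latest:.2f}")

weekly_fairness = [0.91, 0.89, 0.84, 0.76]
check_metric("fairness", weekly_fairness, FAIRNESS_FLOOR)
```

The same pattern works for accuracy, security signals, or resilience indicators; the point is that the check runs continuously, not once a year.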
And lastly is managing the AI risk. So now you've identified the risks and you've measured them. This is where the mitigation comes in. You would develop strategies to mitigate the identified risks. You could maybe retrain the AI model to reduce bias, put in more security controls, or put in human oversight if the AI is reaching wrong decisions. These are all the things you would do. You would, of course, need resources for doing this, and for making sure that the risk mitigation is happening. You would communicate about these AI risks to your stakeholders. If there's some issue happening, how do you tell the customers, right? And this is done through policies and procedures. And of course, this whole process would be cyclical, right? You would do regular reviews to make sure that everything is being done in a proper way. And of course, lastly,
same as with the EU AI Act, you would have to drive a cultural change, right? Make sure that you have trainings happening at the executive level and the mid-management level, for both technical people and security people, and of course your leadership needs to be involved. So this is like a crash course in implementing the NIST AI Risk Management Framework. Now let's take an example of a company, instead of me just rattling off what you have to do. So maybe you have a mid-sized company specializing in AI-driven analytics solutions. And they want to implement the NIST AI Risk Management Framework because they see the benefit of it. They want to make sure any risks that are there get mitigated, right? So the first step would be, of course, developing workshops and training sessions
led by experts. They would hire people to deliver them, and this training would go over the core components: govern, map, measure, manage, what we talked about, right? And then of course they would do an assessment; they would do an AI audit. They would conduct an audit of all the AI systems to see what their purposes and potential risks are. They would maybe use existing tools and get input through questionnaires on how these systems impact their operations and what the potential risks are. And then they would start implementing the core, beginning with Govern. They would update all their policies. They would hire people. They would create a framework, right? And they would set up clear roles and responsibilities: look, this person's approval is needed before any new AI system goes live. And that person would make sure that any AI system, when it goes into production, is meeting compliance with the NIST AI Risk Management Framework and with your internal policies and checklists. And the next step
would be mapping. So like we talked about earlier, they would assess the risks and benefits of each AI system: do a proper risk assessment and document all the findings. You would have a complete list of the risks which have been identified in this phase. And then the next step would be measure, where you start measuring them, right? You would decide the metrics. How do you decide if a system is trustworthy? Is it secure? And how would you monitor these sorts of things? So all those criteria would be set down. The next step would be managing. Now you would have to mitigate, right? How would you put in the controls? If they have an AI system which is not secure, or which is not fair, or which is not documented, they would need to put in those controls, and periodically review and update their risk management strategies to make sure that they stay compliant. And we talked about this
earlier: ongoing training, because you don't do training once and then forget about it. Otherwise, there will be no cultural change. So all of these things you will have to do. And what are the benefits? You would improve your risk management. You would increase the trustworthiness of your systems. You would increase your regulatory compliance. Like I talked about earlier, the NIST AI RMF aligns very nicely with the EU AI Act. And you would have a proper strategic use of your AI systems, and your company basically becomes more resilient to any risks which might be there. So it's just benefit upon benefit, right? So this is what we talked about here: a sample case study with the key outcomes for each step. Use it as a starting point. I'm not saying
that you now understand everything about the NIST AI RMF. I have other courses which deep dive into the framework. Remember what I said earlier: this complements the more high-level guidance of the EU AI Act, and you can use both. It's not one or the other. The EU AI Act is mandatory; the NIST AI RMF is optional, but it's more thorough and more detailed. Use it: put the EU AI Act at the top and use the NIST AI RMF to create that framework which you need. I hope this was very useful to you. Now we cover another framework and another standard in the next lesson. Thank you, I'll see you there.
8. 8 ISO 1: Hello, hello everybody. Welcome to this new lesson, which is about a new standard: ISO 42001, which covers AI management systems. By now, we've covered a few frameworks and regulations, and you should be getting an idea of how these work and what key things they cover. Now we're covering a very, very important one which has recently come out, which is ISO 42001. If you've ever worked in cybersecurity or quality, you would be familiar with ISO standards from before: ISO 27001, ISO 9001. We're going to cover how to use the standard, understand what an AI management system is, and do a case study for implementing it, right? So first of all, let's take a step back
before I jump into this: what is a management system, right? Management systems are designed to help companies improve their performance by specifying a series of structured processes and practices. And management systems share a common high-level structure, which is standardized across the ISO standards. Why is this? Because it ensures consistency and compatibility. So it becomes easier: if you're certified to ISO 9001, getting certified to ISO 27001 will be easier, right? Because the same clauses are being followed, and the same structure is being followed. So you can create multiple management systems. One is a quality management system, one is an information security management system, and now you have an AI management system, right? And each of these management systems is domain specific, as mentioned there. So you have ISO 27001 for security. You have a privacy one. You now have an AI one. You have a business continuity one. And this allows companies to address specific aspects of their areas, right? You don't want one management system which covers everything; you want to focus on privacy, or security, or AI. Management systems also have the concept of continuous improvement. They emphasize a repeatable process of continuous improvement, which is captured in something called Plan, Do, Check, Act. We're going to cover that. This approach basically encourages companies to continually assess and enhance their practices to achieve better results over time. It's a risk-based approach. These standards adopt a risk-based approach to management, requiring companies to identify, assess, and manage the risks which are specific to their domain. It's risk based: you don't just implement everything; you implement based on risk. And lastly, and importantly, there's certification by third parties. Once organizations have developed and implemented a management system, they can choose to get certified to a management system standard by independent third parties. This usually involves an external audit to verify that the company's system is actually complying with the standard. And this really helps to boost the company's
profile and credibility. If you are in security, it's ISO 27001; if you're in business continuity, I think it's ISO 22301, I might be wrong there; and if you're in quality, it's ISO 9001. And now we have an AI standard, which is why it has become so important, right? And it's become big news. So these are the benefits of management standards. Like I said, they are recognized internationally, across the world. Everybody knows what ISO 27001 is. Customers rely on ISO standards for assurance and trust, right? If you go to a kitchen and at the top of the kitchen it says ISO 9001 certified, you know that kitchen is following a quality standard. Similarly with a company that is ISO 27001 certified. And along with all the benefits and the matured processes, companies can also use these standards as a competitive advantage. Right? They can proudly tell the customer, look, I'm certified to ISO 27001, which is the standard for security. So it really helps you to stand out within the market. And now we are reaching
ISO 42001. This is referred to as an AI management system standard, a relatively new thing. It's a new framework which is designed to address the rapid development and adoption of AI across various industries, right? And it defines an AI management system. What is that? It's a comprehensive framework designed to provide policies, goals, and procedures for the responsible development of AI; the whole thing we were talking about earlier with the EU standard, right? And it helps you put in an ethical standard so you can create trustworthy and ethical AI systems. And the best part is that because it provides a structured approach for companies to put in these processes, like trust, ethics, and risk management, it actually aligns with the EU AI Act and the other ISO standards, because it's following the same things. Security. We talked about transparency earlier; we talked about bias and how to secure these AI systems. All of these things are encapsulated here at a very high level. And if you are working to be compliant with the EU AI Act, you can also put in an AI management system to make sure all these processes are being followed, and for the risk management you can use NIST. We're going to talk about how all these different standards can come together. So this is what it looks like. You can go to the ISO website and do a free preview, just to get a better idea of what an AI management system is and what it will
look like, right? So this is the high
level structure. It's very similar to ISO 27001: it has high-level clauses and an annex, which has a listing of the controls pertaining to AI policies, the AI system life cycle, impact analysis, and data management. They have multiple clauses. At a high level, the clauses cover things like the context of the organization (how the organization is working), leadership, planning, and support. And then you go down to the annex, where it goes into detail about the controls that will be there. As leadership, you will have to decide: okay, what do I have to implement, right? What is the scope of the AI management system? And if I decide to go through with it, what are the controls I will need to put in?
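To make this concrete, here is a minimal sketch of what a control-selection record (similar in spirit to a statement of applicability) could look like. The control names and decisions are invented for illustration; the real Annex A controls come from the standard itself.

```python
# Illustrative only: tracking which AI controls are in scope and why.
# Control names and decisions are made-up examples, not the actual
# Annex A text of ISO 42001.
controls = {
    "AI policy defined and approved":      ("in scope", "required for all AI work"),
    "AI system impact assessment":         ("in scope", "customer-facing models"),
    "Training data provenance documented": ("in scope", "bias risk identified"),
    "Third-party AI supplier checks":      ("out of scope", "no external AI suppliers yet"),
}

for control, (status, reason) in controls.items():
    print(f"{status:12} | {control} ({reason})")
```

The important habit is recording not just what you implement, but why something is excluded, since that is exactly what an auditor will ask about.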
It's a very mature standard; it has the full power of ISO behind it. And that's why it's such a big deal: because now companies who are developing these AI systems will be able to get certified to a very mature standard, and actually promote themselves as being certified when
it comes to AI. So now, we've covered this at a high level: we've talked about the benefits, what the ISO 42001 standard is, and what its structure is. Why don't we take a look at a case study and see how a company that wants to implement it would go about it: what are the steps they will have to take? I'll see you in the next lesson. Thank you very much.
9. 9 ISO 2: Hello everybody. Welcome. Welcome to this lesson, which is a continuation of the previous lesson, on the ISO 42001 standard. Now we're going to take a company, a fictional company, and see how the standard gets implemented. One thing is very important: as of February 2, 2024, this standard has only just come out, and companies have not yet been certified or audited against it. But that's just a minor point, because they are going to get certified, and you should know what the process is. Whatever standard you choose, when it comes to ISO, it follows Plan, Do, Check, Act. This is what they recommend. What is it? It is a four-step management method used for continuous improvement of processes and products. When it comes to the ISO standards, I call it the PDCA cycle. It gives you a systematic framework for implementing and maintaining these management systems, like ISO 9001 and ISO 27001, and now the AI management system, right? And it's integral to making sure that the company continually keeps improving; they don't just get audited and then forget about it, right? So the first step is Plan, where companies start identifying what they want to do and the processes they need to deliver: what's the scope? They assess themselves to find out where the gaps are. The second step is Do, where they start implementing the planned processes: you execute and act upon what was decided. The third step is Check, where they start monitoring and measuring performance against the policies and everything, right, to make sure things are working properly or not. The last step is Act, which is where, based on the analysis, they keep on improving: the continual improvement of the system, the processes, and everything else. So this is how the PDCA cycle works. It's not complicated, it's
very straightforward.
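If it helps to see the loop written down, here is a tiny sketch of PDCA as code. This is purely a teaching illustration of the cycle's shape, not anything the ISO standard specifies; the task descriptions are made-up examples of AIMS activities.

```python
# Illustrative only: the PDCA loop as a repeating four-phase cycle.
pdca = {
    "Plan":  "define scope, policy, and objectives; find the gaps",
    "Do":    "implement the planned processes and controls",
    "Check": "monitor, measure, and internally audit the results",
    "Act":   "apply corrective actions and refine the system",
}

for cycle in range(1, 3):  # in reality this repeats year after year
    print(f"--- improvement cycle {cycle} ---")
    for phase, activity in pdca.items():
        print(f"{phase}: {activity}")
```

The key point the code makes is that the four phases repeat; certification is not a one-off event.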
So let's take a company and see how they would go about implementing it, okay? So this is the company we are talking about: GreenTech Innovations. Now, GreenTech is a mid-size technology company focusing on AI-driven solutions for the sustainable energy market. Green energy, sustainable energy, which is a very hot industry right now. And they have embarked on a journey to implement the ISO 42001 standard. Why? Because they really want to show that they're serious about their AI obligations and responsible AI, and they really want to be at the forefront of this industry. And they realize that getting the ISO certification will really help them get that competitive edge. It really demonstrates to their customers that they know how important responsible AI is, and that they have a mature AI management system. And they look after things like smart energy grids, renewable resources, and energy consumption, right? So how would they go about it? Well, first of all, when it comes to their AI initiatives, what was the need for
implementing this standard? They had ethical concerns, right? They didn't know whether they were meeting all their ethical responsibilities or not. When the AI systems are making their decisions, are they doing it in a proper way? Is the data being collected properly? And the other thing is that AI is changing so fast, they want to have some sort of mature framework there, a management system which will guide them. And they want to make sure that it integrates with their already established management systems, like ISO 27001 and ISO 9001. This is where the
background starts. The first step will, of course, be Plan, where they're establishing the AI management system. The most important thing, first of all, is the scope. You don't have to implement a management standard across the whole company. You can choose a system, a business unit, a particular area, or a particular department. And usually the recommendation is that you start small and grow it more and more each year. Okay. And so that's what they first did: they decided on the scope. Then GreenTech's leadership team developed a proper AI policy that aligned with their standards, and they followed the clauses which are there in the ISO standard. They established the context in which their AI operates: what are we doing? What are the external and internal factors that are impacting us? And they conducted workshops and surveys to understand the expectations of employees, customers, and partners, right? So they created a
whole ecosystem for AI. The next step was Do, where they did something called an impact and risk assessment. Impact assessments are a big thing in the AIMS: evaluating how AI systems affect people individually, as groups, and as society as a whole, and making sure that you're thinking about safety and transparency and putting those controls in place. You do that impact assessment and the risk assessment to make sure that you're aware of what the risks are within your AI systems. Then, based on that, they will decide on what controls to take from Annex A, the annex of the standard. It has a list of controls, right? This is where they'll focus. For example, maybe they don't know about their data management: is our data being collected in an open, accountable, and transparent way? Is the training data prepared properly? Is the preparation documented? We don't know, right? So this is where they will want to focus, because that is where the impact is; that is where the risk is. Then they will run their training programs, because the AI management system is a system; it's not a solution, it's not a product. You need to create a culture, to educate employees about proper AI practices and the importance of ethical considerations
in AI systems, right? The next step is Check, which is performance evaluation. So now they're going to implement tools and procedures to make sure that this AI management system is getting monitored: how it's performing, whether it's working or not, and whether the AI systems are being impacted or not. And not just monitoring; they're going to do audits also. They have established a schedule for internal audits: internal teams are going to check whether the AI management system is compliant with the standard or not, to make sure they're actually meeting the standard and following its processes. Last is Act, which is continual improvement. They create feedback mechanisms: they're going to take surveys, and they're going to send out questionnaires to the customers and the users, to find out whether the applications are working as expected and whether the AI management system is efficient or not. Based on the resulting list of corrective actions and corrective feedback, they're going to keep on refining, taking this performance data, and they're going to keep on
improving year after year. Most importantly, if they want to get audited, they would have to choose a proper certification body. Think about things like years of experience and number of audits. And this will be difficult right now with ISO 42001, because it is very newly published, but you can look at a certification body which has a solid track record with ISO 27001 and ISO 9001. Then you can do something called a pre-assessment, which auditors offer. They can use it to do a quick audit to tell you whether you're meeting the requirements or not. You can use this as a way to fine-tune your processes and understand what the auditor will look at. Then, of course, you do audit preparation: make sure that all the gaps are identified and fixed. You do your certification, and remember that the certification has to be maintained and improved over time. This is nothing like, what do you call it, rocket science. It's a very clear
and defined process. What are the results of all this? What's the output? You'll get more trust and more credibility, because you've reached the major milestone of becoming certified to ISO 42001. You'll get better risk management, because you now have a proper management system in place, in which risk management and everything else is ingrained. And of course, your processes have improved and you have a proper framework, right? A proper, mature management system in place. Now, when they explore new AI opportunities, they have that confidence, right? They know: we have a proper governance framework in place which is going to tell us if an AI system is insecure or is not meeting the requirements. So that was it for ISO. Now we're going to jump into a more technical framework, which is Google's secure AI framework. And I'll see you in the
next lesson, everybody. Thank you.
10. 10 Google SAIF 1: Hello everybody. Welcome. Welcome to this lesson, which is about the Google SAIF framework. Now, this is a framework from Google, and it is a very unique framework compared to the others we discussed before. Why? Because it is very heavily security focused. The other ones we talked about focus more on risk, more on governance, and more on management systems, right? But this one, the Google SAIF framework, is very much aligned to the security dimensions of AI. When you talk about building AI systems, it is focused on securing AI, and that's a great thing. That's why I've included it in this course. Why? Because I do feel that sometimes the AI frameworks are a bit too high level and don't go down deep into the securing of AI. And this is where the Google SAIF framework can come in. And this is absolutely an amazing framework. You should study it, especially if you want to use it to supplement things like the NIST AI Risk Management Framework we talked about, or ISO 42001, right? The Google SAIF can complement them in the most excellent way, because it is more at the ground level when it comes to securing AI. And that's what we're going to talk about here. In this lesson, we're going to talk about the framework, the key concepts, and of course, as always, how to implement it using a case study. Without delaying it, let's talk about the
framework first. So what is the Google Secure AI Framework? That's what SAIF stands for: the Secure AI Framework. It is a conceptual framework for securing AI systems that draws from Google's security best practices. It's inspired by the security best practices that they've applied to things like software development, but focused on AI. Why? Because it's quite important, right? Over the years at Google, they have seen the types of attacks that can happen, and they've designed this framework specifically to mitigate risks pertaining to AI systems: stealing of the model, data poisoning, prompt injection, all these sorts of things. As AI capabilities become more and more integrated into systems across the world, the security of these systems will become much more important. If you've been following me on any platform, you know I talk a lot about AI security, because I feel this is an area which is very underrepresented, and not that many people have expertise in it. So SAIF draws from Google's security best practices and addresses AI-specific security concerns, focusing on access management, network and endpoint security, and application security. So this is what we're going to talk about here. And what does it look like at a high level? These are the six
core elements of Google's Secure AI Framework. Let's look at them one by one. The first one is: expand strong security foundations to the AI ecosystem. What does that mean? It means leveraging your secure-by-default infrastructure and protections to protect AI systems. All that cybersecurity which has become so mature over the years, you need to make sure it covers AI systems also, right? So let's take an example. We know about SQL injection, right, where you have to sanitize the input. Now you need to apply that same input-sanitization mindset to protect against techniques like prompt injection.
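As a flavour of what that could look like in practice, here is a minimal, assumption-laden sketch of a keyword-based input check in front of a model. The phrase list is invented for illustration; real prompt-injection defences need far more than keyword matching, and SAIF itself doesn't prescribe this code.

```python
# Illustrative only: a naive pre-filter for user input sent to an LLM.
# The suspicious phrases are made-up examples; keyword matching alone
# is NOT a sufficient defence against prompt injection.
import re

SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (the )?system prompt",
    r"disregard your rules",
]

def screen_input(user_text: str) -> bool:
    """Return True if the input looks safe enough to forward."""
    lowered = user_text.lower()
    return not any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)

print(screen_input("What is my account balance?"))           # True
print(screen_input("Ignore previous instructions and ..."))  # False
```

Think of this the way you think of input sanitization for SQL injection: one layer among several, sitting alongside output filtering, least-privilege access, and monitoring.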
The second element is: extend detection and response to bring AI into an organization's threat universe. You know how timeliness is critical when it comes to detecting and responding to AI-related cyber incidents. So you want to extend your threat intelligence. For a company, this can include monitoring the inputs and outputs of generative AI systems to detect anomalies, like somebody trying to mess with the AI system, and using threat intelligence to anticipate such attacks. So you can collaborate with your SOC team on this, right?
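Here is a tiny sketch of what such input/output monitoring could look like, assuming you log every exchange with the model. The anomaly signals (unusually long inputs, filter hits) are assumptions chosen for illustration, not a SAIF requirement.

```python
# Illustrative only: log model traffic and flag crude anomaly signals.
# The 2000-character threshold is a made-up assumption.
MAX_INPUT_CHARS = 2000

def monitor_exchange(user_input: str, model_output: str) -> list[str]:
    """Return a list of anomaly flags for this exchange."""
    flags = []
    if len(user_input) > MAX_INPUT_CHARS:
        flags.append("unusually long input")
    if "BEGIN SYSTEM PROMPT" in model_output:
        flags.append("possible system-prompt leak in output")
    return flags

flags = monitor_exchange("hi" * 1500, "Hello! How can I help?")
if flags:
    print(f"Forward to SOC: {flags}")  # e.g. ['unusually long input']
```

In a real deployment these events would feed your SIEM, so that AI abuse shows up in the same place as the rest of your threat telemetry.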
The third element is: automated defenses to keep pace with existing and new threats. AI innovations can improve the scale and speed of your incident response, right? And cybercriminals also know this; that's why they're going to use AI to scale the impact of their cyber threats. That's why it's so important to use AI as a defense: you'll use AI to combat AI-specific threats. Okay? Don't just think about AI as something that can be attacked. Think about it as something that can protect your systems also. Okay. The fourth element is: harmonized platform-level controls to ensure consistent security across the organization. Consistency across your platforms can support AI risk mitigation. So you want to make sure that whatever protections you already have, you've extended them to your underlying platforms. For example, if you are using AWS SageMaker, it has to be hardened according to best practices, right? And you make sure that all your hardening and all your configurations are consistent across the organization. Okay, number five is:
adapt controls to adjust mitigations and create faster feedback loops for AI deployment. Constant testing of implementations through continuous learning can ensure that your detection and protection capabilities are addressing the new types of threats that are coming out. So you can use things like machine learning based on incidents and user feedback. You can update your training, update your AI, and fine-tune the models to respond strategically to new types of attacks that are happening. Right? AI can detect anomalous behavior and adjust controls in real time. And that is something which is quite new. Previously we had to manually change controls, raise or lower our security controls. With AI, you can completely automate it. And number six is:
contextualize AI system risks in surrounding business processes. Conducting end-to-end risk assessments relating to how your company is going to deploy AI will help inform your decisions, right? You need to do an assessment of the business risk: data validation, what sorts of things are happening. It's basically about risk here, the same thing we talked about within the AI Risk Management Framework: making sure that everything is driven through a risk-based process. So that is, at a high level, what the Secure AI Framework is about. As you can see, it is more technical: it goes down to the platform level, down to what attacks exist and how we would go about mitigating them. And how would you implement it? It's pretty straightforward, and it's very similar to the things we've already talked
about before. Step one would be understanding the use: understanding the specific business use that you're using AI for, what data is being used, and what protocols are going to be used. The next one would be assembling the team. AI systems, like we talked about, are quite complex and have a large number of moving parts, right? You want to make sure that whatever team you're using, privacy, IT, all of them are aware of this, and that they form part of a multidisciplinary team that can help to mitigate these AI risks. Then step three, as per Google, is an AI primer: the teams need to be properly trained and made aware of the new types of methodologies and the new types of attacks which are happening across the model development lifecycle. And the last one is applying the six core elements of SAIF, the ones we talked about. And remember, the six elements are not intended to be chronological, okay? You can apply them in any order you want, as long as you apply all six we talked about. So it's not like you have to start from the left and go down one by one. No, you can apply them all together, step by step or sequentially, whatever you feel is more suitable to your particular organization. That was it for the Google SAIF high-level overview. Now we can talk about implementing it like I always do, with a case study, and see how a company would go about
implementing this framework. And I'll see you in the
next lesson. Thank you.
11. 11 Google SAIF 2: Okay everybody. Hello, welcome to this lesson. And now we're going to implement the Google SAIF framework within a company, like I always do. So let's take a look at what we're talking about. Here we have GlobalBank Corp, which is a leading international bank. And they're going to deploy some AI-powered chatbots for personalized customer service. They're going to use natural language processing and machine learning, something like a ChatGPT type of chatbot, to handle various customer inquiries, from account management onwards. And you can imagine just how dangerous it could be if somebody managed to hack into it or bypass the security controls. So given the sensitive nature of the data involved, the bank decides to adopt Google's SAIF framework to safeguard the deployment as it rolls into production. So we're going to use the same steps we
talked about earlier. And we're going to start with step one, which is understanding the use. So what is the bank going to do? They're going to define the specific goals and scope of this initiative, i.e., the chatbot. So what's the objective? They want to provide 24/7 personalized customer service, but they want to maintain the highest level of security and privacy. The data requirements will be customer interaction data, account details, transaction history; all sorts of very sensitive data, right? And the expected outcomes will be reduced response times, customer satisfaction, and data security. Using all of this information, the bank is going to do a proper risk assessment, right? To identify potential threats, like what could happen, before they go on to the next step, right? So they're going to do a proper risk assessment and make sure that they understand what the risk is. The second step is assembling the team, which we
talked about earlier. They're going to assemble a proper cross-functional team. You could have AI and ML specialists; IT security experts, to make sure that everything is secure and the chatbot is not going to get compromised through things like prompt injection; compliance officers, because you want to make sure the EU or NIST requirements that have to be complied with, or the ISO ones, are being followed; customer service managers, because they are the people who are going to be interacting with this the most; and privacy advocates, to make sure that the data is properly collected. So we have a diverse team, which makes sure that when the chatbot is being deployed, all these multiple stakeholders are involved with it. Step three is to level set with the AI primer which we talked about: a proper AI training program. AI and machine learning fundamentals, natural language techniques, security risks specific to AI, and ethical AI usage. This will make sure that the team has a proper understanding of the technical and the ethical aspects of the project. This is what the Google
SAIF framework recommends. Step four is where we get down to the six core elements. The first one was expanding the strong security foundations. They're going to evaluate the existing security infrastructure on which the AI platform will be hosted: encryption, securing APIs, proper authentication. The second is extending their detection and response. They're going to develop AI-specific threat detection so that, whatever is going back and forth from the chatbot, if any malicious activity shows up, the security team gets alerted immediately.
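As one narrow illustration of that idea, here is a sketch of an output guard that redacts likely account numbers before a chatbot reply leaves the bank, and flags the event for review. The regex and the notion of a ten-digit account number are assumptions for the example, not part of SAIF.

```python
# Illustrative only: redact likely account numbers from chatbot output
# and flag the event. The 10-digit pattern is a made-up assumption.
import re

ACCOUNT_PATTERN = re.compile(r"\b\d{10}\b")

def guard_reply(reply: str) -> tuple[str, bool]:
    """Return (safe_reply, flagged) with account-like numbers masked."""
    safe_reply, hits = ACCOUNT_PATTERN.subn("[REDACTED]", reply)
    return safe_reply, hits > 0

reply, flagged = guard_reply("Your account 1234567890 is active.")
print(reply)  # Your account [REDACTED] is active.
if flagged:
    print("Event sent to security monitoring")
```

A guard like this sits on the response path; the alerting side would feed the same detection-and-response pipeline described above.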
Number three is automating defenses. They're going to leverage machine learning to enhance the defense mechanisms. So they're going to automate the detection of, and response to, security threats within chatbot conversations, to make sure that if any attacks happen, they have the automated response ready; and they can have human oversight for more complex cases. Number four would be harmonizing the platform-level controls. They're going to standardize, as we talked about earlier, the security controls across all platforms which are involved in the chatbot life cycle, whether it's development or deployment. Number five would be adapting controls. As the chatbot system gets more and more integrated, the bank will use machine learning to keep on adapting its security controls as more and more threats emerge, because the security threat environment is very dynamic, especially in AI. And lastly, number six: contextualizing the AI system risk. They can incorporate AI risk management into their broader risk framework. All the learnings that they get from this AI chatbot are going to be considered alongside other operational risks in decision making. And what are the
benefits of this? Of course, you can see they'll be able to successfully deploy the AI-powered chatbots. They're going to enhance their customer response framework without compromising on security or compliance. So this is just to show you a practical way of how you would go about implementing the Google SAIF framework. It is an excellent framework, especially when you use it to complement other frameworks like the EU AI regulation, ISO 42001, or NIST, because it is more technical and goes down to the deep level of how attacks happen. This is what we talked about here: what Google SAIF is, what the key components are, and how to go about implementing it. I hope you've got a good idea of Google SAIF. I would recommend it; it's completely free. Do go and check it out. Read through it and get a good idea of what Google recommends. You have to understand Google: there is a reason they're big tech, one of the biggest companies in the world. They have a wealth of information which they've freely put into this framework. So don't let this information go to waste. Do go through it and understand the framework fully. Thank you everybody, and I'll see you in the next lesson.
12. 12 How To Use: Hello everybody. Welcome, welcome to this lesson, which is about how to use this knowledge. And this is almost the end of the course. Now, one thing which I've found is that people get confused about these frameworks; there are so many of them, right? We talked about NIST, Google, ISO, the EU, all these sorts of regulations. So it becomes a big deal: hey, how do I use this information if I just want to mitigate the risks which are there? There are many, many ways of going about it, and that is the whole point of making this course, right? You have to understand that each of these frameworks fits in a particular way, right? The EU AI regulation is a law, so that's mandatory; you don't really have a choice. ISO 42001 can be thought of as a complement to the NIST AI framework. Similarly, Google's SAIF can be considered more of a lower level, where you think about the security of these systems. So remember, think about the common areas, which are trustworthiness, risk assessments, and governance, common across whichever framework you talk about, and see what the strengths of each framework are. You can implement all of these, or take some of these which are applicable to you, or you can create your own framework which is compliant with all of them. So even if you implement your own framework, you will automatically be
compliant with all of them. And if you want to look at it, this is how I would visualize it. The EU AI regulation would be at the top, because this is a law, right? You have to get it done. Then you can put in an AI management system to have a proper umbrella under which everything operates, and you can do that in parallel to an information security management system like ISO 27001. And believe me, a lot of people have said that with the EU AI Act, when you think about high-risk systems, they would want to have something like ISO 42001 in place. Okay? And then you go down: all of these frameworks talk about risk, right? We want to have a risk management framework, and there is no risk management framework better than NIST's, because it goes down into such a deep level of detail about how to go about implementing it. So they can complement each other, and you can implement a proper risk management framework based on NIST. And last, of course, is more towards the security of the underlying AI platforms, where you can implement the Google SAIF framework. So this is just to
give you an idea. I'm not saying that this is what you have to implement; this is what I would recommend if you want to take the best parts of all of these frameworks, right? And it's not just these frameworks; you have new ones coming out. There's an ISO standard under development which is geared towards AI security guidance, addressing security threats and failures in AI systems. So that's pretty cool. I'm really looking forward to that one when it comes out, right? And I would recommend, no matter which country you're in, you will have a national strategy for AI. I'm based out of London, in the UK, so I'm looking at the national AI strategy, which has a ten-year plan to make Britain a global AI superpower. Within the US, they have an executive order for the safe, secure, and trustworthy development and use of AI. I would recommend going through that. This is not a law, but many of the laws will probably be derived from it. If you're in the UAE, in the Middle East, where I used to work before, you have a national AI strategy for 2031, right? All of these things, just read through them to get a better understanding. And you will see a lot of commonality, common patterns within all of them, and it'll give you a very good idea of how they're working. Okay. So I hope that
was useful to you. Now we're going to wrap up this course. I hope it was very useful to you and you got some tangible knowledge out of this. I'll see you in the very next
conclusion of this course.
13. 13 Conclusion: Hello everybody. Welcome to the conclusion of this course, where we are wrapping up. I hope this was useful to you. One thing I would recommend is to continue your learning journey. AI is a very fast-evolving field, and a lot of what you learn can date quickly; that's why I deliberately keep this course high level. I don't want to talk about a specific technology, because whatever technology I talk about is going to get outdated very, very fast. But remember, AI is here to stay, and a lot of new jobs are going to get created. So stay updated with industry trends. Stay updated, keep learning, and you will keep growing. So congratulations for taking this course. I try not to make it a huge course, like 4 or 5 hours; I don't like those courses, because people get bored. So I hope you got some tangible information from this. Remember the project: I want you to take this information and create your own framework, your own AI governance framework. Or take one standard, like Google SAIF, ISO, or NIST, and try to apply it within your company and see how it works. So that wraps it up, everybody. I hope this was useful. If you want to reach out to me, I have a newsletter on Substack, and I'm there on LinkedIn and on YouTube. I'll try to put the links there and you can reach out to me. Always happy to hear from my students. Please do leave a review if you thought this course gave you some good, tangible information. If you thought this was the worst course ever, then please tell me; always happy to get some feedback. Thank you very much, and I wish you all the best in your AI governance and risk management journey. Okay, I'll see you in the next course. Thank you everybody.