Transcripts
1. Welcome to this course!: Welcome to APIs,
the basics and more. The only course you need to
understand APIs from scratch. I am Alex, a senior
software developer that has consulted and worked for some
of the biggest tech firms, both in London and
Silicon Valley. I left my job last year to teach full-time and show others how to succeed in this industry. And since then, 20,000 programmers
have taken my courses. You see, I've been in
your shoes before. There is so much information out there on the Internet: tons of mini courses, video tutorials, classes. But when you're just starting
out, what do you pick? You need robust material, much like the one
presented here. And I'm not the only
one who is saying it. Most of my reviews
can confirm it. Most courses on this topic just don't cover the full spectrum of technological
particularities to become a master in APIs or even
understand what they are. Now in this course, we won't be taking
any shortcuts, but we won't be
wasting any time on information that is
outdated either. It's all about efficiency. If you value your time, everything in here is directly
related to what you can do as a developer to jump start
or grow into your career. You are going to
learn real skills that are in demand right now and used by some of the
biggest firms like Google, Amazon, Facebook, or Netflix. We will go through all the notions from the
definition of an API, Web Services, Data file
formats, and API Security. Now, when you enroll, you will get lifetime access
to tens of HD videos, as well as coding exercises
and related articles. I also update this
course on a regular basis. Since its inception, the number of lessons
has doubled in size. Remember, nobody is
born a developer. We all started at zero. Nobody is good at anything at first. You will struggle, you will make mistakes, but with each video you will learn more
and more each time, making your brain just
a little bit smarter. And I guarantee you that I
will help you get there. Whoever you are, wherever
in the world you are, you can become a
software developer.
2. APIs Briefly Explained (4 Complexity Levels): Even though I went to university
for computer Science, I always found myself a bit
confused by the term API. As this concept really
starts to blend in with the hands on
practice of coding, it wasn't clearly
explained to me anywhere. In this video, I will try
to explain this notion to you at four different
levels of complexity, starting from a five year old
all the way to someone who has a somewhat tech related
background. Five year old: let's say you are a
teacher that needs to make a lesson about severe
weather phenomena, but you only studied
hurricanes and thunderstorms. You don't know anything
about tsunamis. There would be the option
to go study them yourself, which would take
at least a month. Or you could just call a friend that knows
all about them and would be happy to explain this phenomenon in
your lesson himself. 16 year old: you're building a very important
high school project. In order to score
the maximum grade, you need to also have a
small subsection about a topic you are not really familiar with and the
deadline is tomorrow. Last minute, you remember that your older brother already
did this project and you can reuse his subsection
of the project in yours after he agrees. 25 year old: let's say you are coding an
application from scratch. The application is meant to
systemize the productivity of your users by implementing the pomodoro technique
in their work blocks. You code it up, but you realize that it would
be nice to also have the weather displayed in a corner just in
case it is sunny. Their five minute break might be taken outside
on their balcony. This would be the case where
you would use a weather API. It would return the current
weather of your user based on their location and
the specific time of day. Someone who has a somewhat
tech related background: API stands for Application
Programming Interface, and represents a set of
methods or functions called endpoints that are made
available to be requested. You can think of
these endpoints as functions or methods that
are connected to the web. And you can call them any
time in order to obtain information from them by giving them the
right parameters. As you imagine, this is very
useful for two reasons. Firstly, you skip implementing the entire functionality that you are calling the API for. This approach saves
time that can go into developing other features
related to your app. And secondly, you
are assured that the code of the API
you are calling is optimally written
and will take the least amount of time
possible to execute, making your application
smoother and faster to load.
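To make this concrete, here is a minimal sketch of such an endpoint, assuming Node.js with the Express package installed; the /hello route and its name parameter are invented for illustration:

```javascript
// A function "connected to the web": calling GET /hello?name=Ana
// over HTTP runs this code and returns { "greeting": "Hello, Ana!" }.
const express = require('express');
const app = express();

app.get('/hello', (req, res) => {
  const name = req.query.name || 'world'; // read the query parameter
  res.json({ greeting: `Hello, ${name}!` });
});

app.listen(3000, () => console.log('API listening on port 3000'));
```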
3. What is an API? (In-Depth Explanation): Welcome to this lesson about the programming
concept of an API. Here we will look
at what exactly is an API and how
it is useful in the current Internet context, as the coding space becomes ever more in need of new developers. By understanding this, you will get a clear edge in front
of those who don't, especially given how much of an important piece
this represents. In the bigger scheme of things, I am a developer that has been writing code
for three years. Even though I went to University
for Computer Science, I always found myself
puzzled by the term API. As this concept really
starts to blend in with the hands on
practice of coding, it wasn't really clearly
explained to me anywhere. This is exactly the
reason why I felt the need to make this simple lesson meant
to get you started. Starting off with
the abbreviation, API stands for Application
Programming Interface. And represents a set of methods called endpoints that are made
available to be requested. You can think of
these endpoints as functions or methods that
are connected to the web, and you can call them
anytime in order to obtain information from them by giving them the
right parameters. APIs were made
available to make communication easier in
between web services. Let's imagine you are a coder that wants to implement a new simple project, a Pomodoro clock, to help the users concentrate for 30 minutes straight, in your web application. You also want to let them know the current weather based on their location. To do that, you would call the endpoint of a weather API that will look something like getWeather, and pass as parameters that user's location and their current time.
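As a minimal sketch, assuming a hypothetical weather service: the URL, endpoint name, and parameter names below are invented for illustration, since every real weather API defines its own:

```javascript
// Calling an imaginary getWeather endpoint with the user's location and time.
const params = new URLSearchParams({
  location: 'London',             // the user's location
  time: new Date().toISOString(), // their current time
});

fetch(`https://api.example-weather.com/getWeather?${params}`)
  .then((response) => response.json())
  .then((weather) => console.log(weather)) // e.g. { tempC: 18, condition: 'sunny' }
  .catch((err) => console.error('Weather request failed:', err));
```

You can now see the power and leverage that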
an API can provide. By using them, you can skip
implementing entire chunks of important functionality that would otherwise take
you quite some time. This way, if you
choose a great API, you also have a peace of mind knowing the implementation
is done flawlessly. There are APIs for security, like a free SMS two factor authentication for your app, APIs that have huge databases behind them on different topics, ready to be queried by you, and so on. Chances are most
likely there is an API on the topic of your current
project that can help you. You just have to know how to
take full advantage of it, and this is what we
will learn here. A great API also has
great documentation. The documentation is made up of explanations for each
endpoint on how to call it, what it will return, and what parameters you should pass to make that possible. There is an abundance
of free API's over the Internet that you can call to get started and
see how they work. They also have the documentation needed to make your
journey easier. Code snippets on
the exact code for the call are made
available in some places. The one that I find
myself using the most is a website
called Rapid API. You can go ahead and
check out their hub. I appreciate you sticking
with me until the end, and I look forward to seeing
you in future lectures.
4. Best APIs + How to call them: Hello guys and
welcome to this video where we're going to
talk about how you can get access to some public APIs quite
easily and called them with not much of
a headache as they have their documentation
made available. And for that, we are going
to head to Rapid API, which is an online tool
that I find very useful. It has a hub that has hundreds and thousands
of public APIs, as I've said along with
their documentation. And also ways for
you to call them in any language that you might
have your application in. So you don't have to look
any further than this. Besides their hub,
they also have a blog and they have
interesting articles here, like the top 50
most popular APIs. And if we take a look, they have all sorts
of APIs going from flight information, to weather, to the NASA API, the URL shortener service, the NBA, music APIs like Genius for getting lyrics to your favorite song, and things like that. All you have to do is search for the API that you need here. You can see that you can
toggle on the public APIs and go on to find the one that
would fit in your project. And you can just
call it like that. You need to pay attention. Because some of them, even though they are public, they charge you some money after an amount of requests
made to them. So for example, here we have the recipe foot nutrition API, and you need to
subscribe the test. And these can see you have
the basic which is free, but you are going to be charged a very
little amount after you surpass 500 requests per
day on the results endpoint. But with that being said, some of them are completely
free and you don't need to be worried that you might
get charged at some point. For example, let's take
the Google Translate API, which I know for a fact is free. We can go on and subscribe for free. As you can see, it did not prompt me to enter my
credit card details. And the nice thing about Rapid API is that besides the endpoints, it also has the API documentation made available for you. You can see we have three endpoints here, detect, languages, and translate, and each of them is documented here. Things along the lines
of what parameters you need to put in in order to
get a successful request. If we take a look at
the detect endpoint, for example, which is post, this is supposed to
detect the language of the text you are
sending as a parameter. In Rapid API, you also have header parameters that you need to send in order for the API to know who you are and where you are calling from. You can think about these as the API key: you are going to use the X-RapidAPI-Key and X-RapidAPI-Host headers in order for them to know where you are calling from and who you are, basically. But besides that, this method, as I said, has one parameter, and that is called 'q', apparently. And this is a text that you can write in the
language of your choice. And what it will return
is the language which it is written in if
it can detect that. And as you can see, if we write 'English is hard, but detectably so', we can test the endpoint. We get back the EN language, which stands for English. The code snippet for this, meaning the code that you need
to write in order to call this exact endpoint of this exact API is given by
them and you can just copy it. And besides that, you also have, as mentioned before,
a wide array of languages that
you can choose from. For this, we're going to go with JavaScript since it is much easier and you don't
actually have to install anything to call it. And we are going to go
with XML HTTP request. This is the code
we're going to put in a script that is in the
JavaScript language. And it is going to call the
API for us with no problems. It sets up the request
headers as well. So it gives it the
rapid API key and host, and also the data, meaning the parameter for
which to call the endpoint. And this is English, is hard, but detectably. So as you can see, to actually call this endpoint
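For reference, here is a hedged sketch of the kind of snippet Rapid API generates for this endpoint; the host, path, and key below are placeholders, so copy the exact values from the code snippet panel on Rapid API:

```javascript
// Calling the detect endpoint with XMLHttpRequest.
const xhr = new XMLHttpRequest();
xhr.open('POST', 'https://google-translate1.p.rapidapi.com/language/translate/v2/detect');

// These headers tell Rapid API who you are and which API you are calling.
xhr.setRequestHeader('content-type', 'application/x-www-form-urlencoded');
xhr.setRequestHeader('X-RapidAPI-Key', 'YOUR_RAPIDAPI_KEY'); // placeholder
xhr.setRequestHeader('X-RapidAPI-Host', 'google-translate1.p.rapidapi.com');

xhr.onload = function () {
  console.log(this.responseText); // should report the detected language, e.g. "en"
};

// "q" is the text whose language we want detected.
xhr.send(new URLSearchParams({ q: 'English is hard, but detectably so' }).toString());
```

So as you can see, to actually call this endpoint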
from our local machine, we're going to head
to the terminal. And I already did this. You can do the ls command in order to see what directories you have at hand. You can change directories with cd to whatever directory you wish this folder of this project to be in. I made it on the desktop, and then you can mkdir in order to make a folder and then write its name; I named it API calls. And then, after changing directories into this API course folder, you can touch an index.html and a script.js. You can create these with the command touch and then the name of the file, so index.html. I already did this. Going even further, I open them with
Visual Studio Code. And as you can see, I have a basic snippet
of an HTML document. These are all local, so we're not going
to deploy them. This is very simple
and beginner friendly. Oh, and here is where I referenced the script.js, and I left the remainder of the body empty. If we go to the script.js, I just copied the contents of the API call
given by rapid API. This is actually
another API call, but we are going to
update it with this one. And when you open index.html
file in your browser, you can see that we
can reload this. And if we press the button
to open the console up, you can see that we have the response given
by the endpoint. At line eight we have the console log, meaning it logged into the console the response of the endpoint, which is similar to
what we had here, if we call it, you can see it had the
language en for English. And here, this is
what we get as well. It also gives you reliability and confidence
variables in order to better assess its confidence
and reliability in the fact that this
is the language of the text that you
gave as a parameter. So there is a wide array in this hub besides Google Translate: you can check flight data, you can get city data, you can get a free SMS at your phone number to two-factor authenticate an application. So that might be very
useful if you are implementing the security
part of your project. And you don't want to have any headaches by
implementing it yourself, or you want
a quick way for two-factor authentication
to be available to you. But besides that, you can also make an account
with rapid API and check them out if you want to take a look at
the other endpoints. But besides this, I
thank you very much for sticking with me up to
the end of this video. If you enjoyed it or have any other ideas regarding
videos about APIs, feel free to leave them in the comments and until next
time, Have a good one.
5. API Best Practices : Hello guys and welcome
back to this API course. In this lecture, we are
going to take a look at the best API practices
to ensure that API's are effective,
efficient, and maintainable. Adhering to these best
practices is very important. In this lesson, we will delve
into these best practices for designing and also
implementing APIs. Now, one of the core principles of API design is consistency. API endpoints, methods, and responses should follow established naming
conventions and patterns. This consistency simplifies
both development and consumption of APIs, and here you can use descriptive
and meaningful names. That means that API
endpoints and methods should have clear names
that reflect their purpose. Avoid cryptic or
ambiguous names that require users to consult
extensive documentation. You should also follow
Restful principles. Representational State Transfer is a widely adopted architectural style for
designing network applications. Adhering to restful
principles helps maintain a standardized structure for
API endpoints and resources. You should also include
versioning in API RL to ensure that changes to the API don't break
existing clients. For example, use V
one resource and V two resource to differentiate
between API versions. Another critical aspect of API design is effective
error handling. Proper error responses help clients understand and
resolve issues efficiently. You should use appropriate HTTP status codes. These provide valuable information about the outcome of a request. Use codes like 200 for OK, 201 for Created, 400 for Bad Request, and 404 for Not Found accurately to convey the result of the request. In addition to status codes, include descriptive error messages in the response body. These messages should explain what went wrong and, if possible, offer further guidance for resolution.
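As a minimal sketch of what that looks like in practice, assuming an Express server (the route and the findUser helper are hypothetical):

```javascript
// A descriptive error response: right status code plus an explanatory body.
app.get('/v1/users/:id', async (req, res) => {
  const user = await findUser(req.params.id); // findUser is a placeholder
  if (!user) {
    return res.status(404).json({
      error: 'user_not_found',
      message: `No user exists with id ${req.params.id}.`,
    });
  }
  res.status(200).json(user); // 200 OK with the requested resource
});
```

Security is another fundamental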
aspect of API design. Protecting sensitive data and ensuring secure
communication is crucial. Always use HTTPS to
encrypt data in transit, preventing eavesdropping
and data tampering. Also, implement robust
authentication mechanisms to verify the identity of clients and
employ authorization to control access to
specific resources. Options include API keys, OAuth, and JWTs. To prevent abuse or overloading of your API, implement rate limiting to restrict the number of requests clients can make within a defined time frame.
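One way to do this in Express, as a sketch, is the express-rate-limit package; the numbers here are arbitrary examples:

```javascript
// Reject clients that exceed 100 requests per 15-minute window.
const rateLimit = require('express-rate-limit');

app.use(rateLimit({
  windowMs: 15 * 60 * 1000, // the time frame: 15 minutes
  max: 100,                 // max requests per client in that window
}));
```

Clear and comprehensive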
documentation is essential for developers
who consume your API. It should serve as a
reference guide allowing them to understand how to
use the API effectively, include detailed information
on available endpoints, request methods, parameters, response formats,
and error codes. You can also provide
real world examples of API requests and responses to illustrate how to interact
with the API successfully. Next, thorough testing and
quality assurance processes ensures that your API functions
correctly and reliably. For this, write unit tests on all of your API endpoints to validate that they
perform as expected, return accurate data, and
handle errors gracefully. Also, conduct integration tests to verify that the API works seamlessly with other components and services it interacts with. When it comes to versioning
and backward compatibility, you should keep in mind that
API's evolve over time, just like most of
technology does nowadays. But it's essential to maintain this backward
compatibility to avoid breaking existing clients that are already using your API. For this, again, API versioning
comes into play. Implement versioning strategies
to introduce changes to the API without
affecting existing clients. This can be done, as
we've just mentioned, through URL versioning
or headers. Next, clearly communicate
deprecation and sunset plans for
older API versions. Inform users about when and how they should migrate
to newer versions. Optimizing API
performance is vital as well as it impacts response
times and scalability. For this, you should keep
in mind the response size. Minimize it by excluding
unnecessary data. Use pagination for
long lists of items, and provide filtering options. Another thing to keep in mind is to implement caching mechanisms to reduce the load on your server and improve response times. Utilize HTTP caching headers like ETag and Cache-Control.
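As a small sketch of the idea in Express (the route and data are illustrative):

```javascript
// Let clients and proxies cache this response for 60 seconds.
app.get('/v1/stocks', (req, res) => {
  res.set('Cache-Control', 'public, max-age=60');
  res.json(stockList); // stockList is a placeholder for your data
});
```

Ongoing monitoring and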
analytics provide insights into API usage performance
and maybe potential issues. So you should track key
metrics such as request rate, error rates, and response times. For this, you can use tools
like Google Analytics or custom solutions to gain insights into how your
API is being used. Set up alerting systems to notify you of
performance issues, outages, or unusual behavior. Best practices for APIs are essential for building reliable, secure, and maintainable systems. By following the guidelines that we just talked about, developers like you can design and implement APIs that are consistent, secure, and efficient,
and also that provide clear documentation
for their users. An effectively designed
and well documented API not only enhances the
developer experience, but also contributes
to the success of the systems and applications
that rely on it. With all of this being said, I thank you guys very much for sticking with me up to
the end of this lecture, and I look forward to
seeing you in the next one.
6. API Load Balancing: Hello guys, and welcome
back to this API course. In this lecture, we
are going to talk about two critical components
of API infrastructure. They are scaling
and load balancing. I highly suggest you
implement both of these concepts in your own
future API development. But first of all, let's
talk about what they actually are when it
comes to API scaling. This is the process of
expanding the capacity of an API infrastructure to
accommodate increased traffic, user demand, and data
processing requirements. The goal is to maintain seamless performance even as
the load of the API grows. Scaling can occur both
vertically and horizontally. Vertically means
adding more resources to a single server, and horizontally means
adding more servers. Load balancing, on
the other hand, complements scaling
efforts by distributing incoming requests evenly across multiple servers or resources. It ensures that no
single server becomes overwhelmed while optimizing
resource utilization. Load balancers act
as traffic managers, intelligently directing
incoming requests to the most suitable server
based on various criteria. You may be asking yourselves now why scaling and load
balancing matter. First of all, they ensure that APIs remain available even in the face of high traffic or server failures; redundancy minimizes downtime. Evenly distributed
traffic means that each server operates at
an optimal capacity, optimizing response times
and reducing latency. The ability to scale horizontally
allows organizations to grow their API infrastructure in response to increased demand, effectively future proofing the service. Load balancers can detect and reroute traffic away from unhealthy servers, ensuring uninterrupted
service in the event of a server failure. There are quite a few strategies for scaling and load balancing. First of all, we have
vertical scaling. This implies adding
more resources, whether they are CPU
memory or storage, to a single server. This approach is suitable
for applications with moderate traffic and
can be cost effective. Next, we have
horizontal scaling, adding more servers
to handle traffic. It is a scalable approach, but requires careful
coordination and also load balancing. There are also load balancer algorithms: load balancers use algorithms like Round Robin, Least Connections, or IP hash to distribute incoming requests across the servers. Understanding these algorithms helps optimize load balancing.
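As a toy sketch of the simplest of these, Round Robin just cycles through the server list:

```javascript
// Toy Round Robin: each call hands back the next server in the list.
const servers = ['http://10.0.0.1', 'http://10.0.0.2', 'http://10.0.0.3'];
let next = 0;

function pickServer() {
  const server = servers[next];
  next = (next + 1) % servers.length; // wrap around to the start
  return server;
}
```

Some applications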
require sticky sessions, where users requests
are directed to the same server to
maintain session state. Load balancers can be configured for session persistence
when necessary. The best practices
when it comes to scaling and load
balancing are as follows. You should implement
redundancy at all levels, including load
balancers and servers, to ensure high availability. You also should
continuously monitor server health and
traffic patterns to detect and address issues proactively. Implement automatic scaling mechanisms
that can dynamically adjust resources based on real time traffic and
performance metrics. Ensure that security measures
such as firewalls and DDoS protection are
integrated into the scaling and load
balancing setup. Lastly, rigorously
test the scaling and load balancing setup under various conditions including
peak traffic scenarios. Also here don't shy away from thinking about
the edge cases. These are, more often than not, what ends up ruining
the happy path that we have implemented
or a certain API. With all this said, I
think that API scaling and load balancing are
integral components of building reliable and
performance services in this digital age. By understanding the
principles, strategies, and best practices
associated with scaling and load
balancing, organizations and you can create API infrastructures that
seamlessly handle growing traffic, ensure high availability,
and deliver the performance that
users of the API expect. I really hope you guys get
something out of this lecture, and you will think in the
future when developing your own API about scaling
and load balancing. I thank you very
much for sticking with me up to the
end of this lecture. I look forward to seeing
you in the next one.
7. Make your API User Friendly: Hello guys and welcome
back to this API course. In this lecture, we will
talk about designing user friendly APIs
because in my opinion, the true value of an API
goes beyond functionality, it lies in its
user friendliness. In this lesson, we will explore the art of designing
user friendly API's, emphasizing why it matters and how to achieve
it effectively. User friendliness in APIs
is not a mere luxury. It's a necessity.
And here is why. A user friendly API is easier for developers to
understand and to use, reducing the time and effort required to integrate it
into their applications. Well designed APIs are less likely to lead to confusion or errors, reducing the need for extensive support
and troubleshooting. Developers are more likely
to choose and advocate for API's that are intuitive
and easy to work with, leading to increased
adoption and usage. User friendly API's are more likely to be maintained
and updated, ensuring their long
term viability in a fast evolving tech landscape like the one we have nowadays. As far as design
principles for user friendly API's, we
have consistency. You should maintain a
consistent design pattern throughout your API. This includes
naming conventions, request response structures,
and error handling. Use clear and concise names for resources, endpoints,
and parameters. Avoid ambiguity and
excessive complexity. Make the API's
behavior predictable. Developers should be
able to anticipate how the API will respond
to their requests. Ensure that the API
is self explanatory. Use meaningful names and provide detailed documentation,
including usage examples. Implement versioning to allow for updates and improvements while maintaining
backward compatibility for existing users. Documentation is a cornerstone here; comprehensive
documentation is key. Thoroughly document your
API, including endpoints, request response structures, authentication methods,
and error codes. Provide clear examples
and use cases. You can go to the section where I explain how
you can hands on document your API using Open API specification Swagger
Hub in this very course. Next, consider providing
interactive documentation using tools like Swagger or OpenAPI, allowing developers to test API endpoints directly
from the documentation. Keep a changelog to inform users about updates, bug fixes, and new features. This builds trust and helps developers adapt to changes much quicker. Also, it raises awareness among them. We also have here
consistent error handling. Use standard HTTP status codes for errors: 404 for not found, 400 for bad request, 200 for a good response. Include a descriptive error message in the response body, whatever the status code received by the client will be. Provide additional error details, including a human readable description and potentially a link to relevant documentation.
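A hedged sketch of such an error body, inside an Express handler; the field names and documentation link are illustrative only:

```javascript
// Status code, human readable message, and a pointer to the docs.
res.status(404).json({
  error: 'not_found',
  message: 'No resource exists at /v1/users/42.',
  docs: 'https://example.com/docs/errors#not_found', // hypothetical docs URL
});
```

If rate limiting or throttling is in place in your API,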
which it should be, as we talked in another
lecture of this course, communicate it clearly in the documentation and responses. As far as authentication
and authorization goes, you should have a
clear authentication: specify authentication methods clearly in the documentation. If possible, implement granular authorization
mechanisms to allow developers to control access
at a fine grained level. Lastly, testing and feedback loops are essential, again, while developing a user friendly API. Rigorously test your
API to ensure that it behaves as expected
in various scenarios. And don't shy away from
thinking about edge cases. Actively seek feedback
from developers who use your API and use their
input to make improvements. Designing user friendly APIs
is an art that combines technical proficiency
with empathy for the developers that
are going to use it. A user friendly API reduces friction in the
integration process, fosters developer satisfaction,
and contributes to the success of both the API
provider and its users. This is exactly why I chose to talk about it here in
a separate lecture. I hope you guys will implement
some of the strategies discussed here in your
own API development. I thank you very
much for sticking with me up to the
end of this lecture, and I look forward to
seeing you in the next one.
8. RESTful Design: Hello guys, and welcome
back to this course on APIs. In this lecture, we are going
to take a closer look at the restful design principles when it comes to
general API design. These restful design
principles have emerged as a guiding philosophy for creating elegant, scalable, and
maintainable systems. Rest, which stands for Representational
State Transfer is an architectural style that
emphasizes simplicity, clarity, and resource
based communication. In this lesson, we will
explore the core principles of restful design and why they are essential in modern
software development. When it comes to
understanding restful design, it is crucial to know that it is centered around several
key principles that facilitate the creation
of web services that are easy to understand,
flexible, and scalable. First of all, we
have statelessness. Restful services are
stateless, meaning well, each request from a client
to a server must contain all the information needed to understand and
fulfill that request. This principle simplifies
server implementation and also improves drastically
scalability in Rest. Also, everything is a resource, which can be a physical object, a conceptual entity,
or a data item. Resources are identified by Uniform Resource Identifiers or, as you might have heard, URIs. They can be manipulated through standard HTTP methods: GET, POST, PUT, PATCH, DELETE, and so on. Resources can have multiple representations such as XML, JSON, or HTML. Clients interact with
these representations rather than directly with
the resource itself. This flexibility allows for decoupling between
clients and servers. The communication is
stateless itself. Clients and server
communicate through stateless requests
and responses. Each request should be independent and the
server should not retain any information about the client's state
between requests. Everything from the
authentication bearer token of the client to the response provided by the server will take
place in that request. A uniform and consistent
interface is a core principle. This includes the use of
standard HTTP methods, self descriptive messages, and hypermedia as the engine
of application state. This allows clients to navigate the applications resources by following links in responses. We will talk about why
restful design matters. Restful APIs are
inherently scalable because they are stateless
and resource oriented. This makes it easier
to distribute and load balance requests. REST's straightforward design principles make it easier for developers to understand and use APIs. This simplicity reduces the learning curve
for new developers. The use of multiple
representations and the decoupling of
clients and servers provide flexibility for evolving API's without breaking
existing clients. Restful APIs use standard
HTTP methods and formats, promoting interoperability among different
systems and languages. Resources and their
representations can be versioned and
updated independently, which simplifies maintenance and reduces the risk of
introducing breaking changes. The best practices for restful
design are as follows. First of all, you should use
nouns for resource names. Choose meaningful
and pluralized nouns for resource names. For example, users for a
collection of user resources. Use HTTP methods appropriately. Get retrieving resources, Post for creating new resources, put for updating resources, and delete for
removing resources. Follow these conventions to
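A compact sketch of that mapping in Express; the handler names are placeholders:

```javascript
// One pluralized resource, manipulated through the standard HTTP methods.
app.get('/users', listUsers);          // retrieve the collection
app.get('/users/:id', getUser);        // retrieve a single resource
app.post('/users', createUser);        // create a new resource
app.put('/users/:id', updateUser);     // update an existing resource
app.delete('/users/:id', deleteUser);  // remove a resource
```

Follow these conventions to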
maintain a uniform interface. Provide clear documentation: document your API thoroughly, including resource URIs, available HTTP methods, and the structure of resource
representations. We have a whole
section in this course that will help you
understand how you can hands on document your API using the Open
API specification. Swagger Hub, you
can check that out. Next we have versioning. If changes are needed, version your API to ensure backward compatibility
for existing clients. Again, we have a
lecture on this. Use appropriate
HTTP status codes to convey the outcome
of a request. 200 for success,
404 for not found, and 400 for bad request. With all of this said, I think that RESTful
design principles offer a robust framework for building web services and API's that are efficient,
maintainable, and adaptable. By adhering to these principles, developers can
create systems that not only meet the needs
of today's users, but also provide a
solid foundation for future growth and
evolution of the API's. Thank you guys very
much for sticking with me up to the
end of this lecture. I really hope you will
implement some of the restful design
principles that we talked about today in your API's. I look forward to seeing
you in the next lecture.
9. API Versioning: Hello guys, and welcome
back to this course on APIs. In this lecture, we
are going to take a look at API versioning. We will see why versioning
matters in the first place. Then we will understand
what are the strategies for API versioning and
the best practices when it comes to versioning. Getting started with
what it actually is, API versioning is
the practice of managing changes
to API's in a way that ensures backward
compatibility while allowing for necessary
updates and improvements. It serves as a bridge
between the old and the new, ensuring that
existing integrations and clients continue
to function smoothly, while accommodating
the evolving needs of developers and users. Moving on to why
versioning matters. While existing clients rely
on the API's behavior, abrupt changes can break these clients and disrupt the services that
depend on them. Here is where
versioning comes in. It helps maintain
the stability by isolating changes from
existing implementations. Also, API's need to adapt
and improve over time to meet changing requirements and also technology
advancements. Versioning, again,
allows developers to introduce new
features, fix bugs, and optimize performance without affecting existing users
that are already using that specific API
versioning provides a clear and standardized way to communicate changes
to developers. It encourages comprehensive
documentation and transparency
in API evolution. Talking about strategies
for API versioning, there are quite a few starting off with
the most used one, which is URL versioning. This strategy involves including the version number of
the API in the URL. For example, you can
have the domain of your API and then put
a V and the number of version you are on The
continuing with the resource you are trying to do
a CRD operation on. It is straightforward
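As a sketch in Express, two versions of the same resource can simply live side by side; the handlers are placeholders:

```javascript
// URL versioning: old clients keep calling v1 while new clients use v2.
app.get('/v1/stocks/:symbol', getStockV1);
app.get('/v2/stocks/:symbol', getStockV2);
```

It is straightforward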
and easy to understand, but it can lead to cluttered
URLs as versions accumulate. This is exactly why there are more strategies
for API versioning. The second one is header versioning. Here the version is specified in an HTTP header instead of the URL. You can think of this header as Accept-Version, and then you can specify the actual version, which can be V and then the number of the version you are on.
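A small sketch of a client call; note that Accept-Version is a convention rather than a standard header name:

```javascript
// The client asks for version 2 explicitly through a header.
fetch('https://api.example.com/stocks/AAPL', {
  headers: { 'Accept-Version': 'v2' },
});
```

This approach keeps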
URLs cleaner, but may require additional
effort for clients that are trying to call your API to parse the header correctly. They might even forget to put it there as URL versioning is straightforward and it comes directly from the URL that
you are trying to access. It has a much lower
rate of errors. There also is media
type versioning. This approach embeds
the version in the media type, such as application/vnd.example, then the version, plus json: application/vnd.example.v1+json. This provides a
clear way to specify the version in requests
and responses, but can be more complex
to implement as the actual files need to have
the version in their names. Lastly, there is
semantic versioning. Using semantic
versioning is common in open source libraries and
can be applied to API's. It offers detailed version
information making it easier to understand
the nature of the changes. We will take a look now at the best practices when it
comes to API versioning. And the first is to
implement it from the very beginning of
your API development. Just plan for versioning
from the start. This avoids challenges
when introducing versioning to an existing API
that is much harder to do. It will be easier if
you have it all along. If possible, adapt
semantic versioning for clarity on the type of
changes in each version. These changes can
be major, minor, or just a simple patch
whenever possible. Also, maintain backward compatibility
with older versions, deprecate features,
and give developers a grace period to update
their implementations. Also, if you keep this
backward compatibility, it will be much
easier for them to integrate with your
newer version of the API. Next, document versioning strategies and changes thoroughly. Provide migration guides for developers transitioning
to new versions. Consider offering
long term support for critical API's
ensuring stability for clients with
extended life cycles. Rigorously test each
version to catch regressions and
ensure functionality. Monitor API usage to
identify deprecation needs. This would be about it when it comes to versioning
your own API. I really hope you guys got
something out of this lecture. If you have any questions, do not hesitate to
reach out to me. Thank you very much for sticking with me up to the end of it, and I look forward to
seeing you in the next one.
10. Setting up a coding environmnent: Hello guys and welcome back
to this course on APIs. In this section, we will
take a look at how we can create an API
from scratch and also put it on sale on the website Rapid
API that I have already talked to you
about in this course. First of all, I will link in the resources
folder of this course, the Github repository, with the code that I
will present to you. You can go ahead and clone it on your local machine and
change it how you like. As you can see it on the screen, this is the actual repository. This will make it easier
for you to set up a new API and publish
it to be sold. The method of
deploying a public API presented here will be very
quick and straightforward. Also, it will be
completely free. By the end of this section, you will have an
online queryable URL that you can make requests
to using this URL. You will publish your
first API on RapidAPI.com and I will also, as I said, show you how
you can monetize it there. As you can see here, the API is deployed live at this URL. And if we click on this link, that is actually ours. You can see that we
have here the URL. And it also takes a parameter. It gives us the dividends
for the past ten years for a certain stock symbol; I have here Realty Income. But you can also go with Apple. Let's say right now, it is actually taking the data from the dividends that Apple gave for the
past ten years. As you can see, we have
the dividend rate, the amount, the
declaration date, the record date, and
the payment date. We will be using a
technique called web scraping here to get
the actual information that our API will expose
straight from the Internet without storing it in
any database whatsoever. For a quick server set up, we will be using the
Express Framework. This whole functionality
will fit in a JavaScript file. As you can see, it is
the dividend JS file that I will present to you in more detail in future lectures. Once we get the
information we need our API to expose
from the Internet, we will need to parse it only to the fields we will need
to expedite this process. We will use the
Cheerio framework. This is, as you can see, a library for parsing and
manipulating HTML and XML. We will obviously
use it for HTML. It is very fast and it has very intuitive and
easy to use syntax. Now, once we have the API built locally and the retrieval of information from the website on the Internet works on
our local endpoint. We need to publicly
deploy it on the web so we can have a link of
our own for rapid API. As you saw that I have here
the dividend Netlify app. To do that, I found a free hosting website called Netlify and we will
integrate with it. This is very similar to Heroku, but I tried Heroku and it does not offer a
free plan anymore. Unfortunately, Netlify does. This is a very good
option to deploy our website and all
of it for free. If you go to the pricing, you can see that the starter
plan is free and you have 100 gigabytes bandwidth
and 300 build minutes. The build minutes refer to when your website is getting built
before it gets deployed. 100 gigabytes of
bandwidth refers to how much data other users consume when watching
your website. 100 gigabytes of
bandwidth is enough, especially if you are
just starting out. And I found that to
be the case for me, especially for the purpose
of what we are doing here. You can actually make some
sales on Rapid API for sure, before running out of
these 100 gigabytes. The specific functionality
of the API we will be building here will be to
retrieve, as you saw, the dividend information
as in date and amount from the past ten years
for the stock symbol we provide to it as a parameter. Now the website
that I decided to scrape is called Street Insider. It's this site right here
that looks pretty old, but it has a pretty
simple structure. So it will be easy to target
the information that we need from its tables rows using
the Cheerio framework. This is actually the
reason that we do not need a database
behind the scenes. And we don't need to store
all the dividend data for all the symbol of stocks
that we could think about. Now, if we press F12 on this and we go and navigate the divs, once we get to the rows that contain the actual dividends, you can see that it has a table with a class called dividends. This one has a body, and the table rows, as you can see on the left, contain the actual dividend information. Once we open a table row, you can see the information that we can go ahead and scrape. We will use Cheerio in order to detect the table data identifiers in this HTML document.
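A minimal sketch of that targeting, assuming the page's table really carries the dividends class; html is whatever page source we fetched from the site:

```javascript
// Load the scraped HTML and print the text of every cell in the table.
const cheerio = require('cheerio');

const $ = cheerio.load(html); // html = the page source we downloaded
$('table.dividends td').each((i, cell) => {
  console.log($(cell).text().trim());
});
```

It's very handy because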
it's not that complicated, and these are actually the only table rows
that the site has. But with that being said, I will explain all the code that we have here and how
to also deploy it on Netlify and then how to actually deploy this API that we
develop on rapid API. You can see it deployed here
and also it has a paid plan. If the users want to make
more requests to it, then a basic limit
that you will set up. If you are looking forward to
learning about this topic, I look forward to see
you in the next lecture. Thank you for staying with me
up to the end of this one.
11. Coding our own API: Hello guys and welcome
back to this API course. In this lecture, we
are going to take a closer look at
the script that I showed you in the
introduction of this section on how
to sell an API. And we are going to understand
what it does exactly in order for you to be able to replicate it with
a different URL. Scraping some different
information from the web, and creating your own API to deploy it live
on the Internet. And furthermore,
publish it on Rapid Api.com and make some
sales with it as well. The first thing that you need to do in order to run this code will be to install Nog
on your local machine. You can go ahead to the
No Jtg website and on the downloads page you will be able to select your
operating system. I have Mac here, platform Windows or Linux. You can choose
different options, and once you press on this, the executable will be starting to download
on your local machine. And you will go ahead
from there and install it with a pretty
simple set up wizard. Once you get node installed
on your local machine, you can go ahead and
clone our repository. From this link, you
can go ahead to this Github URL that I will also link in the resources
folder of this course. Click on this code button, copy this URL, and go
ahead in your terminal. If you have gets installed and run git clone and then this URL, it will clone the whole
repository to your local machine. Then you can open
it up in an editor. I have here visual studio code to run this repository and see it in a
more beautiful way. Once you download it, clone it to your local machine, You won't have the
node modules folder and I will explain to you
why in just a second. But besides this, you
will have all the files. The gitignore file is
used to ignore some of the files that you have inside folder and
not commit them. Why this is useful because once you will have an
node modules folder, you won't need to
actually commit that. We specify that in
the git ignore for this code to run you
are going to run NPM in first the package that Json will get created and the package lock J will
get created as well. Here, all the packages necessary for this project
will be specified. So you can see here
that we are using Express and Axis
Cheerio as well, which are all packages that
you don't have installed yet. For that you can run NPM install and then the
name of each of them, but you can also just
run NPM install. This will create
the node modules folder with all the
packages that it, that you will be needing. You can see that
by default it will install all of the
other packages like Express and so
on and so forth. No, once you get to this point, you should be able
to run NPM Start in order for this application
to work locally. But what we are
trying to do here is a step forward to that
and make it live. Deploy it on the actual web, so you can call it from there. Of that being said, let's take
a look at the actual code. You can see that we have
a folder code functions, and that is for the Netlify application where
we will deploy our code. You can see here we need
to specify the build, and in the functions folder, it is the script that
we want to deploy. The Netlify ML is another
essential file for you, and you will need this in
order to deploy it on Netlify. You can see that here we have the router constant declared, which is the express router. And this will create
server that has routes. You can create more
than one route with it. You can see that we
have a default route on our deployed website. You can see this is
the default route, and it just says welcome to the Stock Dividend API
as you can see here. But it also has a Route that takes a symbol. If you remember from
the last lecture, I gave you a symbol
and it retrieved the dividend information
for that stock symbol. If the user enters the
symbol path, first of all, it will take the
symbol parameter from the request and put it in
a constant named symbol. The URL, as I've already told you guys is from the website
that will get scraped. This is what we
are going to make a request to in order to
retrieve the information. The URL is treating insider.com
slash dividend history. And then the first query
parameter is specified by the question mark
and Q and then equal. You can see if we go to this, this is the exact URL. Here it takes the symbol of the stock that you want to
retrieve the dividend from. We received that as a parameter, and then we use axis to make
a gat request to this URL. And the actual path is
the URL plus the symbol. This will retrieve the
actual HTML for this page. Once we get that, we get
this whole HTML document, then we have it here
in the response, that data, we assign it
to an HTML variable. And then we use Chi, the framework that I've told you about that can easily pick the HTML elements and the information
inside it to load it, and we put it in the
dollar sign variable. This dollar sign variable, this is the cherry of syntax. We are going to look
for the table data. As I've told you again
in the previous lecture, this is where the
dividend are getting stored in this table
that you see here. There are some table rows, elements that store this
information that we need to get parsed for each table data, we are going to take the
text out of it and then push it into an array
that is dividend data, and I declared it here, and it is empty. Then we will iterate through
this dividend data array. And we are going
to make an object called dividend with all
these string properties. And we are going to store, emit the actual dividend
data it will not retrieve, as you see here. For example, if I give the stock symbol directly
the dividend dates. In this form, it will
need to get parsed a little bit more because it will contain some other
things as well. But I came up with this specific code in order
to just get the actual date. Then I will push this dividend object into
another declared empty array. In the end, the result
will be this array with all of the parsed
data in the Json format. This is what the ball
route will return to us. You will also need to
use this app here, that is the express app, to make this Netlify friendly. In order to be able to deploy this whole app on Netlify
as the first parameter. This will take the name of the function you have
and the function name of the folder and that
Netlify Before that it will also take a router. The router is the
express router. Also the module exports. Handler should be specified to be server less
for this app here. Once you clone this exact code, which I suggest you do, just make it
deployable on Netlify. Deploy it there and
see if it works. And if it does, then go ahead, change this URL
with something else and scrape a different website. But this is about the code. In a future lecture, we
will take a look at how you can actually deploy
it on Netlify. Then after that,
we're going to take a look at how you
can publish it for sale on the Pipi.com
Studio spoiler alert. It is done from here. From the API project. But with that being
said, thank you guys for staying with me up to
the end of this lecture. If you have any questions
regarding this project, I really look forward
to answering them. You can message me here. That would be about
it with this lecture. Thank you for staying with
me up to the end of it.
12. Deploying our API on the Web: Hello guys and welcome back
to the course on API's. In this lecture, we
will take a look at how we can deploy life, the API that we built
in the last lecture, and how we can do
that for free in a very straightforward and
easy to understand method, especially for
beginners for that, I found the Netlify.com site, which is very similar to Roku, but Heroku didn't have
a free plan anymore. All the plans were paid. Netlify, on the other hand, has a starter plan
which is free. It offers 100 gigabytes of bandwidth for 3.300
build minutes, which I found to be enough. Especially if you're
just starting out. You can deploy several API's and put them on
sale on rapid API. And once you start getting
some revenue from them, you can go ahead and
upgrade to the pro plan if you find that this one is
a little bit small for you, but even then I think it will be enough
for quite some time. You cannot run through 100 gigabytes of
bandwidth that quickly, especially if the information that you return looks like this. So no images, nothing that big. You can see here that they make deployable websites
live on the web. You will not get custom
domain with the starter plan. As you can see here we have Netlify dot app, but it is live. That is all that matters. Especially because once you
deploy it on Rapid API, the actual end
users will not see the URL that you gave Rapid
API for your endpoint. It will be a different URL that Repid API provides to them. So with that being said, you can see that
they say you can scale effortlessly and they do websites that run campaigns and shops,
things like that. So quite a big array
of possibilities here. We will just use this to
deploy our simple web API. For that, you can go
ahead and sign up. I've already done that. You can see that my Dividend
API is already deployed. I played with it a little
bit and you can see that I only consumed 1 megabyte
out of 100 gigabytes, and two build
minutes out of 300. This is more than enough. All of this bandwidth and build minutes get reset monthly, so it is not one time, only it is monthly. But if you want to go ahead
and deploy your own website, you can go ahead and enter the app, Netlify.com
slash Start. Here. You can connect,
first of all, to your Github account and you will click on
Deploy with Github. After that, you will dilk the actual repository
that you want to deploy. Here you will need the actual repository
on Github already. Put there, you will need
to take my solution and your own Github
repository or you can just use my
repository for a test. If you, then you will
deploy it with Github. It is some two factor
authentication that gets done here. And then for all the
public repositories that you will get
an option here. You can go ahead click here. You can also specify the branch
that you want to deploy. The functions directory
is very important here because the name
of it might differ. In your case, just enter
what the name of it is in your case and then you
can go ahead and deploy it. Once you do that, it
will get deployed. This is the actual build minutes that they were talking
about right now. The deploying of the site triggers a build and
that takes some time. You are limited to
300 minutes on that, but you saw that
I only used two. You can see how fast it
already got deployed. Now if you want a custom domain, which I imagine you don't, but if you are going
to need to buy one and then to secure
your site with HTTPS. But for the purpose
of this video, for the purpose of
what we are trying to do here, this is redundant. Now you can see that our
website got deployed here. We can actually specify a different subdomain
that they give us here. But you are not going to be able to get rid of the
Netlify dot app, but you can't give it
a more pretty name like I did here
with the dividend. I do Netlify dot app. And after you do this, your website should be pretty much deployed as
you can see here. And you should have two routes. The default one which, if you remember, returned welcome to the
stock dividend API. And another route that, on a symbol that you specify, would retrieve the
dividend data that we scraped from the Street
insider website. This was about it for the deployment of the API
on the live Internet. In the next lecture,
we are going to take a look at how you can go on rapid API and deploy this
API and furthermore, put it on sale next to of the other great ones
that are available here. But if you are
curious to see that, I look forward to see
you in the next lecture. Thank you very much for sticking with me up to the
end of this one.
13. Monetizing your API: Hello guys and welcome
back to the API course. In this lecture, we will take
a look at how you can put your deployed live API on
the rapid API website. And how you can generate
revenue out of it, how you can monetize it. If you followed
our last lecture, you should have a dividend
API life on the Internet with an actual URL that you can enter on your Chrome browser
or whatever browser you use. And it should return
all the information from another website
that you scrape. Once you have that,
you should go ahead and head into RapidAPI.com. Here are all the APIs that are published for sale
or even for free, and you want to publish
your API as well. You should create an account
on Rapid Api.com and then head over to the
My APIs section. As you can see, I already have this dividend API deployed. I gave it an image, I gave it a category, which is finance, and a short description:
dividend data for over 75,000 stocks delivered,
in convenient JSON format. As you saw, the format in which these dividends come is JSON. Now, you can also have a long description, and here
is the most important part. You will give it a version, you will just have
a version one. So that doesn't matter, but the URL is very important. You should provide the base URL, which is this one here, as you saw from the
web browser, as well. Without this symbol, that
was a different path. Once you have this, you can go ahead and go to definitions. Here we have the
definition of the symbol. You can go ahead and
create endpoint. Here we have the name, get the dividend
information for a company. A short description: that
this endpoint will return dividend information based on the symbol you provide
for the past ten years. It also has a parameter
which is the symbol. You can also give
an example here. For example, the value of
apple should return something. It is also of type string. This parameter, I also gave it a successful response taken
from the web browser. Other than that, if you have and will create other endpoints, you should also
specify them here by adding different endpoints on
the Create endpoint button. You can also create similar
endpoints in groups. For that, you will press
the Create Group button. Now the documentation
for your API, I left it empty here, but if you have
documentation that specify in more detail
what your endpoint does, you should write it here. The gateway, again, I left it as Rapid API provided it
for the community. I did not change anything whatsoever. Then we have the Monetize tab, which is what you guys actually came here for, where you can actually create revenue from this API that you created. Here, you can create paid plans for your API. You can give a basic
one with 20 requests a month so the user can see what your API returns
and what it is like. Then you can specify
some other ones. You can see I have a pro plan which has a rate limit of
ten requests per second. We can also make this the recommended plan or another one, but also you can specify
a subscription price. Once you do that, you can go
ahead and publish your API. After that, you will be
able to take a look at the analytics and see how your API is doing. What is the average latency? RapidAPI will also give your API a mark, 9.5 in my case, which is pretty good, but it is based on how many
successful requests there are and also the latency that
I was talking to you about. But with that being said, now you have an API that is deployed on the Internet, and it is also put up for sale on RapidApi.com, which was our entire
goal for this section. Thank you so much for staying
with me up to the end of this section and I look forward to seeing
you in future ones. Also, if you have any questions, do not hesitate to
reach out to me here. I am available for you guys. And again, thank you very
much for enrolling in this course and for listening to it with
me. Have a good one.
14. Common API Vulnerabilities: Hello guys, and welcome
back to this course on APIs. In this lesson, we will explore common API security
vulnerabilities, such as SQL injection and cross-site request forgery, and also discuss
strategies for prevention. I think this is very important, especially if you are
developing your own API. Let's start by understanding the common API security
vulnerabilities. First off, we will start
with SQL injection, the most common one. SQL injection occurs
when an attacker inserts malicious SQL statements into
an input field or request, potentially granting
unauthorized access to a database or compromising
data integrity. To prevent this, we can use
parameterized queries or prepared statements to separate user input from SQL commands. Implementing input validation and sanitizing data before processing are some of the other strategies used to prevent this type of attack.
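Just to make this concrete, here is a minimal Python sketch (using the standard sqlite3 module; the users table is made up for illustration) contrasting an injectable query with a parameterized one:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")  # hypothetical table

# UNSAFE: user input is concatenated straight into the SQL string.
# An input like "x' OR '1'='1" changes the meaning of the query.
def find_user_unsafe(username):
    query = "SELECT * FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

# SAFE: a parameterized query keeps user input separate from the SQL command.
def find_user_safe(username):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()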
Next, we have cross-site request forgery (CSRF). These attacks trick users into executing unintended actions on a web application, typically while they are authenticated, without their consent. In order to prevent this, you can use anti-CSRF tokens and ensure that all state-changing requests (for example, POST, PUT, and DELETE) require authentication.
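As a rough illustration of the anti-CSRF token idea (framework-agnostic; the session is simplified to a plain dictionary here), a token can be issued per session and verified on every state-changing request:

import hmac
import secrets

def issue_csrf_token(session):
    # Generate a random token once per session and remember it server-side;
    # the same token is embedded in the HTML form as a hidden field.
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted_token):
    # Compare in constant time; reject the request if the token is missing or wrong.
    expected = session.get("csrf_token", "")
    return hmac.compare_digest(expected, submitted_token)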
In the next example, we have broken authentication. These vulnerabilities occur when authentication mechanisms are
not properly implemented, allowing attackers to
gain unauthorized access. To prevent this,
you can implement strong authentication
and session management, use secure password storage, enable multi factor
authentication, and regularly test
for vulnerabilities. Another type of API
security vulnerability is insecure deserialization. Attackers here exploit vulnerabilities in deserialization processes to execute arbitrary code or gain unauthorized access. Employ safe deserialization practices, including whitelisting permitted classes and using signed
data where possible. Last security vulnerability
is sensitive data exposure. These vulnerabilities occur
when sensitive information, such as passwords or tokens, is not adequately protected. You should use encryption, for example TLS for data in transit and encryption at rest for stored data, and employ strong
encryption algorithms. Limit access to sensitive
data on a need to know basis. Moving on into the most
important part of this lecture, the strategies for prevention. First, you should always
validate and sanitize user inputs to prevent malicious data from
entering the system. Use positive security
models such as whitelisting whenever possible. And try, for each input that you have, to specify a type for it. The user cannot enter arbitrary strings when he should enter a date, and so on.
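For instance, a minimal Python sketch of this kind of type validation could look like this (the YYYY-MM-DD format is just an assumption for the example):

from datetime import datetime

def parse_date_input(raw_value):
    # Accept only values that parse as a YYYY-MM-DD date; reject everything else.
    try:
        return datetime.strptime(raw_value, "%Y-%m-%d").date()
    except (ValueError, TypeError):
        raise ValueError("Invalid input: a date in YYYY-MM-DD format is required")

parse_date_input("2024-01-31")       # returns a date object
# parse_date_input("1 OR 1=1")       # would raise ValueError before reaching the database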
the risk of SQL injection. Implement strong authentication
mechanisms and enforce proper authorization
checks to ensure that only authorized users can
access specific resources. Include CSRF tokens
in requests to verify the authenticity of incoming requests and
mitigate CSRF attacks. Implement rate limiting
to prevent abuse or DDOS attacks on your API. In this course, we
have another lecture especially dedicated for rate
limiting and throttling. Utilize CSP or content security policy
headers to mitigate cross site scripting
attacks by specifying which sources of content are allowed to be loaded
by a web page. Conduct regular security
audits, code reviews, and penetration testing to identify vulnerabilities
and weaknesses. Train developers and
maintainers about secure coding practices and the importance of security
in the development cycle. Keep all components, libraries, and frameworks up to date
to benefit from security, patches and updates that
might get released over time. APIs are the lifeblood of modern software systems, but their widespread usage exposes them to various security vulnerabilities. By understanding common vulnerabilities like SQL injection, CSRF, and others, and by implementing robust prevention strategies like the ones mentioned here, developers like you and organizations can fortify their APIs against potential threats. This was about it for the common API security vulnerabilities, and I really hope you guys
got something out of it. And you will get to implement
some of the strategies that we talked about here
in your own API's. With all of that being said, thank you guys for sticking with me up to the end
of this lecture, and I look forward to
seeing you in the next one.
15. API Rate Limiting: Hello guys, and welcome
back to this course on APIs. In this lecture, we are going to talk about a very important subject: rate limiting and throttling in APIs. Rate limiting and throttling are techniques used to
control the number of requests made to an API
within a specific time frame. These mechanisms serve
several crucial purposes. They help prevent abusive
or malicious use of an API, such as distributed denial of service attacks or scraping. They ensure fair access to the API's resources
among all users, preventing a single client from monopolizing the service's capacity. They maintain the API's performance
by preventing overload, allowing them to serve requests reliably, with requests coming at a normal pace. Many APIs have usage
limits outlined in the terms of service which
users must adhere to. These limits are, on
a concrete level, implemented by rate
limiting and throttling. Rate limiting and throttling are often used interchangeably, but they serve slightly
different purposes. Let's talk now about each
of them to understand the difference between them and what each of
them are exactly. Rate limiting restricts
the number of requests a client can make
within a specific time window, such as 100 requests per minute. Once the limit is reached, the client must wait or receive an error response if he tries to make more than 100
requests per minute, and the count will be reset after that specific
minute passes. This is just an example. Of course, the actual limit can be different from
an API to the other. Throttling, on the other hand, controls the rate at which requests are processed
on the server side. For instance, a server may process a maximum of 50 requests per second, regardless of how many requests are received. Excess requests are queued or delayed. There are many strategies for implementation when it comes to rate limiting and throttling, and you should really
take a look at them if you are trying to
build your own API. First, we have the
token bucket algorithm. This algorithm
involves assigning tokens to clients
at a fixed rate. Each request consumes one token. If no tokens are
available for a client, that client must wait. This approach is flexible and can handle bursts of traffic (see the minimal sketch after this list of strategies). The second strategy is called
fixed window counters. Here, the rate limit is calculated within
fixed time windows, for example, per minute. Once the window expires,
the counter resets. This strategy can lead to rate spikes if not
managed carefully. So you should really pay
attention to how big your counter is in a
specific time window. Lastly, we also have the
leaky bucket algorithm. It is similar to a physical leaky bucket: requests are processed at a constant rate, and excess requests overflow and are discarded. This method ensures a steady rate of processing.
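Here is the minimal token bucket sketch I promised above, in Python (the rate and capacity numbers are just examples):

import time

class TokenBucket:
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = capacity
        self.last_refill = time.monotonic()

    def allow_request(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, never exceeding the capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1              # each request consumes one token
            return True
        return False                      # bucket empty: the client must wait

bucket = TokenBucket(rate_per_sec=10, capacity=20)   # e.g. 10 req/s, bursts of 20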
The best practices when it comes to rate limiting and throttling are, first of all, to clearly
communicate rate limits and throttling
policies to API users. This is usually done through documentation and
response headers. Then you can implement user
friendly error responses and provide retry mechanisms for clients that exceed rate limits. You should of course,
continuously monitor API usage to detect and mitigate abuse or
performance issues. Collect and analyze data to adjust rate
limits dynamically. Another advice that I would
give to you here would be to tie rate limiting to
user authentication, allowing for different limits based on user roles or tiers. You can implement
a bigger limit for somebody who has a
bigger privilege. For example, an
administrator might get 100 requests per minute, while a normal user
would get 50 or 25. You should also plan
for burst limits to accommodate occasional
spikes in traffic. With all this being said, API rate limiting and throttling are vital tools in
maintaining the stability, security, and fairness of API services in today's
digital ecosystem. By understanding
these mechanisms, implementing them
effectively and adhering to the best practices, API providers like
you can strike the right balance between
providing access to their resources and protecting their systems from
misuse or even abuse. This is exactly the reason
why I think rate limiting and throttling are a fundamental part of the implementation
of your own API. Thank you guys for sticking with me up to the end
of this lecture. I really hope you got
something out of this, and that rate limiting and throttling will be something that you will implement in the
future in your own API. I look forward to seeing
you in the next lecture.
16. How to keep your API secure?: Hello guys and welcome back
to this course on APIs. In this lecture, we are going to take a look at how exactly you can keep the APIs that you develop secure from malware or people with bad intentions.
topic nowadays that I feel that not enough
people are talking about. There are multiple
aspects to it. So we are going to run
through them in this lecture, and hopefully by the end of it, you will have a better
grasp on what exactly are the implications of keeping the API that you develop secure. First of all, you
need to understand the significance
of API security. Api's are like bridges
that allow data and functionality to flow between
different applications, both internally and externally. Securing these bridges is
crucial for several reasons. First is data protection. APIs often transmit sensitive data. A breach can result in the exposure of
private information, financial data, or
intellectual property. You also have here
reputation management: a security incident can tarnish your
organization's reputation, eroding trust among customers, partners, and stakeholders. You can think here about the multiple reputation
hits that took place in a lot of companies that
actually had a breach in their users database and
their passwords got leaked. As an example here, I keep thinking about the company StockX that
is selling sneakers. The third thing that
we have here on understanding the significance of API security is compliance. Many industries are subject to strict data protection
regulations. Failing to secure
APIs can lead to legal consequences
and hefty fines. The second point that you
need to keep in mind when talking about API security
is the anatomy of it. It involves multiple layers, as I said before,
and components. You can think here
about authentication, which is properly verifying the identity of the
parties involved. And this is the first line of defense: techniques like API keys, the OAuth protocol, and JSON Web Tokens are commonly
used for this purpose. We also have authorization
which is after authentication. And its role is to
determine what actions and data each user or system
is allowed to access. Role based access control and
scopes help enforce this. Because in applications, different roles have
different privileges. A user and an administrator
are very different. You also need to think
about data encryption. You can use HTTPS to encrypt
data in transit and consider data encryption
at rest for stored data. Next, it is crucial that
you do input validation. You should protect
your APIs from common attacks like SQL injection and cross-site scripting by validating
and sanitizing the input that your users
are giving to the API. Rate limiting is another
very common technique which is preventing the abuse
or over use of your API, which can lead to
resource exhaustion and service disruption. You create a limit of calls that your clients
can make to your API. Lastly, logging and monitoring. When you develop your API, you should keep detailed
logs and monitor API traffic for unusual
patterns or security incidents. Tools like SIEM, which comes from Security Information and Event Management systems, are invaluable for this purpose. The third thing that you need to keep in mind
when talking about API security is called API
authentication best practices. Authentication is
the cornerstone of API security and here are
some of the best practices. You should use API keys to authenticate clients accessing your API, and ensure they are stored securely and rotated periodically. You should implement OAuth for secure and standardized user authentication and authorization; OAuth is widely adopted and provides various grant types for different use cases.
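As a rough illustration of the API key practice, here is a minimal sketch using Flask (just one possible framework; the header name, the key store, and the route are all assumptions for the example, and real keys should of course live in secure storage):

from flask import Flask, jsonify, request

app = Flask(__name__)
VALID_API_KEYS = {"demo-key-123"}   # hypothetical; store and rotate real keys securely

@app.before_request
def require_api_key():
    # Reject any request that does not carry a known API key header.
    if request.headers.get("X-API-Key") not in VALID_API_KEYS:
        return jsonify({"error": "invalid or missing API key"}), 401

@app.route("/dividends/<symbol>")
def dividends(symbol):
    return jsonify({"symbol": symbol, "dividends": []})   # placeholder payload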
I also have a tutorial on this here that explains exactly how this authorization and authentication
flow is working. And finally, the JWTs, which come from JSON Web Tokens. If your API issues tokens for
authentication, use these. They are compact,
self contained, and can carry user
claims securely. When talking about the
security of an API, you should also think about authorization
and access control. Once a user is authenticated, you need to control what
they can do within your API. You can implement either
role based access control. Assign roles to users or systems and define permissions
based on those roles. Regularly review and
update permissions. As we talked about before, an admin and the user are very different in what they
can do in an application. Or you can use, as
I said, scopes. The scopes are used to fine tune access control
within your API. Limit access to specific
resources or actions based on the user's consent
and the client's scope. So each client gets a scope
and based on that scope, his privileges are calculated. When talking about
data encryption, we can firstly talk about HTTPS and transport layer security. You should always use it to encrypt data in transit. TLS ensures the data
exchanged between the client and your API is secured and cannot
be intercepted. For data at rest, you should consider encrypting sensitive data stored on your server. So not just in traffic when the data is going out in the requests; you should also keep sensitive data, like passwords or card details, stored encrypted on your servers. Just use industry-standard
encryption that you can find quite easily
when searching on Google. When talking about
input validation, you should first of all, sanitize user input, which means you should never
trust user input. You should do this to prevent common attacks like
SQL injection. And another thing that
you can do when it comes to input validation
is using API gateways. Implementing them can filter and sanitize incoming
requests before they reach your API endpoints or your database to retrieve information out of it or
change information in it. Implementing rate limits defines a barrier that is very important to prevent abuse or denial-of-service attacks. Consider different rate limits for different types of
clients or users. Finally, logging and monitoring. Keep detailed logs
of API activity, including authentication
and authorization events. And retain logs for an appropriate period to
aid in investigations and implement real time
monitoring and alerting to detect and respond to
security incidents properly. Api security is not
a one time effort, but an ongoing
process that requires vigilance and adaptation
to evolving threats. As you know, the tech space is developing quite quickly and
we need to keep up with it by understanding the critical components of API security and
following best practices, you can protect your digital
assets or your databases, preserve your
organization's reputation, and ensure compliance
with regulations. Keep in mind that security is a shared responsibility
and every stakeholder in your organization
should be educated and aware of their role in
maintaining API security. With the right strategies
and tools in place, you can confidently
navigate the world of API driven applications while keeping your data
and systems secure. I really hope this lecture
will help you do that. Thank you very much for sticking with me up to the end of it, and I look forward to
seeing you in a future one.
17. Web Services: Hello guys, and welcome
back to this API course. In this lecture, we
will take a look at web services because they are the fundamental principle of modern
interconnected systems, enabling applications and software components
to communicate and exchange data seamlessly
across the Internet. They have revolutionized
the way we build, integrate, and extend software, paving the way for a world of interconnected services
and applications. In this lesson, we will explore the key aspects of web services, their types, protocols,
and their role in modern software. At its core, a web service is a software
system designed to support interoperable machine to machine interaction
over a network. It allows applications
to communicate and share data regardless of
their underlying platforms, programming languages,
or operating systems. This interoperability is a
fundamental characteristic of web services and sets them apart from traditional,
monolithic applications. Web services can be broadly categorized into
three primary types. First, we have
SOAP web services. This is a protocol
for exchanging structured information in the implementation
of web services. It relies on XML as
its message format, and often uses HTTP or SMTP
as the transport protocol. SOAP web services are known for their strict standards and strong support for security and reliability. However, nowadays they have become largely deprecated, as people tend to prefer REST web services that use JSON as their
message format. REST is an architectural style for designing network applications. RESTful web services adhere
to a set of constraints, including statelessness,
client-server architecture, and a uniform interface. They use standard HTTP methods like GET, PUT, POST, and DELETE to perform
operations on resources. This makes them lightweight
and very easy to implement. Lastly, we have JSON-RPC and XML-RPC web services. These are remote procedure call protocols that use JSON and XML as
the message format. They provide a simple way to invoke methods on
remote servers, making them suitable for
various applications, particularly when simplicity and efficiency are essential. Now, to enable communication
and clients, several key protocols
come into play. The first protocol
that comes into play, I have an entire course on, and we have it mentioned even
here in several lectures. And it is the hypertext
transfer protocol. The HTTP is the foundation
of web communication. Web services naturally
often rely on HTTP as the transfer protocol
for sending and receiving data between
clients and servers. It provides a standardized way to make requests and
handle responses. We also have the
SOAP protocol that employs XML for message formatting, and often relies further on
HTTP or SMTP for transport. It includes a set of rules for security transactions
and message reliability. We also have REST here, which is much more important. REST comes from representational
state transfer. The web services that are restful use HTTP methods to interact with resources
identified by URIs. They embrace the principles
of statelessness, meaning that each request is independent of the others and does not rely
on a previous state. They also have as key aspects a uniform interface and
layered architecture. Lastly, we have JSON and XML, which are common data formats for structuring messages
exchanged between web services and clients. They are human readable
and machine parsable, providing flexibility
in data representation. When it comes to the role of web services nowadays
in modern software, they have become the
absolute backbone of the modern ecosystem, facilitating integration,
extensibility, and scalability in
microservices architecture. Small independently
deployable services communicate through
web services. This approach enables agility, scalability, and
easy maintenance. Mobile apps often rely on
web services to access back end resources including
user data from databases, media files, and real
time information. Internet of Things
devices and sensors communicate with cloud
services through web services, enabling data collection and control from remote locations. Lastly, e-commerce platforms use web services to facilitate
online transactions, including payment gateways, inventory management,
and shipping. While these web services
offer numerous advantages, they come with some
challenges as well. First, you should ensure the
security of web services, including authentication
and data encryption. This is very
important in order to prevent unauthorized
access and data breaches. Optimizing the performance
of web services, which includes response times and scalability, is essential for providing a responsive user experience. Handling changes and updates to web service APIs while maintaining backward compatibility is a delicate balance, and this is where
versioning comes in. Lastly, comprehensive and
up to date documentation is essential to help
users understand how to use web
services effectively. In summary, web services
have transformed the way we build and
interact with software. They provide a flexible
and standardized approach to integrating
applications and systems. By understanding
these different types of web services that
we talked about, their underlying protocols and the role in modern
software architecture, you can leverage this
powerful technology to create interconnected, efficient, and scalable systems. With all of this being said, I thank you guys very much for sticking with me up to
the end of this lecture, and I look forward to
seeing you in the next one.
18. How to Debug a Request: Hello guys, and welcome
back to this course. In this lecture, we
are going to discuss troubleshooting and
debugging HTTP requests. Because in my opinion, at least as a developer, it is very important
to understand how to troubleshoot and debug
HTTP requests effectively. This skill is essential for ensuring that web
applications run smoothly, diagnosing and fixing issues, and delivering a seamless
user experience. Http requests are at the heart
of every web interaction, whether it's loading a web page, submitting a form, fetching
data from an API, or even streaming
multimedia content. It all starts with
an HTTP request. However, these requests
are not always error free. Various factors
can lead to issues such as server
misconfigurations, network problems,
client-side bugs, or conflicts between
different components. And this is exactly
where this lecture comes in to help you. Understanding the common issues that can arise in HTTP requests is the first step in effective troubleshooting. Here we have things
like server errors, because servers can encounter
various errors such as 404, which is the code for Not Found, 500 for Internal Server Error, or 503, which is
service unavailable. These errors may result from misconfigured servers
or application issues. We also have errors
on the client side. Clients such as web browsers can also generate these errors, like 400 for bad request
or 43 for forbidden. These often occur due to issues with the client's
request or permissions. Network issues are
another factor that can disrupt communication
between the client and server. This includes problems like
DNS resolution failures, slow connections, or even
complete network outages. Cross origin resource
sharing violations can also block requests
to a different domain. If not configured correctly, this can lead to security
and functionality problems. Lastly, we have SSL
and TLS errors. These two layers of
security can cause errors. They can occur when HTTPS connections are not
properly established. These errors can disrupt
secure data transfer. Now the most important part of the lesson, debugging
HTTP requests. This can be a
systematic process to identify and resolve
issues effectively. Here are the steps to follow. First, check the browser
developer tools. Most modern web browsers offer developer tools that allow you to inspect network requests. You can view request and response headers, payloads, and error messages. This will happen in Google
Chrome, for example, if you press F12 (or Fn+F12 if you are on a Mac) and then go to the Network tab. Next, you should review
the HTTP status codes. These codes in the
response provide valuable information about
the outcome of the request. Familiarize yourself
with common status codes to identify the issue quickly. You can also inspect request
and response headers. Pay close attention to
request and response headers. They may reveal critical
details about the problem, such as missing
authentication tokens, incorrect content types,
or CORS issues. Next, you can examine the payload data if the request involves data transfer. Inspect the payload for
anomalies or issues. Mismatched data formats,
corrupted data, or missing parameters
can cause problems. You should check the
server logs as well. Server logs can
offer insights into what's happening on
the server side. They might reveal errors, performance issues, or,
again, misconfigurations. Next, you can verify
network connectivity. Ensure that your
network connection on the machine from which you are sending requests is stable. Sometimes intermittent
connectivity issues can lead to request failures. Lastly, you can use
online debugging tools. Several online
tools are available for testing and
debugging HTTP requests. These tools can help
simulate requests, check headers, and
validate responses. Here something worthy of
mentioning is that if you are developing the web
application where the HTTP error is happening, you can again press
function F 12 to go into the browser developer tools
and then press command P and search for the script where you think
that the error is happening, put a break point there, and then go on with
the debugging, which you can do by
refreshing the page. When it comes to common
debugging tools, there are several that can aid in debugging HTTP requests. First, we have of
course, Postman. This popular API testing tool allows you to send and
receive HTTP requests, inspect headers,
and view responses. Next we have Curl. The command line tool is
excellent for sending HTTP requests and viewing
responses in a terminal. We also have Fiddler, which is a web
debugging proxy that logs and inspects HTTP and HTTPS traffic
between the computer it is installed on
and the Internet. Lastly, we have Wireshark, which is a network protocol
analyzer and can help you troubleshoot network
level issues affecting HTTP requests.
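Alongside these tools, you can also debug a request in a few lines of Python with the requests library (the URL is a placeholder):

import requests

response = requests.get("https://example.com/api/resource", timeout=10)

# Inspect the pieces discussed in this lecture: status code, headers, payload.
print("Status code:", response.status_code)
print("Headers:", dict(response.headers))
print("Body (first 200 chars):", response.text[:200])

# raise_for_status() turns 4xx/5xx responses into exceptions you can catch and log.
response.raise_for_status()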
When it comes to troubleshooting and debugging, effective collaboration
with your team is vital for resolving complex
HTTP request issues. Keep thorough documentation
of your findings, including error messages,
logs, and debugging steps. Share this information
with colleagues who may assist in diagnosing
and fixing the problem. In conclusion,
troubleshooting and debugging HTTP requests is a fundamental
skill for web developers, as HTTP is the fundamental
protocol of web interactions. Being able to
identify and resolve issues quickly and
effectively is essential for delivering
reliable web applications and seamless user experience. By using developer tools,
examining status codes, inspecting headers and payloads, checking server logs, and
utilizing debugging tools. Developers like you
can diagnose and fix HTTP request issues while keeping their applications
running smoothly. With all of this being said, I thank you guys very much for sticking with me up to
the end of this lecture. I hope this gave you an advantage on troubleshooting and debugging HTTP requests that will come in handy in the future. Thank you guys, and I look forward to seeing
you in the next one.
19. HTTP Status Codes: Hello guys, and welcome
back to this course. In this lecture, we
are going to take a look at the HTTP status codes, also called return codes. These are a fundamental
part of web communication. The reason why these
status codes are so important is because
they are issued by a server in response to a
client's request made to that specific server and are the most relevant
and short pieces of information about how
the request went. The first digit of the
status code specifies one of the five standard
classes of responses. The message phrases
shown are typical, but any human readable
alternative may be provided. As I was saying, HTTP return
codes have three digits. If these three digits
are starting with one, the status you are
receiving is informational, meaning that the
request was received. If it starts with two, it means that the request
was a successful operation, meaning it was successfully received, understood,
and accepted. With three, it means
a redirection, as in, further action needs to be taken in order
to complete the request. If it starts with four, it means that your request
had a client error. Something went wrong on the
client side and the request contains bad syntax or it
just cannot be fulfilled. Lastly, if your three digit code starts with the digit five, it means that your request
had a server error. Something went wrong on
the server side, as in, the server may have failed to fulfill an apparently
valid request. Now let's make our way through the different
categories of responses and list some
examples for each of them. For informational
responses, we have the code 100 with the
keyword continue. This means the server has
received the request headers and the client should proceed
to send the request body. This code is used in
situations where the server needs to confirm that it is willing to
accept the request. For example, a user may submit a large file for upload to
a file-sharing service; think about WeTransfer. The server responds with 100 Continue to indicate
that it is ready to receive this big file
and the user can proceed with the actual upload. At informational responses, we also have 101 with the keywords of Switching Protocols. The server is indicating
here a change in protocol such as switching
from HTTP to WebSocket. The client should follow this new protocol for
further communication. For the next category, we have successful responses, and here we have a
very well known code, which is 200, followed
by the keyword of OK. This is the most
common success code, indicating that the
request was successful, the server has
fulfilled the request, and the response contains the requested data
or information. For example, think about a
user accessing a blog post, and the server responds with a 200 OK status code, delivering the requested
blog content successfully. We also have 201, which means Created. This status code indicates that the request resulted in the
creation of a new resource. It's often used with
post or put requests. You can think about a user that submits a registration form on a website and the server
responds with 201 Created, hopefully to indicate that a new user account has
been successfully created. Also at success messages, we have 204 for No Content: while the server successfully processed the request, in this case it has no response body to send
back to the client. This is often used
for delete requests. If a user deletes a comment on a social media platform and the server responds
with 204 No Content, it confirms the removal without returning any additional
redundant data. Moving on to
redirection messages, we have 31 with the keywords
of moved permanently. The requested resource has been permanently moved to a new URL. The client should update its bookmarks or
links accordingly. You can think about an e-commerce website that changes its URL structure, and the user attempts to access an old product page. The server responds with 301 Moved Permanently, redirecting the user to the new URL for the product. We also have here the code of 304 Not Modified. This status is used to indicate
cached version of the resource is still valid and there is no need to
download it again. If a user accesses a frequently visited website, the server, recognizing that the user's browser cache is up to date, responds with 304 Not Modified, indicating that the
cached version is still valid and no
new data is required. For client error responses, we have 400, which is Bad Request. The server could not understand the request here due
to invalid syntax, missing parameters, or maybe
other client-side issues. Think about a user that
submits a search query with invalid parameters
on a travel booking site, and the server
responds back with 400 Bad Request
to indicate that the request lacks
essential information. Also, we have 401 for Unauthorized. The request requires
user authentication. The client that
sends this request should provide
valid credentials, for example, a user name
and password or a token. All of this in order to
access the resource. For example, a user
attempts to access a private document on a cloud storage service
without proper authentication, the server will
naturally respond back with 401 Unauthorized, prompting the user to provide valid login credentials before getting back the whole response. Lastly, we have 403 Forbidden. The server understood
the request but refuses to fulfill it. The client's request is valid, but the server has decided
not to serve the resource. If a user tries to access an admin only section
of a web application, and the server responds
with 403 Forbidden, it indicates that the
user does not have the necessary permissions
for that action. You can see that 401 Unauthorized is received when you do not provide any credentials, and 403 Forbidden is received when
you provide credentials, but they do not have the
necessary permissions attached. We also have 404 Not Found here. The requested resource could
not be found on the server. It indicates that the URL is invalid or that the
resource no longer exists. If a user clicks on a
broken link that led to a deleted product page on
an e commerce website, the server will respond
with 404 Not Found, in order to notify the user that the requested resource
no longer exists. For server error responses, we have 500, which stands
for Internal Server Error. This is a generic error
message, and it indicates that an unexpected condition prevented the server from
fulfilling the request. It's a catch all for
server side errors. You can think here
about a user that attempts to submit an
order on an online store, but due to an unexpected
issue on the server, this request results in a
500 internal server error. The user is advised
to try again later. Also, we have 502 for Bad Gateway. This status code indicates that
an invalid response from an upstream server, which often indicates network issues. You can think about a user
that accesses a web server or an API that acts as a
gateway for another service. The gateway server
encounters a problem when forwarding the request
to the upstream server, leading to a 502 Bad Gateway response. Of course, there are a lot more return codes
than the ones I enumerated, but in case you encounter one
that is not mentioned here, you can Google it and
you will be able to understand exactly what happened with your request.
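Since the class of a return code is given by its first digit, a tiny Python sketch can translate any code you encounter (the URL is again a placeholder):

import requests

CLASSES = {
    "1": "informational",
    "2": "success",
    "3": "redirection",
    "4": "client error",
    "5": "server error",
}

response = requests.get("https://example.com/api/resource")
code = response.status_code                   # e.g. 200, 404, 500
print(code, "->", CLASSES[str(code)[0]])      # the first digit picks the class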
When it comes to the distinction between safe and unsafe HTTP methods, this is not directly related to HTTP status codes. Safe methods such as GET and HEAD are designed to have no significant
impact on the resources. They are generally
associated with successful or
informational responses. While unsafe methods
like PUT or POST can have a range of responses depending on the
specific situation, including success, client,
and server errors. As a conclusion for this lesson, HTTP return codes are essential for understanding the
results of HTTP requests. They provide information about
the success, redirection, client errors, and
even server errors associated with a given request. These codes are a
fundamental part of web development and
troubleshooting, aiding both developers like you and users in diagnosing issues and interpreting the outcomes of their interactions
with web services. With all of this being said, I thank you guys
for sticking with me up to the end
of this lecture, and I look forward to
seeing you in the next one.
20. HTTP Request Methods: Hello guys and welcome
back to this course. In this lecture we will look
at a very important concept, and that is the HTTP methods. They are often referred
to as HTTP verbs. And they are the foundation
of communication between clients and servers
on the World Wide Web. They define the actions that the client can request
from a server, ranging from retrieving data
to modifying resources. In this lesson, we will explore the various HTTP methods,
their practical examples, their role in web communication, as well as their classification
as safe or unsafe, and the key fundamental concept of idempotency. To get started, let's firstly get into the CRUD acronym. It represents the four
fundamental operations for managing data in a database
or data storage system. It is commonly used in the
context of APIs, databases, and software
development to describe the basic actions that
can be performed on data. Each letter in the acronym corresponds to a
specific operation. C is for create. The create operation
involves adding new data, records, or resources to a
database or a data store. It is the action of
inserting a new piece of information or a new entity. In the context of an API, this typically means sending a request to create
a new resource. The R letter is for Read. This operation entails retrieving or accessing data from a database or data store. This operation does not modify the data. It is all about fetching and reviewing existing information. In APIs, this corresponds to sending a request to retrieve
data from a resource. U stands for update. This involves modifying
existing data in a database. It is the action of making
changes to an existing record, such as updating its attributes or content. In an API, updating typically
requires sending a request to modify a resource. D stands for deletion. And this operation is
the action of removing data records or resources from a database or a data store. It results in the
permanent removal of the specified information. In the context of APIs, this corresponds to sending a request to remove a resource. The CRUD operations are fundamental in data management and data manipulation. They serve as the foundation for building and interacting with databases, web services, and applications that need to perform these essential tasks. When designing APIs or working with databases, CRUD operations are a crucial concept for developers like you, as they provide a
standardized way to manage data efficiently
and accurately. We talked about the CRUD operations first, as they intertwine with the HTTP verbs of POST from Create, GET from Read, PUT from Update, and DELETE from Delete, as we will see next. Let's get back now
to the HTTP methods. Firstly, we have GET. It is used to
retrieve the data of a resource that is
probably identified by an ID parameter or a general list that contains all the elements
of that resource. A few notes on the GET requests are that they can be
cached and bookmarked. They remain in the browser
history and should never be used when dealing
with sensitive data. Also, they have
some restrictions regarding their length, and, as we saw when defining them, they are used to only
retrieve data and not modify it in
any shape or form. Imagine you're
using a web browser to access an online
news website. When you click on
a news article, your browser sends
a GET request to the website server asking for
the content of the article. The server responds by sending back the
requested article, allowing the client to read it. Next we have post
HTTP method greet the new resource and sends it to be processed by the server. It is not safe and can have side-effects on the
server or the resource. Some observations when it comes to POST methods are that, unlike GET methods, they cannot be cached, bookmarked, or even kept in the browser history. And also, they do not have any
restrictions on the length. Also, the back button, if hit on a POST request, will resubmit data, unlike on a GET request, where it will do no harm. For a real-life scenario here, imagine you're using
a social media app and you decide to
create a new post. When you press the actual post button, the application sends a POST request to the server, including the text and the attached image media. The server processes your request and saves the post, making it visible to your followers. Next, we have the PUT HTTP method. It updates a specified resource, meaning that it replaces all current representations of the target resource
with the request payload. The main difference
when it comes to the post and PUT methods are the results that you get when repeatedly doing them
over and over again. While the put method
will always produce the same result as it updates a resource over and over again, the POST method will have side effects if trying to double-enter the same
entity into a system. Imagine you have a personal
blog and you notice a typo in one of your
published articles. You use a content management system and make corrections to the article's content. When you save the changes, the CMS sends a PUT request to update the article on the server, ensuring that the corrected version
is now displayed. Next, we have DELETE. This method requests the removal of a resource
at the specified URL. It should delete the
resource if it exists. Let's say you have an email
application open, like Gmail, and you want to declutter your inbox by removing
an outdated email. When you select
that specific email and click delete,
the application sends a DELETE request to the email server, which removes the
email from your inbox. As for the PATCH HTTP method, it is used to apply partial
modifications to a resource. It is often used when
you want to update only a subset of
the resource's data. Imagine you are using a task management application
to keep track of your work. You realize that the due date of a task has changed. So you open the app and update only the due date without altering any other task details. The application will send a
PATCH request to the server, which modifies only the
due date of the task. Next, we have HEAD. It is similar to GET, but it only requests the headers of the resource without the actual data. It is used to check the resource's metadata or existence without transferring
the entire actual content. This could be used, for example, if you were browsing a shopping website and wanted to buy a product. Before viewing the
product details, you click on the check
availability button. The website sends a
HEAD request to the server asking for metadata such
as product availability, price, and the stock
information without loading the entire product page and all of its
corresponding details. Lastly, we have the
HTTP method of options. This method retrieves
information about the communication options
available for a resource. It allows a client
to discover which HTTP methods are
supported by the server. If you are building a web
application and you need to determine which actions are
supported by a web service, then this HTTP method
is crucial for you. The application sends
an options request to the service by asking what methods are available for a
specific resource. This helps your
application adapt to the services capabilities and know what kind of requests
it can send to it. These were all the HTTP methods, but they are more key concepts
to be understood here. Next, let's look at how we
Next, let's look at how we can classify an HTTP method. When it comes to
these HTTP verbs, they can be broadly
categorized into two groups. Firstly, safe methods are those HTTP methods that
are designed to have no significant impact on
the resources they target. In other words, when a
safe method is used, it should not cause
any changes or side effects on the server or the resource itself. The following HTTP methods are considered safe. GET: it should not alter the state of the resource or server, as it is used solely
for data retrieval. Also HEAD: it retrieves metadata without transferring
the entire content, making it again safe. We also have unsafe
HTTP methods. These, in contrast, are
HTTP methods that can potentially lead to changes in the state of the
resource or the server. They are not safe in the sense that they
may have side effects, such as resource creation or
modification or deletion. The following HTTP methods
are considered unsafe. POST: it is not considered safe because it often leads to the creation of a new resource or the modification of an existing one. PUT: it directly modifies
or creates resources, and therefore, it is
not a safe method. DELETE: it leads to the removal of the
specified resource, making it inherently unsafe. Lastly, PATCH: although it is more targeted than POST or PUT, it can still alter
resource data, making it an unsafe method. Let's get now into the very important term of idempotency. In the context of HTTP methods, idempotency refers to the property of an operation where performing the same action multiple times has the same effect as performing it once. This means that if you send an idempotent request multiple times, the state of the system and
the resource it interacts with should remain unchanged
after the first request. Let's take a look at some examples of idempotent HTTP methods. The GET method: it is idempotent. When you retrieve information with a GET request, making the same request
again and again should not change the state of the server or the resource
you are fetching. The PUT method as well is idempotent: if you use PUT to update a resource with certain data, sending the same PUT
request multiple times will result in the resource containing the same
data each time. Lastly, we have DELETE. It is an idempotent method because if you request the deletion of a resource using DELETE, sending the request
repeatedly won't change the fact that the resource
has been deleted. In contrast to idempotent methods, non-idempotent methods can have different outcomes when repeated. For example, the POST method is non-idempotent, and repeated POST requests for
creating a resource can lead to the creation of
multiple identical resources. Each request results in a
new resource being added. The PATCH request is typically non-idempotent as well. Repeating a PATCH request may update a different
resource each time if the changes are incremental or depend on the current
state of the resource. Let's get now into why idempotency matters. First of all, idempotent methods contribute to system resilience. In scenarios where network issues, timeouts, or communication failures occur, idempotent operations can be safely retried without causing unintended side effects
or data corruption. Idempotency is closely related to caching as well. When a response to an idempotent request is cached, subsequent requests can be satisfied from the cache, reducing the load on the server and
improving performance. Idempotent methods, in short, ensure predictable behavior: developers and users can rely on the fact that repeating
a request will not result in different outcomes or unexpected changes in
the system's state. Lastly, idempotency
simplifies error handling. When a request fails or is
interrupted, retrying it does not introduce inconsistencies or unexpected results. In general, when designing APIs,
appropriate HTTP methods for different operations
and document their idempotent behavior. This helps clients and developers
like you understand how requests should be
handled and what to expect when they
interact with the API. With all of this being said, I really hope some of the concepts presented in this lecture will help you out in the future, and I look forward to seeing you in the next lectures.
21. Headers Query Parameters: Hello guys, and welcome back to this course on APIs. In this lecture, we
will take a look at request headers and
query parameters. We will explore
the critical role they have in API requests, how they are used, and best practices for leveraging
them effectively. Starting off with what they are: HTTP headers are metadata included in API requests and responses. They convey information about the request, such as the content type, authentication credentials, or preferred language. Query parameters
on the other hand, are key value pairs added
to the end of a URL's path, introduced by a question mark and separated by ampersands. They provide
additional information to the API about the request, typically for filtering,
sorting, or pagination purposes. Moving on to headers
in API requests, they play a very important role for various reasons. First of all, we have authentication and authorization. Headers often carry tokens or credentials required
to authenticate the client with the API. As a concrete
example here, you can think of the OAuth security protocol. The bearer token that you use there in order to authenticate will be stored in the Authorization header. Next, we have content
negotiation. The accept and content type
headers allow clients to specify the data format they can accept or send, such as JSON or XML. Headers like Cache-Control and ETag control caching behavior to improve performance. APIs may use headers like X-RateLimit-Limit and X-RateLimit-Remaining to enforce
rate limits on requests. We have a whole
lecture dedicated to rate limiting on
APIs in this course, so you can also check
that out. Moving on. Headers like Accept-Language specify the preferred language for the response content that you
will get out of the API. When it comes to query
parameters in API requests, they provide valuable
customization options for API requests. We have filtering: APIs often allow clients to filter data by specifying query parameters such as ?filter=keyword to retrieve specific records. Clients can also sort API responses using query parameters like ?sort=field or ?order=asc or desc. Query parameters like ?page and ?per_page enable paginated access to large datasets. APIs may support full-text search with query parameters like ?q=searchterm.
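Putting the two together, here is how a client might send both headers and query parameters with Python's requests library (all the names and values are illustrative):

import requests

headers = {
    "Authorization": "Bearer <token>",    # authentication
    "Accept": "application/json",         # content negotiation
    "Accept-Language": "en",              # preferred response language
}
params = {
    "filter": "dividends",   # filtering
    "sort": "date",          # sorting
    "page": 2,               # pagination
    "per_page": 50,
}

# requests encodes params as ?filter=dividends&sort=date&page=2&per_page=50
response = requests.get("https://example.com/api/stocks",
                        headers=headers, params=params)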
When we talk about best practices for headers and
query parameters, we have quite a few. The first thing that I would
suggest you do is always secure authentication,
headers and credentials. Use authentication
tokens, short lifetimes, and secure token
storage practices. Maintain consistency in
the use of headers and query parameter names across your API, endpoints
and documentation. Validate and
sanitize user input, especially when query parameters are involved, to prevent security vulnerabilities like SQL injection. Provide clear and comprehensive
documentation on supported headers
and query parameters, including their usage, data
types, and expected values. If rate limiting is applied, communicate the rate limits and remaining quotas using
response headers. Consider versioning your API to ensure backward
compatibility and use version specific headers or query parameters
when necessary. Again, here we have
a whole lecture in this course about
API versioning, so you can take a look at that. Also, headers and
query parameters are the building
blocks of API requests, enabling clients to communicate effectively with APIs while customizing
their interactions. Understanding their
role, proper usage, and best practices is essential
for developers seeking to harness the full potential of APIs in their applications. Thank you, guys,
for sticking with me up to the end
of this lecture. I really hope you got
something out of it. And I'll see you
in the next one.
22. OpenAPI Specification: Hello guys, and welcome back to this Open API specification
tutorial where we understand how we can better document our web Rest APIs. In this lecture, we are
going to talk about what exactly is an Open
API specification. This OpenAPI specification was named the Swagger specification before. What is it? It's an API description format for REST APIs. It is equally suitable both for designing new APIs before implementing them and also for documenting your existing APIs, so that users can more easily understand how
they can make calls in order to retrieve resources and data in general from them. Open API lets you
describe your entire API, including the
available endpoints, the operations on them, the request and response formats that are supported, authentication methods, contact information, and also other details. The OpenAPI specification
since the original release. The SwaggerHub online platform that we will use in this tutorial currently supports version 3 of OpenAPI, which is the latest one, but also supports version 2. On a more concrete level now, this OpenAPI file that we are going to create in
the YAML language, we will see that
in just a moment. It allows you to describe your entire API including
the available endpoints. For example, here we have users and operations
on each end point. So these are the
HTTP methods that we talked about in
the last lecture. For example, on the
end point users, we can have documented a get method that
would get the list of users and the post method
that would create a new user. It also lets you describe
operation parameters, input and output
for each operation. For example, if you want to get the information about
a specific user, you can give it a parameter, input as an ID, and it will retrieve the information about
that specific user. It also lets you document
authentication methods for that API in
case there are any. Also lets you document
contact information, license, terms of use, and so on. When we get hands-on with the online platform, SwaggerHub, we will see exactly
the syntax for each part of the
documentation that we can write about
our Web Rest API. These API specifications
that we talked about can be written in either the YAML or JSON languages. This format is easy to learn and readable to both humans and machines.
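Just to give you a taste before the next lectures, here is a minimal hand-written OpenAPI 3 sketch in YAML (the title and paths are illustrative, not the exact ones we will build):

openapi: "3.0.0"
info:
  title: Dividend API        # illustrative title
  version: "1.0"
paths:
  /users:
    get:
      summary: Get the list of users
      responses:
        "200":
          description: A JSON array of users
    post:
      summary: Create a new user
      responses:
        "201":
          description: User created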
We are going to talk about both of these in the next lectures. Thank you for sticking
with me to the end of this tutorial and I look forward to seeing you
guys in the next one.
23. SwaggerHub Overview: Hello guys and welcome back to this open API
specification tutorial. In this section, we are going
to take a look at how we can use the SwaggerHub tool and understand what it is. More specifically, in this lecture, we are going to discuss the definition of the SwaggerHub tool and give an overview of this online platform, starting with the definition of it. SwaggerHub is a platform
that is available online and helps you
develop your own API, whether it is public for your personal use or internal and private
for your company. It helps you develop your API by documenting it in detail. The principle that SwaggerHub relies on heavily is the design-first, code-later principle, as you
will see in just a moment. With Swagger Hub, you
start by thinking about how your API is
going to look like. Furthermore, laying
out its resources, operations, and data models. Only once all the
designing of your API is complete and you have a
clear structure for it, you can start the
actual implementation of the business logic of it. Now the definitions
of this API that you write is saved in
the Cloud and can also be synchronized with external versioning systems
like Github or Bitbucket. Which makes it much easier to collaborate with your
team on it and also maintain more versions of it once it gets traction
and it starts to evolve. As you can see here, I have the Swagger Swagger Hub page
open in my Chrome browser just to give you a quick
look in the background while I explain to you guys what this tool is basically used for. The Swagger Hub tools integrates the Swagger tools in order to make it more known
and commonly used. It supports the UI editor code
En and also the valid data into its online
collaborative platform that we are going to take a
look in just a few moments. The syntax for describing these APIs in the
Swagger Hoop tool is the Swagger Open API to
as you probably expected, with it being the default
format of API definitions, we will use Y L for writing the structure of your API in discourse
in general. It is the chosen language for writing into this swagger
to you can however, paste into the editor
Json text if that is what you are more comfortable with and it knows how to
parse and recognize it. But once you save the
work you have done, the Json that you wrote will be converted into YAML as well. Now of course, the
swagger hop platform that I am presenting to you in this tutorial is
available online and is also free to
use for 14 days. Now we are going to take an
As you can see here, I'm at the home of my SwaggerHub app in my browser. You first notice the side panel on the left part of the screen. It starts with the My Hub page, which lists all the APIs that you have access to. They can be ones created by yourself, or ones you were invited to collaborate on by other people. For now, I have just a store of my own that I created by using the pet store template, so it's something pretty basic that we are going to take a look at in just a moment. You can also search SwaggerHub for an API from the side panel by using this search option. This is a way for you to see a lot of great public APIs developed by SwaggerHub users, and it will help you a lot, especially if you're just getting started documenting your first API: you can get inspired by their APIs' structure or just take a quick look at some of the syntax they use. Now, when clicking a random API from here, and I'm just going to select this random one, you're going to notice that you get a page with a split view. In the left part, you are going to see the YAML code of the OpenAPI definition; in the right part, the documentation generated in a beautiful reference style. You are, of course, able to resize these panels from this side bar right here. Another observation here would be that SwaggerHub lets you test these API calls directly in the browser. But I'm just going to switch to the pet store template, which is more representative of what I wanted to show you.
Here we have the pet store template, for example. We can use the GET operation to get a user. Once you click on Try It Out, you can enter a username that needs to be fetched; it also points you to user1 for testing. If we write user1 here and click on Execute, we get the response. The server response had a code of 200, which is OK, meaning that our request ended up just fine. And this is the response body, which obviously has the ID of the user, 1, the username, which is a string, a first name, which is a string, a last name, which is a string, an email, a password, a phone, and a user status. These are, of course, hardcoded strings, but if you were actually making a request to your own API from Postman, with actual users in your database, you would see values that are specific to those users here.
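As a rough illustration of that response body (the values are the template's hardcoded example data, so treat this as an approximation of what you see on screen):

```json
{
  "id": 1,
  "username": "user1",
  "firstName": "string",
  "lastName": "string",
  "email": "string",
  "password": "string",
  "phone": "string",
  "userStatus": 0
}
```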
Apart from that, we also have a few response headers, and the request duration: how long it took until our response came back from the server. Also, you have detailed here the responses that you can get, with their code, description, and links. First of all, we have the 200 code, which has 'successful operation' as its description. It also gives you an example value of what you are going to get if the request you made ended up successfully and the response you got is a valid one. But you might also give it another user that is not available in the database, in which case you will end up with a 404 or a 400 error code.
Also, the navigation panel on the left side has the role of showing you the list of operations and data models that are available in this OpenAPI. By clicking on them, it will automatically take you to that portion of the YAML script. For example, if we click on the HTTP method that I just described earlier, you can see that it automatically took us to the portion of the YAML code where it is described, in order for it to be beautifully shown in this reference-style view. Now, a few other buttons available here would be the ones on the leftmost side of the screen, which let you hide the navigation, which is this left navigation panel that I just talked about; hide the editor, which is of course the navigation and also the YAML code; and also hide the UI, which is the reference-style document on the right part of the screen.
Going up a bit, we have some quick information about our OpenAPI description right here, where we have a few details: some integrations, the owner it was created by, and other details like that. Of course, you can open your API, rename your API, or compare it and merge it. We are not going to get into that right now, but let's go here into its version. If we click on this down button, you can see that you have a few options available for your API. You can make your API private, which I have to tell you that you can do only if you have the premium plan attached to your account. Then you can publish your API, which is a very important button once you're done with writing your API specification. Then you can delete this version. And you can also add a new version, in case your API evolved and you want to develop a whole new version of it.
Now, you can view the documentation in a whole other page, which basically just takes the reference-style document and shows it to you in a full page, for you to see it in a more relaxed fashion. Other than that, you can click on the API options, where you can edit your GitHub push and also reset the changes. You can disable the notifications and share and collaborate on this API. Lastly, there is the Export button, which presents you with different ways of downloading your API: either in YAML or JSON, as documentation, as a server stub, and also as a client SDK. In the next lecture, we are going to take a closer look at an OpenAPI and at its corresponding YAML code, in order to learn how we can make documentation for your very own API using this tool. Thank you guys for sticking with me to the end of this lecture, and I really look forward to seeing you guys in the next one.
24. Info, Server and Tags: Hello guys, and welcome back to this OpenAPI Specification tutorial, where we talk about the SwaggerHub online platform that you can use to document your API in detail. In this lecture, we are going to discuss the info, servers, and tags parts of the YAML script that is used to create the Swagger documentation for your API. As I stated in the previous lecture, this dark left part is the editor. We write the YAML script that is used to render the reference interface where the API and its endpoints are included. Each API definition, as you can see, starts with the OpenAPI version, which, in the case of this API (that is, by the way, the default pet store template that they provide on the platform for you to get started and get a pretty good idea of the features the platform can offer), is 3. That's the way the YAML script starts when writing an OpenAPI specification.
The next section is called info, and it contains the metadata about your API. This metadata has information about the API: title, description, version, and so on. This info section may also contain contact information, the license name and URL, and also other details. As you can see, in our case it has the description 'This is a simple pet store server. And you can find out more about Swagger', which we can see rendered right under the title here. Then the version of our API is pretty obvious; that is 1.0, which is rendered in this gray box right here. Then we have the title, Swagger Petstore, and then some Terms of Service, which are right here, and they also specify the link to redirect to. Then some contact, which is the contact of the developers right here, which is an email. And then some information about the license and its URL, which is Apache 2.0. That is pretty self-explanatory for the first part of the API documentation. Put together, the info section looks roughly like the sketch below.
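This approximates the pet store template's own values (treat the exact strings and URLs as illustrative, not authoritative):

```yaml
info:
  title: Swagger Petstore
  description: This is a simple pet store server.
  termsOfService: http://swagger.io/terms/
  contact:
    email: apiteam@swagger.io        # contact of the developers
  license:
    name: Apache 2.0
    url: http://www.apache.org/licenses/LICENSE-2.0.html
  version: 1.0.0
```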
Next, as you can see, we need to define the API server address and the base URL for the API calls that we are going to make later. This servers section can also have a description. Now, suppose our API will be located at https://api.example.com/v1; then it can be described with the servers tag, then the url, as you can see on line 19, and then the path I just told you about. More concretely, in the case of this pet store template, you can see that the description is 'SwaggerHub API Auto Mocking', and then it gives you the URL, which is the SwaggerHub address followed by my name, the store name, and the version. And you can see that you can also select this one as your main server, and it is also available here. A minimal sketch of the section follows.
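Roughly like this, with a hypothetical base URL standing in for the auto-mocking one that SwaggerHub generates for you:

```yaml
servers:
  - url: https://api.example.com/v1   # base URL prepended to every path
    description: SwaggerHub API Auto Mocking
```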
The next section, right before we go into the paths one, is the tags section. What this does, basically, is let you assign a list of tags to each API operation you have in your structure. Tagged operations may be handled differently by tools and libraries; in our case, Swagger UI uses these tags that you give it to group the displayed operations, this way giving a more elegant and put-together design to your endpoints. Optionally, here you can also specify an externalDocs entry for each of your tags by using the global tags section on the root level, which is this one right here at line 21. The tag names here should match those used in all of your operations. This is the way that SwaggerHub is able to map each HTTP method to its corresponding tag and show them grouped. As you see on the screen right here, we have the pet grouping, the store grouping, and the user grouping. The operations are, as we will see in the next lecture, the HTTP methods that are done on a specific path. For each of those operations, for example this POST here, you can see that it has the tags section with 'pet', and so it knows that the POST operation for the /pet path is going to be in the pet group. Now, the tag order in the global tags section also controls the default sorting in Swagger UI. As you can see, pet is the first tag, store is the second one, and user is the last. Note here that it is possible to use a tag in an operation even if it is not defined on the root level, and it will also show just fine, as you can see here. More concretely, in our example, at the tags root level we have the first one, which is pet, with the description 'Everything about your Pets' that is shown right next to the title of the tag. Also, you have this externalDocs description, which is shown on the right part of the tag; that is 'Find out more'. And you can also provide the URL that is shown here. In YAML, the whole section looks roughly like the sketch below.
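An approximation of the template's tags section (the descriptions mirror what you see rendered on the right):

```yaml
tags:
  - name: pet
    description: Everything about your Pets
    externalDocs:
      description: Find out more
      url: http://swagger.io
  - name: store
    description: Access to Petstore orders
  - name: user
    description: Operations about user
```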
This was about it with the first lecture on the SwaggerHub tool, covering info, servers, and tags. In the next lecture, we are going to discuss paths and the operations that are done on them in this YAML script, which then go on to show in the UI render of the Swagger API documentation. Thank you guys for sticking with me to the end of this tutorial, and I really look forward to seeing you guys in the next one.
25. Paths and their Operations: Hello guys, and welcome back to this OpenAPI Specification tutorial, where we learn more about the SwaggerHub platform and how we can better document our APIs using it. In this lecture, we are going to discuss paths and operations in the SwaggerHub platform, more specifically their syntax in the YAML document, in order for them to be rendered as you can see in the right part of the screen here. In OpenAPI terms, paths are the resources, also known as the endpoints, that your application programming interface exposes. This can be, for example, /users, or /users/{id}/info. The operations are the HTTP methods made available on these paths that are used to manipulate them, such as GET, POST, or DELETE. For each specific path, we can articulate the HTTP methods, or, as their name is here, operations, that are available for that path. For example, here we have the paths keyword, then we have the /pet path, and then we have multiple operations, for example PUT and POST. API paths and operations are defined in the global paths section of your API specification; in the case of this script right here, that is at line 34. A bare-bones paths section, sketched in YAML, follows.
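A trimmed-down approximation of the template's /pet path with two operations (the response details are abbreviated for brevity):

```yaml
paths:
  /pet:                    # a path (resource) exposed by the API
    put:                   # operation: update an existing pet
      summary: Update an existing pet
      responses:
        '400':
          description: Invalid ID supplied
    post:                  # operation: add a new pet to the store
      summary: Add a new pet to the store
      responses:
        '405':
          description: Invalid input
```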
All the paths that we find here are relative to the API server URL: the full request URL has the syntax of the server URL followed by the path that you see right here. The global servers that we talked about in the last lecture can also be overridden on both the path level and the operation level. Paths may have an optional, shorter summary and a longer description for documentation purposes. These fields are again available in the YAML document, specifically for each of them. As you can see here, on the /pet path we have the summary 'Add a new pet to the store'. Also, we have the description of the response, '405: Invalid input'. This information is supposed to be relevant to all operations in this path. The description can be multi-line, as it may be a longer one, and it supports CommonMark markdown for rich text representation. Now, as far as templating goes, you can use curly braces to mark parts of a URL as path parameters, and you can then use the parameters tag, as you can see right here, to specify multiple features of each such parameter, roughly as in the sketch below.
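For example, nested under the paths section, an approximation of the template's 'find pet by ID' operation:

```yaml
  /pet/{petId}:            # curly braces mark petId as a path parameter
    get:
      summary: Find pet by ID
      parameters:
        - name: petId
          in: path         # this parameter lives in the URL path itself
          required: true   # path parameters must be marked required
          schema:
            type: integer
            format: int64
      responses:
        '200':
          description: Successful operation
```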
As I've said before, you can define operations, so, HTTP methods that can be used to access that path and read it, alter it, or delete it. OpenAPI 3 supports the GET, POST, PUT, PATCH, DELETE, HEAD, OPTIONS, and TRACE operations. A single path can support multiple operations, for example a GET on /users to get a list of users and a POST on /users to add a new user. A thing to pay attention to here would be that OpenAPI defines a unique operation as a combination of a path and an HTTP method. This means that two GET or two POST methods for the same path are not allowed, even if they have different parameters, as parameters have no effect on the uniqueness characteristic. Each operation of your API also supports some optional elements for documentation purposes. Just like paths, operations have a short summary and a longer description of what they do. The description can be multi-line and supports CommonMark as well. They also have tags, which are used to group operations logically, by resource or any other qualifier, as you can see here; for example, on line 51 we have the pet tag. Also, the externalDocs attribute here is used to reference an external resource that contains additional documentation. Now, OpenAPI 3 supports operation parameters passed via the path, the query string, headers, and cookies. You can also define the request body for operations that transmit data to the server, such as POST, PUT, and PATCH. Query string parameters must not be included in paths; they should be defined as query parameters instead, with the 'in: query' attribute that we can see here on line 77. For the parameter named status, we have the 'in: query' specifier, roughly as sketched below.
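A small sketch of that parameter, as it would sit under an operation:

```yaml
      parameters:
        - name: status
          in: query        # passed in the URL as ?status=...
          required: true
          schema:
            type: string
```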
Another quick, and pretty obvious, observation here would be that it is impossible to have multiple paths that differ only in the query string, as OpenAPI considers a unique operation to be a combination of a path and an HTTP method, and additional parameters do not make the operation unique. Another attribute for your operations here is the operationId, which is an optional unique string used to identify an operation. For example, here we have the operationId for the POST method of the /pet path. If you decide to provide these IDs, pay attention to the fact that they must be unique among all operations described in your API. Some code generators use this attribute's value to name the corresponding methods in code. You can also mark specific operations as deprecated, to indicate that they should be transitioned out of usage, with the syntax of another one of these features: deprecated, and then the value of true, as sketched below.
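That takes just one extra line on the operation; a minimal, hypothetical sketch:

```yaml
    get:
      summary: An old operation being phased out
      deprecated: true     # rendered as deprecated in the Swagger UI
      responses:
        '200':
          description: Successful operation
```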
The global servers array from the beginning of the YAML script, which we talked about in the last lecture and which is present at line 16, can be overridden on both the path level and the operation level. This is useful if some endpoints use a different server or base path than the rest of your API. This can be the case when we talk about a file upload or download operation, or maybe a deprecated but still functional endpoint. This was about it with the paths and operations section of our SwaggerHub YAML script. Thank you guys for sticking with me to the end of this tutorial. In the next lecture, we are going to talk about schemas. I really look forward to seeing you guys there.
26. Schemas for data models: Hello guys, and welcome back to this OpenAPI Specification tutorial, where we understand how the SwaggerHub platform works and how we can write YAML code in order to write better documentation for our API with Swagger. In this lecture, we are going to talk about schemas and the schemas section of your YAML script. As you can see here, in the pet store template of the API documentation, if you scroll to the bottom we have, in the YAML script as well as in the UI, these six schemas that define the data models available in the back end of our database. These data models are objects that can be exposed and handled through the endpoints of our API, which we document here in OpenAPI 3 and, therefore, in SwaggerHub. The schemas section of your YAML script refers to the data models that are available, as I said, through your endpoints. The data types of these models are described using a schema object. Now, when describing a schema object, you should use standard keywords and terms from JSON Schema. Starting off with the data types that are available in schema objects: the data type of a schema is defined by the type keyword. The syntax here would be, for example, type, and then string, or whatever type it is. In OpenAPI, we have access to the following basic types: string, number, integer, boolean, array, and object. For example, here the Order entity has the id property, and it has the integer data type; it's pretty straightforward. These types exist in most programming languages, though they may not go by the same names that we see here. Using these types, you can describe pretty much any data structure that you may be dealing with when talking about the fields of a data model from a back-end database.
When dealing with numbers, there are the minimum and maximum attributes, which specify the range that a certain field of the model can take. Also, the length of a string can be restricted using the minLength and maxLength attributes: just like with the type attribute, you can write minLength and maxLength, and that should limit the length of your string. The same is valid for the range of an integer, just in case you want the fields of your models to be a little bit restricted, and you want to inform the user that is going to use your API, and help him through the documentation, by telling him, for example, that when getting an order by ID, the ID should be between 1 and 10, because those are the only entries that you have in the database, so that he avoids a 404 error. A sketch of such restrictions follows.
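Hypothetically, inside a schema's properties (the field names and bounds here are invented for illustration):

```yaml
    properties:
      id:
        type: integer
        minimum: 1         # hypothetical lower bound on the value
        maximum: 10        # hypothetical upper bound on the value
      username:
        type: string
        minLength: 3       # restrict how short the string can be
        maxLength: 20      # and how long
```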
Now, a very useful type when dealing with schemas in SwaggerHub is the enumeration. You can use the enum keyword to specify the possible values of a request parameter or a model property. Other than the enumerations and the data types, OpenAPI 3, which is used in the SwaggerHub platform, lets you define dictionaries where the keys are strings. In case you do not know, a dictionary, also known as a map, hash map, or associative array, is a set of key/value pairs. In order to define a dictionary for your field, you need to use the type object, as you can see on line 536, and the additionalProperties keyword, to specify the type of the values in those key/value pairs. Along with these dictionaries, OpenAPI provides several keywords with which you can combine schemas when validating the value of a parameter against multiple criteria. Here we have oneOf, which validates the value against exactly one of the subschemas; allOf, which validates the value against all of the subschemas; and anyOf, which validates the value against any one or more of the subschemas. The enumeration and dictionary pieces are sketched below.
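Two small sketches inside a schema's properties: an enumeration restricting a property to fixed values, and a dictionary field (the attributes field is a hypothetical example):

```yaml
      status:
        type: string
        enum:                      # only these values are allowed
          - placed
          - approved
          - delivered
      attributes:
        type: object               # a dictionary (map) field...
        additionalProperties:
          type: string             # ...whose values are all strings
```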
Talking more concretely about the pet store template of API documentation that we have here, you can see that the schemas, so the objects that we can make CRUD operations on with our API, would be: Order, the order of your pet, which has an id that's an integer, a petId, a quantity, a shipDate, which is a date-time, and a status that is, as I've told you about this type before, an enumeration that can be placed, approved, or delivered; and here is the syntax of that. We can also have a Category. We can also have a User that places the order. We can also have a Tag. We can also have a Pet, which has a Category schema inside of it, a name, and photoUrls, which can give the user a photograph of that pet in case he wants to access that field, plus some tags that use the Tag schema. And lastly, we have the ApiResponse documented here.
As I said, the YAML syntax here is pretty straightforward. We have the components root-level feature, then the schemas inside it. You simply write the name of your schema, for example Order, then its type, and then the properties, which are, as I've said here (if you want to have a side-by-side look and maybe map them in your mind): the id property, an integer with a format of int64; the same for petId and quantity, only the quantity has a format of int32. Then we have the shipDate, which has the format of a date-time and is of type string; the user of the API is told it should be a date-time, but it is actually carried as a string. We also have the status, of type string: it has a description, as you can see, 'Order Status', which is written here and explains what this field is used for, and we have the implementation of the data type in it, which in this case is the enum that can take either the placed, approved, or delivered values. Lastly, we have a boolean that represents whether the order is completed or not by the store. We can also, as you see here, specify the default value for a field, in this case false. This is pretty similar to the other schemas that we have here. Put together, the Order schema looks roughly like the sketch below.
last component of our Amo script that we are
going to need to know in order for us to write
a Yam script that translates to an
API specification using the Swegerhop
platform too. Thank you guys for sticking with me to the end of this tutorial. I really look forward to seeing
you guys in the next one.
27. REST APIs: Hello guys, and welcome back to this tutorial. In this lecture, we are going to look at REST APIs: what they are, how they work, and why they are so useful. We are also going to understand the difference between a simple API and a REST API. So, getting started: when talking about REST APIs, the REST word comes from Representational State Transfer, which is a software architectural style that uses a subset of HTTP. It is commonly used to create interactive applications that use web services. A web service that follows these guidelines is called RESTful. Such a web service must provide its web resources in a textual representation and allow them to be read and modified with a stateless protocol and a predefined set of operations. This approach allows interoperability between the computer systems on the Internet that provide these services. In this tutorial, we are going to work with REST APIs, more precisely by documenting them in detail, as this is a very good practice in order for our API to be clear and very well structured. Thank you guys for sticking with me to the end of this lecture, and I really look forward to seeing you guys in the next one, where we are going to discuss the HTTP request methods and return codes that are available when dealing with REST APIs.
28. SOAP APIs: Guys, welcome back to this tutorial. In this lecture, we are going to discuss SOAP APIs, as we already saw a few things about REST APIs. We are also going to do a comparison between these types of APIs in a later lecture. Before that, let's see some specific things about SOAP APIs: what they are and when they are used, so you know which API you can choose when trying to implement one of your own. So, first of all, SOAP represents a tool that makes systems using different operating systems communicate via HTTP requests, using exclusively XML packages of data. SOAP-based APIs are made to create, recover, update, and delete records, basically to perform CRUD operations on them, just like REST APIs are. A few examples of data models that can be manipulated might be users or other general objects of the SOAP API requests. They can be used with all those languages that support web services, like JavaScript or Python or C#, to name a few. The main advantage of these APIs is the flexibility that developers have when it comes to the programming language they can write them in. This comes from them being built on web-based protocols like HTTP, which are already compatible with all languages and operating systems. Also, the S in the name of SOAP APIs stands for Simple: the SOAP protocol only provides a simple base for complex implementations. This base is made up of a few parts. The Envelope is the first part, which is the tag that specifies whether an XML document is a SOAP message or not. As I told you before, the way SOAP APIs communicate and package their information is through XML, an Extensible Markup Language format that has its own tags and is used to transfer data; we will have a whole lecture on that type of file later on. Now, we also have the Header part of a SOAP message, and this one is actually optional. Next, we have the Body, which is the most important part and is basically most of the SOAP message, so it contains the bigger part of the information that is sent. And lastly, we have the Fault tag, which is the last component of SOAP APIs; this Fault tag is placed inside the Body, and what it does is specify error messages that might appear within that request or response. Put together, a skeletal SOAP message is sketched below.
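A rough skeleton of those parts (the namespace is the standard SOAP 1.2 one; the Fault element would only appear when an error is reported):

```xml
<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <!-- optional: metadata about the message -->
  </soap:Header>
  <soap:Body>
    <!-- the bulk of the information being sent -->
    <soap:Fault>
      <!-- optional: error details, always placed inside the Body -->
    </soap:Fault>
  </soap:Body>
</soap:Envelope>
```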
Now, talking about some more advantages when it comes to the SOAP protocol: the property of neutrality means that it works over a wide variety of protocols, from HTTP to SMTP. And another property that constitutes an advantage when talking about SOAP APIs is their extensibility. So, thank you guys for sticking with me to the end of this tutorial; this was about the presentation of SOAP APIs. I look forward to seeing you guys in the next one, where we will take a look at a comparison between SOAP APIs and REST APIs, so we can furthermore understand which one is better to choose depending on our current situation and our implementation needs.
29. SOAP vs. REST: Hello guys, and welcome back to this tutorial. In this lecture, we are going to look at a comparison between REST APIs and SOAP APIs, to understand which one would be better for us if we want to start developing our own API. The main difference between them would be that SOAP is a protocol, and REST is an architectural style. Furthermore, REST APIs use multiple standards, like HTTP, JSON, URL, CSV, and XML; even though JSON would be preferred, you can use what you are most comfortable with. SOAP APIs, on the other hand, are largely based on HTTP and on XML only for data transfer. From this difference between them, there obviously comes a disadvantage when dealing with SOAP APIs, from the fact that the requests are sent as XML, and that way they are much larger than a JSON response would be for the same data. In this way, SOAP will end up using much more bandwidth than REST; REST is much faster and more efficient in this regard. Now, in general, we should use REST when we do not need a state to be maintained between API calls. To give a counterexample here, you can think about a shopping site that needs to retain the information of your purchases when entering the page where you pay for them. This would be a case where a SOAP API would be better than a REST API, because the state of your cart would need to be maintained through to the payment page. Now, SOAP has become standardized and also has built-in error handling. REST APIs are much easier to use, lightweight, and flexible, and all of those reasons give them a smaller learning curve for new developers that are just getting into API programming. REST APIs also have easy-to-understand standards like Swagger and the OpenAPI Specification. These standards let you thoroughly document your API, this way making it much easier to be adopted by new programmers. In conclusion, if you are thinking about whether to choose SOAP or REST for your project, I would say that it ultimately boils down to the web service that you want to implement; based on your own needs, you have to decide which one to go with, given these points about the differences between SOAP and REST APIs. Thank you guys for sticking with me through this tutorial, and I look forward to seeing you guys in the next one.
30. JSON: Guys, and welcome back to this tutorial. In this lecture, we are going to discuss JSON data files. Starting with the name: JSON comes from JavaScript Object Notation. It is a language-independent, human-readable format used for its simplicity. It is also most commonly used in web-based applications to exchange data between parties. Now, the JSON extension is, obviously, .json. JSON is a user-friendly substitute for XML, as it's more lightweight and easier to read. Also, being more lightweight and fast, the data exchanged from servers to clients can end up much smaller in size, and that's why it is the preferred way to communicate data on the World Wide Web from that perspective. The big advantages of the JSON language revolve around the fact that it is easy for humans to read and write, and it is also easy for machines to parse and generate. Now, talking about the structure of a JSON file, two parts make up its entire structure. The first part is a collection of name/value pairs; in various languages, this is realized as an object, record, struct, dictionary, or hash table. It can also be a keyed list or an associative array. The second part that a JSON file can have is an ordered list of values; you can associate that, in most languages, with an array, vector, list, or sequence. A small, invented example that combines both parts is sketched below.
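For example, this made-up document uses both building blocks: an object of name/value pairs, and an ordered list of values:

```json
{
  "name": "Rex",
  "status": "available",
  "photoUrls": [
    "https://example.com/rex-1.jpg",
    "https://example.com/rex-2.jpg"
  ]
}
```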
Note that when exchanging data between the browser and the server, the data can only be text, and JSON is text. This way, we can easily convert any JavaScript object into JSON and send that JSON from the client to the server. We can then work with the data as JavaScript objects, with no complicated parsing and translations. Also, if you receive data in JSON format as a client from the server, you can easily convert it into a JavaScript object, so it works great the other way around too. So, this was about how JSON data files are used in APIs to serve clients with information. Thank you guys for sticking with me to the end of this tutorial, and I look forward to seeing you in the next one.
31. XML: Hello guys, and welcome back to this tutorial. In this lecture, we are going to discuss XML files, which are a popular choice when talking about transmitting data on the Internet from servers to clients, and vice versa. So, starting with the name: XML comes from Extensible Markup Language, and it is a language of its own that defines a set of rules for encoding documents in a format that is easy to understand by both humans and machines. The design of XML puts a lot of accent on simplicity, generality, and usability across the Internet. It is a textual format with strong support via Unicode. Although the design of XML focuses on documents, the language is widely used for the representation of arbitrary data structures, such as those used in web services. All these characteristics made it one of the most used languages for exchanging data over requests on the Internet. You may be asking what exactly a markup language is. Well, markup is information added to a document that enhances its body of meaning in certain ways, in that it identifies the parts and how they relate to each other. More specifically, a markup language is a set of symbols that can be placed in the text of a document to demarcate and label the parts of that document. A small, invented snippet is sketched below.
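For example, in this made-up snippet, every tag is one we chose ourselves to label the parts of the document:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<pet>
  <name>Rex</name>
  <status>available</status>
</pet>
```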
XML was designed to be self-descriptive. It also resembles HTML very much, if you look at an XML response. The difference between the two is that the XML language has no predefined tags, as HTML does; as you know if you are familiar with that markup language, HTML has tags for paragraphs, headers that are an h and then a number for how big they are, and so on. But still, XML does not actually do anything at all. It is just an intuitive, easy, and understandable way to place information in wrapping tags that are suggestively named. Someone must write a piece of software to send, receive, store, or display that data; that is done with actual code, written inside the endpoints of an eventual API to which you can make requests to retrieve the data. Now, there are two key characteristics of the XML language that differentiate it and make it useful in a variety of situations. It is extensible, as the name suggests, meaning that it allows you to create your own tags inside of it; and those tags are, of course, as representative of the data they carry as you make them to be. If you have a well-written XML document, its user will understand just fine the information that he got from the XML you sent. It also carries the data independently of the way it will be shown by whoever retrieves it. It is, additionally, a standard that became public and open. In conclusion, the reason why XML is so big and popular nowadays is the way that it makes the abundance of systems, with their different programming languages and operating systems, uniform through a single, simple, and easy-to-understand markup language that is very flexible in terms of the tags containing the information it provides. That was about it with the XML files. Thank you guys for sticking with me to the end of this tutorial. I look forward to seeing you guys in the next one.
32. XML vs. JSON: Hello, and welcome back to this tutorial. In this lecture, we are going to discuss the differences, and also the things they have in common, between the XML and JSON languages. We talked in the previous lectures about each one of them, each of their specific applications, and the contexts they can be used in. But now, by doing a comparison between them, maybe you can more easily decide which one is best for you in your own API representation. As we've seen, both JSON and XML can be used in order for you to receive data from a web server, or in order for a client to send data to a web server. They are basically just ways of packaging the data that is communicated between web parties. When it comes to things they have in common: they are both very structured in a hierarchical way, meaning, of course, that they have those tags, in the case of XML, and curly braces, in the case of JSON, that represent values within values; so they are both very structured. As a quick, invented illustration, here is the same record in both formats.
observe on both of them is that they can be fetched
with an HTTP request, which also makes
them useful when talking about APIs and
communication over the Internet. They also of course, can be parsed and used by lots
of programming languages. But here comes the
mean difference in-between these two
markup languages. That is obviously being dead
to eczema has to be parsed, written actual XML parser. While JSON can be parsed by a standard
JavaScript function, do I need this parsed by
standard JavaScript function? It's parsed into a ready
to use JavaScript object. And as you can
conclude from that, that's a major advantage for
JSON is it is much easier to transcribe into actual code when you are working with it and
you are receiving gate. Eczema is much more difficult to parse the JSON in that regard. Now, another key differences between these two might be that the JSON objects
that are parsed with the JavaScript always
have some kind of types, either if it's a string, a number array of OR Boolean, whether the XML data is typeless and this
should be string. Also XML supports
comments while G standard does not support any
kinds of comments on you. If you maybe read
some JSON files, you thought that there were
absolutely no comments there. While JSON is supported by
most browsers nowadays, XML cross-browser parsing can be tricky and you might have some problems when you deal
with the special browsers. I personally never
ran into that, but apparently it is possible. And the last difference that
we have here would be that the JSON is less
secure than an XML. And that is probably the strongest point
that you would look at, is trying to choose
XML over adjacent. But again, g sine, it's much easier to be parsed. And it is also the go-to way to transmit data on the web
when coming, especially to. Whereas state is why XML is
also used with soap APIs. Personality I prefer JSON more, but depending on what
you're trying to build, what type of API human to build. I think it's best for
you to choose for yourself and for your needs in-between these two languages. So thank you guys
for sticking with me to the end of this lecture. And I really look forward to seeing you guys in the next one.
33. How can we call an API: Hello guys, and welcome back to this API course. In this section, we are going to deal with the more practical aspect of APIs: that is, how you can actually make requests to specific APIs and get information out of them as responses. As stated in previous lectures, when dealing with APIs, it's like dealing with a website. For example, when you enter Facebook, you basically click a link, and when you click that link, the website opens up. What actually happens when you click it is that you make a request to the Facebook servers; the website that you get in front of your eyes is the response you get back. Now, it is the same thing when you call an API endpoint. An API's endpoint is just like a link: when you call it, meaning you make a request to it, it is like you clicked a website. The response you get is either in the form of a web page, when you are entering Facebook, or in the form of an API response, when you make a request to an API's endpoint. But it's pretty much the same thing. With that being said, there are two ways that we will discuss in this course with which you can actually make requests to different APIs' endpoints. The first one is through curl, and the second one is through Postman. These are two programs that allow you, obviously, to make these requests, just as you would from a browser to a URL. Now, curl comes installed by default if you are using macOS or Windows 10. If you are using either Ubuntu or a Windows version previous to 10, but you have Git for Windows or Git for Ubuntu installed, then curl already comes along with Git, and you can call it, as you will see me do in a future lecture, when trying to make a request to a website. If, again, you do not even have Git on your machine, and you're using a Windows version previous to 10, or Ubuntu, you can install curl with a package manager, and that is the simplest way to do it. For example, you can download Chocolatey and then run choco install curl. Or, again, other package managers are fine as well; a rough sketch of such commands follows.
from the Curl home page. You can go to Curl that SE, then there is a download
button and you can download the archived curl file and
then go ahead and install it. In the next few lectures, we are going to take
a public API and make some requests to it with
both Carl and Postman. Just to see exactly how we, how we can actually use not only the HTTP verbs and accessing API's endpoints
and see them in practice, but also all the
notions that we have talked about theoretically
up to this point. If all of that sounds
interesting to you, I really hope I will see
you in the next lectures. I thank you very much
for sticking with me up to the end of this one.
34. Calls with Curl: Hello guys, and welcome back to the API course. In this lecture, we are going to make our first API request using curl. I have here on the screen the website of NASA, and NASA actually provides some API endpoints for different developers. You may ask yourself why you would want to access the data through an endpoint of an API, instead of just entering a link into your URL bar and seeing that information for yourself, with images, in a form that is much nicer than it would appear as the response of an API, as you will see in a moment. The answer to that is: because when you are using an API, you usually use it in your code, to get information from other parties that provide that information to you, when you cannot calculate it or do not wish to do that. You retrieve that information in your code by making these requests, which we are going to make with only curl and Postman, to show you guys how they are done. But, as I've said, they would normally be done from your code; you would maybe call the NASA API to show information about the first Moon landing, and you would retrieve that information from the API and show it on your own website. That's what APIs are good for. Now, getting back to our tutorial: I have here api.nasa.gov, and if you enter this URL into your web browser, you will be redirected to this page, where, first of all, you can see that you need an API key. If you scroll a bit, you see the different APIs and the different endpoints they provide: for example, the space weather database of notifications, knowledge, and information. Here, for example, an endpoint is the coronal mass ejection one. Here you can see that, as an argument, you need the API key, which is provided in this section after you enter your first and last name, and also your email. If you are actually working on an application or a website that will be using this URL, you might as well enter it, but it is optional, because some people may only do this to understand APIs or just retrieve some information from them. After I showed you that, we clearly see some steps in making an API call. First of all, we need to provide here a first name and a last name, and then an email; I'm going to do that very quickly. Then, when we click Sign Up, you can see that the API key provided to us is this one; we might as well copy it. Now, you see that it also follows up with some instructions and tells us that we can start to use this key to make different requests to their web service: we need to simply pass this key in the URL when making a web request. And they even give a simple example request here. Now, if we copy this and then add curl in front of it, we can make a request to the endpoint of the NASA API. As you can see, the path is the Astronomy Picture of the Day (APOD), for example; this is one of them. You can see that it can also take different parameters, but it needs the API key. We actually copied this endpoint from before; it was provided by them when we entered our credentials.
So now, getting into how we can actually make the request: we can switch to the command line, which I opened in administrator mode; but if you are on macOS or Ubuntu, you can open your terminal. After you make sure curl is installed, you can go ahead and type curl, and then write the API endpoint, followed by your key. Roughly, the command is sketched below.
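Something along these lines, where DEMO_KEY is a placeholder standing in for the key you were emailed:

```sh
curl "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"
```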
When you press Enter, you can see that we get some information in our response here, and that is in the JSON format. You can see that it includes the copyright of the picture, which is this guy called Nick, I think, then the date of the picture, when it was taken, and then the explanation of the picture. We also get some other stuff, like the title and the URL, which we can copy from here, and I think it will take us to the actual picture. You can see that this is the astronomy picture of the day. Now, this is a pretty simple way in which you can access the endpoint of an API. As I said, just like you passed this query parameter of the API key, you can actually type an ampersand, then date, and then equals, and then write a specific date for which you want to retrieve the astronomy picture of that day, or an array of them with a start date and an end date; then, of course, the API key, as I've said, and then some other query parameters as well that you can sneak in after this endpoint, after the APOD part. A couple of sketches follow.
This was about it. As I've said, there are usually some extra steps when you are trying to access an API, those being some tokens that need to be exchanged between you and the server that you are trying to make the request to, just so that the server makes sure you are trying to access the information within your rights, just like the API keys here. The token process is a bit more complicated, and we will not get into it here, but I have another course that is all about the process that handles these tokens within APIs. This was about how you can make a simple request with curl on your personal machine. In the next lecture, we are going to also take a look at how you can make this request within a new program called Postman. And if you're wondering why you would want to do that, it's because that program will show you more details about your request and more options regarding it, like the HTTP verb you want to call your API with. All the information will be formatted with text boxes, and it will be much easier for you to read, and there are also many more options that we will see in the next lecture. As I've said, I really hope you guys got something useful out of this lecture. I hope you are a step closer to calling an API from within your own web application, or even just closer to understanding how exactly you can manipulate APIs, call them, and what they actually are. Thank you very much for sticking with me up to the end of this lecture. I really look forward to seeing you guys in the next one.
35. Calls with Postman: Hello guys, and welcome back to this API course. In this lecture, we are going to take a look at how exactly we can make requests to different public APIs using the Postman tool. First of all, you need to download Postman on your current local machine. To do that, you can hop into your web browser, enter the Postman name, and then just click on the download link of the first result. Then you will be redirected to this page, where we can download this executable; it can also be downloaded for macOS or Linux, so it does not matter what OS you are currently using, because it will work for you. Now, you can see that the executable was downloaded. We can go ahead and open it, and you can see that it goes straight to installing it, and then it opens up. Just like that, you have the tool on your local machine. You can create an account, but I'm just going to skip that and go straight to the app. Here you have a workspace already opened up, but to make things easier for you to understand, in this right part you can add a new tab where you create a request. As you can see, this is much more oriented toward creating requests to servers and getting their responses back than the command line from the last lecture was. You can now go ahead and enter the URL from the last tutorial here, as I will do in just a moment. What we are going to do here is enter this request, and we are going to use the HTTP GET method, as we want to retrieve resources from the specific endpoint. Under the hood, this amounts to an HTTP request roughly like the sketch below.
the bottom portion of the request of the page is our response to the
request we just sent. You can see here, first of all, the network and then the status with which our
response is coming. It is 200, which means okay, if you hover on it,
you can also see exactly what this
status comes from. If it is a non standard one and you might not
have seen it before, you can actually look at
an explanation of it here. Then we can see in how
much time the response came back and then the
different portions of it, if you're interested
in that stuff. And also the size of the
response that we got here, it is the body
which is of course, can be raw and this
is the one we saw on our current response earlier in the previous lecture in the CMD. We can also see a preview, we can visualize, but the visualizer for this request is not actually
set up right now. But by default it is on
the pretty tab which makes the whole text
formatted in this JS file. And as I've said, you
can see that it is also parsed and highlighted
on different portions, which is very useful for us to actually see different
details of our request. Here you can see that we have different query parameters
that he by default, when parsing the
endpoint that we are making the request
to is taking out of. You can see that he detected our APIke parameter and
also the value of it here. Of course, you can add
some description to it. We can also add different
parameters from here, instead of actually furthermore typing them into the request. And this can help managing
the length of the endpoint, because even though the length of the endpoint will
be bigger here, we can actually
make less mistakes. As we can see the key
and the value better. Now on the authorization part, we can leave it as it was. This is a public API in. The IP key we retrieved from the website is
directly in the link. But if we would use tokenized security system
for accessing this API, we would need to actually
use the two token or whatever protocol they are using when trying to make
an actual request. Then here are some headers, the body and different
stuff like that. Here on settings we have
different HTTP methods settings, but that is not as important. This is just a preview of what you can do with Postman and how you can create
request with this tool, you can also create another request furthermore
and also save this one. And if you create an account, all these requests
will be saved for you just in case
you need to go back on them and review
them or maybe actually copy them into your current application that
you are working on. This is about it on how you can make the request with
the postman service as well. Thank you very much
guys for sticking with me up to the
end of the tutorial. I really hope you got
something out of it. And if you have any
questions about the process of creating the
request for an API, you can go ahead and left
them here on this course. And I will do my best to respond to you in
a timely fashion. Also, if you are interested. I also have another course
on the O process that the majority of API's are
using nowadays and adds an extra layer of security when you are trying to make
a request to them. You can check that out as well. But for now, thank
you very much As I've said and I look
forward to seeing guys in the next
lectures and tutorials.