System Design - High Level Architecture Design (For Scalability, Reliability, Consistency) | Tanmay Varshney | Skillshare

System Design - High Level Architecture Design (For Scalability, Reliability, Consistency)

Tanmay Varshney, Software Developer, Tech Educator

Watch this class and thousands more

Get unlimited access to every class
Taught by industry leaders & working professionals
Topics include illustration, design, photography, and more

Lessons in This Class

13 Lessons (1h 39m)
    • 1. System Design

    • 2. Basics

    • 3. Load Balancers

    • 4. Caching

    • 5. Cache Eviction Policies

    • 6. Types of Cache

    • 7. Data Partitioning

    • 8. Data Redundancy

    • 9. SQL Vs NoSQL

    • 10. CAP Theorem

    • 11. Consistent hashing

    • 12. Message Queue

    • 13. CDN






About This Class

System Design is the process of designing the architecture, components, and interfaces for a system so that it meets the end-user requirements.

Designing large-scale systems is becoming more crucial than ever. Whether you are an entry-level software engineer or a technical manager at your workplace, you should be aware of these concepts. Understanding how to scale a system, how to make it more reliable and available, and how to keep it maintainable will definitely give you an edge over others.

This course aims to help you learn to design large-scale systems and prepare you for system design interviews. You will be introduced to the topics you should consider before starting work on your project, so that you can build a strong foundation for it.

Let's start learning.

Meet Your Teacher

Tanmay Varshney

Software Developer, Tech Educator


I am a Senior Software Engineer with vast experience working at top tech companies.
I have more than 6 years of industry and teaching experience in domains like:

1. Designing scalable architecture for complex and distributed systems.

2. Developing components in a system across the full stack.

3. Solving complex data structures and algorithms related problems.

These are the major skills needed to be a good software developer who can excel in any tech company easily. I am really passionate about sharing my knowledge and expertise with you.

Thus, I am on board to create awesome technical courses on Skillshare, based on my expertise, that can be understood in the simplest manner.

Come, join me in this learning adventure!




1. System Design: Welcome to the system design basics. Do you know that the amount of video watched on Netflix per week is roughly around 1 billion hours? The number of tweets posted per day is around 500 million; that's about 6,000 tweets posted every second, and we're not even talking about the retweets and the tweets liked. Aren't these numbers fascinating? Does it not make you wonder how it is even possible for companies with a bunch of engineers to handle such huge volumes of traffic, while the casual user remains unaware of the complexity? As a system designer, you'll need to be aware of the high-level architecture of the applications on which these products are built. They all have a very solid foundation and core. It is very important for these companies to be functional all the time; in today's world, it's hard to imagine even a single minute per day without these services. What makes these tools available to us 24/7? The answer is the way these systems are designed, and that is becoming more of a skill. Understanding how to keep your systems functional all the time is now considered a primary skill to have, whether you are preparing for an interview or trying to build a system for your organization or for your own product. Learning how to design scalable systems will help you become a better engineer. The aim of this course is to help you learn to design large-scale systems and prepare you for system design interviews. You will be introduced to the topics which you should consider while designing your systems. So let's get started.

2. Basics: Hey guys. In this video, we are going to talk about system design basics. First of all, let's understand what system design is. System design is the process of designing the architecture, components, and interfaces for a system so that it meets the end-user requirements.
In software engineering, system design is one domain everyone should be somewhat familiar with, no matter what your role is. While designing systems, there are three primary concerns that should be addressed: reliability, scalability, and maintainability. So now let's see what all of these mean in detail. Roughly, reliability means continuing to work correctly even when things go wrong. In the system design domain, reliability means the ability of a system to work around problems in order to prevent failures or complete shutdowns. Large systems are built using fault-intolerant components; the beauty and the art of system design is to build a fault-tolerant system out of fault-intolerant components. Now, faults can be categorized as hardware faults or software faults. Hardware faults happen a lot in large data centers: in a large data center, hard disks go bust every day, and memory gets corrupted on a regular basis. Hardware faults can be addressed by adding redundancy; that is, the data centers can have multiple redundant backups to avoid single points of failure. Software faults can happen due to a variety of reasons. A runaway process can hog system resources and cause a systemic crash across all the nodes, or the operating assumptions of applications can change and result in crashes. These can be handled by understanding business requirements and building resiliency to handle deviations from them, by monitoring and publishing warnings, by rigorous unit testing, and finally by designing better abstractions and interfaces to easily isolate the problems. Scalability is the system's ability to deliver reasonable performance in the face of increased load. For example, for a social networking site, the load could be the expected number of writes (posts) per second, or reads (that is, timeline views) per second.
Performance can be thought of as the system's operating characteristics when the system's load parameters are changed. For example, you might measure performance in terms of the system's average response time. Of course, there are many ways of measuring the performance of a system, which are currently outside the scope of our discussion. Maintainability means writing code that can be easily understood, refactored, and upgraded by someone who is not the original author of the code. Any piece of spaghetti, confusing code will ultimately be understood by machines, but good code should be readable and easily understandable so that people can collaborate on it. Good code should always have the right level of abstraction, with clean APIs and interfaces, so that new functionality can be easily built on top of the existing code base. Next, we will see the different components that need to be combined together in order to build scalable systems.

3. Load Balancers: Hey guys, welcome to the first lecture in the system design basics series. Today, we are going to talk about load balancers. A load balancer is a very important component of any distributed system. Load balancers distribute incoming client requests across computing resources, such as a cluster of application servers or databases. In each case, the load balancer returns the response from the computing resource, the servers or the databases, to the appropriate client. The basic purpose of a load balancer is to improve the responsiveness and the availability of an application, website, or database. A load balancer also keeps track of the status of all the resources while distributing requests. Say a server is not available to take on new requests, is not responding, or has an elevated error rate; the load balancer will stop sending traffic to such a server. This is achieved via health checks.
The load balancer regularly attempts to connect to the backend servers to ensure that the servers are listening. If a server fails a health check, it is automatically removed from the pool, and traffic will not be forwarded to it until it responds to the health checks again. Now let's have a look at the basic architecture of a load balancer. Typically, a load balancer sits between the client and the servers, accepting incoming network and application traffic and distributing it across multiple backend servers using various algorithms, such as round robin. Here is the basic setup in which you could use a load balancer: we have a client that sends requests through the internet to our load balancer, and it is the responsibility of the load balancer to distribute the traffic between our various web servers. By balancing application requests across multiple servers, a load balancer reduces individual server load and prevents any one application server from becoming a single point of failure, thus improving overall application availability and responsiveness. So, to state it precisely, load balancers are effective at preventing requests from going to unhealthy servers, avoiding overloading of resources, and helping eliminate single points of failure. Now, for scalability and redundancy, we can try to balance the load at each layer of the system. We can add load balancers at three places: between the user and the web server, between the web servers and an internal platform layer (like application servers or cache servers), and between any internal platform layer and the databases. Let's see this through a diagram. Here is our first load balancer, sitting between the client and our web servers. Here is a second load balancer, sitting between our web servers and application servers. And this is a third load balancer, which is being utilized between the app servers and the databases.
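The round-robin distribution and health-check behavior just described can be sketched in a few lines. This is a minimal illustration, not a production load balancer; the server names and the `mark_down`/`mark_up` hooks are hypothetical stand-ins for a real health-check loop:

```python
import itertools

class RoundRobinBalancer:
    """Hands requests to backend servers in round-robin order,
    skipping servers that have failed their health checks."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)      # assume all servers start healthy
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        """Called when a server fails a health check."""
        self.healthy.discard(server)

    def mark_up(self, server):
        """Called when a server passes its health checks again."""
        self.healthy.add(server)

    def next_server(self):
        """Return the next healthy server in rotation."""
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["web1", "web2", "web3"])
print(lb.next_server())  # web1
print(lb.next_server())  # web2
lb.mark_down("web3")
print(lb.next_server())  # web1 (web3 is skipped until it recovers)
```

A real load balancer would also run the health checks itself and forward the actual network traffic; here we only model the server-selection decision.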
So the first load balancer distributes the incoming client requests between web server one and web server two. The second load balancer further distributes the traffic to application server one or application server two. And the third load balancer distributes the incoming traffic from the application servers to the databases. This is basically distributing the load at each layer. Now let's go over the advantages of using a load balancer. First, users experience faster, uninterrupted service. Users won't have to wait for a single struggling server to finish its previous tasks; instead, their requests are immediately passed on to a more readily available resource. Second, service providers experience less downtime and higher throughput. Even a full server failure won't affect the end-user experience, as the load balancer will simply route around it to a healthy server. Third, load balancing makes it easier for system administrators to handle incoming requests while decreasing wait time for users. And next, system administrators experience fewer failed or stressed components: instead of a single device performing a lot of work, load balancing has several devices each performing a little bit of work. Now let's move on to some disadvantages of using a load balancer. First, the load balancer can become a performance bottleneck if it doesn't have enough resources or if it's not configured properly. Second, introducing a load balancer to help eliminate single points of failure results in increased complexity. Third, a single load balancer is itself a single point of failure, and configuring multiple load balancers further increases the complexity. So, keeping in mind the advantages and the disadvantages that a load balancer offers, we need to include it wisely, as per our needs, in the system which we are designing.

4. Caching: Have you ever been on a slow internet connection, visiting a website that displays a high-quality image?
In your subsequent visits to the same site, you notice that the page renders the same image instantly. When you visit a brand-new website, it takes more time to load than a frequently visited one in the same browser. Now let us see another case. Notice that while watching a YouTube video, it keeps buffering ahead simultaneously. When your internet connection slows down, the playback is not interrupted; the video continues to play until the buffered amount is reached. In both these use cases, the internal mechanism at work is caching. So now let's discuss the cache. A cache works on the principle of locality of reference: the cache acts as a local store for the data to speed up lookups. The goal of a cache is to reduce read latency and amplify throughput. Now let's see a real-world analogy. Let's say you want to cook dinner tonight. You need different ingredients, vegetables, spices, et cetera, for the preparation. But would you visit the supermarket every day? You know that would be too cumbersome. So you first check your kitchen or your refrigerator for the required ingredients, and only if they are missing do you fetch them from the supermarket. Here, your refrigerator is acting as a cache, and the supermarket is your data source, or data store. The benefit of using the cache in this scenario is that it saves you the time of visiting the supermarket to procure your ingredients. So now let's see how applications work and how they can use the cache. Generally speaking, any backend application stores its data in a database. When a client tries to fetch any data from the application, the application queries the database, fetches the data from the database, and returns it back to the user or the client. The database server could be running with the application server on the same system, as a separate process, or on a different computer altogether. Now, fetching the data from a database is time-consuming, since it needs an I/O operation to get the data off the file system.
If the data is stored in a cache, the read operation will be blazing fast, because reading from memory is way faster than reading from the file system. The database stores the data in the file system, while the cache keeps the data in memory. So when a client requests some information from the application, the application server first tries to fetch the data from the cache. In case the data is found in the cache, it is returned to the application, and the application server can then return the data to the client. In case the data is not found in the cache, the query goes to the database. The database then returns the data to the application server, and the application server can store that data in the cache in order to avoid querying the database again for the same or similar requests. When clients request the same data repetitively, it makes more sense to fetch it from the cache than from the database. Let's look at an example of where to use a cache. Let's say a tweet goes viral and all the users try to fetch the data for that same tweet. Since Twitter has millions of users, using a cache saves millions of calls to the database, and the users are served the information at a much quicker rate. In this way, a cache reduces the load on the database: if the data is found in the cache, a database call is saved, reducing the pressure on the database.

5. Cache Eviction Policies: Hey guys. In this video, we're going to talk about cache eviction policies. Since a cache has a limited capacity, it may become full at some point, and the data in it may no longer be the data being accessed by the application. Hence, we need to come up with a strategy, or a policy, to remove data from the cache and replace it with data that has a higher probability of being accessed in the near future. There are multiple cache eviction policies: LRU (least recently used), LFU (least frequently used), and MRU (most recently used). These are the most commonly used eviction policies.
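Before looking at each policy, here is a sketch of an LRU cache, the first of the three. This is a minimal illustration using Python's `OrderedDict` to track access order; the photo keys and capacity are just illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """A fixed-capacity cache that evicts the least recently
    used entry when it is full (the LRU policy)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # least recently used entry sits first

    def get(self, key):
        if key not in self.entries:
            return None                    # cache miss
        self.entries.move_to_end(key)      # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[key] = value

cache = LRUCache(capacity=4)
for photo in ["P1", "P2", "P3", "P4"]:
    cache.put(photo, "<image bytes>")
cache.put("P5", "<image bytes>")   # cache is full, so P1 is evicted
print(cache.get("P1"))             # None (miss: P1 was evicted)
```

Both `get` and `put` here run in O(1) time, which is why LRU caches are usually built on a hash map plus an ordering structure.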
Now let's talk about these individually. Talking about LRU: this policy removes the entry from the cache which is the least recently used one. So as soon as the cache becomes full, or is about to become full, the least recently used entry is evicted from it, and the new entry is added to the cache. You can imagine Facebook storing celebrities' photos in a cache. The data access pattern of the followers is such that they're interested in the most recent photos. So when this celebrity photos cache becomes full, it will evict the photo that was least recently accessed. Let's say, for example, P1, P2, P3, and P4 are the photographs that were added to the cache, and t, t+1, t+2, and t+3 represent the times at which these photographs were last accessed through the cache. Now let's say we need to add P5 to the cache, and the cache only supports four photographs at a time. So we need to remove one photograph, and only then will we be able to add P5. In this case, P1 would be removed from the cache, because this photograph was least recently used, while P4 was the most recently used. So P1 will be removed from the cache, P5 will take its place, and the time of its access will be t+4. Next, LFU, that is, least frequently used. LFU keeps track of the frequency, or the number of times, a data item is accessed. In case the cache size crosses a given threshold, it evicts the entry with the lowest frequency. For example, when you type any word while texting on your smartphone, your phone starts suggesting multiple words that you can select instead of typing the whole word. Internally, your phone's software maintains a cache of all the words you have typed, along with their frequencies, and it evicts the word with the lowest frequency when needed. So let's say you type "Wassup" the most. Then, as soon as you start typing "W" and "a", your phone will start suggesting "Wassup" immediately.
This is because that word has the highest frequency in your cache. Now, in case of a tie between multiple words, the least recently used one is evicted from the cache. Next, MRU, or most recently used: in MRU, the most recently used entry is removed, and preference is given to the older entries in the cache. If the data access pattern is such that the user is less likely to access the most recent entry, then this strategy is used for eviction. An example of this type of cache is a dating app like Tinder, which generally caches all the potential matches of a user. When the user either left-swipes or right-swipes a profile, the app shouldn't recommend the same profile to the user again; if this happens, it will result in a poor user experience. So it's necessary to evict the entries which were acted on most recently: the application must remove the cache entry of the profile that was either left- or right-swiped.

6. Types of Cache: In this video, we're going to talk about different types of caches. While caching is fantastic, it does require some maintenance to keep the cache coherent with the source of truth, that is, the database. If the data is modified in the database, it should be invalidated in the cache; if not, this can cause inconsistent application behavior. Solving this problem is known as cache invalidation. Based on the cache invalidation strategy, there are three main types of caches: first, the write-through cache; second, the write-back cache; and third, the write-around cache. Now let's see what all of these mean in detail. Write-through cache: as the name implies, the data is first written to the cache and then to the database. When an application server needs to write some data, it first writes the data to the cache, and afterwards it writes it to the database. This ensures consistency between the data in the cache and the data in the database: every read done on the cache follows the most recent write.
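The write-through flow just described can be sketched as follows. This is a minimal in-memory illustration: the `database` dict is a hypothetical stand-in for a real datastore, not an actual DB client:

```python
class WriteThroughStore:
    """Sketch of a write-through cache: every write goes to the cache
    first and then to the database, so cached reads never see stale data."""

    def __init__(self):
        self.cache = {}      # in-memory cache
        self.database = {}   # stand-in for the real datastore

    def write(self, key, value):
        self.cache[key] = value      # 1. write to the cache
        self.database[key] = value   # 2. then persist to the database

    def read(self, key):
        if key in self.cache:        # cache hit: no database round-trip
            return self.cache[key]
        value = self.database.get(key)
        if value is not None:
            self.cache[key] = value  # populate the cache for next time
        return value

store = WriteThroughStore()
store.write("user:1", {"name": "ABC"})
print(store.read("user:1"))  # served from the cache
```

The write-back variant would differ only in step 2: the database write would be deferred to a background job that flushes modified cache entries later.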
However, the downside of this approach is that the application's write latency increases, because the data is first written to the cache and then persisted in the database. This approach is not suitable for a write-heavy system; it is useful for applications which re-read the data frequently once it's persisted in the database. Write latency takes a hit, but it's compensated by lower read latency and consistency. Next, we have the write-back cache. As we saw, the write-through cache is not suitable for write-heavy systems, as the latency can spike. An alternative approach is to write the data to the cache first and mark the entry as modified, so that it can be updated in the DB later. So an application server writes the data to the cache, and then an async job regularly reads all the modified entries in the cache and updates their corresponding values in the database. This approach impacts neither the read nor the write latency. The only disadvantage is that there will be a lag due to data syncing between the cache and the DB, and since the database is the source of truth, any application reading from the DB would read stale entries. Sites such as YouTube use a write-back cache to store the view count of a video. Updating the database for every single view of a viral video would be expensive; writing the data to the cache and then syncing it to the DB is a better solution here. Usage of a write-back cache ensures low read and write latencies. Next is the write-around cache. A few backend applications do not frequently read the most recent data; in this case, the write-around cache is used. In this policy, the database is updated without writing the data to the cache. The application server first writes the data to the DB, and then, for any read, the cache queries the database if the entries are not present in the cache. This approach doesn't load the cache with data that won't be re-read.
The downside of this approach is that if the application starts querying for the most recent data, it will result in multiple cache misses. So these are the three types of caches, each with some positives and some negatives. It entirely depends on the scenario which cache you should be considering while designing your systems.

7. Data Partitioning: Hey guys. In this video, we're going to talk about data partitioning, also known as data sharding. Data sharding is the process of breaking up large tables into multiple smaller tables, or chunks, known as shards, and distributing the data across multiple machines or a cluster. Each shard has the same schema and columns as the original table, but the data stored in each shard is unique and independent of the other shards. There are two ways of sharding data: first, vertical sharding or vertical partitioning, and second, horizontal partitioning. In vertical partitioning, the main table is broken down into multiple partitions by separating out columns. Here, as you can see, the main table has user-specific information: a user ID, a username, and the user's email. This information is split into two tables, the first containing the user ID and the username, and the second containing the user ID and the user's email. In case we need to retrieve a particular user's information, we can join both these tables based on the ID. Here, we have divided the table vertically; hence, it is known as vertical partitioning. In horizontal partitioning, we divide the main table according to the number of rows. That is, in shard one we keep one set of rows, and in shard two we keep another set of rows; the data in both the shards combined gives us the original data. Here, the data could be divided based on a few factors, which we will see shortly. Database sharding is pretty much similar to horizontal scaling, that is, adding more machines, or scaling out.
Sharding allows us to add more machines to the existing cluster in order to spread out the load, allow more traffic, and enable faster processing. Sharding also helps make the application distributed, thus minimizing single points of failure. Database sharding needs to be done in such a way that incoming data is inserted into the correct shard, there is no data loss, and the resulting queries are not slow. Considering these things, let's see what the techniques to shard a database are. First is hash-based sharding, also known as key-based sharding. We take a key-value pair, such as a customer ID or a client IP, or any column from the new rows of data, pass it to a hash function, and insert the data into the resulting shard. From our previous example, let's say we have the following data to be inserted: user ID, username, and user email. Say the first user ID is 1 and the username is ABC, along with some email address. Here, we decide to shard our data based on the user ID. So we pass the user ID to a hash function; for our example, suppose the hash function just takes the user ID modulo 3 and assigns the row to the resulting shard. The modulo by 3 results in 0, 1, or 2. Whenever the result is 0, the row is assigned to shard one; whenever the result is 1, it is assigned to shard two; and whenever the result is 2, it is assigned to shard three. So in this case, user ID 1 would be assigned to shard two, and this row would be inserted into shard two. In case we need to add another row with user ID 2 and some other attributes, the hash function would again take the modulo by 3, which results in 2, and assign the data to shard three. This is the simplest sharding algorithm and can be used to evenly distribute the data among the shards and prevent the risk of having a data hotspot. The database hotspot problem arises when one shard is accessed more as compared to the other shards.
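The modulo-based hash sharding walked through above can be sketched like this. It is a toy illustration: each shard is just an in-memory dict standing in for a separate database server:

```python
NUM_SHARDS = 3

def shard_for(user_id):
    """The example's hash function: user_id % 3 maps a row to
    shard 1, 2, or 3 (result 0 -> shard one, 1 -> two, 2 -> three)."""
    return user_id % NUM_SHARDS + 1

# One dict per shard, standing in for three separate database servers.
shards = {1: {}, 2: {}, 3: {}}

def insert(user_id, row):
    shards[shard_for(user_id)][user_id] = row

def lookup(user_id):
    return shards[shard_for(user_id)].get(user_id)

insert(1, {"username": "ABC"})
insert(2, {"username": "DEF"})
print(shard_for(1))  # 2 -> user 1 lands on shard two
print(shard_for(2))  # 3 -> user 2 lands on shard three
print(lookup(1))     # {'username': 'ABC'}
```

Note that changing `NUM_SHARDS` changes `shard_for` for almost every key, which is exactly the re-sharding problem that consistent hashing addresses.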
In that case, any benefits of sharding are cancelled out. The main issue with this approach is that it gets really challenging to dynamically add or remove a database server. Every time this happens, we need to re-shard the database, which means we need to update the hash function and rebalance the data; and if it happens frequently, this can cause data loss. So let's say we need to remove a server which hosts shard three. In this case, we will first have to modify our hash function, and then re-hash all the data that has been stored in shard one, shard two, and shard three, because as we change the hash function, the distribution of the data also changes. One solution to this problem is to use consistent hashing. Consistent hashing provides scalability even when we have lots of data spread among lots of servers, and the number of available servers changes continuously. We will learn about consistent hashing in upcoming lectures. Next, we have range-based sharding. In range-based sharding, the shard is chosen on the basis of the range of a shard key. The ranges are chosen in such a way that the shard key is likely to fall in one of the possible ranges. So let's say we have a recommender system that stores all the information about its users and recommends movies based on a user's age. We can create a few different shards and divide the user information based on which age range users fall into, something like this: if the user falls in the age range of 0 to 18, the data is stored in shard one; if he falls in the age range of 19 to 27, the associated shard would be shard two; and so on. Range-based sharding is also very easy to implement. We just need to check the range in which our current data falls, and insert or read the data from the corresponding shard. Also, every shard holds a different set of data, but the schema of all the shards is the same.
The major drawback of this technique is that if the data is unevenly distributed, it can lead to database hotspots. Then we have directory-based sharding. Here we have a lookup table that stores the shard key, to keep track of which shard stores which entries. To write or read the data, the client engine first consults the lookup table, finds the shard number for the corresponding data using the shard key, and then visits that particular shard to perform the operations. An example of directory-based sharding would be storing the data based on a user's geolocation: if the user is located in the US, his information would be stored in shard one; if the user is located in the UK, his information would be stored in shard two; and if he is located in India, his information would be found in shard three. This sharding is pretty much similar to range-based sharding, except that instead of determining which range the data falls into, each key is tied to its own specific shard. Unlike hash-based sharding, which uses a fixed hash function, and range-based sharding, which requires us to specify ranges in advance, directory-based sharding allows us to use whatever system or algorithm we want to assign data entries to shards. Also, it's relatively easy to dynamically add shards using this approach. The main issue with directory-based sharding is that we need to consult the lookup table before every read and write query; hence, it can impact application performance. Also, the lookup table is prone to being a single point of failure. One solution to this problem is to replicate the lookup table behind load balancers, but then frequently updating the copy of the lookup table on each server would be an overhead. Now, let's talk about some of the benefits of sharding. Database sharding helps us facilitate horizontal scaling: we can add more machines to the existing cluster and distribute the load to scale up applications. It also gives faster query response times.
Without database sharding, the database needs to compare a query against each and every row, which can be a huge setback. But with sharding, instead of traversing all the rows, we need to traverse only the few rows present in the particular shard. Sharding makes maintenance easier, because each shard contains only a chunk of the data. Database sharding eliminates the problem of a single point of failure and makes our application more fault-tolerant. With sharding, we also have reduced costs: trying to add more RAM and storage to an existing machine in order to vertically scale it up is an expensive process, while having several nodes with less computational power each is cheaper. Obviously, there are some drawbacks of sharding as well. Database sharding becomes complex when it comes to practical implementations. Also, if done incorrectly, it can lead to data loss and corrupt tables. Another major issue with sharding is that the shards might become unbalanced: in case of the database hotspot problem, a major chunk of data might fall into a particular set of shards only, and the remaining shards might remain almost empty. And once sharding is done, it is very difficult to return to the original unsharded version of the database. So this was your brief introduction to database sharding.

8. Data Redundancy: So the topic of discussion for this video is data replication and redundancy. Replication means duplication of critical data or services with the intention of increasing the reliability of the system. For example, if there is only one copy of a file stored on a single server, then losing that server means losing the file. Since losing data is never a good thing, we can create duplicate or redundant copies to solve the problem. The same principle applies to services too: if we have a critical service in our system, ensuring that multiple copies or versions of it are running simultaneously can secure us against the failure of any particular node.
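The failover idea can be sketched as a simple client-side fallback loop. The service functions below are hypothetical stand-ins for real instances; in practice, this switch is usually automated by health checks and load balancers rather than hand-rolled:

```python
def call_with_failover(request, services):
    """Try each redundant copy of a service in order; fail over to the
    next copy if the current one raises a connection error."""
    last_error = None
    for service in services:
        try:
            return service(request)
        except ConnectionError as err:
            last_error = err  # this copy is down: fail over to the next
    raise RuntimeError("all redundant copies failed") from last_error

def primary(request):
    # Stand-in for a failed primary instance.
    raise ConnectionError("primary instance is down")

def secondary(request):
    # Stand-in for a healthy secondary instance.
    return f"handled {request} on secondary"

print(call_with_failover("GET /profile", [primary, secondary]))
# handled GET /profile on secondary
```

The same shape applies to databases: the list would hold connections to the primary and its mirrored replica.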
Creating redundancy in the system can remove single points of failure and provide a backup if ever needed in a crisis situation. For example, let's say we have two instances of a service running in production, and suppose our primary service fails or degrades. Then the system can fail over to the secondary service. In such scenarios, these failovers can happen automatically or can be triggered manually. We can also fail over to a standby database in case our primary database fails. Now, another important part of service redundancy is to create a shared-nothing architecture. In a shared-nothing architecture, each node can operate independently of the others. This means that there should not be any central service managing state or orchestrating activities for the other nodes. This helps a lot with scalability, since new servers can be added without special conditions or knowledge. And most importantly, such systems are more resilient to failure, as there is no single point of failure: we always have a secondary server or a secondary database in case we need to trigger a failover. Now let's look at the advantages of data replication. Data replication is generally performed to achieve higher availability, reduced latency, scalability, and resilience to network interruptions. Now, let's discuss each of these in brief. Higher availability means ensuring the availability of a distributed system: the system keeps on working even in case one or a few nodes fail. Reduced latency: replication assists in reducing the latency of data queries by keeping data geographically closer to the user. For example, a CDN keeps a copy of replicated data closer to the user. Have you ever thought of how Netflix streams videos at such short latencies? Well, data replication is one of the reasons for that. Then scalability: read queries can be served from replicated copies of the same data.
This increases the overall throughput of the queries. And network interruptions: the system keeps working even under network faults. It's also important to understand the disadvantages of data replication. First and foremost, more storage space is needed, as storing replicas of the same data at different sites consumes more space. Secondly, data replication becomes expensive when every replica at a different site needs to be updated. And third, maintaining data consistency across different sites involves complex measures. Now let's look at a technique for data replication. This technique is called master-slave replication, and it's one of the most common practices in data replication. In the master-slave replication technique, the leader — or you could say the master, or the primary node — replicates data to all of its followers, which could be termed slaves, read replicas, or, in some cases, secondary nodes. This is the most commonly used mode of replication. Whenever a new write comes to the master, it keeps that write in its local storage and sends the same data to all of its replicas as a change stream or a replication log. Each replica then updates its own local copy of the data in the same order as it was processed by the leader node. Many relational databases like MySQL and PostgreSQL, and NoSQL databases like MongoDB, RethinkDB, and Espresso, use this mode of replication. Message brokers like Kafka and queues like RabbitMQ also employ single-leader-based replication. Data is copied from a leader to its replicas either asynchronously or synchronously. Each method of replication has its own set of pros and cons, which are currently beyond the scope of this discussion. So I hope you got some clarity on data replication and redundancy, which is a principle that needs to be followed while designing systems. 9. SQL Vs NoSQL: In the world of databases, there are two main types of solutions: relational databases and non-relational databases.
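The leader-follower flow just described can be sketched in a few lines of Python. This is a minimal, synchronous illustration with hypothetical class names (`Leader`, `Follower`), not how any particular database implements it: the leader writes locally, appends to a replication log, and ships each write to every follower in order.

```python
# A toy sketch of single-leader (master-slave) replication.
class Follower:
    def __init__(self):
        self.data = {}

    def apply(self, key, value):
        self.data[key] = value       # apply writes in the leader's order

class Leader:
    def __init__(self, followers):
        self.data = {}
        self.log = []                # replication log: ordered write stream
        self.followers = followers

    def write(self, key, value):
        self.data[key] = value       # 1. write to local storage
        self.log.append((key, value))
        for f in self.followers:     # 2. replicate (synchronously here)
            f.apply(key, value)

followers = [Follower(), Follower()]
leader = Leader(followers)
leader.write("user:1", "Tanmay")
print(followers[0].data)             # every replica holds the same copy
```

A real system would ship the log asynchronously over the network; doing it synchronously, as here, trades write latency for the guarantee that followers are never behind.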
We are more familiar with them as SQL and NoSQL. Both of them differ in the way they were built, the kind of information they store, and the way they store it. SQL, or relational, databases store data in rows and columns. Each row contains all the information about one entity. You could imagine it in the form of a table having multiple rows and multiple columns, where each row contains all the information about one entity and the columns are the separate data points. Some of the most popular relational databases include MySQL, Oracle, MS SQL Server, SQLite, PostgreSQL, and MariaDB. Talking about non-relational, or NoSQL, databases, the following are the most common types. First, key-value stores. In key-value stores, the data is stored in an array of key-value pairs. The key is an attribute name which is linked to a value. Well-known key-value stores include Voldemort and DynamoDB. Next come the document databases. In these databases, data is stored in documents instead of the rows and columns of a table, and these documents are grouped together in the form of collections. So the data is stored in documents, and a group of documents is called a collection. Each document can have an entirely different structure. Examples of document databases include CouchDB and MongoDB. Now let's come to the third type of database, called wide-column databases. Instead of tables, in columnar databases we have column families, which are containers for rows. And unlike relational databases, we do not need to know all the columns up front, and each row doesn't have to have the same number of columns. You could imagine it as something like this. Columnar databases are best suited for analyzing large datasets; examples include Cassandra and HBase. Then we have graph databases. These databases are used to store data whose relations are best represented in the form of a graph. Data is saved in a graph structure with nodes (the entities), properties
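To make the key-value model concrete, here is a toy in-memory store. It only illustrates the data model used conceptually by stores like Voldemort or DynamoDB — the key is an attribute name, and the value is an opaque blob looked up by that key; the function names and example key are made up.

```python
# A toy in-memory key-value store (illustrative only).
store = {}

def put(key, value):
    store[key] = value            # overwrite-on-write, no schema at all

def get(key, default=None):
    return store.get(key, default)

put("session:abc", {"user": "tanmay", "ttl": 3600})
print(get("session:abc"))
```

Notice that the store imposes no structure on the value — that schemalessness is exactly what distinguishes these systems from a relational table.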
(the information about the entities), and lines (the connections between the entities). Examples of graph databases include Neo4j and InfiniteGraph, among others. Now, let's look at some high-level differences between SQL and NoSQL. SQL basically comes under RDBMS — relational database management systems — whereas NoSQL comes under non-relational or distributed database systems. SQL databases have a fixed, static, predefined schema, whereas NoSQL databases have no schema or a very dynamic schema. SQL databases are not suited for hierarchical data storage, whereas NoSQL databases are best suited for hierarchical data storage. SQL databases are best suited for complex queries, where you need to merge multiple entities to bring out some information, whereas NoSQL databases are not so good for complex queries. SQL databases are particularly a good fit for vertical scaling, whereas NoSQL databases very beautifully support horizontal scalability. Now let's look at the reasons you should be using SQL databases. First of all, if you need to ensure ACID compliance: ACID compliance reduces anomalies and protects the integrity of your database by prescribing exactly how transactions interact with the database. Generally, NoSQL databases sacrifice ACID compliance for scalability and processing speed, but for many e-commerce and financial applications, an ACID-compliant database remains the preferred option. Second, if your data is structured and unchanging: if your business is not experiencing massive growth that would require more servers, and you're only working with data that's consistent, then there may be no reason for you to use a database designed to support a variety of data types and high traffic volume. Now, when should you be using NoSQL databases? When all the other components of your application are fast and seamless, NoSQL databases prevent data from being the bottleneck.
Big data is contributing a lot to the success of NoSQL databases, mainly because they handle data differently than traditional relational databases. A few examples of NoSQL databases are MongoDB, CouchDB, Cassandra, and HBase, as we previously saw. Now, the reasons to use a NoSQL database are as follows. First, when storing large volumes of data that often have little to no structure: a NoSQL database sets no limits on the types of data we can store together, and allows us to add new types as our needs change. With document-based databases, you can store data in one place without having to define the types of data in advance. Second, when you want to make the most of cloud computing and storage: cloud-based storage is an excellent cost-saving solution, but it requires data to be easily spread across multiple servers to scale up. Using commodity hardware, on-site or in the cloud, spares you the hassle of additional software, and NoSQL databases like Cassandra are designed to be scaled across multiple data centers out of the box. Third, when you want to speed up the development phase: NoSQL is extremely useful for rapid development, as it doesn't need the schema to be prepared ahead of time. If you are working on quick iterations of your system, which require making frequent updates to the data structure without a lot of downtime between versions, a relational database will slow you down. Now the question arises: which one do you use, SQL or NoSQL? When it comes to database technology, there is no one-size-fits-all solution. That's why many businesses rely on both relational and non-relational databases for different needs. Even as NoSQL databases are gaining popularity for their speed and scalability, there are still situations where a highly structured SQL database may perform better. Choosing the right technology hinges on your use case. The vast majority of relational databases are ACID compliant — that is, they support atomicity, consistency, isolation, and durability.
Whereas the NoSQL databases are BASE compliant — that is, they are Basically Available, they have a Soft state (you could modify the database whenever you want), and they provide Eventual consistency: not out of the box, but yes, they do provide consistency eventually. So when it comes to data reliability and a safe guarantee for performing transactions, SQL databases are still the better bet, as most NoSQL solutions sacrifice ACID compliance for performance and scalability. 10. CAP Theorem: Hey guys. In this lecture, we're going to talk about one of the important principles in system design basics: the CAP theorem. C stands for consistency, A stands for availability, and P stands for partition tolerance. So the CAP theorem states that it is impossible for a distributed software system to simultaneously provide more than two of these three guarantees: consistency, availability, and partition tolerance. When we design a distributed system, trading off among CAP is almost the first thing we want to consider: we can pick any two of the three. What do all these mean? Talking about consistency, a system is said to be consistent if all nodes see the same data at the same time. So let's consider a distributed system where three nodes, A, B, and C, are interacting. Simply speaking, if you perform a read operation on a consistent system, it should return the value of the most recent write operation, and a read should cause all the nodes to return the same data — that is, the value of the most recent write. So let's understand it with an example. Let's say we provided our system B an input x: this is our data, and writing it is basically a write operation — we wrote the value x to our system. So if we then read the data from any of the other nodes — let's say we are reading from A or C — we should get back x, as x was the most recent write operation. Now let us assume we write some new data to our system B.
So let's say the new data is y, and the timestamp of this write operation is t+1. Now if we perform a read on, let's say, node C, it should return y instead of x, because at this point of time y is our most recent write operation. Now let's discuss availability. Availability in a distributed system ensures that the system remains operational 100% of the time. That means every request gets a response, regardless of the state of any individual node. But this does not guarantee that the response contains the most recent write — there is no guarantee of the most recent write being in the response. For example, let us say we wrote the values x, y, and z to our system at times t, t+1, and t+2. Then, when we try to read from node B, a highly available system will return either x, y, or z, depending on the synchronization between the nodes. This doesn't guarantee consistency, but the system is highly available: that is, the system is operational and gives some response to every request. Now let's see what partition tolerance means. This is a condition which states that the system does not fail regardless of whether messages are dropped or delayed between the nodes in the system. Partition tolerance has become more of a necessity than an option in distributed systems. It is made possible by sufficiently replicating records across combinations of nodes and networks. So in our system, we have three nodes connected together. Suppose one particular node starts to malfunction, or its network link breaks. We cannot say the system is down — it is still functioning, and only that particular partition is affected. If any data is requested, it would be returned by some other node which holds a duplicate of that node's data. Now, we cannot build a general data store that is continuously available, sequentially consistent, and tolerant of any partition failures. We can only build a system that has any two of these three properties.
This is because, to be consistent, all the nodes should see the same set of updates in the same order. But if the network suffers a partition, updates in one partition might not make it to the other partitions before a client reads from an out-of-date partition after having read from an up-to-date one. The only way to handle this possibility is to stop serving requests from the out-of-date partition, but then the service would no longer be 100% available. So, examples of highly available and consistent systems that are not partition tolerant (CA) are relational databases like MySQL and SQLite. On the other hand, examples of highly available and partition-tolerant systems (AP) are wide-column data stores like Cassandra. And if we want consistency and partition tolerance (CP) and do not care as much about availability, an example would be MongoDB. So to conclude, we can see that we can achieve only two of the three discussed guarantees in any distributed system. 11. Consistent hashing: Welcome to the video on consistent hashing. Before moving forward with consistent hashing, we first need to understand distributed hash tables. A distributed hash table is one of the fundamental components used in distributed scalable systems. As we know, hash tables need a key, a value, and a hash function, where the hash function maps the key to a location where the value is stored. So when we pass the key to the hash function, it returns the index in the hash table where the value will be stored. Now suppose we are designing a distributed caching system with n cache servers. An intuitive hash function would be `key modulo n`. That is, to find which cache server a key is present on, we simply compute the key modulo n, and the resultant value gives us the index of the cache server where our value is stored. It is a simple and commonly used hash function, but it has two major drawbacks. First, it is not horizontally scalable: whenever a new cache host is added to the cluster, all the existing mappings are broken.
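The consistency-versus-availability choice during a partition can be shown in a toy sketch. All names here are hypothetical (`Replica`, the "CP"/"AP" modes): a partitioned replica can either refuse to answer (choosing consistency) or serve possibly stale data (choosing availability) — it cannot do both.

```python
# A toy illustration of the CAP trade-off during a network partition.
class Replica:
    def __init__(self, mode):
        self.mode = mode          # "CP" or "AP" behaviour during a partition
        self.value = "x"          # last value this replica saw
        self.partitioned = False

    def read(self):
        if self.partitioned and self.mode == "CP":
            # CP: refuse to answer rather than risk returning stale data
            raise RuntimeError("unavailable: cannot confirm latest write")
        return self.value         # AP: always answer, possibly stale

cp, ap = Replica("CP"), Replica("AP")
cp.partitioned = ap.partitioned = True   # a network partition occurs
print(ap.read())                         # AP replica still answers: "x"
try:
    cp.read()
except RuntimeError as e:
    print(e)                             # CP replica refuses instead
```

The AP replica's answer may be out of date — that is exactly the "no guarantee of the most recent write" behaviour described in the availability section.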
This is because as our number of cache servers changes, our hash function changes, and all the mappings already done in the existing system go in vain. It would be a pain to maintain if the caching system contains a lot of data, and practically, it becomes difficult to schedule a downtime to update all the cache mappings. Second, it may not be load balanced, especially for non-uniformly distributed data. In practice, it can easily be assumed that the data will not be uniformly distributed. For the caching system, this translates into some caches being hot and saturated while the others are idle and almost empty. So if we have three cache servers C1, C2, and C3, it might happen that most of the cache reads are being done from C1, and C2 and C3 are not queried that much. This results in a non-uniformly distributed load. So in these scenarios, consistent hashing is a good way to improve the caching system. Consistent hashing is a very useful strategy for distributed caching systems and distributed hash tables. It allows distributing data across a cluster in such a way as to minimize reorganization when nodes are added or removed, hence making the caching system easier to scale up or scale down. In consistent hashing, when the hash table is resized — for example, a new cache server is added to the cluster — only k/n keys need to be remapped, where k is the total number of keys and n is the total number of servers. If you recall, in the mod-based caching system we used `key modulo n` as the hash function, so all the keys would have to be remapped; in this case, only k/n keys need to be remapped. So let's see how it works. As a typical hash function, consistent hashing maps a key to an integer. Suppose the output of the hash function is in the range 0 to 255. Imagine that the integers in the range are placed on a ring, such that the values wrap around: integer 0 is placed somewhere here, and integer 1 is placed next to it.
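The "all mappings break" problem with `key modulo n` is easy to demonstrate. This sketch (key names and server count are made up) counts how many of 1,000 keys land on a different server after growing the cluster from 3 to 4 cache servers; MD5 is used only as a stand-in for a deterministic hash.

```python
# Demonstrating the remapping problem of mod-N hashing.
import hashlib

def server_for(key, n):
    # Deterministic hash of the key, reduced modulo the server count.
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return h % n

keys = [f"user:{i}" for i in range(1000)]
moved = sum(1 for k in keys if server_for(k, 3) != server_for(k, 4))
print(f"{moved} of {len(keys)} keys remapped")
# Typically about three-quarters of the keys move to a different server,
# even though we added only one machine.
```

For a uniform hash, a key keeps its server only when `h % 3 == h % 4`, which happens for roughly a quarter of the keys — so adding one server invalidates the vast majority of cache entries.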
Integer 2 is placed after that, and so on, up to 255, which sits just before 0 on the ring. Now, given a list of cache servers, we first need to hash them to integers in our range. So let's say we had three cache servers, and hashing them results in the following numbers: A is mapped to 5, B is mapped to 100, and C is mapped to 180. So we assume A is placed at index 5, B is placed at index 100, and C is placed at index 180 on our ring. Now, when we need to map any key to a particular server, we first hash it. So let's say we need to map the key K1: we pass it to our hash function, and let's say it outputs 2. We check that index on the ring, and then move in the clockwise direction until we encounter the first cache server. Index 2 itself doesn't hold a server — the key would have been mapped to that location, but our nearest cache server clockwise is at index 5. So this key is mapped to our server A. Similarly, let's say we have the key K2, and the hash function for K2 returns 115, which lands here on the ring. Since there is no cache server right there, we move in the clockwise direction, and the first cache server we encounter is at index 180 — that is, cache server C. So we map this key to cache server C. This is how we map our keys to cache servers. Now let's see what happens when we add a new server. In this case, let's say we added a cache server D at index location 125, between B at 100 and C at 180. Now, the key K1 still hashes to 2, and our nearest cache server clockwise is still A at index 5, so it stays where it was already stored. But the key K2, for which the hash function returned 115, was mapped here and was placed on our cache server C. So what we need to do is remap this key to our new cache server D.
So we only need to move the keys that fall between index 100 and index 125, as the remaining keys would still be stored on C. Only some of the keys that were mapped to cache server C need to be split off: those will be shifted to D, and the other keys will not be touched. Similarly, if by any chance our cache server A goes down and is removed from the cluster, we only have to move A's keys over to B, as B would be the first cache server clockwise on the ring. When a cache is removed, all the keys that were originally mapped to it fall onto the next server, and only those keys need to be moved; the other keys will not be affected. So in either case — whether we add or remove a particular cache server — we only need to move about k/n keys. Now, for load balancing: as was discussed in the beginning, the real data is essentially randomly distributed and thus may not be uniform. This may leave the keys on the caches unbalanced. To handle this issue, we add virtual replicas of the caches. Instead of mapping each cache to a single point on the ring, we map it to multiple points on the ring — that is, replicas. This way, each cache is associated with multiple portions of the ring. We can do this by having multiple hash functions for the cache servers themselves: multiple hashes for server A, and similarly for B and for C. As the number of replicas increases, the keys become more balanced. By doing this, we can achieve a balanced cache. 12. Message Queue: In this video, we're going to discuss message queues and their advantages while designing a system. A message queue is a component of messaging middleware that enables independent applications and services to exchange information. Message queues store messages — packets of data that one application creates for another application to consume — in the order they are transmitted, until the consuming application can process them.
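The ring with virtual replicas described above can be sketched as follows. This is a minimal, illustrative implementation (class and server names are made up; MD5 stands in for the ring's hash function): each server is hashed to many points on the ring, and a key is stored on the first server found moving clockwise from the key's own hash.

```python
# A minimal consistent-hash ring with virtual nodes (replicas).
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, replicas=100):
        self.replicas = replicas
        self.ring = []                        # sorted list of (point, server)

    def _hash(self, s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def add(self, server):
        for i in range(self.replicas):        # many points per server
            bisect.insort(self.ring, (self._hash(f"{server}#{i}"), server))

    def remove(self, server):
        self.ring = [(p, s) for p, s in self.ring if s != server]

    def server_for(self, key):
        point = self._hash(key)
        i = bisect.bisect(self.ring, (point,))   # first point clockwise
        return self.ring[i % len(self.ring)][1]  # wrap around the ring

ring = ConsistentHashRing()
for s in ("A", "B", "C"):
    ring.add(s)
before = {f"key{i}": ring.server_for(f"key{i}") for i in range(1000)}
ring.add("D")                                    # add a new cache server
moved = sum(1 for k, s in before.items() if ring.server_for(k) != s)
print(f"{moved} of 1000 keys moved")             # roughly k/n, not all keys
```

Adding server D moves only the keys that now fall into D's arcs of the ring — roughly a quarter of them here — while the mod-N scheme would have remapped most of the keyspace.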
This enables messages to wait safely until the receiving application is ready. So if there is a problem with the network or the receiving application, the messages in the message queue are not lost. Thus, a message queue is used for asynchronous application-to-application communication. Now, what does asynchronous communication between applications mean? Asynchronous communication means application one can send a message M to application two, but it doesn't require an immediate response to continue its processing. That means application one will continue to work regardless of whether the message M has been received by application two. Application two could be busy, or might be disconnected from the network. When application two is available, it processes the message and sends back a response to application one; in the meantime, application one can perform some other tasks as well. So, where do we store these messages? Obviously, we do not want our message M to be lost. Here the message queue comes to the rescue: message queues provide temporary storage when the destination program is busy or not connected. Now, the best example of asynchronous messaging is when an email is sent: the sender can continue processing other things without an immediate response from the receiver. So a message queue is nothing but an ordered collection of messages — a queue. The queue contains a sequence of messages sent between the applications, awaiting their turn to be processed. Messages placed on the queue are stored until the consumer retrieves them. The sending application is called the producer, and the receiving application is called the consumer. The role of the producer is to produce the messages, and the role of the consumer is to consume them. Messages are the data to be sent from producer to consumer; a message can be either a request or a response. Message queues do not process messages; they simply store them. This way of handling messages decouples the producer from the consumer.
The producer and the consumer of the message do not need to interact with the message queue at the same time. Now let's talk about the advantages of message queues. Message queues are important because they help in decoupling the applications. Our applications are decoupled if they can communicate with each other without being directly connected. Also, one application is completely unaware of the implementation of the other application; in other words, there is no dependency between them. With decoupled applications, any change made to one application doesn't affect the other, as long as the communication contract is not breached. We can easily break one monolithic application into smaller applications, which reduces overall complexity and makes the applications easier to maintain and debug. The smaller applications can be independently developed in any programming language and scaled accordingly — that means the applications can be programming-language agnostic. With message queues, there is also an increase in the reliability and the performance of a system. Producers do not have to wait for consumers to become available; they simply add requests to the queue, and consumers process the messages whenever they are available. Also, messages are persisted in the queue, so even if a consumer application crashes, no data will be lost and the system becomes more fault tolerant. 13. CDN: Hello and welcome to the video on the content delivery network, popularly known as the CDN. A CDN, or content delivery network, is a globally distributed network of web servers, or points of presence, whose purpose is to provide faster content delivery. Now first, let's talk about the benefits of a CDN. The content is replicated and stored throughout the CDN, so the user can access the data that is stored at the location geographically closest to him.
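The producer/consumer decoupling just described can be sketched with Python's standard thread-safe queue. This is only an in-process illustration of the pattern, not a real message broker: the producer never waits for the consumer — it drops messages on the queue and moves on, and the consumer drains them whenever it is ready.

```python
# A minimal producer/consumer sketch using Python's thread-safe queue.
import queue
import threading

q = queue.Queue()
received = []

def producer():
    for i in range(5):
        q.put(f"message-{i}")   # enqueue and continue immediately

def consumer():
    for _ in range(5):
        msg = q.get()           # blocks until a message is available
        received.append(msg)
        q.task_done()

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)                 # messages arrive in the order sent
```

Because the queue is FIFO, the consumer sees the messages in exactly the order the producer sent them — the ordering guarantee mentioned above.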
This is different from, and more efficient than, the traditional method of storing content on just one central server, as it avoids the bottleneck on that server and provides a high content loading speed. So now let's see how the internet works with and without a CDN. In case we do not have a CDN, all the requests from our users are served by the content provider directly. But in the case where we have a CDN between the content provider and our users, the content is served by the CDN instead of the provider. This avoids any potential bottlenecks at the content provider. Since the CDN is globally distributed, each client accesses a copy of the data near to himself, as opposed to all clients accessing the same central server. This results in a high content loading speed, thus improving the user experience. If all the data is located on one central server, the user experience is negatively affected by limited loading speed: the greater the distance between the user and the server, the longer it will take for the content to reach the client. To put it more simply, the purpose of a CDN is to improve the user experience and provide more efficient network utilization. A perfect example of a CDN user is Netflix. Netflix stores its data on CDN networks. Let's say Netflix has its servers based in the US, and let's say you are residing in India. Without a CDN, Netflix would have to bring all the data from its US servers to you here in India, and this would result in long delays while buffering the video. But you never notice a lag while watching your video, because the content is stored in CDN networks: you, as a user in India, are accessing the content from the CDN network instead, which is geographically much closer to you. Hence, it results in a better user experience, and it is also a way to avoid throttling the Netflix servers in the US.
Content providers, such as media companies and e-commerce vendors, pay CDN operators to deliver their content to their end audience. In turn, a CDN pays ISPs, carriers, and network operators for hosting its servers in their data centers. There are two key mechanisms which explain how a CDN functions. First, keep important content distributed to multiple globally distributed data centers, so it is closer to the end user and thus faster to download. And second, use server optimizations based on the content type to get the content to the user most efficiently. This means that if you're buffering a video on your smartphone, it is the responsibility of the CDN to provide you with only the SD version of the video, while if you are playing the video on your laptop or a TV, it would provide you with the HD-resolution video. This results in better network optimization, as you do not need an HD version of the video to be buffered on your smartphone. So in this way, a CDN offloads the traffic served directly from the content provider's origin servers, thus resulting in possible cost savings. Location is key for content delivery speed: the farther the user is from the server where the data is stored, the longer it will take for the content to reach the user, and this in turn negatively affects the user experience. Introducing the CDN solves this problem and provides the user with a much better user experience.
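The "serve from the closest location" idea can be sketched in a few lines. This is a deliberately simplified toy — the edge server names and coordinates are made up, and straight-line distance on latitude/longitude stands in for the real routing (DNS/anycast) that production CDNs use — but it shows the core decision: pick the geographically nearest edge for each user.

```python
# A toy sketch of CDN edge selection by geographic proximity.
import math

# Hypothetical edge servers with rough (latitude, longitude) positions.
edges = {"us-east": (39, -77), "eu-west": (51, 0), "ap-south": (19, 73)}

def nearest_edge(user_lat, user_lon):
    # Euclidean distance on lat/lon: crude, but enough to illustrate
    # why a user in India is served from an Indian edge, not a US origin.
    return min(edges, key=lambda e: math.dist(edges[e], (user_lat, user_lon)))

print(nearest_edge(28.6, 77.2))   # a user near Delhi is routed to "ap-south"
```

A real CDN makes this decision with anycast routing or DNS resolution rather than explicit coordinates, but the effect is the same: shorter distance, lower latency, and less load on the origin.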