Computer Networks For Beginners | IT Networking Fundamentals | Edoreal Learning Solutions | Skillshare




Lessons in This Class

48 Lessons (3h 12m)
    • 1. Introduction to CN

    • 2. Types of communication and types of networks

    • 3. Internet

    • 4. Network topologies

    • 5. Protocols and Standards

    • 6. Types of Network Models

    • 7. User support layers in OSI model

    • 8. Network support layers in OSI model

    • 9. TCP/IP model

    • 10. Addressing in computer Networks

    • 11. Introduction to Physical Layer

    • 12. Multiplexing

    • 13. Transmission Media

    • 14. Fiber optic cables

    • 15. Circuit Switching

    • 16. Packet Switching

    • 17. Introduction to Data Link layer

    • 18. Error Correction Methods

    • 19. Error detecting code - Checksum

    • 20. Framing

    • 21. Flow and Error control

    • 22. Stop and Wait

    • 23. Sliding window protocols

    • 24. Random Access Protocols

    • 25. CSMA, CSMA-CD, CSMA-CA

    • 26. Controlled Access

    • 27. Ethernet

    • 28. Gigabit Ethernet

    • 29. SONET-SDH -1

    • 30. SONET-SDH -2

    • 31. Wireless LANs -1

    • 32. Wireless LANs -2

    • 33. IPv4 Addressing

    • 34. Classful addressing

    • 35. Subnetting

    • 36. NAT (Network Address Translation)

    • 37. Internetworking

    • 38. Tunneling

    • 39. Address Mapping

    • 40. Process to Process Delivery

    • 41. TCP and UDP

    • 42. SCTP


    • 44. Closed loop congestion control

    • 45. Congestion control in TCP

    • 46. Quality of Service (QoS) - 1

    • 47. Quality of Service (QoS) - 2

    • 48. Integrated and Differentiated services







About This Class

A computer network connects two or more autonomous computers, which can be located anywhere geographically. Computer networks have opened up an entire frontier in the world of computing called the client/server model.

By choosing this course, you will come to know the basics of computer networks: the types of network models, the layers in each model, the types of devices, the types of networks, and what the Internet is. You will also learn about the history of the Internet, the history of computer networks, and the scientists involved in developing these technologies.

You will learn the following topics in this course:

  1. Introduction to computer networks
  2. Networking devices like switches
  3. Learn about LANs and WANs
  4. Subnets, layers, models
  5. Understand OSI Model
  6. Understand concept of TCP/IP protocol
  7. Understand what is Physical Layer
  8. Learn about Multiplexing
  9. Transmission media in Physical layer
  10. Fiber optic cables
  11. Basics of networks and their introduction

This course is targeted at computer science and engineering students who are passionate about networks and security, and at beginners who want to know how the Internet works and how computers communicate. Basic knowledge of computer hardware and software is required for this course.

Meet Your Teacher

Edoreal is a visual and social learning company providing quality education to engineering students. We are passionate about engineering and related subjects, and we create easy, fun, and engaging video lessons with animation, consistently updating our content to keep pace with new technologies and syllabi.

Everyone should be given world-class education at an affordable cost, and it should be accessible to everyone in the world.





1. Introduction to CN: Hello. In this course, we discuss computer networks in detail. Why wait? Let's dive in. These days we use computer networks daily because of their advantages. For example, let's say we need to send a file to a friend who is miles away from us. Unlike a traditional courier service, we can send that digital file almost instantaneously through a computer network. Being able to exchange data between all points in the world quickly and accurately, despite the distance, is only possible through computer networks. Let's see what a computer network is. A computer network is simply an interconnection of autonomous computers that transmit data using electrical signalling. This course explains the fundamentals of computer networks in a simple and understandable way. The basic components of computer networks are called hosts and subnets. A host is an end system that sends and receives messages. A computer in the network can be called a host; the host, or node, in the network can also be a mobile device, a switch, a router, etc. A subnet is a collection of switching elements and transmission lines; switching elements are nothing but specialized computers, such as routers and switches. A network provider, such as a telephone company, operates this subnet, and the purpose of the subnet is to carry messages from host to host. The more recent meaning of subnet is a logical subdivision, or sub-network, of a bigger network. Let's see some basic switching elements of computer networks. The hub, the switch, and the router are the basic switching elements. A hub is a simple networking device which connects all network devices on an internal network. It has multiple ports that accept Ethernet connections from the devices. It doesn't know the exact location of the destination device to which the data must be sent, so if PC1 sends a message to PC3, the message will be broadcast by the hub to all the devices that are connected to the hub.
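As a sketch of the behavior just described, the toy Python functions below contrast a hub's flooding with the targeted forwarding of a switch (which the lesson turns to next). The port numbers and MAC addresses are invented for illustration; this is not any real device's logic.

```python
# Illustrative sketch only: a hub floods every port except the source port,
# while a switch consults a learned MAC-address table.

def hub_forward(ports, src_port, frame):
    """A hub repeats the frame out of every port except the one it arrived on."""
    return {p: frame for p in ports if p != src_port}

def switch_forward(mac_table, ports, src_port, dst_mac, frame):
    """A switch sends the frame only to the port its MAC table maps the
    destination to; if the destination is unknown, it floods like a hub."""
    if dst_mac in mac_table:
        return {mac_table[dst_mac]: frame}
    return {p: frame for p in ports if p != src_port}

ports = [1, 2, 3, 4]                  # PC1..PC4 plugged into ports 1..4
mac_table = {"aa:aa:aa:aa:aa:03": 3}  # the switch has learned PC3's address

# PC1 (port 1) sends to PC3: the hub disturbs ports 2, 3, and 4 ...
print(hub_forward(ports, 1, "hello PC3"))
# ... but the switch delivers the frame to port 3 only.
print(switch_forward(mac_table, ports, 1, "aa:aa:aa:aa:aa:03", "hello PC3"))
```

The dictionaries returned map each outgoing port to the frame sent on it, which makes the extra traffic a hub generates easy to see.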
This creates security issues, unnecessary traffic, waste of bandwidth, etc. That's why switches are preferred over hubs. The switch is very similar to a hub in connecting devices on an internal network. The only difference is that the switch knows precisely where the data is to be sent: it can detect the physical, or MAC, addresses of the devices that are connected to it. So, unlike with a hub, the data transmitted from one PC goes only to the intended PC in the network, with the help of a switch. Hubs and switches are used to exchange data within the same network; they cannot transfer data outside the network, that is, on the Internet. To exchange data outside their own network, we need a device that is capable of reading IP addresses. That is where the router comes in: a router routes the data from one network to another using IP addresses. So the data from a PC in the first network goes to the switch, then the router takes the data from the switch and forwards it to the switch in the second network, and so on until it reaches the destination.

2. Types of communication and types of networks: Let's see the types of communication in computer networks. Communication can occur in three ways: point-to-point, broadcasting, and multicasting. In point-to-point, as the name suggests, two computers directly communicate with each other, that is, one to one. In broadcasting, one computer sends data to all nodes in the network, for example when distributing weather reports or live radio programs. Broadcasting a message is wasteful because some of the receivers might not be interested in the message. Multicasting is nothing but sending messages to a well-defined group of nodes, but not all the nodes, for example streaming live video only to the subscribed viewers. Categories of networks: in general, there are two primary categories of networks, the local area network and the wide area
network. Networks of a size in between are typically referred to as metropolitan area networks. Local area network: a local area network, or LAN, is a private network of personal computers or consumer electronics that share a common communication medium, such as cables or a wireless connection. LANs are confined to a small geographic area, such as a single office or a building. LANs owned by companies are called enterprise networks. All computers in the LAN communicate through a hub or switch. The most common type of wired LAN is switched Ethernet: each computer speaks the Ethernet protocol and connects to a box called a switch. The standard technologies used in the LAN are Ethernet and WiFi. Wireless LANs are the newest evolution in LAN technology, and the standard for wireless LANs is popularly known as WiFi. Compared to wireless networks, wired LANs exceed them in all dimensions of performance. Many LANs use wireless technologies that are built into smartphones, tablet computers, and laptops, with the help of hotspots. Wide area network: a WAN is normally referred to as a network which spans a large geographic area that may comprise a country, a continent, or even the whole world. A WAN provides long-distance transmission of data. For example, take three offices in three different cities, located far away from each other. The three LANs can be connected using transmission lines and switching elements, that is, routers, to have communication between all the computers, and this is called a WAN. In a WAN, the hosts and subnets are owned and operated by different people. Routers in a WAN usually connect different kinds of networking technology: the networks in these offices may be switched Ethernet, and the transmission lines can be SONET links. So a WAN may be referred to as an internetwork; it can be circuit-switched or packet-switched, and it can use fiber links or wireless technologies.
For example, satellite communication. Let's see two other variations of WANs. Instead of having dedicated lines, a company may want to interconnect all its branches virtually using the underlying capacity of the Internet, which is called a VPN, or virtual private network, and is also a type of WAN. Instead of having dedicated lines or a VPN, the company can approach a network service provider or telecom company to connect all its branches, which can also be a WAN. Metropolitan area networks: a metropolitan area network, or MAN, is a network which is bigger than a LAN but smaller than a WAN. It typically covers the area inside a town or a city. It is designed for those who need high-speed connectivity. A good example of a MAN is the part of a telephone company network that can provide a high-speed digital subscriber line, or DSL, to the customer. Recent developments in high-speed wireless Internet have resulted in another type of MAN, which has been standardised as IEEE 802.16 and is popularly known as WiMAX.

3. Internet: Internetwork. Many types of networks are available in the world, with different hardware and software. These incompatible networks must be connected to achieve communication between people connected to different networks. An internetwork, or internet, is nothing but an interconnected collection of networks. The most important internet is called the Internet, note the uppercase letter I. The Internet is a collaboration of millions of interconnected networks. History of the Internet: in the mid-1960s, the Advanced Research Projects Agency, or ARPA, in the Department of Defense of the USA, wanted to develop a command-and-control network that could survive a nuclear attack, and the investors who funded this project wanted to share the findings between researchers, which could eliminate duplication of effort and reduce costs.
In 1967, at an Association for Computing Machinery, or ACM, meeting, ARPA presented its idea for ARPANET, a small network of connected computers. The core idea was that each host computer would be attached to a specialized machine called an interface message processor, or IMP. All the IMPs had to be connected to one another, and each IMP had to be able to communicate with other IMPs as well as with its own attached host. By 1969, ARPANET was a reality: four nodes at four universities were connected by IMPs to form a network. The Network Control Protocol provided communication between the hosts in the network; a host is nothing but a computer. In 1972, Bob Kahn and Vint Cerf, both of whom were part of the core ARPANET group, collaborated on a project named the Internetting Project. Shortly after that, the authorities decided to split TCP into two protocols: the Transmission Control Protocol, TCP, and the Internetworking Protocol, IP. By the mid-1980s, the NSF, or National Science Foundation, built a new backbone to connect its supercomputing centers to some regional networks with 56 kbps lines. A backbone is a part of a computer network that interconnects various pieces of the network, providing a path for the exchange of information between different LANs or sub-networks. Later, NSF upgraded its backbone to 448 kbps and then to 1.5 Mbps, which allowed network connections to thousands of universities, research labs, libraries, museums, et cetera. In 1990, the 1.5 Mbps links were upgraded to 45 Mbps. Later, NSF created NAPs, or network access points, to ensure each regional network could communicate with every other regional network. Then many other countries and regions built national research networks. In the early 1990s, the Internet grew explosively with the emergence of the World Wide Web. The architecture of the Internet: the Internet has gone through many phases since the 1960s.
Today, the Internet we experience is the result of decades of innovation and the combination of many technologies, so it has become complex over the years. It is made up of many wide area and local area networks, joined by various devices: switching stations, routers, and so on. Today we use the services of Internet service providers, or ISPs, to get an Internet connection. The Internet uses ISP networks to connect enterprise networks, home networks, and many other networks. All ISPs are connected in order to deliver Internet services to the end user. There are international service providers, national service providers, regional service providers, and local service providers. The Internet today is not owned by any single person or government. International ISPs connect all the national ISPs. The national Internet service providers are backbone networks created and maintained by specialized companies. All these national ISPs are connected by massive switching stations called network access points, or NAPs. The regional ISPs, in turn, take services from national ISPs and provide services to local ISPs. Finally, the local ISPs offer Internet services to the end users. This is how the typical Internet connects the whole world.

4. Network topologies: Physical topology. There are two types of connections: point-to-point and multipoint. We have already seen point-to-point. A multipoint connection is one in which more than two specific devices share a single link. Physical topology means how a network is laid out physically. The structural representation of the relationship of all the links and linking devices to one another is referred to as the topology of a network. There are four basic topologies: mesh, star, bus, and ring. Mesh topology: here, every device has a dedicated point-to-point link to every other device. The term dedicated means the link carries traffic only between the two devices it connects.
To find the number of physical links with n nodes, we first consider that each node must be connected to every other node, meaning that every node must be connected to n minus 1 nodes. If each physical link allows communication in both directions, that is, duplex mode, we can say that in a mesh topology we need n into (n minus 1) by 2, that is, n(n - 1)/2 duplex-mode links. To accommodate those links, every device on the network must have n minus 1 I/O ports to be connected to the other n minus 1 stations. Advantages: there are no traffic problems, because each connection can carry its own data load. It is robust: if one link becomes unusable, it doesn't incapacitate the entire system. It has the advantage of privacy or security, because with a dedicated line only the intended recipient sees the message. Point-to-point links make fault identification easy; we can find the precise location of the fault. Disadvantages: installation and reconnection are difficult because all the devices must be connected to one another. The bulk of the wiring occupies the available space. The hardware required to connect each link, the I/O ports and cable, can be expensive. One of the practical examples of a mesh topology is the connection of regional telephone offices, in which each regional office needs to be connected to every other regional office. Star topology: every device in a star topology has a dedicated point-to-point link only to a central connection point, usually called a hub or switch. The devices are not directly connected to each other as in a mesh topology. A star topology doesn't allow direct traffic between the devices; the central point acts as an exchange. If one device wants to send data to another, it sends the data to the central point, which then sends the data to the other connected device. Advantages: it is less expensive than a mesh topology, because each device needs only one link and one I/O port, and it is easy to install and reconfigure.
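The mesh-topology counts given above, n(n - 1)/2 duplex links and n - 1 I/O ports per device, can be checked with a few lines of Python (a quick numerical sketch, nothing more):

```python
# Check the mesh-topology formulas from the lesson: n nodes need
# n*(n-1)/2 duplex links, and each node needs n-1 I/O ports.

def mesh_links(n: int) -> int:
    """Number of dedicated duplex links in a full mesh of n nodes."""
    return n * (n - 1) // 2

def ports_per_node(n: int) -> int:
    """I/O ports each device needs to reach the other n-1 stations."""
    return n - 1

for n in (4, 5, 10):
    print(f"{n} nodes: {mesh_links(n)} links, {ports_per_node(n)} ports per node")
# 4 nodes need 6 links; 5 nodes need 10; 10 nodes already need 45.
```

The rapid growth of the link count is exactly why full-mesh wiring is expensive, as the disadvantages above note.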
It is robust, because if one link fails, only that link is affected, and fault identification is easy. Disadvantages: the whole topology depends on a single device, the hub; if the hub goes down, the whole system is dead. The star topology is widely used in local area networks; high-speed LANs usually use a star topology with a central hub. Bus topology: the mesh and star topologies use point-to-point connections. A bus topology, on the other hand, is multipoint: one long cable is used to link all the devices in the network. Each device is connected to a tap attached to the long cable. As a signal travels along the cable, some of its energy is transformed into heat; therefore, the signal becomes weaker and weaker as it travels farther and farther. That is why there is a limit on the number of taps a bus can support, and on the distance between those taps. Advantages: easy installation and less cabling. Disadvantages: difficult reconnection and fault isolation. A fault or break in the bus cable stops all transmission, and the damaged area reflects signals back, which creates noise in both directions. Bus topology was one of the first topologies used in the design of early LANs. Ethernet LANs can use a bus topology, but they are less popular now. Ring topology: in this topology, each device has a dedicated point-to-point connection with only the two immediate neighbors. A signal is passed along the ring in one direction, from device to device, until it reaches its destination. Each device in the ring contains a repeater: when a device receives a signal intended for another device, its repeater regenerates the bits and passes them along. Advantages: it is easy to install and reconfigure. Fault isolation is simplified, and an alarm rings whenever a problem occurs, indicating the location. Disadvantages: traffic is unidirectional, and a break in the ring can disable the entire network. Ring topology was prevalent when IBM introduced its local area network, Token Ring. Today,
the need for higher-speed LANs has made this topology less popular. Hybrid topology: a network can be hybrid; for example, we can have a main star topology, with each branch connecting several stations in a bus topology.

5. Protocols and Standards: Protocols and standards. To send or receive information, a sender can't simply send bit streams and expect the receiver to understand; there should be some rules on which both sender and receiver agree. A protocol is a set of rules and regulations framed to carry out the communication effectively and efficiently. These protocols are the building blocks of computer networks. A protocol defines the format of messages within a layer. Violating protocols makes communication more difficult or impossible. The essential elements of a protocol are syntax, semantics, and timing. Syntax: the term syntax refers to the format of the data, the order in which the data is presented. A simple protocol might expect the first 8 bits of the data to be the address of the sender, the second 8 bits of the data to be the address of the receiver, and the rest of the stream to be the message itself. Semantics refers to the meaning of each section of bits: interpreting a particular pattern in the packet and taking some action is nothing but semantics. Timing refers to two characteristics: when data should be sent, and how fast the data should be sent. If a sender produces data at 100 Mbps but the receiver can process data only at 1 Mbps, the transmission will overload the receiver, and some data will be lost. Standards: there are some standards that have been adopted by vendors and manufacturers for effective communication and interoperability, for example the IEEE standards. Some standards must be followed when a new design is to be formulated. For example, the 802.3 standard lays down the specifications related to Ethernet.
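The toy protocol used above to illustrate syntax (first 8 bits sender address, next 8 bits receiver address, the rest the message) can be written as a tiny parser. The protocol is the lesson's illustration, not a real standard:

```python
# Parser for the lesson's illustrative protocol: byte 0 is the sender
# address, byte 1 the receiver address, and the rest is the message.

def parse(packet: bytes):
    if len(packet) < 2:
        raise ValueError("packet too short for the two address fields")
    sender = packet[0]       # first 8 bits
    receiver = packet[1]     # second 8 bits
    message = packet[2:]     # the rest of the stream
    return sender, receiver, message

pkt = bytes([0x0A, 0x0B]) + b"hello"
print(parse(pkt))  # (10, 11, b'hello')
```

Both sides must agree on this layout in advance; if the receiver read the bytes in a different order, the "semantics" of the packet would be lost, which is exactly the point of the syntax rules.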
So if anyone were to design software or hardware related to Ethernet, they would have to follow those specifications; then that new design would work with all the pre-existing, as well as future, Ethernet software and hardware. Data communication standards fall into two categories. De facto standards, meaning "by fact" or "by convention": these standards have not been approved by an organized body but have been adopted as standards through widespread use. De facto standards are often established originally by manufacturers who seek to define the functionality of a new product or technology. De jure standards, meaning "by law" or "by regulation": these are the standards that have been legislated by an officially recognized body. Standards are developed through the cooperation of standards creation committees, forums, and government regulatory agencies. Some standards creation committees are the International Organization for Standardization (ISO), the American National Standards Institute (ANSI), and the Institute of Electrical and Electronics Engineers (IEEE). Layers and services: a computer network is a combination of layers, services, and protocols, and the whole process of communication via a network can be divided into separate layers. The purpose of layering is to reduce the design complexity of networks. The task of sending a message from one point in the world to another can be broken into several subtasks; each subtask is performed by a separate layer, and each layer has its own hardware and software. At the lowest layer, a signal, or a set of signals, is sent from the source computer to the destination computer. A service is a set of operations that a layer provides to its upper layer, or an interface between two layers, with the lower layer being the service provider and the upper layer being the service user.

6. Types of Network Models: Types of network models.
Some standard network models are needed to enable communication between different kinds of networks effectively and efficiently. The two popular standard network models are the OSI reference model and the TCP/IP reference model. The OSI reference model defines a seven-layer network, and the TCP/IP reference model represents a four-layer network. The OSI model dominated the networking literature before 1990. The protocols related to the OSI model are not used anymore, but the model itself is still quite general and valid. TCP/IP became the dominant commercial architecture because it was used and tested extensively on the Internet. The OSI reference model was never fully implemented. Unlike the OSI reference model, in TCP/IP the protocols are widely used, but the model itself is not of much use. For these reasons, we have to examine both of them in detail. It is tough to explain some types of communication, such as Bluetooth, with TCP/IP, because TCP/IP is protocol-dependent, tied to both TCP and IP; but the OSI model can explain them, as it is protocol-independent. The OSI model: the International Organization for Standardization, or ISO, is a multinational body dedicated to worldwide agreement on international standards. The OSI model was developed by the ISO in 1984 as the first step towards international standardisation of protocols, and it was revised in 1995. The OSI model is called the Open Systems Interconnection model because it deals with connecting open systems, that is, systems that are open for communication with other systems. The purpose of the OSI model is to facilitate communication between different machines without making changes to the underlying software and hardware. The OSI model is composed of seven ordered layers that allow us to think about a networking problem layer by layer.
They are the application layer, presentation layer, session layer, transport layer, network layer, data link layer, and physical layer. Here we can see the layers involved when a message is sent from sender to receiver. As the message travels from sender to receiver, it may pass through many intermediate nodes. These intermediate nodes usually involve only the first three layers of the OSI model, because the other four layers are dealt with in the sender and receiver. Passing the data through the stack from the application layer to the physical layer is called encapsulation, and the reverse process is called decapsulation. Organization of the OSI layers: these seven layers can be divided into three groups based on their behavior. The physical, data link, and network layers can be thought of as the network support layers. These layers deal with the physical aspects of moving data from host to host, such as electrical signals, physical connections, physical addressing, reliability of transport, etc. The session, presentation, and application layers are the user support layers; these layers support interoperability among different software systems on a host computer. Layer 4, the transport layer, acts as a bridge between the user support layers and the network support layers, and it also ensures that the data transmitted from the network support layers is in a form the user support layers can use, and vice versa. Hosts and subnets are the basic elements in computer networks: the subnet is involved in the lower layers, 1, 2, and 3, while hosts are involved in the upper layers, 4, 5, 6, and 7. We can say that the upper layers are almost always implemented in software, while the lower layers are a combination of both software and hardware, except for the physical layer, which is mostly hardware. An electromagnetic signal at the physical layer is transmitted to the receiving device. At the receiving machine, the signal gets passed to layer 1 and is transformed back into digital form.
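The encapsulation and decapsulation idea described above can be sketched with placeholder headers: each sending layer prepends a label standing in for its header, and the receiver strips them in reverse order. The header strings are invented placeholders, not real formats.

```python
# Toy encapsulation: each layer wraps the data with a placeholder "header".
# The outermost wrapper belongs to the lowest layer, as on a real stack.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "data-link"]

def encapsulate(message: str) -> str:
    data = message
    for layer in LAYERS:            # walk down the sender's stack
        data = f"[{layer}]{data}"
    return data

def decapsulate(data: str) -> str:
    for layer in reversed(LAYERS):  # walk up the receiver's stack
        prefix = f"[{layer}]"
        if not data.startswith(prefix):
            raise ValueError(f"missing {layer} header")
        data = data[len(prefix):]
    return data

wire = encapsulate("hello")
print(wire)                  # the data-link "header" ends up outermost
print(decapsulate(wire))     # hello
```

Note that the receiver removes headers bottom-up: the data link header is stripped first, the application header last, mirroring the layer-by-layer description above.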
Then the digital message moves up layer by layer, with each layer receiving the data and removing a header, from the physical to the application layer. Finally, by the time it reaches the application layer, the message is converted back to readable form. Let's explore each layer in detail.

7. User support layers in OSI model: Application layer. The application layer is the topmost layer, the seventh layer, in the OSI model. The application layer is where network applications and their application layer protocols reside. This is the layer where the user, whether human or software, actually interacts. Applications that have networking capability connect with peers on other networks or on the same network. The application layer offers user interfaces and support services such as the network virtual terminal, remote file handling, electronic mail, database services, etc. A network virtual terminal is a software version of a physical terminal that allows a user to log on to a remote host. Remote file handling services allow a user to access and manage files on a remote host from the local computer. Presentation layer: the next layer is the presentation layer, which basically lives in the operating system. We interact with the application layer, which in turn sends information to the presentation layer. Data translation, encryption and decryption, and compression and decompression all happen in the presentation layer. Text data can be translated to ASCII or Unicode format, and media data can be translated to JPEG or MP3, etc. Encryption and compression happen at the sender's side: encryption happens for security reasons, when we send sensitive information like credit card numbers and so on, and compression is to compress the message and make it lightweight. Decompression, decryption, and translation happen at the receiver's side and are the exact opposites of compression, encryption, and translation. Session layer:
It deals with creating, maintaining, and synchronizing sessions among different communicating systems. Sessions offer various services. Dialog control keeps track of whose turn it is to transmit, and whether the communication is half duplex or full duplex. Synchronization allows adding checkpoints, or synchronization points, to a stream of data, to allow the systems to pick up from where they left off in the event of a crash and subsequent recovery. If the sender is sending a 100 MB video, it is advisable to insert checkpoints after every 5 MB, to ensure that each 5 MB unit is received and acknowledged independently. In this case, if a crash happens during the transmission at 63 MB, the only data that needs to be re-sent after the system recovers is from 61 to 63 MB. Transport layer: the transport layer is the bridge between the user support layers and the network support layers, and provides services such as end-to-end delivery, segmentation and reassembly, connection control, flow control, and error control. End-to-end delivery of messages: the transport layer ensures source-to-destination delivery of a message, from a specific sender process on one computer to a specific receiver process on the other. The transport layer header contains information about the port address of the specific process. Segmentation and reassembly: the transport layer divides a message into transmittable segments with sequence numbers and reassembles them correctly at the destination. Connection control: the transport layer supports both connectionless and connection-oriented services. A connectionless service treats each segment as an independent packet and delivers them to the destination independently. A connection-oriented service ensures having a connection with the destination before delivering the packets. Flow control: the transport layer manages the rate of transmission between two nodes, to prevent a fast sender from transmitting more data than the receiver can handle, and vice versa.
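The checkpoint arithmetic from the session-layer example above (checkpoints every 5 MB, crash at 63 MB) can be made concrete. The function is a sketch of the idea, not a real session-layer API:

```python
# After a crash, transmission resumes from the last acknowledged checkpoint,
# so only the data after that checkpoint must be re-sent.

def restart_point(crash_at_mb: int, checkpoint_every_mb: int) -> int:
    """Offset (in MB) of the last checkpoint at or before the crash."""
    return (crash_at_mb // checkpoint_every_mb) * checkpoint_every_mb

crash = 63
last_ok = restart_point(crash, 5)
print(f"crash at {crash} MB -> resume after {last_ok} MB; re-send {last_ok + 1}..{crash}")
# crash at 63 MB -> resume after 60 MB; re-send 61..63
```

Without checkpoints, the whole 63 MB would have to be re-sent; with them, only the 3 MB after the last acknowledged checkpoint is retransmitted.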
Error control: the transport layer ensures the entire message arrives at the destination transport layer without any damage, loss, or duplication. It does error correction through retransmission, and error detection through codes like checksums. The transport layer sets up, maintains, and tears down connections for the session layer.

8. Network support layers in OSI model: Network layer. The network layer provides services such as routing of packets, logical addressing, and subnet control. Routing of packets and logical addressing: the network layer receives segments from the transport layer and wraps each segment with an IP header carrying the logical address, etc., after which it is known as a packet. It decides the route with the help of connecting devices called routers or switches, possibly across multiple networks or links, so that packets ultimately reach the final destination. Subnet control: the network layer controls the operation of the subnet. If too many packets are present in the subnet at the same time, they will get in one another's way, forming bottlenecks. Congestion control is also a responsibility of the network layer. Here we have to note that the network layer gets each packet to the correct computer, whereas the transport layer gets the entire message to the correct process on that computer. Data link layer: the data link layer transforms the physical layer into a reliable link. In other words, it makes the physical layer appear error-free to the network layer by masking the real errors, so the network layer doesn't see them. The data link layer has two sublayers: logical link control, which provides flow control and error control, and media access control, which provides physical addressing as well as logical topology. Logical topologies describe how signals act on the network. The data link layer is responsible for error-free data transmission: framing, error detection, flow control, and physical addressing. Framing: in this layer, packets arriving from the network layer become frames, and this process is called framing.
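The checksum-style error detection mentioned above for the transport layer (and used again in the data link layer's trailer) can be sketched as a simple 16-bit ones'-complement sum, in the spirit of the Internet checksum. This is an illustration of the idea, not an RFC-exact implementation:

```python
# Sender computes a 16-bit ones'-complement checksum over the data and
# appends it; the receiver recomputes and compares to detect corruption.

def checksum16(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                  # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    total = (total & 0xFFFF) + (total >> 16)       # final fold
    return (~total) & 0xFFFF             # ones' complement of the sum

def verify(data: bytes, received_checksum: int) -> bool:
    """Receiver side: recompute the checksum and compare."""
    return checksum16(data) == received_checksum

msg = b"hello world"
ck = checksum16(msg)
print(verify(msg, ck))             # True: message arrived intact
print(verify(b"hellp world", ck))  # False: a single corrupted byte is caught
```

A checksum like this detects corruption but cannot repair it; correction is done by retransmitting the damaged segment or frame, as the lesson says.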
The data link layer is responsible for moving frames from one host or node to the next. The data link layer is concerned with local delivery of frames on the same level of the network. To make the service reliable, the receiver confirms correct receipt of each frame by sending back an acknowledgement frame. Physical addressing: the data link layer adds a header to the frame to define the sender and receiver of the frame. If the frame is intended for a system outside the sender's network, the receiver address is the address of the device that connects the network to the next one. Flow control: the data link layer imposes a flow control mechanism to avoid overwhelming the receiver when the sender is sending more than the receiver can process. Error control: the data link layer can detect and retransmit damaged or lost frames and recognize duplicate frames. Error control is achieved by adding a trailer at the end of the frame. Access control: when two or more devices are connected to the same link, data link layer protocols determine which device has control over the link at any given time. The data link layer mainly deals with switches and bridges in the network. Physical layer: in the OSI model, this is the layer where literally all the wiring is present; this includes patch cords, cabling, etc. This layer defines the type of transmission medium and its interfacing characteristics with the connected devices. The physical layer deals with electrical and mechanical specifications, timing interface and transmission medium, and it is responsible for transmitting raw bits over a physical communication channel. Most networking problems are physical layer problems. This layer receives frames from the data link layer and converts the frames into bits. The physical layer is responsible for encoding, that is, encoding zeros and ones into electrical signals, and for the data rate, that is, the duration of a bit, or how long a bit lasts.
It also handles synchronization of bits, that is, the sender and receiver must not only use the same bit rate but also use synchronized clocks; line configuration, that is, whether the connection of devices to the transmission medium is point-to-point or multipoint; physical topology, that is, how devices are connected to make a network: mesh, star, ring, etc.; and transmission mode, which defines the direction of transmission between two devices: simplex, half duplex or full duplex. Simplex mode is one-way communication: only one device can send, and the other can only receive. In half duplex mode, two devices can send and receive, but not at the same time. In full duplex, or simply duplex, two devices can send and receive messages at the same time. Some other devices used in the physical layer are the network interface controller, repeater, Ethernet hub, etc. 9. TCP/IP model: History of the TCP/IP model: TCP/IP was first outlined by Cerf and Kahn in 1974, and later it became a standard in the Internet community by 1989. Initially, the ARPANET used leased telephone lines to link computers in hundreds of universities and government offices. However, due to the addition of new networks such as radio and satellite networks, the existing protocols had trouble interworking with them. Furthermore, applications with divergent requirements were envisioned, ranging from transferring files to real-time speech transmission, etc. So a new and flexible architecture was needed to connect different types of networks seamlessly, and also to ensure the connection is not lost as long as the source and destination machines are functioning, even if some of the machines or transmission lines in between are suddenly blown away. All these requirements led to a new architecture, which was named the TCP/IP reference model after its two fundamental protocols, TCP and IP.
The initial protocols were built on top of connectionless technology with packet-switching networks, and these initial protocols can run across different types of networks. The TCP/IP reference model was developed ten years earlier than the OSI model, and it was the prevalent model used in both the ARPANET and the Internet we use today. The TCP/IP model is divided into four layers: application layer, transport layer, network layer and link layer. We can see the application layer of the TCP/IP model as a combination of the application, presentation and session layers of the OSI model, and we can relate the link layer of TCP/IP to a combination of the physical and data link layers of the OSI model. Application layer of the TCP/IP model: the application layer in the TCP/IP model holds all the higher-level protocols. The initial protocols were Telnet, FTP, SMTP, DNS, HTTP, etc. No need for session and presentation layers was perceived in the TCP/IP model, because session and presentation functions are not very useful to most applications, and they can be included in the application layer itself. Transport layer of TCP/IP: the purpose of this transport layer is very similar to the OSI transport layer, but it is represented by end-to-end transport protocols that are responsible for delivery of application layer messages between application endpoints, by adding a 16-bit port number to the packets. The end-to-end transport protocols are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). A new protocol, SCTP, or Stream Control Transmission Protocol, has been added to provide support for new applications such as voice over the Internet, and it combines the best of both worlds, TCP and UDP. Network layer: the network layer of TCP/IP is responsible for moving network layer packets, known as datagrams, and delivering them independently to the transport layer, and it is also called the internetwork or Internet layer.
It defines an official packet and protocol format called the Internet Protocol, or IP. The Internet Protocol is the transmission mechanism used by the TCP/IP protocols, but it is an unreliable and connectionless protocol: it provides no error checking and no guarantee of delivery of messages. The Internet Protocol transports data in packets called datagrams, and each datagram can be transported separately along a different route, so these datagrams may arrive out of sequence or be duplicated. The Internet Protocol doesn't keep track of the route, and it has no facility for reassembly of datagrams; it is the job of the higher layers to reassemble the datagrams. The Internet Protocol uses four supporting protocols: the Address Resolution Protocol (ARP), the Reverse Address Resolution Protocol (RARP), the Internet Control Message Protocol (ICMP), and the Internet Group Message Protocol (IGMP). We will discuss these protocols separately in the next videos. Link layer: the link layer is the lowest layer in the TCP/IP model, and it describes what the links must do to meet the needs of the connectionless Internet layer. It is not really a layer, but rather an interface between hosts and transmission links. The link layer converts packets into frames and transmits them. By the time a frame reaches the destination, that frame contains a physical address, which is used to identify the destination. The Ethernet or physical address is 48 bits, and it is built into the network interface card of the network devices. 10. Addressing in computer networks: Addressing: to deliver anything from a source to a destination, we need different addresses at different stages of the transit. Suppose, to deliver a birthday present to your friend in New York City, we need to provide the country name, state, city and your friend's local address. Without all of them, your gift can't reach your friend.
The same goes in computer networks: in order to deliver some data to a destination, we need a unique address at each stage of the network. Three levels of addresses are used in the Internet employing the TCP/IP protocols, and they are the physical address, the logical address and the port address. As we can see here, each address is related to a specific layer in the TCP/IP architecture. Physical address: the physical address is also called the link address or MAC address, and it is the address of a node or a host in the network. Every networking device in the world comes with a uniquely burned-in MAC address, which allows the device to connect to the network. The physical or MAC address is the lowest-level address, and this address is used in frames of the link layer to find the physical address of the destination device. The size and format of this physical address may vary depending on the type of the network. For example, Ethernet uses a six-byte physical address; LocalTalk, however, has a one-byte dynamic address. The physical address cannot be changed, and it is a hexadecimal address that looks like this. Logical or IP address: physical addresses are not sufficient in an internetwork, because different networks can have different address formats, so to identify each host uniquely, a universal addressing system was needed, irrespective of the underlying physical network. A logical address is used to identify a public network or a public IP on the Internet uniquely. Currently it is a 32-bit address. It is so unique that no two publicly addressed hosts on the Internet can have the same IP address, and it looks like this. Port address: arrival at the destination host is not the final task. These days, computers can run multiple applications at the same time; the final objective of network communication is to transmit a message from an application running on a sender machine to an application running on a receiver machine on the network.
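That application-to-application delivery is what port numbers provide, as the next part of this lesson explains. A minimal Python sketch using the standard `socket` module: binding to port 0 asks the operating system to hand the process a free port of its own, so each application gets a distinct 16-bit address on the machine (the loopback address is used only for illustration):

```python
import socket

# A port number is 16 bits wide, so there are 2**16 possible values
# (0 to 65535); ports 0-1023 are reserved for well-known services
# such as HTTP on port 80.
assert 2 ** 16 - 1 == 65535

# Binding to port 0 asks the OS to assign this process a free port,
# giving the application its own unique address on the machine.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
host, port = sock.getsockname()
assert host == "127.0.0.1" and 0 < port <= 65535
sock.close()
```

Two applications on the same machine can never hold the same port at the same time, which is exactly what makes the port a unique per-process address.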
So we need a separate addressing method to uniquely identify the applications running on a computer, and that unique address assigned to each process or application is called the port address. A port address is 16 bits in length. Generally, port addresses of normal applications can be in the range 1024 to 65535, and the port numbers 0 to 1023 are reserved for common and well-known protocols and services like HTTP, FTP, DNS, etc. For example, the port address of the HTTP process running on any host is 80, and that cannot be used by any other application or process. 11. Introduction to Physical Layer: the physical layer is the lowest layer in the Internet model, and it directly interacts with the transmission media. This layer physically carries information from one node to the next node in the network. The physical layer performs many complex and important tasks to provide services to its upper layer, that is, the data link layer. The physical layer creates a signal from the stream of bits coming from the data link layer and sends that signal through the transmission medium. Now let's look into the physical layer concepts: multiplexing, modulation and transmission media. Multiplexing: we need a medium, wired or wireless, to send or receive signals between two nodes in the network. Generally, we transfer several signals through a single wire: it is much more convenient to use a single wire to carry several signals than to use a separate wire for every signal. For example, we receive hundreds of TV channels on just a single wire; we can't imagine hundreds of wires connected to our television. So transmission of several signals over a single transmission medium is called multiplexing. Multiplexing was first used in telegraphy in the 1870s. Multiplexing divides the capacity of the medium into several logical channels.
Each logical channel can carry a different message signal, and demultiplexing is the process that extracts the original signals at the receiver. There are three primary multiplexing techniques: frequency division multiplexing (FDM), wavelength division multiplexing (WDM), and time division multiplexing (TDM). FDM and WDM are designed for analog signals; TDM is designed for digital signals. Before getting into these multiplexing techniques, we first need to know about bandwidth and modulation. Bandwidth: in order to transmit a multiplexed signal through a wire, the wire must have the capacity to carry that signal. The capacity of a wire or medium to carry information is called bandwidth, and the total capacity may be divided into channels to accommodate each message signal present in the multiplexed signal. The more bandwidth a wire has, the more data it can transmit. Bandwidth is measured in two different ways: in cycles per second, or hertz (Hz), for analog devices, and in bits per second (bps) for digital devices. Modulator and demodulator: the range of audible frequencies for humans is only between 20 Hz and 20 kHz. Low frequency signals cannot travel long distances without a large antenna, which is a big problem in the real world. Also, if we send any two signals with the same frequency, they get mixed up, and it will be impossible to tune to either one of the signals while receiving. So radio stations must broadcast at different frequencies, and each radio station must be given its own frequency band. Modulation can solve both these problems. Adding a carrier, a periodic waveform, to the original signal is called modulation; it makes the signal different from other signals, and it is done by a modulator. The reverse process of extracting the original signal from the modulated signal is called demodulation, and it is done by a demodulator. 12. Multiplexing: Frequency division multiplexing:
FDM is an analog technique that combines analog signals. This technique divides the available bandwidth of the transmission medium into independent frequency channels and shares those channels among the signals to be transmitted through the medium. Each original signal is modulated with a modulator and shifted into a different frequency band, using techniques like amplitude modulation, frequency modulation, phase modulation or single-sideband modulation, and also using different carrier frequencies. These techniques are achieved by altering the amplitude, frequency or phase of the carrier signal, which is called amplitude, frequency or phase modulation respectively. The modulated signals are then combined into a single composite signal, which is passed into a transmission medium to transmit all the signals at a time. For example, a cable with a bandwidth of one megahertz may pass 1000 signals of one kilohertz each; in this case, the cable is divided into 1000 subchannels. All the subchannels in a cable are separated by a guard band that prevents signals from overlapping. The opposite process happens at the receiver side. Broadcasting television signals is one of the most common applications of FDM; FDM is also used in AM and FM radio broadcasting. Wavelength division multiplexing: the basic idea of WDM is to combine and transmit multiple light signals in one single light beam with a multiplexer, and to extract those individual light signals from the combined signal with a demultiplexer. We already know that a prism can easily combine and split light signals based on the angle of incidence and the frequency. Using this prism kind of technique, we can make a multiplexer to combine several light signals into one signal, and a demultiplexer to split a single light signal into multiple signals. WDM is designed to use fiber optic cables, because they have much higher data transmission rates than metallic cables.
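The 1 MHz FDM example above can be checked with simple arithmetic, and the same arithmetic shows the cost of guard bands. The sketch below is a simplification that ignores modulation details, and the 250 Hz guard band is an arbitrary illustrative value:

```python
# How many frequency channels fit in a link? With n channels and a guard
# band between neighbours: n*channel + (n-1)*guard <= total bandwidth,
# so n = floor((total + guard) / (channel + guard)).

def fdm_channels(total_hz: int, channel_hz: int, guard_hz: int = 0) -> int:
    return int((total_hz + guard_hz) // (channel_hz + guard_hz))

# A 1 MHz cable carrying 1 kHz signals with no guard bands: 1000 subchannels.
assert fdm_channels(1_000_000, 1_000) == 1000

# With a 250 Hz guard band separating neighbouring channels, fewer fit.
assert fdm_channels(1_000_000, 1_000, 250) == 800
```

Guard bands buy separation between channels at the price of usable bandwidth, which is why they are kept as narrow as the receivers' filters allow.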
WDM has become very popular due to its bandwidth: the bandwidth of a typical fiber optic cable is around 25,000 gigahertz, so theoretically there is room for 2500 10-Gbps channels. WDM is conceptually very similar to FDM, but it involves light signals instead of electrical signals: here in WDM, we are combining different light signals based on their wavelength, or color, and the frequencies are very high compared to FDM. WDM is also an analog multiplexing technique. One of the common applications of WDM is the synchronous optical network, or SONET. A new method in WDM, called dense WDM, or DWDM, achieves even greater efficiency by spacing channels closer together in the fiber optic cable. Time division multiplexing, or TDM: TDM is a digital technique in which time is shared between the signals. Instead of sharing a portion of the bandwidth of the transmission medium, each signal occupies a portion of time on the transmission medium or link. Here in TDM, the signals 1, 2, 3 and 4 go into the link sequentially rather than simultaneously, whereas in FDM and WDM all the signals go through the link simultaneously. TDM is a digital multiplexing technique where digital data from different sources are transmitted through a time-shared link. However, this doesn't mean the source cannot produce analog data: analog data can be sampled and changed to digital data, then multiplexed using TDM. A composite data stream is formed with bits coming from all the sources, which is called a frame. Each frame contains time slots, and each slot corresponds to a particular source, to allow sequential transmission of each individual source's bits and form a frame. Some kind of switch-like mechanism is used in TDM.
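This switch-like, round-robin interleaving of synchronous TDM can be sketched in Python. The source streams and slot contents below are made up; each frame takes exactly one data unit from each source in a fixed order:

```python
# Synchronous TDM: the "switch" visits each source in round-robin order,
# taking one data unit per source to build each frame.

def tdm_multiplex(sources):
    """Interleave the sources into frames; frame i holds unit i of each source."""
    return [list(slots) for slots in zip(*sources)]

def tdm_demultiplex(frames, n_sources):
    """Receiver side: slot i of every frame belongs to source i."""
    return [[frame[i] for frame in frames] for i in range(n_sources)]

a = ["A1", "A2", "A3"]
b = ["B1", "B2", "B3"]
c = ["C1", "C2", "C3"]
frames = tdm_multiplex([a, b, c])
assert frames[0] == ["A1", "B1", "C1"]          # one slot per source per frame
assert tdm_demultiplex(frames, 3) == [a, b, c]  # original streams recovered
```

Because the slot positions are fixed, the receiver needs no addresses to sort the data back out; this is also why empty slots waste bandwidth when a source has nothing to send.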
The switch gets opened in front of a particular connection; then that source gets a chance to place its data onto the link. After taking data from all n sources, that is, after making one frame, the switch again starts from the first source to make another frame. This process continues until all sources stop transmitting. To differentiate each frame, a synchronization bit is added at the beginning of each frame, alternating one and zero; this is called frame synchronization. An individual source's data from all frames passes through its intended channel. Each source can have a different data rate, so to synchronize each source with the other sources, additional data bits are added, called bit padding or pulse stuffing. Here, one character from each source forms a frame, but some time slots have no data; for example, in the 2nd, 3rd and 4th frames we can see empty slots, because all the sources may not send data at the same time and at the same rate. So there is a possibility of wastage of bandwidth in synchronous TDM. To overcome this problem, we use asynchronous, or statistical, TDM, where the time slots are allocated dynamically on demand. The multiplexer checks each input line in round-robin fashion and allocates a slot if the line has data to send; otherwise it skips the line and checks the next line. In statistical TDM, the multiplexer sends slots asynchronously, without fixed assignments. This makes the receiver complex, as it needs to know which source the data arrived from, so we add a source address to each slot. This may increase overhead, but there are some schemes to minimize this overhead, like relative addressing, where the address length is reduced using modulo arithmetic. 13. Transmission Media: transmission media are directly controlled by the physical layer and are usually located below the physical layer. Anything that carries information from one place to another can be called a transmission medium.
Usually the transmission medium is a metallic cable, free space, or a fiber optic cable. Over the years, better metallic media have been invented, like twisted pair and coaxial cables; optical fiber technology increased the data transmission rate incredibly, and free space, like air and water, was used more efficiently as a transmission medium with some new technologies. So the transmission media can be classified broadly into two categories: guided and unguided. Guided media include twisted pair cable, coaxial cable and fiber optic cable; unguided media use free space. For guided media, the medium itself is more important in determining the limitations of transmission; for unguided media, the bandwidth of the signal produced by the transmitting antenna is more important. Guided media: here the signals are transmitted through a solid medium such as copper twisted pair cable, coaxial cable or optical fiber. A signal traveling along any of these media is directed by the physical limits of the medium. Twisted pair and coaxial cable use metallic conductors to transport data in the form of electric signals; fiber optic cables transport signals in the form of light. Now we will look into each one of them separately. Twisted pair cable: twisted pair cable is a type of wiring in which two conductors with their own plastic insulation are twisted together to improve electromagnetic compatibility. It was invented by Alexander Graham Bell. Compared to a single conductor or an untwisted balanced pair, a twisted pair reduces electromagnetic radiation, or crosstalk between neighbouring pairs, and improves rejection of external electromagnetic interference. Interference and crosstalk can affect both wires and create unwanted signals. If the two wires are parallel, the distribution of unwanted signals is not the same in both wires, because sometimes one wire is nearer to the noise source and the other is farther. Unequal distribution of noise in the wires affects the original signal.
By twisting the wires, balance is maintained: for one twist, one wire is nearer to the noise and the other is farther; for the next twist, the reverse happens. This twisting ensures that both wires are equally affected by noise or crosstalk, so the receiver, which calculates the difference between the two, receives no unwanted signals; the unwanted signals are mostly cancelled. So we can say the performance of transmission increases as the number of twists per unit of length increases. Coaxial cable: unlike twisted pair cable, coaxial cable has an inner conductor, which is held in place by either regularly spaced insulating rings or a solid dielectric material, and which is in turn enclosed in an outer conductor. The outer metallic wrapping serves both as a shield against noise and as the second conductor. This outer conductor is also enclosed in an insulating sheath, and the whole cable is protected by a plastic cover. Coaxial cable can carry signals of higher frequency and has higher bandwidth than twisted pair cable. Due to its shielding, coaxial cables are much less susceptible to interference or crosstalk than twisted pair cables: the outer conductor is grounded and the inner conductor is shielded from interference, which reduces the noise. Coaxial cable connectors: to connect coaxial cable to devices, we need coaxial connectors. The most common type of connector used today is the BNC connector. Three popular types of these connectors are the BNC connector, the BNC T connector, and the BNC terminator. The BNC connector is used to connect the end of the cable to a device such as a TV set. The BNC T connector is used to branch out a connection to a computer or other device. The BNC terminator is used at the end of the cable to prevent the reflection of the signal. Performance: although coaxial cable has a much higher bandwidth, the signal weakens rapidly and requires the frequent use of repeaters.
The bigger the diameter of the inner and outer conductors, the better the performance of the cable. Coaxial cables are generally used in cable TV networks, long-distance telephone transmission, and local area networks. 14. Fiber optic cables: Fiber optic cable: the fiber optic cable is a thin, flexible transmission medium made of glass (ultra-pure fused silica) or plastic, used to transmit signals in the form of light. It has a cylindrical shape and consists of three concentric sections: the core, the cladding and the jacket. The core consists of very thin strands of fiber made of glass or plastic. The cladding is a glass or plastic coating that has optical properties different from those of the core. The jacket encircles a bundle of cladded fibers. To understand fiber optic cables, we first need to explore the nature of light. Let's take a denser medium and a less dense medium and introduce a light ray into the denser medium. It travels in a straight line as long as it is moving through a uniform substance; when the ray passes into the less dense medium, the light ray changes its direction, or simply, it bends. The angle of incidence at which the ray travels along the interface of the two media is called the critical angle. If the angle of incidence is less than the critical angle, the ray refracts and moves closer to the surface; when the angle of incidence is equal to the critical angle, the light travels along the interface; if it is greater than the critical angle, the ray reflects and travels again in the denser substance, which is called total internal reflection. The critical angle changes from substance to substance. Optical fiber uses reflection to guide light through a channel: a glass or plastic (silica) core is surrounded by a cladding of less dense glass or plastic. The difference in density of the two materials must be such that a beam of light moving through the core is reflected off the cladding instead of being refracted into it.
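The critical angle just mentioned follows from Snell's law: sin θc = n2 / n1, where n1 is the refractive index of the denser core and n2 that of the less dense cladding. The index values below are illustrative, not taken from any particular fiber:

```python
import math

def critical_angle_deg(n_core: float, n_cladding: float) -> float:
    """Critical angle from Snell's law: sin(theta_c) = n_cladding / n_core."""
    if n_cladding >= n_core:
        raise ValueError("total internal reflection needs a denser core")
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative indices: a core of n = 1.48 with a cladding of n = 1.46.
theta_c = critical_angle_deg(1.48, 1.46)
assert 80.0 < theta_c < 81.0   # a shallow angle, so rays hug the fiber axis
```

Because the core and cladding densities are deliberately kept close, the critical angle is large and only rays traveling nearly parallel to the axis are trapped by total internal reflection.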
Propagation modes: currently, there are two modes for propagating light along optical channels: multimode and single mode. Multimode propagation can be implemented in two ways: step-index and graded-index. Multimode fiber means multiple beams from a light source pass through the core along different paths; how these rays move within the cable depends on the structure of the core. In step-index fiber, the density of the core is constant from the center to the edges. A ray moves through this constant density in a straight line until it reaches the interface, where it suddenly changes its direction due to the lower density; the term step index refers to the suddenness of this change. Graded-index fiber is one with varying densities in the core: the core is built in such a way that it has different densities at different points, with the density highest at the center of the core, decreasing gradually towards the edges. Here we can see the impact of this variable density on the propagation of light beams. Single-mode propagation: it uses step-index technology and a highly focused light source that limits rays to a small range of angles, all close to the horizontal. A single-mode fiber itself is manufactured with a much smaller diameter and with substantially lower density than multimode fiber. The density is decreased so that the critical angle is almost tangent to the core, making the propagation of the beams nearly horizontal. In this case, the angles of incidence of the different beams are almost identical and the delays are negligible, so all the beams arrive at the destination together. Fiber sizes: optical fibers are defined by the ratio of the diameter of the core to the diameter of their cladding, both expressed in micrometers. Fiber optic cable connectors: there are three important types of connectors for fiber optic cables: the SC connector, the ST connector, and the MT-RJ connector. The SC connector is used for cable TV; it uses a push/pull locking system.
The straight-tip (ST) connector is used for connecting cable to networking devices; it uses a bayonet locking system and is more reliable than the SC connector. The MT-RJ is a connector that is the same size as the RJ45 connector, and it is also used for connecting cables to networking devices. Applications: fiber optic cable is often found in backbone networks because its wide bandwidth is cost-effective; today, with wavelength division multiplexing, we can transfer data at a rate of 1600 Gbps. Advantages: some advantages of optical fiber are higher bandwidth, light weight, less signal attenuation, immunity to electromagnetic interference, resistance to corrosive materials, greater immunity to tapping, and the need for fewer repeaters compared to the other transmission media. Disadvantages of optical fiber: installation and maintenance require expertise, light propagation is unidirectional, and the cable and the interfaces are relatively more expensive than those of other guided media. 15. Circuit Switching: Switching: let us first understand why we need switching in a computer network. There may be very many devices present in a network; how do we make just two devices communicate when there are so many? Well, we can connect all the devices either in a mesh topology or a star topology and make them communicate. That works if we have a small network with a limited number of devices, but these techniques are impractical and wasteful when applied to huge networks: the number of links and their length require too much infrastructure, and the majority of those links would be idle most of the time. A better solution is switching, where we use several switches to route the data in a network with very little infrastructure. A network with switches is called a switched network; we can see a simple switched network here. Switches are devices that are capable of creating temporary connections between two or more devices linked to the switch. We can use a switch in two ways.
In a switched network, some of the switches are used to connect to the end systems (computers or telephones), and some switches are used to connect to other switches for routing. Here, the end systems are labeled A, B, C, D, and so on, and the switches are labeled 1, 2, 3, 4, and so on. The network uses FDM or TDM for communication between switches. Each switch is connected to multiple links, so the network has multiple paths between a source and destination pair, and that gives reliability to the network. Now we can define what switching is: directing, or switching, a signal towards a particular network or device is called switching. Till now, we have seen what switching is and why we need it; now let us understand how switching is implemented. Three types of switching networks are available: circuit-switched networks, packet-switched networks, and message-switched networks. Packet-switched networks can be further divided into two subcategories: virtual circuit networks and datagram networks. Let's see each of these switching networks in detail. Circuit-switched networks: circuit switching was originally developed for voice communication in traditional telephone networks, but it is also being used for data transfer nowadays. The circuit-switched network consists of a set of switches connected by physical links. Each link is normally divided into channels using FDM or TDM; however, each connection uses only one dedicated channel on each link, and all the channels from sender to receiver form a dedicated path. Circuit switching takes place at the physical layer, and it is the oldest form of switching. The data transfer in circuit switching is a continuous flow from sender to receiver. No addressing is involved in the data transfer, as the data is not packetized; the switches route the data based on FDM or TDM.
If a switch can handle three connections at a time and a fourth call is made to the switch, the call cannot be connected, as all the resources are reserved for the entire duration of a call. In circuit switching, there are three steps involved in communication between sender and receiver: setup, data transfer, and disconnection. In the setup phase, to connect any two end systems, a call request signal is sent from sender to receiver; it must be accepted by all the resources along the path, including sender and receiver, and an acknowledgement signal, or call accept signal, comes back through the established path. We have connected the two end systems; now we have to transfer the data. Data transfer: after the connection is established, the data transfer happens from source to destination through the established path. We are done with the setup and data transfer; now we have to disconnect the connection to release all the resources. Circuit disconnection: after the data transfer, another acknowledgement signal is sent from the receiver to the sender, and the circuit is disconnected. We have discussed how communication happens in a circuit-switched network; now let's discuss efficiency and delay. As all the resources are reserved for the entire duration of the call, these resources are not available for other connections, so circuit switching is not as efficient as the other two switching techniques. The delay in this network is minimal, because there is no waiting time once the connection is established. In the early days of the telephone, circuit switching happened manually; automatic circuit switching was invented by a nineteenth-century Missouri funeral director named Almon Brown Strowger. A Public Switched Telephone Network (PSTN) is the best example of a circuit-switched network. 16. Packet Switching: In our previous video, we discussed circuit-switched networks. We know that they have some disadvantages; for example, if all the links are busy, we cannot make a call.
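The circuit-switching timeline just recapped (setup, data transfer, disconnection) can be put into rough numbers: a one-time setup and teardown cost is paid, and then the data flows with only transmission and propagation delay. All the values below are invented for illustration:

```python
# Total circuit-switching time = setup + transmission + propagation + teardown.
# Once the circuit is set up, data flows with no per-switch waiting time.

def circuit_switch_time(msg_bits, rate_bps, prop_s, setup_s, teardown_s):
    transmission_s = msg_bits / rate_bps
    return setup_s + transmission_s + prop_s + teardown_s

# A 1 Mb message over a 1 Mbps dedicated channel with 10 ms end-to-end
# propagation delay, 200 ms setup and 100 ms teardown (invented values).
total = circuit_switch_time(1_000_000, 1_000_000, 0.010, 0.200, 0.100)
assert abs(total - 1.310) < 1e-6
```

For long transfers, the fixed setup and teardown costs become negligible, which is one reason circuit switching suited long telephone calls.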
To overcome those disadvantages, a new switching technique was introduced, called message switching. In message switching, all the switching stations are given sufficient storage; we can call them buffers. During the transmission, every switch between the source and destination stores the entire message and forwards it from node to node. This phenomenon is called store and forward. If the message is very big, it takes a long time to transmit, and it holds all the resources for a long time, which causes the next message to wait for a very long time. So message switching also has disadvantages: the nodes need sufficient storage, more processing time is needed for the entire message, and we cannot use this technique for real-time transmission like live streaming or multimedia games because of long delays. To overcome those disadvantages, the packet switching technique was introduced. Packet-switched networks. In packet switching, the data is packetized; that is, a large message is divided into small units for better results, whereas in circuit switching, data is a continuous flow. Packet-switched networks can be classified into two types, datagram networks and virtual circuit networks. Let us look into each of them in detail. Datagram networks. Networks that provide connectionless service at the network level are called datagram networks. In other words, no connection setup is required before data transfer, and data packets can take different routes. In datagram networks, there is no need for setup or disconnection phases because each packet is treated independently, and packets need not travel along the same path. Hence, these packets may arrive at the destination out of order. In datagram networks, resource allocation happens only when there are packets to be transmitted, whereas in circuit-switched networks, resources are allocated for the entire duration. Here the packets are called datagrams, and a switch in a datagram network can be called a router.
Every packet in a datagram network carries a header that contains the destination address of the packet. The router uses the destination address to transmit packets from source to destination. The destination address of the packet remains the same for the entire duration. We have discussed that datagram networks are connectionless networks and transmit packets individually. Now let us discuss efficiency and delay. The efficiency of a datagram network is better than that of a circuit-switched network, because here no resource allocation is required for the entire duration and there is no restriction on the number of participants. But there is a chance of delay, because the datagrams may experience a wait time at the router when there are many datagrams in the network. The Internet we use today is an example of a datagram network. We have seen the data transmission in datagram networks. Now let us see how data transfer happens in virtual circuit networks. Virtual circuit networks. Virtual circuit networks use connection-oriented switching, like circuit-switched networks; that is, resources must be allocated before data transfer. We already know circuit switching happens at the physical layer and switching in datagram networks happens at the network layer, but in virtual circuit networks, switching happens at the data link layer. Transporting data over a connection-oriented packet-switched network, that is, where data transported from a particular source to a destination follows the same path through the same nodes, is called virtual circuit switching, and the path, that is, the series of links and routers, is called a virtual circuit. Just like circuit-switched networks, virtual circuit networks also have setup, data transfer, and teardown phases. At the beginning of a transmission, a setup packet goes from source to destination, establishing a path and reserving resources. This is called the setup phase.
In the data transfer phase, the data packets follow the path that was established in the setup phase. The resources are allocated on demand and on a first-come, first-served basis. For example, Alice and Bob are communicating via a switched network. Suppose four packets are transmitted from Alice to Bob, and after that they pause for a while, meaning no packets are being transmitted. Now, if some others communicate in the network, the same path can be used to transmit their packets, even though the connection between Alice and Bob has not terminated. In the case of circuit-switched networks, it is not possible to use the same path unless the connection is terminated. As all the packets belonging to a particular source travel along the same path, they arrive at the destination in order, but they may experience some delay because of congestion. Just like in a datagram network, data is packetized and each packet carries an address in the header. In virtual circuits, two types of addressing are required: global addressing and local addressing. A source or a destination must have a global address, unique in the scope of the network, or internationally unique if the network is part of an international network. As the first packet decides the path, it contains the global address in the header, but the subsequent packets do not. All the other packets have the local address in the header. The local address is also called the virtual circuit identifier, or VCI. A local address has local jurisdiction: it defines what the next router should be and the channel on which the packet is being carried. A VCI, unlike a global address, is a small number that has only switch scope; that means each link on the network carries different VCIs. It is used by a frame between two switches. When a frame arrives at a switch, it has one VCI; when it leaves, it has a different VCI. So, in short, a virtual circuit consists of a path between the source and destination hosts and contains VC numbers, one number for each link along the path.
Each router along the path contains entries in its forwarding table, and a packet belonging to a particular virtual circuit carries a VC number in its header. We know that each link has a different VC number, so every router along the path must replace the old VC number with a new one. The new VC number is obtained from the forwarding table. We will see an example to better understand the concept. Let us take two end systems, A and B, and four routers, R1, R2, R3, and R4. The end system A is connected to R1 and the other end system B is connected to R4. Let us assume A wants to communicate with B. So A sends a request message to B and establishes a virtual circuit through the routers R1, R3, and R4. So the path is A, R1, R3, R4, B. Assign 16, 22, 30, and 36 as the VC numbers of the four links in the virtual circuit. In this case, when a frame leaves host A, the value in the VC number field of the frame header is 16. When it leaves R1, it is 22; when it leaves R3, the value is 30; and finally, when it leaves R4, the value is 36. The forwarding table in each router contains the next link's VC number along the path. Virtual circuit networks are more efficient than datagram networks, as a path is reserved for the entire duration. The delay is also minimal compared to datagram networks. There is no need to provide complete addressing information in the header of each packet, since the packets are not routed individually; only a small VCI number is required in each packet. That means each packet carries less overhead compared to datagram networks. Let's recap what we have learned in this video. In packet switching, data is packetized for flexibility and efficiency. Datagram networks are connectionless, and virtual circuit networks are connection-oriented. 17. Introduction to Data Link layer: In this video, we will explain the data link layer briefly.
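The VCI swap at each router can be sketched as a simple table lookup. This is a minimal illustration, not a real router implementation: the table contents and the `forward` helper are invented here to match the A, R1, R3, R4, B example, and real forwarding tables also record incoming and outgoing interfaces, which this sketch omits.

```python
# Sketch of VCI swapping along the virtual circuit A -> R1 -> R3 -> R4 -> B,
# using the link VC numbers 16, 22, 30, 36 from the example.
# Each router's forwarding table maps an incoming VCI to an outgoing VCI.
forwarding = {
    "R1": {16: 22},
    "R3": {22: 30},
    "R4": {30: 36},
}

def forward(frame_vci, path):
    """Return the VC number carried on each link as the frame crosses `path`."""
    vcis = [frame_vci]
    for router in path:
        frame_vci = forwarding[router][frame_vci]  # swap old VCI for new
        vcis.append(frame_vci)
    return vcis
```

Running `forward(16, ["R1", "R3", "R4"])` reproduces the sequence of VC numbers from the example: 16 on the first link, then 22, 30, and 36.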
We know that the data link layer is the second layer in the OSI model, sitting between the physical layer and the network layer. The data link layer transfers data from the network layer of one machine to the network layer of another machine. It converts packets into frames at the sender's side and frames into packets at the destination side. Some other data link layer responsibilities are addressing, flow control, error control, and media access control. This layer appends a header to the frame to define the addresses of the sender and the receiver. If the sender's data rate is higher than the receiver's, this layer imposes a flow control mechanism to avoid packet drops at the receiver. The data link layer also adds reliability to the physical layer by adding mechanisms to detect and correct errors. Network cards like Ethernet cards, PCMCIA (personal computer memory card) cards, 802.11 cards, bridges, and layer 2 switches are examples of data link layer devices. The data link layer has two sublayers: the Logical Link Control, or LLC, sublayer and the Medium Access Control, or MAC, sublayer. In broadcast networks, only one station should transmit at a time on the shared communication channel. Medium Access Control, or MAC, protocols determine the transmission on a broadcast channel, that is, which station should transmit and who must wait. Examples are TDMA, FDMA, CDMA, and token-based protocols. The MAC sublayer resides near the physical layer. Functions like physical addressing, logical topologies, and transmission methods take place in the MAC sublayer. The LLC sublayer functions are error control, flow control, and transmission synchronization. 18. Error Correction Methods: Error detection and correction. In this video, we will see some error detection and correction methods. Two types of errors are possible in data transmission: 1. single-bit error, where only one bit in the data unit changes; 2. burst error, where two or more bits in the data unit change at a time.
There is no guarantee that every bit that has been transmitted from sender to receiver arrives accurately. Some bits may get lost in transit; some of them may become corrupted in passage. Many factors can affect one or more bits of a message, so we need a mechanism for detecting and correcting errors. The main concept in detecting or correcting errors is redundancy. We need to send some extra bits, that is, redundant bits, along with the original message in order to get errors detected or corrected at the receiver. Usually, errors can be corrected in two main ways: 1. forward error correction; 2. retransmission. In forward error correction (FEC), the receiver guesses the error by using redundant bits and corrects it immediately. It is possible only when the number of errors in the message is small; we will see why. Here, the packets are sent in groups, with each group containing four packets. For every four data packets that are sent, a fifth parity packet can be constructed and sent. For example, L, M, N, O are data packets, and P is the parity packet, which contains redundant bits that are the parity, or exclusive-OR, sums of the bits in each of the four data packets. If all of the packets arrive safely, the parity packet is simply discarded at the receiver. Or if only the parity packet is lost, the receiver knows that the last one is parity and the original bits are safe, so everything will be okay. If any one packet is lost during the transmission, the bits in the missing data packet can be reconstructed using the parity bits and the other three packets. If two packets of a group of five are lost, there is nothing we can do to recover the missing data. That's why it is only possible when the number of errors is small. FEC techniques are valuable because they can decrease the number of sender retransmissions required and, more importantly, they allow for immediate correction of errors at the receiver.
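The parity-packet idea above can be sketched in a few lines. This is a hypothetical illustration with packets modeled as small integers whose bits are XORed together; the names `parity_packet` and `recover_missing` are invented here, not part of any real FEC library.

```python
def parity_packet(packets):
    """Build the parity packet: the bitwise XOR of all data packets."""
    p = 0
    for pkt in packets:
        p ^= pkt
    return p

def recover_missing(received, parity):
    """Rebuild one lost data packet from the survivors and the parity packet.
    XOR is its own inverse, so XORing everything that arrived leaves
    exactly the bits of the missing packet."""
    missing = parity
    for pkt in received:
        missing ^= pkt
    return missing
```

If two packets of the group are lost, this XOR trick no longer has enough information, which is exactly the limitation stated above.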
The ability of the receiver to detect and correct errors is known as forward error correction, FEC. Since it corrects errors, it is called an error-correcting code. In retransmission, the receiver detects the error and requests the sender to retransmit until the receiver believes the message is error-free. The retransmission method is used in channels where the possibility of errors is small, because if we keep retransmitting defective packets, there is a greater chance of congestion. The number of errors is small in fiber-optic cable transmission, so there it is cheaper to use an error-detecting code and just retransmit defective blocks. FEC is used on noisy channels where the number of errors is high, because we cannot retransmit every defective packet, and errors may occur in the retransmission as well. Error-correcting codes like FEC are widely used in wireless communications, since wireless channels are much noisier and more error-prone than optical fibers. 19. Error detecting code - Checksum: Error detecting codes. Quite a few error-correcting and error-detecting codes are available, but for now we will look into one of the important error-detecting codes, called the checksum. Checksum. The checksum is based on the concept of redundancy, that is, adding extra bits to the original data. These redundant bits are a checksum of the original bits; that is, the checksum is a simple sum formed using the original bits. Usually, the checksum is placed at the end of the message as the complement of the sum. Errors can be detected by adding up the entire received codeword, both data bits and checksum. If the result comes out to be 0, no error has been detected. Let us jump into an example to see how it does error detection. Let us assume our data is a list of four 4-bit numbers, 2, 5, 13, 9, that we want to send to the destination. In addition to this original data, we send the negative of the sum of all numbers, called the checksum. In this case, we send 2, 5, 13, 9, and minus 29.
The receiver adds all the numbers that were received, including the checksum. If the result comes out to be 0, it assumes no error; otherwise, it knows there is an error. In the above example, all the items are 4-bit words except the checksum, minus 29. To make the checksum a 4-bit word, we use one's complement arithmetic. In one's complement, 29 can be written as 11101; to make it a 4-bit word, take the extra leftmost bit and add it to the rightmost bits, that is, 1101 plus 1 equals 1110, that is, 14. In one's complement, a negative number is represented by inverting all the bits: the complement of 1110 is 0001, that is, 1. The sender now sends five 4-bit data items to the receiver, including the checksum of 1; that is, it sends 2, 5, 13, 9, 1. The receiver adds all the data items, including the checksum. The result is 30, that is, 11110. The sum is wrapped by adding the leftmost bit to the sum, that is, 1110 plus 1, and becomes 15, that is, 1111. The wrapped sum 1111 is complemented, meaning every 0 is replaced with 1 and every 1 with 0, and it becomes 0000, that is, 0. If the sum comes out to be 0, the data is not corrupted. The receiver discards the checksum and keeps the other data items as they are. If the checksum comes out non-zero, the entire packet is dropped. One example of a checksum is the 16-bit Internet checksum used on all Internet packets as part of the IP protocol. The Internet checksum is efficient and simple but provides weak protection in some cases, because it is just a simple sum. It does not detect the deletion or addition of 0 data, nor the swapping of parts of the message, and some other errors. These errors may not occur by random processes, but they are just the sort of errors that can occur with buggy hardware. A better choice is Fletcher's checksum, which computes two checksums for a data word; at the end of the data word, the modulus operator is applied and the two values are combined to form the Fletcher checksum value.
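The worked example above (data 2, 5, 13, 9, checksum 1) can be reproduced with a short sketch of one's complement addition. The function names are our own, and the word size is a parameter so the same few lines model the 4-bit example from the text.

```python
def ones_complement_sum(words, bits=4):
    """Add words, wrapping any carry out of the top back into the sum."""
    mask = (1 << bits) - 1
    s = 0
    for w in words:
        s += w
        s = (s & mask) + (s >> bits)  # wrap-around carry
    return s

def checksum(words, bits=4):
    """Complement of the wrapped sum; this is what is sent with the data."""
    return ones_complement_sum(words, bits) ^ ((1 << bits) - 1)
```

At the receiver, summing all five items including the checksum and complementing the result gives 0, which signals "no error detected", exactly as in the walkthrough.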
So, for each block of the data word, the block's value is added to the first sum, and the new value of the first sum is then added to the second sum. Both sums start with the value 0. In Fletcher's algorithm, the whole data is divided into equal blocks B1, B2, and so on, up to Bn. Assume initial checksums C1 = 0 and C2 = 0. For each block Bi, add Bi to C1, that is, C1 = C1 + Bi; then add the new value of C1 to C2, that is, C2 = C2 + C1. Then C1 and C2 are reduced modulo 255 if required. For example, take a byte stream of hexadecimal 02 and hexadecimal 03 as the blocks B1 and B2. The checksums are C1 and C2, with initial value 0. C1 = C1 + B1, that is, C1 = 0 + hexadecimal 02 = hexadecimal 02. C2 = C2 + C1, that is, C2 = 0 + hexadecimal 02 = hexadecimal 02. Then C1 = C1 + B2, that is, C1 = hexadecimal 02 + hexadecimal 03 = hexadecimal 05, and C2 = C2 + C1, that is, C2 = hexadecimal 02 + hexadecimal 05 = hexadecimal 07. Finally the checksum becomes hexadecimal 0705 and is placed at the end of the data word. Fletcher's checksum includes a positional component, since each block feeds into the running second sum once for every remaining block; this provides stronger detection of changes in the position of data. However, the Fletcher checksum cannot distinguish between blocks of all-0 bits (hexadecimal 00) and blocks of all-1 bits (hexadecimal FF): all zeros and all ones produce the same result in Fletcher's algorithm. It also cannot identify the precise location of an error in the codeword. So a method called cyclic redundancy check, CRC, is used; it is also called a polynomial code. 20. Framing: Framing. The physical layer transmits a bitstream to the destination over a channel, but during transmission the signal may be corrupted by errors. To reduce this, the physical layer also adds some redundant bits to the signal.
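The Fletcher computation above maps directly to code. This sketch follows the modulo-255, two-sum variant described in the text; the function name `fletcher16` is ours.

```python
def fletcher16(data):
    """Two running sums over the byte blocks, each reduced modulo 255.
    The final checksum packs C2 in the high byte and C1 in the low byte."""
    c1 = c2 = 0
    for block in data:
        c1 = (c1 + block) % 255   # first sum: running total of blocks
        c2 = (c2 + c1) % 255      # second sum: running total of first sums
    return (c2 << 8) | c1
```

For the byte stream 0x02, 0x03 this yields 0x0705, matching the worked example. Note also that a 0x00 block and a 0xFF block both leave the sums unchanged modulo 255, which demonstrates the all-zeros versus all-ones weakness mentioned above.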
However, the bitstream received by the data link layer is not guaranteed to be error-free. It is up to the data link layer to detect and, if necessary, correct errors. The usual approach for the data link layer is to break the bitstream up into discrete frames, compute a checksum for each frame, and include the checksum in the frame when it is transmitted. At the receiver side, it recalculates the checksum; if it matches the checksum in the frame, there is no error, otherwise there are errors. To break the bitstream into frames, quite a few methods are available. The first framing method uses a field in the header to specify the total number of bytes in the frame. When the data link layer at the receiver side sees the byte count in the header, it knows how many bytes are present in the frame and hence where the end of the frame is. Let us take an example: consider three small frames of sizes 4, 5, and 7 bytes respectively. The first number is the byte count of the frame, so we can say that the first frame contains four bytes, the second five bytes, and the third frame seven bytes. It looks perfect, right? But there is still a problem with this algorithm: the byte count in the header may change due to transmission errors. Suppose the byte count 5 in the second frame becomes 7 due to a single bit flip; then the receiver gets out of synchronization and is unable to locate the correct start of the next frame. Here comes the second framing method. It eliminates the problem of resynchronization of frames at the receiving node by adding special bytes at the start and end of each frame. That special byte is often called a flag byte. To separate one frame from the next, an 8-bit, or one-byte, flag is added at the beginning and the end of a frame. This method was appropriate when only text data was exchanged by the data link layers; the flag could be any character that is not used for text communication.
Now, however, we also send other types of data, such as graphics, audio, and video. Now the problem is that any pattern or character used for the flag could also be part of the data. If this happens, the receiver, when it comes across this pattern anywhere in the data, assumes it is the end of the frame. To solve this problem, a byte stuffing strategy was introduced. In byte stuffing, a special escape byte (ESC) is added to the frame when there is a character in the data part with the same pattern as the flag. Whenever the receiver comes across the escape character, it removes the ESC from the data section and treats the character following the ESC as data, not as a delimiting flag marking the end of the frame. Byte stuffing with the ESC allows the presence of the flag in the data section of the frame, but it also creates another problem: what happens if the original data contains one or more escape characters? The receiver would remove the escape character, thinking that it is not part of the original data. To solve this, the escape characters that are part of the data must themselves be marked by another escape character. This is definitely a headache: byte stuffing and de-stuffing. Well, the third method of framing gets around the drawback of byte stuffing, which stuffs a whole 8-bit byte before flag-like data. Why stuff eight bits before the flag-like data when we can do the job by adding just a single bit to the flag-like pattern in the data? So framing can also be done at the bit level. In this method, each frame begins and ends with a special bit pattern, 01111110. This pattern is the flag byte. Whenever the sender's data link layer encounters five consecutive 1s in the data stream, it automatically inserts a 0 bit into the outgoing bit stream. This is called bit stuffing. When the receiver sees five consecutive incoming 1 bits followed by a 0 bit in the data, it automatically deletes the 0 bit.
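The byte stuffing and de-stuffing procedure described above can be sketched as follows. The flag value 0x7E and escape value 0x7D are assumptions borrowed from HDLC-style framing, chosen only for illustration; the function names are ours.

```python
FLAG = 0x7E  # assumed frame delimiter byte
ESC = 0x7D   # assumed escape byte

def byte_stuff(payload):
    """Wrap payload in flags, escaping any flag or escape byte inside it."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)  # mark the next byte as data, not a delimiter
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame):
    """Strip the flags and remove the escape bytes added by the sender."""
    body = frame[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:
            i += 1           # skip the ESC; the next byte is literal data
        out.append(body[i])
        i += 1
    return bytes(out)
```

A payload containing both the flag and the escape byte survives a round trip unchanged, which is the whole point of the scheme.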
For example, if the data section contains the flag pattern 01111110, this flag is transmitted as 011111010, but stored in the receiver's memory as 01111110. 21. Flow and Error control: Flow and error control. Well, framing is not the end of all problems; framing alone cannot guarantee precision. We've solved the problem of marking the start and end of each frame for now, but how do we know that all frames are actually delivered to the network layer at the destination correctly and in the proper order? The usual approach to ensure reliable delivery is to provide the sender with some feedback about what is happening at the other end. Generally, the protocol tells the receiver to send back special control frames bearing positive or negative acknowledgements about the received frames. If the sender machine receives a positive acknowledgement, the frame has arrived safely; if it is a negative acknowledgement, it knows that something went wrong and the frame must be retransmitted until the acknowledgement is positive. There is also a possibility that hardware trouble causes a frame to vanish entirely, or that the acknowledgement frame itself is lost; in either case the sender will not know how to proceed and would wait forever for the acknowledgement. This problem is solved by introducing timers into the data link layer. Whenever the sender transmits a frame, it also starts a timer. The timer expires after an interval that is long enough for the frame to reach the destination, be processed there, and have an acknowledgment transmitted back to the sender. If nothing goes wrong, the timer is cancelled when the acknowledgement reaches the sender. If the frame or the ACK is lost in transit, the timer will expire, alerting the sender to the problem. The usual solution is to transmit the frame again. But then there is a chance that the frame reaches the receiver multiple times after retransmission.
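The bit stuffing rule just described, insert a 0 after five consecutive 1s on the way out and delete it on the way in, can be sketched directly. Bits are modeled as a list of integers, and the function names are invented for this illustration.

```python
def bit_stuff(bits):
    """After five consecutive 1s, insert a 0 so data never mimics the flag."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == 1:
            run += 1
            if run == 5:
                out.append(0)  # stuffed bit
                run = 0
        else:
            run = 0
    return out

def bit_unstuff(bits):
    """Delete the 0 that follows any five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == 1:
            run += 1
            if run == 5:
                i += 1  # skip the stuffed 0
                run = 0
        else:
            run = 0
        i += 1
    return out
```

Feeding in the flag pattern 01111110 produces 011111010 on the wire, matching the example above, and de-stuffing restores the original bits.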
The data link layer would pass these duplicate frames to the network layer. That's why sequence numbers are introduced for the outgoing frames, so that the receiver is able to distinguish retransmitted frames from the originals. The mechanism of retransmitting frames whenever an error has occurred is called automatic repeat request, ARQ. Flow control. Another critical problem that occurs at the data link layer is that sometimes the sender transmits frames faster than the receiver can accept them. This problem occurs when the sender is running on a powerful machine and the receiver on a low-end machine. Somehow, the receiving device must inform the sending device to slow down or stop sending frames temporarily. For this reason, every receiving device must have a block of memory reserved for storing incoming frames until they are processed. This memory block is called a buffer. The receiver tells the sender to halt transmission if the buffer fills up. There are many flow control mechanisms and protocols out there; we will discuss them separately for noiseless (error-free) and noisy (error-prone) channels. Noiseless and noisy channels. The data link layer uses framing, flow control, and error control to transmit the data from one node to another. These mechanisms are provided by the data link layer protocols, so we will see how the protocols are implemented and how they manage these mechanisms. We can classify the protocols into two categories: one is used for noiseless channels where the probability of error is small, and the other is used for noisy channels. For better understanding, we will build these protocols one by one, starting from a basic implementation and moving to the complex protocols, solving the problems step by step in each of the protocols. Noiseless channels. Let us first assume an ideal channel where no frames are lost, corrupted, or duplicated. We discuss two protocols for this type of ideal channel.
The first type of protocol does not use flow control, and the second one does. Neither of them has error control, because we have assumed that the channel is perfectly noiseless. Simplest protocol. As an initial example, we will consider a protocol that is as simple as it can be, because it does not worry about the possibility of anything going wrong. The sending and receiving network layers are always ready, processing time is ignored, infinite buffer space is available, and the communication channel between the two data link layers never damages or loses frames. This is a thoroughly unrealistic protocol, which we call the simplest for lack of any other name; it simply shows the basic structure on which we will build the real protocols. The protocol has two distinct procedures running on the source and destination machines. Both procedures, sender and receiver, are constantly running because they do not know when the corresponding events will occur. The sender is always ready and pumps data out onto the line as fast as it can. The sender stays asleep until there are packets to transmit. If there are packets to send, it takes a packet from the network layer, makes a frame by appending a header and trailer to the data packet, and hands the frame to the physical layer for transmission; the same procedure repeats for all frames. The receiver algorithm has the same format as the sender algorithm, except that the direction of the frames and data is upward: it takes data from the physical layer, extracts the data, and finally delivers it to the network layer. Stop and wait protocol. If data frames arrive at the receiver side faster than they can be accepted, the frames must be stored until they are used. For this reason, the receiver must have sufficient buffering space and processing ability. However, this is a worst-case solution, because it needs dedicated hardware all the time, even if the utilization of the link is mostly low.
An excellent solution to this problem is to have the receiver provide feedback to the sender. The protocol we discuss now is called the stop and wait protocol, because the sender transmits one frame and stops until it gets confirmation from the receiver that it is okay to go ahead; it then sends the next frame. As in the simplest protocol, the sender takes a packet from the network layer, uses it to construct a frame, and sends it on its way. But now, unlike in the simplest protocol, the sender must wait until an acknowledgement frame arrives before looping back and fetching the next packet from the network layer. So the difference between the simplest and the stop and wait protocol at the sender side is that the sender waits for an acknowledgement before sending the next frame; at the receiver side, the difference is that after delivering a packet to the network layer, the receiver sends an acknowledgement frame back to the sender. 22. Stop and Wait: Noisy channels. Even though the stop and wait protocol gives us an idea of how to add flow control, noiseless channels are imaginary and nonexistent. To make communication over a real channel reliable, we need to add error control to our protocols. Now let us look into communication channels that are noisy, that is, that make errors. We discuss three protocols in this section that use error control. Stop and Wait Automatic Repeat Request. Here we add a simple error control mechanism to the stop and wait protocol. In the stop and wait protocol, if the data packet is lost in transit, the receiver waits for the data and the sender waits for the acknowledgment indefinitely; or if the ACK is lost, the sender waits for the ACK for an infinite time. To detect and correct damaged frames, we need to add redundancy bits to our data frames. When the frame arrives at the receiver side, it is checked, and if it is damaged, it is dropped silently. The error detection in this protocol is signalled by the silence of the receiver.
If the received frame is the correct one, the receiver sends an ACK frame to the sender; if it is corrupted or lost, it is discarded and the receiver stays silent, meaning no ACK is sent. If the receiver does not respond to errors, how can the sender know which frame to resend? To overcome this problem, the sender keeps a copy of the sent frame, and at the same time it starts a timer. If the timer expires and there is no acknowledgement for the sent frame, the frame is resent, the copy is kept, and the timer is restarted. This scheme has a fatal flaw in it, though. Remember that the goal of the data link layer is to provide error-free, transparent communication between network layer processes. Take an example to see what might go wrong. The network layer at the sender side gives packet 1 to its data link layer. That packet is successfully received at the receiver side and passed to the network layer, and the receiver sends an ACK frame back to the sender. Suppose the acknowledgement frame gets lost in transit. The data link layer at the sender's side eventually times out; not having received an acknowledgement, it incorrectly assumes that its frame was lost and sends the frame containing packet 1 again. This duplicate frame would be passed to the network layer at the receiver. So, to stop duplicate frames, the receiver must somehow be able to recognize frames after retransmission. The simplest way to tackle this problem is to have the sender put a sequence number on each frame it sends. Then the receiver checks the sequence number of each incoming frame to see if it is a new frame or a duplicate. One important aspect is the range of the sequence numbers. Since we want to minimize the frame size, we look for the smallest range that provides unambiguous communication. Fortunately, a 1-bit sequence number is sufficient: each frame carries either a 0 or a 1.
When the sender sends a frame with sequence number 0, the receiver returns an ACK frame with acknowledgement number 1, that is, the sequence number of the next frame expected. The acknowledgement number is computed by incrementing modulo 2: 0 becomes 1, and 1 becomes 0. Let us see the working of Stop and Wait ARQ for better understanding. We have a sender A and a receiver B. The sender's window contains the sequence numbers and the receiver's contains the ACK numbers. Remember that the ACK number the receiver sends is the expected sequence number of the next frame. A sends a frame with sequence number 0 and the timer is started. The receiver receives frame 0 and returns an ACK to the sender with ACK number 1, that is, the next frame sequence number expected, and the timer stops. Then frame 1 is sent from A and the timer is started. Assume that frame 1 is lost in transit. Then the timer times out and frame 1 is sent again with the timer running. The receiver receives frame 1 and sends an ACK to the sender with ACK number 0, and the timer stops. Again, the sender sends frame 0, and the receiver sends back ACK 1 upon receiving frame 0. If the ACK is lost, the timer times out and the sender again sends frame 0 to B. The receiver receives frame 0 and looks at the sequence number; but the receiver knows that frame 0 has already arrived, so the frame is simply discarded at the receiver. This is how Stop and Wait ARQ works. 23. Sliding window protocols: Sliding window concept. Whether it is networking or any other area, we achieve efficiency only when we do things in parallel. For that, a new task must begin before the previous task ends. This is called pipelining. But there is no pipelining in Stop and Wait ARQ, because we are waiting for a frame to reach the receiver and be acknowledged there before sending the next frame. To improve the efficiency of the channel, multiple frames must be in transmission while waiting for an acknowledgement.
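The receiver's duplicate detection with a 1-bit sequence number can be sketched like this. The closure-based `make_receiver` helper is an invented convenience, and its return value stands in for the ACK number the receiver would send back.

```python
def make_receiver():
    """Stop-and-Wait ARQ receiver side: accept frames in order,
    silently discard duplicates, and ACK with the next expected
    sequence number."""
    expected = 0

    def receive(seq, data, delivered):
        nonlocal expected
        if seq == expected:
            delivered.append(data)  # pass the new frame up to the network layer
            expected ^= 1           # increment modulo 2: 0 -> 1, 1 -> 0
        # a duplicate (seq != expected) is simply dropped
        return expected             # this value goes back as the ACK number

    return receive
```

A retransmitted frame 0 arriving after frame 0 was already accepted is discarded, but the receiver still answers with ACK 1, which is what lets the sender move on, exactly as in the scenario above.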
Simply put, we need to send more than one frame to keep the channel busy while the sender is waiting for acknowledgements. To get this task done, we use the sliding window concept. In sliding window protocols, the sender has a buffer called the sending window, and the receiver's buffer is called the receiving window. The essence of all sliding window protocols is that at any instant of time, the sender maintains a set of sequence numbers corresponding to frames it is permitted to send. These frames are said to fall within the sending window. Similarly, the receiver maintains a receiving window corresponding to the set of frames it is permitted to accept. The size of the window is determined by the number of sequence numbers permitted. In this case, the window size is four because four sequence numbers are present. These sequence numbers repeat for the upcoming frames. In the previous protocols, the sender must wait until the acknowledgement comes from the receiver. But in sliding window protocols, the sender need not wait for the acknowledgement from the receiver. Instead, it sends as many frames as it is allowed before an acknowledgement arrives. Say frame 0 is sent from A to B. By the time its acknowledgement arrives, A has sent another three frames: 1, 2, 3. Now 0, 1, 2, 3 is the window. After receiving the acknowledgement for the first frame 0, it sends another frame, the fourth frame, with sequence number 0. Now 1, 2, 3, 0 is the window. If frames 1, 2, 3 are received and acknowledged, it sends another three frames and the window becomes 0, 1, 2, 3. So the window slides according to the received acknowledgement frames. This process continues until all packets arrive at the receiver. Go-Back-N Automatic Repeat reQuest protocol. In this protocol, we can send several frames before receiving acknowledgements, and we keep a copy of these frames until the acknowledgements arrive.
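The sliding of the window described above (0,1,2,3 → 1,2,3,0 → 0,1,2,3) can be sketched directly; the function name and the modulus constant here are ours, chosen to match the four-sequence-number example.

```python
# Sliding-window bookkeeping with four sequence numbers (0..3) and
# window size four, matching the 0,1,2,3 example above.

SEQ_MODULUS = 4

def window(first, size=4):
    """Sequence numbers currently inside the sending window."""
    return [(first + k) % SEQ_MODULUS for k in range(size)]

w = window(0)                  # frames 0, 1, 2, 3 may be sent
# The ACK for frame 0 arrives: the window slides by one.
w_after_one_ack = window(1)    # now 1, 2, 3, 0
# ACKs for frames 1, 2, 3 arrive: the window slides three more.
w_after_all = window(0)        # back to 0, 1, 2, 3
```

The sequence numbers wrap around modulo four, which is why the same numbers keep reappearing for later frames.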
For example, assume a window size of four at the sender side. So four frames, 1, 2, 3, 4, are sent from sender to receiver. Frames 1 and 2 are received and acknowledged. Now the window slides and the sender sends another two frames, 5 and 6, because the window size is four. But frame 3 is lost in transit, and the timer expires for frame 3. That means frame 3 has not been acknowledged. Now the sender goes back and sends frames 3, 4, 5, 6 again. That's why the protocol is called Go-Back-N ARQ. This approach can waste a lot of bandwidth if the error rate is very high. In this protocol, the size of the sending window, N, must be smaller than the number of sequence numbers, and the size of the receiver window is always one. Selective Repeat Automatic Repeat reQuest. For the sake of one bad frame, the sender needs to retransmit all the frames again in Go-Back-N ARQ, which causes time delays, bandwidth waste, and other problems. The other strategy to handle errors when frames are pipelined is Selective Repeat ARQ. Selective Repeat ARQ uses a receiver window larger than one. This approach requires large amounts of data link layer memory if the window is large; usually the window sizes of sender and receiver are equal. To get good results, Selective Repeat is combined with a receiver that sends a negative acknowledgement (NAK) when it finds an error, for example when it detects a checksum error or a frame out of sequence. NAKs stimulate retransmission before the corresponding timer expires and thus improve performance. For example, take both window sizes as two. The sender sends frames 0 and 1 to the receiver; 0 and 1 are sequence numbers. If the receiver receives both frames correctly, it sends an acknowledgement to the sender with the next sequence number expected, here 2. Now the window slides, and the sender sends the frames starting with sequence number 2, that is, frames 2 and 3, as the window size is two.
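Before finishing the Selective Repeat example, the Go-Back-N behaviour described above, resending every frame from the lost one onward, can be sketched in a simplified model. The function name and the loss bookkeeping are ours; a lost frame here is lost only on its first transmission.

```python
# Go-Back-N sketch: when a frame's timer expires, the sender goes back
# and resends every frame from the lost one onward. Illustrative only.

def go_back_n(frames, lost, window_size=4):
    """Return the order in which frames appear on the wire.
    `lost` holds frames lost on their FIRST transmission only."""
    sent = []
    base = 0                       # oldest unacknowledged frame index
    while base < len(frames):
        # Send everything currently allowed by the window.
        for i in range(base, min(base + window_size, len(frames))):
            sent.append(frames[i])
        # If a frame in the window was lost, its timer expires: go back to it.
        for i in range(base, min(base + window_size, len(frames))):
            if frames[i] in lost:
                lost = lost - {frames[i]}   # the retransmission will succeed
                base = i
                break
        else:
            base = min(base + window_size, len(frames))
    return sent

# Frame 3 is lost, so 3, 4, 5, 6 all go out a second time:
wire = go_back_n([1, 2, 3, 4, 5, 6], lost={3})
```

The wire carries frames 3 and 4 twice even though only frame 3 was bad, which is exactly the waste Selective Repeat avoids.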
If frame 2 is lost or damaged, the receiver sends a negative acknowledgement, a NAK, asking for frame 2 to be retransmitted. Now only frame 2 is retransmitted, not frame 3. 24. Random Access Protocols: Multiple access. We know that the data link layer has two sublayers. The upper sublayer, responsible for flow and error control, is called the logical link control (LLC) sublayer. The lower sublayer, mostly responsible for multiple-access resolution, that is, resolving access to the shared media, is called the media access control (MAC) sublayer. If all the devices in the network have one-to-one dedicated links, we don't need the MAC sublayer because there is no shared medium. When nodes are connected to a common link, multiple stations can access the channel simultaneously, so we need a MAC sublayer to manage access to the link and decrease collisions. For example, in a quiz competition, when the anchor asks a question, if all the contestants answer at once, it is difficult to understand who answered correctly, because a collision happens. So it is the duty of the anchor to manage all the contestants and make them answer one at a time by using buzzers. Two types of networks are possible in data communication: switched communication networks and broadcast networks. In switched communication networks, the users are connected using multiplexers, switching elements, and transmission lines, and use circuit switching or packet switching techniques. We will shortly see SONET; SONET, frame relay, and ATM networks are called switched communication networks. In switched communication networks, the multiplexers and switching elements take care of access and collision issues in a shared medium, like who must wait and who should transmit. But there are some cases where we simply broadcast the information through a shared medium, which does not use multiplexers and switching elements.
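Looking back at the Selective Repeat exchange just described, the contrast with Go-Back-N is that only the NAKed frame reappears on the wire. A minimal sketch, with an invented function name and a simplified one-loss-per-frame channel:

```python
# Selective Repeat sketch: only the frame the receiver NAKs is
# retransmitted; later frames are not resent. Illustrative model only.

def selective_repeat(frames, lost):
    """Return the frames transmitted on the wire. A frame in `lost` is
    lost once, NAKed by the receiver, then retransmitted individually."""
    wire = []
    for f in frames:
        wire.append(f)
        if f in lost:          # receiver sends a NAK for this frame only
            wire.append(f)     # sender retransmits just this one frame
    return wire

# Frame 2 is lost and NAKed; frame 3 is never resent:
wire = selective_repeat([0, 1, 2, 3], lost={2})
```

Compare this with Go-Back-N, where the loss of frame 2 would drag frame 3 back onto the wire as well.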
Examples of broadcast networks are bus-topology networks, ring networks, satellite communication networks, some wireless networks, and many more. So how does a user transmit data through a shared medium without collisions when so many users are connected through the same medium? We do that using mechanisms and protocols that allow only one user to transmit at a time, and that is done by the medium access control sublayer. For this purpose, many protocols have been devised to handle access to a shared link. We categorize them into three groups: random access protocols, controlled access protocols, and channelization protocols. Each protocol contains different mechanisms to handle the access issues. Let's examine each of them in detail. Random access protocols. In the random access method, no station has more priority than another station, and no one station is assigned control over the others. At any given time, a station that has data to send follows a procedure to decide whether to send or not. This decision depends on the state of the medium, whether it is busy or idle. There is no particular time for a station to transmit; transmission happens randomly among the stations. That's why these methods are called random access methods. Every station has equal rights on the medium, but if any two stations try to send at the same time, an access conflict occurs and the frames will be corrupted. To avoid access conflicts, each station follows a procedure. The random access methods we study now have evolved from a pioneering protocol called ALOHA, which uses a straightforward procedure known as multiple access. The ALOHA method was improved by adding a procedure that tells the station to sense the medium before the actual transmission of data: Carrier Sense Multiple Access (CSMA). The CSMA method eventually evolved into two methods. The first one is Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
And the second one is Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). CSMA/CD tells the station what to do when a collision is detected; CSMA/CA tries to avoid the collision altogether. ALOHA. ALOHA is the earliest random access method. It was developed at the University of Hawaii in the early 1970s by the researcher Norman Abramson and his colleagues, who were trying to connect users on remote islands to the main machine in Honolulu. They used short-range radios, with each user terminal sharing the same upstream frequency to send frames to the central computer. We will discuss two versions of ALOHA here: pure ALOHA and slotted ALOHA. Pure and slotted ALOHA differ in whether time is continuous or divided into discrete slots, respectively. Pure ALOHA. The fundamental idea of an ALOHA system is simple: users transmit whenever they have data to send. In that case, there will be collisions among the frames if stations send data at the same time, leading to corrupted frames. In pure ALOHA, if any station transmits a frame, it expects an acknowledgement from the receiver. If the acknowledgement is not received within a specified time, the station assumes that the frame has been corrupted. If the frame is corrupted due to a collision, the sender just waits a random amount of time and resends it again. This waiting period is called the back-off time. The waiting time must be random, or the same frames will collide over and over again. For example, there are five users, A, B, C, D, E, transmitting data through a common channel. Users A and E transmitted their first frames at the same time on the shared channel, and they collided. Again, users A, B, and D transmitted frames at almost the same time and collided; even a partial overlap between frames causes a collision. The collided frames from users A, B, D, E do not receive acknowledgements, so those users wait a random amount of time and resend the collided frames.
Again, this wastes a lot of channel bandwidth and a lot of time. In pure ALOHA, the maximum throughput, or channel utilization, is 18%. This result is not very encouraging. Slotted ALOHA. Somehow we need to increase the capacity of the ALOHA system. Soon after ALOHA came into existence, Roberts published a method to double its capacity. His idea was to divide time into discrete intervals called slots, each interval corresponding to one frame. This approach requires the users to agree on slot boundaries. A station can send a frame only at the beginning of a slot, and only one frame is sent in each slot. One way to achieve synchronization is to have one special station emit a beep at the start of each interval, like a clock. If any station misses its time slot, that is, if it is not able to place the frame on the channel at the beginning of the slot, it is required to wait for the beginning of the next slot. Thus, the continuous-time ALOHA is turned into a discrete-time one. This halves the vulnerable period. There is still a possibility of collision in slotted ALOHA if two stations try to send frames at the beginning of the same time slot. For example, stations 2 and 3 send frames at the same time in a discrete time slot, so they collide. But slotted ALOHA is far better than pure ALOHA because it reduces collisions. The best we can get using slotted ALOHA is 37% of the slots empty, 37% successes, and 26% collisions. 25. CSMA, CSMA-CD, CSMA-CA: Carrier Sense Multiple Access, CSMA. The CSMA method was developed to minimize the chances of collision and to increase the performance of the system. The chances of collision can be reduced if a station senses the medium before trying to send data. CSMA requires that each station first check the state of the medium, whether it is busy or idle, before sending data. This can reduce the possibility of collision, but it cannot eliminate it completely.
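The 18% and 37% utilization figures quoted for the two ALOHA variants come from the standard throughput formulas S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, where G is the offered load in frames per frame time. A quick numerical check (function names are ours):

```python
import math

# Throughput of ALOHA as a function of offered load G:
#   pure ALOHA:    S = G * e^(-2G), maximum at G = 0.5 -> 1/(2e) ≈ 18.4%
#   slotted ALOHA: S = G * e^(-G),  maximum at G = 1   -> 1/e   ≈ 36.8%

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)

best_pure = pure_aloha_throughput(0.5)      # ≈ 0.184, the 18% quoted above
best_slotted = slotted_aloha_throughput(1)  # ≈ 0.368, the 37% quoted above
```

Halving the vulnerable period is visible in the exponent: slotting the time replaces e^(−2G) with e^(−G), doubling the peak throughput.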
Collisions still occur because of propagation delay. When a station transmits a frame, it takes some time, a very short time, for the first bit to reach every station and for every station to sense it. A station senses the medium only at its point of contact with the medium, not the entire medium, so the channel may be busy even though the station senses it to be idle. Persistence methods. The very important question is: what to do when the channel is idle or busy? Three methods have been devised to answer this question: the 1-persistent method, the non-persistent method, and the p-persistent method. The 1-persistent method is simple and straightforward. Whenever a station wants to send data, it first checks the channel to see if any other station is transmitting at that moment. If the channel is idle, the station sends its data. If the channel is busy, the station senses the medium continuously until it becomes idle; then the station transmits a frame. If a collision occurs, the station waits a random amount of time and starts all over again. The protocol is called 1-persistent because the station transmits with a probability of 1 when it finds the channel idle; that means it transmits immediately after the channel becomes idle. The 1-persistent method is used in CSMA/CD systems like Ethernet. A second carrier sense protocol is non-persistent CSMA. If the channel is already in use, the station does not continually check the channel; it waits a random amount of time and then checks again. So we can call this a less greedy protocol. This algorithm leads to better channel utilization but a longer initial delay than 1-persistent CSMA. The p-persistent method is used if the channel has time slots with a slot duration greater than or equal to the maximum propagation time. The p-persistent method reduces the chances of collision and improves efficiency by combining the advantages of the other two methods.
If the line is busy, the station senses the transmission medium continuously until it becomes idle, then transmits with probability p. If the station does not send the frame in that slot, it waits for the next slot with probability q = 1 − p. If that slot is also idle, it again sends the frame with probability p or postpones again with probability q. To better understand how it works, we look at this flowchart. If the channel is busy, the station continuously checks the channel. If it finds the channel idle, it generates a random number between 0 and 1. If that random number is less than or equal to the predefined probability p, the station can transmit. If the random number is greater than p, it waits a time slot and again checks the channel. If the channel is idle, it again generates a random number and compares it with the probability, and the same process continues. If the channel is found busy again, it waits a back-off time and starts over. The probability can be chosen using the condition n × p ≤ 1, where n is the number of stations connected to the shared medium. This method is used in CSMA/CA systems like Wi-Fi. Carrier Sense Multiple Access with Collision Detection, CSMA/CD. Persistent and non-persistent CSMA are definitely an improvement over ALOHA because they ensure that no station begins to transmit while the channel is busy. However, if two stations sense the channel to be idle and begin transmitting simultaneously, their signals will still collide. CSMA/CD is the basis for the classic Ethernet LAN. If a station detects a collision, it immediately stops transmission, waits a random period of time, and then tries again. If the stations waited the same amount of time, they would collide again, so to determine when to retransmit, each station runs an algorithm called the back-off algorithm. Carrier Sense Multiple Access with Collision Avoidance, CSMA/CA.
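The p-persistent decision loop in the flowchart above can be sketched in a few lines. Instead of a live channel, a prepared list stands in for the random numbers drawn in each idle slot; the function name is ours.

```python
# p-persistent CSMA decision loop, following the flowchart described above.
# Each entry of random_values is the "random number between 0 and 1" drawn
# when the channel is sensed idle at a slot boundary.

def p_persistent_decision(p, random_values):
    """Return how many idle slots the station waits before transmitting."""
    waited = 0
    for r in random_values:
        if r <= p:        # random number <= p: transmit in this slot
            return waited
        waited += 1       # random number > p: wait one slot, sense again
    return waited

# With p = 0.25, draws of 0.9 and 0.5 postpone; 0.1 allows transmission.
slots = p_persistent_decision(p=0.25, random_values=[0.9, 0.5, 0.1])
```

With small p, stations rarely transmit in the same slot, which is how the method trades a little delay for far fewer collisions.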
In a wireless network, much of the sent energy is lost in transit, so the received signal finally has very little energy, and a collision may add only a small percentage of additional energy. Sometimes the stations cannot detect the collision signals because of this low energy, which makes collision detection ineffective. So we need to avoid collisions in wireless networks, because collisions cannot reliably be detected in wireless communication. Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) was invented to handle this problem. Collisions are avoided through CSMA/CA's three strategies: the interframe space, the contention window, and acknowledgements. When an idle channel is found, the station does not send immediately. It waits for a period of time called the interframe space, or IFS. The contention window is an amount of time divided into slots. A station that is ready to send chooses a random number of slots as its wait time after waiting the IFS time. Even after the IFS and the contention window, there still may be a collision during transmission, and the data may be corrupted. So a positive acknowledgement and a timeout timer help guarantee that the receiver has received the frame. 26. Controlled Access: Controlled access. In controlled access, the stations consult one another to find which station has the right to send. A station cannot send data unless it has been authorized by the other stations. We discuss three important controlled access methods now. Reservation. In the reservation method, a station needs to make a reservation before sending data. Time is divided into intervals, and in each interval a reservation frame precedes the data frames sent in that interval. This picture shows a situation with five stations and a five-minislot reservation frame. In the first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station 1 has made a reservation. Polling.
Polling works with topologies in which one device acts as a primary station and the other devices are designated as secondary stations. All data transmissions must be made through the primary device, even when the final destination is a secondary device. The primary device controls the link, and the secondary devices just follow its instructions. It is entirely up to the primary device to determine which device is allowed to use the channel at any given time. If the primary device wants to receive data, it asks the secondaries whether they have any data to send. This is called the poll function. For example, the primary device asks station A to send data, but station A does not have any data to send, so it sends a NAK, which says: I have no data to send. The primary also asks station B to send data. In this case, station B has some data to send to the primary device, and sends it. In return, the primary device sends an acknowledgement, saying: I have received your data. If the primary device wants to transmit data, it tells the secondary to get ready to receive. This is called the select function. For example, here the primary device wants to transmit data to stations A and B. It tells stations A and B to get ready to receive by sending a select signal to each station. Then it sends data to the stations after receiving acknowledgements from them that they are ready to receive. Token passing. In the token passing method, the devices in the network are organized in a logical ring; that means every device has two neighbouring devices, one on each side. Here, a token, a special packet, is always circulating through the ring. Whichever station holds the token gets the right to access the channel and send its data. If a device has data to send, it has to wait until it receives the token. If the device has no more data to send, it passes the token to the next logical device. The device cannot send data again until it gets the token in the next round.
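The token circulation just described can be sketched as a small model. The station names, the `pending` queue counts, and the one-frame-per-token-visit rule are our own simplifications for illustration.

```python
# Token-passing sketch: the token circulates the logical ring and only
# the holder may transmit; here a holder sends one frame per visit.

def token_rounds(stations, pending):
    """Circulate the token; return the order in which stations transmit.
    `pending` maps each station to how many frames it has queued."""
    order = []
    pending = dict(pending)            # don't modify the caller's dict
    while any(pending.values()):
        for s in stations:             # token passes around the ring
            if pending.get(s, 0) > 0:
                order.append(s)        # holder sends one frame
                pending[s] -= 1
            # a station with nothing to send just passes the token on
    return order

# A has two frames queued, B none, C one: A must wait a full round
# for the token before it can send its second frame.
order = token_rounds(["A", "B", "C"], {"A": 2, "B": 0, "C": 1})
```

Notice that the ring need only be logical: the loop over `stations` defines the token order, regardless of physical wiring.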
One important point to note is that it is not necessary to have all the devices physically connected in a ring; it can be a logical ring. Channelization. Channelization is a multiple-access method in which the available bandwidth of a link is shared between the stations in time, in frequency, or through code. There are three important channelization protocols: FDMA, TDMA, and CDMA. FDMA. Here, the bandwidth is divided into frequency bands, one for each station to send its data. Simply put, each frequency band is reserved for a specific station all the time. To prevent station interference, the allocated bands are separated from one another by small guard bands. TDMA. Here, the stations share the available bandwidth of the channel in time. Each station is given a time slot during which it can send its data. In TDMA, the bandwidth is just one channel that is time-shared between the different stations. In CDMA, transmission takes place simultaneously from all stations, and the data is separated using coding theory. 27. Ethernet: IEEE standards. The LAN has gone through many technologies, like Ethernet, Token Ring, Token Bus, FDDI (Fiber Distributed Data Interface), ATM LAN, et cetera. Among all LAN technologies, Ethernet is the most dominant one and has completely taken over the wired LAN market. The reason behind the dominance of Ethernet in the wired LAN industry is that it was the first high-speed LAN, and the other technologies, like Token Ring, FDDI, and ATM, are more complex and expensive than Ethernet. In 1985, the Computer Society of the IEEE started a project called Project 802 to set standards to enable intercommunication among devices of different manufacturers. The IEEE subdivided the data link layer into two sublayers: logical link control (LLC) and media access control (MAC).
Media access control is different for different LAN technologies, like Ethernet (CSMA/CD), Token Bus, and Token Ring. The IEEE also created several physical layer standards for the different LAN protocols. The most important technologies we still use are 802.3 Ethernet and 802.11 wireless LAN. Bluetooth wireless PAN is widely deployed but has now been standardized outside of 802.15. Ethernet. The story of Ethernet started at about the same time as that of ALOHA, with a student named Bob Metcalfe. He and his colleague David Boggs designed and implemented the first local area network in the mid-1970s. It used a single long, thick coaxial cable in a bus topology and ran at three megabits per second. They called the system Ethernet, after the luminiferous ether, through which electromagnetic radiation was once thought to propagate. The Xerox Ethernet was so successful that DEC, Intel, and Xerox created a standard in 1978 for a ten-megabit-per-second Ethernet, called the DIX standard. With a minor change, the DIX standard became the IEEE 802.3 standard in 1983. By the late 1990s, many companies had replaced their LANs with Ethernet installations using a hub-based star topology, with the devices connected using twisted-pair copper cables. In the early 2000s, Ethernet continued to use a star topology, but the hub was replaced with a device called a switch. By the way, a switch is a collisionless device. Two kinds of Ethernet exist: classic Ethernet, which solves the multiple access problem using the methods we have studied till now, and switched Ethernet, in which devices called switches are used to connect the different computers. Classic Ethernet is the original one that runs at rates from 3 to 10 megabits per second. Switched Ethernet is what Ethernet has become, and it runs at 100, 1,000, and 10,000 megabits per second, in forms called Fast Ethernet, gigabit Ethernet, and 10-gigabit Ethernet.
In practice, only switched Ethernet is used nowadays. For now, we concentrate only on the frame format of classic Ethernet; the Ethernet frame format is almost the same for classic and switched Ethernet. The MAC sublayer manages the operation of the channel access method and passes frames received from the upper layer to the physical layer. In the frame format, the first eight bytes belong to the preamble. Each byte contains the bit pattern 10101010, except the last byte, whose last two bits are set to 11, giving 10101011. This last byte is called the start of frame delimiter (SFD) in 802.3; its last two 1 bits tell the receiver that the rest of the frame is about to start. Next come two addresses, one for the destination and one for the source; they are each six bytes long. Next comes the type or length field. The type field specifies which process to give the frame to. For example, a type code of hexadecimal 0800 means that the data contains an IPv4 packet, and Novell IPX and AppleTalk each have their own standardized type numbers. IEEE 802.3, however, decided that this field would carry the length of the frame. The data field contains the data coming from the upper layer protocols. It is a minimum of 46 bytes and a maximum of 1500 bytes. If the data part of a frame is less than 46 bytes, a pad field is used to fill out the frame to the minimum size. The last field, the checksum, contains error detection information, in this case CRC-32. An Ethernet frame must have a minimum length of 512 bits, or 64 bytes. The standard defines the maximum length of a frame, without the preamble and SFD fields, as 12,144 bits, or 1518 bytes. There is a time gap between two consecutive transmitted frames, called the interframe gap, of 9.6 microseconds. This time delay between two frames allows other stations to transmit during that time.
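The size bookkeeping above (46-byte minimum data field, 1500-byte maximum, 64-byte minimum frame) can be checked with a small helper. The constant and function names are ours; the frame length here excludes the preamble and SFD, as in the standard's 1518-byte figure.

```python
# Ethernet frame size bookkeeping, using the numbers above: the frame
# counted here is addresses + type/length + data + CRC (no preamble/SFD).

HEADER = 6 + 6 + 2      # destination + source + type/length, in bytes
CRC = 4                 # CRC-32 checksum field

def frame_length(data_len):
    """Total frame length in bytes after any required padding."""
    if data_len > 1500:
        raise ValueError("payload exceeds the Ethernet maximum of 1500 bytes")
    padded = max(data_len, 46)          # pad short payloads up to 46 bytes
    return HEADER + padded + CRC

tiny = frame_length(10)      # padded: 14 + 46 + 4 = 64 bytes, the minimum
full = frame_length(1500)    # 14 + 1500 + 4 = 1518 bytes, the maximum
```

So a 10-byte payload still produces a full 64-byte (512-bit) frame, which is exactly the minimum size CSMA/CD collision detection relies on.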
The standard also defines several physical layer implementations. Four of the most common implementations are 10Base5, 10Base2, 10Base-T, and 10Base-F. The name 10Base5 is derived from several characteristics of the physical medium: the 10 refers to its transmission speed of 10 megabits per second, Base is short for baseband signaling, and the 5 stands for the maximum segment length of 500 meters. In switched Ethernet, we use switches to connect devices using point-to-point links, so there is no multiple access problem in switched Ethernet. Hence, MAC sublayer protocols have no significance there. 28. Giga bit Ethernet: Gigabit Ethernet. Classic Ethernet used a single long cable to connect all devices. We have come a long way from that single-long-cable architecture to present-day Ethernet. We faced many problems with that kind of architecture, like finding breaks and loose connections, and that led to a different cabling pattern, in which each station has a dedicated cable running to a central hub. But hubs cannot handle several devices at a time: as more and more stations are added, each station gets a decreasing share of the fixed capacity. Using switches, we can increase the load and add more devices freely. But the speed of switched Ethernet, 10 megabits per second, was not sufficient at that time. So the IEEE 802.3 committee was instructed in 1992 to come up with a faster LAN. The proposal was to keep 802.3 exactly as it was, but just make it go faster. Work progressed very quickly, and the result, 802.3u, was approved by the IEEE in June 1995. Everyone called it Fast Ethernet: 100 megabits per second. The requirement for an even higher data rate led to the design of the gigabit Ethernet protocol, 1,000 megabits per second. The standard is called 802.3z over fiber cable and 802.3ab over twisted-pair cable.
Gigabit Ethernet transmits Ethernet frames at a rate of a gigabit per second, that is, one billion bits per second. Like Fast Ethernet, gigabit Ethernet supports two different modes of operation: full-duplex mode and half-duplex mode. In full-duplex mode, there is a central switch connected to all computers or other switches. Here, each switch has buffers for each input port, in which data is stored until it is transmitted. There is no collision in this mode, as we discussed before; that means CSMA/CD is not used. The lack of collisions means that the maximum length of the wire is determined by the signal attenuation in the wire, not by the collision detection process. Gigabit Ethernet can also be used in half-duplex mode, but this is very rare. Here, a switch can be replaced by a hub, which acts as the common cable in which a collision might occur. The half-duplex approach uses CSMA/CD. However, as we saw before, the maximum length of the network in this approach depends entirely on the minimum frame size. Three methods have been defined in gigabit Ethernet: traditional, carrier extension, and frame bursting. In the traditional approach, we keep the minimum length of the frame as in traditional Ethernet, 512 bits; the shortest allowed frame length is 64 bytes. At one gigabit per second, the frame can now be transmitted 100 times faster than in classic Ethernet. When a collision happens, the sender must still be transmitting the frame when the noise signal gets back to it, even in the worst case. For this noise signal to get back while the frame is still in transmission, the maximum cable length must be 100 times shorter than in classic Ethernet, or 25 meters. If the cable were 2,500 meters long, the transmitted signal would reach the receiver before the collision signal comes back to the sender; if that happens, the sender cannot know that the noise signal belongs to its own frame. But a cable length of 25 meters is not enough for most offices.
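The 25-meter figure follows from simple scaling: the reach of CSMA/CD shrinks in proportion to the minimum frame's transmission time. A quick check of the arithmetic, with our own helper name and the classic Ethernet span of 2,500 meters as the baseline:

```python
# Why gigabit Ethernet needed carrier extension: the minimum frame must
# outlast the round-trip time, so 100x the bit rate means 1/100 the reach.
# Rates are in bits/s, frame sizes in bits.

def max_span(min_frame_bits, rate_bps,
             classic_span_m=2500, classic_frame_bits=512, classic_rate=10e6):
    """Scale the classic 2,500 m span by the frame time relative to
    classic Ethernet's 512-bit frame at 10 Mbps."""
    classic_time = classic_frame_bits / classic_rate   # 51.2 microseconds
    frame_time = min_frame_bits / rate_bps
    return classic_span_m * frame_time / classic_time

plain_gigabit = max_span(512, 1e9)       # 512-bit minimum frame -> 25 m
extended = max_span(512 * 8, 1e9)        # frame extended to 512 bytes -> 200 m
```

Extending the minimum frame to 512 bytes multiplies the frame time, and hence the reach, by eight, which is exactly the 200-meter target described next.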
So, to increase the cable length to 200 meters, two features were added. The first feature, called carrier extension, tells the hardware to add its own padding after the normal frame, extending the frame to 512 bytes. This padding is added by the sender station and removed by the receiver station, so the software is unaware of it; that's why no changes are needed to existing software. The disadvantage is that using 512 bytes' worth of bandwidth to transmit 46 bytes of user data gives a line efficiency of only 9%. The maximum length of the network can be increased eight times, to a length of 200 meters, which allows a length of 100 meters from the hub to each station. Carrier extension is very inefficient if we have a series of short frames to send, since each frame carries redundant data. To improve the efficiency, frame bursting was proposed. Instead of adding an extension to each frame, multiple frames are sent. However, to make these multiple frames look like one frame, padding is added between the frames so that the channel is not idle. In other words, the method deceives other stations into thinking that a very large frame has been transmitted. We have four implementations of gigabit Ethernet at the physical layer, which use fiber cable, twisted-pair cable, and shielded copper cable: 1000Base-SX, 1000Base-LX, 1000Base-CX, and 1000Base-T. Collectively, 1000Base-X refers to the versions that operate on fiber cables and sometimes on short copper cables. So 1000Base-SX and 1000Base-LX use optical fiber, 1000Base-CX uses shielded balanced copper cable, and 1000Base-T uses unshielded twisted-pair cable. 29. SONET-SDH -1: SONET/SDH. SONET means Synchronous Optical Network; it was developed in the United States and works on optical technology. SDH means Synchronous Digital Hierarchy, used in Europe and elsewhere. Both are the same, differing only in minor ways. The SONET design provides internetworking between different carriers or companies, unification of different digital systems,
a means to multiplex, and good management of the network. Previous systems did not do this very well. SONET network. This is a simple representation of a SONET network, with STS multiplexers/demultiplexers, regenerators, and add/drop multiplexers. The STS multiplexer/demultiplexer converts an electrical signal into a light signal and multiplexes the signals. A regenerator regenerates the optical signal, just like a repeater; that means it increases the signal strength. An add/drop multiplexer can add signals coming from different sources into a given path, or remove a signal. For each frame at the sender side, three types of headers are added to the data part for transport purposes. The path overhead is added first; it contains information about the end-to-end portion of the network, from the optical source to the optical destination. Then the line overhead is added, for the STS-N signal between STS multiplexers and add/drop multiplexers. And the section overhead is added for communications between adjacent network elements, such as regenerators. The frame is then transported through the regenerators and add/drop multiplexers to the destination using the header information. There, all headers are removed one by one and the signal is converted back to an electrical signal. We can compare these three headers with the data link layer in the OSI model, and another layer, called the photonic layer, works as the physical layer in the OSI model: the presence of light indicates a 1 and the absence of light a 0. In the SONET architecture, every sender and receiver is tied to a common clock, called a master clock or primary reference clock (PRC). The master clock controls the timing of signals in the whole system; that's why it is called a synchronous network. Bits on a SONET line are sent out at extremely precise intervals, controlled by the clock. SONET frame format. The basic SONET frame is a block of 810 bytes. These frames are sent out onto the line every 125 microseconds. Even if there is no useful data to send, the system constructs and sends dummy data every 125 microseconds.
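One 810-byte frame every 125 microseconds fixes the STS-1 line rate, and the arithmetic can be checked in a few lines (the constant names are ours):

```python
# STS-1 line rate from the frame timing above: one 810-byte frame
# every 125 microseconds means 8,000 frames per second.

FRAME_BYTES = 9 * 90                     # 9 rows x 90 columns = 810 bytes
FRAMES_PER_SECOND = 1_000_000 // 125     # one frame per 125 microseconds

line_rate_bps = FRAME_BYTES * 8 * FRAMES_PER_SECOND      # 51.84 Mbps gross
# 87 payload columns x 9 rows x 8 bits per frame carry the user data:
payload_rate_bps = (87 * 9 * 8) * FRAMES_PER_SECOND      # 50.112 Mbps
```

The difference between the two rates is exactly the three overhead columns reserved for section and line overhead in each frame.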
Because SONET is a synchronous network, the SONET system sends a continuous stream of frames to the receiver, so the receiver sees only a continuous bit stream. Then how does the receiver know where each frame begins? The first two bytes of each frame contain a fixed pattern, and by that pattern the receiver is able to identify each frame separately. One second equals 1,000,000 microseconds, and a frame is sent every 125 microseconds, so in one second SONET sends 1,000,000 / 125 = 8,000 frames. The 810-byte SONET frame is best described as a rectangle of bytes, 90 columns wide by 9 rows high: 90 x 9 = 810 bytes, which equals 8 x 810 = 6,480 bits. Thus 6,480 bits are transmitted 8,000 times per second, for a gross data rate of 51.84 megabits per second. This layout is the basic SONET channel, called STS-1 (Synchronous Transport Signal level 1). The first three columns of each frame are reserved for system management information: the upper three rows of the first three columns are used for section overhead (SOH), and the lower six rows for line overhead (LOH). The remaining 87 columns of each frame hold 87 x 9 x 8 x 8,000 = 50.112 megabits per second of user data. The SPE (synchronous payload envelope) carries the user data. In particular, it does not always begin in row 1, column 4; the SPE can begin anywhere within the frame. A pointer to its first byte is contained in the first row of the line overhead. One SPE does not necessarily fit into one STS-1 frame; it may be split between two frames. This may happen if a payload arrives at the source while a dummy SONET frame is being constructed. The empty boxes we can see in the frames are the dummy data constructed while there is no data. We can use SONET for long-distance transmission; it offers high data rates and large bandwidth with minimal interference. 30.
SONET-SDH -2:
31. Wireless LANs -1: Wireless LANs. When we walk into an airport or railway station, we access the Internet without any cabling. Wireless communication is one of the fastest-growing technologies. As we all know, we cannot walk everywhere with our phones and laptops connected by cables, so the demand for connecting devices without cabling is increasing everywhere. Many wireless technologies are available now: IEEE 802.11 wireless LANs, Bluetooth, ZigBee, et cetera. The 802.11 wireless LAN is also called wireless Ethernet or WiFi. These networks operate in two modes: infrastructure mode and ad hoc mode. In infrastructure mode, we have devices called access points; these are essentially home routers. Each access point can serve several devices, such as laptops and smartphones. Several access points may be connected together, generally by a wired network called a distribution system, to form an extended 802.11 network. In this case, clients can send frames to other clients through their APs. The AP generates radio signals that cover a specific area; that area is called a basic service set. In ad hoc mode, a collection of computers or mobile phones can send frames to each other without access points or routers. This is just like a mobile hotspot, where we connect a number of devices using a hotspot connection. We can divide the communication in the infrastructure and ad hoc modes into layers: the application layer, TCP, IP, LLC, the 802.11 MAC, and the 802.11 physical layer. As we know, the LLC and MAC layers are part of the data link layer.
For now, we concentrate only on the MAC layer, that is, the MAC sublayer in 802.11 wireless LANs. We know that 802.11 wireless LANs do not use collision detection. Once a station begins to transmit a frame, it transmits the whole frame; once a station gets started, there is no turning back. So if a collision occurs during transmission, it reduces the performance of the protocol. To reduce the number of collisions, 802.11 defines several collision avoidance techniques in the MAC layer. IEEE 802.11 has divided the MAC into two sublayers: the distributed coordination function (DCF) and the point coordination function (PCF). Distributed coordination function: DCF uses CSMA/CA as the access method, because wireless LANs cannot implement CSMA/CD. Before sending a frame, the source station senses the medium by checking the energy level at the carrier frequency. There is a chance that multiple stations sense the channel to be idle; if all these stations tried to send their frames at once, they would collide. That is why each station waits a random amount of time before sending its frame; the randomness helps avoid more collisions. We call this time the back-off time. After the back-off time has expired, if the channel is found to be idle, the station waits for a period of time called the distributed interframe space (DIFS) and senses the channel again. If the channel is still free, the station sends a control frame called the request to send (RTS). After receiving the RTS frame, the receiver waits a period of time called the short interframe space (SIFS) and sends a control frame called the clear to send (CTS) to the source station. This control frame indicates that the destination station is ready to receive data. The source station then waits an amount of time equal to SIFS and sends the data to the receiver.
The receiver, after waiting an amount of time equal to SIFS, sends an acknowledgement to show that the frame has been received. 32. Wireless LANs -2: Network allocation vector. How do other stations postpone sending their data when one station acquires access to the channel? The key is a feature called NAV. When a station sends an RTS frame, it includes the duration of time it needs to occupy the channel. The CTS frame is then broadcast to all the stations, instructing the other stations not to send for the reserved duration. The stations affected by this transmission create a timer called a network allocation vector (NAV) that shows how much time must pass before these stations are allowed to check the channel for idleness. When the ACK comes back from the receiver, the other stations can sense the channel again. Collision during handshaking: what happens if there is a collision while the RTS and CTS control frames are in transit? Two or more stations may try to send RTS frames at the same time, and these control frames may collide. However, because there is no mechanism for collision detection, the sender assumes there has been a collision if it has not received a CTS frame from the receiver; the back-off strategy is employed, and the sender tries again. Point coordination function (PCF): the PCF is an optional access method that can be implemented in an infrastructure network. The PCF resides in the access points to manage the communication within the network. It is implemented on top of the DCF and is used mostly for time-sensitive data transmission, like online gaming and streaming. To give priority to PCF over DCF, another interframe space has been defined, the PIFS. Here the SIFS is the same as in DCF, but the PIFS is shorter than the DIFS. This means that if, at the same time, a station wants to use only DCF and an AP wants to use PCF, the AP has priority.
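The DCF handshake and the interframe-space priorities described above can be sketched in a few lines. The microsecond values here are the ones used by 802.11b and are illustrative assumptions, not part of the lesson:

```python
# Toy model of 802.11 timing: SIFS < PIFS < DIFS, so a PCF access point
# (waiting PIFS) always grabs the channel before a DCF station (waiting DIFS).
# Values below are 802.11b's and are illustrative only.

SLOT_US = 20
SIFS_US = 10
PIFS_US = SIFS_US + SLOT_US        # 30 us: wait used by the AP in PCF
DIFS_US = SIFS_US + 2 * SLOT_US    # 50 us: wait used by stations in DCF

def dcf_exchange():
    """Return the DCF frame sequence with the wait before each frame."""
    return [
        ("DIFS", "RTS"),   # sender senses an idle channel for DIFS, sends RTS
        ("SIFS", "CTS"),   # receiver answers after SIFS
        ("SIFS", "DATA"),  # sender transmits the data after SIFS
        ("SIFS", "ACK"),   # receiver acknowledges after SIFS
    ]

def who_transmits_first():
    """Because PIFS < DIFS, a PCF access point wins the contention."""
    return "AP" if PIFS_US < DIFS_US else "station"

print(dcf_exchange())
print(who_transmits_first())
```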
The only difference between DCF and PCF is that the access point waits for a PIFS time rather than a DIFS time to occupy the channel, which gives priority to time-sensitive data transmission. Fragmentation: fragmentation is one of the methods used to improve the performance of the system. The wireless environment is very noisy, and a corrupt frame has to be retransmitted. The protocol therefore recommends fragmentation, the division of a large frame into smaller ones: it is more efficient to resend a small frame than a large one. IEEE 802.11 MAC layer frame format: the MAC layer frame consists of eight fields. Frame control (FC): the FC field is two bytes long and defines the type of frame and some control information. The frame control field contains 11 subfields; we will see them one by one. Version: this field is two bits long and gives the current protocol version, which is fixed at 0 for now. Type: a two-bit field that determines the purpose of the frame, that is, management (00), control (01), or data (10); the value 11 is reserved. Subtype: a four-bit field that indicates the subtype of each frame type, such as 0000 for an association request or 1000 for a beacon. To DS: a one-bit field (0 or 1) that indicates the frame is destined for the DS (distribution system). From DS: a one-bit field (0 or 1) that indicates the frame is coming from the DS; the distribution system is the infrastructure that connects multiple access points together to form an extended service set (ESS). More frag (more fragments): a one-bit field; when set to 1, the frame is followed by other fragments. Retry: a one-bit field; when set to 1, the frame is a retransmission. Power management: a one-bit field; when set to 1, it indicates that the station is in power-saving mode. If the field is set to 0, the station stays active.
More data: a one-bit field used to indicate to the receiver that the sender has more data to send after the current frame. WEP (Wired Equivalent Privacy): a one-bit field that tells that the standard security mechanism of 802.11 is applied, that is, encryption is implemented. Order: a one-bit field; when set to 1, the received frames must be processed in order. That covers the subfields of the frame control field. Now we will see the other fields in the frame format. D: this is called the duration field. In all frame types except one, its value indicates the period of time the medium is occupied and is used to set the value of the NAV; in one control frame, this field defines the ID of the frame instead. Addresses: there are four address fields, each six bytes long. The meaning of each address field depends on the values of the To DS and From DS subfields. These two subfields are important for assessing the packet, since their bit combination identifies whether the frame is entering or leaving the wireless environment. The sequence field numbers the frames so that duplicates can be detected. The data field is the payload, up to 2,312 bytes. A wireless LAN defined by IEEE 802.11 has three categories of frames: management frames, control frames, and data frames. Management frames are used for the initial communication between stations and access points. Control frames are used for accessing the channel and acknowledging frames. Data frames are used for carrying data and control information. 33. IPv4 Addressing: Introduction. The data link layer delivers packets between two systems on the same network, but data must often be shared between multiple networks, crossing many intermediate routers along the way.
The network layer has the ability to deliver individual packets from the source host to the destination host, possibly across multiple networks. To get this job done, the network layer must know about the routers and links on the network, and it chooses an appropriate path to move data between source and destination. To move data between source and destination across multiple networks, we need a kind of addressing system that helps identify the source and the destination. So let us jump into the network layer, learning about its addressing and its protocols. Logical addressing: the data sent from a source computer may pass through many LANs and WANs to reach the destination computer somewhere else in the world. To achieve this, we need a global addressing system called logical addressing or IP addressing. The network layer adds a header to the packet coming from the upper layer; this header contains the IP addresses of the sender and the receiver. These addresses are referred to as IPv4 (IP version 4) addresses, or simply IP addresses, and each address is 32 bits in length. This gives us a maximum of 2^32 addresses. The increase in the use of electronic devices has led to a new version of IP addresses called IPv6 addressing, which has a length of 128 bits and gives much greater flexibility in address allocation. We first discuss IPv4 addressing and then move on to IPv6 addressing. IPv4 is a 32-bit-long address. It universally defines the connection of a device, for example a computer or a router, to the Internet. IPv4 addresses are unique: each address defines one and only one connection to the Internet, and two devices on the Internet can never have the same address at the same time. Simply put, if a device operating at the network layer has m connections to the Internet, it needs to have m addresses.
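Since an IPv4 address is just a 32-bit number, the dotted form used later in the lesson is only a readable spelling of that number. A minimal sketch of the conversion in both directions (the helper names are my own):

```python
# An IPv4 address is one 32-bit integer; dotted decimal notation is just
# its four 8-bit bytes written with dots between them.

def to_int(dotted: str) -> int:
    """Convert a dotted address such as '10.1.26.2' to its 32-bit value."""
    octets = [int(part) for part in dotted.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    value = 0
    for o in octets:
        value = (value << 8) | o   # shift in one byte at a time
    return value

def to_dotted(value: int) -> str:
    """Convert a 32-bit integer back to dotted decimal notation."""
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(to_int("10.1.26.2"))
print(to_dotted(to_int("10.1.26.2")))
print(2 ** 32)   # size of the whole IPv4 address space
```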
That means when you connect your mobile device through WiFi to your home router, you have one IP address, and when you connect your mobile device to your cell network or mobile data, you have another IP address. It is important to note that an IP address does not actually refer to a host; it refers to a network interface. So if a host is on two networks, it must have two IP addresses. The total number of addresses used by the protocol is 2^32, or 4,294,967,296, because it uses a 32-bit-long address. This is called the address space. Theoretically, if there were no restrictions, more than 4 billion devices could be connected to the Internet. The address space is 2^n because each bit can have two different values, 0 or 1, and n bits can have 2^n values. We can show an IPv4 address in two notations: binary notation and dotted decimal notation. Binary notation: the IPv4 address is written as 32 bits, with four octets of eight bits each. The following is an example of an IPv4 address in binary notation. Dotted decimal notation: to make the IPv4 address easier to read, Internet addresses are usually written in decimal form, with a decimal point (dot) separating the bytes. The following is the dotted decimal notation of the above address. Note that because each byte (octet) is eight bits, each number in dotted decimal notation has a value ranging from 0 to 255. Some rules must be followed while writing IPv4 addresses: there must be no leading 0, as in this example; there can be no more than four numbers in an IPv4 address; each number must be less than or equal to 255 (310, for instance, is outside this range); and a mixture of binary notation and dotted decimal notation is not allowed. 34. Classfull addressing: There are two types of addressing schemes in networking: classful addressing and classless addressing. We discuss both of them, although only classless addressing is used nowadays. Classful addressing:
Here, the address space is logically divided into five classes: A, B, C, D, and E. Each class occupies some part of the address space. We can find the class of an address in whatever form it is given. If the address is given in binary notation, the first few bits tell us the class: if the first bit is 0, it is a class A address; if the first bits are 10, 110, 1110, or 1111, it is class B, class C, class D, or class E respectively. If the address is given in dotted decimal notation, the first byte defines the class: if the first byte is between 0 and 127, it is a class A address; if it is between 128 and 191, 192 and 223, 224 and 239, or 240 and 255, it is class B, class C, class D, or class E respectively. One problem with classful addressing is that each class is divided into a fixed number of blocks, with each block having a fixed size. There are two parts in any IP address: the network part and the host part. The network part is used to identify the network, and the host part is used to identify the hosts it supports. The network portion has the same value for all hosts on a single network, such as an Ethernet LAN. Class A addresses: in class A addressing, the first eight bits of the IP address are used for the network ID, and the final 24 bits are used for the host ID. For example, take 10.1.26.2: the first eight bits, that is, the first octet 10, are the network portion, and the next 24 bits, that is, 1.26.2, are the host portion. As one bit (the leading 0) is reserved in class A for class identification, it can assign 2^7 = 128 networks and 2^24 = 16,777,216 hosts; 128 is called the number of blocks and 16,777,216 is called the block size. Class B addresses: in class B addressing, the first 16 bits of the IP address are used for the network ID and the final 16 bits for the host ID. For example, take 172.17.125.210.
The first two octets, 172.17, represent the network portion, and the next two octets, 125.210, are the host portion. Class B addressing can assign 2^14 = 16,384 networks, as two bits (10) are reserved for class identification, and 2^16 = 65,536 hosts. Class C addresses: in class C addressing, the first 24 bits of the IP address are used for the network ID and the final eight bits for the host ID. The same logic applies: class C addressing can assign 2^21 = 2,097,152 networks, as three bits (110) are reserved for class identification, and 2^8 = 256 hosts. Class D addresses: these addresses are reserved for multicasting. The first four bits of the first octet are always set to 1110, and the remaining 28 bits represent hosts; the IP address range for class D is 224.0.0.0 to 239.255.255.255. Class E addresses: these addresses are reserved for research purposes, ranging from 240.0.0.0 to 255.255.255.255; the first four bits of the first octet are always set to 1111. There are various reserved IP addresses for special purposes and for private networks. Reserved IP addresses for private networks are used inside our home networks and are not routed by routers outside them, so they are not used by the general public. Network address translation in the router translates private to public IP addresses, and those public addresses are what our mobile phones, PCs, and other end devices use on the Internet. The range of private addresses is 10.0.0.0 to 10.255.255.255 for class A, 172.16.0.0 to 172.31.255.255 for class B, and 192.168.0.0 to 192.168.255.255 for class C. Special IP addresses: the range 169.254.0.0 to 169.254.255.255 holds the link-local addresses, the range 127.0.0.0 to 127.255.255.255 holds the loopback addresses, and the range 0.0.0.0 to 0.255.255.255 is used to communicate within the current network. Classful addressing has a huge flaw.
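The first-octet ranges listed above can be captured in a small helper; a sketch only, with an illustrative function name, since real networks use classless (CIDR) addressing today:

```python
# Determine the class of an IPv4 address from its first octet, following
# the classful ranges: 0-127 A, 128-191 B, 192-223 C, 224-239 D, 240-255 E.

def address_class(dotted: str) -> str:
    first = int(dotted.split(".")[0])
    if first <= 127:
        return "A"   # leading bit 0
    if first <= 191:
        return "B"   # leading bits 10
    if first <= 223:
        return "C"   # leading bits 110
    if first <= 239:
        return "D"   # leading bits 1110, multicast
    return "E"       # leading bits 1111, reserved

print(address_class("10.1.26.2"))        # A
print(address_class("172.17.125.210"))   # B
print(address_class("224.0.0.1"))        # D
```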
The class A format provides a small number of possible network IDs and a huge number of possible host IDs for each network: only 128 networks, each with a huge number of hosts, which is practically not useful for almost any organization. Class B blocks are also very large for most organizations, and class C blocks are too small for many, while classes D and E are reserved. So in classful addressing, a large part of the available addresses was wasted. Subnet mask for classful addressing: the length of the network ID and host ID is predetermined in classful addressing, so we can use a default mask, a 32-bit number made of contiguous ones followed by contiguous zeros, to find the net ID and host ID. For example, the mask for a class A address has eight ones, which define the net ID; the next 24 bits define the host ID. The mask can be logically ANDed with the IP address to extract only the network portion. The concept does not apply to classes D and E. The last column here shows the mask in the form slash n, where n can be 8, 16, or 24 in classful addressing. This notation is also called slash notation, or classless inter-domain routing (CIDR) notation. When the network mask is specified as a prefix, for example /16, that number of bits is the size of the network part, and the remainder, out of 32 for IPv4 and 128 for IPv6, is the host part. 35. Subnetting: Subnetting. A subnet is a smaller logical partition of an IP network, and subnetting is the process of dividing a single large IP network into multiple smaller subnetworks. Well, but why do we need subnetting? For example, if we have a big town not divided into sectors and streets, identifying a house takes a long time; but if the town is divided into sectors and streets, it is easy to identify a particular house. Subnetting is also used to improve the performance of the network. Suppose an organization has three departments, and each department has ten systems.
Without subnetting, these systems are connected to a single LAN, and a message intended for one system will be broadcast to all systems because it is a single LAN. So the traffic in the network increases, which leads to collisions, and network performance degrades. Here, three subnets are created, one per department, with ten systems each. Let us take an example to see how subnetting works. Take the IP address of an end system, 192.168.100.97/27, and its binary form. Let us find the network ID of this IP address. We can see that the CIDR prefix is 27; it represents a subnet mask of 27 contiguous ones, with the remaining bits zeros. To find the network ID, we perform a logical AND operation between the IP address and the subnet mask. The first three octets do not change, and the value of the last octet changes to 96. So the network ID of the given IP address is 192.168.100.96. Now we will find out how many subnets we can have in this IP network. To do that, we need to identify which class it belongs to: this is a class C address, because the first octet, 192, falls in the class C range. The first 27 bits represent the network portion and the remaining five bits represent host addresses. The first 24 bits are always part of the network portion in class C, so to allow subnetting we use the first three bits of the last octet. With three bits we can have eight combinations, and these eight combinations are nothing but eight subnets. So the number of subnets can be calculated from the class and the number of one bits just before the zero bits: the number of one bits in this example is three, so 2^3 gives us the number of subnets. The formula for the number of subnets is 2^n, where n is the number of one bits just before the zero bits in that octet.
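The AND operation and the subnet count just derived can be checked with a short sketch (the helper name is my own):

```python
# Compute the network ID of a dotted IPv4 address under a /prefix mask,
# as in the 192.168.100.97/27 example above.

def network_id(dotted: str, prefix: int) -> str:
    octets = [int(p) for p in dotted.split(".")]
    value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    mask = (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF   # prefix ones, rest zeros
    net = value & mask                                  # the logical AND step
    return ".".join(str((net >> s) & 0xFF) for s in (24, 16, 8, 0))

print(network_id("192.168.100.97", 27))   # 192.168.100.96
print(2 ** 3)                             # 8 subnets from three borrowed bits
```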
Well, then you might ask how many hosts there can be in each subnet. It is calculated using the number of zero bits present. In our example, we have five zero bits in the last octet, and with five bits we can get 32 combinations, that is, 0 to 31; we get this from 2^5 = 32. But the number of usable hosts is only 30, because we subtract two: the first address is the subnet ID, which is used to identify the network, and the last address is the broadcast address, which is used to broadcast messages to the entire subnet. So the formula for the number of hosts is 2^n minus 2, where n is the number of zero bits. For the first subnet, we take 000 for the three subnet bits, which stay the same, and vary the five host bits to get the hosts; we already know this gives 32 combinations. So the first subnet starts with 0 and ends with 31, that is, 192.168.100.0 to 192.168.100.31. The second subnet combination is 001; it stays the same while the five zero bits are varied, and the last octet starts at 32. So the second subnet starts with 32 and ends with 63, giving the range 192.168.100.32 to 192.168.100.63. In the same fashion, we have another six combinations of subnets: we put those values in the subnet bits and vary the five host bits. These are the eight subnets, with 30 hosts each; in total we can configure 240 hosts in this IP network. The hosts lie in between these values: 1 to 30 in the first subnet, 33 to 62 in the second subnet, and so on until the last subnet. The first column belongs to the subnet IDs, the last column represents the broadcast addresses, and in between are the host IDs. Remember that earlier we got 192.168.100.96 as the network ID. The given IP address, 192.168.100.97, lies between 97 and 126, that is, in the fourth subnet.
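The eight subnet ranges worked out above can be enumerated programmatically; a sketch assuming the /27 split of 192.168.100.0 (the helper name is illustrative):

```python
# Enumerate the /27 subnets of a class C network: eight blocks of 32
# addresses, each with a subnet ID, 30 usable hosts, and a broadcast address.

def subnets_of(base: str, host_bits: int = 5):
    a, b, c, _ = base.split(".")
    size = 2 ** host_bits                  # 32 addresses per subnet
    result = []
    for start in range(0, 256, size):
        result.append({
            "subnet_id":  f"{a}.{b}.{c}.{start}",
            "first_host": f"{a}.{b}.{c}.{start + 1}",
            "last_host":  f"{a}.{b}.{c}.{start + size - 2}",
            "broadcast":  f"{a}.{b}.{c}.{start + size - 1}",
        })
    return result

nets = subnets_of("192.168.100.0")
print(len(nets))                                   # 8 subnets
print(nets[0]["subnet_id"], nets[0]["broadcast"])  # 192.168.100.0 192.168.100.31
print(nets[3])   # the fourth subnet, which holds 192.168.100.97
```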
So the network ID is used to find out in which subnet a given IP address is present. As we have discussed, the first address in the subnet is used to identify the subnet, and the last address is used to broadcast messages in the subnet. So we have divided a big IP network into eight smaller logical networks; this is called subnetting. As we already know, classes A and B provide millions of host addresses, while if a class C block is provided to a single organization, only around 250 usable addresses are assigned. So classful addressing wastes IP addresses, and to overcome these drawbacks, classless addressing was introduced. 36. NAT(Network Address Translation): Network address translation. In this video, we will discuss network address translation, or NAT. With IPv4 addressing, we can have around 2^32, that is, around 4 billion, addresses. The total population of Earth is about 7 billion; what if everyone uses multiple devices? We would definitely run out of IP addresses. The long-term solution is for the whole Internet to migrate to IPv6, which has 128-bit addresses. This transition is slowly occurring, but it will take some time to run IPv6 on all devices, so in the meantime a quick fix was needed. The quick fix widely used today came in the form of network address translation. NAT translates a public IP address to a private IP address and a private IP to a public IP. Suppose we have three devices in our home, all connected to a single router. The router comes with a public IP address that is reachable from the whole world. A public IP address is a globally unique IP address assigned to a computing device; a web server, email server, or any other server directly accessible from the Internet uses a public IP address. But the three devices in our home get private addresses, from the ranges reserved by the Internet Corporation for Assigned Names and Numbers (ICANN).
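The translation-table idea at the heart of NAT can be sketched as follows. The class, method names, and addresses here are illustrative assumptions, not a real router implementation:

```python
# Toy NAT: outbound traffic gets the router's one public address and a
# fresh port; the table lets replies be routed back to the private host.

PUBLIC_IP = "203.0.113.5"   # made-up public address (documentation range)

class Nat:
    def __init__(self):
        self.table = {}      # public port -> (private ip, private port)
        self.next_port = 40000

    def outbound(self, private_ip, private_port):
        """Rewrite a private source address/port on the way out."""
        public_port = self.next_port
        self.next_port += 1
        self.table[public_port] = (private_ip, private_port)
        return PUBLIC_IP, public_port

    def inbound(self, public_port):
        """Look up where a reply arriving at the public port belongs."""
        return self.table[public_port]

nat = Nat()
ip, port = nat.outbound("192.168.0.10", 5555)   # request leaves the home LAN
print(ip, port)
print(nat.inbound(port))                        # reply routed back inside
```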
These private addresses are assigned to the three devices by the router. For example, when we request a soccer score from a web server on one of the three devices, the request packet goes to the router and then to the web server. When the score is forwarded back to the device, the web server sends packets to the router's public address, and the router sends those packets to the intended device using a NAT translation table stored when the request was made. You can see your public IP address by searching "what is my IP address" in any browser. And if you want to see your private IP address, the better way is to use the command prompt: just type ipconfig /all in the command prompt, and you can see your private IP address there. The private IP address your device is currently using can be used by another device in another network at the same time, because routing takes place based on public IP addresses; that is why public IP addresses are unique. This technique reduces the address depletion problem. 37. Internetworking: Internetworking. In this video, we will discuss how internetworking works between dissimilar networks. An internetwork is made of many different networks, including PANs, LANs, MANs, and WANs; the Internet is a prime example of this interconnection. When packets sent by a source on one network must transit one or more foreign networks before reaching the destination network, many problems can occur at the interfaces between networks. To overcome this problem of delivery through several links, the network layer, or internetwork layer, was designed. IP provides a universal packet format that all routers recognize and that can be passed through almost every network. Let us explore how interconnection with a common network layer can be used to interconnect different networks. An internet comprising an 802.11 wireless LAN, an MPLS (Multiprotocol Label Switching) network, and an Ethernet network is shown here.
Suppose that the source machine on the 802.11 network wants to send a packet to the destination machine on the Ethernet network. Different networks may have different forms of addressing, so the packet carries a network layer address that can identify any host across the three networks. The packet initially travels from the 802.11 network to the MPLS network. 802.11 provides a connectionless service, but MPLS provides a connection-oriented service, so a virtual circuit must be set up to cross that network. Once the packet has travelled along the virtual circuit, it will reach the Ethernet network. Now we will see how a packet is transmitted from source to destination, adding headers along the way. The source machine takes data from the transport layer and generates a packet with the common network layer header, that is, IP. Then the data link layer adds an 802.11 header to the packet. The network header contains the ultimate destination address, which does not change at any layer along the path. At the first router, the 802.11 frame header is discarded, revealing the IP header. The network layer in the router now examines the IP address in the packet and looks up this address in its routing table. Based on this address, it decides to send the packet to the second router next. For this part of the path, an MPLS virtual circuit must be established to the second router, so the packet is encapsulated with an MPLS header and travels along this circuit. Upon reaching the next router, the MPLS header is discarded, revealing the IP header again. From this, the router knows the next device's address, so an Ethernet address is added to the packet because the next network is an Ethernet. Finally, the packet reaches the destination. So the IP header is always present in the data, and each network in between adds its own header to the IP packet and then removes it, looking into the IP header to route. This is how internetworking works. 38.
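The add-a-header, strip-a-header pattern described above can be modelled as a tiny sketch. Everything here is illustrative (dictionaries stand in for real headers); the point is only that the IP packet travels unchanged while each network wraps and unwraps its own link-layer header.

```python
# Minimal sketch of per-hop encapsulation (all names are illustrative).
# The IP header stays constant end to end; each network wraps its own header.

def send_over_link(ip_packet, link_header):
    return {"link": link_header, "payload": ip_packet}   # encapsulate

def receive_from_link(frame):
    return frame["payload"]                              # decapsulate

ip_packet = {"src": "host-A", "dst": "host-B", "data": "hello"}

frame = send_over_link(ip_packet, "802.11")      # first network
at_router1 = receive_from_link(frame)            # router strips 802.11 header
frame = send_over_link(at_router1, "MPLS")       # virtual circuit across MPLS
at_router2 = receive_from_link(frame)            # MPLS header stripped
frame = send_over_link(at_router2, "Ethernet")   # final network

# The destination address in the IP header never changed along the way.
print(receive_from_link(frame)["dst"])  # host-B
```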
Tunneling: Tunneling. In the last video, we have seen how internetworking works between different kinds of networks. In this video, we will learn a kind of internetworking where the source and the destination hosts are on the same kind of network, with a different network in the middle. This concept is called tunneling, and it is a protocol that allows for secure transmission of data from one network to another. We will see an example to better understand how tunneling happens. To make tunneling clearer, consider driving a bike from country A to country B. Suppose we need to cross an ocean to get to B: the bike must be loaded onto a ship, and after reaching the far shore, we can ride the bike again. Tunneling a packet works the same way as it goes through a foreign network. Take a multinational company with an IPv6 network in country A and an IPv6 network in another country B, and connectivity between the two offices via the IPv4 Internet. We want to send a packet from A to B. When we send a postal letter to a friend, we write the destination address, that is, his address, on the postcard, not the address of a city along the way. In the same way, the host in country A's office constructs the packet containing the IPv6 address of office B, because country B's office also uses IPv6 addresses. Then it sends the packet to the multiprotocol router that connects the country A IPv6 network to the IPv4 Internet. When this router gets the IPv6 packet, it encapsulates the packet with an IPv4 header addressed to the IPv4 side of the multiprotocol router that connects to the country B IPv6 network. That is, the router puts an IPv6 packet inside an IPv4 packet. When this wrapped packet arrives, the country B router recovers the original IPv6 packet by removing the IPv4 header and sends this IPv6 packet to the destination host in country B. It seems as if the IPv6 packet goes into a tunnel at one end and arrives at the other end.
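The wrap-and-unwrap step at the two tunnel endpoints can be sketched as follows. The dictionaries and addresses are illustrative placeholders (documentation address ranges), not real packet formats; what matters is that the original IPv6 packet emerges unchanged.

```python
# Sketch of IPv6-in-IPv4 tunneling (illustrative dictionaries, not real headers).

def encapsulate(ipv6_packet, tunnel_src, tunnel_dst):
    # The multiprotocol router wraps the whole IPv6 packet in an IPv4 header
    # addressed to the router at the far end of the tunnel.
    return {"v4_src": tunnel_src, "v4_dst": tunnel_dst, "payload": ipv6_packet}

def decapsulate(ipv4_packet):
    # The router at the far end removes the IPv4 header.
    return ipv4_packet["payload"]

ipv6_packet = {"src": "2001:db8::a", "dst": "2001:db8::b", "data": "report"}

in_tunnel = encapsulate(ipv6_packet, "198.51.100.1", "203.0.113.1")
delivered = decapsulate(in_tunnel)

print(delivered == ipv6_packet)  # True: the original packet emerges unchanged
```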
Tunneling is widely used to connect isolated hosts and networks using other networks. The disadvantage of tunneling is that none of the hosts on the network in between can be reached, because the packets cannot escape in the middle of the tunnel. However, this limitation of tunneling is turned into an advantage with VPNs, Virtual Private Networks. A VPN virtually creates a secure tunnel between the source network and the destination network to provide a measure of security. 39. Address Mapping: Address mapping. In previous videos, we have seen how internetworking works. To send a packet from one end to the other end, we must map a variety of addresses. In this video, we will learn how to map different addresses for routing. To send a packet from one network to another network, the source needs both a logical address and a physical address. Sometimes the source device knows the IP address but does not know the physical address, and sometimes the source knows the physical address but not the logical address. The ARP protocol is used to map a logical address to a physical address, and to map a physical address to a logical address we have three protocols: RARP, the bootstrap protocol, and DHCP. ARP, Address Resolution Protocol. Let us consider a host and a router. If we want to send an IP datagram from this host to the router, the host needs to find the IP address of the router. It gets the router's IP address from the DNS server: it asks the DNS server for the IP address of the given URL, and in return, the DNS server sends back the IP address to the host. If the router wants to send an IP datagram to the host, the router gets the IP address of the host from its routing table. The IP address is added to the packet at the network layer, and the next layer is the data link layer. The data link layer must add the physical address, or MAC address, of the next device to the packet to be able to pass through the physical network. That means the sender requires the physical address of the receiver.
Well then, how does the data link layer know the physical address of the next device? This is where the ARP protocol comes into the picture. The host broadcasts an ARP query packet over the network. The packet includes the logical and physical addresses of the host and the logical address of the receiver, but it does not append the physical address of the receiver, because it does not know the physical address of the receiver. Each and every host or router on the network receives the ARP query packet and processes it, but only the intended host or router recognizes its IP address appended to the ARP query packet, and then that device sends back an ARP response packet. It does not broadcast the response packet; it unicasts the packet directly to the inquiring host or router by using the physical address received in the query packet. This ARP response packet contains the recipient's logical and physical addresses. Using ARP this way is inefficient if a system needs to broadcast an ARP request for every IP packet it needs to send to another system. If the first IP packet's ARP response is kept in cache memory, the system doesn't need to broadcast an ARP request packet for every IP packet it sends to the same destination. Now we will see how a device gets to know its logical address if it knows only its physical address. There are some situations in which a host knows only its physical address but does not know its logical address. This may happen when the device is rebooted: when rebooted, it knows its physical address by looking at its interface, which has the physical address imprinted on it, but it does not know its IP address. It can also happen when a new device is added to the network. We have three protocols for this purpose. They are RARP (Reverse Address Resolution Protocol), BOOTP (Bootstrap Protocol), and DHCP (Dynamic Host Configuration Protocol). We will learn about the RARP protocol for now. Reverse Address Resolution Protocol, RARP.
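The caching idea above can be sketched as a small simulation: broadcast an ARP query only on a cache miss, and answer from the table thereafter. The addresses and the resolver function are illustrative, not real ARP messages.

```python
# Sketch of an ARP cache: broadcast only on a cache miss (names illustrative).

arp_table = {"10.0.0.5": "aa:bb:cc:dd:ee:01"}  # learned from earlier replies
broadcasts = 0

def resolve(ip):
    global broadcasts
    if ip in arp_table:                 # cache hit: no broadcast needed
        return arp_table[ip]
    broadcasts += 1                     # cache miss: broadcast an ARP query
    mac = "aa:bb:cc:dd:ee:02"           # pretend this arrived in an ARP reply
    arp_table[ip] = mac                 # cache it for later packets
    return mac

resolve("10.0.0.5")   # hit
resolve("10.0.0.9")   # miss -> one broadcast
resolve("10.0.0.9")   # hit, thanks to the cache
print(broadcasts)     # 1
```

Three resolutions cost only one broadcast, which is the efficiency gain the lecture attributes to the ARP cache.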
This protocol finds the logical address of a device that knows only its physical address. To find the logical address, a RARP request packet is broadcast on the local network. There must be a special host in the local network that knows all the IP addresses; this device is called the RARP server. The RARP server will send a RARP reply message to the requesting device, which contains the logical address of the device. The requesting device and responding device must be running RARP client and server programs to get this job done. Since broadcast requests do not pass beyond the local subnet, a RARP server needs to be assigned for each subnet separately. This is definitely not a good solution; that's why this protocol is not useful anymore. We have two other protocols replacing RARP for good results: the bootstrap protocol and the DHCP protocol. 40. Process to Process Delivery: Introduction. In the previous lectures, we have seen all the functions of the network layer. In this video, we will see a little introduction to the transport layer and process to process delivery. First of all, the transport layer takes the data from the session layer, which is in the form of a single big message, and divides that message into transmittable segments for efficient delivery. An IP address is used to identify a particular computer. However, computers run multiple application programs at the same time, so those segments must reach the correct application on the destination device. To identify a specific program running on the destination computer, a header must be added here that includes a type of address called a port address in the Internet. Sometimes the segments may arrive out of order at the destination, so sequence numbers are added to the segments for proper reassembly. We have three protocols in this layer to carry out those tasks. They are TCP, Transmission Control Protocol; UDP, User Datagram Protocol; and SCTP, Stream Control Transmission Protocol.
The data link layer is responsible for the delivery of frames between two neighboring nodes over a link. This is called node to node delivery. The network layer delivers the packets from one host to another host. This is called host-to-host delivery. But the exchange of data between two nodes or between two hosts is not enough. To complete the delivery, we must deliver the message to the correct application running on the destination machine: the communication must happen between two processes, or application programs. That is exactly what the transport layer does. It is responsible for the delivery of a message from an application program running on the source machine to the application program running on the destination machine. This is called process to process delivery. Okay, then how do we identify and deliver the segments to the correct application program running on the destination machine? This is where the port address comes in. We add source and destination port addresses, or port numbers, to the header to deliver packets to a specific program among multiple programs running on the destination host. These port numbers are 16-bit numbers between 0 and 65,535. Then how does the source host know these port numbers? Simply, the source host defines itself with a port number chosen randomly by the transport layer software running on the client host. The server process or destination must also define itself with a port number, but it cannot choose its server port number randomly. IANA decided to use universal port numbers for servers; these are called well-known port numbers. The IANA, Internet Assigned Numbers Authority, has divided the port numbers into three ranges: well-known port numbers, registered port numbers, and dynamic or private port numbers. Well-known ports. The port numbers from 0 to 1023 are well-known port numbers, assigned by IANA to common protocols and services. Servers are always listening on these port numbers.
For example, port number 80 is used for HTTP over TCP, and port number 53 is used for DNS requests. Registered ports. The port numbers ranging from 1024 to 49,151 are called registered port numbers. These addresses cannot be used by client programs; they can only be registered with IANA for a specific service upon application by the requesting entity, to prevent duplication. Dynamic ports. The port numbers from 49,152 to 65,535 are neither controlled nor registered with IANA. These are used by client programs. When a web browser wants to connect to a web server, the browser will allocate itself a port in this range, and the server port is identified by the well-known port number. These port addresses are dynamically assigned to the client programs. Process to process delivery needs two addresses at each end to make a proper connection: the IP address and the port address. An IP address and a port address together are called a socket address. These socket addresses identify a particular application program on a machine. Suppose you want to connect to an email server from your home computer; the socket connections would be client-side IP1 plus port 50201, and server-side IP2 plus port 25. These connections uniquely identify both the devices and the application programs running on them. 41. TCP and UDP: TCP and UDP. We already know that the transport layer uses three protocols to carry the information, depending on the type of data. In this video, we will discuss the transport layer protocols Transmission Control Protocol and User Datagram Protocol. Transmission Control Protocol. TCP is a connection-oriented, reliable transport protocol. Here, connection-oriented means there is a connection establishment between the two ends, where they negotiate different things like congestion control and error control. TCP establishes a connection by using a three-way handshake mechanism before the actual data transmission.
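The idea that a client's transport layer picks its own port can be seen with the standard `socket` module: binding to port 0 asks the operating system to choose an ephemeral port, much as the lecture describes (the exact range the OS draws from varies by system).

```python
import socket

# A socket address is the pair (IP address, port). Binding to port 0 lets the
# OS pick an ephemeral client-side port for us.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
ip, port = s.getsockname()

print((ip, port))          # e.g. ('127.0.0.1', 52814) -- port varies per run
print(0 < port <= 65535)   # port numbers are 16-bit values
s.close()
```

A real connection would pair this client socket address with a server socket address such as (server IP, 25) for mail, which together identify the two communicating programs uniquely.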
First, a client sends a SYN (synchronize) packet to a server, asking if the server is open for a new connection. When the server receives the SYN packet from the client, it sends back a confirmation receipt called a SYN-ACK packet to the client. As soon as the client receives the SYN-ACK packet, it returns an ACK packet to the server. After this process, a connection is established and the data transfer takes place between them. After the data transfer is completed, the client terminates the connection with the server by exchanging FIN and ACK packets between them as follows. First, the client sends a TCP segment with the FIN (finish) flag set to one to the server. Then the server sends an acknowledgement to the client for the received FIN segment. And again, the server sends its own FIN segment with the flag set to one to the client to terminate the connection. Finally, the client acknowledges the server's TCP FIN segment and terminates the connection. We have seen what connection-oriented means. Now we will see what the meaning of reliability in TCP is. TCP has built-in error and flow control mechanisms. That is, when a data packet is sent from sender to receiver, the receiver verifies whether the data packet is correct or erroneous using error control information like the checksum; then the receiver sends an acknowledgement or negative acknowledgement to the sender. This way, TCP ensures reliability. TCP will make sure that the packets reach the correct process; it does not care about the devices in between the end systems. User Datagram Protocol. Now let us see how the UDP protocol works. The User Datagram Protocol is a connectionless, unreliable transport protocol. Here, connectionless means no connection is established between sender and receiver. The sender just sends the segments to the receiver without any agreement; that means there is no three-way handshake. Unreliable means UDP does not care about errors or packet drops on the way.
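The three-way handshake above can be sketched as a toy exchange of flags. This is only a model of the message order (no real networking, and the function names are made up for illustration): SYN, then SYN-ACK, then ACK.

```python
# Toy model of the TCP three-way handshake (flags only, no real networking).

exchange = []

def server_respond():
    assert exchange[-1] == "SYN"  # server only answers an incoming SYN
    exchange.append("SYN-ACK")    # server acknowledges and synchronizes back

def client_connect():
    exchange.append("SYN")        # client asks to open a connection
    server_respond()
    exchange.append("ACK")        # client confirms the server's SYN-ACK

client_connect()
print(exchange)  # ['SYN', 'SYN-ACK', 'ACK'] -- connection established
```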
That means no error control or flow control. It just sends segments to the receiver, and there is no acknowledgement mechanism, so the sender continuously sends the datagrams. Now let us look into the segment structures of TCP and UDP. In the segment structures, each segment contains a header. A TCP segment contains a 20-byte header, whereas a UDP segment contains an 8-byte header. Hence, a UDP segment carries very minimal overhead compared to TCP. From this analysis, we can say that transmitting a segment using UDP is faster, because UDP has less overhead, no three-way handshaking, and no acknowledgments for the sent packets. But TCP is reliable. Now we will see where TCP is used and in what circumstances we use UDP. TCP is slow and guarantees the delivery of each packet. So we use TCP where all transmitted packets must be received, irrespective of throughput or speed. That is, on the web, where text data is being transmitted using HTTP or FTP. In text communication, we need to receive all transmitted packets without missing any text on the page, and speed is not that important while fetching a webpage. UDP transmission is fast and does not care whether the segments reached the destination properly or not. So we use UDP where we do not care even if we lose some data, and throughput or speed matters. UDP is used in video and audio streaming and also online games. 42. SCTP: Stream Control Transmission Protocol, SCTP. SCTP is a new message-oriented, reliable transport protocol designed for recent Internet applications. SCTP combines the best features of TCP and UDP. Here, message-oriented means that the data is transmitted as messages; a message is nothing but a group of bytes. So the data in SCTP is transmitted in distinct groups, or chunks, as in UDP, whereas in TCP the data is a continuous flow, which is why TCP is also called a stream-oriented protocol. SCTP is reliable like TCP, as it detects lost data, duplicate data, and out-of-order data.
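The header-size difference can be turned into a quick back-of-the-envelope calculation. The payload and segment sizes below are illustrative assumptions (and IP/link-layer headers are ignored); only the 20-byte TCP and 8-byte UDP header figures come from the lecture.

```python
# Rough transport-header overhead for a payload split into equal segments,
# using the minimum header sizes mentioned above (TCP 20 B, UDP 8 B).

def overhead(payload_bytes, segment_size, header_bytes):
    segments = -(-payload_bytes // segment_size)   # ceiling division
    return segments * header_bytes

payload = 100_000   # 100 kB of application data (illustrative)
print("TCP header bytes:", overhead(payload, 1000, 20))  # 2000
print("UDP header bytes:", overhead(payload, 1000, 8))   # 800
```

With 100 segments, UDP spends 800 bytes on transport headers against TCP's 2000, before even counting TCP's handshake and acknowledgement traffic.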
SCTP also has congestion control and flow control mechanisms. Some new applications, like IP telephony and telephony signaling, need a more sophisticated service than TCP can provide. So some additional features were implemented in SCTP to provide better services than TCP and UDP. They are multistreaming and multihoming. Multistreaming. In TCP transmission, the transport happens between a TCP client and a TCP server in one single stream. The problem here is that a loss at any point in the stream interrupts the delivery of the rest of the data. But in SCTP, the data is partitioned into multiple streams, and the multiple streams are transported in parallel in a single SCTP connection. What does partitioning into multiple streams mean? Some webpages are multimedia documents that contain different streams, like text, audio, and video. These streams are handled separately, so a message loss in one stream does not affect the delivery of data in the other streams. If the video stream is blocked, the other streams, like text and audio, can still deliver their data. This approach is known as multistreaming. Multihoming. In SCTP, if the host is connected to two or more networks, it allows data to be transferred over two outgoing paths; that means a single SCTP endpoint supports multiple IP addresses. This is called multihoming. In TCP, data transfer takes place only over one path, because a connection contains one source and one destination IP address. So in SCTP, when one path fails, another interface can be used for data delivery without interruption. As we know, the unit of data in TCP is a byte, and the data transfer is controlled by numbering bytes using sequence numbers. But in SCTP, the unit of data is a data chunk. Take an example to understand how numbering is done in SCTP chunks: assume multistreaming of a webpage containing images and text as two streams. SCTP uses a transmission sequence number (TSN) to number the data chunks in order to rearrange them correctly at the receiver.
Here each chunk is numbered from one to eight. In SCTP, there may be several streams in each connection, so each stream needs to be identified by using a stream identifier (SI). In this example, the image stream is identified with the letter I and the text stream is identified with the letter T. Each data chunk carries the stream identifier in its header so that when it arrives at the destination, it can be properly placed in its stream. To distinguish between different data chunks belonging to the same stream, SCTP uses a stream sequence number (SSN). Here, Roman numerals one to four are given to the chunks of the image stream to differentiate between them. SCTP packet structure. The design of the SCTP packet is totally different from the TCP packet. Here we can see the SCTP packet structure. In SCTP, data and control information are carried in separate chunks: data in data chunks, and control information in control chunks. The sequence numbers such as TSN, SI, and SSN are present in control chunks. A number of control chunks and data chunks can be packed together to form a packet. The general header length in SCTP is 12 bytes, and an SCTP packet can carry several data chunks, each belonging to a different stream; the TCP header length is 20 bytes. We already have a little knowledge about all the header fields from before, except the verification tag. In SCTP, a unique field named the verification tag identifies the endpoints, as each connection may contain multiple IP addresses because of multihoming, whereas in TCP the socket address defines the endpoints correctly. 43. CONGESTION CONTROL - OPEN LOOP: Congestion. One of the critical issues in packet-switched networks is congestion. Congestion in a network is a state of the network where the number of packets sent to the links or nodes is greater than the number of packets that the network can handle. When congestion happens, some packets may be dropped, new connections cannot be made, quality of service decreases, and the data transmission may slow down.
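The TSN/SI/SSN numbering in the images-and-text example can be sketched as a small simulation. The chunk dictionaries and the `send_chunk` helper are illustrative inventions; the scheme they demonstrate (a global TSN across all chunks, plus a per-stream SSN) is the one described above.

```python
# Sketch of SCTP-style numbering: a global TSN across all chunks, plus a
# per-stream SSN. Stream identifiers "I" (images) and "T" (text) follow the
# webpage example above.

from itertools import count

tsn_counter = count(1)
ssn_counters = {}
chunks = []

def send_chunk(stream_id, data):
    ssn = ssn_counters.get(stream_id, 0) + 1      # per-stream sequence number
    ssn_counters[stream_id] = ssn
    chunks.append({"TSN": next(tsn_counter), "SI": stream_id, "SSN": ssn,
                   "data": data})

for i in range(4):
    send_chunk("I", f"image-part-{i+1}")   # image stream
for i in range(4):
    send_chunk("T", f"text-part-{i+1}")    # text stream

print(chunks[0])   # {'TSN': 1, 'SI': 'I', 'SSN': 1, ...}
print(chunks[4])   # {'TSN': 5, 'SI': 'T', 'SSN': 1, ...}
```

The first text chunk carries TSN 5 (fifth chunk overall) but SSN 1 (first in its own stream), which is exactly why a loss in one stream does not stall reassembly of the other.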
So what happens when more packets reach a router or switch than it can handle? Routers have two buffers, called the input buffer and the output buffer. The input buffer holds the packets before processing, and the output buffer holds packets after processing. When a packet comes to the router, the packet is placed at the end of the input buffer. When the packet reaches the front of the queue, the router uses the routing table to find the route. The packet is then moved to the output buffer queue and waits for its turn to be sent. If too many packets are present in the queues, this phenomenon is called congestion. Controlling the congestion using some techniques is called congestion control. Congestion control means either preventing congestion before it happens or removing congestion after it has happened. Generally, we can divide congestion control mechanisms into two broad categories: open loop congestion control (prevention) and closed loop congestion control (removal). We will discuss some techniques in open loop and closed loop congestion control. Open loop congestion control. In open loop congestion control, we take measures to prevent congestion before it happens. In these mechanisms, the source or the destination handles the congestion control. We have five policies under open loop congestion prevention: retransmission policy, window policy, acknowledgement policy, discarding policy, and admission policy. Retransmission policy. While transmitting packets, if a packet is lost or corrupted in passage, the receiver sends an acknowledgement asking to retransmit; then the sender has to retransmit the lost or corrupted packets. If the sender has to retransmit several packets, congestion in the network increases. So by maintaining a proper retransmission policy, we can prevent congestion. Window policy. Take two types of windows: the Go-Back-N window and the selective repeat window. In the Go-Back-N window, while transmitting.
If a packet is lost or corrupted, the sender retransmits even the successfully received packets that followed it, which increases the congestion in the network. But in the selective repeat window, all packets are not retransmitted; only the packet that is lost is retransmitted. So the selective repeat window is better than the Go-Back-N window, and we can reduce congestion by choosing a better window policy at the sender. Acknowledgement policy. Suppose the receiver receives three packets successfully. Instead of sending three acknowledgements to the sender, it can send a single cumulative acknowledgement for the three packets. This can reduce congestion. Instead of sending an acknowledgement for every packet, the receiver can also send an acknowledgement only for the lost or corrupted packets, which likewise reduces congestion. When acknowledgements are not sent for every packet, the sender slows down the transmission, which prevents congestion. So a proper acknowledgement policy can reduce congestion. Discarding policy. In audio file transmission, it does not affect the quality of the audio even if the router drops or discards corrupted or less sensitive packets. So congestion is prevented by choosing a good discarding policy. Admission policy. This is a quality of service mechanism. While transmitting packets, a router or switch can deny transmission through a particular path if there is congestion in the network or if there is a possibility of future congestion. 44. Closed loop congestion control: Closed loop congestion control. In the last video, we have seen open loop congestion control policies. In this video, we will learn about closed loop congestion control techniques. Closed loop congestion control techniques are used to remove congestion after it happens. There are several mechanisms for this purpose, such as back pressure, choke packet, implicit signaling, and explicit signaling. We will look into them one by one. Back pressure.
It refers to a congestion control mechanism in which a congested node informs the immediate upstream node to slow down transmission, and this goes back up to the sender. Let us take an example to understand how it works. Consider a source and a destination and four nodes in between them. The packets are going from source to destination. Suppose node three has more input data than it can handle; it tells node two to slow down the transmission. By slowing down the transmission, node two may now become congested. If node two is congested with more packets, it informs node one to slow down transmission, which causes congestion in node one. Then node one informs the source machine to slow down transmission. This process controls the congestion. This method is called back pressure because the pressure on node three is moved backward up to the source, in the opposite direction of the data flow, to remove the congestion. The back pressure method can be applied only to virtual circuit networks, in which each node knows the upstream node from which a flow of data is coming. The method cannot be implemented in a datagram network because in this type of network, a node or router does not know about the upstream node or the path. Choke packet. In back pressure, the warning goes from one node to its upstream node, up to the source. In the choke packet method, the warning is sent from the congested node to the source station directly; the intermediate nodes through which the packet has traveled are not informed to slow down. When a router in the Internet gets more IP datagrams than it can handle, it informs the source host using a source quench ICMP message. Implicit signaling. In implicit signaling, if any node is congested, there is no communication between the congested node and the source. The source guesses that there is congestion somewhere in the network from other symptoms.
For example, when a source machine sends several packets and there is no acknowledgement for a while, the source assumes that the network is congested. Some time later, the acknowledgement comes to the source. This delay in receiving an acknowledgement is interpreted as congestion in the network, and the source machine slows down the transmission. Explicit signaling. In explicit signaling, the congested node can explicitly send a signal to the source or destination. The explicit signaling method is different from the choke packet method: in the choke packet method, a separate packet is used to inform the upstream node about congestion, but in the explicit signaling method, the signal is included in the data packet or acknowledgement packet. Sometimes the congested node sends the signal to the receiver through a data packet, informing it to slow down. This is called forward signaling. And sometimes the signal is sent backwards to the sender using an acknowledgement packet; this is called backward signaling. 45. Congestion control in TCP: Congestion control in TCP. In the previous videos, we have seen what congestion control is and some techniques to handle congestion. In this video, we will see how TCP actually handles congestion control in the network. TCP's congestion control is based on window size. That means TCP adjusts its window size to control congestion. The window size is nothing but the number of packets present in the network. Usually the sender's window size is determined by the buffer space in the receiver: if the receiver's buffer can store four packets, the sender's window size is four. But we totally ignored another entity here, the network, with issues such as bandwidth. If the sender sends packets into the network very fast but the network is not able to handle the flow, it must tell the sender to slow down transmission. So in addition to the receiver's buffer size, the network is a second entity that determines the size of the sender's window.
Hence, the actual size of the window is determined as the minimum of these two. This is called the congestion window, cwnd, which indicates the number of outstanding bytes at any time; that is, it limits the total number of unacknowledged packets. Now let's see the congestion policies that TCP implements. Congestion policy. TCP's policy for handling congestion is based on three phases: slow start, congestion avoidance, and congestion detection. Slow start, or exponential increase. Before going into this topic, we need to understand round trip time, RTT. Round trip time is the duration in milliseconds it takes for a data packet to go from a source machine to a destination machine, plus the time for the acknowledgement of that packet to be received by the source. RTT is also called the ping time. We can find this round trip delay using the ping command in the command prompt: open the command prompt and type ping followed by any server you want to reach, and we will see something similar to this. One of the algorithms used in TCP congestion control is called slow start. However, it is not slow at all; it is exponential growth, starting with one packet and doubling every round trip. Now we will see how the slow start algorithm actually works. It first starts with a congestion window of one maximum segment size, MSS. So one packet is sent from sender to receiver. Then the TCP receiver sends back an acknowledgement to the sender; this is one RTT. The size of the congestion window increases by one MSS each time an acknowledgement is received, so the window size becomes two. Now two packets are sent from sender to receiver at a time. These two packets may reach the receiver with a little time difference between them if the network path is slow. Now the receiver sends those two acknowledgements to the sender; this is one RTT. When the first packet's acknowledgement is received, the window size becomes three.
And when the second packet's acknowledgement is received, the window becomes four. Simply put, two acknowledgements are received, so two is added to the previous window size of two, and the window size becomes four. Now the sender sends four packets to the receiver. These four packets will take an entire RTT to reach the receiver; that means the network cannot handle more than four packets at a time in this example. Anyhow, the receiver sends back the acknowledgements for the four packets, and the window size increases one by one from five to eight as each acknowledgement is received. In the next RTT, the sender sends eight packets into the network, but it can only handle four at a time. This creates congestion in the network, and packet loss occurs. Slow start cannot continue indefinitely; there must be a threshold to stop this phase. The sender keeps a threshold value to stop the slow start phase. This variable is called the slow start threshold, ssthresh. When the size of the congestion window reaches this threshold, slow start stops and the next phase, congestion avoidance, starts. In most implementations, the value of ssthresh is 65,535 bytes. Congestion avoidance, or additive increase. After the congestion window size reaches the threshold value, congestion avoidance takes over, because if the exponential growth continues, the network may lose some packets. In this algorithm, each time the whole window of segments is acknowledged, that is, one round, the size of the congestion window is increased by one, not by powers of two. In this picture, we can see that for every RTT, the congestion window increases by one. Here, one packet is sent in the first RTT, two packets are sent in the second RTT, three packets in the third RTT, and four packets in the next RTT. When there is no congestion, the congestion window continuously increases until the data transmission is over. But congestion is very common in shared networks. So what will happen when TCP detects congestion? This is where congestion detection comes in.
Congestion detection, or multiplicative decrease. When congestion occurs, the congestion window size must be decreased. The only way the sender can guess that congestion has occurred is in one of two cases: when a timer times out, or when three duplicate ACKs are received. In both cases, the threshold is dropped to one-half of the current window size. That is multiplicative decrease. cwnd = cwnd + 1 is the additive increase, and cwnd = cwnd / 2 is the multiplicative decrease. First case: a timeout is a serious situation. When some packets are lost in the network, no ACKs are received and no new packets are sent. The timer times out after a relatively long period of time, and the lost packet is retransmitted. If a timeout occurs, it implies there is a stronger possibility of congestion, so in this case TCP reacts strongly. It sets the threshold value to one-half of the current window size. Then it sets the congestion window size to one segment, and it starts the slow start phase again. After reaching the threshold, congestion avoidance takes place. This process is called TCP Tahoe. Take an example to better understand TCP Tahoe. Take the maximum segment size as one packet. At the beginning of transmission, the congestion window size was 32 packets, but the timeout timer fired and the threshold value is set to 16 packets. The congestion window then grows exponentially until it reaches the threshold value of 16. After reaching the threshold value, the window increases only by one segment for every RTT; that is linear growth. But on the eighth RTT, one packet is lost in the network. The threshold value is set to half the current window size, that is, 20 divided by 2 equals 10. Now the slow start phase starts again, and this process continues until all packets reach the destination. Second case: if three duplicate ACKs are received, there is a weaker possibility of congestion. Suppose we send packets 1, 2, 3, 4, 5, 6 and get back ACK 1, ACK 2, ACK 2, ACK 2, ACK 2.
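The Tahoe reaction to a timeout follows directly from the rules above and can be modelled in a few lines (a simplified segment-counting sketch; real implementations also enforce a small floor on the threshold, assumed here to be 2 segments):

```python
def tahoe_on_timeout(cwnd, ssthresh):
    """TCP Tahoe: on a timeout, halve the threshold and restart slow start."""
    ssthresh = max(cwnd // 2, 2)  # multiplicative decrease of the threshold
    cwnd = 1                      # back to one MSS; slow start begins again
    return cwnd, ssthresh

# Example from the text: the window was 20 segments when the loss occurred,
# so the new threshold is 20 / 2 = 10 and the window drops to 1.
print(tahoe_on_timeout(cwnd=20, ssthresh=16))  # (1, 10)
```

After this reset, growth proceeds exponentially up to the new threshold of 10 and linearly beyond it, as in the worked example.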
We can say packet 3 got lost in transmission, and we are stuck on ACK 2, which triggered three duplicate ACKs. In this case, TCP has a weaker reaction. Let us look into this graphical representation for better understanding. Initially, the slow start phase begins. After that, the congestion window increases linearly until a packet loss is detected by duplicate acknowledgements. TCP sets the threshold value to one-half of the current window size, that is, 38 divided by 2 equals 19. It sets the congestion window size to the value of the threshold, 19, and it starts the congestion avoidance phase, that is, additive increase. This process is called TCP Reno. Here we can see that it does not restart from a window size of one MSS, that is, the slow start phase. It starts from the halved threshold and increases linearly. This is called fast recovery, and it improves the performance of the network. 46. Quality of Service(QoS) - 1: Quality of service. Quality of service, QoS, is an internetworking issue. We can define quality of service as the capability of a network to provide better service to selected network traffic. It manages traffic to reduce packet loss and delays. The primary goal of QoS is to provide priority for a specific type of data. For example, real-time transmissions like video, audio, and online games need more priority than passive data like file transfers. QoS deals with four parameters to give better services: reliability, delay, jitter, and bandwidth. Reliability. Reliability is a characteristic that a packet in the network needs. If we lose a packet or an acknowledgement, it needs to be retransmitted. It is more important for file transfer, electronic mail, and Internet access to have reliable transmission than for telephony or audio conferencing. Guaranteeing the delivery of each packet is nothing but reliability. Delay. The time a packet takes to travel from source to destination is called delay.
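Reno's weaker reaction to three duplicate ACKs, compared with Tahoe's full restart, can be sketched the same way (simplified segment-counting model; the floor of 2 segments is an assumption, as above):

```python
def reno_on_triple_dupack(cwnd):
    """TCP Reno: on 3 duplicate ACKs, halve cwnd and skip slow start."""
    ssthresh = max(cwnd // 2, 2)  # multiplicative decrease
    cwnd = ssthresh               # fast recovery: resume additive increase here
    return cwnd, ssthresh

# Example from the text: a window of 38 segments halves to 19,
# and additive increase continues from 19 rather than from 1.
print(reno_on_triple_dupack(38))  # (19, 19)
```

Contrasting this with the Tahoe rule makes the difference plain: Tahoe resets cwnd to one segment, while Reno resumes from the halved threshold.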
The delay in file transfer or email services is tolerable, but audio or video conferencing and telephony need minimum delay. Jitter. The variation in delay for packets belonging to the same flow is referred to as jitter. For example, if three packets depart at times 1, 2, 3 and arrive at 31, 32, 33, they all have the same delay of 30 units. On the other hand, if the same packets arrive at 33, 31, and 38, they will have different delays. For audio and video transmission, the first case is completely acceptable; the second case is not. Bandwidth. A network needs appropriate bandwidth for a specific application. For example, video conferencing needs to send millions of bits per second; this needs higher bandwidth. File transfer, on the other hand, may not reach even a million bits per second, so a lower bandwidth is acceptable for this application. Now, we will see the techniques to improve quality of service. Techniques to improve QoS. We must improve the quality of service of a network for reliable delivery of data. There are some techniques to improve the quality of service; we discuss some of them. They are scheduling, traffic shaping, admission control, and resource reservation. Scheduling. Usually a router or a switch gets packets from different flows for processing. All the packets from different flows must be treated fairly to get good quality of service. We discuss some of the scheduling techniques that improve QoS. They are FIFO queuing, priority queuing, and weighted fair queuing. FIFO queuing. This is called first-in, first-out queuing. In this method, packets wait in a buffer queue until the node is ready to process them. If the arrival rate is higher than the processing rate, the queue will fill up, and new packets after that will be discarded. Here, the oldest entry is processed first; that is, the packets leave the queue in the order in which they arrived. Priority queuing.
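The jitter example above can be checked with a few lines, using the departure and arrival times from the text:

```python
def delays(departures, arrivals):
    """Per-packet delay; jitter is the variation among these values."""
    return [arrive - depart for depart, arrive in zip(departures, arrivals)]

print(delays([1, 2, 3], [31, 32, 33]))  # [30, 30, 30] -> uniform delay, no jitter
print(delays([1, 2, 3], [33, 31, 38]))  # [32, 29, 35] -> varying delay, jitter
```

The first case gives a constant 30-unit delay, which audio or video playback can absorb; the second gives three different delays, which disrupts real-time streams.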
Here we have two types of queues: a higher-priority queue and a lower-priority queue. The packets, as they enter, are first assigned to a priority class. That is, time-sensitive data goes to the higher-priority queue, while other data, like file transfers, goes to the lower-priority queue. The packets in the highest-priority queue are processed first, and the packets in the lowest-priority queue are processed last. A priority queue provides better quality of service than a FIFO queue, because higher-priority traffic can reach the destination with less delay. But there is a drawback here: if there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. This condition is called starvation. Weighted fair queuing. This technique is similar to priority queuing, but the queues are weighted based on their priority; higher priority means a higher weight. The system processes packets in each queue in a round-robin fashion, with the number of packets selected from each queue based on the corresponding weight. For example, if the weights are 2, 3, 1, then three packets are processed from the second queue, two packets from the first queue, and one from the third queue. 47. Quality of Service(QoS) - 2: Traffic shaping. Traffic shaping is a method to control the amount and the rate of the traffic sent to the network. Two popular techniques can control traffic: the leaky bucket algorithm and the token bucket algorithm. Leaky bucket algorithm. If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is water in the bucket. The rate at which the water leaks does not depend on the rate at which water is put into the bucket. The input rate can vary, but the output rate remains constant. Also, once the bucket is full to its capacity, any additional water entering it spills over the sides and is lost. Similarly, in networking, a technique called leaky bucket can smooth out bursty traffic.
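The weighted round-robin service described above can be sketched as follows. The queue contents are hypothetical; the weights 2, 3, 1 match the example in the text:

```python
from collections import deque

def weighted_round_robin(queues, weights, rounds=1):
    """Serve up to `weight` packets from each queue per round-robin pass."""
    served = []
    for _ in range(rounds):
        for queue, weight in zip(queues, weights):
            for _ in range(weight):
                if queue:                       # skip empty queues
                    served.append(queue.popleft())
    return served

# Weights 2, 3, 1: two packets from queue A, three from B, one from C per pass.
qa = deque(["A1", "A2", "A3"])
qb = deque(["B1", "B2", "B3", "B4"])
qc = deque(["C1", "C2"])
print(weighted_round_robin([qa, qb, qc], [2, 3, 1]))
# ['A1', 'A2', 'B1', 'B2', 'B3', 'C1']
```

Note this is plain weighted round robin; true weighted fair queuing also accounts for packet sizes, which this sketch ignores.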
Bursty chunks are stored in the bucket and sent out at an average rate. For example, sometimes the data comes in at ten megabits per second, sometimes at three megabits per second. Without a leaky bucket, the output would follow the same varying rate as the input, creating inconsistency. But with the leaky bucket technique, whatever the rate of data at the input, the output flow remains constant, for example, four megabits per second. This technique was proposed by Turner and is called the leaky bucket algorithm. Now we will see what the token bucket algorithm is. Token bucket algorithm. The leaky bucket has a drawback: if the bucket is full, further packets get discarded, and if the host is idle for some time, the bucket becomes empty. If a burst of data suddenly arrives after that, the leaky bucket still sends it out only at the limited rate. To solve this issue and to give credit to the idle host, the token bucket algorithm collects tokens whenever the host is idle. When a burst of data comes, it removes sufficient tokens from the bucket and sends out the burst at a higher rate. For each packet sent, one token is removed from the bucket. A packet is not sent out if there are insufficient tokens in the bucket, and in that case the contents of the bucket are not changed. The token bucket allows bursty traffic at a regulated maximum rate. To get the best of both, we can place the token bucket first, to allow bursts and credit the idle host, and then the leaky bucket, to control the outflow. Admission control. Admission control refers to the methods used by a router or a switch to accept or reject a flow based on predefined parameters called flow specifications. Before a router accepts a flow for processing, it checks the flow specifications to see whether its capacity, in terms of bandwidth, buffer size, and CPU speed, and its previous commitments to other flows allow it to handle the new flow.
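A token bucket can be sketched as below. This is a simplified discrete-time model: `rate` tokens are added per tick, capped at `capacity`, and each packet consumes exactly one token (real shapers usually count tokens in bytes):

```python
class TokenBucket:
    """Simplified token bucket: one token per packet, refilled each tick."""

    def __init__(self, capacity, rate):
        self.capacity = capacity  # maximum tokens the bucket can hold
        self.rate = rate          # tokens added per time tick
        self.tokens = 0

    def tick(self):
        """One time unit passes; tokens accumulate up to the bucket capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self):
        """Send a packet if a token is available; otherwise it must wait."""
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # insufficient tokens: bucket contents unchanged

bucket = TokenBucket(capacity=5, rate=1)
for _ in range(3):                 # host is idle for 3 ticks: tokens accumulate
    bucket.tick()
print([bucket.try_send() for _ in range(4)])
# [True, True, True, False] -- the 3 saved tokens permit a burst of 3 packets
```

The three ticks of idleness earn three tokens, so a burst of three packets goes out immediately, while the fourth must wait for the next refill, which is exactly the credit-the-idle-host behaviour described above.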
Resource reservation. A flow of data needs resources such as a buffer, bandwidth, and CPU time. The quality of service is improved if these resources are reserved before data transmission. Two prominent models have been designed to provide quality of service in the Internet: integrated services and differentiated services. We will see them in the next video. 48. Integrated and Differentiated services: Integrated services. In integrated services, we reserve the network resources before sending the data to achieve the quality of service. We have several flows in a network, and each flow needs to reserve bandwidth before data transmission, so this is a flow-based model. It guarantees delivery because it reserves bandwidth beforehand, but it is not scalable, as the number of users may increase and decrease dynamically while watching a video conference or a television program. We reserve bandwidth before the transmission, so if users increase, the bandwidth must increase. Increasing the bandwidth and other resources is time-consuming for big networks with millions of flows; that's why it is not scalable. The Resource Reservation Protocol, RSVP, is used for making the resource reservation before the data transmission. Integrated services is also called IntServ. Differentiated services. Differentiated services does not reserve network resources before sending data, but marks each packet with a priority or class and sends it into the network. Each packet is then forwarded based on that priority information by each router along the path. This class information is carried in the differentiated services, or DS, field of IPv4 and IPv6 packets. This is called per-hop behavior, PHB. It does not involve the whole path, but can be implemented locally on each router without advance setup. Differentiated services is also called DiffServ. DiffServ is the model used on modern IP networks because it is scalable and easy to maintain.
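The DS field mentioned above occupies what was originally the type-of-service byte of the IPv4 header; its upper six bits form the DiffServ code point, DSCP, that routers use to select a per-hop behavior. Extracting it can be sketched as below (the example byte value is illustrative):

```python
def dscp_from_ds_byte(ds_byte):
    """The DS field's upper 6 bits are the DSCP; the low 2 bits are used for ECN."""
    return ds_byte >> 2

# 0xB8 = 1011 1000 -> DSCP 46, the standard Expedited Forwarding class
# commonly used for low-delay traffic such as voice.
print(dscp_from_ds_byte(0xB8))  # 46
```

A router implementing DiffServ reads this value from each arriving packet and applies the matching queuing or shaping treatment locally, with no end-to-end reservation.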