Network goals

1. The main goal of networking is "resource sharing": to make all programs, data and equipment available to anyone on the network without regard to the physical location of the resource or of the user.

2. A second goal is to provide high reliability by having alternative sources of supply. For example, all files could be replicated on two or three machines, so that if one of them is unavailable, the other copies can be used.

3. Another goal is saving money. Small computers have a much better price/performance ratio than larger ones. Mainframes are roughly a factor of ten faster than the fastest single-chip microprocessors, but they cost a thousand times more. This imbalance has led many system designers to build systems consisting of powerful personal computers, one per user, with data kept on one or more shared file server machines. This goal leads to networks with many computers located in the same building. Such a network is called a LAN (local area network).

4. Another closely related goal is the ability to increase system performance as the workload grows by simply adding more processors. With a central mainframe, when the system is full it must be replaced by a larger one, usually at great expense and with even greater disruption to the users.

5. Computer networks provide a powerful communication medium. A file that is updated or modified on a network can be seen by the other users on the network immediately.

Topologies

A computer network is made of computers which are linked to one another with communication lines (network cables, etc.) and hardware elements (network adapters, as well as other equipment for ensuring that data travels correctly). The physical arrangement, that is, the spatial configuration of the network, is called the physical topology. The different kinds of topology are:
1. Bus topology
2. Star topology
3. Ring topology
4. Mesh topology
5. Tree topology

The logical topology, as opposed to the physical topology, refers to the way that data travels along the communication lines. The most common logical topologies are Ethernet, Token Ring and FDDI.

1. Bus topology

Bus topology is the simplest way a network can be organized. In bus topology, all computers are linked to the same transmission line by a cable, usually coaxial. The word "bus" refers to the physical line that joins all the machines on the network. This topology is easy to implement and functions simply; on the other hand, it is highly vulnerable, since if the shared line is defective, the whole network is affected.

Advantages: Any one computer or device being down does not affect the others.
Disadvantages: A large number of computers cannot be connected this way, and it is physically difficult to run the one communications line over a whole building, for example; a fault in the shared cable affects the whole network.

2. Star topology

In star topology, the network computers are linked to a piece of hardware called a hub. This is a box which contains a certain number of sockets into which the cables coming out of the computers can be plugged. Its role is to ensure communication between those sockets. Unlike networks built with bus topology, networks which use star topology are much less vulnerable, as any one connection can easily be removed by disconnecting it from the hub without paralyzing the rest of the network. The critical point in this network is the hub: without it, communication between the computers on the network is no longer possible. However, a star topology network is bulkier than a bus network, as additional hardware (the hub) is required.

Advantages: Gives close control of data. Each PC sees all the data, and the user always sees up-to-date data. If a computer other than the host fails, no other computer is affected.
Disadvantages: If the host computer or its software goes down, the whole network is down.
(A backup computer system would be necessary to keep going while repairs are made.)

3. Ring topology

In a ring-topology network, the computers take turns communicating, forming a loop in which each computer "has its turn to speak" after the one before it. In practice, the computers in a ring network are not cabled together in an actual loop. They are linked to a distributor (called an MAU, Multistation Access Unit) which manages communication between the computers linked to it by giving each of them its time to "speak". The two main logical topologies which use this physical topology are Token Ring and FDDI.

Advantages: Requires less cabling and so is less expensive.
Disadvantages: If one node goes down, it takes down the whole network.

4. Mesh topology

Mesh topologies involve the concept of routes. Unlike in each of the previous topologies, messages sent on a mesh network can take any of several possible paths from source to destination. (Recall that even in a ring, although two cable paths exist, messages can travel in only one direction.) Some WANs, most notably the Internet, employ mesh routing. A mesh network in which every device connects to every other is called a full mesh. As shown in the illustration below, partial mesh networks also exist, in which some devices connect only indirectly to others.

5. Tree topology

Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub devices connect directly to the tree bus, and each hub functions as the "root" of a tree of devices. This bus/star hybrid approach supports future expansion of the network much better than a bus (limited in the number of devices by the broadcast traffic it generates) or a star (limited by the number of hub connection points) alone.

Connecting Networks

Networks can also be connected to each other. There are difficulties in doing so, however; a combination of software and hardware must be used to do the job.
A gateway connects networks of different kinds, for example connecting a network of PCs to a mainframe network. This can be complex!

A bridge connects networks of the same type. This job is simple.

A router connects several networks. A router is smart enough to pick the right path for communications traffic. If there is a partial failure of the network, a router looks for an alternate route.

Suppose the accounting, advertising, and shipping departments of a company each have networks of PCs. These departments need to communicate with each other, but only sometimes. It would be easier and cheaper to connect them to each other than to put them all on the same larger network. The best arrangement would be for the departmental networks to be of the same kind, so that a bridge could be used.

Concept of Internet

The Internet is not a network at all, but a vast collection of different networks that use certain common protocols and provide certain common services.

Architecture of the Internet

A brief overview of the Internet today. Assume the client calls his or her ISP over a dial-up telephone line, as shown in Fig. 1-29. The modem is a card within the PC that converts the digital signals the computer produces to analog signals that can pass unhindered over the telephone system. These signals are transferred to the ISP's POP (Point of Presence), where they are removed from the telephone system and injected into the ISP's regional network. From this point on, the system is fully digital and packet switched. If the ISP is the local telco, the POP will probably be located in the telephone switching office where the telephone wire from the client terminates. If the ISP is not the local telco, the POP may be a few switching offices down the road.

The ISP's regional network consists of interconnected routers in the various cities the ISP serves. If a packet is destined for a host served directly by the ISP, the packet is delivered to the host.
Otherwise, it is handed over to the ISP's backbone operator. At the top of the chain are the major backbone operators, companies like AT&T and Sprint. They operate large international backbone networks, with thousands of routers connected by high-bandwidth fiber optics. Large corporations and hosting services that run server farms (machines that can serve thousands of Web pages per second) often connect directly to the backbone. Backbone operators encourage this direct connection by renting space in what are called carrier hotels, basically equipment racks in the same room as the router, to allow short, fast connections between server farms and the backbone.

If a packet given to the backbone is destined for an ISP or company served by the backbone, it is sent to the closest router and handed off there. However, many backbones of varying sizes exist in the world, so a packet may have to go to a competing backbone. To allow packets to hop between backbones, all the major backbones connect at the NAPs discussed earlier. Basically, a NAP is a room full of routers, at least one per backbone. A LAN in the room connects all the routers, so packets can be forwarded from any backbone to any other backbone. In addition to being interconnected at NAPs, the larger backbones have numerous direct connections between their routers, a technique known as private peering.

The 7 Layers of the ISO-OSI Network Model

1. Physical Layer
Concerned with the transmission of raw bits: how many volts represent a 0 and how many a 1, the number of bits per second to be transmitted, and whether transmission is two-way or one-way. Standardized protocols deal with the electrical, mechanical and signaling interfaces. Many standards have been developed, e.g. RS-232 (for serial communication lines). Example: X.21.

2. Data Link Layer
Handles errors from the physical layer. Groups bits into frames and ensures their correct delivery. It adds some bits at the beginning and end of each frame, plus a checksum. The receiver verifies the checksum.
If the checksum is not correct, it asks for retransmission (sends a control message).

The data link layer consists of two sublayers: Logical Link Control (LLC), which defines how data is transferred over the cable and provides data link service to the higher layers, and Medium Access Control (MAC), which defines who can use the network when multiple computers are trying to access it simultaneously (e.g. token passing, Ethernet [CSMA/CD]).

3. Network Layer
Concerned with the transmission of packets. It chooses the best path on which to send a packet (routing), which may be complex in a large network (e.g. the Internet): the shortest (distance) route vs. the route with the least delay, and static (long-term average) vs. dynamic (current load) routing. Two protocols are most widely used:

X.25: Connection-oriented. Used by public networks, telephone systems and the European PTTs. The sender transmits a call request at the outset to the destination; if the destination accepts the connection, it sends back a connection identifier.

IP (Internet Protocol): Connectionless. Part of the Internet protocol suite. An IP packet can be sent without a connection being established; each packet is routed to its destination independently.

4. Transport Layer
The network layer does not deal with lost messages; the transport layer ensures reliable service. It breaks the message (from the session layer) into smaller packets, assigns sequence numbers and sends them. Reliable transport connections are built on top of X.25 or IP. In the case of IP, lost packets must be handled and packets arriving out of order must be reordered.

TCP (Transmission Control Protocol): the Internet transport protocol. TCP/IP is widely used as the network/transport layer combination (UNIX).
UDP (User Datagram Protocol): the Internet connectionless transport-layer protocol. Application programs that do not need a connection-oriented protocol generally use UDP.

5. Session Layer
Mostly theory; very few applications use it. An enhanced version of the transport layer, providing dialog control and synchronization facilities. Rarely supported (the Internet suite does not support it).

6. Presentation Layer
Mostly theory; very few applications use it.
Concerned with the semantics of the bits: it defines records and the fields in them, and the sender can tell the receiver the format. It allows machines with different internal data representations to communicate. If implemented, it is the best layer for cryptography.

7. Application Layer
A collection of miscellaneous protocols for high-level applications: electronic mail, file transfer, connecting remote terminals, etc. Examples: SMTP, FTP, Telnet, HTTP.

TCP/IP reference model

Network Access Layer – The Network Access Layer is fairly self-explanatory: it interfaces with the physical network. It formats data and addresses it for subnets, based on physical hardware addresses. More importantly, it provides error control for data delivered on the physical network.

Internet Layer – The Internet Layer provides logical addressing. More specifically, the Internet layer relates physical addresses from the network access layer to logical addresses, such as IP addresses. This is vital for passing information to subnets that aren't on the same network as other parts of the network. This layer also provides routing that may reduce traffic, and supports delivery across an internetwork. (An internetwork is simply a greater network of LANs, perhaps for a large company or organization.)

Transport Layer – The Transport Layer provides flow control and error control, and serves as an interface for network applications. An example at the transport layer is TCP, a protocol that is connection-oriented. We may also use UDP, a connectionless means of transporting data.

Application Layer – Lastly, we have the Application Layer. We use this layer for troubleshooting, file transfer, internet activities, and a slew of other activities. This layer interacts with many types of applications, such as a database manager, email program, or Telnet.

TCP/IP Model vs OSI Model

1. The TCP/IP model was defined after the advent of the Internet; the OSI model was defined before it.
2. In TCP/IP, service interfaces and protocols were not clearly distinguished; in OSI they are clearly distinguished.
3. TCP/IP supports internetworking; internetworking is not supported in OSI.
4. TCP/IP is loosely layered; OSI imposes strict layering.
5. TCP/IP is a protocol-dependent standard; OSI is a protocol-independent standard.
6. TCP/IP is considered more credible; OSI less credible.
7. In TCP/IP, TCP delivers packets reliably while IP does not; in OSI, all packets are reliably delivered.

TRANSMISSION MEDIA

The means through which data is transferred from one place to another is called the transmission or communication medium. There are two categories of transmission media used in computer communications:

1. BOUNDED/GUIDED MEDIA
2. UNBOUNDED/UNGUIDED MEDIA

1. BOUNDED MEDIA:

Bounded media are the physical links through which signals are confined to a narrow path. They are also called guided media. Bounded media are made up of a conductor (usually copper) surrounded by a jacket material. Bounded media are good for LANs because they offer high speed, good security and low cost. However, they sometimes cannot be used over long distances. Three common types of bounded media are used for data transmission:

a) Coaxial cable
b) Twisted-pair cable
c) Fiber-optic cable

a) COAXIAL CABLE:

Coaxial cable is a very common and widely used communication medium; TV wire, for example, is usually coaxial. Coaxial cable gets its name because it contains two conductors that share the same axis. The center conductor in the cable is usually copper, either a solid wire or stranded material. Around this central conductor is a non-conductive material, usually a white plastic, used to separate the inner conductor from the outer conductor. The outer conductor is a fine mesh made from copper; it helps shield the cable from EMI. Outside the copper mesh is the final protective cover (as shown in the figure). The actual data travels through the center conductor in the cable.
EMI interference is caught by the outer copper mesh. There are different types of coaxial cable, varying in gauge and impedance. Gauge is the measure of the cable's thickness, expressed by the radio grade measurement, or RG number. The higher the RG number, the thinner the central conductor core; the lower the number, the thicker the core. Here are the most common coaxial standards:

50-Ohm RG-7 or RG-11: used with thick Ethernet.
50-Ohm RG-58: used with thin Ethernet.
75-Ohm RG-59: used with cable television.
93-Ohm RG-62: used with ARCNET.

CHARACTERISTICS OF COAXIAL CABLE
Low cost
Easy to install
Up to 10 Mbps capacity
Medium immunity from EMI
Medium attenuation

ADVANTAGES OF COAXIAL CABLE
Inexpensive
Easy to wire
Easy to expand
Moderate level of EMI immunity

DISADVANTAGES OF COAXIAL CABLE
A single cable failure can take down an entire network.

b) TWISTED-PAIR CABLE

The most popular network cabling is twisted pair. It is lightweight, easy to install, inexpensive and supports many different types of network. It also supports speeds of up to 100 Mbps. Twisted-pair cabling is made of pairs of solid or stranded copper wires twisted around each other. The twists are done to reduce vulnerability to EMI and crosstalk. The number of pairs in the cable depends on the type. The copper core is usually 22-AWG or 24-AWG, as measured on the American Wire Gauge standard. There are two types of twisted-pair cabling:

i. Unshielded twisted pair (UTP)
ii. Shielded twisted pair (STP)

i. Unshielded twisted pair (UTP)

UTP is the more common of the two. It can be either voice grade or data grade, depending on its rating. UTP cable normally has an impedance of 100 ohms. UTP costs less than STP and is easily available due to its many uses. There are five levels (categories) of data cabling:

Category 1: used in telephone lines and low-speed data cable.
Category 2: supports up to 4 Mbps.
Category 3: supports up to 16 Mbps; mostly used in 10 Mbps networks.
Category 4: used for longer distances and higher speeds; supports up to 20 Mbps.
Category 5: the highest rating for UTP cable; supports up to 100 Mbps.

UTP cables consist of 2 or 4 pairs of twisted wire. Cable with 2 pairs uses an RJ-11 connector, and 4-pair cable uses an RJ-45 connector.

Characteristics of UTP
Low cost
Easy to install
High-speed capacity
High attenuation
Susceptible to EMI
100 meter limit

Advantages of UTP
Easy installation
Capable of high speed for LANs
Low cost

Disadvantages of UTP
Short distances due to attenuation

ii. Shielded twisted pair (STP)

STP is similar to UTP but has a mesh shielding that protects it from EMI, which allows for higher transmission rates. IBM has defined cable types for STP cable:

Type 1: features two pairs of 22-AWG wire.
Type 2: includes Type 1 plus 4 telephone pairs.
Type 6: features two pairs of standard shielded 26-AWG wire.
Type 7: consists of 1 pair of standard shielded 26-AWG wire.
Type 9: consists of shielded 26-AWG wire.

Characteristics of STP
Medium cost
Easy to install
Higher capacity than UTP
Attenuation comparable to UTP
Medium immunity from EMI
100 meter limit

Advantages of STP:
Shielded
Faster than UTP and coaxial

Disadvantages of STP:
More expensive than UTP and coaxial
More difficult installation
High attenuation rate

c) Fiber Optics

Fiber-optic cable does not use electrical signals to transmit data; it uses light. In a fiber-optic cable, light moves in only one direction, so for two-way communication to take place a second connection must be made between the two devices. The cable is actually two strands; each strand is responsible for one direction of communication. A laser at one device sends pulses of light through the cable to the other device, where the pulses are translated into "1"s and "0"s. In the center of the fiber cable is a glass strand, or core.
The light from the laser moves through this glass to the other device. Around the internal core is a reflective material known as cladding. No light escapes the glass core because of this reflective cladding. Fiber-optic cable has a bandwidth of more than 2 Gbps (gigabits per second).

Characteristics of Fiber Optic Cable:
Expensive
Very hard to install
Capable of extremely high speeds
Extremely low attenuation
No EMI interference

Advantages of Fiber Optic Cable:
Fast
Low attenuation
No EMI interference

Disadvantages of Fiber Optics:
Very costly
Hard to install

Baseband and Broadband Transmission

The two ways to allocate the capacity of transmission media are baseband and broadband transmission. Baseband devotes the entire capacity of the medium to one communication channel. Baseband is the most common mode of operation; most LANs function in baseband mode. Baseband signaling can be accomplished with both analog and digital signals.

Broadband enables two or more communication channels to share the bandwidth of the communications medium. For example, the TV cable coming into your house from an antenna or a cable provider is a broadband medium. Many television signals can share the bandwidth of the cable because each signal is modulated using a separately assigned frequency. A television tuner chooses the channel to watch by selecting its frequency. This technique of dividing bandwidth into frequency bands is called frequency-division multiplexing (FDM) and works only with analog signals. Another technique, called time-division multiplexing (TDM), supports digital signals. Multiplexing is a technique that enables broadband media to support multiple data channels.

Baseband vs Broadband Transmission

In baseband transmission, the entire bandwidth of the cable is consumed by a single signal. In broadband transmission, signals are sent on multiple frequencies, allowing multiple signals to be sent simultaneously.
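The time-division multiplexing idea mentioned above can be sketched in a few lines of Python. This is a toy illustration with invented helper names, not a real link-layer implementation: each channel's data is chopped into fixed-size slots that are interleaved round-robin onto one shared "line", and the receiver recovers each channel by slot position.

```python
# Toy illustration of time-division multiplexing (TDM): data from several
# channels is interleaved into fixed time slots on one shared medium,
# and the receiver de-interleaves by slot position.

def tdm_multiplex(channels, slot_size=1):
    """Interleave equal-length channel streams into one sequence of slots."""
    slots = []
    for i in range(0, len(channels[0]), slot_size):
        for ch in channels:                 # one slot per channel, round robin
            slots.append(ch[i:i + slot_size])
    return slots

def tdm_demultiplex(slots, n_channels):
    """Recover each channel by taking every n-th slot."""
    return ["".join(slots[c::n_channels]) for c in range(n_channels)]

channels = ["AAAA", "BBBB", "CCCC"]
line = tdm_multiplex(channels)
# The shared line carries slots alternating between channels: A, B, C, A, B, C, ...
assert tdm_demultiplex(line, len(channels)) == channels
```

FDM would instead give each channel its own frequency band and let all of them transmit at once, which is why it needs analog signaling, while slot-based TDM fits digital transmission.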
Further differences between baseband and broadband signaling:

1. Baseband uses digital signaling; broadband uses analog signaling.
2. Baseband has no frequency-division multiplexing; with broadband, frequency-division multiplexing is possible.
3. Baseband transmission is bidirectional; broadband transmission is unidirectional.
4. A baseband signal travels over short distances; a broadband signal can travel over long distances before being attenuated.

Simplex, Duplex & Multiplex

Simplex transmission allows data to travel only in a single, pre-specified direction. For example, in a doorbell the signal can go only from the button to the chime. Two other examples are television and radio broadcasting. Simplex transmission is not often used in data communications because it is not possible to send error or control signals back to the transmitting end.

In half-duplex transmission, messages can move in either direction, but only one way at a time. Only one end transmits at a time; the other end receives. The press-to-talk radiophones used in police cars employ the half-duplex standard: only one person can talk at a time. Ethernet networks are often said to be "half-duplex".

Full-duplex transmission works like traffic on a busy two-way street: the flow moves in two directions at the same time. Full duplexing is ideal for hardware units that need to pass large amounts of data between each other, as in mainframe-to-mainframe communications.

Concepts of WAP Technology

WAP stands for Wireless Application Protocol. Per the dictionary definitions of these words, we have:

Wireless: Lacking or not requiring a wire or wires; pertaining to radio transmission.
Application: A computer program or piece of computer software that is designed to do a specific task.
Protocol: A set of technical rules about how information should be transmitted and received using computers.

WAP is thus the set of rules governing the transmission and reception of data by computer applications on, or via, wireless devices like mobile phones.
WAP allows wireless devices to view specially designed pages from the Internet, using only plain text and very simple black-and-white pictures.

WAP is a standardized technology for cross-platform, distributed computing, very similar to the Internet's combination of the Hypertext Markup Language (HTML) and the Hypertext Transfer Protocol (HTTP), except that it is optimized for devices with low display capability, low memory and low bandwidth, such as personal digital assistants (PDAs), wireless phones, and pagers. WAP is designed to scale across a broad range of wireless networks, like GSM, IS-95, IS-136 and PDC.

The Internet Model

The Internet model makes it possible for a client to reach services on a large number of origin servers, each addressed by a unique Uniform Resource Locator (URL). The content stored on the servers is of various formats, but HTML is the predominant one. HTML provides the content developer with a means to describe the appearance of a service in a flat document structure. If more advanced features like procedural logic are needed, scripting languages such as JavaScript or VBScript may be used.

The figure below shows how a WWW client requests a resource stored on a web server. On the Internet, standard communication protocols like HTTP and the Transmission Control Protocol/Internet Protocol (TCP/IP) are used.

The content available at the web server may be static or dynamic. Static content is produced once and not changed or updated very often, for example a company presentation. Dynamic content is needed when the information provided by the service changes more often, for example timetables, news, stock quotes and account information. Technologies such as Active Server Pages (ASP), the Common Gateway Interface (CGI), and Servlets allow content to be generated dynamically.

The WAP Model

The figure below shows the WAP programming model. Note the similarities with the Internet model.
Without the WAP Gateway/Proxy, the two models would be practically identical. The WAP Gateway/Proxy is the entity that connects the wireless domain with the Internet. Note that the request sent from the wireless client to the WAP Gateway/Proxy uses the Wireless Session Protocol (WSP); in essence, WSP is a binary version of HTTP.

A markup language, the Wireless Markup Language (WML), has been adopted for developing optimized WAP applications. To save valuable bandwidth in the wireless network, WML can be encoded into a compact binary format. Encoding WML is one of the tasks performed by the WAP Gateway/Proxy.

How the WAP Model Works

When it comes to actual use, WAP works like this:

1. The user selects an option on their mobile device that has a URL with Wireless Markup Language (WML) content assigned to it.
2. The phone sends the URL request via the phone network to a WAP gateway, using the binary-encoded WAP protocol.
3. The gateway translates this WAP request into a conventional HTTP request for the specified URL and sends it on to the Internet.
4. The appropriate web server picks up the HTTP request.
5. The server processes the request, just as it would any other request. If the URL refers to a static WML file, the server delivers it. If a CGI script is requested, it is processed and the content returned as usual.
6. The web server adds the HTTP header to the WML content and returns it to the gateway.
7. The WAP gateway compiles the WML into binary form.
8. The gateway then sends the WML response back to the phone.
9. The phone receives the WML via the WAP protocol.
10. The micro-browser processes the WML and displays the content on the screen.

WAP is a global standard developed by the WAP Forum for wireless devices to access Internet and telephony services. WAP can also be used to access data from corporate intranets through public or private IP networks.
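The gateway's translation step in the flow above (expanding the phone's compact request into an ordinary HTTP request) can be sketched as follows. This is a toy illustration: the compact request format and the function name are invented for the example, and real WSP uses a binary encoding defined by the WAP Forum; only the MIME type text/vnd.wap.wml is the genuine one for WML content.

```python
# Toy sketch of the WAP gateway's translation (steps 2-3 above): a compact
# request from the phone is expanded into a conventional HTTP request.
# The dict-based "compact request" stands in for the real binary WSP encoding.

def wap_to_http(compact_request):
    """Expand a simplified 'WSP-like' request into an HTTP/1.1 request string."""
    method, url = compact_request["method"], compact_request["url"]
    host, _, path = url.partition("/")          # crude URL split, enough for the sketch
    return (
        f"{method} /{path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Accept: text/vnd.wap.wml\r\n\r\n"     # gateway asks the server for WML
    )

request = {"method": "GET", "url": "example.com/index.wml"}
print(wap_to_http(request))
```

On the way back, the same gateway would perform the inverse job of step 7: stripping the HTTP framing and compiling the WML body into its compact binary form before radioing it to the phone.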
WAP - Key Features

A programming model similar to the Internet's
Though WAP is a new technology, it reuses the concepts found on the Internet. This reuse enables a quick introduction of WAP-based services, since both service developers and manufacturers are already familiar with these concepts.

Wireless Markup Language (WML)
Just as HTML is used to develop web-based applications, WML is a markup language used for authoring WAP services, fulfilling the same purpose as HTML does on the Web. In contrast to HTML, WML is designed to fit small handheld devices.

WMLScript
Just as JavaScript or VBScript is used to enhance the functionality of web applications, WMLScript can be used to enhance the functionality of a WAP service. It makes it possible to add procedural logic and computational functions to WAP-based services.

Wireless Telephony Application Interface (WTAI)
The WTAI is an application framework for telephony services. WTAI user agents are able to make calls and edit the phone book by calling special WMLScript functions or by accessing special URLs. If one writes WML decks containing names of people and their phone numbers, one may add them to the phone book or call them right away just by clicking the appropriate hyperlink on the screen.

Optimized protocol stack
The protocols used in WAP are based on well-known Internet protocols, such as HTTP and the Transmission Control Protocol (TCP), but they have been optimized to address the constraints of a wireless environment, such as low bandwidth and high latency. WAP is designed in a layered fashion so that it can be extensible, flexible, and scalable. As a result, the WAP protocol stack is divided into five layers:

Application Layer
Wireless Application Environment (WAE). This layer is of most interest to content developers because it contains, among other things, device specifications and the content-development programming languages, WML and WMLScript.
Session Layer
Wireless Session Protocol (WSP). Unlike HTTP, WSP has been designed by the WAP Forum to provide fast connection suspension and reconnection.

Transaction Layer
Wireless Transaction Protocol (WTP). WTP runs on top of a datagram service, such as the User Datagram Protocol (UDP) from the standard TCP/IP suite, and provides a simplified protocol suitable for low-bandwidth wireless stations.

Security Layer
Wireless Transport Layer Security (WTLS). WTLS incorporates security features that are based upon the established Transport Layer Security (TLS) protocol standard. It includes data-integrity checks, privacy, denial-of-service protection, and authentication services.

Transport Layer
Wireless Datagram Protocol (WDP). WDP allows WAP to be bearer-independent by adapting to the transport layer of the underlying bearer. WDP presents a consistent data format to the higher layers of the WAP protocol stack, thereby offering the advantage of bearer independence to application developers.

Each of these layers provides a well-defined interface to the layer above it. This means that the internal workings of any layer are transparent, or invisible, to the layers above it. The layered architecture also allows other applications and services to utilize the features provided by the WAP stack. This makes it possible to use the WAP stack for services and applications that are not currently specified by WAP.

The WAP protocol architecture is shown below alongside a typical Internet protocol stack. Note that the mobile network bearers in the lower part of the figure are not part of the WAP protocol stack.

The uppermost layer in the WAP stack, the Wireless Application Environment (WAE), provides an environment that enables a wide range of applications to be used on wireless devices. The WAP WAE programming model was introduced in the chapter "WAP - the wireless service enabler".
This chapter focuses on the various components of WAE:

Addressing model
A syntax suitable for naming resources stored on servers. WAP uses the same addressing model as the one used on the Internet, that is, Uniform Resource Locators (URLs).

Wireless Markup Language (WML)
A lightweight markup language designed to meet the constraints of a wireless environment, with low bandwidth and small handheld devices. The Wireless Markup Language is WAP's analogue of the HTML used on the WWW. WML is based on the Extensible Markup Language (XML).

WMLScript
A lightweight scripting language. WMLScript is based on ECMAScript, the same scripting language that JavaScript is based on. It can be used to enhance services written in WML by adding, to some extent, intelligence to the services, for example procedural logic, loops, conditional expressions, and computational functions.

Wireless Telephony Application (WTA, WTAI)
A framework and programming interface for telephony services. The Wireless Telephony Application (WTA) environment provides a means to create telephony services using WAP.

Hardware and Software Requirements

At a minimum, developing WAP applications requires a web server and a WAP simulator. Using simulator software while developing a WAP application is convenient, as all the required software can be installed on the development PC. Although software simulators are good in their own right, no WAP application should go into production without being tested on actual hardware. The following list gives a quick overview of the hardware and software needed to develop and test WAP applications:

a web server with a connection to the Internet
WML, to develop the WAP application
a WAP simulator, to test the WAP application
a WAP gateway
a WAP phone, for final testing

Microsoft IIS or Apache on Windows or Linux can be used as the web server, and the Nokia WAP Toolkit version 2.0 as the WAP simulator.
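As a small illustration of the web-server side of this setup, the sketch below serves a minimal WML deck using only the Python standard library (an assumption of this example; the setup described above uses IIS or Apache, and the deck contents are invented). The essential detail is the Content-Type header text/vnd.wap.wml, which is how WML content is identified to gateways and micro-browsers.

```python
# Minimal sketch: serve a WML deck so a WAP simulator can fetch it.
# The deck below is a trivial one-card example; the key detail is the
# MIME type "text/vnd.wap.wml" sent in the Content-Type header.

from http.server import BaseHTTPRequestHandler, HTTPServer

WML_DECK = """<?xml version="1.0"?>
<!DOCTYPE wml PUBLIC "-//WAPFORUM//DTD WML 1.1//EN"
  "http://www.wapforum.org/DTD/wml_1.1.xml">
<wml>
  <card id="main" title="Hello">
    <p>Hello from a WAP test page.</p>
  </card>
</wml>
"""

class WmlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = WML_DECK.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/vnd.wap.wml")  # identify WML
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Running `HTTPServer(("localhost", 8080), WmlHandler).serve_forever()` would serve the deck; a WAP simulator pointed at http://localhost:8080/ should then render the card.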
MODEMS
The term modem is a composite word that refers to the two functional entities that make up the device: a signal modulator and a signal demodulator. A modulator creates a band-pass analog signal from binary data; a demodulator recovers the binary data from the modulated signal. Modem stands for modulator and demodulator.
TELEPHONE MODEMS
Traditional telephone lines can carry frequencies between 300 and 3300 Hz, giving them a bandwidth of 3000 Hz. All of this range is used for transmitting voice, where a great deal of interference and distortion can be accepted without loss of intelligibility. The effective bandwidth of a telephone line being used for data transmission is 2400 Hz, covering the range from 600 to 3000 Hz.
MODULATION / DEMODULATION
[Figure: computer A - modem A - telephone network - modem B - computer B]
The figure shows the relationship of modems to a communication link. The computer on the left sends binary data to the modulator portion of its modem; the data is sent as an analog signal on the telephone lines. The modem on the right receives the analog signal, demodulates it through its demodulator, and delivers the data to the computer on the right. The communication can be bidirectional, which means the computer on the right can also send data to the computer on the left using the same modulation and demodulation processes.
Modem standards
V-series standards published by the ITU-T: V.32, V.32bis, V.34bis, V.90, V.92.
V.32
This modem uses a combined modulation and encoding technique called trellis-coded modulation. Trellis is essentially QAM plus a redundant bit. The data stream is divided into 4-bit sections. Instead of a quadbit, however, a pentabit is transmitted; the value of the extra bit is calculated from the values of the data bits. In any QAM system, the receiver compares each received signal point to all valid points in the constellation and selects the closest point as the intended value.
A signal distorted by transmission noise can arrive closer in value to an adjacent point than to the intended point, resulting in a misidentification of the point and an error in the received data. By adding a redundant bit to each quadbit, trellis-coded modulation increases the amount of information used to identify each bit pattern, thereby reducing the number of possible matches. V.32 calls for 32-QAM with a baud rate of 2400. Because only 4 bits of each pentabit represent data, the resulting speed is 4 x 2400 = 9600 bps.
[Bandwidth diagram: full-duplex, 2400 baud, 9600 bps, 2-wire; band from 600 to 3000 Hz, centred on 1800 Hz]
V.32bis: The V.32bis modem supports 14,400-bps transmission using 128-QAM (6 data bits per symbol at 2400 baud).
V.34bis: The V.34bis modem supports 28,800-bps transmission with a 960-point constellation, or a bit rate of 33,600 bps with a 1664-point constellation.
V.90: Traditional modems have a limitation on the data rate. V.90 modems with a bit rate of 56,000 bps, called 56K modems, are available. The downloading rate is 56 kbps, while the uploading rate is a maximum of 33.6 kbps.
Traditional modems
In traditional modems, data exchange is between two computers, A and B, through the digital telephone network.
[Figure: in the A-to-B direction, PCM sampling and quantization noise occur in the telco office near A; in the B-to-A direction, they occur in the telco office near B]
After modulation by the modem, an analog signal reaches the telephone company switching station, where it is sampled and digitized to be passed through the digital network. The quantization noise introduced into the signal at the sampling point limits the data rate according to the channel capacity. This limit is 33.6 kbps.
56K modems
Communication today is via the Internet. In uploading, the analog signal must still be sampled at the switching station, which means the uploading data rate is limited to 33.6 kbps.
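The speed arithmetic for the V-series modems above can be checked with a few lines of Python. This is only an illustrative sketch; the helper name is ours, and the symbol sizes are those given in the text (4 data bits per pentabit for V.32, 6 data bits per 128-QAM symbol for V.32bis).

```python
def data_rate(baud, data_bits_per_symbol):
    # Only the data bits of each transmitted symbol count toward throughput;
    # the redundant trellis bit carries no user data.
    return baud * data_bits_per_symbol

assert data_rate(2400, 4) == 9600    # V.32: pentabit symbols, 4 data bits
assert data_rate(2400, 6) == 14400   # V.32bis: 128-QAM, 6 data bits
```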
There is no sampling in downloading; the data rate in downloading is therefore 56 kbps.
[Figure: 56K modems - uploading passes through PCM sampling at the switching station (quantization noise); downloading from the ISP server involves no sampling and no quantization noise]
V.92: The standard above V.90 is called V.92. These modems can adjust their speed, and if the noise allows, they can upload data at a rate of 48 kbps. The modem has additional features; for example, it can interrupt the Internet connection when there is an incoming call, if the line has call-waiting service.
RS232 INTERFACE
RS232 is a standard interface defined by the EIA, and RS232C is the latest version of this interface.
INTERFACING WITH RS232
It expects a modem to be connected at both the receiving and transmitting ends. The modem is termed the DCE (Data Communication Equipment), and the computer with which the modem is interfaced is called the DTE (Data Terminal Equipment). The DCE and DTE are linked via a cable whose length does not exceed 50 feet. The DTE has a 25-pin male connector and the DCE has a 25-pin female connector.
FEATURES OF THE RS232 INTERFACE
1. RS232 signal levels
The RS232 standard follows negative logic: logic 1 is represented by a negative voltage and logic 0 by a positive voltage. Level 1 varies from -3 to -15 V and level 0 varies from +3 to +15 V.
2. RS232 signals
SL NO   PIN NUMBER   SIGNAL       SIGNAL NAME
1       1            ---          Frame Ground
2       2            TXD          Transmit Data
3       3            RXD          Receive Data
4       4            RTS          Request To Send
5       5            CTS          Clear To Send
6       6            DSR          Data Set Ready
7       7            SG           Signal Ground
8       8            RLSD or CD   Received Line Signal Detect (Carrier Detect)
9       20           DTR          Data Terminal Ready
10      22           RI           Ring Indicator
COMMUNICATION BETWEEN DCE AND DTE
Before sending data to the other end, the DTE requests permission from the modem by issuing the RTS signal. The modem has a method of finding out whether a telephone line is free and whether the modem at the other end is ready. When the modem finds that the communication path is ready, it issues the CTS signal to the DTE as an acknowledgement.
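As a small illustration of the negative-logic levels above, an RS232 receiver's decision can be sketched as follows. The function is ours, purely illustrative, and simply encodes the voltage ranges given in the text.

```python
def rs232_logic(voltage):
    """Map an RS232 line voltage to a logic level (negative logic).

    -3 V to -15 V  -> logic 1
    +3 V to +15 V  -> logic 0
    anything else  -> undefined (transition region or out of spec)
    """
    if -15 <= voltage <= -3:
        return 1
    if 3 <= voltage <= 15:
        return 0
    return None

# rs232_logic(-12) -> 1, rs232_logic(5) -> 0, rs232_logic(0) -> None
```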
The DTE issues the DTR signal when it is powered on, error free and ready for a logical connection through the modem. The modem issues the DSR signal to indicate that it is powered on and error free. Data is transferred on the TXD signal from DTE to DCE, and the RXD signal carries data from DCE to DTE. The RI and RLSD signals are used with dialed modems, when the telephone link is shared.
[Figure: DTE (terminal, 25-pin male connector) linked to DCE (modem, 25-pin female connector)]
Serial Communication Standards
RS232 is the most commonly used standard, because almost every computer is equipped with one or more serial ports. Because RS232 signals share a common ground, they are sensitive to interference, so the distance is limited to 15 meters. RS232 can only be used for point-to-point connections (one sender, one recipient).
RS422 is a high-speed digital interface. Unlike RS232, which uses signals referenced to ground, RS422 receivers look for the difference between two wires. By twisting the two wires into a "twisted pair", any noise or interference picked up on one wire will also be picked up on the other. Because both wires pick up the same interference, the differential pair shifts in voltage level with reference to ground, but the two wires do not change with respect to each other. The receivers look only at the difference in voltage level between the two wires, not at the voltage to ground.
RS423 is a not commonly used enhanced version of the RS232 standard. The only difference is that it can be used over greater distances. The interface looks as if it uses differential signals, but in fact all B pins are connected to ground. RS423 can be used for point-to-point or multidrop (one sender, up to 10 recipients) communications.
RS449 is a not commonly used enhanced version of the RS232 standard. Another name for RS449 is V.11. Like RS485 and RS422, it uses differential signals over twisted-pair cables, which reduces errors caused by noise or interference.
RS449 can be used for point-to-point or multidrop (one sender, up to 10 recipients) communications.
RS485 is the most enhanced RS232-based serial interface available. It can handle high speeds and long cable distances, up to 4000 feet. Like RS449 and RS422, it uses differential signals over twisted-pair cables, which reduces errors caused by noise or interference. RS485 uses multipoint technology: it is possible to connect 32 devices on a single RS485 bus. Connection speeds can be as high as 35 Mbps.
Specifications for RS232, RS423, RS422, and RS485
X.21 Digital Interface
CCITT X.21 is a physical and electrical interface that uses two types of circuits: balanced (X.27/V.11) and unbalanced (X.26/V.10). CCITT X.21 calls out the DA-15 (also known as DB-15) connector. The physical interface between the DTE and the local PTT-supplied DCE is defined in ITU-T recommendation X.21. The DCE provides a full-duplex, bit-serial, synchronous transmission path between the DTE and the local PSE. It can operate at data rates from 600 bps to 64 kbps. A second standard, X.21bis, has been defined for use on existing (analog) networks. X.21bis is a subset of EIA-232D/V.24, therefore allowing existing user equipment to be readily interfaced using this standard. It should perhaps be emphasized here that V.24 defines the data terminal equipment's interface to the modem and is not concerned with the interface between the modem and the line itself. The modems themselves therefore form part of the conceptual physical connection. The V.24 interface is thus independent of both the modulation technique and the data throughput rate. The X.21 interface protocol is concerned only with the set-up and clearing operations between DTE and DCE associated with each call. The control of the ensuing data transfer is the responsibility of the link layer.
X.21 Overview
X.21 is a state-driven protocol running full duplex at 9600 bps to 64 kbps with subscriber networks.
It is a circuit-switching protocol using synchronous ASCII with odd parity to connect and disconnect a subscriber to the public switching network. The data-transfer phase is transparent to the network: any data can be transferred through the network after call establishment has been made successfully via the X.21 protocol. The call-control phases used were defined in the CCITT (now ITU) 1988 "Blue Book" Recommendations X.1 - X.32.
Signals Provided
The signals of the X.21 interface are presented on a 15-pin connector defined by ISO Document 4903. The electrical characteristics are defined in CCITT Recommendations X.26 and X.27, which refer to CCITT Recommendations V.10 and V.11. X.21 provides eight signals:
Signal Ground (G) - Provides the reference for the logic states of the other circuits. This signal may be connected to the protective ground (earth).
DTE Common Return (Ga) - Used only in unbalanced-type configurations (X.26); provides the reference ground for receivers in the DCE interface.
Transmit (T) - Carries the binary signals with data from the DTE to the DCE. This circuit is used in the data-transfer phase and in call-control phases from DTE to DCE (during Call Connect or Call Disconnect).
Receive (R) - Carries the binary signals with data from the DCE to the DTE, during both the data-transfer and call-control phases.
Control (C) - Controlled by the DTE to indicate to the DCE the meaning of the data sent on the Transmit circuit. This circuit must be ON during the data-transfer phase and can be ON or OFF during call-control phases, as defined by the protocol.
Indication (I) - The DCE controls this circuit to indicate to the DTE the type of data sent on the Receive line. During the data phase, this circuit must be ON; it can be ON or OFF during call control, as defined by the protocol.
Signal Element Timing (S) - Provides the DTE or DCE with timing information for sampling the Receive line or Transmit line. The DTE samples at the correct instant to determine whether a binary 1 or 0 is being sent by the DCE.
The DCE samples to accurately recover signals at the correct instant. This signal is always ON.
Byte Timing (B) - This circuit is normally ON and provides the DTE with 8-bit byte element timing. The circuit transitions to OFF when the Signal Element Timing circuit samples the last bit of an 8-bit byte. Call-control characters must align with the B lead during call-control phases. During the data-transfer phase, the communicating devices bilaterally agree to use the B lead to define the end of each transmitted or received byte. The C and I leads then only monitor and record changes in this condition when the B lead changes from OFF to ON, although the C and I leads may be altered by the transitions on the S lead. This lead is frequently not used.
X.21 Protocol Operation
As stated previously, X.21 is a state protocol. Both the DTE and DCE can be in a Ready or Not-Ready state. The Ready state for the DTE is indicated by a continuous transmission of binary 1s on the T lead. The Ready state for the DCE is a continuous transmission of binary 1s on the R lead. During this continuous transmission of the Ready state, the control leads are OFF. During the Not-Ready state, the DCE transmits binary 0s on the R lead with the I lead in the OFF state. The DTE Uncontrolled Not-Ready state is indicated by transmission of binary 0s with the C lead in the OFF state; it signifies that the DTE is unable to accept calls due to an abnormal condition. The DTE Controlled Not-Ready state sends a pattern of alternating 1s and 0s on the T lead with the C lead OFF. This state indicates that the DTE is operational but unable to accept incoming calls.
The characters sent between the DTE and DCE during call-control phases are International Alphabet 5 (IA5), defined by CCITT Recommendation V.3. At least two Sync characters must precede all sequences of characters sent between the DTE and DCE, to establish 8-bit byte synchronization between the transmitter and the receiver.
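The DTE quiescent states just described (Ready, Uncontrolled Not-Ready, Controlled Not-Ready) are distinguished purely by the bit pattern on the T lead while the C lead is OFF. A minimal sketch of that classification, assuming a short observed sample of T-lead bits; the function and its return strings are ours, not part of the Recommendation:

```python
def dte_state(t_bits, c_lead_on):
    """Classify the DTE quiescent state from an observed T-lead bit pattern,
    following the X.21 state descriptions above (illustrative sketch)."""
    if c_lead_on:
        return "active"                      # C ON: not a quiescent state
    if all(b == 1 for b in t_bits):
        return "ready"                       # continuous 1s on T, C OFF
    if all(b == 0 for b in t_bits):
        return "uncontrolled not-ready"      # continuous 0s on T, C OFF
    if all(t_bits[i] != t_bits[i + 1] for i in range(len(t_bits) - 1)):
        return "controlled not-ready"        # alternating 1s and 0s, C OFF
    return "unknown"
```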
If the Byte Timing (B) lead is used, these Sync characters must align with the B lead timing signals.
Need for the data link layer
The data link layer transforms the physical layer, a raw transmission facility, into a link responsible for node-to-node (hop-to-hop) communication. Specific responsibilities of the data link layer include
1. framing
2. addressing
3. flow control
4. error control
5. media access control
The data link layer divides the stream of bits received from the network layer into manageable data units called frames. It adds a header to the frame to define the addresses of the sender and receiver of the frame. If the rate at which data is absorbed by the receiver is less than the rate at which it is produced by the sender, the data link layer imposes a flow control mechanism to avoid overwhelming the receiver. The data link layer also adds reliability to the physical layer by adding mechanisms to detect and retransmit damaged, duplicate, or lost frames. When two or more devices are connected to the same link, data link layer protocols are necessary to determine which device has control over the link at any given time.
Reliable Transmission
Frames are sometimes corrupted while in transit, and an error code like CRC is used to detect such errors. While some error codes are strong enough to also correct errors, in practice the overhead is typically too large to handle the range of bit and burst errors that can be introduced on a network link. Even when error-correcting codes are used (e.g., on wireless links), some errors will be too severe to be corrected. As a result, some corrupt frames must be discarded. A link-level protocol that wants to deliver frames reliably must somehow recover from these discarded (lost) frames. This is usually accomplished using a combination of two fundamental mechanisms: acknowledgments and timeouts.
An acknowledgment (ACK for short) is a small control frame that a protocol sends back to its peer saying that it has received an earlier frame. By control frame we mean a header without any data, although a protocol can piggyback an ACK on a data frame it just happens to be sending in the opposite direction. The receipt of an acknowledgment indicates to the sender of the original frame that its frame was successfully delivered. If the sender does not receive an acknowledgment after a reasonable amount of time, then it retransmits the original frame. This action of waiting a reasonable amount of time is called a timeout. The general strategy of using acknowledgments and timeouts to implement reliable delivery is sometimes called automatic repeat request (normally abbreviated ARQ).
Stop-and-Wait Protocol
Figure 2.20 Timeline for stop-and-wait with 1-bit sequence number.
The simplest ARQ scheme is the stop-and-wait algorithm. The idea of stop-and-wait is straightforward: after transmitting one frame, the sender waits for an acknowledgment before transmitting the next frame. If the acknowledgment does not arrive after a certain period of time, the sender times out and retransmits the original frame. Figure 2.19 illustrates four different scenarios that result from this basic algorithm. The figure is a timeline, a common way to depict a protocol's behavior: the sending side is represented on the left, the receiving side is depicted on the right, and time flows from top to bottom. Figure 2.19(a) shows the situation in which the ACK is received before the timer expires; (b) and (c) show the situations in which the original frame and the ACK, respectively, are lost; and (d) shows the situation in which the timeout fires too soon. By "lost" we mean that the frame was corrupted while in transit, that this corruption was detected by an error code on the receiver, and that the frame was subsequently discarded. (As an aside, the use of error-correcting codes in networking is sometimes referred to as forward error correction (FEC), because the correction of errors is handled "in advance" by sending extra information, rather than waiting for errors to happen and dealing with them later by retransmission.)
There is one important subtlety in the stop-and-wait algorithm. Suppose the sender sends a frame and the receiver acknowledges it, but the acknowledgment is either lost or delayed in arriving. This situation is illustrated in timelines (c) and (d) of Figure 2.19. In both cases, the sender times out and retransmits the original frame, but the receiver will think that it is the next frame, since it correctly received and acknowledged the first frame. This has the potential to cause duplicate copies of a frame to be delivered. To address this problem, the header for a stop-and-wait protocol usually includes a 1-bit sequence number, that is, the sequence number can take on the values 0 and 1, and the sequence numbers used for each frame alternate, as illustrated in Figure 2.20. Thus, when the sender retransmits frame 0, the receiver can determine that it is seeing a second copy of frame 0 rather than the first copy of frame 1 and therefore can ignore it (the receiver still acknowledges it, in case the first ACK was lost).
The main shortcoming of the stop-and-wait algorithm is that it allows the sender to have only one outstanding frame on the link at a time, and this may be far below the link's capacity. Consider, for example, a 1.5-Mbps link with a 45-ms round-trip time. This link has a delay × bandwidth product of 67.5 Kb, or approximately 8 KB. Since the sender can send only one frame per RTT, and assuming a frame size of 1 KB, this implies a maximum sending rate of BitsPerFrame ÷ TimePerFrame = 1024 × 8 ÷ 0.045 = 182 Kbps, or about one-eighth of the link's capacity.
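The arithmetic in this example can be checked directly. The following sketch just restates the numbers from the text (1.5-Mbps link, 45-ms RTT, 1-KB frames); the variable names are ours.

```python
# Reproduce the worked example: 1.5-Mbps link, 45-ms RTT, 1-KB frames.
bandwidth = 1.5e6            # link bandwidth in bits per second
rtt = 0.045                  # round-trip time in seconds
frame_bits = 1024 * 8        # one 1-KB frame, in bits

delay_bw = bandwidth * rtt               # bits in flight needed to fill the pipe
rate = frame_bits / rtt                  # stop-and-wait sends one frame per RTT
frames_to_fill = delay_bw / frame_bits   # outstanding frames to keep the pipe full

# delay_bw       ~ 67,500 bits (67.5 Kb, about 8 KB)
# rate           ~ 182,000 bps, about one-eighth of 1.5 Mbps
# frames_to_fill ~ 8
```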
To use the link fully, then, we'd like the sender to be able to transmit up to eight frames before having to wait for an acknowledgment.
Sliding Window
Consider again the scenario in which the link has a delay × bandwidth product of 8 KB and frames are of 1-KB size. We would like the sender to be ready to transmit the ninth frame at pretty much the same moment that the ACK for the first frame arrives. The algorithm that allows us to do this is called sliding window, and an illustrative timeline is given in Figure 2.21.
The Sliding Window Algorithm
The sliding window algorithm works as follows. First, the sender assigns a sequence number, denoted SeqNum, to each frame. For now, let's ignore the fact that SeqNum is implemented by a finite-size header field and instead assume that it can grow infinitely large. The sender maintains three variables: the send window size, denoted SWS, gives the upper bound on the number of outstanding (unacknowledged) frames that the sender can transmit; LAR denotes the sequence number of the last acknowledgment received; and LFS denotes the sequence number of the last frame sent. The sender also maintains the following invariant:
LFS − LAR ≤ SWS
When an acknowledgment arrives, the sender moves LAR to the right, thereby allowing the sender to transmit another frame. Also, the sender associates a timer with each frame it transmits, and it retransmits the frame should the timer expire before an ACK is received. Notice that the sender has to be willing to buffer up to SWS frames, since it must be prepared to retransmit them until they are acknowledged.
The receiver maintains the following three variables:
1. RWS, the receive window size, gives the upper bound on the number of out-of-order frames that the receiver is willing to accept;
2. LAF denotes the sequence number of the largest acceptable frame; and
3. LFR denotes the sequence number of the last frame received.
The receiver also maintains the following invariant: LAF − LFR ≤ RWS This situation is illustrated in Figure 2.23. When a frame with sequence number SeqNum arrives, the receiver takes the following action. If SeqNum ≤ LFR or SeqNum > LAF, then the frame is outside the receiver’s window and it is discarded. If LFR < SeqNum ≤ LAF, then the frame is within the receiver’s window and it is accepted. Now the receiver needs to decide whether or not to send an ACK. Let SeqNumToAck denote the largest sequence number not yet acknowledged, such that all frames with sequence numbers less than or equal to SeqNumToAck have been received. The receiver acknowledges the receipt of SeqNumToAck, even if higher-numbered packets have been received. This acknowledgment is said to be cumulative. It then sets LFR = SeqNumToAck and adjusts LAF = LFR + RWS. For example, suppose LFR = 5 (i.e., the last ACK the receiver sent was for sequence number 5), and RWS = 4. This implies that LAF = 9. Should frames 7 and 8 arrive, they will be buffered because they are within the receiver’s window. However, no ACK needs to be sent since frame 6 is yet to arrive. Frames 7 and 8 are said to have arrived out of order. (Technically, the receiver could resend an ACK for frame 5 when frames 7 and 8 arrive.) Should frame 6 then arrive— perhaps it is late because it was lost the first time and had to be retransmitted, or perhaps it was simply delayed—the receiver acknowledges frame 8, bumps LFR to 8, and sets LAF to 12. If frame 6 was in fact lost, then a timeout will have occurred at the sender, causing it to retransmit frame 6. We observe that when a timeout occurs, the amount of data in transit decreases, since the sender is unable to advance its window until frame 6 is acknowledged. This means that when packet losses occur, this scheme is no longer keeping the pipe full. The longer it takes to notice that a packet loss has occurred, the more severe this problem becomes. 
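The receiver-side bookkeeping in this example can be sketched in a few lines of Python. The variable names LFR, RWS, and LAF follow the text; the class itself and its cumulative-ACK return convention are an illustrative sketch, not a complete protocol implementation (it omits the optional duplicate ACKs mentioned below).

```python
class SlidingWindowReceiver:
    """Receiver side of the sliding window algorithm (cumulative ACKs)."""

    def __init__(self, rws, lfr=-1):
        self.RWS = rws        # receive window size
        self.LFR = lfr        # last frame received (in order)
        self.buffer = {}      # out-of-order frames held back

    @property
    def LAF(self):            # largest acceptable frame: LAF = LFR + RWS
        return self.LFR + self.RWS

    def receive(self, seq_num):
        """Return the cumulative ACK to send, or None if no new ACK is due."""
        if seq_num <= self.LFR or seq_num > self.LAF:
            return None       # outside the receiver's window: discard
        self.buffer[seq_num] = True
        if seq_num != self.LFR + 1:
            return None       # out of order: buffer it, no new ACK yet
        # Advance LFR across every contiguous buffered frame.
        while self.LFR + 1 in self.buffer:
            self.LFR += 1
            del self.buffer[self.LFR]
        return self.LFR       # cumulative ACK: everything up to LFR received
```

Replaying the example from the text (LFR = 5, RWS = 4, so LAF = 9): frames 7 and 8 are buffered with no ACK sent; when frame 6 finally arrives, the receiver acknowledges 8 and the window slides so that LAF becomes 12.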
Notice that in this example, the receiver could have sent a negative acknowledgment (NAK) for frame 6 as soon as frame 7 arrived. However, this is unnecessary, since the sender's timeout mechanism is sufficient to catch this situation, and sending NAKs adds additional complexity to the receiver. Also, as we mentioned, it would have been legitimate to send additional acknowledgments of frame 5 when frames 7 and 8 arrived; in some cases, a sender can use duplicate ACKs as a clue that a frame was lost. Both approaches help to improve performance by allowing early detection of packet losses. Yet another variation on this scheme would be to use selective acknowledgments. That is, the receiver could acknowledge exactly those frames it has received, rather than just the highest-numbered frame received in order. So, in the above example, the receiver could acknowledge the receipt of frames 7 and 8. Giving more information to the sender makes it potentially easier for the sender to keep the pipe full, but adds complexity to the implementation.
The sending window size is selected according to how many frames we want to have outstanding on the link at a given time; SWS is easy to compute for a given delay × bandwidth product. On the other hand, the receiver can set RWS to whatever it wants. Two common settings are RWS = 1, which implies that the receiver will not buffer any frames that arrive out of order, and RWS = SWS, which implies that the receiver can buffer any of the frames the sender transmits. It makes no sense to set RWS > SWS, since it's impossible for more than SWS frames to arrive out of order.
Frame Order and Flow Control
The sliding window protocol is perhaps the best-known algorithm in computer networking. It can be used to serve three different roles. The first role is to reliably deliver frames across an unreliable link. In general, the algorithm can be used to reliably deliver messages across an unreliable network. This is the core function of the algorithm.
The second role that the sliding window algorithm can serve is to preserve the order in which frames are transmitted. Since each frame has a sequence number, the receiver makes sure that it does not pass a frame up to the next-higher-level protocol until it has already passed up all frames with a smaller sequence number. That is, the receiver buffers (i.e., does not pass along) out-of-order frames.
The third role that the sliding window algorithm sometimes plays is to support flow control, a feedback mechanism by which the receiver is able to throttle the sender. Such a mechanism is used to keep the sender from overrunning the receiver, that is, from transmitting more data than the receiver is able to process. This is usually accomplished by augmenting the sliding window protocol so that the receiver not only acknowledges frames it has received, but also informs the sender of how many frames it has room to receive. The number of frames that the receiver is capable of receiving corresponds to how much free buffer space it has.
HDLC - Bit-Oriented Protocols
Framing: A sequence of bits is transmitted over a point-to-point link from adaptor to adaptor. In packet-switched networks, blocks of data (called frames at this level), not bit streams, are exchanged between nodes. It is the network adaptor that enables the nodes to exchange frames. When node A wishes to transmit a frame to node B, it tells its adaptor to transmit a frame from the node's memory. This results in a sequence of bits being sent over the link. The adaptor on node B then collects together the sequence of bits arriving on the link and deposits the corresponding frame in B's memory. Recognizing exactly what set of bits constitutes a frame, that is, determining where the frame begins and ends, is the central challenge faced by the adaptor; this is the framing problem. There are several ways to address the framing problem:
1. Byte-Oriented Protocols (BISYNC, PPP, DDCMP)
2.
Bit-Oriented Protocols (HDLC)
HDLC is a bit-oriented protocol. Unlike byte-oriented protocols, a bit-oriented protocol is not concerned with byte boundaries; it simply views the frame as a collection of bits. These bits might come from some character set, such as ASCII, they might be pixel values in an image, or they could be instructions and operands from an executable file. The Synchronous Data Link Control (SDLC) protocol developed by IBM is an example of a bit-oriented protocol; SDLC was later standardized by the ISO as the High-Level Data Link Control (HDLC) protocol.
HDLC denotes both the beginning and the end of a frame with the distinguished bit sequence 01111110. This sequence is also transmitted during any time that the link is idle, so that the sender and receiver can keep their clocks synchronized. Because this sequence might appear anywhere in the body of the frame (in fact, the bits 01111110 might cross byte boundaries), bit-oriented protocols use the analog of the DLE (data-link-escape) character, a technique known as bit stuffing.
Bit stuffing in the HDLC protocol works as follows. On the sending side, any time five consecutive 1s have been transmitted from the body of the message (i.e., excluding when the sender is trying to transmit the distinguished 01111110 sequence), the sender inserts a 0 before transmitting the next bit. On the receiving side, should five consecutive 1s arrive, the receiver makes its decision based on the next bit it sees (i.e., the bit following the five 1s). If the next bit is a 0, it must have been stuffed, and so the receiver removes it. If the next bit is a 1, then one of two things is true: either this is the end-of-frame marker or an error has been introduced into the bit stream.
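The stuffing rule just described can be sketched in Python. This is an illustrative sketch of frame-body stuffing only (flag handling and error cases are omitted), and the function names are ours.

```python
def bit_stuff(bits):
    """After every five consecutive 1s in the frame body, insert a 0
    so the flag pattern 01111110 can never appear in the payload."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Receiver side: after five consecutive 1s, the next bit must be a
    stuffed 0, so drop it (a 1 there would be a flag or an error)."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        i += 1
        if run == 5:
            i += 1          # skip the 0 stuffed after five 1s
            run = 0
    return out
```

For example, the body 0 1 1 1 1 1 1 0 is transmitted as 0 1 1 1 1 1 0 1 0, and unstuffing recovers the original bits.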
By looking at the next bit, the receiver can distinguish between these two cases: if it sees a 0 (i.e., the last eight bits it has looked at are 01111110), then it is the end-of-frame marker; if it sees a 1 (i.e., the last eight bits it has looked at are 01111111), then there must have been an error and the whole frame is discarded. In the latter case, the receiver has to wait for the next 01111110 before it can start receiving again, and, as a consequence, there is the potential that the receiver will fail to receive two consecutive frames. Obviously, there are still ways that framing errors can go undetected, such as when an entire spurious end-of-frame pattern is generated by errors, but these failures are relatively unlikely. An interesting characteristic of bit stuffing, as well as character stuffing, is that the size of a frame is dependent on the data being sent in the payload of the frame. It is in fact not possible to make all frames exactly the same size, given that the data that might be carried in any frame is arbitrary.
Multiplexing
In telecommunications and computer networks, multiplexing (also known as muxing) is a process in which multiple analog message signals or digital data streams are combined into one signal over a shared medium. The aim is to share an expensive resource. For example, in telecommunications, several phone calls may be transferred using one wire. Multiplexing originated in telegraphy and is now widely applied in communications. The multiplexed signal is transmitted over a communication channel, which may be a physical transmission medium. Multiplexing divides the capacity of the low-level communication channel into several higher-level logical channels, one for each message signal or data stream to be transferred. A reverse process, known as demultiplexing, can extract the original channels on the receiver side.
A device that performs the multiplexing is called a multiplexer (MUX), and a device that performs the reverse process is called a demultiplexer (DEMUX).
Types of multiplexing
1. Frequency-division multiplexing (FDM)
The deriving of two or more simultaneous, continuous channels from a transmission medium by assigning a separate portion of the available frequency spectrum to each of the individual channels. FDMA (frequency-division multiple access) is the use of frequency division to provide multiple simultaneous transmissions:
Transmission is organized in frequency channels, each assigned for exclusive use by a single user at a time.
If a channel is not in use, it remains idle and cannot be used by others.
Channeling frequency plans are elaborated to avoid mutual co-channel and adjacent-channel interference among neighboring stations.
The use of a radio channel or a group of radio channels requires authorization (a license), either for each individual station or for a group of stations.
Frequency-division duplexing uses two radio frequency channels for each duplex link (one up-link and one down-link, or one forward link and one reverse link).
2. Time-division multiplexing (TDM)
A single carrier frequency channel is shared by a number of users, one after another. Transmission is organized in repetitive "time-frames"; each frame consists of groups of pulses called time slots. Each user is assigned a separate time slot. TDD (Time-Division Duplex) provides the forward and reverse links in the same frequency channel.
3.
Code-division multiplexing (CDM): Code-Division Multiple Access, or spread-spectrum communication techniques:
• FH: frequency hopping (a frequency synthesizer controlled by a pseudo-random sequence of numbers)
• DS: direct sequence (a pseudo-random sequence of pulses used for spreading)
• TH: time hopping (spreading achieved by randomly spacing transmitted pulses)
Other techniques include hybrid combinations of the above (radar and other applications) and the use of random noise as a carrier. Transmission is organized in time-frequency "slots"; each link is assigned a sequence of the slots according to a specific code. This is used, e.g., in the Bluetooth system.
Switching
Switching is a methodology to establish a connection between two end points in a network. A switched network consists of a series of interlinked nodes called switches. These are devices capable of creating temporary connections between two or more devices. Switching is categorized into the following: circuit switching, message switching and packet switching.
Message Switching
In this switching method, a different strategy is used: instead of establishing a dedicated physical line between the sender and the receiver, the message is sent to the nearest directly connected switching node. This node stores the message, checks for errors, selects the best available route and forwards the message to the next intermediate node. The line becomes free again for other messages, while the process continues at other nodes. Because of this mode of action, the method is also known as store-and-forward technology, where the message hops from node to node toward its final destination. Each node stores the full message, checks for errors and forwards it. In this switching technique, more devices can share the network bandwidth than with the circuit switching technique. Temporary storage of messages reduces traffic congestion to some extent.
Higher priority can be given to urgent messages, so that low-priority messages are delayed while urgent ones are forwarded faster. Through broadcast addresses, one message can be sent to several users. Last of all, since the destination host need not be active when the message is sent, message switching techniques improve global communications. However, since the message blocks may be quite large in size, a considerable amount of storage space is required at each node to buffer the messages. A message might occupy the buffers for minutes, thus blocking internodal traffic.
Basic idea: each network node receives and stores the message, determines the next leg of the route, and queues the message to go out on that link.
Advantages: line efficiency is greater (sharing of links); data rate conversion is possible; even under heavy traffic, packets are accepted, possibly with a greater delay in delivery; message priorities can be used to satisfy requirements, if any.
Disadvantages: a message of large size monopolizes the link and storage.
Packet Switching
The basic approach is not much different from message switching; it is based on the same ‘store-and-forward’ approach. However, to overcome the limitations of message switching, messages are divided into subsets of equal length called packets. This approach was developed for long-distance data communication (1970) and has evolved over time. In the packet switching approach, data are transmitted in short packets (a few kilobytes). A long message is broken up into a series of packets as shown in Fig. 4.2.2. Every packet contains some control information in its header, which is required for routing and other purposes. The main difference between packet switching and circuit switching is that the communication lines are not dedicated to passing messages from the source to the destination.
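A minimal sketch of breaking a long message into equal-length packets, each carrying control information in its header as described above; the header field names (a sequence number and total count) are illustrative assumptions, not from any particular protocol:

```python
# Sketch of packetization: a long message is split into equal-length
# payloads, and each packet's header carries control information (here a
# sequence number and total count; field names are illustrative).

def packetize(message: bytes, payload_size: int):
    chunks = [message[i:i + payload_size]
              for i in range(0, len(message), payload_size)]
    return [{"seq": n, "total": len(chunks), "payload": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Rebuild the message regardless of the order packets arrived in."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

pkts = packetize(b"a long message broken into short packets", 8)
print(len(pkts))                          # -> 5 packets
print(reassemble(list(reversed(pkts))))   # arrival order does not matter
```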
In packet switching, different messages (and even different packets) can pass through different routes, and when there is "dead time" in the communication between the source and the destination, the lines can be used by other sources. There are two basic approaches commonly used in packet switching: virtual-circuit packet switching and datagram packet switching. In virtual-circuit packet switching a virtual circuit is made before actual data is transmitted, but it differs from circuit switching in the sense that in circuit switching the call-accept signal comes only from the final destination to the source, while in virtual-circuit packet switching this call-accept signal is transmitted between each pair of adjacent intermediate nodes, as shown in Fig. 4.2.3. Other features of virtual-circuit packet switching are discussed in the following subsection.
Virtual Circuit Packet Switching Networks
An initial setup phase is used to set up a route between the intermediate nodes for all the packets passed during the session between the two end nodes. In each intermediate node, an entry is registered in a table to indicate the route for the connection that has been set up. Thus, packets passed through this route can have short headers, containing only a virtual circuit identifier (VCI), and not their full destination. Each intermediate node passes the packets according to the information stored in it during the setup phase. In this way, packets arrive at the destination in the correct sequence, and it is essentially guaranteed that there will be no errors. This approach is slower than circuit switching, since different virtual circuits may compete over the same resources, and an initial setup phase is needed to initiate the circuit. As in circuit switching, if an intermediate node fails, all virtual circuits that pass through it are lost. The most common forms of virtual circuit networks are X.25 and Frame Relay, which are commonly used for public data networks (PDN).
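The per-node table behaviour described above can be sketched as follows: the table maps an (incoming link, incoming VCI) pair, recorded at setup time, to an outgoing link and VCI, so data packets need carry only the short VCI. Link numbers and VCI values are illustrative:

```python
# Virtual-circuit forwarding sketch: during the setup phase each node
# records a table entry; afterwards a data packet carries only a short
# VCI, which the node translates to an outgoing link and outgoing VCI.
# Link numbers and VCI values below are illustrative.

def setup_entry(table, in_link, in_vci, out_link, out_vci):
    """Register one connection at this node during the setup phase."""
    table[(in_link, in_vci)] = (out_link, out_vci)

def switch(table, in_link, packet):
    """Forward a data packet using only the table and its short header."""
    out_link, out_vci = table[(in_link, packet["vci"])]
    packet["vci"] = out_vci        # rewrite the VCI for the next hop
    return out_link, packet

table = {}
setup_entry(table, in_link=0, in_vci=5, out_link=2, out_vci=11)
link, pkt = switch(table, 0, {"vci": 5, "payload": "data"})
print(link, pkt["vci"])   # -> 2 11
```

Because the VCI is rewritten hop by hop, it only needs to be unique per link, not across the whole network; this is why the header can stay short.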
Datagram Packet Switching Networks
This approach uses a different, more dynamic scheme to determine the route through the network links. Each packet is treated as an independent entity, and its header contains full information about the destination of the packet. The intermediate nodes examine the header of the packet and decide to which node to send the packet so that it will reach its destination. Two factors are taken into account in the decision:
• The shortest way to pass the packet to its destination - protocols such as RIP/OSPF are used to determine the shortest path to the destination.
• Finding a free node to pass the packet to - in this way, bottlenecks are eliminated, since packets can reach the destination via alternate routes.
Thus, in this method the packets don't follow a pre-established route, and the intermediate nodes (the routers) don't have pre-defined knowledge of the routes that the packets should be passed through. Packets can follow different routes to the destination, and delivery is not guaranteed (although packets usually do follow the same route and are reliably sent). Due to the nature of this method, the packets can reach the destination in a different order than they were sent, so they must be sorted at the destination to form the original message. This approach is time-consuming, since every router has to decide where to send each packet. The main implementation of a datagram switching network is the Internet, which uses the IP network protocol.
Advantages:
• The call setup phase is avoided (for transmission of a few packets, datagram will be faster).
• Because it is more primitive, it is more flexible.
• Congestion/failed links can be avoided (more reliable).
Problems:
• Packets may be delivered out of order.
• If a node crashes momentarily, all of its queued packets are lost.
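A sketch of the per-packet forwarding decision described above: every packet carries its full destination, and each router independently chooses among candidate next hops, avoiding congested neighbours and then preferring shorter paths. The node names, hop counts and queue lengths are made up for illustration:

```python
# Sketch of datagram forwarding: each router independently picks a next
# hop per packet, avoiding busy neighbours first and then preferring the
# shortest path, so successive packets may take different routes.
# Names, hop counts and queue lengths below are illustrative.

def forward(packet, routes, queue_len):
    """Pick a next hop for one packet from the candidate routes toward
    its destination: least-loaded neighbour first, shortest path second."""
    candidates = routes[packet["dst"]]   # list of (next_hop, hop_count)
    best = min(candidates, key=lambda c: (queue_len[c[0]], c[1]))
    return best[0]

routes = {"D": [("B", 2), ("C", 3)]}     # two routes toward destination D
print(forward({"dst": "D"}, routes, {"B": 5, "C": 0}))  # B busy -> "C"
print(forward({"dst": "D"}, routes, {"B": 0, "C": 0}))  # both free -> "B"
```

Real routers weight these factors through their routing protocol metrics rather than a simple tuple comparison; the sketch only shows why packets for the same destination can diverge onto alternate routes.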
Fig.: Reduction of transmission time because of parallelism in transmission in the packet switching technique.
Virtual Circuit Versus Datagram Packet Switching
Key features of the virtual-circuit packet switching approach are as follows:
• A node need not decide the route
• More difficult to adapt to congestion
• Maintains sequence order
• All packets are sent through the same predetermined route
On the other hand, the key features of datagram packet switching are as follows:
• Each packet is treated independently
• The call setup phase is avoided
• Inherently more flexible and reliable
Circuit Switching
In this networking method, a connection called a circuit is set up between two devices and used for the whole communication. Information about the nature of the circuit is maintained by the network. The circuit may either be a fixed one that is always present, or it may be created on an as-needed basis. Even if many potential paths through intermediate devices may exist between the two communicating devices, only one will be used for any given dialog. In a circuit-switched network, before communication can occur between two devices, a circuit is established between them (shown in the figure as a thick blue line for the conduit of data from Device A to Device B, and a matching purple line from B back to A). Once set up, all communication between these devices takes place over this circuit, even though there are other possible ways that data could conceivably be passed over the network of devices between them.
Fig.: circuit switching
The classic example of a circuit-switched network is the telephone system. When you call someone and they answer, you establish a circuit connection and can pass data between you, in a steady stream if desired. That circuit functions the same way regardless of how many intermediate devices are used to carry your voice. You use it for as long as you need it, and then terminate the circuit.
The next time you call, you get a new circuit, which may (and probably will) use different hardware than the first circuit did, depending on what's available at that time in the network.
Packet Switching
In this network type, no specific path is used for data transfer. Instead, the data is chopped up into small pieces called packets and sent over the network. The packets can be routed, combined or fragmented as required to get them to their eventual destination. On the receiving end, the process is reversed: the data is read from the packets and re-assembled into the form of the original data. A packet-switched network is more analogous to the postal system than to the telephone system (though the comparison isn't perfect). In a packet-switched network, no circuit is set up prior to sending data between devices. Blocks of data, even from the same file or communication, may take any number of paths as they journey from one device to another.
Datagram Networks
Fig.: Example of a Datagram Network. Fig.: Example of a Virtual Circuit Network.
Two basic approaches to packet switching are common. The most common is datagram switching (also known as a "best-effort network", or a network supporting the connectionless network service). This is what is used in the network layer of the Internet.
Datagram Packet Networks
Datagram transmission uses a different scheme to determine the route through the network of links. Using datagram transmission, each packet is treated as a separate entity and contains a header with full information about the intended recipient. The intermediate nodes examine the header of a packet and select an appropriate link to an intermediate node nearer the destination. In this system, the packets do not follow a pre-established route, and the intermediate nodes (usually known as "routers") do not require prior knowledge of the routes that will be used. A datagram network is analogous to sending a message as a series of postcards through the postal system.
Each card is independently sent to the final destination (using the postal system). To receive the whole message, the receiver must collect all the postcards and sort them into the original order. Not all postcards need be delivered by the postal system, and not all take the same length of time to arrive. In a datagram network, delivery is not guaranteed (although packets are usually reliably sent). Enhancements to the basic service (e.g. reliable delivery), if required, must be provided by the end systems (i.e. users' computers) using additional software. The most common datagram network is the Internet, which uses the IP network protocol. Applications which do not require more than a best-effort service can be supported by direct use of packets in a datagram network (using the User Datagram Protocol (UDP) transport protocol). Such applications include Internet video, voice communication, and messages notifying a user that she/he has received new email. Most Internet applications need additional functions to provide reliable communication (such as end-to-end error and sequence control). Examples include sending email, browsing a web site, or sending a file using the file transfer protocol (FTP). This reliability ensures all the data is received in the correct order with no duplication or omissions. It is provided by additional layers of software algorithms implemented in the End Systems (A, D). Two examples of this are the Transmission Control Protocol (TCP) and the Trivial File Transfer Protocol (TFTP), which uses UDP. One merit of the datagram approach is that not all packets need to follow the same path (route) through the network (although frequently packets do follow the same route). This removes the need to set up and tear down the path, reducing the processing overhead and the need for Intermediate Systems to execute an additional protocol. Packets may also be routed around busy parts of the network when alternate paths exist.
This is useful when a particular intermediate system becomes busy or overloaded with excessive volumes of packets to send. It can also provide a high degree of fault tolerance when an individual intermediate system or communication circuit fails. As long as a route exists through the network between two end systems, they are able to communicate. Only if there is no possible way to send the packets will they be discarded and not delivered. The fate (success/failure) of an application therefore depends only on the existence of an actual path between the two End Systems (ESs). This is known as "fate sharing", since the application shares the "fate" of the network. There is another type of network, known as a virtual circuit network.
Congestion Control
When one part of the subnet (e.g. one or more routers in an area) becomes overloaded, congestion results. Because routers are receiving packets faster than they can forward them, one of two things must happen:
1. The subnet must prevent additional packets from entering the congested region until those already present can be processed.
2. The congested routers can discard queued packets to make room for those that are arriving.
We now consider the problem of congestion and some possible solutions. There are three general approaches:
1. prevent it altogether
2. congestion avoidance
3. deal with it if it occurs
Preallocation of Resources
Preallocation schemes aim to prevent congestion from happening in the first place. For example, we can require that resources be preallocated before any packets can be sent, guaranteeing that resources will be available to process each packet. In virtual circuit networks, for example, the sender opens a connection before sending data. The circuit setup operation selects a path through the subnet, and each router on the path dedicates buffer space and bandwidth to the new circuit. What happens when a user attempts to open a virtual circuit and the subnet is congested?
The subnet can refuse to open the connection, forcing the user to wait until sufficient resources become available. Note: the ability of the subnet to reject requests to open connections is an important property of connection-oriented networks.
Traffic Shaping
Traffic shaping controls the rate at which packets are sent (not just how many), and is a congestion avoidance method. It relates to Quality of Service (QoS) as a means to provide more reliable service despite variable transmission patterns. At setup, the sender and carrier negotiate a traffic pattern (shape). The Leaky Bucket Algorithm is used to control the rate in a datagram network: a single-server queue with constant service time. If the bucket (buffer) overflows, packets are discarded. It enforces a constant output rate regardless of the burstiness of the input, and does nothing when the input is idle. In contrast, the Token Bucket Algorithm causes a token to be generated periodically, and tokens can be saved up during idle periods. Related to traffic shaping is flow specification, where a particular quality of service is agreed upon between sender, receiver and carrier.
Isarithmic Congestion Control
Another approach to congestion avoidance is to limit the total number of packets in the subnet at any one time. The idea is similar to the token ring:
1. When a router accepts a packet from a host, it must obtain a permit before sending the packet into the subnet.
2. Obtaining a permit is analogous to "seizing the token", but there can be many permits in the subnet. When a router obtains a permit, it destroys it.
3. The destination router regenerates the permit when it passes the packet to the destination host.
Issues:
1. Although we have limited the total number of packets in the subnet, we have no control over where in the subnet those packets will be. Thus, a router in one part of the subnet might be congested, while a router in another part remains idle but unable to process packets for lack of permits.
2.
Regenerating lost permits is difficult, because no single node knows how many permits are currently in the subnet.
3. How are permits distributed? They may be distributed among the routers, or centralized at a known access point.
Virtual Circuits Admission Control
Refuse to set up new connections if congestion is present. This also helps with congestion avoidance.
Flow Control
Flow control is aimed at preventing a fast sender from overwhelming a slow receiver. Flow control can be helpful in reducing congestion, but it can't really solve the congestion problem. For example, suppose we connect a fast sender and fast receiver using a 9.6 kbps line:
1. If the two machines use a sliding window protocol, and the window is large, the link will become congested in a hurry.
2. If the window size is small (e.g., 2 packets), the link won't become congested.
Note how the window size limits the total number of packets that can be in transmission at one time. Flow control can take place at many levels. User process to user process (end-to-end): later, we'll see how TCP uses flow control at the end-to-end level. Host to host: for example, if multiple application connections share a single virtual circuit between two hosts. Router to router: for example, in virtual circuits.
Load Shedding/Discarding Packets (No Preallocation)
At the other end of the spectrum, we could preallocate no resources in advance and take our chances that resources will be available when we need them. When insufficient resources are present to process existing packets, discard queued packets to make room for newly arriving ones. Who retransmits the discarded packets? There are two cases: connection-oriented and connectionless. In datagram (connectionless) networks, the sending host (transport layer) retransmits discarded packets (if appropriate). In virtual circuit networks, the previous-hop router retransmits the packet when it fails to receive an acknowledgment.
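The leaky-bucket shaper described under Traffic Shaping above can be sketched as follows; the queue capacity and the one-packet-per-tick drain rate are illustrative assumptions:

```python
# Leaky-bucket sketch: bursts of arriving packets join a finite queue (the
# bucket); exactly one packet "leaks" out per tick, so the output rate is
# constant no matter how bursty the input. Overflowing packets are
# discarded. Capacity and the one-per-tick rate are illustrative.

from collections import deque

def leaky_bucket(arrivals_per_tick, capacity):
    bucket, output, dropped = deque(), [], 0
    for arriving in arrivals_per_tick:
        for pkt in arriving:                  # burst arrives this tick
            if len(bucket) < capacity:
                bucket.append(pkt)
            else:
                dropped += 1                  # bucket overflow: discard
        output.append(bucket.popleft() if bucket else None)  # one per tick
    return output, dropped

# a burst of 5 packets, then three idle ticks; capacity 3
out, lost = leaky_bucket([[1, 2, 3, 4, 5], [], [], []], capacity=3)
print(out, lost)   # -> [1, 2, 3, None] 2
```

A token bucket would differ in exactly the way the text notes: idle ticks would accumulate tokens, allowing a later burst to leave faster than one packet per tick.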
Failure to preallocate resources leads to two problems: potential deadlock and unfairness. First, let us consider deadlock. Suppose that all of a router's buffers hold packets. Because the router has no free buffers, it cannot accept additional frames. Unfortunately, it also ignores frames containing ACKs that would free up some of those buffers! Suppose further that two adjacent routers, A and B, are sending packets to each other. Since both are waiting for the other to accept a packet, neither can proceed. This condition is known as a deadlock. Solution: reserve at least one buffer for each input line and use it to hold incoming packets. Note that we can extract the ACK field and still discard the packet if we don't have buffers to hold it.
Advantage of discarding packets when congested: easy to implement.
Disadvantages:
1. Wastes resources. The network may have expended considerable resources processing a packet that is eventually discarded.
2. Non-deterministic. There are fewer guarantees than with virtual circuits that packets will ever reach their destination.
3. Requires that sending hosts pay attention to congestion. If the network can't prevent a host from sending data, a host can overload the network. In particular, a "broken" host may cause the network to become overly congested.
4. In the extreme case, congestion collapse occurs. The network becomes so overloaded that few packets reach their destination, while the sending hosts continue to generate more data (both retransmissions and new packets). This condition occurred several times back in 1987, and the Internet/Arpanet became unusable for periods of hours to days.
Random Early Detection (RED)
Basic idea: start dropping packets before a router runs out of buffer space.
Choke Packets
ECN (Explicit Congestion Notification) is an example where, instead of dropping packets, routers mark packets to indicate that the router is congested.
Routers can monitor the level of congestion around them, and when congestion is present, they can send choke packets to the sender that say "slow down". How can a router measure congestion? A router might estimate the level of congestion by measuring the percentage of buffers in use, line utilization, or average queue lengths.
Advantage: dynamic. A host sends as much data as it wants, and the network informs it when it is sending too much.
Disadvantages:
1. Difficult to tune. By how much should a host slow down? The answer depends on how much traffic the host is sending, how much of the congestion it is responsible for, and the total capacity of the congested region. Such information is not readily available in practice.
2. After receiving a choke packet, the sending host should ignore additional choke packets for a short while, because packets currently in transmission may generate additional choke packets. How long? This depends on such dynamic network conditions as delay. Variations exist.
LAN: Baseband versus Broadband
There are two LAN transmission options, baseband and broadband. Baseband LANs, which are the most prevalent, are single-channel systems that support a single transmission at any given time. Broadband LANs, which are more unusual, support multiple transmissions via frequency channels.
Broadband LANs
Broadband LANs are multichannel and typically based on coaxial cable as the transmission medium, although fiber optic cable is also used. Individual channels offer bandwidth of 1 to 5 Mbps, with 20 to 30 channels typical; aggregate bandwidth is as much as 500 MHz. The characteristics may be given as follows: the digital signal is placed onto an analog RF carrier; channel allocation is based on FDM; a head-end is used for bi-directional transmission; stations are connected via RF modems, which accomplish the digital-to-analog conversion process, providing the transmitting device access to an analog channel.
Advantages of Broadband
Data, voice and video can be accommodated on broadband channels.
Greater distances. Greater bandwidth.
Disadvantages of Broadband
Cable design. Alignment and maintenance. High cost (requires modems). Lack of well-developed standards.
Baseband LANs
A baseband LAN is single-channel, supporting a single communication at a time. Baseband LANs are digital in nature. Total bandwidth of 1 to 100 Mbps is provided over coaxial cable, UTP, STP or fiber optic cable. Distance limitations depend on the medium employed and the specifics of the LAN protocol. Baseband LAN is the most popular and the most highly standardized option. Ethernet, token-passing, Token Ring and FDDI LANs are all baseband. They are intended only for data, as data communication is, after all, the primary reason for the existence of LANs. The characteristics of this system may be summarized as follows: unmodulated digital signal; single channel; bi-directional propagation via T connectors; no need for modems, hence low-cost installation.
Advantages of Baseband
Simplicity. Low cost. Ease of installation and maintenance. High rates.
Disadvantages of Baseband
Limited distances. Data and voice only.
Carrier Sense Networks - CSMA/CD
CSMA is a network access method used on shared network topologies such as Ethernet to control access to the network. Devices attached to the network cable listen (carrier sense) before transmitting. If the channel is in use, devices wait before transmitting. MA (multiple access) indicates that many devices can connect to and share the same network. All devices have equal access to use the network when it is clear. Even though devices attempt to sense whether the network is in use, there is a good chance that two stations will attempt to access it at the same time. On large networks, the transmission time between one end of the cable and the other is enough that one station may access the cable even though another has already just accessed it.
There are two methods for handling these so-called collisions, listed here:
CSMA/CD (carrier sense multiple access/collision detection)
CD (collision detection) defines what happens when two devices sense a clear channel and then attempt to transmit at the same time. A collision occurs, and both devices stop transmission, wait for a random amount of time, and then retransmit. This is the technique used to access the 802.3 Ethernet network channel. This method handles collisions as they occur, but if the bus is constantly busy, collisions can occur so often that performance drops drastically. It is estimated that network traffic must be less than 40 percent of the bus capacity for the network to operate efficiently. If distances are long, time lags occur that may result in inappropriate carrier sensing, and hence collisions.
CSMA/CA (carrier sense multiple access/collision avoidance)
In CA (collision avoidance), collisions are avoided because each node signals its intent to transmit before actually doing so. This method is not popular because it requires excessive overhead that reduces performance.
Ring Network
A ring network is a network topology in which each node connects to exactly two other nodes, forming a single continuous pathway for signals through each node: a ring. Data travels from node to node, with each node along the way handling every packet. Because a ring topology provides only one pathway between any two nodes, ring networks may be disrupted by the failure of a single link. A node failure or cable break might isolate every node attached to the ring. FDDI networks overcome this vulnerability by sending data on a clockwise and a counterclockwise ring: in the event of a break, data is wrapped back onto the complementary ring before it reaches the end of the cable, maintaining a path to every node along the resulting "C-ring". Many ring networks add a "counter-rotating ring" to form a redundant topology.
Such "dual ring" networks include the Spatial Reuse Protocol, Fiber Distributed Data Interface (FDDI), and Resilient Packet Ring.
Ring network - Advantages
All stations have equal access. Each node on the ring acts as a repeater, allowing ring networks to span greater distances than other physical topologies. Because data travels in one direction, high transmission speeds are possible. Using coaxial cable to create a ring network can make the service much faster.
Ring network - Disadvantages
Often the most expensive topology. Because each computer depends on its two neighbours to relay data, a failure elsewhere on the ring can leave a computer isolated from the network. If one node fails, the rest of the network could also fail. Damage to the ring will affect the whole network.
IEEE 802 Standards
The IEEE (Institute of Electrical and Electronics Engineers) is a technical association of industry professionals with a common interest in advancing all communications technologies. The LAN/MAN Standards Committee (LMSC) develops LAN (local area network) and MAN (metropolitan area network) standards, mainly for the lowest two layers in the OSI reference model. LMSC is also called IEEE Project 802, so the standards it develops are referenced as IEEE 802 standards. IEEE 802 standards define physical network interfaces such as network interface cards, bridges, routers, connectors and cables, and all the signaling and access methods associated with physical network connections. When connecting computers through networks, we need a set of rules/standards for data to travel from one computer to another. One such set of rules for networking traffic to follow is the IEEE 802 family of standards developed by the IEEE. Standards such as IEEE 802 help the industry by providing advantages such as interoperability, low product cost, and easy-to-manage standards. The IEEE 802 standards are further divided into many parts: 802.1, 802.2, 802.3 (Ethernet CSMA/CD), etc.
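The CSMA/CD procedure just referenced for 802.3 (sense the channel, transmit, and back off a random, doubling interval after each collision) can be sketched as follows; this is a simplified model with assumed helper names, not the full standard:

```python
# Simplified CSMA/CD sketch: a station senses the channel, transmits when
# clear, and after the n-th collision waits a random number of slot times
# drawn from 0 .. 2^n - 1 (binary exponential backoff, capped as in
# 802.3). channel_busy/collided are assumed callables modelling the wire.

import random

def backoff_slots(attempt, rng=random):
    """After the n-th collision, pick a wait of 0 .. 2^n - 1 slot times."""
    return rng.randrange(2 ** min(attempt, 10))

def transmit(channel_busy, collided, max_attempts=16, rng=random):
    for attempt in range(1, max_attempts + 1):
        while channel_busy():      # carrier sense: wait until channel clear
            pass
        if not collided():         # collision detect during transmission
            return attempt         # success: number of attempts used
        backoff_slots(attempt, rng)  # collision: back off (wait not modelled)
    return None                    # give up after max_attempts collisions

# model: channel always clear; the first two attempts collide, third is ok
outcomes = iter([True, True, False])
print(transmit(lambda: False, lambda: next(outcomes)))  # -> 3
```

The doubling range is what makes the scheme adaptive: the more collisions a station has seen, the wider it spreads its retry times, reducing the chance of colliding again.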
X.25 X.25 is an International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) protocol standard for WAN communications that defines how connections between user devices and network devices are established and maintained. X.25 is designed to operate effectively regardless of the type of systems connected to the network. It is typically used in the packet-switched networks (PSNs) of common carriers, such as the telephone companies. Subscribers are charged based on their use of the network. X.25 Devices and Protocol Operation X.25 network devices fall into three general categories: data terminal equipment (DTE), data circuit-terminating equipment (DCE), and packet-switching exchange (PSE). Data terminal equipment devices are end systems that communicate across the X.25 network. They are usually terminals, personal computers, or network hosts, and are located on the premises of individual subscribers. DCE devices are communications devices, such as modems and packet switches, that provide the interface between DTE devices and a PSE, and are generally located in the carrier's facilities. PSEs are switches that compose the bulk of the carrier's network. They transfer data from one DTE device to another through the X.25 PSN. Figure: DTEs, DCEs, and PSEs Make Up an X.25 Network illustrates the relationships among the three types of X.25 network devices: Figure: DTEs, DCEs, and PSEs Make Up an X.25 Network Packet Assembler/Disassembler The packet assembler/disassembler (PAD) is a device found in X.25 networks. PADs are used when a DTE device, such as a character-mode terminal, is too simple to implement the full X.25 functionality. The PAD is located between a DTE device and a DCE device, and it performs three primary functions: buffering (storing data until a device is ready to process it), packet assembly, and packet disassembly. The PAD buffers data sent to or from the DTE device. 
It also assembles outgoing data into packets, adds an X.25 header, and forwards the packets to the DCE device. Finally, the PAD disassembles incoming packets, removing the X.25 header, before forwarding the data to the DTE. Figure: The PAD Buffers, Assembles, and Disassembles Data Packets illustrates the basic operation of the PAD when receiving packets from the X.25 WAN: Figure: The PAD Buffers, Assembles, and Disassembles Data Packets
X.25 Session Establishment
X.25 sessions are established when one DTE device contacts another to request a communication session. The DTE device that receives the request can either accept or refuse the connection. If the request is accepted, the two systems begin full-duplex information transfer. Either DTE device can terminate the connection. After the session is terminated, any further communication requires the establishment of a new session.
X.25 Virtual Circuits
A virtual circuit is a logical connection created to ensure reliable communication between two network devices. A virtual circuit denotes the existence of a logical, bidirectional path from one DTE device to another across an X.25 network. Physically, the connection can pass through any number of intermediate nodes, such as DCE devices and PSEs. Multiple virtual circuits (logical connections) can be multiplexed onto a single physical circuit (a physical connection). Virtual circuits are demultiplexed at the remote end, and data is sent to the appropriate destinations. Figure: Virtual Circuits Can Be Multiplexed onto a Single Physical Circuit illustrates four separate virtual circuits being multiplexed onto a single physical circuit. Two types of X.25 virtual circuits exist: switched and permanent. Switched virtual circuits (SVCs) are temporary connections used for sporadic data transfers.
They require that two DTE devices establish, maintain, and terminate a session each time the devices need to communicate. Permanent virtual circuits (PVCs) are permanently established connections used for frequent and consistent data transfers. PVCs do not require that sessions be established and terminated. Therefore, DTEs can begin transferring data whenever necessary because the session is always active. The basic operation of an X.25 virtual circuit begins when the source DTE device specifies the virtual circuit to be used (in the packet headers) and then sends the packets to a locally connected DCE device. At this point, the local DCE device examines the packet headers to determine which virtual circuit to use and then sends the packets to the closest PSE in the path of that virtual circuit. PSEs (switches) pass the traffic to the next intermediate node in the path, which may be another switch or the remote DCE device. When the traffic arrives at the remote DCE device, the packet headers are examined and the destination address is determined. The packets are then sent to the destination DTE device. If communication occurs over an SVC and neither device has additional data to transfer, the virtual circuit is terminated. The X.25 Protocol Suite The X.25 protocol suite maps to the lowest three layers of the OSI reference model. The following protocols are typically used in X.25 implementations: Packet-Layer Protocol (PLP), Link Access Procedure, Balanced (LAPB), and a number of physical-layer serial interfaces (such as EIA/TIA-232, EIA/TIA-449, EIA-530, and G.703). Figure: Key X.25 Protocols Map to the Three Lower Layers of the OSI Reference Model maps the key X.25 protocols to the layers of the OSI reference model: Figure: Key X.25 Protocols Map to the Three Lower Layers of the OSI Reference Model Packet-Layer Protocol PLP is the X.25 network layer protocol. PLP manages packet exchanges between DTE devices across virtual circuits.
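The multiplexing of several virtual circuits onto one physical circuit, described above, can be sketched as follows. The per-packet channel tag stands in for the logical channel number that X.25 carries in each packet header; the data structures are invented for the example.

```python
def multiplex(circuits):
    """Interleave several virtual circuits onto one physical link.

    circuits: dict mapping a logical channel number -> list of payloads.
    Each payload is tagged with its channel number so the circuits can
    share the single physical circuit.
    """
    link = []
    for channel, payloads in circuits.items():
        for payload in payloads:
            link.append((channel, payload))   # one shared physical circuit
    return link

def demultiplex(link):
    """At the remote end, sort packets back onto their virtual circuits."""
    out = {}
    for channel, payload in link:
        out.setdefault(channel, []).append(payload)
    return out
```

Demultiplexing the link recovers exactly the per-circuit streams that were multiplexed, which is the property the remote DCE relies on.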
PLP can also run over Logical Link Control 2 (LLC2) implementations on LANs and over Integrated Services Digital Network (ISDN) interfaces running Link Access Procedure on the D channel (LAPD). The PLP operates in five distinct modes: call setup, data transfer, idle, call clearing, and restarting. Call setup mode is used to establish SVCs between DTE devices. PLP uses the X.121 addressing scheme to set up the virtual circuit. The call setup mode is executed on a per-virtual-circuit basis, which means that one virtual circuit can be in call setup mode while another is in data transfer mode. This mode is used only with SVCs, not with PVCs. Data transfer mode is used for transferring data between two DTE devices across a virtual circuit. In this mode, PLP handles segmentation and reassembly, bit padding, and error and flow control. This mode is executed on a per-virtual-circuit basis and is used with both PVCs and SVCs. Idle mode is used when a virtual circuit is established but data transfer is not occurring. It is executed on a per-virtual-circuit basis and is used only with SVCs. Call clearing mode is used to end communication sessions between DTE devices and to terminate SVCs. This mode is executed on a per-virtual-circuit basis and is used only with SVCs. Restarting mode is used to synchronize transmission between a DTE device and a locally connected DCE device. This mode is not executed on a per-virtual-circuit basis. It affects all of the DTE device's established virtual circuits. A PLP packet contains four types of fields: General Format Identifier (GFI) - Identifies packet parameters, such as whether the packet carries user data or control information, what kind of windowing is being used, and whether delivery confirmation is required. Logical Channel Identifier (LCI) - Identifies the virtual circuit across the local DTE/DCE interface. Packet Type Identifier (PTI) - Identifies the packet as one of 17 different PLP packet types.
User Data - Contains encapsulated upper-layer information. This field is present only in data packets. Otherwise, additional fields containing control information are added. Session Layer The session layer resides above the transport layer, and provides “value added” services on top of the underlying transport layer services. The session layer (along with the presentation layer) adds services to the transport layer that are likely to be of use to applications, so that each application doesn't have to provide its own implementation. It is the thinnest layer in the OSI model. At the time the model was formulated, it was not clear that a session layer was needed. The session layer provides the following services: Dialog management: Deciding whose turn it is to talk. Some applications operate in half-duplex mode, whereby the two sides alternate between sending and receiving messages, and never send data simultaneously. In the ISO protocols, dialog management is implemented through the use of a data token. The token is sent back and forth, and a user may transmit only when it possesses the token. Synchronization: Move the two session entities into a known state. The transport layer handles only communication errors; synchronization deals with upper-layer errors. In a file transfer, for instance, the transport layer might deliver data correctly, but the application layer might be unable to write the file because the file system is full. Users can split the data stream into pages, inserting synchronization points between each page. When an error occurs, the receiver can resynchronize the state of the session to a previous synchronization point. This requires that the sender hold data as long as it may be needed. Synchronization is achieved through the use of sequence numbers. The ISO protocols provide both major and minor synchronization points. When resynchronizing, one can only go back as far as the previous major synchronization point.
In addition, major synchronization points are acknowledged through explicit messages (making their use expensive). In contrast, minor synchronization points are just markers. Activity management: Allow the user to delimit data into logical units called activities. Each activity is independent of activities that come before and after it, and an activity can be processed on its own. Activities might be used to delimit the files of a multiple-file transfer. Activities are also used for quarantining: collecting all the messages of a multi-message exchange together before processing them. The receiving application begins processing messages only after all the messages have arrived. This provides a way of helping ensure that all or none of a set of operations are performed. For example, a bank transaction may consist of locking a record, updating a value, and then unlocking the record. If an application processed the first operation, but never received the remaining operations (due to client or network failures), the record would remain locked forever. Quarantining addresses this problem. Exception handling: A general-purpose mechanism for reporting errors. Note: The TCP/IP protocols do not include a session layer at all. Remote Procedure Call (RPC) Remote Procedure Call (RPC) provides a different paradigm for accessing network services. Instead of accessing remote services by sending and receiving messages, a client invokes services by making a local procedure call. The local procedure hides the details of the network communication. When making a remote procedure call: 1. The calling environment is suspended, procedure parameters are transferred across the network to the environment where the procedure is to execute, and the procedure is executed there. 2. When the procedure finishes and produces its results, its results are transferred back to the calling environment, where execution resumes as if returning from a regular procedure call.
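The session-layer synchronization scheme described above (major and minor points, with the sender holding data back to the last acknowledged major point) can be sketched as follows. The class and method names are invented for illustration, and acknowledgement of a major point is assumed to arrive immediately.

```python
class SyncSender:
    """Sketch of session-layer synchronization points (names are illustrative).

    The sender numbers its synchronization points with a serial number and
    must hold all data sent since the last acknowledged *major* point,
    because the receiver may resynchronize back to that point.
    """

    def __init__(self):
        self.serial = 0
        self.held = []               # (serial, page) pairs held for possible resend
        self.last_major_acked = 0

    def send_page(self, page, major=False):
        self.serial += 1
        self.held.append((self.serial, page))
        if major:
            # Major points are acknowledged explicitly (assumed immediate here),
            # so data up to and including this point can be released.
            self.last_major_acked = self.serial
            self.held = []
        return self.serial

    def resynchronize(self):
        """Receiver reports an upper-layer error: roll back to the last
        confirmed major synchronization point and resend what follows it."""
        resend = [page for _, page in self.held]
        return self.last_major_acked, resend
```

After sending a page, a major point, and another page, a resynchronization rolls back to the major point and resends only the data after it.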
The main goal of RPC is to hide the existence of the network from a program. As a result, RPC doesn't quite fit into the OSI model: 1. The message-passing nature of network communication is hidden from the user. The user doesn't first open a connection, read and write data, and then close the connection. Indeed, a client often doesn't even know it is using the network! 2. RPC often omits many of the protocol layers to improve performance. Even a small performance improvement is important because a program may invoke RPCs often. For example, on (diskless) Sun workstations, every file access is made via an RPC. RPC is especially well suited for client-server (e.g., query-response) interaction in which the flow of control alternates between the caller and callee. Conceptually, the client and server do not both execute at the same time. Instead, the thread of execution jumps from the caller to the callee and then back again. The following steps take place during an RPC: 1. A client invokes a client stub procedure, passing parameters in the usual way. The client stub resides within the client's own address space. 2. The client stub marshalls the parameters into a message. Marshalling includes converting the representation of the parameters into a standard format, and copying each parameter into the message. 3. The client stub passes the message to the transport layer, which sends it to the remote server machine. 4. On the server, the transport layer passes the message to a server stub, which demarshalls the parameters and calls the desired server routine using the regular procedure call mechanism. 5. When the server procedure completes, it returns to the server stub (e.g., via a normal procedure call return), which marshalls the return values into a message. The server stub then hands the message to the transport layer. 6. The transport layer sends the result message back to the client transport layer, which hands the message back to the client stub. 7.
The client stub demarshalls the return parameters and execution returns to the caller. RPC Issues Issues that must be addressed: Marshalling: Parameters must be marshalled into a standard representation. Parameters consist of simple types (e.g., integers) and compound types (e.g., C structures or Pascal records). Moreover, because each type has its own representation, the types of the various parameters must be known to the modules that actually do the conversion. For example, 4 bytes of characters would be uninterpreted, while a 4-byte integer may need the order of its bytes reversed. Semantics: Call-by-reference is not possible: the client and server don't share an address space. That is, addresses referenced by the server would correspond to data residing in the client's address space. One approach is to simulate call-by-reference using copy-restore. In copy-restore, call-by-reference parameters are handled by sending a copy of the referenced data structure to the server, and on return replacing the client's copy with the one modified by the server. However, copy-restore doesn't work in all cases. For instance, if the same argument is passed twice, two copies will be made, and references through one parameter change only one of the copies. Binding: How does the client know whom to call, and where the service resides? The most flexible solution is to use dynamic binding and find the server at run time when the RPC is first made. The first time the client stub is invoked, it contacts a name server to determine the transport address at which the server resides. Transport protocol: What transport protocol should be used? Exception handling: How are errors handled? Binding We'll examine one solution to the above issues by considering the approach taken by Birrell and Nelson. Binding consists of two parts: Naming refers to what service the client wants to use. In B&N, remote procedures are named through interfaces.
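The marshalling issue above can be illustrated with Python's struct module: network (big-endian) byte order, selected by '!', gives a standard representation, so a 4-byte integer's bytes are reordered on little-endian hosts while character data is copied uninterpreted. The message layout itself is invented for this example.

```python
import struct

def marshall(count, name):
    """Marshall an (int, string) parameter pair into a standard wire format.

    Layout (illustrative): a 4-byte signed integer, a 4-byte unsigned
    length, then the raw character bytes. '!' selects network byte order.
    """
    data = name.encode("ascii")
    return struct.pack("!iI", count, len(data)) + data

def demarshall(msg):
    """Recover the (int, string) pair from the marshalled message."""
    count, length = struct.unpack_from("!iI", msg)
    return count, msg[8:8 + length].decode("ascii")
```

A round trip through marshall and demarshall reproduces the original parameters regardless of the byte order of the host that built the message.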
An interface uniquely identifies a particular service, describing the types and numbers of its arguments. It is similar in purpose to a type definition in programming languages. For example, a “phone” service interface might specify a single string argument that returns a character string phone number. Locating refers to finding the transport address at which the server actually resides. Once we have the transport address of the service, we can send messages directly to the server. In B&N's system, a server having a service to offer exports an interface for it. Exporting an interface registers it with the system so that clients can use it. A client must import an (exported) interface before communication can begin. The export and import operations are analogous to those found in object-oriented systems. Interface names consist of two parts: 1. A unique type specifies the interface (service) provided. Type is a high-level specification, such as “mail” or “file access”. 2. An instance specifies a particular server offering a type (e.g., “file access on wpi”). Name Server B&N's RPC system was developed as part of a distributed system called Grapevine. Grapevine was developed at Xerox by the same research group that developed the Ethernet. Among other things, Grapevine provides a distributed, replicated database, implemented by servers residing at various locations around the internet. Clients can query, add new entries, or modify existing entries in the database. The Grapevine database maps character string keys to entries called RNames. There are two types of entries: Individual: A single instance of a service. Each server registers the transport address at which its service can be accessed, and every instance of an interface is registered as an individual entry. Individual entries map instances to their corresponding transport addresses. Group: The type of an interface, which consists of a list of individual RNames.
Group entries contain RNames that point to servers providing the service having that group name. Group entries map a type (interface) to a set of individual entries providing that service. For example, if wpi and bigboote both offered file access, the group entry “file access” would consist of two individual RNames, one each for wpi's and bigboote's servers. When a server wishes to export an interface: 1. It calls its server stub, which then calls Grapevine, passing it the type and instance of the service it wishes to register. Once the interface has been registered with Grapevine, it can be imported by the client. Note: Grapevine ensures that both an individual and a group entry have been established for the exported service. 2. The server stub then records information about the instance in an internal export table. In B&N, there is one export table per machine, containing entries for all currently exported interfaces. This table is used to map incoming RPC request messages to their corresponding server procedure. 3. Each entry in the export table contains: a unique identifier that identifies that interface, and a pointer to the server stub that should be called to invoke the interface service. The unique identifier is never reused. If the server crashes and restarts, new identifiers are used. The client binds to an exported service as follows: 1. The client stub calls the Grapevine database to find an instance of the desired type. 2. The database server returns the desired group entry, and the client chooses one of the individual servers. 3. The client stub sends a message to the selected server stub asking for information about that instance. 4. The server stub returns the unique identifier and index of the appropriate export table entry. The client saves the [identifier, index] pair, as it will need it when actually making an RPC. 5.
The client is now ready to actually call the remote procedure: (a) The client sends the export table index and the unique identifier together with the parameters of the call to the server. (b) Upon receipt of a message, the server stub uses the table index contained in the message to find the appropriate entry in its export table. (c) The server stub then compares the provided identifier with the one in the table. If they differ, it rejects the procedure call as invalid. Otherwise, it calls the server procedure. Binding Notes Note: The unique identifiers in the export table change whenever a server crashes and restarts, allowing the client to detect server restarts between calls. In those cases where a client doesn't care if the server has restarted, it simply rebinds to another instance of the interface and restarts the remote call. Note: Identifiers are managed by the server and are not stored in the Grapevine database. Storing them in the Grapevine database would reduce the number of messages exchanged during the binding phase. However, the current approach greatly reduces the load on the Grapevine servers. In most cases, when a server exports an interface to Grapevine, the entry will have been registered previously, and no updates to the database are required (updates are expensive because the database is distributed). Using Grapevine's database provides late binding. Binding callers to specific servers at runtime makes it possible to move the server to another machine without requiring changes to client software. Finally, the separate registering of types and instances provides great flexibility. Rather than binding to a specific instance, a client asks for a specific type. Because all instances of a type implement the same interface, the RPC support routines can take the list of instances returned by Grapevine and choose the one that is closest to the client. How is binding done on other systems?
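The binding machinery described above can be sketched as follows. All names here (Server, bind, binding_info) are invented for illustration; the sketch models only the name-server lookup, the per-machine export table with never-reused unique identifiers, and the rejection of calls carrying a stale identifier after a server restart.

```python
import itertools

_uids = itertools.count(1)   # unique identifiers are never reused

class Server:
    """Sketch of a B&N-style server stub (names and structure illustrative)."""

    def __init__(self, name_server, type_, instance, proc):
        uid = next(_uids)
        self.export_table = [(uid, proc)]      # one export table per machine
        # Grapevine records only the instance -> server mapping, not the uid.
        name_server.setdefault(type_, {})[instance] = self

    def binding_info(self):
        """Steps 3-4 of binding: the client asks for the [identifier, index] pair."""
        uid, _ = self.export_table[0]
        return uid, 0

    def call(self, index, uid, *args):
        entry_uid, proc = self.export_table[index]
        if entry_uid != uid:                   # server restarted since binding
            raise RuntimeError("invalid call: stale unique identifier")
        return proc(*args)

    def restart(self):
        """Crash and restart: fresh identifiers invalidate old bindings."""
        self.export_table = [(next(_uids), proc) for _, proc in self.export_table]

def bind(name_server, type_, instance):
    """Client side: name-server lookup, then ask the server stub for its info."""
    server = name_server[type_][instance]      # steps 1-2: find the instance
    uid, index = server.binding_info()
    return server, uid, index
```

A call made with a saved [identifier, index] pair succeeds until the server restarts; after the restart the same pair is rejected, and rebinding picks up the new identifier.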
Semantics of RPC Unlike normal procedure calls, many things can go wrong with RPC. Normally, a client will send a request, and the server will execute the request and then return a response to the client. What are appropriate semantics for server or network failures? Possibilities: 1. Just hang forever waiting for the reply that will never come. This models a regular procedure call. If a normal procedure goes into an infinite loop, the caller never finds out. Of course, few users will like such semantics. 2. Time out and raise an exception or report failure to the client. Of course, finding an appropriate timer value is difficult. If the remote procedure takes a long time to execute, a timer might time out too quickly. 3. Time out and retransmit the request. While the last possibility seems the most reasonable, it may lead to problems. Suppose that: 1. The client transmits a request, and the server executes it, but then crashes before sending a response. If we don't get a response, is there any way of knowing whether the server acted on the request? 2. The server restarts, and the client retransmits the request. What happens? Now, the server will reject the retransmission because the supplied unique identifier no longer matches the one in the server's export table. At this point, the client can decide to rebind to a new server and retry, or it can give up. 3. Suppose the client rebinds to another server, retransmits the request, and gets a response. How many times will the request have been executed? At least once, and possibly twice. We have no way of knowing. Operations that can safely be executed twice are called idempotent. For example, fetching the current time and date, or retrieving a particular page of a file. Is deducting $10,000 from an account idempotent? No. One can only deduct the money once. Likewise, deleting a file is not idempotent.
If the delete request is executed twice, the first attempt will be successful, while the second attempt produces a nonexistent file error. RPC Semantics While implementing RPC, B&N determined that the semantics of RPCs could be categorized in various ways: Exactly once: The most desirable kind of semantics, where every call is carried out exactly once, no more and no less. Unfortunately, such semantics cannot be achieved at low cost; if the client transmits a request, and the server crashes, the client has no way of knowing whether the server had received and processed the request before crashing. At most once: When control returns to the caller, the operation will have been executed no more than once. What happens if the server crashes? If the server crashes, the client will be notified of the error, but will have no way of knowing whether or not the operation was performed. At least once: The client just keeps retransmitting the request until it gets the desired response. On return to the caller, the operation will have been performed at least once, but possibly multiple times. Transport Layer The design of a transport protocol is more challenging than the design of a data link protocol due to a major difference between the environments in which the two layers operate. • Two data link protocol entities communicate directly via a physical channel. • At the transport layer, the protocol entities communicate through a set of networks (not always known). Introduction The transport layer aims at ensuring reliable end-to-end data transfer. In order to do so, the transport protocol performs its function on top of the network layer. The complexity of the transport layer depends on the service provided by the network layer. Virtual Circuit Service: • The messages are delivered in order from sender to receiver, without error, loss, or duplication. • The transport protocol becomes relatively simple. Datagram Services: • Packets can be lost, corrupted, etc.
It is up to the hosts to make sure that messages are delivered in order without loss, error, or duplication. • The transport protocol becomes complex and sophisticated. OSI defines 5 classes of transport protocols: TP0 to TP4. TP0 is supposed to work on top of a perfect network, while TP4 is for the worst kind of networks. Simplified view of protocol architecture (importance of transport layer) Quality of Service QoS can be characterized by a number of specific parameters. OSI transport service allows the applications to specify preferred, acceptable, and unacceptable values for these parameters at the time a connection is set up. It is up to the transport layer to examine these parameters, and depending on the kind of network services available to it, determine whether it can provide the required service. 1. Connection establishment delay 2. Connection establishment failure probability 3. Throughput 4. Transit delay 5. Residual error rate 6. Transfer failure probability 7. Connection release delay 8. Connection release failure probability 9. Protection 10. Priority Transport Service Provided to the session layer. Connection-less Service: Service Primitives: T - UNITDATA.request (callee, caller, qos, user_data) T - UNITDATA.indication (callee, caller, qos, user_data) Connection Oriented Service: Three phases: • Transport Connection (TC) establishment (This is a logical connection) • Data Transfer • Transport Connection Release Terminology • Callee: Transport address (TSAP) to be called • Caller: Transport address (NSAP) used by calling transport entity • Exp_wanted: Boolean flag specifying whether expedited data will be sent • Qos: Quality of service desired • User_data: 0 or more bytes of data transmitted but not examined • Reason: Why did it happen • Responder: Transport address connected to at the destination Transport Protocol Two transport layer entities communicate with each other by exchanging TPDUs. 
The different TPDU types necessary to implement the basic set of services introduced earlier are as follows: CR: Connection request CC: Connection confirm DR: Disconnect request DC: Disconnect confirm DT: Data AK: Acknowledgement The use of each TPDU will be described in relation to the various services previously described. Examples of Transport Protocols In this section we will look at specific transport protocols from the TCP/IP architecture. TCP (Transmission Control Protocol): Connection oriented, reliable end to end. Header format TCP connection establishment • Allow each end to know that the other exists and is willing to communicate • Negotiation of optional parameters for the communication • At both ends: allocation of transport entity resources is done at this time. First attempt to establish a connection: 2-way handshaking States of TCP entities: Active Open vs. Passive Open Actually, A sends SYN i and B sends SYN j … A 2-way handshake has problems with obsolete SYNs. Check the following scenario: SYN i has been traveling in the Internet for a while and suddenly shows up at the B side. There is a need for 3-way handshaking. Connection Termination Either or both sides can initiate the termination of the connection.
Termination can be abrupt or graceful (must accept incoming data until a FIN is received). One scenario: On one side: • A Transport Service (TS) user calls the Close request primitive • The transport entity (TE) sends a FIN segment, requesting termination • The connection on this side is placed in the FIN WAIT state • Continue to accept data from the other TE and deliver data to the user • The user does not send any more data from this side • When a FIN is received from the other TE, inform the user and close the connection The other side: • The TE receives the FIN segment • Informs the TS user, and goes into the CLOSE WAIT state • Continues to accept data from the TS user and transmit it • When the TS user issues the CLOSE primitive, the TE sends a FIN Now, the connection is closed. TCP Entity state diagram: states and transitions 5.1.5. Flow Control A fixed-size sliding window does not work properly, because of unreliable networks and variable conditions for the TCP entity … Credit scheme: make the size of the window variable, so the receiver can acknowledge data and still restrain the sender's window, because the receiver may not be ready to receive all that the sender could send after receiving the ACK. Credit Scheme: Data transfer as a stream of octets numbered modulo 2^32 Flow control by credit allocation of a number of octets Congestion Control (Issues) We will discuss a few issues here: Retransmission timer management • Estimate the round trip delay by observing the pattern of delays • Set the timer to a value somewhat greater than the estimate • Simple average • Exponential average • RTT variance estimation (Jacobson's algorithm) Window Management • Slow start 1. awnd = MIN[credit, cwnd] 2. Start the connection with cwnd = 1 3. Increment cwnd at each ACK, up to some maximum Problems … • Dynamic window sizing on congestion 1. When a timeout occurs, set the slow start threshold to half the current congestion window: ssthresh = cwnd/2 2. Set cwnd = 1 and slow start until cwnd = ssthresh, increasing cwnd by 1 for every ACK 3.
For cwnd >= ssthresh, increase cwnd by 1 for each RTT User Datagram Protocol (UDP) Connectionless service: segments may arrive out of order or corrupted, and are forwarded to the application as they arrive. Some applications, such as the Simple Network Management Protocol (SNMP), are implemented on top of UDP. Reliability is not an important criterion for these applications. Less overhead than TCP. Header format The checksum covers the UDP header and the data (it is computed over a pseudo-header as well, and is optional over IPv4). Very simple interface for programming.
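The slow start and dynamic window sizing rules listed above can be sketched as a small state machine. This is a simplified model of those rules, not a full TCP implementation: cwnd is counted in segments rather than octets, the initial ssthresh is an illustrative value, and ACK, RTT, and timeout events are made explicit.

```python
class CongestionWindow:
    """Simplified slow start / congestion avoidance (cwnd in segments)."""

    def __init__(self):
        self.cwnd = 1          # start the connection with cwnd = 1
        self.ssthresh = 16     # illustrative initial threshold

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1     # slow start: +1 per ACK (cwnd doubles each RTT)

    def on_rtt(self):
        if self.cwnd >= self.ssthresh:
            self.cwnd += 1     # congestion avoidance: +1 per RTT

    def on_timeout(self):
        self.ssthresh = max(self.cwnd // 2, 1)  # half the current window
        self.cwnd = 1                           # and slow start again

    def awnd(self, credit):
        return min(credit, self.cwnd)           # awnd = MIN[credit, cwnd]
```

Starting from cwnd = 1, fifteen ACKs grow the window to the threshold of 16; growth then slows to one segment per RTT, and a timeout halves the threshold and restarts slow start from cwnd = 1.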