
July 2012 Masters in Computer Application (MCA) - Semester 3 MCA3050 Advanced Computer Networks 4 Credits (Book ID: B1650)

Assignment Set 1 (60 Marks)

1. Explain the layers of the OSI model. What is the difference between a port address, a logical address, and a physical address?

Physical Layer: It coordinates the functions required to transmit a bit stream over a physical medium, and deals with the mechanical and electrical specifications of the interface and the transmission medium. It deals with the following:

- Representation of bits: the physical layer carries a stream of bits (0s and 1s) without any interpretation.
- Data rate: the transmission rate, i.e. the number of bits sent per second.
- Synchronization: the sender and receiver must be synchronized at the bit level, so that bits are sent and received in step.
- Physical topology: defines how devices are connected to form a network.

Data Link Layer: It is responsible for node-to-node delivery. It makes the physical layer appear error-free to the upper layers. The specific responsibilities of the data link layer include the following:

- Framing: the data link layer divides the stream of bits received from the network layer into manageable data units called frames.
- Physical addressing: the data link layer adds a header to each frame containing the physical addresses of the sender and receiver.
- Flow control: if the sender produces data faster than the receiver can absorb it, the receiver is overwhelmed. To avoid this, the data link layer imposes a flow control mechanism.
- Error control: the data link layer also provides error control at the bit level, detecting damaged or lost frames. Example: the cyclic redundancy check (CRC).
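As a concrete illustration of data link error control, the CRC's polynomial division can be sketched bit by bit. The CRC-8 generator polynomial below is an illustrative choice, not the one any particular link uses (Ethernet, for instance, uses CRC-32):

```python
def crc_remainder(data: bytes, poly: int = 0x107, width: int = 8) -> int:
    """Bit-by-bit CRC division (illustrative, not optimized).

    poly 0x107 encodes the CRC-8 generator x^8 + x^2 + x + 1.
    """
    reg = 0
    for byte in data:
        for bit in range(7, -1, -1):
            reg = (reg << 1) | ((byte >> bit) & 1)
            if reg & (1 << width):        # top bit set -> subtract generator
                reg ^= poly
    for _ in range(width):                # flush with zero bits so the whole
        reg <<= 1                         # message is divided
        if reg & (1 << width):
            reg ^= poly
    return reg

frame = b"HELLO"
fcs = crc_remainder(frame)               # sender appends this check value
# Receiver re-divides the frame plus check value: remainder 0 means no
# detected error, any other value means the frame was damaged in transit.
assert crc_remainder(frame + bytes([fcs])) == 0
```

Dividing the received frame-plus-check-value again and testing for a zero remainder is exactly the receiver-side check the text describes.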

Network Layer: This layer lies between the data link layer and the transport layer. It is responsible for the source-to-destination delivery of a packet, possibly across multiple networks (links). It deals with the following:

- Logical addressing: the physical address used by the data link layer handles addressing locally; for delivery outside the local network we need one more address, the logical address. The network layer adds a header to the packet coming from the upper layer that includes the logical addresses of the sender and receiver.
- Routing: routing devices (such as routers and gateways) forward data packets so that they reach their final destination.

Transport Layer: The transport layer is responsible for end-to-end delivery of the entire message. For added security, the transport layer may create a connection between the two end ports, involving three steps: connection establishment, data transfer, and connection release. The transport layer has more control over sequencing, flow, and error detection and correction. It deals with the following:

- Service-point addressing: used to deliver the entire message to the correct process on the destination host.
- Segmentation and reassembly: each transmittable segment carries a sequence number, which enables the transport layer to reassemble the message correctly upon arrival at the destination.
- Connection control: the service may be connectionless or connection-oriented. In connectionless service, each segment is treated as an independent unit; in connection-oriented service, the two end ports go through three steps: connection establishment, data transfer, and connection release.
- Flow control: the transport layer also provides flow control, performed end to end rather than across a single link.
- Error control: the transport layer also provides error control. The checksum, which is based on the concept of redundancy, is the error detection mechanism used by the transport layer.
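The redundancy idea behind the transport-layer checksum can be sketched with the 16-bit one's-complement sum that TCP and UDP use (RFC 1071); the sample data here is arbitrary:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement checksum (RFC 1071), as used by TCP/UDP."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # add 16-bit words
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return ~total & 0xFFFF                        # one's complement

segment = b"HELLOWORLD"
ck = internet_checksum(segment)
# The receiver sums the data plus the transmitted checksum; a zero
# result means the redundancy check passed (no error detected).
assert internet_checksum(segment + ck.to_bytes(2, "big")) == 0
```

The transmitted checksum is redundant information: it adds nothing to the message, but lets the receiver detect corruption by recomputing the sum.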

Session Layer: The services provided by the first three layers (physical, data link, and network) are not sufficient for some processes. The session layer provides extra services such as establishing, maintaining, and synchronizing the dialog between communicating systems. It deals with the following:

- Dialog control: allows two systems to enter into a dialog. Communication takes place in either half-duplex or full-duplex mode.
- Synchronization: the session layer allows a process to add checkpoints into a stream of data.

Presentation Layer: The presentation layer is concerned with the syntax and semantics of the information exchanged between two systems. It deals with the following:

- Encryption: for privacy of sensitive data, we need an encryption/decryption process. Encryption means that the sender transforms the original information into another form and sends the resulting message out over the network. Decryption reverses the process, transforming the message back to its original form.
- Compression: reduces the number of bits to be transmitted; the transmission may contain text, audio, or video.


Application Layer: The application layer provides user interfaces and support for services such as electronic mail, remote file access and transfer, shared database management, and other types of distributed information services. It deals with the following:

- File transfer, access, and management (FTAM)
- Mail services
- Directory services

Through the logical address the system identifies a network (source to destination); after the network is identified, the physical address is used to identify the host on that network, and the port address is used to identify the particular application running on the destination machine.

Logical address: The IP address of a system is called its logical address. This address is the combination of a Net ID and a Host ID, and is used by the network layer to identify a particular network (source to destination) among the networks. It can change when the host's position on the network changes, which is why it is called a logical address.

Physical address: Each system has a NIC (Network Interface Card) through which two systems are physically connected to each other with cables. The address of the NIC is called the physical address or MAC address. It is specified by the manufacturer of the card and is used by the data link layer.

Port address: Many applications run on a computer, and each application runs (logically) with a port number. The port number for an application is assigned by the kernel of the OS. This port number is called the port address.
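The logical and port addresses can be seen together in a small loopback sketch using Python's standard socket API (the physical/MAC address is handled below this API, by the data link layer; the addresses shown are local and illustrative):

```python
import socket

# The logical (IP) address identifies the host; the port address
# identifies the process on that host.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the kernel pick a free port
server.listen(1)
ip, port = server.getsockname()      # (logical address, port address)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((ip, port))           # destination = logical address + port
conn, _ = server.accept()
conn.sendall(b"hi")
assert client.recv(2) == b"hi"       # delivered to the right process
for s in (client, conn, server):
    s.close()
```

Note that the kernel, not the application, chose the port number here, which is exactly the assignment of port addresses described above.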

2. Explain FDM and TDM with examples.

In telecommunications, frequency-division multiplexing (FDM) is a technique by which the total bandwidth available in a communication medium is divided into a series of non-overlapping frequency sub-bands, each of which is used to carry a separate signal. This allows a single transmission medium, such as a cable or optical fiber, to be shared by many signals. An example of a system using FDM is cable television, in which many television channels are carried simultaneously on a single cable. FDM is also used by telephone systems to transmit multiple telephone calls through high-capacity trunk lines, by communications satellites to transmit multiple channels of data on uplink and downlink radio beams, and by broadband DSL modems to transmit large amounts of computer data through twisted-pair telephone lines, among many other uses.


How it works

At the source end, for each frequency channel, an electronic oscillator generates a carrier signal, a steady oscillating waveform at a single frequency (such as a sine wave) that serves to "carry" information. The carrier is much higher in frequency than the data signal. The carrier signal and the incoming data signal (called the baseband signal) are applied to a modulator circuit. The modulator alters some aspect of the carrier signal, such as its amplitude, frequency, or phase, with the data signal, "piggybacking" the data on the carrier.

Multiple modulated carriers at different frequencies are sent through the transmission medium, such as a cable or optical fiber. Each modulated carrier consists of a narrow band of frequencies, centered on the carrier frequency. The information from the data signal is carried in sidebands on either side of the carrier frequency. This band of frequencies is called the passband for the channel. As long as the carrier frequencies of separate channels are spaced far enough apart that their passbands do not overlap, the separate signals will not interfere with one another. Thus the available bandwidth is divided into "slots" or channels, each of which can carry a data signal.

At the destination end of the cable or fiber, for each channel, an electronic filter extracts the channel's signal from all the other channels. A local oscillator generates a signal at the channel's carrier frequency. The incoming signal and the local oscillator signal are applied to a demodulator circuit. This translates the data signal in the sidebands back to its original baseband frequency. An electronic filter removes the carrier frequency, and the data signal is output for use. Modern FDM systems often use sophisticated modulation methods that allow several data signals to be transmitted through each frequency channel.

For example, FDM can also be used to combine signals before final modulation onto a carrier wave.
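The frequency "slots" just described amount to simple band-allocation arithmetic, which can be sketched as follows (all frequencies here are illustrative and not taken from any standard):

```python
# FDM channel allocation sketch: divide a shared band into
# non-overlapping passbands separated by guard bands.
total_band = (300_000, 400_000)          # Hz: the shared medium's band
channel_bw = 8_000                        # Hz: one signal's passband width
guard = 2_000                             # Hz: spacing between passbands

channels = []
lo = total_band[0]
while lo + channel_bw <= total_band[1]:
    channels.append((lo, lo + channel_bw))   # one carrier's passband
    lo += channel_bw + guard

# Non-overlapping passbands are what keep the signals from interfering.
assert all(hi1 <= lo2 for (_, hi1), (lo2, _) in zip(channels, channels[1:]))
assert len(channels) == 10               # ten signals fit in this 100 kHz band
```

The guard band stands in for the spacing between carrier frequencies that keeps adjacent passbands (carrier plus sidebands) from overlapping.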
In this case the carrier signals are referred to as subcarriers: an example is stereo FM transmission, where a 38 kHz subcarrier is used to separate the left-right difference signal from the central left-right sum channel, prior to the frequency modulation of the composite signal. A television channel is divided into subcarrier frequencies for video, color, and audio. DSL uses different frequencies for voice and for upstream and downstream data transmission on the same conductors, which is also an example of frequency duplex.

Where frequency-division multiplexing is used to allow multiple users to share a physical communications channel, it is called frequency-division multiple access (FDMA). FDMA is the traditional way of separating radio signals from different transmitters.

In the 1860s and 70s, several inventors attempted FDM under the names of acoustic telegraphy and harmonic telegraphy. Practical FDM was only achieved in the electronic age. Meanwhile, their efforts led to an elementary understanding of electroacoustic technology, resulting in the invention of the telephone.

Time-division multiplexing (TDM) is a type of digital (or rarely analog) multiplexing in which two or more bit streams or signals are transferred appearing simultaneously as sub-channels in one communication channel, but are physically taking turns on the channel. The time domain is divided into several recurrent time slots of fixed length, one for each sub-channel. A sample, byte, or data block of sub-channel 1 is transmitted during time slot 1, sub-channel 2 during time slot 2, etc. One TDM frame consists of one time slot per sub-channel plus a synchronization channel and sometimes an error correction channel before the synchronization. After the last sub-channel, error correction, and synchronization, the cycle starts all over again with a new frame, starting with the second sample, byte, or data block from sub-channel 1, etc.

Time-division multiplexing was first developed in telegraphy: Émile Baudot developed a time-multiplexing system of multiple Hughes machines in the 1870s. (For the SIGSALY encryptor of 1943, see PCM.) In 1953 a 24-channel TDM was placed in commercial operation by RCA Communications to send audio information between RCA's facility at Broad Street, New York, their transmitting station at Rocky Point, and the receiving station at Riverhead, Long Island, New York. The communication was by a microwave system throughout Long Island. The experimental TDM system was developed by RCA Laboratories between 1950 and 1953.[1]

In 1962, engineers from Bell Labs developed the first D1 channel banks, which combined 24 digitised voice calls over a 4-wire copper trunk between Bell central office analogue switches. A channel bank sliced a 1.544 Mbit/s digital signal into 8,000 separate frames, each composed of 24 contiguous bytes. Each byte represented a single telephone call encoded into a constant bit rate signal of 64 kbit/s. Channel banks used a byte's fixed position (temporal alignment) in the frame to determine which call it belonged to.
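The slot-per-sub-channel frame structure described above can be sketched in a few lines. The one-byte sync marker and frame layout here are illustrative, not the real T1/D1 format:

```python
# Round-robin TDM sketch: one byte from each sub-channel per frame,
# preceded by a fixed synchronization byte (illustrative layout).
SYNC = b"\x7e"

def tdm_mux(sub_channels):
    """Interleave equal-length byte streams into framed TDM output."""
    frames = []
    for slot_bytes in zip(*sub_channels):
        frames.append(SYNC + bytes(slot_bytes))  # sync + one slot per channel
    return b"".join(frames)

def tdm_demux(stream, n_channels):
    """Recover the sub-channels from a byte's fixed position in each frame."""
    frame_len = 1 + n_channels
    outs = [bytearray() for _ in range(n_channels)]
    for i in range(0, len(stream), frame_len):
        frame = stream[i:i + frame_len]
        assert frame[:1] == SYNC                 # frame alignment check
        for ch, b in enumerate(frame[1:]):
            outs[ch].append(b)
    return [bytes(o) for o in outs]

muxed = tdm_mux([b"AAAA", b"BBBB", b"CCCC"])
assert tdm_demux(muxed, 3) == [b"AAAA", b"BBBB", b"CCCC"]
```

As in a channel bank, the demultiplexer relies purely on a byte's fixed position within the frame (temporal alignment) to decide which sub-channel it belongs to.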

Application examples

- The plesiochronous digital hierarchy (PDH) system, also known as the PCM system, for digital transmission of several telephone calls over the same four-wire copper cable (T-carrier or E-carrier) or fiber cable in the circuit-switched digital telephone network
- The synchronous digital hierarchy (SDH)/synchronous optical networking (SONET) network transmission standards that have replaced PDH
- The RIFF (WAV) audio standard, which interleaves left and right stereo signals on a per-sample basis
- The left-right channel splitting in use for stereoscopic liquid crystal shutter glasses

TDM can be further extended into the time division multiple access (TDMA) scheme, where several stations connected to the same physical medium, for example sharing the same frequency channel, can communicate. Application examples include:

- The GSM telephone system
- The Tactical Data Links Link 16 and Link 22

3. Explain the SONET STS-1 and SONET STS-3 frame structures.

Synchronous Optical Networking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized protocols that transfer multiple digital bit streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At low transmission rates data can also be transferred via an electrical interface. The method was developed to replace the Plesiochronous Digital Hierarchy (PDH) system for transporting large amounts of telephone calls and data traffic over the same fiber without synchronization problems.

SONET generic criteria are detailed in Telcordia Technologies Generic Requirements document GR-253-CORE.[1] Generic criteria applicable to SONET and other transmission systems (e.g., asynchronous fiber optic systems or digital radio systems) are found in Telcordia GR-499-CORE.[2]

SONET and SDH, which are essentially the same, were originally designed to transport circuit mode communications (e.g., DS1, DS3) from a variety of different sources, but they were primarily designed to support real-time, uncompressed, circuit-switched voice encoded in PCM format.[3] The primary difficulty in doing this prior to SONET/SDH was that the synchronization sources of these various circuits were different. This meant that each circuit was actually operating at a slightly different rate and with different phase. SONET/SDH allowed for the simultaneous transport of many different circuits of differing origin within a single framing protocol. SONET/SDH is not itself a communications protocol per se, but a transport protocol.

Due to SONET/SDH's essential protocol neutrality and transport-oriented features, SONET/SDH was the obvious choice for transporting the fixed-length Asynchronous Transfer Mode (ATM) frames, also known as cells. It quickly evolved mapping structures and concatenated payload containers to transport ATM connections.
In other words, for ATM (and eventually other protocols such as Ethernet), the internal complex structure previously used to transport circuit-oriented connections was removed and replaced with a large and concatenated frame (such as STS-3c) into which ATM cells, IP packets, or Ethernet frames are placed.
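As for the frame structures the question asks about: an STS-1 frame is a byte matrix of 9 rows by 90 columns (the first 3 columns carry transport overhead, the remainder the synchronous payload envelope), transmitted 8000 times per second; an STS-3 frame byte-interleaves three STS-1s into a 9-row by 270-column matrix. The standard rate arithmetic can be checked directly:

```python
# SONET frame-rate arithmetic (standard figures): each frame is a
# 9-row byte matrix sent 8000 times per second.
def sts_rate_bps(columns: int, rows: int = 9, frames_per_sec: int = 8000) -> int:
    return rows * columns * 8 * frames_per_sec   # bits per second

STS1 = sts_rate_bps(90)                 # 9 x 90 bytes per frame
STS3 = sts_rate_bps(270)                # 9 x 270 bytes (3 interleaved STS-1s)
assert STS1 == 51_840_000               # STS-1: 51.84 Mbit/s (OC-1)
assert STS3 == 155_520_000              # STS-3: 155.52 Mbit/s (OC-3)
assert STS3 == 3 * STS1                 # byte interleaving preserves the rate
```

Because every level transmits 8000 frames per second, higher STS levels simply widen the frame, which is what makes byte-interleaved multiplexing of lower levels straightforward.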

Racks of Alcatel STM-16 SDH add-drop multiplexers


Both SDH and SONET are widely used today: SONET in the United States and Canada, and SDH in the rest of the world. Although the SONET standards were developed before SDH, SONET is considered a variation of SDH because of SDH's greater worldwide market penetration. The SDH standard was originally defined by the European Telecommunications Standards Institute (ETSI), and is formalized as International Telecommunication Union (ITU) standards G.707,[4] G.783,[5] G.784,[6] and G.803.[7][8] The SONET standard was defined by Telcordia[1] and American National Standards Institute (ANSI) standard T1.105.

4. Describe the ISDN connection setup procedure according to the Q.931 protocol? ISDN Specifications ISDN has been standardized in a host of different specifications from groups such as the International Telecommunication Union (ITU), Telcordia (formerly Bellcore), and the American National Standards Institute (ANSI). These groups, along with the ISDN forum, are responsible for modeling the ISDN industry. Standards that are in the I group consist of standards set forth for ISDN, both narrow and broadband. The Q group of standards has to do with switching and signaling. The Q standards also include topics such as SS7 and Intelligent Networks (INs). Several standards are associated with ISDN, but the most common standards are as follows:

- I.430: BRI physical layer
- I.431: PRI physical layer
- Q.921: ISDN data link layer specification
- Q.931: call-control and signaling specification

The physical layer specifications for ISDN are I.430 and I.431. They identify the electrical, mechanical, and functional specifications of the circuits. I.430 specifies the basis for a BRI frame as 48 bits. The BRI frame is cycled 4000 times per second, which gives a total bandwidth of 192,000 bps (48 * 4000). Although 192 kbps is available on the BRI circuit, the subscriber typically only uses a maximum of 128 kbps; 144 kbps is possible if the D channel is also used for data, but D channel use depends on the provider. I.431 describes the use of either a T1 or E1 circuit for the electrical basis of a PRI circuit. There are specifications for a Japanese PRI (INS-1500), but it is not included in the ITU specification.

ITU-T Recommendation Q.931 is the ITU standard ISDN connection control signalling protocol, forming part of Digital Subscriber Signalling System No. 1.[1] Unlike connectionless systems like UDP, ISDN is connection oriented and uses explicit signalling to manage call state: Q.931. Q.931 typically does not carry user data. Q.931 does not have a direct equivalent in the Internet Protocol stack, but can be compared to SIP. Q.931 does not provide flow control or perform retransmission, since the underlying layers are assumed to be reliable and the circuit-oriented nature of ISDN allocates bandwidth in fixed increments of 64 kbit/s. Amongst other things, Q.931 manages connection setup and breakdown. Like TCP, Q.931 documents both the protocol itself and a protocol state machine.
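The BRI capacity figures above follow from simple arithmetic:

```python
# I.430 BRI frame arithmetic, using the figures quoted above.
frame_bits = 48
frames_per_second = 4000
raw_rate = frame_bits * frames_per_second
assert raw_rate == 192_000                    # 192 kbps on the wire

b_channels = 2 * 64_000                       # two 64-kbps B channels
d_channel = 16_000                            # 16-kbps D channel (signalling)
assert b_channels == 128_000                  # typical subscriber maximum
assert b_channels + d_channel == 144_000      # if the D channel carries data too
```

The gap between the 192 kbps raw rate and the 144 kbps of usable channels is framing and maintenance overhead in the I.430 frame.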

Q.931 was designed for ISDN call establishment, maintenance, and release of network connections between two DTEs on the ISDN D channel. Q.931 has more recently been used as part of the VoIP H.323 protocol stack (see H.225.0) and in modified form in some mobile phone transmission systems and in ATM. A Q.931 frame contains the following elements:

- Protocol discriminator (PD): specifies which signaling protocol is used for the connection (e.g. PD = 8 for DSS1).
- Call reference value (CR): addresses the different connections which can exist simultaneously. The value is valid only during the actual time period of the connection.
- Message type (MT): specifies the type of a layer 3 message out of the Q.931-defined message type set for call control (e.g. SETUP). There are messages defined for call setup, call release, and the control of call features.
- Information elements (IE): specify further information associated with the actual message. An IE contains the IE name (e.g. bearer capability), its length, and a variable field of contents.
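These leading fields can be pulled apart with a few lines of code. This is a minimal sketch: the sample frame below is hypothetical, and any information elements after the message type are ignored:

```python
# Parse the leading Q.931 fields: protocol discriminator, call
# reference (length octet + value), and message type.
SETUP = 0x05                                  # Q.931 SETUP message type

def parse_q931(buf: bytes):
    pd = buf[0]                               # protocol discriminator (8 = DSS1)
    cr_len = buf[1] & 0x0F                    # call reference length in octets
    cr = int.from_bytes(buf[2:2 + cr_len], "big")
    mt = buf[2 + cr_len] & 0x7F               # message type octet
    return pd, cr, mt

msg = bytes([0x08, 0x01, 0x42, SETUP])        # hypothetical one-octet-CR SETUP
assert parse_q931(msg) == (0x08, 0x42, SETUP)
```

Because the call reference carries its own length, the parser can locate the message type without knowing in advance whether the interface is BRI or PRI.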

ISDN's Q.931 is the Layer 3 specification that is responsible for call control and management. These functions include call setup, teardown, and requests for services from Layer 2. The purpose of this section is not to go over all of the possible state primitives, but to give you a good understanding of how Q.931 manages ISDN call control.

Q.931 uses a message-based system to control the ingress and egress call functions of an ISDN circuit. Using the D channel as a vehicle for this transport, ISDN Q.931 signaling is referred to as a common channel signaling (CCS) service. As a review, CCS transmits signaling from end to end in an out-of-band way that is non-intrusive to the bearer traffic. In contrast, channel associated signaling (CAS) is commonly found on T1 circuits and is interwoven into the bearer signal stream. Some argue that ISDN is not CCS because the signaling and bearer traffic take the same path, but remember that, although it is on the same path, it uses a different logical channel. For that reason, ISDN is physically in-band but logically out-of-band.

To transport the number of messages that are associated with Q.931, it is necessary to have a standardized message format. The beauty of Q.931 is that your equipment doesn't have to process everything that is in the message format: any information that your equipment doesn't understand is merely ignored. The message format splits values into four subsections:

- Protocol discriminator
- Call reference length indication
- Message type
- Information element

Figure 8-12 shows the basic Q.931 message format with the order of the octets, beginning from top to bottom.

Figure 8-12: Q.931 Message Format


The first field in the Q.931 message is the protocol discriminator. The protocol discriminator distinguishes user/network call-control messages from other message types that might be found on the network. The protocol discriminator is coded as 00010000 for user/network call-control messages, with the least significant bit (LSB) read first (right to left).

The next octet is the call reference length indication octet. This octet specifies how many octets the call reference value can encompass, and it is considered a portion of the actual call reference. The local end of the ISDN connection uses the call reference to maintain a record of all call requests that are processed. To support BRI access, the call reference must be at least two octets in length, and at least three octets in length for PRI access. Figure 8-13 shows the octets involved with keeping track of ISDN call instances.

Figure 8-13: Q.931 Call Reference Octets

There is a flag in the call reference field that delineates between the originating and terminating side of a connection. If it is set to 0, the message is being sent from the origination point; if it is set to 1, the message is being sent from the terminating side of the connection.

The next portion of the message identifies the actual message type. Refer to Table 8-4 for a listing of the message types and how they are coded in the field. Bits 1 to 5 in the field identify the actual message, and the last 3 bits identify the message class. For example, if the call clearing message class is 010 and the release message type is 01101, they are read together as (bit 1 on the right) 01001101.
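The class-and-type encoding in that example can be checked directly; this sketch follows the bit layout described above (5 low bits for the message, the bits above them for the class):

```python
# Compose a Q.931 message-type octet from its class and type bits.
def message_type(msg_class: int, msg_id: int) -> int:
    return (msg_class << 5) | msg_id      # class bits sit above 5 type bits

# Call clearing class 010 combined with RELEASE type 01101:
RELEASE = message_type(0b010, 0b01101)
assert RELEASE == 0b01001101              # the octet from the example (0x4D)
```

Shifting the class left by five and OR-ing in the type reproduces exactly the "read together" octet from the worked example.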

5. What is ATM? Discuss its architecture with a suitable diagram.

Asynchronous Transfer Mode (ATM) is, according to the ATM Forum, "a telecommunications concept defined by ANSI and ITU (formerly CCITT) standards for carriage of a complete range of user traffic, including voice, data, and video signals," and is designed to unify telecommunication and computer networks. It uses asynchronous time-division multiplexing, and it encodes data into small, fixed-sized cells. This differs from approaches such as the Internet Protocol or Ethernet that use variable-sized packets or frames. ATM provides data link layer services that run over a wide range of OSI physical layer links.

ATM has functional similarity with both circuit-switched networking and small-packet-switched networking. It was designed for a network that must handle both traditional high-throughput data traffic (e.g., file transfers) and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.

ATM is a core protocol used over the SONET/SDH backbone of the public switched telephone network (PSTN) and Integrated Services Digital Network (ISDN), but its use is declining in favour of All IP.

ATM was developed to meet the needs of the Broadband Integrated Services Digital Network, as defined in the late 1980s.


Why cells?

Consider a speech signal reduced to packets and forced to share a link with bursty data traffic (traffic with some large data packets). No matter how small the speech packets could be made, they would always encounter full-size data packets, and under normal queuing conditions might experience maximum queuing delays. That is why all packets, or "cells," should have the same small size. In addition, the fixed cell structure means that ATM can be readily switched by hardware without the inherent delays introduced by software-switched and routed frames. Thus, the designers of ATM utilized small data cells to reduce jitter (delay variance, in this case) in the multiplexing of data streams.

Reduction of jitter (and also end-to-end round-trip delays) is particularly important when carrying voice traffic, because the conversion of digitized voice into an analogue audio signal is an inherently real-time process, and to do a good job, the decoder (codec) that does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or guess; and if the data is late, it is useless, because the time period when it should have been converted to a signal has already passed.

At the time of the design of ATM, 155 Mbit/s Synchronous Digital Hierarchy (SDH) with 135 Mbit/s payload was considered a fast optical network link, and many Plesiochronous Digital Hierarchy (PDH) links in the digital network were considerably slower, ranging from 1.544 to 45 Mbit/s in the USA, and 2 to 34 Mbit/s in Europe. At this rate, a typical full-length 1500 byte (12,000-bit) data packet would take 77.42 μs to transmit. On a lower-speed link, such as a 1.544 Mbit/s T1 line, a 1500 byte packet would take up to 7.8 milliseconds.
A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over, in addition to any packet generation delay in the shorter speech packet. This was clearly unacceptable for speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce good-quality sound. A packet voice system can produce this low jitter in a number of ways:

- Have a playback buffer between the network and the codec, one large enough to tide the codec over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced by passage through the buffer would require echo cancellers even in local networks; this was considered too expensive at the time. Also, it would have increased the delay across the channel, and conversation is difficult over high-delay channels.
- Build a system that can inherently provide low jitter (and minimal overall delay) to traffic that needs it.
- Operate on a 1:1 user basis (i.e., a dedicated pipe).

The design of ATM aimed for a low-jitter network interface. However, "cells" were introduced into the design to provide short queuing delays while continuing to support datagram traffic. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later.

The choice of 48 bytes was political rather than technical.[5] When the CCITT (now ITU-T) was standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice; parties from Europe wanted 32-byte payloads because the small size (and therefore short transmission times) simplifies voice applications with respect to echo cancellation. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these 53-byte cells instead of packets, which reduced worst-case cell contention jitter by a factor of almost 30, reducing the need for echo cancellers.
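The figures in this discussion are easy to verify:

```python
# Cell-size and transmission-time arithmetic from the discussion above.
payload, header = 48, 5
cell_bytes = payload + header
assert cell_bytes == 53

# Transmission time of a full-length 1500-byte (12,000-bit) packet:
bits = 1500 * 8
assert round(bits / 155e6 * 1e6, 2) == 77.42   # microseconds at 155 Mbit/s
assert round(bits / 1.544e6 * 1e3, 1) == 7.8   # milliseconds on a T1 line

# Worst-case contention jitter shrinks with the unit of multiplexing:
assert round(1500 / cell_bytes) == 28          # "a factor of almost 30"
```

A cell arriving behind one 53-byte cell waits 28 times less than a packet arriving behind one 1500-byte packet, which is exactly the jitter reduction the designers were after.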

6. Discuss the X.25 structure with the help of a suitable diagram. Also explain the TCP retransmission strategy.

X.25 is an ITU-T standard protocol suite for packet-switched wide area network (WAN) communication. An X.25 WAN consists of packet-switching exchange (PSE) nodes as the networking hardware, and leased lines, plain old telephone service connections, or ISDN connections as physical links. X.25 is a family of protocols that was popular during the 1980s with telecommunications companies and in financial transaction systems such as automated teller machines.

X.25 was originally defined by the International Telegraph and Telephone Consultative Committee (CCITT, now ITU-T) in a series of drafts[1] and finalized in a publication known as The Orange Book in 1976.[2] While X.25 has been, to a large extent, replaced by less complex protocols, especially the Internet Protocol (IP), the service is still used and available in niche and legacy applications.

X.25 is one of the oldest packet-switched services available. It was developed before the OSI Reference Model.[3] The protocol suite is designed as three conceptual layers, which correspond closely to the lower three layers of the seven-layer OSI model.[4] It also supports functionality not found in the OSI network layer.[5][6]

X.25 was developed in the ITU-T (formerly CCITT) Study Group VII based upon a number of emerging data network projects.[7] Various updates and additions were worked into the standard, eventually recorded in the ITU series of technical books describing the telecommunication systems. These books were published every fourth year with different-colored covers. The X.25 specification is only part of the larger set of X-Series[8] specifications on public data networks.[9] The public data network was the common name given to the international collection of X.25 providers. Their combined network had large global coverage during the 1980s and into the 1990s.[10]


Publicly accessible X.25 networks (Compuserve, Tymnet, Euronet, PSS, Datapac, Datanet 1 and Telenet) were set up in most countries during the 1970s and 80s to lower the cost of accessing various online services. Beginning in the early 1990s in North America, use of X.25 networks (predominantly Telenet and Tymnet)[10] began being replaced with the Frame Relay service offered by national telephone companies.[11] Most systems that required X.25 now utilize TCP/IP; however, it is possible to transport X.25 over IP when necessary.[12]

X.25 networks are still in use throughout the world. A variant called AX.25 is also used widely by amateur packet radio. Racal Paknet, now known as Widanet, is still in operation in many regions of the world, running on an X.25 protocol base. In some countries, like the Netherlands or Germany, it is possible to use a stripped version of X.25 via the D-channel of an ISDN-2 (or ISDN BRI) connection for low-volume applications such as point-of-sale terminals, but the future of this service in the Netherlands is uncertain. Additionally, X.25 is still under heavy use in the aeronautical business (especially in the Asian region), even though a transition to modern protocols like X.400 is without option as X.25 hardware becomes increasingly rare and costly. As recently as March 2006, the National Airspace Data Interchange Network has used X.25 to interconnect remote airfields with Air Route Traffic Control Centers.

France is one of the only countries that still has a commercial end-user service, known as Minitel, which is based on Videotex, which in turn runs on X.25. In 2002 Minitel had about 9 million users, and in 2011 it still accounted for about 2 million users in France, though France Télécom has announced it will completely shut down the service by 30 June 2012.

The Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite.
TCP is one of the two original components of the suite, complementing the Internet Protocol (IP), and therefore the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered delivery of a stream of octets from a program on one computer to another program on another computer. TCP is the protocol used by major Internet applications such as the World Wide Web, email, remote administration and file transfer. Other applications, which do not require reliable data stream service, may use the User Datagram Protocol (UDP), which provides a datagram service that emphasizes reduced latency over reliability. The protocol corresponds to the transport layer of the TCP/IP suite.

TCP provides a communication service at an intermediate level between an application program and the Internet Protocol (IP). That is, when an application program desires to send a large chunk of data across the Internet using IP, instead of breaking the data into IP-sized pieces and issuing a series of IP requests, the software can issue a single request to TCP and let TCP handle the IP details. IP works by exchanging pieces of information called packets. A packet is a sequence of octets and consists of a header followed by a body. The header describes the packet's destination and, optionally, the routers to use for forwarding until it arrives at its destination. The body contains the data IP is transmitting.

Due to network congestion, traffic load balancing, or other unpredictable network behavior, IP packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost data, rearranges out-of-order data, and even helps minimize network congestion to reduce the occurrence of the other problems. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the application program. Thus, TCP abstracts the application's communication from the underlying networking details. TCP is utilized extensively by many of the Internet's most popular applications, including the World Wide Web (WWW), E-mail, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and some streaming media applications.
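The retransmission strategy described above can be sketched as a timeout-driven resend loop with exponential backoff of the retransmission timeout (RTO). This is a minimal simulation, not real socket code; the lossy channel below is a stand-in for the network:

```python
def send_with_retransmission(segment, deliver, rto=1.0, max_tries=5):
    # Send the segment, wait up to one RTO for an ACK, and on each
    # timeout double the RTO (exponential backoff) before resending.
    tries = 0
    while tries < max_tries:
        tries += 1
        if deliver(segment):          # True means an ACK arrived in time
            return tries, rto
        rto *= 2                      # back off before retransmitting
    raise TimeoutError("segment was never acknowledged")

# A simulated channel that loses the first two transmissions.
outcomes = iter([False, False, True])
tries, final_rto = send_with_retransmission(b"data", lambda seg: next(outcomes))
print(tries, final_rto)  # 3 4.0
```

Real TCP additionally derives the initial RTO from measured round-trip times and retransmits on duplicate ACKs (fast retransmit), but the timeout-and-backoff loop is the core of the strategy.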

July 2012 Masters in Computer Application (MCA) - Semester 3 MCA3050 Advanced Computer Networks 4 Credits (Book ID: B1650) Assignment Set 2 (60 Marks)

1. Explain the functioning of the Open Shortest Path First (OSPF) protocol?

Open Shortest Path First (OSPF) is a link-state routing protocol for Internet Protocol (IP) networks. It uses a link-state routing algorithm and falls into the group of interior routing protocols, operating within a single autonomous system (AS). It is defined as OSPF Version 2 in RFC 2328 (1998) for IPv4. The updates for IPv6 are specified as OSPF Version 3 in RFC 5340 (2008). OSPF is perhaps the most widely used interior gateway protocol (IGP) in large enterprise networks. IS-IS, another link-state dynamic routing protocol, is more common in large service provider networks. The most widely used exterior gateway protocol is the Border Gateway Protocol (BGP), the principal routing protocol between autonomous systems on the Internet.

Overview

OSPF is an interior gateway protocol that routes Internet Protocol (IP) packets solely within a single routing domain (autonomous system). It gathers link-state information from available routers and constructs a topology map of the network. The topology determines the routing table presented to the Internet Layer, which makes routing decisions based solely on the destination IP address found in IP packets. OSPF was designed to support variable-length subnet masking (VLSM) and Classless Inter-Domain Routing (CIDR) addressing models.

OSPF detects changes in the topology, such as link failures, very quickly and converges on a new loop-free routing structure within seconds. It computes the shortest-path tree for each route using a method based on Dijkstra's algorithm, a shortest-path-first algorithm. The routing policies OSPF uses to construct a routing table are governed by link cost factors (external metrics) associated with each routing interface. Cost factors may be the distance of a router (round-trip time), network throughput of a link, or link availability and reliability, expressed as simple unitless numbers. This provides a dynamic process of traffic load balancing between routes of equal cost.

An OSPF network may be structured, or subdivided, into routing areas to simplify administration and optimize traffic and resource utilization. Areas are identified by 32-bit numbers, expressed either simply in decimal, or often in octet-based dot-decimal notation, familiar from IPv4 address notation. By convention, area 0 (zero) or 0.0.0.0 represents the core or backbone region of an OSPF network. The identifications of other areas may be chosen at will; often, administrators select the IP address of a main router in an area as the area's identification. Each additional area must have a direct or virtual connection to the backbone OSPF area. Such connections are maintained by an interconnecting router, known as an area border router (ABR). An ABR maintains separate link-state databases for each area it serves and maintains summarized routes for all areas in the network.

OSPF does not use a TCP/IP transport protocol (UDP, TCP), but is encapsulated directly in IP datagrams with protocol number 89. This is in contrast to other routing protocols, such as the Routing Information Protocol (RIP) or the Border Gateway Protocol (BGP). OSPF handles its own error detection and correction functions. OSPF uses multicast addressing for route flooding on a broadcast domain. For non-broadcast networks, special provisions for configuration facilitate neighbor discovery. OSPF multicast IP packets never traverse IP routers (never traverse broadcast domains); they never travel more than one hop.
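The shortest-path-first computation described above can be sketched with Dijkstra's algorithm over a small link-state database. The four-router topology and its unitless costs below are hypothetical, chosen only to illustrate the calculation:

```python
import heapq

def shortest_path_tree(graph, root):
    # Dijkstra-style SPF computation as OSPF performs it: from the
    # link-state database (a dict of per-neighbor link costs), find
    # the least-cost distance from `root` to every other router.
    dist = {root: 0}
    prev = {}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry; a shorter path was found
        for neighbor, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                prev[neighbor] = node  # remember the upstream hop
                heapq.heappush(heap, (nd, neighbor))
    return dist, prev

# Hypothetical link-state database: router -> {neighbor: link cost}.
lsdb = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
dist, prev = shortest_path_tree(lsdb, "A")
print(dist["D"])  # 4  (path A -> B -> C -> D)
```

Every OSPF router runs this same computation over an identical link-state database, which is why all routers converge on consistent, loop-free routes.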
OSPF reserves the multicast addresses 224.0.0.5 for IPv4 or FF02::5 for IPv6 (all SPF/link-state routers, also known as AllSPFRouters) and 224.0.0.6 for IPv4 or FF02::6 for IPv6 (all Designated Routers, AllDRouters), as specified in RFC 2328 and RFC 5340.[4] For routing multicast IP traffic, OSPF supports the Multicast Open Shortest Path First protocol (MOSPF) as defined in RFC 1584.[5] Neither Cisco nor Juniper Networks include MOSPF in their OSPF implementations. PIM (Protocol Independent Multicast), in conjunction with OSPF or other IGPs (interior gateway protocols), is widely deployed.

The OSPF protocol, when running on IPv4, can operate securely between routers, optionally using a variety of authentication methods to allow only trusted routers to participate in routing. OSPFv3, running on IPv6, no longer supports protocol-internal authentication. Instead, it relies on IPv6 protocol security (IPsec). OSPF version 3 introduces modifications to the IPv4 implementation of the protocol.[2] Except for virtual links, all neighbor exchanges use IPv6 link-local addressing exclusively. The IPv6 protocol runs per link, rather than based on the subnet. All IP prefix information has been removed from the link-state advertisements and from the Hello discovery packet, making OSPFv3 essentially protocol-independent. Despite the expanded IP addressing to 128 bits in IPv6, area and router identifications are still based on 32-bit values.

The Open Shortest Path First (OSPF) protocol is the most widely used example of a link-state routing protocol. It is in wide use as an interior gateway protocol on the Internet and many other
networks. The latest version of this public protocol is version 3, as specified in RFC 5340, released in 2008, which includes support for IPv6. The Open Shortest Path First algorithm operates similarly to Dijkstra's algorithm but adds a system of designated (primary) and backup routers. Routers are selected for these roles based on their priority number; routers with a priority of 0 cannot be designated or backup routers. The designated router for an area is responsible for sending Link State Advertisements (LSAs) to all other area nodes. OSPF routing packets on an OSPF-routed network have a nine-field header, illustrated in Figure 9.9. OSPF packet types include HELLO, database description, link state request, link state update, and link state acknowledgment.

OSPF is used on autonomous systems (AS). Autonomous systems are one or more networks under a common administrative structure. OSPF functions not only as the interior gateway routing protocol for the AS, but it can also send and receive routes from other autonomous systems. Each network in the AS is an area within a hierarchy defined within the AS, each area being a collection of contiguous hosts. In OSPF, a routing domain is an alternative description for all systems in an AS that share the same topological map. OSPF partitions areas into separate topologies so that each area is kept unaware of another area's routing traffic. This system is meant to lower the amount of overall network traffic and speed up the discovery process of shortest routes for an individual area. Collections of areas are connected by OSPF border routers in an OSPF backbone. The backbone itself is organized as an OSPF area, and routing information for that area is also separate from the areas the backbone connects. It is possible to organize an OSPF backbone so that the backbone is composed of two or more unconnected groups.
The backbone is made contiguous by defining a virtual link through routers in a non-backbone area to serve as the connection between backbone groups. The backbone of an OSPF system composed of border routers communicates with other exterior gateway protocols (EGPs) such as the Border Gateway Protocol (BGP) or the Exterior Gateway Protocol (EGP). Figure 9.10 shows an OSPF network with several areas, a backbone, and a virtual link.

3. What is symmetric key cryptography? Describe one symmetric key algorithm.

Symmetric key encryption uses the same key, called the secret key, for both encryption and decryption. Users exchanging data keep this key to themselves. A message encrypted with a secret key can be decrypted only with the same secret key. The algorithm used for symmetric key encryption is called a secret-key algorithm. Since secret-key algorithms are mostly used for encrypting the content of the message, they are also called content-encryption algorithms.

The major vulnerability of a secret-key algorithm is the need for sharing the secret key. One way of solving this is by deriving the same secret key at both ends from a user-supplied text string (password); the algorithm used for this is called a password-based encryption algorithm. Another solution is to securely send the secret key from one end to the other end. This is done using another class of encryption called an asymmetric algorithm, which is discussed later.

The strength of symmetric key encryption depends on the size of the key used. For the same algorithm, encryption using a longer key is tougher to break than encryption using a smaller key. The strength of the key is not linear in the length of the key but doubles with each additional bit. Following are some popular secret-key algorithms and the key sizes that they use:

RC2 - 64 bits
DES - 64 bits
3DES - 192 bits
AES - 256 bits
IDEA - 128 bits
CAST - 128 bits (CAST256 uses a 256-bit key)

Symmetric-key algorithms are a class of algorithms for cryptography that use the same cryptographic keys for both encryption of plaintext and decryption of ciphertext. The keys may be identical, or there may be a simple transformation to go between the two keys. The keys, in practice, represent a shared secret between two or more parties that can be used to maintain a private information link. This requirement that both parties have access to the secret key is one of the main drawbacks of symmetric key encryption, in comparison to public-key encryption. Other terms for symmetric-key encryption are secret-key, single-key, shared-key, one-key, and private-key encryption. Use of the last and first terms can create ambiguity with similar terminology used in public-key cryptography.


Types of symmetric-key algorithms

Symmetric-key algorithms can be divided into stream ciphers and block ciphers. Stream ciphers encrypt the bits of the message one at a time, and block ciphers take a number of bits and encrypt them as a single unit. Blocks of 64 bits have been commonly used. The Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001, uses 128-bit blocks. Some examples of popular and well-respected symmetric algorithms include Twofish, Serpent, AES (Rijndael), Blowfish, CAST5, RC4, 3DES, and IDEA.
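The defining property of a symmetric cipher, that the same key both encrypts and decrypts, can be illustrated with a deliberately insecure toy stream cipher. A repeating-key XOR is used here purely for illustration; it is NOT a real or secure algorithm:

```python
import itertools

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR each data byte with a repeating key
    # stream. Because XOR is its own inverse, encryption and
    # decryption are literally the same operation with the same key.
    # (Illustrative only -- repeating-key XOR is trivially broken.)
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

secret = b"shared-key"
ciphertext = xor_stream(secret, b"attack at dawn")
plaintext = xor_stream(secret, ciphertext)  # same key recovers the message
print(plaintext)  # b'attack at dawn'
```

Real stream ciphers such as RC4 differ in how they generate the key stream (from a pseudorandom generator seeded by the key rather than by repeating it), but the XOR-with-keystream structure is the same.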

Cryptographic primitives based on symmetric ciphers

Symmetric ciphers are often used to build cryptographic primitives other than just encryption. Encrypting a message does not guarantee that the message is not changed while encrypted. Hence, a message authentication code is often added to a ciphertext to ensure that changes to the ciphertext will be noted by the receiver. Message authentication codes can be constructed from symmetric ciphers (e.g. CBC-MAC). However, symmetric ciphers can also be used for non-repudiation purposes, per the ISO 13888-2 standard. Another application is to build hash functions from block ciphers. See one-way compression function for descriptions of several such methods.
4. Explain in detail the Secure Socket Layer. Describe the four types of SSL protocols.

The SSL protocol was originally developed by Netscape to ensure the security of data transported and routed through HTTP, LDAP or POP3 application layers. SSL is designed to make use of TCP as a communication layer to provide a reliable end-to-end secure and authenticated connection between two points over a network (for example, between the service client and the server). Although SSL can be used to protect data in transit for any network service, it is used mostly in HTTP server and client applications. Today, almost every available HTTP server can support an SSL session, while IE and Netscape Navigator browsers are provided with SSL-enabled client software.

The Secure Sockets Layer (SSL)

Because nearly all businesses (as well as government agencies and individuals) have websites, great enthusiasm exists for setting up facilities on the Web for electronic commerce. Of course, there are major security issues involved here that need to be addressed. Nobody wants to send their credit card number over the Internet unless they have a guarantee that only the intended recipient will receive it. As businesses begin to see the threats of the Internet to electronic commerce, the demand for secure web pages grows.

A number of approaches to providing Web security are possible. The various approaches are similar in many ways but may differ with respect to their scope of applicability and relative location within the TCP/IP protocol stack. For example, we can have security at the IP level, making it transparent to end users and applications. However, another relatively general-purpose solution is to implement security just above TCP. The foremost example of this approach is the Secure Sockets Layer (SSL) and the follow-on Internet standard known as Transport Layer Security (TLS). This chapter looks at SSL, which was originated by Netscape.
The Internet standard, TLS, can be viewed essentially as SSLv3.1 and is very close to and backward compatible with SSLv3. We will mainly be interested in SSLv3 at present.


SSL objectives and architecture

Which problems does SSL target? The main objectives for SSL are:

Authenticating the client and server to each other: the SSL protocol supports the use of standard public-key cryptographic techniques (public-key encryption) to authenticate the communicating parties to each other. Though the most frequent application consists in authenticating the server on the basis of a certificate, SSL may also use the same methods to authenticate the client.

Ensuring data integrity: during a session, data cannot be either intentionally or unintentionally tampered with.

Securing data privacy: data in transport between the client and the server must be protected from interception and be readable only by the intended recipient. This prerequisite is necessary for both the data associated with the protocol itself (securing traffic during negotiations) and the application data that is sent during the session itself.

SSL is in fact not a single protocol but rather a set of protocols that can additionally be further divided into two layers:

1. the protocol to ensure data security and integrity: this layer is composed of the SSL Record Protocol;

2. the protocols that are designed to establish an SSL connection: three protocols are used in this layer: the SSL Handshake Protocol, the SSL Change Cipher Spec Protocol and the SSL Alert Protocol.

SSL uses these protocols to address the tasks described above. The SSL Record Protocol is responsible for data encryption and integrity. As can be seen in Figure 2, it is also used to encapsulate data sent by other SSL protocols, and therefore it is also involved in the tasks associated with the SSL check data. The other three protocols cover the areas of session management, cryptographic parameter management and transfer of SSL messages between the client and the server. Prior to going into a more detailed discussion of the role of individual protocols and their functions, let us describe two fundamental concepts related to the use of SSL. The concepts mentioned above are fundamental for a connection between the client and the server, and they also encompass a series of attributes. Let's try to give some more details:

Connection: this is a logical client/server link, associated with the provision of a suitable type of service. In SSL terms, it must be a peer-to-peer connection between two network nodes.

Session: this is an association between a client and a server that defines a set of parameters such as algorithms used, session number etc. An SSL session is created by the Handshake Protocol, which allows parameters to be shared among the connections made between the server and the client, and sessions are used to avoid negotiation of new parameters for each connection. This means that a single session is shared among multiple SSL connections between the client and the server. In theory, it may also be possible that multiple sessions are shared by a single connection, but this feature is not used in practice.

The concepts of an SSL session and connection involve several parameters that are used for SSL-enabled communication between the client and the server. During the negotiations of the handshake protocol, the encryption methods are established and a series of


parameters of the session state are subsequently used within the session. A session state is defined by the following parameters:

Session identifier: an identifier generated by the server to identify a session with a chosen client.
Peer certificate: X.509 certificate of the peer.
Compression method: a method used to compress data prior to encryption.
Algorithm specification, termed CipherSpec: specifies the bulk data encryption algorithm (for example, DES) and the hash algorithm (for example, MD5) used during the session.
Master secret: 48 bytes of data constituting a secret shared between the client and server.
Is resumable: a flag indicating whether the session can be used to initiate new connections.

According to the specification, the SSL connection state is defined by the following parameters:

Server write MAC secret: the secret key used for data written by the server.
Server write key: the bulk cipher key for data encrypted by the server and decrypted by the client.
Client write key: the bulk cipher key for data encrypted by the client and decrypted by the server.
Sequence number: sequence numbers maintained separately by the server for messages transmitted and received during the data session.

The abbreviation MAC used in the above definitions means Message Authentication Code, which is used for transmission of data during the SSL session. The role of the MAC will be explained further when discussing the record protocols. A brief description of the terms was necessary to be able to explain the next issues connected with the functioning of the SSL protocol, namely the SSL Record Protocol.
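As a rough illustration, the session-state and connection-state parameters listed above can be modeled as plain data records. The field names below paraphrase the parameters; they are an illustrative sketch, not an actual SSL implementation or API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SSLSessionState:
    # Paraphrases the session-state parameters described above.
    session_id: bytes
    peer_certificate: Optional[bytes]  # X.509 certificate, if any
    compression_method: str
    cipher_spec: str                   # bulk cipher + hash algorithm
    master_secret: bytes               # 48-byte shared secret
    is_resumable: bool = True          # may this session seed new connections?

@dataclass
class SSLConnectionState:
    # Per-connection keying material and bookkeeping; several
    # connections may share one SSLSessionState.
    server_write_mac_secret: bytes
    server_write_key: bytes
    client_write_key: bytes
    sequence_number: int = 0

session = SSLSessionState(
    session_id=b"\x01\x02",
    peer_certificate=None,
    compression_method="null",
    cipher_spec="DES + MD5",
    master_secret=b"\x00" * 48,
)
print(session.is_resumable)  # True
```

The split into two records mirrors the design point in the text: negotiating a session is expensive, so its parameters are stored once and reused across the cheaper per-connection states.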

6. Describe the operation of RSVP. What is packet filtering? What are the advantages of packet filtering?

RSVP Operation Over IP Tunnels

Several mechanisms exist for tunneling IP datagrams through IPv4 networks: [RFC1701] describes a generic routing encapsulation, while [RFC1702] applies this encapsulation to IPv4. Finally, [ESP] describes a mechanism that can be used to tunnel an encrypted IP datagram. From the perspective of traditional best-effort IP packet delivery, a tunnel behaves as any other link. Packets enter one end of the tunnel, and are delivered to the other end unless resource overload or error causes them to be lost.

The RSVP setup protocol [RFC2205] is one component of a framework designed to extend IP to support multiple, controlled classes of service over a wide variety of link-level technologies. To deploy this technology with maximum flexibility, it is desirable for tunnels to act as RSVP-controllable links within the network. A tunnel, and in fact any sort of link, may participate in an RSVP-aware network in one of three ways, depending on the capabilities of the equipment from which the tunnel is constructed and the desires of the operator.

1. The (logical) link may not support resource reservation or QoS control at all. This is a best-effort link. We refer to this as a best-effort or type 1 tunnel in this note.

2. The (logical) link may be able to promise that some overall level of resources is available to carry traffic, but not to allocate resources specifically to individual data flows. A
configured resource allocation over a tunnel is an example of this. We refer to this case as a type 2 tunnel in this note.

3. The (logical) link may be able to make reservations for individual end-to-end data flows. We refer to this case as a type 3 tunnel. Note that the key feature that distinguishes type 3 tunnels from type 2 tunnels is that in the type 3 tunnel, new tunnel reservations are created and torn down dynamically as end-to-end reservations come and go.

Type 1 tunnels exist when at least one of the routers comprising the tunnel endpoints does not support the scheme we describe here. In this case, the tunnel acts as a best-effort link. Our goal is simply to make sure that RSVP messages traverse the link correctly, and the presence of the non-controlled link is detected, as required by the integrated services framework. When the two end points of the tunnel are capable of supporting RSVP over tunnels, we would like to have proper resources reserved along the tunnel. Depending on the requirements of the situation, this might mean that one client's data flow is placed into a larger aggregate reservation (type 2 tunnels) or that a new, separate reservation is created within the tunnel for that flow (type 3 tunnels).

Packet filtering is a network security mechanism that works by controlling what data can flow to and from a network. We provide a very brief introduction to high-level IP networking concepts (a necessity for understanding packet filtering) here, but if you're not already familiar with the topic, then before continuing, you should refer to Appendix C, TCP/IP Fundamentals, for a more detailed discussion.

To transfer information across a network, the information has to be broken up into small pieces, each of which is sent separately. Breaking the information into pieces allows many systems to share the network, each sending pieces in turn. In IP networking, those small pieces of data are called packets. All data transfer across IP networks happens in the form of packets.
The basic device that interconnects IP networks is called a router. A router may be a dedicated piece of hardware that has no other purpose, or it may be a piece of software that runs on a general-purpose UNIX or PC (MS-DOS, Windows, Macintosh, or other) system. Packets traversing an internetwork (a network of networks) travel from router to router until they reach their destination. The Internet itself is sort of the granddaddy of internetworks - the ultimate "network of networks."

A router has to make a routing decision about each packet it receives; it has to decide how to send that packet on towards its ultimate destination. In general, a packet carries no information to help the router in this decision, other than the IP address of the packet's ultimate destination. The packet tells the router where it wants to go, but not how to get there. Routers communicate with each other using "routing protocols" such as the Routing Information Protocol (RIP) and Open Shortest Path First (OSPF) to build routing tables in memory to determine how to get packets to their destinations. When routing a packet, a router compares the packet's destination address to entries in the routing table and sends the packet onward as directed by the routing table. Often, there won't be a specific route for a particular destination, and the router will use a "default route"; generally, such a route directs the packet towards smarter or better-connected routers. (The default routes at most sites point towards the Internet.)

In determining how to forward a packet towards its destination, a normal router looks only at a normal packet's destination address and asks only "How can I forward this packet?" A
packet filtering router also considers the question "Should I forward this packet?" The packet filtering router answers that question according to the security policy programmed into the router via the packet filtering rules.

6.1.1 Advantages of Packet Filtering

Packet filtering has a number of advantages.

6.1.1.1 One screening router can help protect an entire network

One of the key advantages of packet filtering is that a single, strategically placed packet filtering router can help protect an entire network. If there is only one router that connects your site to the Internet, you gain tremendous leverage on network security, regardless of the size of your site, by doing packet filtering on that router.

6.1.1.2 Packet filtering doesn't require user knowledge or cooperation

Unlike proxying, described in Chapter 7, Proxy Systems, packet filtering doesn't require any custom software or configuration of client machines, nor does it require any special training or procedures for users. When a packet filtering router decides to let a packet through, the router is indistinguishable from a normal router. Ideally, users won't even realize it's there, unless they try to do something that is prohibited (presumably because it is a security problem) by the packet filtering router's filtering policy. This "transparency" means that packet filtering can be done without the cooperation, and often without the knowledge, of users. The point is not that you can do this subversively, behind your users' backs (while actions like that are sometimes necessary - it all depends on the circumstances - they can be highly political). The point is that you can do packet filtering without their having to learn anything new to make it work, and without your having to depend on them to do (or not do) anything to make it work.
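The "Should I forward this packet?" decision described above can be sketched as a first-match scan over a rule table. The networks, ports, and rule syntax below are hypothetical, not the configuration language of any real router:

```python
from ipaddress import ip_address, ip_network

# Hypothetical first-match rule table: (action, source network, dest port).
# A port of None matches any destination port.
RULES = [
    ("deny",  ip_network("10.0.0.0/8"), None),  # block spoofed internal sources
    ("allow", ip_network("0.0.0.0/0"),  80),    # allow inbound HTTP
    ("allow", ip_network("0.0.0.0/0"),  25),    # allow inbound SMTP
]

def should_forward(src: str, dst_port: int) -> bool:
    # Scan the rule table top-down and act on the first rule that
    # matches; if nothing matches, the default policy is to deny.
    for action, net, port in RULES:
        if ip_address(src) in net and (port is None or port == dst_port):
            return action == "allow"
    return False  # default deny

print(should_forward("203.0.113.9", 80))  # True  (external HTTP)
print(should_forward("10.1.2.3", 80))     # False (spoofed internal source)
print(should_forward("203.0.113.9", 23))  # False (telnet not permitted)
```

Rule order matters in a first-match design: putting the spoofing rule first ensures an internal-looking source is rejected even when the port would otherwise be allowed, which mirrors how administrators reason about real filter lists.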
6.1.1.3 Packet filtering is widely available in many routers

Packet filtering capabilities are available in many hardware and software routing products, both commercial and freely available over the Internet. Most sites already have packet filtering capabilities available in the routers they use. Most commercial router products, such as the routers from Livingston Enterprises and Cisco Systems, include packet filtering capabilities. Packet filtering capabilities are also available in a number of packages, such as Drawbridge, KarlBridge, and screend, that are freely distributed on the Internet; these are discussed in Appendix B, Tools.
