
ASSIGNMENT NO.

SEMESTER: V                              DATE OF DECLARATION: 15/10/2017
SUBJECT: CN                              DATE OF SUBMISSION: 15/10/2017
NAME OF THE STUDENT: Mitali Butala       ROLL NO.: 12

Learning Objectives:
1. Demonstrate the knowledge of basic networking concepts.
2. Solve technical and aptitude questions related to computer networking.

Learning Outcomes:
1. Discuss basic networking concepts in detail.
2. Solve technical and aptitude questions related to computer networking.

Course Outcomes:
CO504.2: Describe, analyze and compare data link, network and transport layer protocols, algorithms and techniques.
CO504.6: Communicate technical, ethical and social information related to computer networking.

Program Outcomes:
PO1: Apply the knowledge of mathematics, science, engineering fundamentals, and an engineering specialization to the solution of complex engineering problems.
PO10: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

Bloom's Taxonomy Level: Remember, Apply

Assignment Questions:

Q.1 Discuss link state routing with example.

Link state routing is the second family of routing protocols. While distance vector routers use a distributed algorithm to compute their routing tables, link-state routers exchange messages to allow each router to learn the entire network topology. Based on this learned topology, each router is then able to compute its routing table by using a shortest path computation [Dijkstra1959].

For link-state routing, a network is modelled as a directed weighted graph. Each router is a node, and the links between routers are the edges in the graph. A positive weight is associated with each directed edge, and routers use the shortest path to reach each destination. In practice, different types of weight can be associated with each directed edge:

 unit weight. If all links have a unit weight, shortest path routing prefers the paths with the least number of intermediate routers.
 weight proportional to the propagation delay on the link. If all link weights are configured this way, shortest path routing uses the paths with the smallest propagation delay.
 weight inversely proportional to the link bandwidth, e.g. C/bandwidth, where C is a constant larger than the highest link bandwidth in the network. If all link weights are configured this way, shortest path routing prefers higher bandwidth paths over lower bandwidth paths.

Usually, the same weight is associated with the two directed edges that correspond to a physical link (i.e. A → B and B → A). However, nothing in the link state protocols requires this. For example, if the weight is set as a function of the link bandwidth, then an asymmetric ADSL link could have a different weight for the upstream and downstream directions. Other variants are possible. Some networks use optimisation algorithms to find the best set of weights to minimize congestion inside the network for a given traffic demand [FRT2002].

When a link-state router boots, it first needs to discover the routers to which it is directly connected. For this, each router sends a HELLO message every N seconds on all of its interfaces. This message contains the router's address; each router has a unique address. As its neighbouring routers also send HELLO messages, the router automatically discovers its neighbours. These HELLO messages are only sent to directly connected neighbours, and a router never forwards the HELLO messages that it receives. HELLO messages are also used to detect link and router failures: a link is considered to have failed if no HELLO message has been received from the neighbouring router for a period of k × N seconds (several HELLO intervals).
The exchange of HELLO messages

Once a router has discovered its neighbours, it must reliably distribute its local links to all routers in the network to allow them to compute their local view of the network topology. For this, each router builds a link-state packet (LSP) containing the following information:

 LSP.Router: identification (address) of the sender of the LSP
 LSP.age: age or remaining lifetime of the LSP
 LSP.seq: sequence number of the LSP
 LSP.Links[]: links advertised in the LSP. Each directed link is represented with the following information:
   - LSP.Links[i].Id: identification of the neighbour
   - LSP.Links[i].cost: cost of the link

These LSPs must be reliably distributed inside the network without


using the router’s routing table since these tables can only be
computed once the LSPs have been received. The Flooding algorithm
is used to efficiently distribute the LSPs of all routers. Each router that
implements flooding maintains a link-state database (LSDB) containing
the most recent LSP sent by each router. When a router receives an
LSP, it first verifies whether this LSP is already stored inside its LSDB.
If so, the router has already distributed the LSP earlier and it does not
need to forward it. Otherwise, the router forwards the LSP on all links
except the link over which the LSP was received. Flooding can be
implemented by using the following pseudo-code.

# links is the set of all links on the router
# Router R's processing of an LSP arriving on link l
if newer(LSP, LSDB(LSP.Router)):
    LSDB.add(LSP)
    for i in links:
        if i != l:
            send(LSP, i)
else:
    # LSP has already been flooded, ignore it

In this pseudo-code, LSDB(r) returns the most recent LSP originating


from router r that is stored in the LSDB. newer(lsp1,lsp2) returns true
if lsp1 is more recent than lsp2. See the note below for a discussion on
how newer can be implemented.
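This flooding logic can be made runnable as a small simulation. The topology and router names below are illustrative, and newer is implemented with a simple sequence-number comparison, which is one common way to do it:

```python
from collections import deque

def flood(topology, origin):
    """Simulate flooding of one LSP over an undirected topology.

    topology: dict mapping router -> set of neighbours.
    Returns (per-router stored sequence numbers, messages sent).
    """
    seq = 1                               # sequence number of the flooded LSP
    lsdb = {r: 0 for r in topology}       # 0 = no LSP from `origin` stored yet
    queue = deque([(origin, None)])       # (router, link the LSP arrived on)
    messages = 0
    while queue:
        router, arrival_link = queue.popleft()
        if seq > lsdb[router]:            # newer(LSP, LSDB(LSP.Router))
            lsdb[router] = seq            # LSDB.add(LSP)
            for neigh in topology[router]:
                if neigh != arrival_link:   # never send back on the arrival link
                    queue.append((neigh, router))
                    messages += 1
        # else: LSP already stored, silently drop it
    return lsdb, messages

# Small topology in which B and C both forward E's LSP on the B-C link:
topo = {'A': {'B', 'C'}, 'B': {'A', 'C', 'E'},
        'C': {'A', 'B', 'E'}, 'E': {'B', 'C'}}
lsdb, msgs = flood(topo, 'E')
print(lsdb, msgs)   # every router stores the LSP; 7 messages are sent
```

Note that the LSP crosses some links twice (7 messages for 5 links), which is exactly the duplication the random-delay variant discussed below tries to reduce.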

Flooding: example

Flooding allows LSPs to be distributed to all routers inside the network


without relying on routing tables. In the example above, the LSP sent
by router E is likely to be sent twice on some links in the network. For
example, routers B and C receive E‘s LSP at almost the same time and
forward it over the B-C link. To avoid sending the same LSP twice on
each link, a possible solution is to slightly change the pseudo-code
above so that a router waits for some random time before forwarding an LSP on each link. The drawback of this solution is that the delay to
flood an LSP to all routers in the network increases. In practice,
routers immediately flood the LSPs that contain new information (e.g.
addition or removal of a link) and delay the flooding of refresh LSPs
(i.e. LSPs that contain exactly the same information as the previous
LSP originating from this router) [FFEB2005].

To ensure that all routers receive all LSPs, even when there are
transmission errors, link state routing protocols use reliable flooding.
With reliable flooding, routers use acknowledgements and if
necessary retransmissions to ensure that all link state packets are
successfully transferred to all neighbouring routers. Thanks to reliable
flooding, all routers store in their LSDB the most recent LSP sent by
each router in the network. By combining the received LSPs with its
own LSP, each router can compute the entire network topology.
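With the full LSDB in hand, each router runs the shortest path computation cited at the start of this answer. A minimal sketch of Dijkstra's algorithm over an illustrative three-router topology (the weights are invented for the example):

```python
import heapq

def dijkstra(graph, source):
    """Shortest path cost from source to every node.

    graph: dict mapping node -> list of (neighbour, weight) edges,
    i.e. the view a router builds from the LSPs in its LSDB.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue                      # stale heap entry, skip
        for neigh, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(neigh, float('inf')):
                dist[neigh] = nd
                heapq.heappush(heap, (nd, neigh))
    return dist

# A -> C directly costs 4, but A -> B -> C costs only 3:
g = {'A': [('B', 1), ('C', 4)], 'B': [('A', 1), ('C', 2)],
     'C': [('A', 4), ('B', 2)]}
costs = dijkstra(g, 'A')
print(costs)   # {'A': 0, 'B': 1, 'C': 3}
```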
Q.2 Sketch the categories of standard Ethernet and discuss them in detail.

10Base5:

The original cabling standard for Ethernet that uses coaxial cables.
The name derives from the fact that the maximum data transfer speed
is 10 Mbps, it uses baseband transmission, and the maximum length of
cables is 500 meters.
10Base5 is also called thick Ethernet, ThickWire, and ThickNet.
10Base5 networks are wired together in a bus topology—that is, in a
linear fashion using one long cable. The maximum length of any
particular segment of a 10Base5 network is 500 meters, hence the 5 in
10Base5. If distances longer than this are required, two or more
segments must be connected using repeaters. Altogether, there can
be a total of five segments connected using four repeaters, as long as
only three of the segments have stations (computers) attached to
them. This is referred to as the 5-4-3 rule.

A 10Base5 segment should have no more than 100 stations wired to it.
These stations are not connected directly to the thicknet cable as in
10Base2 networks. Instead, a transceiver is attached to the thicknet
cable, usually using a cable-piercing connector called a vampire tap.
From the transceiver, a drop cable is attached, which then connects
to the network interface card (NIC) in the computer. The minimum
distance between transceivers attached to the thicknet cable is 2.5
meters, and the maximum length for a drop cable is 50 meters.
Thicknet cable ends have N-series connectors soldered or crimped on
them for connecting segments together.

10Base5 networks were often used as backbones for large networks.


In a typical configuration, transceivers on the thicknet backbone
would attach to repeaters, which would join smaller thinnet segments
to the thicknet backbone. In this way, a combination of 10Base5 and
10Base2 standards could support sufficient numbers of stations for a
moderately large company.

10Base2
A type of standard for implementing Ethernet networks. 10Base2 is
sometimes referred to as thinnet (or “thin coax”) because it uses thin
coaxial cabling for connecting stations to form a network. 10Base2
supports a maximum bandwidth of 10 Mbps, but in actual networks,
the presence of collisions reduces this to more like 4 to 6 Mbps.
10Base2 is based on the 802.3 specifications of Project 802 developed
by the Institute of Electrical and Electronics Engineers (IEEE).

10Base2 networks are wired together in a bus topology, in which


individual stations (computers) are connected directly to one long
cable. The maximum length of any particular segment of a 10Base2
network is 185 meters. If distances longer than this are required, two
or more segments must be connected using repeaters. Altogether,
there can be a total of five segments connected using four repeaters,
as long as only three of the segments have stations attached to them.
This is referred to as the 5-4-3 rule.
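The 5-4-3 arithmetic for the two coaxial standards can be checked directly. A small sketch; the per-segment limits are the ones quoted in this answer (500 m / 100 stations for 10Base5, 185 m / 30 stations for 10Base2):

```python
# 5-4-3 rule: at most 5 segments, joined by 4 repeaters,
# with stations attached to only 3 of the segments.
MAX_SEGMENTS = 5
POPULATED_SEGMENTS = 3

def max_network_span(segment_length_m):
    """Largest nominal end-to-end cable distance under the 5-4-3 rule."""
    return MAX_SEGMENTS * segment_length_m

def max_stations(stations_per_segment):
    """Stations allowed when only 3 segments may carry stations."""
    return POPULATED_SEGMENTS * stations_per_segment

span_10base5 = max_network_span(500)     # 2500 m for thicknet
span_10base2 = max_network_span(185)     # 925 m for thinnet
stations_10base5 = max_stations(100)     # 300 stations
stations_10base2 = max_stations(30)      # 90 stations
```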

A 10Base2 segment should have no more than 30 stations wired to it.


The minimum distance between these stations must be 0.5 meters.
Stations are attached to the cable using BNC connectors, and the
ends of the thinnet cabling have BNC cable connectors soldered or
crimped to them.

The designation 10Base2 comes from the speed of the network (10
Mbps), the signal transmission method (baseband transmission), and
the maximum segment length (185 meters, rounded off to 200 with the
zeros removed).

10BaseT
A type of standard for implementing Ethernet networks. 10BaseT is
the most popular form of 10-Mbps Ethernet, using unshielded twisted-
pair (UTP) cabling for connecting stations, and using hubs to form a
network. 10BaseT supports a maximum bandwidth of 10 Mbps, but in
actual networks, the presence of collisions reduces this to more like 4
to 6 Mbps. 10BaseT is based on the 802.3 specifications of Project 802
developed by the Institute of Electrical and Electronics Engineers
(IEEE).
Graphic 0-4. A 10BaseT network.

10BaseT networks are wired together in a star topology to a central


hub. The UTP cabling used for wiring should be category 3
cabling, category 4 cabling, or category 5 cabling, terminated with RJ-
45 connectors. Patch panels can be used to organize wiring and
provide termination points for cables running to wall plates in work
areas. Patch cables then connect each port on the patch panel to
the hub. Usually, most of the wiring is hidden in a wiring cabinet and
arranged on a rack for easy access.

The maximum length of any particular segment of a 10BaseT network


is 100 meters. If distances longer than this are required, two or more
segments must be connected using repeaters. The minimum length of
a segment should be 2.5 meters. By using stackable hubs or by
cascading regular hubs into a cascaded star topology, you can
network large numbers of computers using 10BaseT cable. Although
they support up to 1024 nodes, collision domains supporting no more
than 200 or 300 nodes will yield the best performance.

Q.3 Compare open loop congestion control and closed


loop congestion control.
Congestion Control

Congestion control refers to techniques and mechanisms that can


either prevent congestion, before it happens, or remove congestion,
after it has happened.

In general, we can divide congestion control mechanisms into two


broad categories: open-loop congestion control (prevention) and
closed-loop congestion control (removal) as shown in Figure 4.27.

1. Open-Loop Congestion Control


In open-loop congestion control, policies are applied to prevent
congestion before it happens. In these mechanisms, congestion
control is handled by either the source or the destination.

a. Retransmission Policy

Retransmission is sometimes unavoidable. If the sender feels that a


sent packet is lost or corrupted, the packet needs to be retransmitted.
Retransmission in general may increase congestion in the network.
However, a good retransmission policy can prevent congestion. The
retransmission policy and the retransmission timers must be designed
to optimize efficiency and at the same time prevent congestion. For
example, the retransmission policy used by TCP (explained later) is
designed to prevent or alleviate congestion.

b. Window Policy

The type of window at the sender may also affect congestion. The
Selective Repeat window is better than the Go-Back-N window for
congestion control. In the Go-Back-N window, when the timer for a
packet times out, several packets may be resent, although some may
have arrived safe and sound at the receiver. This duplication may
make the congestion worse. The Selective Repeat window, on the
other hand, tries to send the specific packets that have been lost or
corrupted.
c. Acknowledgment Policy

The acknowledgment policy imposed by the receiver may also affect


congestion. If the receiver does not acknowledge every packet it
receives, it may slow down the sender and help prevent congestion.
Several approaches are used in this case. A receiver may send an
acknowledgment only if it has a packet to be sent or a special timer
expires. A receiver may decide to acknowledge only N packets at a
time. We need to know that the acknowledgments are also part of the
load in a network. Sending fewer acknowledgments means imposing
fewer loads on the network.

d. Discarding Policy

A good discarding policy by the routers may prevent congestion and


at the same time may not harm the integrity of the transmission. For
example, in audio transmission, if the policy is to discard less sensitive
packets when congestion is likely to happen, the quality of sound is
still preserved and congestion is prevented or alleviated.

e. Admission Policy

An admission policy, which is a quality-of-service mechanism, can also


prevent congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before admitting it to the
network. A router can deny establishing a virtual circuit connection if
there is congestion in the network or if there is a possibility of future
congestion.

2. Closed-Loop Congestion Control


Closed-loop congestion control mechanisms try to alleviate
congestion after it happens. Several mechanisms have been used by
different protocols.

a. Backpressure

The technique of backpressure refers to a congestion control


mechanism in which a congested node stops receiving data from the
immediate upstream node or nodes. This may cause the upstream
node or nodes to become congested, and they, in turn, reject data
from their upstream nodes, and so on. Backpressure is a
node-to-node congestion control that starts with a node and
propagates, in the opposite direction of data flow, to the source. The
backpressure technique can be applied only to virtual circuit
networks, in which each node knows the upstream node from which a
flow of data is coming. Figure 4.28 shows the idea of backpressure.

Node III in the figure has more input data than it can handle. It drops
some packets in its input buffer and informs node II to slow down.
Node II, in turn, may be congested because it is slowing down the
output flow of data. If node II is congested, it informs node I to slow
down, which in turn may create congestion. If so, node I informs the
source of data to slow down. This, in time, alleviates the congestion.
Note that the pressure on node III is moved backward to the source to
remove the congestion.

b. Choke Packet

A choke packet is a packet sent by a node to the source to inform it of


congestion. Note the difference between the backpressure and choke
packet methods. In backpressure, the warning is from one node to its
upstream node, although the warning may eventually reach the source
station. In the choke packet method, the warning is from the router,
which has encountered congestion, to the source station directly. The
intermediate nodes through which the packet has traveled are not
warned. We have seen an example of this type of control in ICMP.
When a router in the Internet is overwhelmed with IP datagrams, it
may discard some of them; but it informs the source host, using a
source quench ICMP message. The warning message goes directly to
the source station; the intermediate routers do not take any action.
Figure 4.29 shows the idea of a choke packet.
c. Implicit Signaling

In implicit signaling, there is no communication between the


congested node or nodes and the source. The source guesses that
there is congestion somewhere in the network from other symptoms.
For example, when a source sends several packets and there is no
acknowledgment for a while, one assumption is that the network is
congested. The delay in receiving an acknowledgment is interpreted
as congestion in the network; the source should slow down.
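A toy model of implicit signaling: the sender treats an acknowledgement timeout as a congestion signal and halves its sending window. The window-halving policy is an assumption here, chosen in the spirit of TCP's reaction to timeouts:

```python
def adjust_window(window, ack_received, min_window=1, max_window=64):
    """Implicit signaling: no explicit message from the congested node.

    The sender only observes whether acknowledgements arrive in time.
    """
    if ack_received:
        return min(window + 1, max_window)   # probe for more bandwidth
    return max(window // 2, min_window)      # timeout => assume congestion

w = 32
w = adjust_window(w, ack_received=False)     # timeout: window drops to 16
w = adjust_window(w, ack_received=True)      # ack arrives: window grows to 17
print(w)
```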

d. Explicit Signaling

The node that experiences congestion can explicitly send a signal to


the source or destination. The explicit signaling method, however, is
different from the choke packet method. In the choke packet method,
a separate packet is used for this purpose; in the explicit signaling
method, the signal is included in the packets that carry data. Explicit
signaling, as we will see in Frame Relay congestion control, can occur
in either the forward or the backward direction.

i. Backward Signaling

A bit can be set in a packet moving in the direction opposite to the


congestion. This bit can warn the source that there is congestion and
that it needs to slow down to avoid the discarding of packets.

ii. Forward Signaling

A bit can be set in a packet moving in the direction of the congestion.


This bit can warn the destination that there is congestion. The receiver
in this case can use policies, such as slowing down the
acknowledgments, to alleviate the congestion.

Q4 Write short note on IPV4 header format.


IPv4 Header Format: Internet Protocol version 4 (IPv4) is the fourth
revision in the development of the Internet Protocol (IP) and it is the
first version of the protocol to be widely deployed. IPv4 is a data-
oriented protocol to be used on a packet
switched internetwork (e.g., Ethernet). It is a best effort
delivery protocol in that it does not guarantee delivery, nor does it
assure proper sequencing, or avoid duplicate delivery. These aspects
are addressed by an upper layer protocol (e.g. TCP, and partly
by UDP). IPv4 does, however, provide data integrity protection
through the use of packet checksums.
Version (4): The first header field in an IP packet is the four-bit version
field. For IPv4, this has a value of 4 (hence the name IPv4).
Internet Header Length (4): The second field (4 bits) is the Internet
Header Length (IHL) telling the number of 32-bit words in the header.
Since an IPv4 header may contain a variable number of options, this
field specifies the size of the header (this also coincides with the offset
to the data). The minimum value for this field is 5, which is a length of
5×32 = 160 bits. Being a 4-bit value, the maximum length is 15 words or
480 bits.
Type of Service (8): Types of Service tells how the datagram should
be handled. The following eight bits were allocated to a Type of
Service (TOS) field:
o bits 0–2: Precedence (111 - Network Control, 110 - Internetwork
Control, 101 - CRITIC/ECP, 100 - Flash Override, 011 - Flash, 010 -
Immediate, 001 - Priority, 000 - Routine)
o bit 3: 0 = Normal Delay, 1 = Low Delay
o bit 4: 0 = Normal Throughput, 1 = High Throughput
o bit 5: 0 = Normal Reliability, 1 = High Reliability
o bits 6–7: Reserved for future use

 Total Length (16): This 16-bit field defines the entire datagram
size, including header and data, in bytes. The minimum-length
datagram is 20 bytes (20-byte header + 0 bytes data) and the maximum
is 65,535 — the maximum value of a 16-bit word. The minimum size
datagram that any host is required to be able to handle is 576 bytes,
but most modern hosts handle much larger packets.
 Identification (16): Unique IP packet value. This field is an
identification field and is primarily used for uniquely identifying
fragments of an original IP datagram. Some experimental work has
suggested using the ID field for other purposes, such as for adding
packet-tracing information to datagrams in order to help trace back
datagrams with spoofed source addresses.
 Flags (3): A three-bit field follows and is used to control or
identify fragments. They are (in order, from high order to low order):
Reserved; must be zero.
Don't Fragment (DF)
More Fragments (MF)
If the DF flag is set and fragmentation is required to route the packet
then the packet will be dropped. This can be used when sending
packets to a host that does not have sufficient resources to handle
fragmentation.
When a packet is fragmented all fragments have the MF flag set except
the last fragment, which does not have the MF flag set. The MF flag is
also not set on packets that are not fragmented — an unfragmented
packet is its own last fragment.
 Fragment Offset (13): The fragment offset field, measured in units of eight-byte blocks, is 13 bits long and specifies the offset of a particular fragment relative to the beginning of the original unfragmented IP datagram. Fragmentation and reassembly are needed when a datagram is too large for the maximum transmission unit (MTU) of a link along the path. The first fragment has an offset of zero. The field allows a maximum offset of 65,528 bytes ((2^13 - 1) x 8), which would exceed the maximum IP packet length of 65,535 bytes with the header length included.
 Time to Live (8): An eight-bit time to live (TTL) field helps prevent datagrams from persisting (e.g. going in circles) on an internet. Historically the TTL field limited a datagram's lifetime in seconds, but it has come to be a hop count field. Each packet switch (or router) that a datagram crosses decrements the TTL field by one. When the TTL field hits zero, the packet is no longer forwarded and is discarded. Typically, an ICMP message (specifically the time exceeded message) is sent back to the sender to indicate that the packet has been discarded. The reception of these ICMP messages is at the heart of how traceroute works.
 Protocol (8): This field identifies the protocol carried in the data portion of the IP datagram, i.e. the upper-layer protocol to which the payload is delivered (e.g. 6 for TCP, 17 for UDP).
 Header Checksum (16): The 16-bit checksum field is used for
error-checking of the header. At each hop, the checksum of the
header must be compared to the value of this field. If a header
checksum is found to be mismatched, then the packet is
discarded. The checksum field is the 16-bit one's complement of the
one's complement sum of all 16-bit words in the header. For purposes
of computing the checksum, the value of the checksum field is zero.
 Source Address (32): An IPv4 address is a group of four eight-
bit octets for a total of 32 bits. The value for this field is determined by
taking the binary value of each octet and concatenating them together
to make a single 32-bit value. This address is the address of the
sender of the packet.
 Destination Address (32): 32 bit IP address of the station this
packet is destined for. Identical to the source address field but
indicates the receiver of the packet.
 Option (Variable): Encodes the option requested by the sending
user.
 Padding (Variable): Used to ensure that the datagram header is
a multiple of 32 bits in length.
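The fragment-offset bookkeeping described above can be sketched as follows. The 3980-byte payload and 1500-byte MTU are hypothetical values chosen for illustration, and the helper name fragment is made up:

```python
def fragment(total_payload, mtu, header_len=20):
    """Split an IP payload into fragments for a given MTU.

    Returns (payload_bytes, offset_in_8_byte_units, more_fragments) tuples.
    Every fragment payload except possibly the last must be a multiple
    of 8 bytes so that offsets stay expressible in 8-byte units.
    """
    max_payload = (mtu - header_len) // 8 * 8   # round down to 8-byte blocks
    frags = []
    offset = 0
    while offset < total_payload:
        size = min(max_payload, total_payload - offset)
        more = offset + size < total_payload    # MF flag: more fragments follow
        frags.append((size, offset // 8, more))
        offset += size
    return frags

# 3980 data bytes (a 4000-byte datagram minus its 20-byte header) over MTU 1500:
print(fragment(3980, 1500))
# [(1480, 0, True), (1480, 185, True), (1020, 370, False)]
```

The last fragment is the only one with the MF flag clear, matching the Flags field description above.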

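The Header Checksum computation can be sketched in a few lines of Python. The sample header below is a commonly used worked example (source 172.16.10.99, destination 172.16.10.12); with the checksum field zeroed, the algorithm yields 0xB1E6:

```python
def ipv4_checksum(header: bytes) -> int:
    """16-bit one's complement of the one's complement sum of the header."""
    if len(header) % 2:
        header += b'\x00'
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# Minimal 20-byte header with the checksum field (bytes 10-11) set to zero:
header = bytes([
    0x45, 0x00, 0x00, 0x3C,   # version/IHL, TOS, total length = 60
    0x1C, 0x46, 0x40, 0x00,   # identification, flags/fragment offset (DF set)
    0x40, 0x06, 0x00, 0x00,   # TTL = 64, protocol = 6 (TCP), checksum = 0
    0xAC, 0x10, 0x0A, 0x63,   # source 172.16.10.99
    0xAC, 0x10, 0x0A, 0x0C,   # destination 172.16.10.12
])
print(hex(ipv4_checksum(header)))   # 0xb1e6
```

A receiver repeats the sum over the header as received, including the stored checksum, and expects the result 0; any other result means the packet is discarded.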
Q.5 An ISP is granted a block of addresses starting with 190.100.0.0/16 (65,536 addresses). The ISP needs to distribute these addresses to three groups of customers as follows:
1. The first group has 64 customers; each needs 256 addresses.
2. The second group has 128 customers; each needs 64 addresses.
3. The third group has 128 customers; each needs 64 addresses.
Design the sub blocks and find out how many addresses are still available after these allocations.
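One standard way to design the sub-blocks is a contiguous allocation: a /24 per customer in the first group and a /26 per customer in the other two. A sketch using Python's ipaddress module to check the arithmetic (the concrete block boundaries follow from this allocation order, which is an assumption):

```python
import ipaddress

block = ipaddress.ip_network('190.100.0.0/16')       # 65,536 addresses

used_group1 = 64 * 256    # /24 per customer: 190.100.0.0/24 onwards
used_group2 = 128 * 64    # /26 per customer, after group 1
used_group3 = 128 * 64    # /26 per customer, after group 2

used = used_group1 + used_group2 + used_group3       # 32,768 addresses used
available = block.num_addresses - used               # 32,768 still free

# First address handed to each group under contiguous allocation:
g1_start = ipaddress.ip_address('190.100.0.0')
g2_start = g1_start + used_group1                    # 190.100.64.0
g3_start = g2_start + used_group2                    # 190.100.96.0
unused_start = g3_start + used_group3                # 190.100.128.0
print(available, g2_start, g3_start, unused_start)
```

So half the original block (32,768 addresses, from 190.100.128.0 to 190.100.255.255) remains available after the three allocations.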
Technical and Aptitude Questions:

1. How many types of modes are used in the data


transferring through networks? Discuss these
modes.

Transmission mode means the transfer of data between two devices. It is also called communication mode. The mode determines the direction in which information can flow. There are three types of transmission mode:

 Simplex Mode
 Half duplex Mode
 Full duplex Mode

SIMPLEX Mode
In this transmission mode data can be sent in only one direction, i.e. communication is unidirectional. The receiver cannot send a message back to the sender. Examples of simplex mode are a loudspeaker, television broadcasting, a television remote control, and the connection from a keyboard to a monitor.

HALF DUPLEX Mode


Half-duplex data transmission means that data can be transmitted in
both directions on a signal carrier, but not at the same time. For
example, on a local area network using a technology that has half-
duplex transmission, one workstation can send data on the line and
then immediately receive data on the line from the same direction in
which data was just transmitted. Hence half-duplex transmission
implies a bidirectional line (one that can carry data in both directions)
but data can be sent in only one direction at a time.
Example of half duplex is a walkie-talkie, in which messages are sent in both directions, but only one at a time.
FULL DUPLEX Mode
In full duplex system we can send data in both directions as it is
bidirectional. Data can be sent in both directions simultaneously. We
can send as well as we receive the data.
Example of Full Duplex is a Telephone Network in which there is
communication between two persons by a telephone line, through
which both can talk and listen at the same time.

In full duplex system there can be two lines one for sending the data
and the other for receiving data.

2. What is the difference between a physical address and a logical address?
3. Discuss DNS.
The domain name system (DNS) is the way that internet domain
names are located and translated into internet protocol (IP)
addresses. The domain name system maps the name people use
to locate a website to the IP address that a computer uses to
locate a website. For example, if someone types
TechTarget.com into a web browser, a server behind the
scenes will map that name to the IP address 206.19.49.149.
Web browsing and most other internet activity rely on DNS to
quickly provide the information necessary to connect users to
remote hosts. DNS mapping is distributed throughout the
internet in a hierarchy of authority. Access providers and
enterprises, as well as governments, universities and other
organizations, typically have their own assigned ranges of IP
addresses and an assigned domain name; they also typically run
DNS servers to manage the mapping of those names to those
addresses. Most URLs are built around the domain name of the
web server that takes client requests.
How does DNS work?
DNS servers answer questions from both inside and outside
their own domains. When a server receives a request from
outside the domain for information about a name or address
inside the domain, it provides the authoritative answer. When a
server receives a request from inside its own domain for
information about a name or address outside that domain, it
passes the request out to another server -- usually one managed
by its internet service provider. If that server does not know the
answer or the authoritative source for the answer, it will reach
out to the DNS servers for the top-level domain -- e.g., for all of
.com or .edu. Then, it will pass the request down to the
authoritative server for the specific domain -- e.g.,
techtarget.com or stkate.edu; the answer flows back along the
same path.
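The delegation walk described above can be mimicked with a toy resolver over a mock authority map. The server names below are invented for illustration; the final address is the one quoted earlier in this answer:

```python
# Mock delegation data: each zone either refers to the next server
# down the hierarchy or returns the final answer.
AUTHORITY = {
    '.':              {'com': 'com-tld-server'},          # root zone
    'com-tld-server': {'techtarget': 'techtarget-ns'},    # .com TLD
    'techtarget-ns':  {'www': '206.19.49.149'},           # authoritative zone
}

def resolve(name):
    """Walk root -> TLD -> authoritative server, as a recursive resolver does."""
    labels = name.rstrip('.').split('.')[::-1]   # ['com', 'techtarget', 'www']
    server = '.'
    answer = None
    for label in labels:
        answer = AUTHORITY[server][label]
        server = answer     # next query goes to the server we were referred to
    return answer

print(resolve('www.techtarget.com'))   # '206.19.49.149'
```

A real resolver does the same walk over the network (and caches each referral), but the chain of referrals is exactly this shape.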

4. Name a protocol that will be applied when you


want to transfer files between different
platforms such as Unix systems and Windows
server.
PSCP, the PuTTY Secure Copy client, is a tool for transferring
files securely between computers using an SSH connection.

PSCP is a command line application. This means that you cannot


just double-click on its icon to run it and instead you have to
bring up a console window. With Windows 95, 98, and ME, this is
called an ‘MS-DOS Prompt’ and with Windows NT, 2000, and XP,
it is called a ‘Command Prompt’. It should be available from the
Programs section of your Start Menu.
To start PSCP it will need either to be on your PATH or in your current directory. To add the directory containing PSCP to your PATH environment variable, type into the console window:

set PATH=C:\path\to\putty\directory;%PATH%
This will only work for the lifetime of that particular console
window. To set your PATH more permanently on Windows NT,
2000, and XP, use the Environment tab of the System Control
Panel. On Windows 95, 98, and ME, you will need to edit
your AUTOEXEC.BAT to include a set command like the one
above.
PSCP USAGE:
Once you've got a console window to type into, you can just
type pscp on its own to bring up a usage message. This tells you
the version of PSCP you're using, and gives you a brief summary
of how to use PSCP:
5. Differentiate between IPV4 and IPV6.
Rubrics for Assessment:

Performance Indicator: Quality Content
- Does not meet expectations: No relevant content.
- Developing: Some of the content is related to the assignment topic.
- Meets expectations: The content is not as detailed and/or clear and concise as needed.
- Exceeds expectations: The contents address the questions clearly and fully.

Performance Indicator: Data analysis and interpretation
- Does not meet expectations: No proper analysis.
- Developing: Analyzes and interprets data correctly occasionally; some conclusions are incorrect.
- Meets expectations: Analyzes and interprets data correctly most of the time; most of the conclusions are correct.
- Exceeds expectations: Analyzes and interprets data correctly most of the time; most of the conclusions are correct and useful.

Performance Indicator: Efforts
- Does not meet expectations: Books and Internet references are improper.
- Developing: Referred only the Internet for the write-up.
- Meets expectations: Referred only books for the write-up.
- Exceeds expectations: Referred books and the Internet for the write-up.

Performance Indicator: On Time Submission
- Does not meet expectations: Submitted after 15 days.
- Developing: Submitted within 15 days.
- Meets expectations: Submitted within 8 days.
- Exceeds expectations: Submitted on time.

References:
1. B.A. Forouzan, "Data Communications and Networking", TMH, Fourth Edition.
2. Websites referred:
 https://www.ssh.com/ssh/putty/putty-manuals/0.68/Chapter5.html
 http://techdifferences.com/difference-between-logical-and-physical-address.html
 http://www.studytonight.com/computer-networks/transmission-mode
 http://www.networksolutions.com/support/what-is-a-domain-name-server-dns-and-how-does-it-work/
