
Question 1:

Network Foundations

Question 1.1
a) Define latency as a metric of network performance, and explain what its component parts are.
[4 marks]

Ans: Latency is the amount of time a message takes to traverse a system. In a computer
network, it is the time it takes for a packet of data to get from one designated point to
another. It is sometimes measured as the time required for a packet to be returned to its
sender. Latency has the following component parts:
Processing delay: the time routers take to process the packet header
Queuing delay: the time the packet spends in routing queues
Transmission delay: the time it takes to push the packet's bits onto the link
Propagation delay: the time for a signal to reach its destination

b) Besides latency, what is the other key metric of network performance?


[1 mark]
Ans: Besides latency, the other key metric of network performance is bandwidth (throughput).
Related measures include:
Uptime or responsiveness
Jitter
c) Give an example of an application for which latency is the more critical of the two key network
performance metrics.
[1 mark]

Ans: Voice over IP (VoIP), or any other application that relies on live, real-time transmission of
video or audio, is an application for which latency is the more critical of the two metrics.

Question 1.2
Give one example of an application for which bandwidth is more critical than latency.
[1 mark]

Ans: Bulk file transfer (for example, downloading a large file, or streaming stored video) is an
application for which bandwidth is more critical than latency: the total transfer time is
dominated by throughput, while a small additional delay per packet is unimportant.

Question 1.3
a) List two advantages of using a layered approach to network architecture.

[2 marks]
Ans: 1. A layered architecture enables teams to work on different parts of the application in parallel, with minimal
dependencies on other teams.
2. A layered architecture enables the development of loosely coupled systems.

b) Suppose the designers of this link wanted to run TCP over it. Identify two distinct challenges that they

would have to deal with - one as a result of the large distance between Earth and Mars, and the other as
a result of the large delay * bandwidth product of the link.
[2 marks]

Ans:
The designers would face the following two challenges:
1. The large distance gives a very long propagation delay (an RTT measured in minutes), so
TCP's connection handshake, acknowledgements, and retransmission timeouts all become
extremely slow.
2. The large delay * bandwidth product means an enormous amount of data must be in flight
to keep the link busy; TCP's standard 64 KB window (without window scaling) cannot fill
such a pipe.

Elephants: the term elephant refers to a network that is deemed to be "long" and "fat".
"Elephant" is derived from the acronym for long fat network, LFN. An LFN is a network that is
composed of a long-distance connection ("long") and high bandwidth capacity ("fat").

So the designers would need a very large sending window to fill this long, high-bandwidth link.

Suppose you have been asked to design an optimal ARQ strategy for this link. Identify two distinct challenges
your design will need to consider as a result of the large variance in distance between Earth and Mars?
[2 marks]
Ans:
1. The RTT varies greatly with the Earth-Mars distance, so a fixed retransmission timeout will
be either far too short (causing spurious retransmissions) or far too long (causing slow
recovery from loss); the timeout must adapt to the current distance.
2. The window and sequence-number space must be sized for the worst-case delay * bandwidth
product, so that enough frames can remain unacknowledged in flight at the largest distance.

a) List three advantages and/or disadvantages of using a layered approach to network architecture.
[3 marks]

Advantages:
1. Layered architecture increases flexibility, maintainability, and scalability.
2. Multiple applications can reuse the components.
3. Different components of the application can be independently deployed, maintained, and
updated, on different time schedules.
Disadvantages:
1. There might be a negative impact on performance, as we have the extra overhead of passing through layers
instead of calling a component directly.
2. Development of user-intensive applications can sometimes take longer if the layering prevents the use of
user-interface components that directly interact with the database.
3. The use of layers helps to control and encapsulate the complexity of large applications, but adds complexity
to simple applications.

List the names of the two layers in the OSI model that do not appear explicitly in our 5-layer model.

[1 mark]

Ans: The session and presentation layers do not appear explicitly in the 5-layer model.

Q:
Suppose a 100 Mbps point-to-point link is being set up between the earth and a new lunar colony. The
distance from the moon to the earth is approximately 384,000 km, and data signals travel at the speed of light
300,000 km/second.
a) Calculate the minimum RTT (Round trip time) for the link. Show your working.
Ans:
Minimum RTT = 2 x Propagation delay
Propagation delay = Distance / Speed of light
= 384,000 km / 300,000 km/second = 1.28 sec
Minimum RTT = 2 x 1.28 sec = 2.56 sec
b) What is the significance of the minimum RTT value when implementing reliability on this link?
Ans: No acknowledgement can arrive sooner than one RTT after a segment is sent, so the minimum RTT
sets a lower bound on the retransmission timeout (RTO): any reliability mechanism on this link must
wait at least 2.56 seconds before concluding that a segment was lost, otherwise it will retransmit
needlessly. The RTO should therefore be at least the RTT, and in practice slightly larger.
c) Using RTT as a delay, calculate the delay * bandwidth product for the link. Show your working.
Ans:
Delay x Bandwidth = 2.56 sec x 100 Mbits/sec
= 256 Mbits
= 256/8 MB
= 32MB

d) What is the significance of the delay * bandwidth product value when implementing reliability on
this link?
Ans: This represents the amount of data the sender can send before it would be possible to receive a response.
This is the minimum amount of sent-but-unacknowledged data the sender must be able
to handle if it wants to keep transmitting at the full bandwidth of the link.
In other words, if the sender has less send-buffer space than this, it cannot utilise the full
capacity of the link.
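As a check on the arithmetic above, here is a minimal C sketch of the same calculations (the
variable names are illustrative; the values come from the question):

/* RTT and delay-bandwidth product for the Earth-Moon link */
#include <stdio.h>

int main(void) {
    double distance_km   = 384000.0;   /* Earth-Moon distance          */
    double speed_km_s    = 300000.0;   /* speed of light               */
    double bandwidth_bps = 100e6;      /* 100 Mbps link                */

    double propagation_s = distance_km / speed_km_s;   /* one-way delay    */
    double rtt_s         = 2.0 * propagation_s;        /* minimum RTT      */
    double product_bits  = rtt_s * bandwidth_bps;      /* delay x bandwidth */

    printf("Propagation delay: %.2f s\n", propagation_s);  /* 1.28 s */
    printf("Minimum RTT:       %.2f s\n", rtt_s);          /* 2.56 s */
    printf("Delay x bandwidth: %.0f Mbit (%.0f MB)\n",
           product_bits / 1e6, product_bits / 8e6);        /* 256 Mbit = 32 MB */
    return 0;
}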

Question 2:

Physical Layer

a) What is the single most important function of the Physical Layer?


[1 mark]
Ans: The most important function of the physical layer is to transmit raw bits over a communication link between two stations.
What is baseline shift? Which of the encoding methods we looked at in lectures (Manchester, 4B/5B) provides the
strongest solution to this problem? Justify your choice.
[3 marks]
Ans: Baseline shift (also called baseline wander) occurs because a receiver decodes the incoming signal by
comparing it against a running average of the signal level seen so far. Long runs of consecutive 1s or 0s make
this average drift, so the receiver can misjudge whether the line is currently high or low, causing decoding
errors. Manchester encoding provides the strongest solution: it guarantees a transition in the middle of every
single bit, so the signal spends equal time high and low regardless of the data, and the baseline cannot drift.
4B/5B only guarantees a transition at least every few bits, so some short-term imbalance remains.
Solution:
In telecommunication, 4B5B is a form of data communications block coding. 4B5B maps groups of 4 bits onto groups of 5
bits, with a minimum density of 1 bits in the output. When NRZI-encoded, the 1 bits provide the necessary clock transitions for
the receiver. For example, a run of 4 bits such as 0000 contains no transitions, and that causes clocking problems for the receiver.
4B5B solves this problem by assigning each block of 4 consecutive bits an equivalent word of 5 bits. These 5-bit words are predetermined in a dictionary and they are chosen to ensure that there will be at least two transitions per block of bits.
A collateral effect of the code is that more bits are needed to send the same information than with 4 bits. An alternate to using
4B5B coding is to use a scrambler. Depending on the standard or specification of interest, there may be several 4B5B
characters left unused. The presence of any of the "unused" characters in the data stream can be used as an indication that there
is a fault somewhere in the link. Therefore, the unused characters can be used to detect errors in the data stream.
Manchester encoding is an alternative strategy. Briefly describe what it is. What is the cost?
In telecommunication and data storage, Manchester coding (also known as phase encoding, or PE) is a line code in which
each data bit is encoded as either low-then-high or high-then-low, of equal duration:
a 1 bit is encoded by a rising voltage transition; a 0 bit is encoded by a falling voltage transition.
For consecutive 1s, or for consecutive 0s, we also need a voltage transition between bits; thus the hardware must be able to
make two physical-layer signal transitions per bit. In other words, we cannot transmit one bit per physical-layer transition: we
are limited to half a bit per transition, which means we lose half of our bandwidth. This is the cost of Manchester encoding.
Note that signal transitions do not always occur at the 'bit boundaries' (the division between one bit and another), but there
is always a transition at the centre of each bit. The encoding may alternatively be viewed as a phase encoding where each bit is
encoded by a positive 90 degree phase transition, or a negative 90 degree phase transition. The Manchester code is therefore
sometimes known as a biphase code.
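To make the two-levels-per-bit cost concrete, here is a minimal C sketch of Manchester encoding,
assuming the rising-transition-for-1 convention used above:

/* Manchester encoding: each data bit becomes two half-bit signal levels */
#include <stdio.h>

/* Encode one byte, MSB first; out[] receives 16 half-bit levels (0 or 1). */
void manchester_encode(unsigned char byte, int out[16]) {
    for (int i = 0; i < 8; i++) {
        int bit = (byte >> (7 - i)) & 1;
        out[2*i]     = bit ? 0 : 1;   /* first half of the bit cell           */
        out[2*i + 1] = bit ? 1 : 0;   /* second half: the mid-cell transition */
    }
}

int main(void) {
    int levels[16];
    manchester_encode(0xAC, levels);            /* 10101100 */
    for (int i = 0; i < 16; i++) printf("%d", levels[i]);
    printf("\n");   /* every bit cell contains a transition at its centre */
    return 0;
}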

Question 3:

Link Layer

Question 3.1
Ethernet addresses are supposed to be globally unique. Briefly explain the standard strategy for ensuring
this.
[1 mark]
Vendors are given a range of MAC addresses that can be assigned to their products by the IEEE
(Institute of Electrical and Electronics Engineers).
MAC addresses are assigned to vendors in blocks of various sizes, as appropriate.
There are 2^48 (281,474,976,710,656) different potential combinations, so they are reasonably unique. The first 3 octets
define the manufacturer. The last 3 octets are usually generated at the time of PROM burning; it is up to the
manufacturer how they do this.

Question 3.2
What is framing, and how does it help the link layer to improve upon the services provided by the physical
layer?
[3 marks]
Framing is a function of the data link layer. It provides a way for a sender to transmit a set of bits that are
meaningful to the receiver. Ethernet, token ring, frame relay, and other data link layer technologies have
their own frame structures. Frames have headers that contain information such as error-checking codes.
Improvement upon the services:
Framing improves upon the services provided by the physical layer through the following functions:
1. Error control
The bit stream transmitted by the physical layer is not guaranteed to be error-free. The data link layer is
responsible for error detection and correction.
2. Flow control
If the sender keeps pumping out frames at a high rate, at some point the receiver will be completely swamped
and will start losing frames. This problem may be solved by introducing flow control. Most flow control
protocols contain a feedback mechanism to inform the sender when it should transmit the next frame.
3. Stream resynchronisation
If a receiver gets confused about the interpretation of a stream of incoming bits, framing enables it to
recognise the start of a subsequent frame and correctly interpret subsequent data. Some frames may still be
lost, but at least that doesn't doom us to losing all subsequent incoming data. Without framing, this would be
extremely difficult (likely impossible).
4. In addition, framing facilitates addressing (useful on a shared medium or a switched
network), and multiplexing (frames from separate data flows can be interleaved).

Question 3.3
A stream of 4 bytes of data is to be transmitted with 2-D parity data added. The output stream is to consist of
(in this order):
1. the four data bytes (32 bits)
2. four row-parity bits (one for each byte)
3. eight column-parity bits
4. one corner parity bit
Odd parity is used throughout.

Here is the stream of bits received at the destination:


01010010 11010010 10111110 01110101 0110 10110100 1
Is the received data free of bit-errors? Justify your answer.
[4 marks]

Ans: No errors are detected. With odd parity, each group (including its parity bit) must contain
an odd number of 1s:
Row parities: the four data bytes contain 3, 4, 6, and 5 one-bits respectively, so the odd
row-parity bits should be 0, 1, 1, 0, which matches the received 0110.
Column parities: summing each bit position down the four data bytes gives 2, 3, 2, 4, 1, 2, 3, 1
one-bits, so the odd column-parity bits should be 1, 0, 1, 1, 0, 1, 0, 0, matching the received
10110100.
Corner parity: the four row-parity bits contain two 1s, so the odd corner bit should be 1,
matching the received 1.
Every check passes, so no bit errors are detected (although 2-D parity cannot detect every
possible error pattern, e.g. some 4-bit rectangular patterns).
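Here is a small C sketch of the same check; the constants encode the received stream above, and
the corner bit is assumed to cover the row-parity bits (it gives the same result either way here):

/* 2-D odd-parity verification of the received block */
#include <stdio.h>

int ones(unsigned char b) {            /* count 1 bits in a byte */
    int n = 0;
    for (; b; b >>= 1) n += b & 1;
    return n;
}

int main(void) {
    unsigned char data[4] = {0x52, 0xD2, 0xBE, 0x75};  /* the 4 data bytes      */
    unsigned char row_par = 0x6;    /* received row-parity bits 0110            */
    unsigned char col_par = 0xB4;   /* received column-parity bits 10110100     */
    int corner = 1;                 /* received corner parity bit               */

    int ok = 1;
    for (int r = 0; r < 4; r++) {   /* each row: data ones + parity must be odd */
        int p = (row_par >> (3 - r)) & 1;
        if ((ones(data[r]) + p) % 2 == 0) ok = 0;
    }
    for (int c = 0; c < 8; c++) {   /* each column, MSB = column 0 */
        int sum = (col_par >> (7 - c)) & 1;
        for (int r = 0; r < 4; r++) sum += (data[r] >> (7 - c)) & 1;
        if (sum % 2 == 0) ok = 0;
    }
    if ((ones(row_par) + corner) % 2 == 0) ok = 0;  /* corner over row bits */
    printf(ok ? "no errors detected\n" : "error detected\n");
    return 0;
}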

a) Convert this 48-bit binary Ethernet address into its hexadecimal equivalent:
00010000 00100100 00101111 11110100 10011001 11000011.
[2 marks]
Ans:
10:24:2F:F4:99:C3

a) Ethernet addresses are supposed to be globally unique. Briefly explain the standard strategy for ensuring
this.
[1 mark]
The MAC address is a 48 bit address that is considered unique so that each component can be positively
identified across as many networks as it is used on. The IEEE standard posits a 100 year life span for all
MAC-48 addresses; in other words, no other device will have a particular device's MAC address for 100 years.
b) Explain why a sliding window ARQ is preferable to a stop-and-wait ARQ.
[2 marks]
A sliding window ARQ is preferable to stop-and-wait because it allows multiple frames to be in flight at
once, so the sender can keep the link busy during the round-trip time instead of sending only one frame per
RTT. The unique consecutive sequence numbers also let the receiver place received frames in the correct
order, discard duplicates, and identify missing ones.


Explain in specific detail why the Ethernet frame format includes an optional pad field.
[4 marks]
The Length/EtherType field is the only one which differs between 802.3 and
Ethernet II. In 802.3 it indicates the number of bytes of data in the frame's payload,
and can be anything from 0 to 1500 bytes. Frames must be at least 64 bytes long,
not including the preamble, so that collisions can still be detected: a frame must be long enough
that the sender is still transmitting when a collision signal from the far end of a
maximum-length segment gets back to it. Therefore, if the data field is shorter than 46 bytes,
the Pad field must make up the difference.

Question 3.5
What does the acronym ARP stand for? State its role in the network, and briefly outline its principal
request/reply mechanism.
[3 marks]

ARP: address resolution protocol


Role: The Address Resolution Protocol (ARP) is a protocol used by the Internet Protocol (IP) [RFC 826], specifically IPv4, to map
IP network addresses to the hardware addresses used by a data link protocol. The protocol operates below the network layer as
a part of the interface between the OSI network and link layers.
Request/reply mechanism:
When machine A wants to send packets to machine B, A broadcasts an ARP request containing B's IP address; B recognises its
own address and replies (unicast) with its MAC address, which A stores in its ARP cache. Since B will probably send packets
back to A in the near future, every machine on the network that sees A's broadcast request can also extract and store in its
cache the IP-to-MAC address binding of A. When a new machine appears on the network (e.g. when an operating
system reboots) it can broadcast its IP-to-MAC address binding so that all other machines can store it in their caches.

Question 4:

Network Layer 14 marks

Question 4.1
Consider the circuit-switched network in this diagram:

Assume that a switch will assign virtual-circuit identifiers sequentially, starting at zero, and that all virtual circuit
tables are initially empty. Give the complete virtual circuit table for each individual switch after all of the following
connections are established in sequence:
1. Host A connects to Host B
2. E connects to J
3. F connects to B
4. E connects to B
[6 marks]

1) Host A connects to Host B

At Switch 1:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        2          |      0       |         1          |      0

At Switch 2:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        3          |      0       |         0          |      0

At Switch 3:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        0          |      0       |         3          |      0

At Switch 4:
(no entry)

2) E connects to J

At Switch 1:
(no entry)

At Switch 2:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        2          |      1       |         0          |      1

At Switch 3:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        0          |      0       |         1          |      0

At Switch 4:
(no entry)

3) F connects to B

At Switch 1:
(no entry)

At Switch 2:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        1          |      0       |         0          |      0

At Switch 3:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        0          |      0       |         3          |      0

At Switch 4:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        2          |      0       |         1          |      0

4) E connects to B

At Switch 1:
(no entry)

At Switch 2:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        2          |      -       |         0          |      0

At Switch 3:
Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
        0          |      0       |         3          |      0

At Switch 4:
(no entry)

Question 4.2
Explain why IPv4 does reassembly only at the destination host, and not at intermediate routers?
[2 marks]
Fragments of one datagram may take different routes through the network, so no intermediate router is
guaranteed to see all the fragments of a given datagram. Reassembly at routers would also require them to
buffer fragments and hold per-datagram state, slowing down forwarding; and a reassembled datagram might
immediately need to be re-fragmented for the next link anyway. For these reasons, IPv4 routers may fragment
datagrams, but reassembly is performed only by the destination host.

Question 4.3
Link-state and distance-vector are two major classes of routing algorithms. Briefly explain the key conceptual
difference between these two approaches. Specifically, in each algorithm what information do network nodes
send out, and which nodes do they send it to?
[2 marks]
In a distance-vector algorithm, each node sends its entire current routing table (its vector of distances to
every known destination), but sends it only to its directly connected neighbours. In a link-state algorithm,
each node sends only information about its own directly connected links (link-state advertisements), but
floods this information to every node in the network, so that each node can build a complete map of the
topology and run a shortest-path algorithm over it.

Question 4.4
Consider a 1300-byte IP datagram with an identification number of 426 travelling through a network. It arrives
at a router which decides to forward it over a link with an MTU of 512 bytes. How many fragments are
generated, and what are their characteristics (size, offset, flag bits, and identification)? Assume that only
standard IP headers are used.

[4 marks]
Ans: Assume the 1300 bytes includes the standard 20-byte IP header, leaving 1280 bytes of data to
carry. Each fragment must fit in 512 bytes, of which 20 bytes is the IP header, leaving 492 bytes;
since the data in every fragment except the last must be a multiple of 8 bytes, each fragment can
carry 488 bytes of data.
Number of fragments = ceil(1280 / 488) = 3.
Fragment 1: total length 508 bytes (488 data), offset 0 (offset field 0), MF = 1, ID = 426
Fragment 2: total length 508 bytes (488 data), offset 488 bytes (offset field 61), MF = 1, ID = 426
Fragment 3: total length 324 bytes (304 data), offset 976 bytes (offset field 122), MF = 0, ID = 426
All three fragments carry identification number 426 so that the destination can reassemble them.
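A minimal C sketch of the same fragmentation arithmetic, assuming a 20-byte header and the
standard 8-byte units for the offset field:

/* Fragment a datagram for a given MTU and print each fragment */
#include <stdio.h>

int main(void) {
    int total = 1300, hdr = 20, mtu = 512, id = 426;
    int payload  = total - hdr;            /* 1280 bytes of data        */
    int per_frag = ((mtu - hdr) / 8) * 8;  /* 488: multiple of 8 bytes  */
    int offset   = 0;

    while (payload > 0) {
        int size = payload > per_frag ? per_frag : payload;
        int mf   = payload > per_frag;     /* more-fragments flag       */
        printf("id=%d len=%d offset=%d (field %d) MF=%d\n",
               id, size + hdr, offset, offset / 8, mf);
        payload -= size;
        offset  += size;
    }
    return 0;
}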

e) What is the 8-bit time to live field in the IPv4 header actually used for in current IPv4
implementations? Why is it necessary?
[2 marks]
Ans: Time to live (TTL), or hop limit, is a mechanism that limits the lifetime of data in a computer
network. TTL prevents a data packet from circulating indefinitely.
Implementation:
TTL is an 8-bit field in the IPv4 header. Although it was originally defined as a lifetime in seconds,
in current IPv4 implementations it is used purely as a hop count: every router that forwards the
datagram decreases the TTL by one, and when it reaches zero the datagram is discarded (and an ICMP
Time Exceeded message is sent back to the source). It is necessary because routing loops can occur,
and without TTL a looping packet would circulate forever and congest the network.
f) Explain what the count-to-infinity problem is. Construct a concrete numerical example that
clearly illustrates it. Include a diagram that supports your explanation.
Ans: The Bellman-Ford algorithm does not prevent routing loops from happening and suffers from the
count-to-infinity problem. The core of the count-to-infinity problem is that if A tells B that it has a path
somewhere, there is no way for B to know whether that path has B as a part of it.
Consider this example: routers R1, R2, R3 with links R1-R2 of cost 1 and R2-R3 of cost 1, plus a
much more expensive direct link R1-R3 (say cost 10), so R1 reaches R3 via R2 at cost 2.

The routing table will be:

      R1  R2  R3
R1     0   1   2
R2     1   0   1
R3     2   1   0

Now, assume the connection between R2 and R3 is lost (say the line broke, or an intermediate router
between them failed).
After one iteration of sending the information, you will get the following routing table:

      R1  R2  R3
R1     0   1   2
R2     1   0   3
R3     2   3   0

It happens because R2-R3 is no longer connected, so R2 "thinks" it can redirect packets to R3 through
R1, which advertises a path of cost 2, so R2 adopts a path of cost 3.
After an extra iteration, R1 "sees" R2 is more expensive than it used to be, so it modifies its routing
table:

      R1  R2  R3
R1     0   1   4
R2     1   0   3
R3     4   3   0

and so on, until they converge on the correct value, but that could take a long time, especially if
the direct link (R1,R3) is expensive.
This is called "count to infinity": if w(R1,R3) = infinity and it was the only remaining path, the
counting would continue forever.
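To make the counting-up behaviour concrete, here is a toy synchronous simulation in C. It assumes
(as above) a direct R1-R3 link of cost 10 and that R2's only remaining neighbour is R1, starting
from the stale estimates after the R2-R3 failure:

/* Distance-vector count-to-infinity after the R2-R3 link fails */
#include <stdio.h>

int main(void) {
    double c12 = 1, c13 = 10;  /* surviving link costs; R2-R3 has failed */
    double d1 = 2, d2 = 3;     /* stale estimates of the distance to R3  */

    for (int round = 1; round <= 10; round++) {
        /* each router recomputes from its neighbours' last advertisements */
        double n1 = (c13 < c12 + d2) ? c13 : c12 + d2;  /* R1: direct or via R2 */
        double n2 = c12 + d1;                           /* R2: only via R1      */
        d1 = n1;
        d2 = n2;
        printf("round %2d: R1->R3 = %2g, R2->R3 = %2g\n", round, d1, d2);
    }
    /* the estimates count upward by 2 per exchange until the expensive
       direct route finally wins (R1->R3 = 10, R2->R3 = 11) */
    return 0;
}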
h) What is the 32-bit binary equivalent of the network address 192.168.170.18?
[2 marks]
(192)10 = (11000000)2
(168)10 = (10101000)2
(170)10 = (10101010)2
(18)10 = (00010010)2
So 192.168.170.18 = 11000000 10101000 10101010 00010010

Question 5:

Transport Layer 10 marks

Question 5.1
What is the single most important service provided by the Transport Layer?
[1 mark]
The single most important service of the transport layer is logical, end-to-end communication between
application processes running on different hosts (process-to-process delivery). It extends the
host-to-host delivery provided by the network layer up to the individual processes, within a layered
architecture of protocols and other network components.

Question 5.2
State the full terms represented by the following acronyms:
[2 marks]
Ans:

TCP
Transmission Control Protocol
UDP
User Datagram Protocol
IP
Internet Protocol
ICMP
Internet Control Message Protocol

What is a port number, as used in TCP/IP?


Ans: A port number is a way to identify a specific process to which an Internet or other network message is to be
forwarded when it arrives at a server. For the Transmission Control Protocol and the User Datagram Protocol, a port
number is a 16-bit integer that is put in the header appended to a message unit.

Question 5.3
Suppose that host A sends two consecutive TCP segments to host B over a TCP connection: the first,
segment X, has sequence number 807, and is lost in transit.
the second, segment Y, has sequence number 828, and arrives successfully at B.
b) How much payload data is in segment X?
[1 mark]
Ans:
Segment X carries 828 - 807 = 21 bytes of payload data.

c) Suppose that all data prior to segment X has already arrived at host B and has been acknowledged.
When segment Y arrives at host B, what acknowledgement number will host B send back to host A?

[2 marks]
TCP acknowledgements are cumulative: the acknowledgement number is the sequence number of the next
byte the receiver expects in order. Since segment X (bytes 807-827) was lost, host B is still waiting
for byte 807, so when the out-of-order segment Y arrives, B sends back acknowledgement number 807
(a duplicate ACK).

d) Suppose that host A sends two consecutive TCP segments to host B over a TCP connection. The first
segment has sequence number 90; the second has sequence number 112.
1) How much data is in the first segment?
[1 mark]
Ans: The first segment carries 112 - 90 = 22 bytes of data.

2) Suppose that the first segment is lost but the second segment arrives at B. In the acknowledgement that
host B sends to host A, what will be the acknowledgement number?
[2 marks]

Ans: TCP acknowledgements are cumulative, so B acknowledges the next byte it expects in order. The
first segment (bytes 90-111) was lost, so B will send acknowledgement number 90, even though the
second segment has arrived.

Question 5.4
It is sometimes claimed that UDP is faster than TCP. Explain some arguments for and/or against this assertion.
[4 marks]
UDP is often faster than TCP, and the simple reason is that it has no acknowledgement (ACK) packets: it permits a
continuous packet stream, whereas TCP acknowledges sets of packets, paced by the TCP window size and
round-trip time (RTT).
Things that tend to push applications towards UDP:
Group delivery semantics: it is possible to do reliable delivery to a group of receivers much more efficiently than with TCP's
point-to-point acknowledgements.
Out-of-order delivery: in lots of applications, as long as you get all the data, you don't care what order it arrives in; you can
reduce application-level latency by accepting an out-of-order block.
Unfriendliness: at a LAN party, you may not care whether your web browser functions nicely as long as you're blitting updates to
the network as fast as you possibly can.
But even if you care about performance, you probably don't want to go with UDP:
You're on the hook for reliability now, and a lot of the things you might do to implement reliability can end up being slower
than what TCP already does.
Now you're network-unfriendly, which can cause problems in shared environments.
Most importantly, firewalls may block you.

e) What is the three-way handshake used to establish a TCP connection? What information is passed
between the two hosts at each stage?
[3 marks]

The three-message handshake (SYN-SYN-ACK) is the method used by TCP to set up a TCP/IP connection over an
Internet Protocol based network. TCP's three-way handshaking technique is often referred to as "SYN-SYN-ACK"
(or more accurately SYN, SYN-ACK, ACK) because three messages are transmitted by TCP to negotiate and
start a TCP session between two computers. The handshaking mechanism is designed so that two computers
attempting to communicate can negotiate the parameters of the TCP socket connection (initial sequence numbers,
window sizes, options) before transmitting data such as SSH and HTTP web browser requests.
Example:
Host A sends a TCP SYNchronize packet to Host B
Host B receives A's SYN
Host B sends a SYNchronize-ACKnowledgement
Host A receives B's SYN-ACK
Host A sends ACKnowledge
Host B receives ACK.
TCP socket connection is ESTABLISHED.

Question 6: Application Layer


Question 6.1
Describe and clearly distinguish the roles in DNS of:
authoritative name servers, local name servers, and root name
servers.
[6 marks]
Authoritative name servers
An authoritative name server provides actual answer to your DNS queries such as mail server IP address or web site IP
address (A resource record). It provides original and definitive answers to DNS queries; it does not merely provide cached
answers obtained from another name server. Therefore it only returns answers to queries about domain names that
are installed in its configuration system. There are two types of authoritative name servers: master and slave.

Local name servers


Local (caching/forwarding) server -- a local name server that only caches information for local clients once it has been retrieved
from an authoritative name server. The local server can effectively speed up name queries for the local network by serving up
names found by prior queries, preventing a request to the authoritative server for that host's domain.

Root name servers


Root server -- these servers are at the base of the name server hierarchy. They are a fixed set of name servers that maintain a
list of the authoritative (master/slave) name servers for every registered domain. These are typically name servers located at
companies either contracted to provide the service by ICANN or government institutions.
(b) What is the difference between a recursive DNS query and an iterative DNS query? Describe a scenario in

which an iterative query is beneficial and identify two specific benefits. Describe a scenario in which a
recursive query is beneficial and identify two specific benefits.
[6 marks]
Recursive DNS query
With a recursive name query, the DNS client requires that the DNS server respond to the client with either the
requested resource record or an error message stating that the record or domain name does not exist. The DNS server
cannot just refer the DNS client to a different DNS server.
Thus, if a DNS server does not have the requested information when it receives a recursive query, it queries other
servers until it gets the information, or until the name query fails.
Recursive name queries are generally made by a DNS client to a DNS server, or by a DNS server that is configured to
pass unresolved name queries to another DNS server, in the case of a DNS server configured to use a forwarder.
This process is sometimes referred to as "walking the tree," and this type of query is typically initiated by a DNS
server that attempts to resolve a recursive name query for a DNS client.

Iterative DNS query


An iterative name query is one in which a DNS client allows the DNS server to return the best answer it can give
based on its cache or zone data. If the queried DNS server does not have an exact match for the queried name, the
best possible information it can return is a referral (that is, a pointer to a DNS server authoritative for a lower level
of the domain namespace). The DNS client can then query the DNS server for which it obtained a referral. It
continues this process until it locates a DNS server that is authoritative for the queried name, or until an error or
time-out condition is met.

Iterative scenario and benefits:
An iterative query is a request from a client that tells the DNS server the client expects the best answer the
server can provide immediately, without contacting other DNS servers. The process then relies on the client to
continue the resolution, typically by following a referral in which the DNS server supplies NS or A records of
a server closer to the queried part of the namespace. This is how a root or TLD server is normally queried.
Benefits: the queried server does minimal work per request (important for heavily loaded root and TLD servers),
and the client retains control of, and can cache, every step of the resolution.
Recursive scenario and benefits:
With a recursive query, which is what we normally think of when a client sends a request to its local DNS
server, the server takes on the whole job: it resolves the name itself, either by iteratively querying from the
root hints downwards through the namespace, or by passing the query to a configured forwarder.
Benefits: the client can be a very simple stub resolver, and the recursive server caches answers centrally, so
later clients on the same network get fast responses without any external queries.
(c) List four non-proprietary Internet applications and the
application-layer protocols they each use.
[4 marks]

The Web: HTTP; file transfer: FTP; remote login: Telnet; network news: NNTP; e-mail: SMTP.
(That is five examples; any four of them will do.)
(d) Why do HTTP, FTP, SMTP, and POP3 run on top of TCP
rather than on UDP?
[2 marks]

The applications associated with those protocols require that all


application data be received in the correct order and without gaps. TCP
provides this service whereas UDP does not.

Question 6.2
Why does DNS usually run on top of UDP rather than on TCP?
[2 marks]
Performance is a big reason. By comparison, TCP connections are very "expensive", requiring the whole SYN, SYN-ACK,
ACK chain of events. Generally a DNS response can fit within a single packet with lots of room to spare, so
using TCP for DNS really is killing the proverbial rabbit with an h-bomb.
UDP is now often used for a complete transaction even when the message size exceeds 512 bytes. The Extension
Mechanisms for DNS (EDNS0) draft originally specified a mechanism whereby a resolver can signal to an
authoritative server that it is able to handle longer packets.

Question 6.3
Explain how HTTP/1.1 can reduce latency compared to HTTP/1.0.
[2 marks]
HTTP/1.1 can reduce latency for the following reasons:
HTTP/1.1 uses persistent connections by default: a server may assume that an HTTP/1.1 client intends to
keep the TCP connection open unless a Connection: close header is sent. Multiple requests and responses can
therefore be carried over one connection, instead of paying a TCP handshake (and slow start) for every
object, as HTTP/1.0 does.
HTTP/1.1 also allows pipelining: the client may send several requests without waiting for each response,
removing one round trip of delay per object.

Question 6.4
An HTTP proxy cache can be usefully deployed in a client network or in a server network. What benefits
can be achieved in each scenario?
[3 marks]
An HTTP proxy cache stores copies of recently requested objects and serves repeated requests for them from
its cache instead of fetching them again from the origin server.
Benefit of deploying in a client network:
A cache near the clients (a forward proxy) reduces response time for users, because popular objects are
served from the nearby cache rather than from a distant origin server, and it reduces traffic on the
organisation's access link, saving bandwidth and cost.
Benefits of deploying in a server network:
A cache in front of the servers (a reverse proxy, or server accelerator) absorbs repeated requests for
popular content, reducing the load on the origin servers and letting the site serve more clients with the
same server capacity; it can also soak up sudden traffic spikes.

Question 7.5
Why is the Telnet protocol regarded as a security risk?
[2 marks]
Running the Telnet service keeps its default port, 23, open.
Telnet clients and servers exchange data as unencrypted characters (plaintext). If you are using password
authentication, your user name and password are transmitted in the clear. (If you use NTLM authentication, the
user name and password are encrypted, but the rest of the Telnet session is still plaintext.) Anyone with a
network protocol analyzer and access to the network media can therefore read the contents of a Telnet session,
including any credentials typed during it.

Question 7.2
The following cipher text has been encoded with the Caesar
Cipher (ROT-3):
jrrg ghfubswlrq
Recover the original plaintext.
[2 marks]
good decryption
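A minimal C decoder for the ciphertext above:

/* ROT-3 (Caesar) decryption: shift each lowercase letter back by 3 */
#include <stdio.h>

int main(void) {
    const char *cipher = "jrrg ghfubswlrq";
    for (const char *p = cipher; *p; p++) {
        if (*p >= 'a' && *p <= 'z')
            putchar('a' + (*p - 'a' + 26 - 3) % 26);  /* shift back by 3 */
        else
            putchar(*p);                              /* leave spaces alone */
    }
    putchar('\n');   /* prints: good decryption */
    return 0;
}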

Question 7.3
What is a frequency analysis attack, and why are substitution
ciphers particularly vulnerable to such
attacks?
[2 marks]
A frequency analysis attack involves calculating and analysing the frequency of each letter in a piece of
ciphertext, and comparing it with the known letter frequencies of the language (in English, "e" is the most
common letter). We can use this information to help break a code produced by a monoalphabetic substitution
cipher. This works because, if "e" has been encrypted to "X", then every "X" was an "e"; hence the most common
letter in the ciphertext is probably the encryption of "e".
Substitution ciphers are particularly vulnerable to such attacks because a simple substitution cipher is merely
a rearrangement of the plaintext alphabet: each plaintext letter always maps to the same ciphertext letter, so
the frequency pattern of the plaintext survives intact in the ciphertext. Despite having 26! possible keys,
substitution ciphers are therefore very insecure and are easily solved using letter frequencies.
For understanding only:
A Caesar-style cipher has the form: ciphertext letter = plaintext letter + key (mod 26).
Example with key 3: 'a' encrypts to 'd'.
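Here is a sketch of the first step of such an attack: counting letter frequencies in a ciphertext
(the sample string is just the ROT-3 ciphertext from the previous question):

/* Letter-frequency count: the starting point of a frequency analysis attack */
#include <stdio.h>
#include <ctype.h>

int main(void) {
    const char *ciphertext = "jrrg ghfubswlrq";
    int count[26] = {0};

    for (const char *p = ciphertext; *p; p++)
        if (isalpha((unsigned char)*p))
            count[tolower((unsigned char)*p) - 'a']++;

    /* The most frequent ciphertext letter likely maps to a common
       plaintext letter such as 'e', which unravels a substitution cipher. */
    for (int i = 0; i < 26; i++)
        if (count[i] > 0)
            printf("%c: %d\n", 'a' + i, count[i]);
    return 0;
}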

Question 7:

Reliability

Question 8.1
Consider transmitting data over a series of unreliable links with the following characteristics:

end-to-end path traverses 3 separate links

a link-layer frame transmitted over any one link has a 95% chance of being delivered intact to the other
end of that link
the Transport-Layer packets being transmitted are too large for the Link-Layer MTUs, and must each be
broken down into 8 separate Network-Layer packets, each of which fits in one Link-Layer frame.

What is the probability of a Transport-Layer packet being successfully delivered without triggering
Transport-Layer retransmission if the Link Layer provides...
a) an unacknowledged connectionless service (no retransmission of lost frames)? Show your
calculations.
[2 marks]
Ans: The packet is delivered only if all 8 frames survive all 3 links, i.e. all 8 x 3 = 24
frame-hops succeed:
P = 0.95^24 ≈ 0.292 (about a 29% chance).

b) an acknowledged connectionless service with up to four transmission attempts (original attempt
plus up to three retries) for a lost/damaged frame? Show your calculations.
[4 marks]
Ans: A frame-hop now fails only if all four attempts fail:
P(one hop succeeds) = 1 - 0.05^4 = 1 - 0.00000625 = 0.99999375
All 24 frame-hops must still succeed:
P = 0.99999375^24 ≈ 0.99985 (about a 99.99% chance).
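A small C sketch reproducing both calculations:

/* Delivery probability: 8 frames per packet, 3 links, 95% per frame-hop */
#include <stdio.h>
#include <math.h>

int main(void) {
    double p = 0.95;     /* per-frame, per-link success probability */
    int hops = 8 * 3;    /* 8 frames, each crossing 3 links         */

    /* a) no link-layer retransmission: every frame-hop must succeed */
    double unacked = pow(p, hops);

    /* b) up to 4 attempts per frame-hop: a hop fails only if all 4 fail */
    double hop_ok = 1.0 - pow(1.0 - p, 4);
    double acked  = pow(hop_ok, hops);

    printf("unacknowledged: %.4f\n", unacked);  /* ~0.2920  */
    printf("acknowledged:   %.6f\n", acked);    /* ~0.99985 */
    return 0;
}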

Question 8.2
Discuss the pros and cons of implementing reliability at the Link Layer versus implementing it at the
Transport Layer.
[5 mark]
Pros (of Link-Layer reliability):
Providing reliability and error control at the link layer is an optimisation, never a requirement.
The data link layer transforms the physical layer, a raw transmission facility, into a reliable link, and is
responsible for node-to-node delivery.
A data link control protocol provides functions such as flow control, error detection, and error control on
each hop, so errors are detected and repaired close to where they occur, frame by frame, rather than only
after crossing the entire path. For Ethernet, the error-detection code is a CRC-32 over the whole frame,
implemented in hardware, so it is basically a trivial sunk cost.
In a data link control protocol, error control is activated by retransmission of a damaged frame that has not
been acknowledged by the other side, which requests a retransmission.
Cons (why Transport-Layer reliability is still needed):
The transport layer breaks messages into packets; if reliability exists only at the link layer, losses that
occur above the individual links are never repaired.
Packets can still be lost inside the network, for example through congestion and queue overflow in routers,
even when every individual link is reliable, so end-to-end reliability at the transport layer is still required.
Per-link reliability also adds latency and processing on every hop, which is wasted effort for applications
that do not need reliable delivery.

b) Describe what is meant by reliability in the context of computer networks, and discuss the pros and
cons of implementing reliability at the Network Layer versus implementing it at the Transport Layer.
[6 marks]
Reliability:
In computer networking, a reliable protocol provides reliability properties with respect to the delivery of data
to the intended recipient(s), as opposed to an unreliable protocol, which does not provide notifications to the
sender as to the delivery of transmitted data.
A reliable service is one that notifies the user if delivery fails, while an "unreliable" one does not notify the
user if delivery fails. For example, IP provides an unreliable service. Together, TCP and IP provide a reliable
service, whereas UDP and IP provide an unreliable one.
Pros (of Network-Layer reliability):
By implementing reliability at the network layer, the delivery of packets from a source across multiple
networks becomes more dependable for every user of the layer.
Reliability would improve connection services, network-layer flow control, and packet sequence control,
because the delivery of packets would be confirmed by acknowledgements.
Every application would get reliable host-to-host delivery without each transport protocol implementing it
separately.
Cons:
Reliability at the network layer forces every router to do extra work (buffering, acknowledgements, and
retransmission state), even for traffic that does not need reliability.
It still cannot replace transport-layer reliability: the network layer delivers packets host to host, but
process-to-process delivery of an entire message, in order and without gaps, can only be guaranteed end to
end at the transport layer.
Loss can still occur between the network layer and the application within the end systems, so reassembling
packets into a reliable byte stream would remain unreliable without transport-layer mechanisms.

Question 8:

Processes and Scheduling

Question 9.1
Draw a state transition diagram for the 7-state Process Model:

Include the name of each state in your diagram.

Draw arrows for all required state transitions.

Do not include arrows for any invalid transitions.

Valid but optional transitions may be either included or omitted.

[10 marks]

Ans (described textually in place of the diagram, following the standard seven-state model):
States: New, Ready, Running, Blocked, Ready/Suspend, Blocked/Suspend, Exit.
Required transitions:
New -> Ready (admit); New -> Ready/Suspend (admit when memory is full)
Ready -> Running (dispatch); Running -> Ready (time-out / preemption)
Running -> Blocked (waits for an event); Blocked -> Ready (event occurs)
Ready -> Ready/Suspend (suspend / swap out); Blocked -> Blocked/Suspend (suspend / swap out)
Blocked/Suspend -> Ready/Suspend (event occurs); Ready/Suspend -> Ready (activate / swap in)
Running -> Exit (terminate)
Valid but optional transitions: Blocked/Suspend -> Blocked (activate); Running -> Ready/Suspend
(suspend); Ready or Blocked -> Exit (process killed).

Question 9.2
What is starvation? Propose a strategy for avoiding starvation in a priority scheduling algorithm.
[2 marks]

Starvation
Starvation is a problem encountered in concurrent computing where a process is perpetually denied the
resources it needs to make progress. In a priority scheduling algorithm this typically happens when
higher-priority processes keep arriving, so a low-priority process waits indefinitely for the CPU or for a
resource held by others.
Solution:
Aging: gradually increase the priority of processes that have been waiting in the system for a long time,
so that every process is eventually scheduled.

Question 9.3
In a mono-programming batch environment, what scheduling algorithm will optimise average turnaround time?
What is the main difficulty with implementing this algorithm?
[2 marks]

In a mono-programming batch environment, the Shortest Job First (SJF) algorithm optimises average
turnaround time: since only one process can run at a time, running the shortest jobs first minimises
the average waiting of all jobs, and SJF is non-preemptive, so each job runs to completion.
The main difficulty with implementing it is that the run times of jobs are not known in advance, so
they must be estimated (for example from past behaviour), and estimates can be wrong. A secondary
problem is starvation: long jobs can be postponed indefinitely if short jobs keep arriving.

Question 9.4
Multiprogramming allows multiple processes to be resident in memory at the same time and leads to more
efficient utilisation of the CPU.
a) What is the formula for calculating the CPU utilisation based on a simplified analysis of
multiprogramming?
[0.5 marks]

U = 1 - b^n
b) State the meaning of each term in your formula.
[1.5 marks]

CPU utilisation = 1 - (probability the CPU is idle)
                = 1 - (probability all n processes are simultaneously blocked)
                = 1 - b^n
U = utilisation of the CPU
b = the fraction of time a single process spends blocked waiting for I/O
n = the number of processes resident in memory
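A tiny C sketch evaluating the formula, with an assumed I/O-wait fraction of b = 0.8:

/* CPU utilisation under the simplified multiprogramming model */
#include <stdio.h>
#include <math.h>

int main(void) {
    double b = 0.8;                  /* fraction of time a process waits on I/O */
    for (int n = 1; n <= 5; n++)     /* n processes resident in memory          */
        printf("n=%d: U = %.2f\n", n, 1.0 - pow(b, n));
    return 0;  /* utilisation rises as more processes are multiprogrammed */
}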

Question 9:

Concurrency and Deadlock

Question 10.1
a) Consider the following sequence of resource allocation requests. Assume that there is only one instance
of each resource, a resource can only be allocated to one process at a time, and resource requests are
immediately granted if the resource is available.

Process P1 requests Resource R1.

Process P2 requests Resource R2.

Process P3 requests Resource R3.

Process P2 requests Resource R4.

Process P1 requests Resource R2.

Process P1 requests Resource R3.

Draw a resource allocation graph to depict the current state. Is the system currently deadlocked? Give a
reason for your answer.
[4 marks]
The resource allocation graph (described as edge lists):
Allocation edges: R1 -> P1, R2 -> P2, R3 -> P3, R4 -> P2
Request edges: P1 -> R2, P1 -> R3

No, the system is not currently deadlocked, because there is no cycle in the graph. P1 is waiting
for R2 (held by P2) and R3 (held by P3), but P2 and P3 are not waiting for anything, so they can
run to completion and release their resources, after which P1 can proceed.

b) Following on from the above scenario, Process P3 requests Resource R4. Is the system
deadlocked now? State your reason/s.
[1 mark]
Adding the request edge P3 -> R4 (R4 is held by P2) gives:
Allocation edges: R1 -> P1, R2 -> P2, R3 -> P3, R4 -> P2
Request edges: P1 -> R2, P1 -> R3, P3 -> R4

There is still no deadlock, because there is still no cycle in the graph: P1 waits on P2 and P3,
and P3 waits on P2, but P2 waits on nothing, so it can run to completion and release R2 and R4,
unblocking the others.

Question 10.3
The following diagram illustrates the synchronisation we would like to achieve between a set of
cooperating processes:

We want process A's critical region to complete before either B's or C's critical regions commence, and D should
not commence its critical region until both B's and C's critical regions have completed.
Eight binary semaphores have been set up, called sem1, sem2, ..., sem8. Each has been initialised to zero. You
may use any of them that you need.
For each process (A, B, C, & D), write signal() and wait() calls to be called before/after the critical region that
would enforce the desired synchronisation. Use only as many separate semaphore objects in your solution as you
actually require for a simple, straightforward solution.
Where no signal()s or wait()s are required, write none.
[4 marks]
Process | Before critical region | After critical region
A       | none                   | signal(sem1); signal(sem2)
B       | wait(sem1)             | signal(sem3)
C       | wait(sem2)             | signal(sem4)
D       | wait(sem3); wait(sem4) | none

(Four semaphores suffice: sem1 and sem2 make B and C wait for A to finish; sem3 and sem4 make D
wait for both B and C to finish.)
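A minimal POSIX-threads sketch of this scheme, where wait() and signal() map onto sem_wait() and
sem_post(); the printed order demonstrates the constraint. This is an illustrative sketch, not part
of the original answer:

/* A before B and C; D after both B and C; sem1..sem4 start at 0 */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

sem_t sem1, sem2, sem3, sem4;

void *A(void *arg) { printf("A critical region\n");
                     sem_post(&sem1); sem_post(&sem2); return NULL; }
void *B(void *arg) { sem_wait(&sem1); printf("B critical region\n");
                     sem_post(&sem3); return NULL; }
void *C(void *arg) { sem_wait(&sem2); printf("C critical region\n");
                     sem_post(&sem4); return NULL; }
void *D(void *arg) { sem_wait(&sem3); sem_wait(&sem4);
                     printf("D critical region\n"); return NULL; }

int main(void) {
    pthread_t t[4];
    sem_init(&sem1, 0, 0); sem_init(&sem2, 0, 0);
    sem_init(&sem3, 0, 0); sem_init(&sem4, 0, 0);
    pthread_create(&t[0], NULL, A, NULL);
    pthread_create(&t[1], NULL, B, NULL);
    pthread_create(&t[2], NULL, C, NULL);
    pthread_create(&t[3], NULL, D, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    return 0;   /* A always prints first; D always prints last */
}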

Question 10.4
c) List and briefly describe the four conditions that must apply for deadlock to occur.
[6 marks]
Deadlock can arise if four conditions hold simultaneously.
1. Mutual exclusion: Only one process at a time can use a
resource.
2. Hold and wait: A process holding one or more allocated
resources while awaiting assignment of other.
3. No preemption: No resource can be forcibly removed
from a process holding it.
4. Circular wait: There exists a set {P0, P1, ..., Pn} of waiting
processes such that P0 is waiting for a resource that is
held by P1, P1 is waiting for a resource that is held by P2,
..., Pn-1 is waiting for a resource that is held by
Pn, and Pn is waiting for a resource that is held by P0:
P0 -> P1 -> P2 -> ... -> Pn -> P0

Q2.4 [4 marks]
Binary semaphores, general semaphores, and monitor condition variables all
provide signal() and wait() operations that seem quite similar. However, if
there is no thread/process currently waiting(), a signal() is handled quite
differently by the three different types of synchronisation objects. State
concisely and precisely what that difference is.

With a binary semaphore, a signal() when no process is waiting is remembered: the semaphore's value
is set to 1, so the next wait() proceeds immediately (further signals have no additional effect,
since the value cannot exceed 1). With a general (counting) semaphore, every such signal() is
remembered: the counter is incremented, and multiple signals accumulate. With a monitor condition
variable, a signal() when no thread is waiting is simply lost: condition variables have no memory,
so a later wait() will still block.

c) One possible solution to the Dining Philosophers problem is to get some of the philosophers to pick up
their left chopstick first and then their right chopstick, whilst others pick up their right chopstick first
followed by their left chopstick.
This strategy negates one of the above four conditions of deadlock. State exactly which one.
[2 marks]
Ans: The strategy negates the circular wait condition. If every philosopher picked up chopsticks in
the same order, a cycle of processes each holding one chopstick and waiting for the next could form
around the table; making some philosophers pick up their chopsticks in the opposite order breaks any
possible cycle, so a circular wait can never occur.

Question 10:

Memory Management

Question 11.1
What is the key difference between paging and virtual memory?
[1 mark]
Virtual memory: with virtual memory, a process's address space can be larger than the amount of physical
memory on the machine; the operating system keeps only the currently needed parts in RAM and the rest on disk.
Paging: in computer operating systems, paging is a memory management scheme by which a computer stores and
retrieves data from secondary storage for use in main memory, in same-size blocks called pages. Paging is the
mechanism commonly used to implement virtual memory, but paging by itself does not require the address space
to exceed physical memory.

Question 11.2
A process is running on a machine with 8-bit memory addresses, and its page table is as follows:

Page Number | Frame Number | Valid?
    00      |     00       |   1
    01      |     11       |   1
    10      |     01       |   1
    11      |     01       |   0

a) What is the size of a page (number of bytes)?

[1 mark]
2-bit page/frame numbers => 6-bit offsets
2^6 = 64 => page size is 64 bytes
b) What 8-bit physical address corresponds to the virtual address 10010111? Show your working.

[2 marks]

Virtual address 10010111: the page number is the top 2 bits = 10, and the offset is the low
6 bits = 010111.
The page table maps page 10 to frame 01, which is valid.
Physical address = frame number concatenated with offset = 01 010111 = 01010111
So the 8-bit physical address is 01010111 (0x57, decimal 87).
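A small C sketch of the same translation; the frame and valid arrays transcribe the page table
above, and 0x97 is the virtual address 10010111:

/* 2-bit page number, 6-bit offset: translate virtual to physical */
#include <stdio.h>

int frame[4] = {0, 3, 1, 1};   /* page -> frame (00, 11, 01, 01 in binary) */
int valid[4] = {1, 1, 1, 0};   /* page 11 is not resident                  */

int translate(int vaddr) {
    int page   = (vaddr >> 6) & 0x3;   /* top 2 bits    */
    int offset = vaddr & 0x3F;         /* bottom 6 bits */
    if (!valid[page]) return -1;       /* page fault    */
    return (frame[page] << 6) | offset;
}

int main(void) {
    printf("physical = 0x%02X\n", translate(0x97));  /* 0x57 = 01010111   */
    printf("fault example: %d\n", translate(0xF3));  /* 11110011 -> -1    */
    return 0;
}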
c) Give one example of a non-zero 8-bit virtual address that will generate a page fault.

[1 mark]
243 = 11110011
Page 11 is not currently loaded (its valid bit in the page table is 0),
so this address generates a page fault: the page would first need to be loaded into a frame.

Question 11.3
A process has 8 virtual pages, and a resident set size (RSS) limit of 4 physical frames. The process needs to
access data on the following pages (in this order): 0,1,7,2,3,2,7,1,0,3,2. Assume the 4 frames are initially empty.
How many page faults occur if the system uses a LRU page replacement strategy? Show your working.
[3 marks]
FIFO: 0F, 1F, 7F, 2F, 3F(x0), 2, 7, 1, 0F(x1), 3, 2 = 6 page faults
LRU: 0F, 1F, 7F, 2F, 3F(x0), 2, 7, 1, 0F(x3), 3F(x2), 2F(x7) = 8 page faults
(F marks a fault; (xN) shows the page replaced.)
So in this (concocted) example, LRU performs worse than FIFO.
(But you can't draw any firm conclusions about the relative merits of FIFO vs LRU
in general from just this one example!)
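A small C sketch that replays the reference string under LRU and counts the faults:

/* LRU simulation: 4 frames, reference string 0,1,7,2,3,2,7,1,0,3,2 */
#include <stdio.h>

int main(void) {
    int refs[] = {0,1,7,2,3,2,7,1,0,3,2};
    int n = sizeof refs / sizeof refs[0];
    int frames[4], last[8] = {0}, age = 0, used = 0, faults = 0;
    /* last[p] = time page p was last referenced; frames[] = resident pages */
    for (int i = 0; i < 4; i++) frames[i] = -1;

    for (int i = 0; i < n; i++) {
        int p = refs[i], hit = 0;
        for (int f = 0; f < 4; f++) if (frames[f] == p) hit = 1;
        if (!hit) {
            faults++;
            int victim = 0;
            if (used < 4) victim = used++;           /* fill a free frame   */
            else for (int f = 1; f < 4; f++)         /* else evict the LRU  */
                if (last[frames[f]] < last[frames[victim]]) victim = f;
            frames[victim] = p;
        }
        last[p] = ++age;
        printf("ref %d -> %s\n", p, hit ? "hit" : "FAULT");
    }
    printf("total LRU faults: %d\n", faults);   /* prints 8 */
    return 0;
}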

Question 11.4
The following two code fragments each calculate the sum of all elements in a two-dimensional array.
/* code fragment #1 */
for(int c = 0; c < cols; c++){
    for(int r = 0; r < rows; r++){
        sum += array[r][c];
    }
}

/* code fragment #2 */
for(int r = 0; r < rows; r++){
    for(int c = 0; c < cols; c++){
        sum += array[r][c];
    }
}
Assuming the compiler will lay out a 2-D array in row-wise fashion, which piece of code conforms more closely
to the principle of locality? Explain.
[3 marks]
Code fragment #2 conforms more closely to the principle of locality. With a row-wise (row-major)
layout, the elements of each row are adjacent in memory; fragment #2's inner loop walks along a row
(array[r][0], array[r][1], ...), so consecutive accesses touch consecutive memory locations and stay
within the same cache line or page for many accesses. Fragment #1 walks down a column, jumping a
whole row-length in memory on every access, which gives poor spatial locality and causes many more
cache misses and page faults.

Question 11.5
Define internal fragmentation and external fragmentation. Which type of fragmentation is often
associated with static memory partitioning schemes?
[3 marks]
Fragmentation is a phenomenon that occurs in computer memory such as Random Access
Memory (RAM) or hard disks, which causes wastage and inefficient usage of free space. While the
efficient usage of available space is hindered, this causes performance issues, as well. Internal
fragmentation occurs when memory allocation is based on fixed-size partitions where after a small
size application is assigned to a slot the remaining free space of that slot is wasted. External
fragmentation occurs when memory is dynamically allocated where after loading and unloading of
several slots here and there the free space is being distributed rather than being contiguous.
With static memory partitioning internal fragmentation is associated because there are fixed sized
blocks of memory where a process comes and fits as discussed in internal fragmentation.

Question 11.6
Describe what each of the following is, and what problem it is intended to solve:

page table

inverted page table

translation look-aside buffer

[6 marks]

page table

Page Tables are used to keep track of how logical addresses map to physical addresses. Page table entries
generally contain the frame number where the particular page is loaded. In addition a PTE may also
contain a modify bit, resident bit, valid bit and protection bits
The page table size depends on the maximum number of pages of a process that the CPU supports. E.g. if a
system supports a process of at most 8 pages, then the page table will contain 8 rows (the length of
each row is debatable), and a page number will consist of 3 bits.

inverted page table

A page table has one entry for each page in the logical address space of a process.
An inverted page table has one entry for each frame in the physical address space (physical memory).
Entries of a page table contain frame numbers; entries of an inverted page table contain page numbers
and a pid (information about the process that owns that page).
A page table is indexed with the page number, p; an inverted page table is indexed with the frame
number, f.
Inverted page tables are used to reduce the size of page tables: only one table is required in the
system (all processes share a single inverted page table), and its size is limited by the number of
physical frames rather than by the process address space.
Each entry contains a pid as well as a page number because two processes executing at the same time
may each have a page number 3; the pid disambiguates them.
The 64-bit UltraSPARC and IBM PowerPC use this technique.

translation look-aside buffer

A translation lookaside buffer (TLB) is a memory cache that is used to reduce the time taken to access a user memory location.
It is part of the chip's memory-management unit (MMU). The TLB stores recent translations of virtual memory to
physical memory and can be called an address-translation cache. A TLB may reside between the CPU and the CPU cache,
between the CPU cache and main memory, or between the different levels of a multi-level cache. The majority of desktop,
laptop, and server processors include one or more TLBs in the memory management hardware, and a TLB is nearly always present
in any processor that utilises paged or segmented virtual memory.


Q3.9
What is overlay programming? State one specific advantage of virtual memory
compared to overlay programming.
In a general computing sense, overlaying means "the process of transferring a block of program code or other data into internal
memory, replacing what is already stored".
Overlay is a programming method that allows programs to be larger than the computer's main memory.
An embedded system would normally use overlays because of the limitation of physical memory, which is internal memory for
a system-on-chip, and the lack of virtual memory facilities.
Advantage of virtual memory:
The main visible advantage of this scheme is that programs can be larger than physical memory. Virtual memory serves two
purposes. First, it allows us to extend the use of physical memory by using disk. Second, it allows us to have memory
protection, because each virtual address is translated to a physical address.

Question 11:

Performance Monitoring

Question 13.1
List the names of any two system performance monitoring tools generally available on unix platforms, and
briefly explain the primary capabilities of each.
[3 marks]
1. top - Linux process monitoring
The Linux top command is a performance monitoring program which is used frequently by many system administrators to monitor
Linux performance, and it is available under many Linux/Unix-like operating systems. The top command displays all the
running and active real-time processes in an ordered list and updates it regularly. It displays CPU usage, memory usage, swap
memory, cache size, buffer size, process PID, user, commands and much more. It also shows the memory and CPU
utilisation of running processes, which makes it very useful for a system administrator who must monitor the system and take
corrective action when required.
2. lsof - list open files
The lsof command is used on many Linux/Unix-like systems to display a list of all open files and the processes that opened
them. The open files include disk files, network sockets, pipes, devices and processes. One of the main reasons for using this
command is when a disk cannot be unmounted because it reports that files are being used or opened; with this command
you can easily identify which files are in use.
3. tcpdump - network packet analyzer
tcpdump is one of the most widely used command-line network packet analyzers (packet sniffers), used to capture
or filter TCP/IP packets received or transferred on a specific interface over a network. It also provides an option to save
captured packets to a file for later analysis. tcpdump is available in almost all major Linux distributions.

Question 13.2
Imagine your manager has asked you to collect system performance statistics on a large server to determine how
best to improve system performance. Outline and justify your strategy for collecting the necessary evidence.
[5 marks]
Performance tuning is the improvement of system performance, typically of a computer system. The motivation for such
activity is called a performance problem, which can be real or anticipated. Most systems will respond to increased load with
some degree of decreasing performance. A system's ability to accept higher load is called scalability, and modifying a system to
handle a higher load is synonymous with performance tuning.

To improve system performance I will follow these steps :


1. Assess the problem and establish numeric values that categorize acceptable behavior.
2. Measure the performance of the system before modification.
3. Identify the part of the system that is critical for improving the performance. This is called
the bottleneck.
4. Modify that part of the system to remove the bottleneck.
5. Measure the performance of the system after modification.
6. If the modification makes the performance better, adopt it. If the modification makes the performance
worse, put it back the way it was.
This is an instance of the measure-evaluate-improve-learn cycle from quality assurance.

A performance problem may be identified by slow or unresponsive systems. This usually occurs because of high
system loading, causing some part of the system to reach a limit in its ability to respond. This limit within the
system is referred to as a bottleneck.
A handful of techniques are used to improve performance. Among them are code optimization, load balancing,
caching strategy, distributed computing and self-tuning.

Q7.1 [6 marks]
Explain the differences between programmed I/O, interrupt-driven I/O, and
DMA.
Programmed input/output (PIO):
is a method of transferring data between the CPU and a peripheral, such as a network adapter or an ATA storage
device. In general, programmed I/O happens when software running on the CPU uses instructions that access
I/O address space to perform data transfers to or from an I/O device.

It is used only in some low-end microcomputers.
It has single input and single output instructions.
Each instruction selects one I/O device (by number) and transfers a single character (byte).
There are four registers: input status and character, output status and character.
Interrupt-driven:
An alternative scheme for dealing with I/O is the interrupt-driven method. Here the CPU works on its given tasks
continuously. When an input is available, such as when someone types a key on the keyboard, the CPU
is interrupted from its work to take care of the input data. The processor then executes the data transfer as before
and resumes its former processing. Unlike programmed I/O, the processor does not repeatedly interrogate the status
of the I/O module while waiting, so no CPU time is wasted on busy-waiting.
However, interrupt-driven input/output still consumes a lot of CPU time, because every data item transferred has
to pass through the processor.
Direct Memory Access (DMA):
Slow devices like keyboards will generate an interrupt to the main CPU after each byte is transferred. If a fast
device such as a disk generated an interrupt for each byte, the operating system would spend most of its time
handling these interrupts. So a typical computer uses direct memory access (DMA) hardware to reduce this
overhead.
Direct Memory Access (DMA) means CPU grants I/O module authority to read from or write to memory without
involvement. DMA module itself controls exchange of data between main memory and the I/O device. CPU is only
involved at the beginning and end of the transfer and interrupted only after entire block has been transferred.

Q7.2 [3 marks]
What is non-blocking I/O? Give an example of where it might be used.
Non-blocking I/O: the I/O operations are performed without suspending the calling process. The system does not put
you into a wait state; instead it can perform other operations that are independent of the requested I/O.
The response from the I/O device is delivered once the operation completes, but the system does not stall in the
meantime. Once you get the response, it can trigger some function to be completed. Any new requests sent to the
same I/O device are queued, and their responses are returned one after another. This is often combined with
parallel programming, e.g. creating a separate thread for each independent I/O operation.
Example:
When you send text messages, you initiate the exchange by sending something you wrote, go off and do something else instead
of waiting, and react at your own discretion when a signal alerts you that you have a response. This is non-blocking I/O.
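A minimal POSIX sketch: stdin is put into non-blocking mode with fcntl(), so read() returns
immediately (with EAGAIN) instead of waiting for input:

/* Non-blocking read from stdin */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <errno.h>

int main(void) {
    int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);

    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n >= 0)
        printf("read %zd bytes\n", n);
    else if (errno == EAGAIN || errno == EWOULDBLOCK)
        printf("no data yet - free to do other work\n");  /* would have blocked */
    return 0;
}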
