
DRIVE: SUMMER 2014
PROGRAM: MBADS (SEM 3/SEM 5) / MBAFLEX/MBA (SEM 3) / PGDISMN (SEM 1)
SEMESTER: II
SUBJECT CODE & NAME: MI0035 COMPUTER NETWORKS
BK ID: B1973
CREDIT & MARKS: 4 Credits, 60 marks

ASSIGNMENT

Name: Ajeet Kumar
Roll No: 1405010120
Center Code: 3293

Explaining the three categories of computer networks, viz. Peer-to-Peer Network, Client-Server Network and Cloud-Based Network
A computer network or data network is a telecommunications network which allows
computers to exchange data. In computer networks, networked computing devices
exchange data with each other along network links (data connections). The connections
between nodes are established using either cable media or wireless media. The best-known computer network is the Internet.

There are several different types of computer networks. Computer networks can be characterized by
their size as well as their purpose.
The size of a network can be expressed by the geographic area it occupies and the number of
computers that are part of it. Networks can cover anything from a handful of devices
within a single room to millions of devices spread across the entire globe.
Some of the different networks based on size are:

Personal area network, or PAN

Local area network, or LAN

Metropolitan area network, or MAN

Wide area network, or WAN

Peer-to-Peer Network
Peer-to-peer (P2P) is a decentralized communications model in which each party has
the same capabilities and either party can initiate a communication session. Unlike the
client/server model, in which the client makes a service request and the server fulfills
the request, the P2P network model allows each node to function as both a client and
server.
P2P systems can be used to provide anonymized routing of network traffic, massively
parallel computing environments, distributed storage and other functions. Most P2P
programs are focused on media sharing and P2P is therefore often associated with
software piracy and copyright violation.

Client-Server Network
The client-server model of computing is a distributed application structure that
partitions tasks or workloads between the providers of a resource or service, called
servers, and service requesters, called clients. Often clients and servers communicate
over a computer network on separate hardware, but both client and server may reside
in the same system. A server host runs one or more server programs which share their
resources with clients. A client does not share any of its resources, but requests a
server's content or service function. Clients therefore initiate communication sessions
with servers, which await incoming requests.
A client-server network is designed for end-users called clients to access resources
(such as files, songs, video collections or some other service) from a central computer
called a server. A server's sole purpose is to do what its name implies - serve its clients!
You may have been using this configuration without even knowing it. Have you ever
played Xbox Live or used the PlayStation Network? Your Xbox One is the client, and
when it logs into the network, it contacts the Xbox Live servers to retrieve gaming
resources like updates, video and game demos.
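
The request/response pattern described above can be sketched in a few lines of Python using the standard socket library. This is a minimal illustration, not any particular service's implementation; the host, port and message values are made up for the example.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 9000   # hypothetical address and port for the sketch

def server():
    # The server shares its resources: it awaits requests and serves them.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _addr = srv.accept()        # block until a client connects
        with conn:
            request = conn.recv(1024)     # read the client's request
            conn.sendall(b"resource for: " + request)

def client():
    # The client initiates the session and requests a resource.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"game update")
        print(cli.recv(1024))             # b'resource for: game update'

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                           # give the server a moment to start
client()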
Cloud-Based Network

Cloud networking is a new networking paradigm for building and managing secure private networks
over the public Internet by utilizing global cloud computing infrastructure. In cloud networking,
traditional network functions and services including connectivity, security, management and control,
are pushed to the cloud and delivered as a service. There are two categories within cloud networking:
Cloud-Enabled Networking (CEN) and Cloud-Based Networking (CBN).
CEN moves management and certain aspects of control (such as policy definition) into the cloud, but
keeps connectivity and packet-mode functions such as routing, switching and security services
local and often in hardware.
Defining Network Protocols
A network protocol defines rules and conventions for communication between network
devices. Protocols for computer networking all generally use packet switching
techniques to send and receive messages in the form of packets.
Network protocols include mechanisms for devices to identify and make connections
with each other, as well as formatting rules that specify how data is packaged into
messages sent and received.
Some protocols also support message acknowledgement and data compression
designed for reliable and/or high-performance network communication. Hundreds of
different computer network protocols have been developed, each designed for specific
purposes and environments.
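
As a toy illustration of such formatting rules, the Python sketch below packs a message into a packet with a small fixed header (version, payload length, sequence number) and parses it back. The field layout is invented for this example and does not correspond to any real protocol.

import struct

HEADER = struct.Struct("!BHI")   # version (1 byte), length (2), sequence no (4)

def make_packet(version, seq, payload):
    # Prepend the header so the receiver knows how to unpack the message.
    return HEADER.pack(version, len(payload), seq) + payload

def parse_packet(packet):
    version, length, seq = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return version, seq, payload

pkt = make_packet(version=1, seq=42, payload=b"hello")
print(parse_packet(pkt))   # (1, 42, b'hello')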
Explaining Personal Area Network
A personal area network (PAN) is a computer network used for data transmission
among devices such as computers, telephones and personal digital assistants. PANs
can be used for communication among the personal devices themselves (intrapersonal
communication), or for connecting to a higher level network and the Internet (an
uplink).
A personal area network (PAN) is the interconnection of information technology devices
within the range of an individual person, typically within a range of 10 meters. For
example, a person traveling with a laptop, a personal digital assistant (PDA), and a

portable printer could interconnect them without having to plug anything in, using
some form of wireless technology. Typically, this kind of personal area network could
also be interconnected without wires to the Internet or other networks.
Defining Local Area Network
A local area network (LAN) is a computer network that interconnects computers within
a limited area such as a residence, school, laboratory, or office building. A local area
network is contrasted in principle to a wide area network (WAN), which covers a larger
geographic distance and may involve leased telecommunication circuits, while the
media for LANs are locally managed.
A local area network (LAN) is a group of computers and associated devices that share a
common communications line or wireless link to a server. Typically, a LAN encompasses
computers and peripherals connected to a server within a small geographic area such
as an office building or home. Computers and other mobile devices can share resources
such as a printer or network storage.
Defining Metropolitan Area Network
A metropolitan area network (MAN) is a network that interconnects users with computer
resources in a geographic area or region larger than that covered by even a large local
area network (LAN) but smaller than the area covered by a wide area network (WAN).
The term is applied to the interconnection of networks in a city into a single larger
network (which may then also offer efficient connection to a wide area network). It is
also used to mean the interconnection of several local area networks by bridging them
with backbone lines. The latter usage is also sometimes referred to as a campus
network.
Defining Wide Area Network
A wide area network (WAN) is a geographically dispersed telecommunications network.
The term distinguishes a broader telecommunication structure from a local area
network (LAN). A wide area network may be privately owned or rented, but the term
usually connotes the inclusion of public (shared user) networks. An intermediate form of
network in terms of geography is a metropolitan area network (MAN).
A WAN is a computer network that spans a relatively large geographical area. Typically, a
WAN consists of two or more local area networks (LANs). Computers connected to a wide
area network are often connected through public networks, such as the telephone
system. They can also be connected through leased lines or satellites.
Explaining video compression
Video compression technologies are about reducing and removing redundant video
data so that a digital video file can be effectively sent over a network and stored on
computer disks. With efficient compression techniques, a significant reduction in file
size can be achieved with little or no adverse effect on the visual quality. The video
quality, however, can be affected if the file size is further lowered by raising the
compression level for a given compression technique.
Different compression technologies, both proprietary and industry standards, are
available. Most network video vendors today use standard compression techniques.
Standards are important in ensuring compatibility and interoperability. They are
particularly relevant to video compression since video may be used for different
purposes and, in some video surveillance applications, needs to be viewable many
years from the recording date. By deploying standards, end users are able to pick and

choose from different vendors, rather than be tied to one supplier when designing a
video surveillance system.
Axis uses three different video compression standards. They are Motion JPEG, MPEG-4
Part 2 (or simply referred to as MPEG-4) and H.264. H.264 is the latest and most
efficient video compression standard.
Video codec
The process of compression involves applying an algorithm to the source video to
create a compressed file that is ready for transmission or storage. To play the
compressed file, an inverse algorithm is applied to produce a video that shows virtually
the same content as the original source video. The time it takes to compress, send,
decompress and display a file is called latency. The more advanced the compression
algorithm, the higher the latency.
A pair of algorithms that works together is called a video codec (encoder/decoder).
Video codecs of different standards are normally not compatible with each other; that
is, video content that is compressed using one standard cannot be decompressed with
a different standard. For instance, an MPEG-4 decoder will not work with an H.264
encoder. This is simply because one algorithm cannot correctly decode the output from
another algorithm. However, it is possible to implement many different algorithms in the
same software or hardware, enabling multiple formats to coexist.
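
The encoder/decoder pairing can be illustrated with a deliberately simple lossless scheme, run-length encoding. Real video codecs such as MPEG-4 or H.264 are vastly more sophisticated, but the principle is the same: only the matching inverse algorithm can reconstruct the data.

def rle_encode(data):
    # Encoder: collapse runs of identical bytes into (value, count) pairs.
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((data[i], j - i))
        i = j
    return runs

def rle_decode(runs):
    # Decoder: only this inverse algorithm can rebuild rle_encode's output.
    return b"".join(bytes([value]) * count for value, count in runs)

frame = b"\x00\x00\x00\xff\xff\x00"
assert rle_decode(rle_encode(frame)) == frame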
Explaining audio compression
Audio compression is the encoding of digital audio data to take up less storage space
and transmission bandwidth. It typically uses lossy methods, which eliminate bits that
are not restored at the other end. ADPCM and MP3 are examples of audio compression
methods.
Compression is one of the most common processes in all audio work, yet the
compressor is one of the least understood and most misused audio processors.
Compressed audio is an everyday fact of modern life, with the sound of records,
telephones, TV, radios and public address systems all undergoing some type of
mandatory dynamic range modification. The use of compressors can make pop
recordings or live sound mixes sound musically better by controlling maximum levels
and maintaining higher average loudness. The intent of this section is to explain
compressors and the process of compression so that you can use this powerful process
in a more creative and deliberate way.
Compressors and limiters are specialized amplifiers used to reduce dynamic range, the
span between the softest and loudest sounds. All sound sources have different dynamic
ranges or peak-to-average proportions. An alto flute produces a tone with only about a
3dB difference between the peak level and the average level. The human voice
(depending on the particular person) has a 10dB dynamic range, while a plucked or
percussive instrument may have a 15dB or more difference.
Our own ears, by way of complex physiological processes, do a fine job of compressing
by responding to roughly the average loudness of a sound. Good compressor design
includes a detector circuit that emulates the human ear by responding to average
signal levels. Even better compressor designs also have a second detector that
responds to peak signal levels and can be adjusted to clamp peaks that occur at a
specific level above the average signal level.
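
Setting aside attack, release and the detector circuits described above, the static behaviour of a compressor can be sketched as a simple gain curve. The threshold and ratio values below are illustrative assumptions, not figures from any particular unit.

def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    # Below the threshold the signal passes unchanged.
    if level_db <= threshold_db:
        return level_db
    # Above it, every `ratio` dB of input yields only 1 dB more output.
    return threshold_db + (level_db - threshold_db) / ratio

for level in (-30.0, -20.0, -10.0, 0.0):
    print(level, "->", round(compress_db(level), 1))
# -30.0 -> -30.0, -20.0 -> -20.0, -10.0 -> -17.5, 0.0 -> -15.0
# A 30 dB input range is squeezed into 15 dB: reduced dynamic range.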
When sound is recorded, broadcast or played through a P.A. system, the dynamic range
must be restricted at some point due to the peak signal limitations of the electronic
system, artistic goals, surrounding environmental requirements or all the above.

Typically, dynamic range must be compressed for artistic reasons: the singer's voice
needs a higher average loudness, and compression allows vocalizations such as
melismatic phrasing and glottal stops to be heard better when the vocal track is mixed
within a dense pop recording.
With recording, the dynamic range may be too large to be processed by succeeding
recording equipment and recording media. Even with the arrival of 90dB-plus dynamic
range of digital recording, huge and unexpected swings of level from synthesizers and
heavily processed musical instruments can overwhelm analog-to-digital converters,
distorting the recording.
With broadcast audio, dynamics are reduced for higher average loudness to achieve a
certain aural impact on the listener and to help compete with the noisy environment of
freeway driving. The station-to-station competition for who can be the loudest on the
radio dial has led to some innovative twists in compressor design. "Brick wall" limiting
is where the compressor absolutely guarantees that a predetermined level will not be
exceeded, thus preventing overmodulation distortion of the station's transmitter. (The
Federal Communications Commission monitors broadcast station transmissions and
issues citations and fines for overmodulation that can cause adjacent channel
interference and other problems.)
Another type of specialization that sprang from broadcast is called multiband
compression, where the audio spectrum is split into frequency bands that are then
processed separately. By compressing the low frequencies more or differently than the
midrange and high frequencies, the station can take on a "sound" that stands out from
other stations on the dial. Radio stations "contour" their sound with multiband
processing to fit their playlist/format.
Features of Bluetooth Technology
Bluetooth is a wireless technology that enables electrical devices to communicate
wirelessly in the 2.4 GHz ISM (license-free) frequency band. It allows devices such
as mobile phones, headsets, PDAs and portable computers to communicate and send
data to each other without the need for wires or cables to link the devices together. It
has been specifically designed as a low cost, low power, radio technology, which is
particularly suited to the short range Personal Area Network (PAN) application. (It is the
design focus on low cost, low size and low power which distinguishes it from the IEEE
802.11 wireless LAN technology).
The Main Features of Bluetooth:
- Operates in the 2.4GHz frequency band without a license for wireless communication.
- Real-time data transfer usually possible between 10-100m.
- Close proximity not required as with infrared data (IrDA) communication devices as
Bluetooth doesn't suffer from interference from obstacles such as walls.
- Supports both point-to-point wireless connections without cables between mobile
phones and personal computers, as well as point-to-multipoint connections to enable
ad hoc local wireless networks.
Typical Bluetooth applications include:
- Sending/receiving files from remote Bluetooth devices
- Browsing files on a remote device
- Stand-alone Bluetooth printing
- Sharing a local printer over Bluetooth
- Remote controls
- Mobile phone SMS/call-related items
- PIM (appointments, contacts) synchronisation (Phone/Palm)
- Keyboards, mice
- Dial-up, 3G, networking over Bluetooth
- Hands-free headsets (phone use) and headphones (music)
- GPS
Working of Bluetooth Technology
Bluetooth silently connects so many of our gadgets together, it's easy to forget it's a
pretty impressive piece of technology on its own. It helps us listen to music, talk on our
phones, and play video games, all without being frustrated by miles of cables strewn
around the place. We're going to explain how the technology behind Bluetooth works
(and no, it's not magic), so you can impress everyone at the party by explaining how
your phone can send dance music to the speaker without being connected physically.
Just don't pause the jams!
How does it work?
To understand how a Bluetooth connection works, we need an example of the wireless
technology being used, so let's take a phone connected to a wireless speaker. First, each
device is equipped with Bluetooth connectivity, a feature that requires both software
and hardware components. On the hardware side, an antenna-equipped chip in both
devices sends and receives signals at a specific frequency. The software interprets
incoming Bluetooth signals and sends them out in ways other devices can read and
understand. In the case of the wireless speaker, the phone will know how to send audio
files and information in a format that the speaker understands, while the speaker can
interpret these signals, as well as other indicators such as volume and track controls,
from the phone.
When two devices are equipped with Bluetooth, usually one of them will need to be set
to be discoverable, meaning it'll show up in a list of Bluetooth devices in the area on your
phone or other controlling device. Using our example, the wireless speaker would be
discoverable, and it will end up being controlled by a Bluetooth-equipped phone or
remote. The speaker, or any Bluetooth accessory, sends out a signal with a little bit of
information to alert other nearby devices of its presence and capabilities. You tell your
phone to connect, and the two devices form a personal area network, or piconet.
From this point on, the two devices know to connect with each other based on the
unique address within their respective signals. No matter what other signals come in on
the wavelengths in which those devices operate, they will always detect, read, and
send the correct signals. Bluetooth signals have a limited range, which prevents
massive amounts of conflicting data covering huge areas and interrupting
communication between other devices. In other words, your speaker will always know
that you're the one who is trying to listen to Nickelback, even if it doesn't know how to
criticize you for it.
Let's Get Technical
According to the Bluetooth website, the technology operates in the unlicensed
industrial, scientific and medical (ISM) band at 2.4 to 2.485 GHz, using a spread
spectrum, frequency hopping, full-duplex signal at a nominal rate of 1600 hops/sec. If
you fell asleep partway through that, let's break it down to find out exactly how your
headset knows to pick up calls from your phone.
Bluetooth chips produce wavelengths that are bound to frequencies operating within a
range specifically set aside for this sort of short-range communication. Other devices
you might find that use this frequency include cordless telephones and baby monitors.

However, there is an issue with always using the same frequency. Other devices
operating at the same, or close, frequencies can cause interruptions in the signal.
To prevent this from being an issue, the signal is spread out over a wider range of
frequencies. In order to achieve this, the signal hops around the frequency, and in the
case of Bluetooth that happens about 1600 times per second. The frequent change in
wavelength means that even a consistent signal won't interrupt, and won't be
interrupted, for longer than 1/1600th of a second.
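
A toy model of this hopping scheme is sketched below: two paired devices derive the same pseudo-random channel sequence from a shared seed, so they change channels in step. Bluetooth's real hop-selection algorithm is considerably more involved; the seed and values here are purely illustrative.

import random

CHANNELS = 79    # Bluetooth divides the 2.4 GHz band into 79 x 1 MHz channels
HOP_RATE = 1600  # nominal hops per second, so each slot lasts 1/1600 s

def hop_sequence(shared_seed, hops):
    rng = random.Random(shared_seed)   # both devices seed identically
    return [rng.randrange(CHANNELS) for _ in range(hops)]

phone   = hop_sequence(shared_seed=0xB10E, hops=5)
speaker = hop_sequence(shared_seed=0xB10E, hops=5)
assert phone == speaker   # same seed, same sequence: the pair stays in sync
print(phone)              # e.g. [17, 72, 5, 48, 33]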
Bluetooth headsets can sync up in two different ways, using a full- or half-duplex
connection. A full-duplex signal means that all connected devices are able to send and
receive signals (in this case, a two-way conversation) simultaneously, as opposed to a
half-duplex signal, like a walkie-talkie, where each side can still talk and listen, just not
both at the same time.
Explaining the network configuration of dense wavelength division multiplexing (DWDM)
Dense wavelength division multiplexing (DWDM) is a technology that puts data from
different sources together on an optical fiber, with each signal carried at the same time
on its own separate light wavelength. Using DWDM, up to 80 (and theoretically more)
separate wavelengths or channels of data can be multiplexed into a lightstream
transmitted on a single optical fiber. Each channel carries a time division multiplexed
(TDM) signal. In a system with each channel carrying 2.5 Gbps (billion bits per second),
up to 200 billion bits can be delivered a second by the optical fiber. DWDM is also
sometimes called wave division multiplexing (WDM).
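
The capacity figure quoted above follows from simple multiplication, as this short calculation shows:

channels = 80                  # wavelengths multiplexed onto one fiber
rate_per_channel_gbps = 2.5    # each channel carries a 2.5 Gbps TDM signal
print(channels * rate_per_channel_gbps)   # 200.0 Gbps, i.e. 200 billion bits/s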
Since each channel is demultiplexed back into the original source signal at the end of
the transmission, different data formats can be transmitted together at different data
rates. Specifically, Internet (IP) data, Synchronous Optical Network data
(SONET), and asynchronous transfer mode (ATM) data can all be travelling at the same
time within the optical fiber.
DWDM promises to solve the "fiber exhaust" problem and is expected to be the central
technology in the all-optical networks of the future.

DWDM increases the bandwidth of an optical fiber by multiplexing several wavelengths
(or colors) onto it. Even though it costs more than coarse wavelength division
multiplexing (CWDM), it is currently the most popular WDM technology because it offers
the most capacity. By providing channel spacings of 50 GHz (0.4 nm), 100 GHz (0.8 nm)
or 200 GHz (1.6 nm), several hundred wavelengths can be placed on a single fiber. Most
typical DWDM systems use 40 or 80 channels, although this number can be as high as 160.
The ITU-T G.694.1 frequency grid specifies the wavelengths used in DWDM. They are
found in the C-band (1525-1565 nm) and L-band (1565-1620 nm), a spectral range that
proves very attractive for DWDM because it allows amplification with erbium-doped
fiber amplifiers (EDFAs).
Long-haul, metro and now cellular backhaul networks
DWDM is the most suitable technology for long-haul transmission because of its ability
to allow EDFA amplification. This is why it was adopted in these networks a few decades
ago. Given the growing need for bandwidth driven by data-hungry applications
(smartphones, video streaming, etc.), DWDM has now found its way into metro
networks, and is even being used in some cellular backhaul deployments.
DWDM network testing

DWDM networks must be tested for loss, connector cleanliness, dispersion and spectral
quality. EXFO offers all the right tools to ensure smooth DWDM operation, including
inspection probes, OTDRs, power meters, dispersion testers and optical spectrum
analyzers.
DWDM is short for Dense Wavelength Division Multiplexing, an optical technology used
to increase bandwidth over existing fiber-optic backbones.
DWDM works by combining and transmitting multiple signals simultaneously at
different wavelengths on the same fiber. In effect, one fiber is transformed into multiple
virtual fibers. So, if you were to multiplex eight OC-48 signals into one fiber, you would
increase the carrying capacity of that fiber from 2.5 Gb/s to 20 Gb/s. Currently, because
of DWDM, single fibers have been able to transmit data at speeds of up to 400 Gb/s.
A key advantage to DWDM is that it's protocol- and bit-rate-independent. DWDM-based
networks can transmit data in IP, ATM, SONET /SDH, and Ethernet, and handle bit rates
between 100 Mb/s and 2.5 Gb/s. Therefore, DWDM-based networks can carry different
types of traffic at different speeds over an optical channel.
Listing the pros and cons of DWDM

Shedding light on user-to-network and network-to-network connectivity and the technologies used
A home network or home area network (HAN) is a type of local area network with the
purpose to facilitate communication among digital devices present inside or within the
close vicinity of a home. Devices capable of participating in this network, for example,
smart devices such as network printers and handheld mobile computers, often gain
enhanced emergent capabilities through their ability to interact. These additional
capabilities can be used to increase the quality of life inside the home in a variety of
ways, such as automation of repetitious tasks, increased personal productivity,
enhanced home security, and easier access to entertainment.
Describing Network-to-Network Connectivity
Network connectivity describes the extensive process of connecting various parts of a
network to one another, for example, through the use of routers, switches and
gateways, and how that process works.
Network connectivity is also a kind of metric for describing how well parts of the network
connect to one another. Related terms include network topology, which refers to the
structure and makeup of the network as a whole.
There are many different network topologies including hub, linear, tree and star
designs, each of which is set up in its own way to facilitate connectivity between
computers or devices. Each has its own pros and cons in terms of network connectivity.
IT professionals, particularly network administrators and network analysts, talk about
connectivity as one piece of the network puzzle as they look at an ever greater variety
of networks and the ways networking pieces go together.
Ad hoc networks and vehicular networks are just two examples of new kinds of
networks that work on different connectivity models. Along with network connectivity,
network administrators and maintenance workers also have to focus on security as a
major concern, since the reliability of networking systems is closely related to
protecting the data that is kept within them.

Elaborating the technologies used
Network links
The transmission media (often referred to in the literature as the physical media) used
to link devices to form a computer network include electrical cable (Ethernet,
HomePNA, power line communication, G.hn), optical fiber (fiber-optic communication),
and radio waves (wireless networking). In the OSI model, these are defined at layers 1
and 2: the physical layer and the data link layer.
A widely adopted family of transmission media used in local area network (LAN)
technology is collectively known as Ethernet. The media and protocol standards that
enable communication between networked devices over Ethernet are defined by IEEE
802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN
standards (e.g. those defined by IEEE 802.11) use radio waves; others use infrared
signals as a transmission medium. Power line communication uses a building's power
cabling to transmit data.
Wired technologies
Fiber optic cables are used to transmit light from one computer/network node to
another. The following wired technologies are ordered, roughly, from slowest to fastest
transmission speed.
Coaxial cable is widely used for cable television systems, office buildings, and other
work-sites for local area networks. The cables consist of copper or aluminum wire
surrounded by an insulating layer (typically a flexible material with a high dielectric
constant), which itself is surrounded by a conductive layer. The insulation helps
minimize interference and distortion. Transmission speed ranges from 200 million bits
per second to more than 500 million bits per second.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and
power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
Twisted pair wire is the most widely used medium for all telecommunication. Twisted-pair
cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires
consist of two insulated copper wires twisted into pairs. Computer network cabling
(wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can
be utilized for both voice and data transmission. The use of two wires twisted together
helps to reduce crosstalk and electromagnetic induction. The transmission speed
ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling
comes in two forms: unshielded twisted pair (UTP) and shielded twisted-pair (STP). Each
form comes in several category ratings, designed for use in various scenarios.
An optical fiber is a glass fiber. It carries pulses of light that represent data. Some
advantages of optical fibers over metal wires are very low transmission loss and
immunity from electrical interference. Optical fibers can simultaneously carry multiple
wavelengths of light, which greatly increases the rate that data can be sent, and helps
enable data rates of up to trillions of bits per second. Optical fibers can be used for long
runs of cable carrying very high data rates, and are used for undersea cables to
interconnect continents.

Price is a main factor distinguishing wired- and wireless-technology options in a
business. Wireless options command a price premium that can make purchasing wired
computers, printers and other devices a financial benefit. Before making the decision to
purchase hard-wired technology products, a review of the restrictions and limitations of
the selections is necessary. Business and employee needs may override any cost
considerations.[6]
Wireless technologies
Terrestrial microwave: Terrestrial microwave communication uses Earth-based
transmitters and receivers resembling satellite dishes. Terrestrial microwaves are in the
low-gigahertz range, which limits all communications to line-of-sight. Relay stations are
spaced approximately 48 km (30 mi) apart.
Communications satellites: Satellites communicate via microwave radio waves,
which are not deflected by the Earth's atmosphere. The satellites are stationed in
space, typically in geosynchronous orbit 35,400 km (22,000 mi) above the equator.
These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV
signals.
Cellular and PCS systems: These use several radio communications technologies. The
systems divide the region covered into multiple geographic areas. Each area has a
low-power transmitter or radio relay antenna device to relay calls from one area to the
next area.
Radio and spread spectrum technologies: Wireless local area networks use a
high-frequency radio technology similar to digital cellular and a low-frequency radio
technology. Wireless LANs use spread spectrum technology to enable communication
between multiple devices in a limited area. IEEE 802.11 defines a common flavor of
open-standards wireless radio-wave technology known as Wi-Fi.
Free-space optical communication: This uses visible or invisible light for communications.
In most cases, line-of-sight propagation is used, which limits the physical positioning of
communicating devices.
Exotic technologies
There have been various attempts at transporting data over exotic media:
IP over Avian Carriers was a humorous April Fools' Day Request for Comments, issued as
RFC 1149. It was implemented in real life in 2001.[7]
Extending the Internet to interplanetary dimensions via radio waves.[8]
Both cases have a large round-trip delay time, which gives slow two-way
communication, but doesn't prevent sending large amounts of information.
Network nodes
Apart from any physical transmission medium there may be, networks comprise
additional basic system building blocks, such as network interface controllers (NICs),
repeaters, hubs, bridges, switches, routers, modems, and firewalls.

Network interfaces
A network interface controller (NIC) is computer hardware that provides a computer
with the ability to access the transmission media, and has the ability to process
low-level network information. For example, the NIC may have a connector for accepting a
cable, or an aerial for wireless transmission and reception, and the associated circuitry.
The NIC responds to traffic addressed to a network address for either the NIC or the
computer as a whole.
In Ethernet networks, each network interface controller has a unique Media Access
Control (MAC) address, usually stored in the controller's permanent memory. To avoid
address conflicts between network devices, the Institute of Electrical and Electronics
Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an
Ethernet MAC address is six octets. The three most significant octets are reserved to
identify NIC manufacturers. These manufacturers, using only their assigned prefixes,
uniquely assign the three least-significant octets of every Ethernet interface they
produce.
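
The split between the IEEE-assigned manufacturer prefix and the manufacturer-assigned remainder can be shown with a short sketch; the example address is made up.

def split_mac(mac):
    octets = mac.split(":")
    assert len(octets) == 6, "an Ethernet MAC address is six octets"
    # The three most significant octets identify the manufacturer (OUI);
    # the three least significant are assigned uniquely by that manufacturer.
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, device_part = split_mac("00:1a:2b:3c:4d:5e")
print(oui)          # 00:1a:2b
print(device_part)  # 3c:4d:5e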
Repeaters and hubs
A repeater is an electronic device that receives a network signal, cleans it of
unnecessary noise, and regenerates it. The signal is retransmitted at a higher power
level, or to the other side of an obstruction, so that the signal can cover longer
distances without degradation. In most twisted pair Ethernet configurations, repeaters
are required for cable that runs longer than 100 meters. With fiber optics, repeaters can
be tens or even hundreds of kilometers apart.
A repeater with multiple ports is known as a hub. Repeaters work on the physical layer
of the OSI model. Repeaters require a small amount of time to regenerate the signal.
This can cause a propagation delay that affects network performance. As a result, many
network architectures limit the number of repeaters that can be used in a row, e.g., the
Ethernet 5-4-3 rule.
Hubs have been mostly obsoleted by modern switches, but repeaters are still used for
long-distance links, notably undersea cabling.
Bridges
A network bridge connects and filters traffic between two network segments at the data
link layer (layer 2) of the OSI model to form a single network. This breaks the network's
collision domain but maintains a unified broadcast domain. Network segmentation
breaks down a large, congested network into an aggregation of smaller, more efficient
networks.
Bridges come in three basic types:
Local bridges: Directly connect LANs
Remote bridges: Can be used to create a wide area network (WAN) link between
LANs. Remote bridges, where the connecting link is slower than the end networks,
largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to LANs.
Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams between
ports based on the MAC addresses in the packets.[9] A switch is distinct from a hub in
that it only forwards the frames to the physical ports involved in the communication
rather than all ports connected. It can be thought of as a multi-port bridge.[10] It learns
to associate physical ports to MAC addresses by examining the source addresses of
received frames. If an unknown destination is targeted, the switch broadcasts to all
ports but the source. Switches normally have numerous ports, facilitating a star
topology for devices, and cascading additional switches.
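
The learning behaviour described above, associating source MAC addresses with ports and flooding frames for unknown destinations, can be sketched as follows; the port count and addresses are illustrative.

class LearningSwitch:
    def __init__(self, num_ports):
        self.mac_table = {}        # learned mapping: MAC address -> port
        self.ports = range(num_ports)

    def receive(self, in_port, src, dst):
        self.mac_table[src] = in_port          # learn from the source address
        if dst in self.mac_table:
            return [self.mac_table[dst]]       # forward to the one known port
        # Unknown destination: flood to every port except the one it came in on.
        return [p for p in self.ports if p != in_port]

sw = LearningSwitch(num_ports=4)
print(sw.receive(0, src="aa:aa", dst="bb:bb"))   # floods: [1, 2, 3]
print(sw.receive(1, src="bb:bb", dst="aa:aa"))   # learned earlier: [0]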
Multi-layer switches are capable of routing based on layer 3 addressing or additional
logical levels. The term switch is often used loosely to include devices such as routers
and bridges, as well as devices that may distribute traffic based on load or based on
application content (e.g., a Web URL identifier).
Routers
A router is an internetworking device that forwards packets between networks by
processing the routing information included in the packet or datagram (Internet
protocol information from layer 3). The routing information is often processed in
conjunction with the routing table (or forwarding table). A router uses its routing table
to determine where to forward packets. (A destination in a routing table can include a
"null" interface, also known as the "black hole" interface because data can go into it
but no further processing is done for that data.)
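
A routing-table lookup can be sketched with Python's standard ipaddress module using longest-prefix match, the rule real routers apply when several routes overlap. The routes below, and the None standing in for the "null" (black hole) interface, are invented for the example.

import ipaddress

ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"):     "eth0",
    ipaddress.ip_network("10.1.0.0/16"):    "eth1",
    ipaddress.ip_network("192.168.0.0/24"): None,    # black hole interface
}

def lookup(destination):
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    if not matches:
        return None                                      # no route: drop
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return ROUTES[best]

print(lookup("10.1.2.3"))   # eth1 (the /16 is more specific than the /8)
print(lookup("10.9.9.9"))   # eth0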
Modems
Modems (MOdulator-DEModulators) are used to connect network nodes via wiring not
originally designed for digital network traffic, or over wireless links. To do this, one or
more carrier signals are modulated by the digital signal to produce an analog signal that
can be tailored to give the required properties for transmission. Modems are commonly
used on telephone lines, using Digital Subscriber Line technology.
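
As a toy example of modulation, the sketch below implements binary frequency-shift keying, where each bit selects one of two carrier frequencies. The frequencies and sample rate are arbitrary assumptions; real modems use far more elaborate schemes.

import math

SAMPLE_RATE = 8000                # samples per second
SAMPLES_PER_BIT = 40              # duration of one bit in samples
FREQ = {0: 1200.0, 1: 2200.0}     # illustrative carrier frequency per bit value

def modulate(bits):
    # Each bit becomes a burst of sine wave at its chosen carrier frequency.
    wave, t = [], 0
    for bit in bits:
        for _ in range(SAMPLES_PER_BIT):
            wave.append(math.sin(2 * math.pi * FREQ[bit] * t / SAMPLE_RATE))
            t += 1
    return wave

signal = modulate([1, 0, 1, 1])
print(len(signal))   # 160 samples encoding four bits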
Firewalls
A firewall is a network device for controlling network security and access rules. Firewalls
are typically configured to reject access requests from unrecognized sources while
allowing actions from recognized ones. The vital role firewalls play in network security
grows in parallel with the constant increase in cyber attacks.
Benefits of cloud computing
1. Flexibility
The second a company needs more bandwidth than usual, a cloud-based service can
instantly meet the demand because of the vast capacity of the service's remote
servers. In fact, this flexibility is so crucial that 65% of respondents to an
InformationWeek survey said the ability to quickly meet business demands was an
important reason to move to cloud computing.
2. Disaster recovery
When companies start relying on cloud-based services, they no longer need complex
disaster recovery plans. Cloud computing providers take care of most issues, and they
do it faster. Aberdeen Group found that businesses which used the cloud were able to

resolve issues in an average of 2.1 hours, nearly four times faster than businesses that
didn't use the cloud (8 hours). The same study found that mid-sized businesses had the
best recovery times of all, taking almost half the time of larger companies to recover.
3. Automatic software updates
In 2010, UK companies spent 18 working days per month managing on-site security
alone. But cloud computing suppliers take care of the server maintenance, including
security updates, themselves, freeing up their customers' time and resources for other tasks.
4. Cap-Ex Free
Cloud computing services are typically pay-as-you-go, so there's no need for capital
expenditure at all. And because cloud computing is much faster to deploy, businesses
have minimal project start-up costs and predictable ongoing operating expenses.
5. Increased collaboration
Cloud computing increases collaboration by allowing all employees, wherever they are,
to sync up and work on documents and shared apps simultaneously, and to follow
colleagues and records to receive critical updates in real time. A survey by Frost &
Sullivan found that companies which invested in collaboration technology had a 400%
return on investment.
6. Work from anywhere
As long as employees have internet access, they can work from anywhere. This
flexibility positively affects knowledge workers' work-life balance and productivity. One
study found that 42% of working adults would give up some of their salary if they could
telecommute, and on average they would take a 6% pay cut.
7. Document control
According to one study, "73% of knowledge workers collaborate with people in different
time zones and regions at least monthly".
If a company doesn't use the cloud, workers have to send files back and forth over
email, meaning only one person can work on a file at a time and the same document
ends up with tonnes of names and formats.
Cloud computing keeps all the files in one central location, and everyone works off one
central copy. Employees can even chat to each other whilst making changes together.
This whole process makes collaboration stronger, which increases efficiency and
improves a company's bottom line.
8. Security
Some 800,000 laptops are lost each year in airports alone. This can have some serious
monetary implications, but when everything is stored in the cloud, data can still be
accessed no matter what happens to a machine.
9. Competitiveness
The cloud grants SMEs access to enterprise-class technology. It also allows smaller
businesses to act faster than big, established competitors. A study on disaster recovery
concluded that companies that didn't use the cloud had to rely on tape backup methods
and complicated procedures to recover their data, slow and laborious processes that
cloud users simply avoid, allowing David to once again out-manoeuvre Goliath.
10. Environmentally friendly
Businesses using cloud computing only use the server space they need, which
decreases their carbon footprint. Using the cloud results in at least 30% less energy
consumption and carbon emissions than using on-site servers. And again, SMEs get the
most benefit: for small companies, the cut in energy use and carbon emissions is likely
to be 90%.
Drawbacks of cloud computing
In spite of its many benefits, as mentioned above, cloud computing also has its
disadvantages. Businesses, especially smaller ones, need to be aware of these cons
before going in for this technology.
Technical Issues
Though it is true that information and data on the cloud can be accessed anytime and
from anywhere at all, there are times when this system can suffer serious malfunctions.
You should be aware of the fact that this technology is always prone to outages and
other technical issues. Even the best cloud service providers run into this kind of
trouble, in spite of keeping up high standards of maintenance. Besides, you will need a
very good Internet connection to be logged onto the server at all times. You will
invariably be stuck in case of network and connectivity problems.
Security in the Cloud
The other major issue in the cloud is security. Before adopting this technology, you
should know that you will be surrendering all your company's sensitive information to a
third-party cloud service provider. This could potentially put your company at great
risk. Hence, you need to make absolutely sure that you choose the most reliable service
provider, who will keep your information totally secure.
Prone to Attack
Storing information in the cloud could make your company vulnerable to external hack
attacks and threats. As you are well aware, nothing on the Internet is completely
secure, and hence there is always the lurking threat of an attack on your data.
High-Speed Internet Required - Cloud computing performs poorly over slow internet
connections. Slow connections such as dial-up make cloud computing a pain for the
user, and often make it impossible to use at all, because large documents and
web-based applications need a lot of bandwidth to download. If you are using a
low-speed internet connection, cloud computing may feel as if it will take a lifetime to
become operational. In simple words, cloud computing is not for slow connections.
Constant Internet Connection - A cloud computing solution without a proper internet
connection is like a lifeless body. Because you are using the internet to access both
your documents and applications, without an internet connection you can't access your
documents at all. A dropped internet connection means no work in the cloud, and in
areas where internet connections are slow or unreliable, this can affect your business.
Cloud computing simply stops working without an internet connection.

Limited Features - Today many web-based applications are not fully featured when
compared to their desktop versions. For example, there are any number of things that
can be done in Microsoft PowerPoint that cannot be done with Google Presentations'
web-based features. The basics of a presentation tool are the same in the cloud, but it
lacks many of PowerPoint's advanced features. If you are especially fond of power
features, cloud computing may not be for you.
Data Stored Is Not Secure - All the data in cloud computing is stored in the cloud, and it
is your duty to find out how secure that cloud is and whether only authorized persons
are allowed to access your confidential data. The concept of cloud computing is still
new, and even if hosting companies say that the data is secured, that cannot be taken
as 100% true. Theoretically, data in the cloud is unsafe because it is replicated across
multiple machines. And if your data goes missing, you don't have any local or physical
backup. Simply depending on the cloud can let you down, and there is always a risk of
failure. The only way to safeguard the data is to download all cloud documents onto
your own machines. However, this is a lengthy process, and every time your documents
are updated you will have to download a new copy.
