SUMMER 2014 ASSIGNMENT
Program:
Semester:
Subject Code & Name:
BK ID: B1973
4 Credits, 60 marks
Name: Ajeet Kumar
Roll No: 1405010120
Center Code: 3293
There are several different types of computer networks. Computer networks can be characterized by
their size as well as their purpose.
The size of a network can be expressed by the geographic area they occupy and the number of
computers that are part of the network. Networks can cover anything from a handful of devices
within a single room to millions of devices spread across the entire globe.
Some of the different networks based on size are:
Peer-to-Peer Network
Peer-to-peer (P2P) is a decentralized communications model in which each party has
the same capabilities and either party can initiate a communication session. Unlike the
client/server model, in which the client makes a service request and the server fulfills
the request, the P2P network model allows each node to function as both a client and
server.
P2P systems can be used to provide anonymized routing of network traffic, massive
parallel computing environments, distributed storage and other functions. Most P2P
programs are focused on media sharing and P2P is therefore often associated with
software piracy and copyright violation.
Client-Server Network
The client-server model of computing is a distributed application structure that
partitions tasks or workloads between the providers of a resource or service, called
servers, and service requesters, called clients. Often clients and servers communicate
over a computer network on separate hardware, but both client and server may reside
in the same system. A server host runs one or more server programs which share their
resources with clients. A client does not share any of its resources, but requests a
server's content or service function. Clients therefore initiate communication sessions
with servers which await incoming requests.
A client-server network is designed for end-users called clients to access resources
(such as files, songs, video collections or some other service) from a central computer
called a server. A server's sole purpose is to do what its name implies - serve its clients!
You may have been using this configuration and not even have known it. Have you ever
played Xbox Live or used the PlayStation Network? Your Xbox One is the client, and
when it logs into the network, it contacts the Xbox Live servers to retrieve gaming
resources like updates, video and game demos.
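The request/response pattern described above can be sketched with Python's standard socket library. This is an illustrative toy, not tied to any particular service; the port number and message contents are arbitrary choices for the sketch:

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # arbitrary local address for this sketch
ready = threading.Event()

def serve_once():
    # The server side: awaits a request, then serves a resource to the client.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                       # server is now awaiting requests
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024)     # the client made a service request
            conn.sendall(b"resource for " + request)

server = threading.Thread(target=serve_once)
server.start()
ready.wait()

# The client side: initiates the communication session, as described above.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"client-1")
    reply = cli.recv(1024)

server.join()
print(reply.decode())   # -> resource for client-1
```

Note the asymmetry the text describes: the server only waits and responds, while the client always initiates the session.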
Cloud-Based Network
Cloud networking is a new networking paradigm for building and managing secure private networks
over the public Internet by utilizing global cloud computing infrastructure. In cloud networking,
traditional network functions and services including connectivity, security, management and control,
are pushed to the cloud and delivered as a service. There are two categories within cloud networking:
Cloud-Enabled Networking (CEN) and Cloud-Based Networking (CBN).
CEN moves management and certain aspects of control (such as policy definition) into the cloud, but
keeps connectivity and packet-mode functions such as routing, switching and security services
local and often in hardware.
Defining Network Protocols
A network protocol defines rules and conventions for communication between network
devices. Protocols for computer networking all generally use packet switching
techniques to send and receive messages in the form of packets.
Network protocols include mechanisms for devices to identify and make connections
with each other, as well as formatting rules that specify how data is packaged into
messages sent and received.
Some protocols also support message acknowledgement and data compression
designed for reliable and/or high-performance network communication. Hundreds of
different computer network protocols have been developed, each designed for specific
purposes and environments.
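The "formatting rules that specify how data is packaged into messages" can be illustrated with Python's struct module. The header layout below is a made-up example for the sketch, not any real protocol's format:

```python
import struct

# Hypothetical 8-byte header: version (1 byte), flags (1 byte),
# sequence number (2 bytes), payload length (4 bytes), big-endian.
HEADER_FORMAT = "!BBHI"
HEADER_SIZE = struct.calcsize(HEADER_FORMAT)   # 8 bytes

def make_packet(version, flags, seq, payload: bytes) -> bytes:
    # Package the data into a message per the agreed formatting rules.
    header = struct.pack(HEADER_FORMAT, version, flags, seq, len(payload))
    return header + payload

def parse_packet(packet: bytes):
    # The receiver applies the same rules to unpack the message.
    version, flags, seq, length = struct.unpack(
        HEADER_FORMAT, packet[:HEADER_SIZE])
    return version, flags, seq, packet[HEADER_SIZE:HEADER_SIZE + length]

pkt = make_packet(1, 0, 42, b"hello")
print(parse_packet(pkt))   # -> (1, 0, 42, b'hello')
```

Both sides must agree on the same format string; that shared agreement is exactly what a protocol's formatting rules provide.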
Explaining Personal Area Network
A personal area network (PAN) is a computer network used for data transmission
among devices such as computers, telephones and personal digital assistants. PANs
can be used for communication among the personal devices themselves (intrapersonal
communication), or for connecting to a higher level network and the Internet (an
uplink).
A personal area network (PAN) is the interconnection of information technology devices
within the range of an individual person, typically within a range of 10 meters. For
example, a person traveling with a laptop, a personal digital assistant (PDA), and a
portable printer could interconnect them without having to plug anything in, using
some form of wireless technology. Typically, this kind of personal area network could
also be interconnected without wires to the Internet or other networks.
Defining Local Area Network
A local area network (LAN) is a computer network that interconnects computers within
a limited area such as a residence, school, laboratory, or office building. A local area
network is contrasted in principle to a wide area network (WAN), which covers a larger
geographic distance and may involve leased telecommunication circuits, while the
media for LANs are locally managed.
A local area network (LAN) is a group of computers and associated devices that share a
common communications line or wireless link to a server. Typically, a LAN encompasses
computers and peripherals connected to a server within a small geographic area such
as an office building or home. Computers and other mobile devices can share resources
such as a printer or network storage.
Defining Metropolitan Area Network
A metropolitan area network (MAN) is a network that interconnects users with computer
resources in a geographic area or region larger than that covered by even a large local
area network (LAN) but smaller than the area covered by a wide area network (WAN).
The term is applied to the interconnection of networks in a city into a single larger
network (which may then also offer efficient connection to a wide area network). It is
also used to mean the interconnection of several local area networks by bridging them
with backbone lines. The latter usage is also sometimes referred to as a campus
network.
Defining Wide Area Network
A wide area network (WAN) is a geographically dispersed telecommunications network.
The term distinguishes a broader telecommunication structure from a local area
network (LAN). A wide area network may be privately owned or rented, but the term
usually connotes the inclusion of public (shared user) networks. An intermediate form of
network in terms of geography is a metropolitan area network (MAN).
A WAN is a computer network that spans a relatively large geographical area. Typically,
a WAN consists of two or more local area networks (LANs). Computers connected to a
wide area network are often connected through public networks, such as the telephone
system. They can also be connected through leased lines or satellites.
Explaining video compression
Video compression technologies are about reducing and removing redundant video
data so that a digital video file can be effectively sent over a network and stored on
computer disks. With efficient compression techniques, a significant reduction in file
size can be achieved with little or no adverse effect on the visual quality. The video
quality, however, can be affected if the file size is further lowered by raising the
compression level for a given compression technique.
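The idea of "removing redundant data" can be demonstrated with a lossless compressor from Python's standard library. This is only an analogy: real video codecs additionally use lossy techniques (which is where the quality trade-off above comes from), but the redundancy-removal principle is the same. The synthetic "frame" below is invented for the sketch:

```python
import zlib

# A "frame" with large uniform areas is highly redundant and compresses well.
frame = bytes([0]) * 10_000 + bytes([255]) * 10_000

compressed = zlib.compress(frame, level=9)
restored = zlib.decompress(compressed)

print(len(frame), len(compressed))   # the redundant data shrinks dramatically
assert restored == frame             # lossless: the original is fully recovered
```

A frame of random noise, by contrast, has little redundancy and would barely shrink at all, which is why noisy video compresses poorly.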
Different compression technologies, both proprietary and industry standards, are
available. Most network video vendors today use standard compression techniques.
Standards are important in ensuring compatibility and interoperability. They are
particularly relevant to video compression since video may be used for different
purposes and, in some video surveillance applications, needs to be viewable many
years from the recording date. By deploying standards, end users are able to pick and
choose from different vendors, rather than be tied to one supplier when designing a
video surveillance system.
Axis uses three different video compression standards. They are Motion JPEG, MPEG-4
Part 2 (or simply referred to as MPEG-4) and H.264. H.264 is the latest and most
efficient video compression standard.
Video codec
The process of compression involves applying an algorithm to the source video to
create a compressed file that is ready for transmission or storage. To play the
compressed file, an inverse algorithm is applied to produce a video that shows virtually
the same content as the original source video. The time it takes to compress, send,
decompress and display a file is called latency. The more advanced the compression
algorithm, the higher the latency.
A pair of algorithms that works together is called a video codec (encoder/decoder).
Video codecs of different standards are normally not compatible with each other; that
is, video content that is compressed using one standard cannot be decompressed with
a different standard. For instance, an MPEG-4 decoder will not work with an H.264
encoder. This is simply because one algorithm cannot correctly decode the output from
another algorithm. However, it is possible to implement many different algorithms in
the same software or hardware, which enables multiple formats to coexist.
Explaining audio compression
Audio compression is the encoding of digital audio data to take up less storage space and transmission bandwidth.
Audio compression typically uses lossy methods, which eliminate bits that are not
restored at the other end. ADPCM and MP3 are examples of audio compression
methods.
Compression is one of the most common processes in all audio work, yet the
compressor is one of the least understood and most misused audio processors.
Compressed audio is an everyday fact of modern life, with the sound of records,
telephones, TV, radios and public address systems all undergoing some type of
mandatory dynamic range modification. The use of compressors can make pop
recordings or live sound mixes sound musically better by controlling maximum levels
and maintaining higher average loudness. It is the intent of this article to explain
compressors and the process of compression so that you can use this powerful process
in a more creative and deliberate way.
Compressors and limiters are specialized amplifiers used to reduce dynamic range--the
span between the softest and loudest sounds. All sound sources have different dynamic
ranges or peak-to-average proportions. An alto flute produces a tone with only about a
3dB difference between the peak level and the average level. The human voice
(depending on the particular person) has a 10dB dynamic range, while a plucked or
percussive instrument may have a 15dB or more difference.
Our own ears, by way of complex physiological processes, do a fine job of compressing
by responding to roughly the average loudness of a sound. Good compressor design
includes a detector circuit that emulates the human ear by responding to average
signal levels. Even better compressor designs also have a second detector that
responds to peak signal levels and can be adjusted to clamp peaks that occur at a
specific level above the average signal level.
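The threshold-and-ratio behaviour of a compressor can be sketched as a simple level mapping in dB. The threshold and ratio values below are arbitrary illustrations, not settings recommended by the text:

```python
# Minimal dynamic range compressor sketch: any portion of the signal
# above the threshold is reduced by the given ratio (here 4:1).
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Return the output level in dB for a given input level in dB."""
    if level_db <= threshold_db:
        return level_db                   # below threshold: passed unchanged
    excess = level_db - threshold_db      # amount above the threshold
    return threshold_db + excess / ratio  # excess is divided by the ratio

levels = [-30.0, -20.0, -12.0, 0.0]
print([compress_db(db) for db in levels])
# -30 dB stays at -30; 0 dB becomes -20 + 20/4 = -15 dB
```

The span between the softest and loudest levels shrinks from 30 dB to 15 dB here, which is exactly the "reduced dynamic range" the text describes.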
When sound is recorded, broadcast or played through a P.A. system, the dynamic range
must be restricted at some point due to the peak signal limitations of the electronic
system, artistic goals, surrounding environmental requirements or all the above.
Typically, dynamic range must be compressed because, for artistic reasons, the singer's
voice will have a higher average loudness and compression allows vocalizations such as
melismatic phrasing and glottal stops to be heard better when the vocal track is mixed
within a dense pop record track.
With recording, the dynamic range may be too large to be processed by succeeding
recording equipment and recording media. Even with the arrival of 90dB-plus dynamic
range of digital recording, huge and unexpected swings of level from synthesizers and
heavily processed musical instruments can overwhelm analog-to-digital converters,
distorting the recording.
With broadcast audio, dynamics are reduced for higher average loudness to achieve a
certain aural impact on the listener and to help compete with the noisy environment of
freeway driving. The station-to-station competition for who can be the loudest on the
radio dial has led to some innovative twists in compressor design. "Brick wall" limiting
is where the compressor absolutely guarantees that a predetermined level will not be
exceeded, thus preventing overmodulation distortion of the station's transmitter. (The
Federal Communication Commission monitors broadcast station transmissions and
issues citations and fines for overmodulation that can cause adjacent channel
interference and other problems.)
Another type of specialization that sprung from broadcast is called multiband
compression, where the audio spectrum is split into frequency bands that are then
processed separately. By compressing the low frequencies more or differently than the
midrange and high frequencies, the station can take on a "sound" that stands out from
other stations on the dial. Radio stations "contour" their sound with multiband
processing to fit their playlist/format.
Features of Bluetooth Technology
Bluetooth is a wireless technology that enables any electrical device to wirelessly
communicate in the 2.4 GHz ISM (license-free) frequency band. It allows devices such
as mobile phones, headsets, PDAs and portable computers to communicate and send
data to each other without the need for wires or cables to link the devices together. It
has been specifically designed as a low cost, low power, radio technology, which is
particularly suited to the short range Personal Area Network (PAN) application. (It is the
design focus on low cost, low size and low power which distinguishes it from the IEEE
802.11 wireless LAN technology).
The Main Features of Bluetooth:
- Operates in the 2.4GHz frequency band without a license for wireless communication.
- Real-time data transfer usually possible between 10-100m.
- Close proximity is not required, as it is with infrared (IrDA) communication devices,
because Bluetooth does not suffer from interference from obstacles such as walls.
- Supports both point-to-point wireless connections without cables between mobile
phones and personal computers, as well as point-to-multipoint connections to enable
ad hoc local wireless networks.
- Sending/receiving files from remote Bluetooth devices
- Browsing files on a remote device
- Stand-alone Bluetooth printing
- Sharing a local printer over Bluetooth
- Remote controls
- Mobile phone SMS/call-related items
- PIM (appointments, contacts) synchronisation (Phone/Palm)
- Keyboards, mice
- Dial-up, 3G, networking over Bluetooth
However, there is an issue with always using the same frequency. Other devices
operating at the same, or close, frequencies can cause interruptions in the signal.
To prevent this from being an issue, the signal is spread out over a wider range of
frequencies. To achieve this, the signal hops around the frequency band, and in the
case of Bluetooth that happens about 1600 times per second. The frequent change in
frequency means that even a consistent signal won't interrupt, and won't be
interrupted, for longer than 1/1600th of a second.
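The hop-timing arithmetic above works out as follows. (Classic Bluetooth hops across 79 channels of 1 MHz width starting at 2402 MHz; that channel plan is standard background, not stated in the text.)

```python
# At ~1600 hops per second, each dwell on a channel lasts 1/1600th of a second.
HOPS_PER_SECOND = 1600
dwell_time_us = 1_000_000 / HOPS_PER_SECOND   # time on each channel, in microseconds
print(dwell_time_us)                          # -> 625.0

# Classic Bluetooth channel plan: 79 channels of 1 MHz, from 2402 MHz up.
channel_mhz = [2402 + n for n in range(79)]
print(channel_mhz[0], channel_mhz[-1])        # -> 2402 2480
```

So an interferer parked on one frequency can collide with the signal for at most 625 microseconds before the signal hops away.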
Bluetooth headsets can sync up in two different ways, using a full- or half-duplex
connection. A full-duplex signal means that all connected devices are able to send and
receive signals (in this case, a two-way conversation) simultaneously, as opposed to a
half-duplex signal, like a walkie-talkie, where each side can still talk and listen, just not
both at the same time.
Explaining the Network Configuration of Dense Wavelength Division Multiplexing
(DWDM)
Dense wavelength division multiplexing (DWDM) is a technology that puts data from
different sources together on an optical fiber, with each signal carried at the same time
on its own separate light wavelength. Using DWDM, up to 80 (and theoretically more)
separate wavelengths or channels of data can be multiplexed into a lightstream
transmitted on a single optical fiber. Each channel carries a time division multiplexed
(TDM) signal. In a system with each channel carrying 2.5 Gbps (billion bits per second),
up to 200 billion bits can be delivered a second by the optical fiber. DWDM is also
sometimes called wave division multiplexing (WDM).
Since each channel is demultiplexed at the end of the transmission back into the
original source, different data formats being transmitted at different data rates can be
transmitted together. Specifically, Internet (IP) data, Synchronous Optical Network data
(SONET), and asynchronous transfer mode (ATM) data can all be travelling at the same
time within the optical fiber.
DWDM promises to solve the "fiber exhaust" problem and is expected to be the central
technology in the all-optical networks of the future.
DWDM networks must be tested for loss, connector cleanliness, dispersion and spectral
quality. EXFO offers all the right tools to ensure smooth DWDM operation, including
inspection probes, OTDRs, power meters, dispersion testers and optical spectrum
analyzers.
DWDM, short for Dense Wavelength Division Multiplexing, is an optical technology used
to increase bandwidth over existing fiber optic backbones.
DWDM works by combining and transmitting multiple signals simultaneously at
different wavelengths on the same fiber. In effect, one fiber is transformed into multiple
virtual fibers. So, if you were to multiplex eight OC-48 signals into one fiber, you would
increase the carrying capacity of that fiber from 2.5 Gb/s to 20 Gb/s. Currently, because
of DWDM, single fibers have been able to transmit data at speeds up to 400Gb/s.
A key advantage to DWDM is that it's protocol- and bit-rate-independent. DWDM-based
networks can transmit data in IP, ATM, SONET /SDH, and Ethernet, and handle bit rates
between 100 Mb/s and 2.5 Gb/s. Therefore, DWDM-based networks can carry different
types of traffic at different speeds over an optical channel.
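The capacity figures quoted above follow from straightforward multiplication:

```python
# Aggregate DWDM capacity arithmetic from the text above.
oc48_gbps = 2.5              # one OC-48 channel, in Gb/s

# Eight OC-48 signals multiplexed onto one fiber:
print(8 * oc48_gbps)         # -> 20.0 Gb/s

# Eighty channels at 2.5 Gb/s each:
print(80 * oc48_gbps)        # -> 200.0 Gb/s, i.e. 200 billion bits per second
```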
Pros and Cons of DWDM
Network interfaces
A network interface may take the form of an accessory card, such as an ATM interface
card, though many network interfaces are built in.
A network interface controller (NIC) is computer hardware that provides a computer
with the ability to access the transmission media, and has the ability to process
low-level network information. For example, the NIC may have a connector for accepting
a cable, or an aerial for wireless transmission and reception, and the associated circuitry.
The NIC responds to traffic addressed to a network address for either the NIC or the
computer as a whole.
In Ethernet networks, each network interface controller has a unique Media Access
Control (MAC) address, usually stored in the controller's permanent memory. To avoid
address conflicts between network devices, the Institute of Electrical and Electronics
Engineers (IEEE) maintains and administers MAC address uniqueness. The size of an
Ethernet MAC address is six octets. The three most significant octets are reserved to
identify NIC manufacturers. These manufacturers, using only their assigned prefixes,
uniquely assign the three least-significant octets of every Ethernet interface they
produce.
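The octet structure described above can be shown with a short string-manipulation sketch. The address used here is an arbitrary example, not a real vendor assignment:

```python
# The three most significant octets of a MAC address form the IEEE-assigned
# manufacturer (OUI) prefix; the manufacturer assigns the remaining three.
mac = "00:1A:2B:3C:4D:5E"    # hypothetical example address
octets = mac.split(":")

oui = ":".join(octets[:3])       # manufacturer prefix (IEEE-administered)
device = ":".join(octets[3:])    # manufacturer-assigned suffix
print(oui, device)               # -> 00:1A:2B 3C:4D:5E
```

Because the IEEE hands out each OUI prefix only once, and each manufacturer uses only its own prefixes, the full six-octet address is globally unique.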
Repeaters and hubs
A repeater is an electronic device that receives a network signal, cleans it of
unnecessary noise, and regenerates it. The signal is retransmitted at a higher power
level, or to the other side of an obstruction, so that the signal can cover longer
distances without degradation. In most twisted pair Ethernet configurations, repeaters
are required for cable that runs longer than 100 meters. With fiber optics, repeaters can
be tens or even hundreds of kilometers apart.
A repeater with multiple ports is known as a hub. Repeaters work on the physical layer
of the OSI model. Repeaters require a small amount of time to regenerate the signal.
This can cause a propagation delay that affects network performance. As a result, many
network architectures limit the number of repeaters that can be used in a row, e.g., the
Ethernet 5-4-3 rule.
Hubs have been mostly obsoleted by modern switches, but repeaters are still used for
long-distance links, notably undersea cabling.
Bridges
A network bridge connects and filters traffic between two network segments at the data
link layer (layer 2) of the OSI model to form a single network. This breaks the network's
collision domain but maintains a unified broadcast domain. Network segmentation
breaks down a large, congested network into an aggregation of smaller, more efficient
networks.
Bridges come in three basic types:
Local bridges: Directly connect LANs
Remote bridges: Can be used to create a wide area network (WAN) link between
LANs. Remote bridges, where the connecting link is slower than the end networks,
largely have been replaced with routers.
Wireless bridges: Can be used to join LANs or connect remote devices to LANs.
Switches
A network switch is a device that forwards and filters OSI layer 2 datagrams between
ports based on the MAC addresses in the packets.[9] A switch is distinct from a hub in
that it only forwards the frames to the physical ports involved in the communication
rather than all ports connected. It can be thought of as a multi-port bridge.[10] It learns
to associate physical ports to MAC addresses by examining the source addresses of
received frames. If an unknown destination is targeted, the switch broadcasts to all
ports but the source. Switches normally have numerous ports, facilitating a star
topology for devices, and cascading additional switches.
Multi-layer switches are capable of routing based on layer 3 addressing or additional
logical levels. The term switch is often used loosely to include devices such as routers
and bridges, as well as devices that may distribute traffic based on load or based on
application content (e.g., a Web URL identifier).
Routers
A typical home or small office router showing the ADSL telephone line and Ethernet
network cable connections
A router is an internetworking device that forwards packets between networks by
processing the routing information included in the packet or datagram (Internet
protocol information from layer 3). The routing information is often processed in
conjunction with the routing table (or forwarding table). A router uses its routing table
to determine where to forward packets. (A destination in a routing table can include a
"null" interface, also known as the "black hole" interface because data can go into it,
however, no further processing is done for said data.)
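Routing-table lookup is commonly done by longest-prefix match, which can be sketched with Python's ipaddress module. The prefixes and next-hop names below are invented for illustration:

```python
import ipaddress

# Hypothetical routing table: destination prefix -> next hop.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "next-hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop-B",
    ipaddress.ip_network("0.0.0.0/0"):   "default-gateway",
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    # The most specific (longest) matching prefix wins.
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("10.1.2.3"))    # matches /8, /16 and /0 -> "next-hop-B"
print(lookup("192.0.2.1"))   # only the default route -> "default-gateway"
```

A "null" interface, as mentioned above, would simply be a table entry whose next hop discards the packet instead of forwarding it.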
Modems
Modems (MOdulator-DEModulator) are used to connect network nodes via wire not
originally designed for digital network traffic, or for wireless. To do this one or more
carrier signals are modulated by the digital signal to produce an analog signal that can
be tailored to give the required properties for transmission. Modems are commonly
used for telephone lines, using a Digital Subscriber Line technology.
Firewalls
A firewall is a network device for controlling network security and access rules. Firewalls
are typically configured to reject access requests from unrecognized sources while
allowing actions from recognized ones. The vital role firewalls play in network security
grows in parallel with the constant increase in cyber attacks.
Benefits of Cloud Computing
1. Flexibility
The second a company needs more bandwidth than usual, a cloud-based service can
instantly meet the demand because of the vast capacity of the service's remote
servers. In fact, this flexibility is so crucial that 65% of respondents to an
InformationWeek survey said the ability to quickly meet business demands was an
important reason to move to cloud computing.
2. Disaster recovery
When companies start relying on cloud-based services, they no longer need complex
disaster recovery plans. Cloud computing providers take care of most issues, and they
do it faster. Aberdeen Group found that businesses which used the cloud were able to
resolve issues in an average of 2.1 hours, nearly four times faster than businesses that
didn't use the cloud (8 hours). The same study found that mid-sized businesses had the
best recovery times of all, taking almost half the time of larger companies to recover.
3. Automatic software updates
In 2010, UK companies spent 18 working days per month managing on-site security
alone. But cloud computing suppliers do the server maintenance, including security
updates, themselves, freeing up their customers' time and resources for other tasks.
4. Cap-Ex Free
Cloud computing services are typically pay as you go, so there's no need for capital
expenditure at all. And because cloud computing is much faster to deploy, businesses
have minimal project start-up costs and predictable ongoing operating expenses.
5. Increased collaboration
Cloud computing increases collaboration by allowing all employees, wherever they are,
to sync up and work on documents and shared apps simultaneously, and follow
colleagues and records to receive critical updates in real time. A survey by Frost &
Sullivan found that companies which invested in collaboration technology had a 400%
return on investment.
6. Work from anywhere
As long as employees have internet access, they can work from anywhere. This
flexibility positively affects knowledge workers' work-life balance and productivity. One
study found that 42% of working adults would give up some of their salary if they could
telecommute, and on average they would take a 6% pay cut.
7. Document control
According to one study, "73% of knowledge workers collaborate with people in different
time zones and regions at least monthly".
If a company doesn't use the cloud, workers have to send files back and forth over
email, meaning only one person can work on a file at a time and the same document
ends up with tonnes of names and formats.
Cloud computing keeps all the files in one central location, and everyone works off of
one central copy. Employees can even chat to each other whilst making changes
together. This whole process makes collaboration stronger, which increases efficiency
and improves a company's bottom line.
8. Security
Some 800,000 laptops are lost each year in airports alone. This can have some serious
monetary implications, but when everything is stored in the cloud, data can still be
accessed no matter what happens to a machine.
9. Competitiveness
The cloud grants SMEs access to enterprise-class technology. It also allows smaller
businesses to act faster than big, established competitors. A study on disaster recovery
concluded that companies that didn't use the cloud had to rely on tape backup methods
and complicated recovery procedures, slow and laborious things which cloud users
simply don't need, allowing David to once again out-manoeuvre Goliath.
10. Environmentally friendly
Businesses using cloud computing only use the server space they need, which
decreases their carbon footprint. Using the cloud results in at least 30% less energy
consumption and carbon emissions than using on-site servers. And again, SMEs get the
most benefit: for small companies, the cut in energy use and carbon emissions is likely
to be 90%.
Drawbacks of Cloud Computing
In spite of its many benefits, as mentioned above, cloud computing also has its
disadvantages. Businesses, especially smaller ones, need to be aware of these cons
before going in for this technology.
Technical Issues
Though it is true that information and data on the cloud can be accessed anytime and
from anywhere at all, there are times when this system can have some serious
dysfunction. You should be aware of the fact that this technology is always prone to
outages and other technical issues. Even the best cloud service providers run into this
kind of trouble, in spite of keeping up high standards of maintenance. Besides, you will
need a very good Internet connection to be logged onto the server at all times. You will
invariably be stuck in case of network and connectivity problems.
Security in the Cloud
The other major issue while in the cloud is that of security issues. Before adopting this
technology, you should know that you will be surrendering all your company's sensitive
information to a third-party cloud service provider. This could potentially put your
company to great risk. Hence, you need to make absolutely sure that you choose the
most reliable service provider, who will keep your information totally secure.
Prone to Attack
Storing information in the cloud could make your company vulnerable to external hack
attacks and threats. As you are well aware, nothing on the Internet is completely secure,
and hence there is always the lurking danger of attack.
High Speed Internet Required - Cloud computing performs poorly over slow-speed
internet connections. Slow connections such as dial-up make cloud computing a pain
for the user, and can make it effectively impossible to use at all. Large documents and
web-based applications need a lot of bandwidth to download. If you are using a
low-speed internet connection, cloud computing may take far too long to be practical on
that type of connection. In simple words, cloud computing is not for slow connections.
Constant Internet Connection - A cloud computing solution without a proper internet
connection is like a lifeless body. Because you use the internet to access both your
documents and applications, you cannot access your documents at all if you don't have
an internet connection. A dropped connection means no work on the cloud, and in areas
where internet connections are slow or unreliable this can affect your business. Cloud
computing simply stops working without an internet connection.
Limited Features - Many of today's web-based applications are not fully featured when
compared to their desktop versions. For example, there are any number of things that
can be done using Microsoft PowerPoint that cannot be done with Google Presentations'
web-based feature set. The basics are the same, but the web version lacks many of
PowerPoint's advanced features. If you are especially fond of power features, cloud
computing may not be for you.
Data Stored is Not Secure - All the data in cloud computing is stored on the cloud. It is
your duty to make sure how secure that cloud is, and whether only authorized persons
are allowed to access your confidential data. The concept of cloud computing is new,
and even if hosting companies say that the data is secured, that cannot be taken as
100% true. Theoretically, data in cloud computing is unsafe because it is replicated
across multiple machines. If your data goes missing, you do not have any local or
physical backup. Simply depending on the cloud can let you down, and there is always a
risk of failure. The only way to safeguard the data is to download all cloud documents to
your own machines. However, this is a lengthy process, and every time a document is
updated you will have to download a new copy.