
COMPUTER SECURITY

Computer security, also known as cyber security or IT security, is the protection of information
systems from theft of or damage to the hardware, the software, and the information on them, as
well as from disruption or misdirection of the services they provide. It includes controlling
physical access to the hardware, as well as protecting against harm that may come via network
access, data and code injection, and malpractice by operators, whether intentional, accidental,
or the result of being tricked into deviating from secure procedures. Organizations and people
that use computers can describe their needs for information security under four major headings:
secrecy, controlling who gets to read information; integrity, controlling how information is
changed or resources are used; accountability, knowing who has had access to information or
resources; and availability, providing prompt access to information and resources.

CRYPTANALYSIS
Cryptanalysis is the study of information systems in order to uncover their hidden aspects. It is
used to breach cryptographic security systems and gain access to the contents of encrypted
messages, even if the cryptographic key is unknown. Cryptanalysis involves the decryption and
analysis of codes, ciphers, or encrypted text, and it uses mathematical techniques to search for
algorithm vulnerabilities and break into cryptography or information security systems.
Cryptanalysis attack types include:

Known-Plaintext Analysis (KPA): the attacker has some plaintext and the corresponding
ciphertext, and uses the pairs to recover the key or other plaintexts.

Chosen-Plaintext Analysis (CPA): the attacker obtains the ciphertexts that correspond to
plaintexts of his or her own choosing, encrypted with the same algorithm and key.

Ciphertext-Only Analysis (COA): the attacker works only from collections of known ciphertext.

Man-in-the-Middle (MITM) Attack: occurs when two parties exchange messages or keys over a
channel that appears secure but is actually compromised. The attacker intercepts the messages
that pass through the communications channel. Message authentication, for example with
hash-based message authentication codes (HMACs) or digital signatures, helps defend against
MITM attacks; a sketch of HMAC verification is given after this list.

Adaptive Chosen-Plaintext Attack (ACPA): similar to a CPA, this attack chooses new plaintexts
based on information learned from earlier encryptions.
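The message-authentication defence mentioned above can be illustrated with Python's standard
hmac and hashlib modules. This is a minimal sketch rather than a full protocol: the shared
secret, the messages, and the verification flow below are illustrative assumptions.

    import hmac
    import hashlib

    SECRET_KEY = b"shared-secret-key"   # assumed to be agreed out of band

    def make_tag(message: bytes) -> bytes:
        # Compute an HMAC-SHA256 tag over the message with the shared key.
        return hmac.new(SECRET_KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, tag: bytes) -> bool:
        # Recompute the tag and compare in constant time; a message altered
        # in transit (e.g. by a man-in-the-middle) will not verify.
        return hmac.compare_digest(make_tag(message), tag)

    message = b"transfer 100 to account 42"
    tag = make_tag(message)
    print(verify(message, tag))                        # True
    print(verify(b"transfer 900 to account 13", tag))  # False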

Cryptanalysis refers to the study of ciphers, ciphertext, or cryptosystems (that is, secret code
systems) with a view to finding weaknesses in them that will permit retrieval of the plaintext
from the ciphertext without necessarily knowing the key or the algorithm. This is known as
breaking the cipher, ciphertext, or cryptosystem.
Breaking is sometimes used interchangeably with weakening: finding a property (fault) in the
design or implementation of the cipher that reduces the number of keys required in a brute-force
attack (that is, simply trying every possible key until the correct one is found). A brute-force
search over a toy cipher is sketched below.
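To make the brute-force idea concrete, the sketch below exhaustively tries every key of a toy
shift (Caesar) cipher. The 26-key space and the example ciphertext are assumptions chosen purely
for illustration; real ciphers have key spaces far too large for this approach.

    def shift_decrypt(ciphertext: str, key: int) -> str:
        # Undo a Caesar shift of `key` positions on lowercase letters.
        out = []
        for ch in ciphertext:
            if ch.isalpha():
                out.append(chr((ord(ch) - ord('a') - key) % 26 + ord('a')))
            else:
                out.append(ch)
        return "".join(out)

    ciphertext = "dwwdfn dw gdzq"   # assumed example, encrypted with key 3

    # Brute force: try every possible key and inspect the candidates.
    for key in range(26):
        print(key, shift_decrypt(ciphertext, key))
    # key 3 yields the readable plaintext "attack at dawn"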

STEGANOGRAPHY
It is the practice of concealing a file, message, image, or video within another file, message,
image, or video. Generally, the hidden messages appear to be (or be part of) something else:
images, articles, shopping lists, or some other cover text. For example, the hidden message may
be in invisible ink between the visible lines of a private letter. Some implementations of
steganography that lack a shared secret are forms of security through obscurity.

Steganography is an ancient technique for hiding information in plain sight, and it has been
used throughout history as a covert means of communication. It is rumored that the ancient
Greeks would shave the head of a messenger and write a message on his bald scalp. Over time,
his hair would grow back and hide the message. He could then pass through enemy lines without
anyone being aware that the valuable message was right in front of them. The messenger would
have his head shaved again when he was ready to deliver the message to the intended recipient.
Luckily, the modern, technical equivalent does not require that you shave your head. Instead,
covert information can be embedded into standard file types. One of the most common
steganographic techniques is to embed a text file into an image file. Anyone viewing the image
file would see no difference between the original file and the file with the message embedded
in it. This is accomplished by storing the message in the least significant bits of the data
file.

Steganography in images
This type of steganography is very effective against discovery and can serve a variety of purposes,
including authentication, concealment of messages, and transmission of encryption keys. The most
effective method for this type of steganography is normally the least significant bit (LSB) method.
This simply means that the hidden message alters the last bit of a byte in a picture. Altering that
last bit produces virtually no change in the color of that pixel within the carrier image, which
keeps the message from being easily detected. The best type of image file to hide information
inside is a 24-bit bitmap, because of its large file size and high quality. A minimal LSB sketch
follows.
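The least-significant-bit idea can be sketched in a few lines of Python. To stay self-contained,
the example hides a message in a plain bytearray standing in for the raw pixel bytes of a 24-bit
bitmap; reading and writing an actual image file (for example with the Pillow library) is left
out, and the carrier contents are assumed.

    def hide(pixels: bytearray, message: bytes) -> bytearray:
        # Spread the message bits across the least significant bits of the
        # carrier bytes (one bit per byte), leaving the other 7 bits untouched.
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        stego = bytearray(pixels)
        for i, bit in enumerate(bits):
            stego[i] = (stego[i] & 0xFE) | bit
        return stego

    def reveal(pixels: bytearray, length: int) -> bytes:
        # Reassemble `length` bytes from the least significant bits.
        out = bytearray()
        for b in range(length):
            value = 0
            for i in range(8):
                value |= (pixels[b * 8 + i] & 1) << i
            out.append(value)
        return bytes(out)

    carrier = bytearray(range(256)) * 4          # stand-in for pixel data
    stego = hide(carrier, b"meet at noon")
    print(reveal(stego, len(b"meet at noon")))   # b'meet at noon'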

Steganography in Audio
In audio files, the most prominent method for concealing information is low-bit encoding, which is
somewhat similar to the least significant bit method used in image files: the secret information is
attached to the end of the file. One issue with low-bit encoding is that it can be noticeable to the
human ear, which makes it risky for someone trying to hide information, since it is relatively easy
to detect. The spread spectrum method is another method that has been used to conceal information
in audio files. It adds random noise to the audio signal, which allows the information to be spread
across the frequency spectrum and remain hidden under the random noise. The last method seen in
audio steganography is echo hiding. This method seeks to hide information in the echoes that occur
naturally within sound files; extra sound, the concealed message, is then added to these echoes.
This is an effective way to hide information, especially since in some cases it even improves the
sound of the original audio file.

Steganography In Video
Steganography in videos is basically the hiding of information in each frame of the video. If only
a small amount of information is hidden inside the video it generally is not noticeable at all;
however, the more information that is hidden, the more noticeable it becomes. This method is
effective as well, but it must be done correctly or it may reveal more information instead of
hiding it.
Steganography In Documents
This is basically the addition of white space and tabs to the ends of the lines of a document.
This type of steganography is extremely effective, because white space and tabs are not visible
to the human eye in most text/document editors. A small sketch of this technique follows.
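As a rough sketch of the trailing-whitespace idea, the code below appends a space for a 0 bit and
a tab for a 1 bit to the ends of a document's lines; the encoding scheme and the sample text are
assumptions made for illustration.

    def embed(lines, message: bytes):
        # Append one space (0) or tab (1) per message bit to successive lines.
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        out = []
        for i, line in enumerate(lines):
            suffix = ("\t" if bits[i] else " ") if i < len(bits) else ""
            out.append(line + suffix)
        return out

    def extract(lines, length: int) -> bytes:
        # Read the trailing space/tab back off each line and rebuild the bytes.
        bits = [1 if line.endswith("\t") else 0 for line in lines[:length * 8]]
        return bytes(
            sum(bits[b * 8 + i] << i for i in range(8)) for b in range(length)
        )

    cover = ["line %d of an ordinary-looking document" % n for n in range(40)]
    stego = embed(cover, b"hi")
    print(extract(stego, 2))   # b'hi'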

CRYPTOGRAPHY
Cryptography or cryptology is the practice and study of techniques for secure communication in
the presence of third parties called adversaries. More generally, cryptography is about
constructing and analyzing protocols that prevent third parties or the public from reading private
messages;[3] various aspects of information security such as data confidentiality, data
integrity, authentication, and non-repudiation[4] are central to modern cryptography. Modern
cryptography exists at the intersection of the disciplines of mathematics, computer science,
and electrical engineering. Applications of cryptography include ATM cards, computer
passwords, and electronic commerce.
Cryptography is the science of providing security for information. It has been used historically as
a means of providing secure communication between individuals, government agencies, and
military forces. Today, cryptography is a cornerstone of the modern security technologies used to
protect information and resources on both open and closed networks.
Encryption is a modern form of cryptography that allows a user to hide
information from others. Encryption uses a complex algorithm called a cipher to turn normal data
(plaintext) into a series of seemingly random characters (ciphertext) that is unreadable without a
special key. Those who possess the key can decrypt the data and view the plaintext again rather
than the random character string of ciphertext.
Cryptography is closely related to the disciplines of cryptology and cryptanalysis.
Cryptography includes techniques such as microdots, merging words with images,
and other ways to hide information in storage or transit. However, in today's
computer-centric world, cryptography is most often associated with scrambling
plaintext (ordinary text, sometimes referred to as cleartext) into ciphertext (a
process called encryption), then back again (known as decryption). Individuals who
practice this field are known as cryptographers.
Cryptography is the art of protecting information by transforming it (encrypting it) into an unreadable format,
called ciphertext. Only those who possess a secret key can decipher (or decrypt) the message into plaintext.
Encrypted messages can sometimes be broken by cryptanalysis, also called codebreaking, although modern
cryptographic techniques are very difficult to break.
As the Internet and other forms of electronic communication become more prevalent, electronic security is becoming
increasingly important. Cryptography is used to protect e-mail messages, credit card information, and corporate data.
One of the most popular cryptography systems used on the Internet is Pretty Good Privacy (PGP), because it is
effective and free.
Cryptography systems can be broadly classified into symmetric-key systems, which use a single key that both the
sender and recipient have, and public-key systems, which use two keys: a public key known to everyone and a private
key that only the recipient of messages uses.

ASYMMETRIC ENCRYPTION
In cryptography, encryption is the process of encoding messages or information in such a way
that only authorized parties can read them. Encryption does not of itself prevent interception, but
it denies the message content to the interceptor. In an encryption scheme, the intended
communication information or message, referred to as plaintext, is encrypted using an encryption
algorithm, generating ciphertext that can only be read if decrypted. For technical reasons, an
encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is
in principle possible to decrypt the message without possessing the key, but, for a well-designed
encryption scheme, large computational resources and skill are required. An authorized recipient
can easily decrypt the message with the key provided by the originator to recipients, but not to
unauthorized interceptors.
Quite simply, encryption is the process of taking information and transforming it using a
mathematical algorithm and an encryption key to make it unreadable to anyone who might come
across it inadvertently or illegitimately. When an authorized user of the information encounters
it, he or she decrypts the information using a similar mathematical algorithm and a key to take
the encrypted ciphertext and transform it back into the original plaintext.
Asymmetric Encryption is a form of Encryption where keys come in pairs. What one key
encrypts, only the other can decrypt. Frequently (but not necessarily), the keys are
interchangeable, in the sense that if key A encrypts a message, then B can decrypt it, and if key B
encrypts a message, then key A can decrypt it.
Public key cryptography, or asymmetric cryptography, is any cryptographic system that uses
pairs of keys: public keys which may be disseminated widely, and private keys which are known
only to the owner. This accomplishes two functions: authentication, which is when the public key
is used to verify that a holder of the paired private key sent the message, and encryption,
whereby only the holder of the paired private key can decrypt the message encrypted with the
public key.
In a public key encryption system, any person can encrypt a message using the public key of the
receiver, but such a message can be decrypted only with the receiver's private key. For this to
work it must be computationally easy for a user to generate a public and private key-pair to be
used for encryption and decryption. The strength of a public key cryptography system relies on
the degree of difficulty (computational impracticality) for a properly generated private key to be
determined from its corresponding public key. Security then depends only on keeping the private
key private, and the public key may be published without compromising security.
Two of the best-known uses of public key cryptography are:

Public key encryption, in which a message is encrypted with a recipient's public key. The
message cannot be decrypted by anyone who does not possess the matching private key, who
is thus presumed to be the owner of that key and the person associated with the public key.
This is used in an attempt to ensure confidentiality.

Digital signatures, in which a message is signed with the sender's private key and can be
verified by anyone who has access to the sender's public key. This verification proves that
the sender had access to the private key, and therefore is likely to be the person associated
with the public key. This also ensures that the message has not been tampered with, as any
manipulation of the message will result in changes to the encoded message digest, which
otherwise remains unchanged between the sender and receiver.
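A toy RSA example makes both uses concrete. The tiny primes, the numeric message, and the bare
textbook-RSA operations below are illustrative assumptions only; real systems use keys of
thousands of bits together with padding schemes, so this is a sketch of the idea rather than
usable cryptography.

    # Toy RSA key pair built from deliberately tiny primes (illustration only).
    p, q = 61, 53
    n = p * q                    # modulus, part of both keys
    phi = (p - 1) * (q - 1)
    e = 17                       # public exponent
    d = pow(e, -1, phi)          # private exponent (modular inverse of e)

    message = 42                 # a message encoded as a number smaller than n

    # Encryption: anyone can encrypt with the public key (e, n)...
    ciphertext = pow(message, e, n)
    # ...but only the private key holder can decrypt with d.
    print(pow(ciphertext, d, n))            # 42

    # Signing: the private key holder signs with d...
    signature = pow(message, d, n)
    # ...and anyone can verify the signature with the public key (e, n).
    print(pow(signature, e, n) == message)  # True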

Private vs. Public Crypto-Systems
The security of a given crypto-system depends on the amount of information known by
the cryptanalyst about the algorithms and keys in use. In theory, if the encryption
algorithm and keys are independent of the decryption algorithm and keys, then full
knowledge of the encryption algorithm and key would not help the cryptanalyst break the
code. However, in many practical crypto-systems, the same algorithm and key are used
for both encryption and decryption. The security of these symmetric cipher systems
depends on keeping at least the key secret from others, which is why they are known as
private key crypto-systems.
An example of a symmetric, private-key crypto-system is the Data Encryption Standard
(DES) [NBS 1978]. In this case, the encryption/decryption algorithm is widely known
and has been widely studied, relying on the privacy of the encryption/decryption key for
its security. Other private-key systems have been implemented and deployed by the NSA
for the protection of classified government information. In contrast to the DES, the
encryption/decryption algorithms within those crypto-systems have been kept private, to
the extent that the computer chips on which they are implemented are coated in such a
way as to prevent them from being examined.
Users are often intolerant of private encryption and decryption algorithms because they
do not know how the algorithms work or whether a trap-door exists that would allow the
algorithm designer to read the users' secret information. In an attempt to eliminate this
lack of trust, a number of crypto-systems have been developed around encryption and
decryption algorithms based on fundamentally difficult problems, or one-way
functions, which have been studied extensively by the research community. In this way,
users can be confident that no trap-door exists that would render their methods insecure.

SYMMETRIC ENCRYPTION
Symmetric-key algorithms[1] are algorithms for cryptography that use the same cryptographic
keys for both encryption of plaintext and decryption of ciphertext. The keys may be identical or
there may be a simple transformation to go between the two keys. The keys, in practice,
represent a shared secret between two or more parties that can be used to maintain a private
information link.[2] This requirement that both parties have access to the secret key is one of the
main drawbacks of symmetric key encryption, in comparison to public-key encryption (also
known as asymmetric key encryption).
Asymmetric Encryption
The problem with secret keys is exchanging them over the Internet or a large network while
preventing them from falling into the wrong hands. Anyone who knows the secret key can
decrypt the message. One answer is asymmetric encryption, in which there are two related keys,
a key pair. A public key is made freely available to anyone who might want to send you a
message. A second, private key is kept secret, so that only you know it.
Any message (text, binary file, or document) that is encrypted by using the public key can
only be decrypted by applying the same algorithm, but by using the matching private key. Any
message that is encrypted by using the private key can only be decrypted by using the matching
public key.
This means that you do not have to worry about passing public keys over the Internet (the keys
are supposed to be public). A problem with asymmetric encryption, however, is that it is slower
than symmetric encryption. It requires far more processing power to both encrypt and decrypt the
content of the message.
Symmetric Encryption
Symmetric encryption is the oldest and best-known technique. A secret key, which can be a
number, a word, or just a string of random letters, is applied to the text of a message to change
the content in a particular way. This might be as simple as shifting each letter by a number of
places in the alphabet. As long as both sender and recipient know the secret key, they can encrypt
and decrypt all messages that use this key.
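As a concrete sketch of a shared secret key being applied to a message, the code below XORs the
plaintext with a repeating key. The key and message are assumptions, and a fixed repeating key
like this is far too weak for real use, but it shows that the same key both encrypts and
decrypts.

    from itertools import cycle

    def xor_crypt(data: bytes, key: bytes) -> bytes:
        # XOR each byte of the data with the repeating key; applying the
        # same operation twice with the same key restores the original.
        return bytes(b ^ k for b, k in zip(data, cycle(key)))

    key = b"secret"                       # shared secret known to both parties
    ciphertext = xor_crypt(b"meet me at noon", key)
    plaintext = xor_crypt(ciphertext, key)
    print(plaintext)                      # b'meet me at noon'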
Symmetric-key encryption can use either stream ciphers or block ciphers.[4]

Stream ciphers encrypt the digits (typically bytes) of a message one at a time.

Block ciphers take a number of bits and encrypt them as a single unit, padding the
plaintext so that it is a multiple of the block size. Blocks of 64 bits were commonly used;
the Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001,
and the GCM block cipher mode of operation use 128-bit blocks. A padding sketch is shown below.
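The padding step mentioned for block ciphers can be sketched as follows; this assumes a
PKCS#7-style scheme and a 16-byte (128-bit) block, both chosen for illustration.

    BLOCK_SIZE = 16   # 128-bit blocks, as used by AES

    def pad(plaintext: bytes) -> bytes:
        # Append n copies of the byte value n so the length becomes a
        # multiple of the block size (PKCS#7 style).
        n = BLOCK_SIZE - len(plaintext) % BLOCK_SIZE
        return plaintext + bytes([n]) * n

    def unpad(padded: bytes) -> bytes:
        # The last byte says how many padding bytes to strip.
        return padded[:-padded[-1]]

    padded = pad(b"attack at dawn")        # 14 bytes -> 16 bytes
    print(len(padded), unpad(padded))      # 16 b'attack at dawn'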

TAXONOMY OF COMPUTER SECURITY


1. Integrity

Integrity seeks to maintain resources in a valid and intended state. This might be important to
keep resources from being changed improperly (adding money to a bank account) or to maintain
consistency between two parts of a system (double-entry bookkeeping). Integrity is not a
synonym for accuracy, which depends on the proper selection, entry and updating of information.
The most highly developed policies for integrity reflect the concerns of the accounting and
auditing community for preventing fraud. A classic example is a purchasing system. It has three
parts: ordering, receiving, and payment. Someone must sign off on each step, the same person
cannot sign off on two steps, and the records can only be changed by fixed procedures, e.g., an
account is debited and a check written only for the amount of an approved and received order.
2. Accountability

In any real system there are many reasons why actual operation will not always reflect the
intentions of the owners: people make mistakes, the system has errors, the system is vulnerable
to certain attacks, the broad policy was not translated correctly into detailed specifications, the
owners change their minds, etc. When things go wrong, it is necessary to know what has
happened: who has had access to information and resources and what actions have been taken.
This information is the basis for assessing damage, recovering lost information, evaluating
vulnerabilities, and taking compensating actions outside the system such as civil suits or criminal
prosecution.
3. Availability

Availability, often called 'preventing denial of service', seeks to ensure that the system works
promptly. This may be essential for operating a large enterprise (the routing system for
long-distance calls, an airline reservation system) or for preserving lives (air traffic control,
automated medical systems). Delivering prompt service is a requirement that transcends security,
and computer system availability is an entire field of its own. Availability in spite of
malicious acts and environmental mishaps, however, is often considered an aspect of security.
An availability policy is usually stated like this:
On the average, a terminal shall be down for less than ten minutes per month.
A particular terminal (e.g., an automatic teller machine, a reservation agent's keyboard and
screen, etc.) is up if it responds correctly within one second to a standard request for service;
otherwise it is down. This policy means that the up time at each terminal, averaged over all the
terminals, must be at least about 99.98%.
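The percentage follows from simple arithmetic, sketched below under the assumption of a 30-day
month.

    minutes_per_month = 30 * 24 * 60          # 43,200 minutes in a 30-day month
    max_downtime = 10                         # minutes of downtime allowed per month
    uptime = 1 - max_downtime / minutes_per_month
    print(f"{uptime:.4%}")                    # 99.9769%, i.e. roughly 99.98%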
Such a policy covers all the failures that can prevent service from being delivered: a broken
terminal, a disconnected telephone line, loss of power at the central computer, software errors,
operator mistakes, system overload, etc. Of course, to be implementable it must be qualified by
some statements about the environment, e.g., that power doesn't fail too often.

A security policy for availability usually has a different form, something like this:
No inputs to the system by any user who is not an authorized administrator shall cause any
other user's terminal to be down.
Note that this policy doesn't say anything about system failures, except to the extent that they
can be caused by user actions. Also, it says nothing about other ways in which an enemy could
deny service, e.g., by cutting a telephone line.
4. Individual accountability (authentication)

To answer the question "Who is responsible for this statement?" it is necessary to know what sort
of entities can be responsible for statements. These entities are (human) users or (computer)
systems, collectively called principals. A user is a person, but a system requires some
explanation. A computer system is comprised of hardware (e.g., a computer) and perhaps
software (e.g., an operating system). Systems implement other systems, so, for example, a
computer implements an operating system which implements a database management system
which implements a user query process. As part of authenticating a system, it may be necessary
to verify that the system that implements it is trusted to do so correctly.
The basic service provided by authentication is information that a statement was made by some
principal. Sometimes, however, there is a need to ensure that the principal will not later be able
to claim that the statement was forged and he never made it. In the world of paper documents, this
is the purpose of notarizing a signature; the notary provides independent and highly credible
evidence, which will be convincing even after many years, that the signature is genuine and not
forged. This stronger form of authentication is called non-repudiation.
5. Authorization and separation of duty

Authorization determines who is trusted for a given purpose. More precisely, it determines
whether a particular principal, who has been authenticated as the source of a request to do
something, is trusted for that operation. Authorization may also include controls on the time at
which something can be done (only during working hours) or the computer terminal from which
it can be requested (only the one on the manager's desk).

It is a well-established practice, called separation of duty, to insist that important operations
cannot be performed by a single person, but require the agreement of (at least) two different
people. This rule makes it less likely that controls will be subverted because it means that
subversion requires collusion. A small sketch of such a check is given below.
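The two-person rule can be sketched as a simple check; the approver names, the operation, and the
data structures below are hypothetical and purely illustrative.

    def can_execute(approvers: set[str]) -> bool:
        # Separation of duty: a sensitive operation needs at least two
        # *distinct* authenticated approvers before it may proceed.
        return len(approvers) >= 2

    approvers = {"alice"}                 # first sign-off
    print(can_execute(approvers))         # False: only one person so far

    approvers.add("alice")                # repeat approval by the same person
    print(can_execute(approvers))         # False: still one distinct approver

    approvers.add("bob")                  # a second, different person signs off
    print(can_execute(approvers))         # True: the operation may proceed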
6. Auditing

Given the reality that every computer system can be compromised from within, and that many
systems can also be compromised if surreptitious access can be gained, accountability is a vital
last resort. Accountability policies were discussed earlier, e.g., all significant events should be
recorded and the recording mechanisms should be nonsubvertible. Auditing services support
these policies. Usually they are closely tied to authentication and authorization, so that every
authentication is recorded as well as every attempted access, whether authorized or not.
The audit trail is not only useful for establishing accountability. In addition, it may be possible to
analyze the audit trail for suspicious patterns of access and so detect improper behavior by both
legitimate users and masqueraders. The main problem, however, is how to process and interpret
the audit data. Both statistical and expert-system approaches are being tried. A minimal
audit-logging sketch follows.
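A minimal sketch of an audit trail using Python's standard logging module is shown below; the
event names, user IDs, and log format are assumptions for illustration, and a real mechanism
would also have to be protected against tampering.

    import logging

    # Append-only audit log; in practice the file and host would be hardened
    # so that the recording mechanism itself cannot be subverted.
    logging.basicConfig(
        filename="audit.log",
        level=logging.INFO,
        format="%(asctime)s %(message)s",
    )
    audit = logging.getLogger("audit")

    def record_auth(user: str, success: bool) -> None:
        # Record every authentication attempt, successful or not.
        audit.info("AUTH user=%s success=%s", user, success)

    def record_access(user: str, resource: str, allowed: bool) -> None:
        # Record every attempted access, whether authorized or not.
        audit.info("ACCESS user=%s resource=%s allowed=%s", user, resource, allowed)

    record_auth("alice", True)
    record_access("alice", "payroll.db", False)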

BIOMETRIC AUTHENTICATION MECHANISM


The biometric technologies involved are based on the ways in which individuals can be uniquely
identified through one or more distinguishing biological traits, such as fingerprints, hand
geometry, earlobe geometry, retina and iris patterns, voice waves, keystroke dynamics, DNA, and
signatures. Biometric authentication is the application of that proof of identity as part of a
process validating a user for access to a system. Biometric technologies are used to secure a wide
range of electronic communications, including enterprise security, online commerce and
banking, and even logging in to a computer or smartphone.
Biometric authentication systems compare the current biometric data capture to stored,
confirmed authentic data in a database. If both samples of the biometric data match,
authentication is confirmed and access is granted. The process is sometimes part of a multifactor
authentication system. For example, a smartphone user might log on with his personal
identification number (PIN) and then provide an iris scan to complete the authentication process.
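The compare-to-template step can be sketched as a similarity check against a threshold; the
stored templates, the Hamming-distance measure, and the threshold value below are illustrative
assumptions rather than a description of any real product.

    # Toy templates: biometric samples reduced to fixed-length bit strings.
    enrolled = {
        "alice": 0b1011001110001101,   # template stored at enrollment time
    }
    THRESHOLD = 3   # maximum number of differing bits still accepted as a match

    def matches(sample: int, template: int) -> bool:
        # Hamming distance between the fresh sample and the stored template.
        distance = bin(sample ^ template).count("1")
        return distance <= THRESHOLD

    def authenticate(user: str, sample: int) -> bool:
        template = enrolled.get(user)
        return template is not None and matches(sample, template)

    print(authenticate("alice", 0b1011001110001111))  # True: 1 bit differs
    print(authenticate("alice", 0b0100110001110010))  # False: far from template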
Types of biometric authentication technologies:
Retina scans produce an image of the blood vessel pattern in the light-sensitive surface lining the
individual's inner eye.
Retina scanning is a biometric verification technology that uses an image of an individual's
retinal blood vessel pattern as a unique identifying trait for access to secure installations.
Biometric verification technologies are based on ways in which individuals can be uniquely
identified through one or more distinguishing biological traits. Unique identifiers include
fingerprints, hand geometry, earlobe geometry, retina and iris patterns, voice waves, DNA and
signatures.

Iris recognition is used to identify individuals based on unique patterns within the ring-shaped
region surrounding the pupil of the eye.
Iris recognition is a method of identifying people based on unique patterns within the
ring-shaped region surrounding the pupil of the eye. The iris usually has a brown, blue, gray, or
greenish color, with complex patterns that are visible upon close inspection. Because it makes
use of a biological characteristic, iris recognition is considered a form of biometric verification.
Fingerscanning, the digital version of the ink-and-paper fingerprinting process, works with
details in the pattern of raised areas and branches in a human finger image.
Fingerscanning, also called fingerprint scanning, is the process of electronically obtaining and
storing human fingerprints. The digital image obtained by such scanning is called a finger image.
In some texts, the terms fingerprinting and fingerprint are used, but technically, these terms refer
to traditional ink-and-paper processes and images.

Finger vein ID is based on the unique vascular pattern in an individual's finger.
Finger vein ID is a biometric authentication system that matches the vascular pattern in an
individual's finger to previously obtained data. Hitachi developed and patented a finger vein ID
system in 2005. The technology is currently in use or development for a wide variety of
applications, including credit card authentication, automobile security, employee time and
attendance tracking, computer and network authentication, endpoint security, and ATMs.
Facial recognition systems work with numeric codes called faceprints, which identify 80 nodal
points on a human face.
Facial recognition (or face recognition) is a type of biometric software application that can
identify a specific individual in a digital image by analyzing and comparing patterns.
Facial recognition systems are commonly used for security purposes but are increasingly being
used in a variety of other applications. The Kinect motion gaming system, for example, uses
facial recognition to differentiate among players.

Voice identification systems rely on characteristics created by the shape of the speaker's mouth
and throat, rather than more variable conditions.
Voice ID (sometimes called voice authentication) is a type of user authentication that uses
voiceprint biometrics. Voice ID relies on the fact that vocal characteristics, like fingerprints and
the patterns of people's irises, are unique to each individual.

STRENGTHS OF BIOMETRIC AUTHENTICATION MECHANISM


Accurate Identification
While traditional security systems rely on passwords, personal identification numbers
(PINs) or smart cards, you can achieve a high level of accuracy with biometric systems. If you
have set up the system correctly, you can use biological characteristics like fingerprints and iris
scans, which offer unique and accurate identification methods. These features cannot be
easily duplicated, which means only the authorized person gets access and you get a high level of
security.
Accountability
Biometric log-ins mean a person can be directly connected to a particular action or an event. In
other words, biometrics creates a clear, definable audit trail of transactions or activities. This is
especially handy in case of security breaches because you know exactly who is responsible for
it. As a result you get true and complete accountability, which cannot be duplicated.
Easy and Safe to Use
The good thing about using biometrics for identification is that modern systems are built and
designed to be easy and safe to use. Biometrics technology gives you accurate results with
minimal invasiveness, as a simple scan or a photograph is usually all that is required. Moreover,
the software and hardware can be easily used and you can have them installed without the need
for excessive training.
Time Saving
Biometric identification is extremely quick, which is another advantage it has over traditional
security methods. A person can be identified or rejected in a matter of seconds. For business
owners who understand the value of time management, this technology can only be beneficial,
increasing productivity and reducing costs by eliminating fraud and waste.
User Friendly Systems
You can have biometric systems installed rather easily and, after that, they do their job quickly,
reliably and uniformly. You will need only a minimal amount of training to get the system
operational, and there is no need for expensive password administrators. If you use high quality
systems, your ongoing maintenance costs will also be reduced.
Security
Another advantage of these systems is that biometric traits cannot be guessed or stolen in the way
passwords can, so they provide a long-term security solution for your company. The problem with
strong password systems is that they often require a sequence of numbers, letters, and symbols,
which makes them difficult to remember on a regular basis. The problem with tokens is that they
can be easily stolen or lost, and both of these traditional methods involve the risk of credentials
being shared. As a result, you can never be really sure who the real user is. That is not the case
with biometric characteristics, where you do not have to deal with the problems of sharing,
duplication, or fraud.

Convenience
Biometrics is considered a convenient security solution because you do not have to remember
passwords or carry extra badges, documents, or ID cards. You are saved the hassle of frequently
having to remember or change passwords, cards, and badges. People forget passwords and lose ID
cards, which can be a huge nuisance with traditional security methods.

Versatility
There are different types of biometric scanners available today and they can be used for
various applications. They can be used by companies at security checkpoints, including entrances,
exits, doorways, and more.
Moreover, you can use biometric solutions to decide who can access certain systems and networks.
Companies can also use them to monitor employee time and attendance, which raises accountability.
Scalability
Biometric systems can be quite flexible and easily scalable. You can use more capable sensors and
security systems based on your requirements. At the lowest level you can use characteristics that
are not very discriminative; however, if you are looking for a higher level of security for
large-scale databases, you can use systems with more discriminating features, or multi-modal
applications, to increase identification accuracy.
ADVANTAGES OF BIOMETRICS OVER TRADITIONAL METHODS
Passwords and PINs have been the most frequently used authentication methods. They are used to
control access to a building or a room, to secure access to computers, networks, and the
applications on personal computers, and much more. In some higher-security applications,
handheld tokens such as key fobs and smart cards have been deployed. Due to some problems
related to these methods, the suitability and reliability of these authentication technologies have
been questioned, especially in this modern world with modern applications. Biometrics offers
some benefits compared to these authentication technologies.
INCREASED SECURITY
Biometric technology can provide a higher degree of security compared to traditional
authentication methods. Chirillo (2003, p. 2) stated that biometrics is preferred over traditional
methods for many reasons, including the fact that the physical presence of the authorized
person is required at the point of identification. This means that only the authorized person has
access to the resources.
People's efforts to manage several passwords have left many choosing easy or common words,
with a considerable number writing them down in conspicuous places. This vulnerability leads to
passwords being easily guessed and compromised. Also, tokens can be easily stolen, as they are
something you have. By contrast, it is almost impossible for biometric data to be guessed or stolen
in the same manner as a token or password. Nanavati (2002, p. 4) was of the opinion that although
some biometric systems can be broken under certain conditions, "today's biometric systems are
highly unlikely to be fooled by a picture of a face..." He further added that this is based on the
assumption that the imposter has been able to successfully gather these physical characteristics,
which he concluded is unlikely in most cases.

INCREASED CONVENIENCE
One major reason passwords are sometimes kept simple is that they can be easily forgotten.
To increase security, many computer users are required to manage several passwords, and this
increases the tendency to forget them. Cards and tokens can be stolen or forgotten as well, even
though attaching them to keyholders or chains can reduce the risk. Because biometric
technologies are based on something you are, they are almost impossible to forget or lose.
This characteristic allows biometrics to offer much more convenience than other systems,
which are based on keeping possession of cards or remembering several passwords.
Biometrics can greatly simplify the whole authentication process, which reduces the
burden on the user as well as on the system administrator (for PC applications where biometrics
replaces multiple passwords).
Nanavati (2002, p. 5) stated that "biometric authentication also allows for the association of
higher levels of rights and privileges with a successful authentication." He further explained that
information of high sensitivity can be made more readily available on a network which is
biometrically protected than on one which is password protected. This can increase convenience,
as a user can access otherwise protected data without any need for human intervention.
INCREASED ACCOUNTABILITY
Traditional authentication methods such as tokens, passwords and PINs can be shared, thereby
increasing the possibility of unaccountable access, even though it might be authorized. Many
organizations share common passwords among administrators for the purpose of facilitating
system administration. Unfortunately, because there is uncertainty as to who is using the shared
password or token at a particular point in time, accountability for any action is greatly reduced.
Also, the user of a shared password or token may not be authorized, and sharing makes this even
harder to verify; the security (especially confidentiality and integrity) of the system is also
reduced.
The increase in security awareness in organizations and in the applications being used has led to
the need for strong and reliable auditing and reporting. Deploying biometrics to secure access to
computers and other facilities eliminates occurrences such as buddy-punching and therefore
provides a great level of certainty as to who accessed what computer at what point in time.

WEAKNESSES OF BIOMETRIC AUTHENTICATION MECHANISM

Like all technology, however, biometrics also comes with some disadvantages. One disadvantage
of biometrics is cost. Different biometric technologies need different devices with a range of
costs. The use of these biometric devices may also cause delays in people's day; people are
concerned they will have to wait in line to be scanned or fingerprinted to gain access to a
building or school. A further disadvantage of finger-scanning is that some users cannot be
enrolled because of unreadable fingerprints, whether due to damage, age or ethnicity
(Reynolds, 2004). Another is that people are concerned they might have to touch a device that
someone else has touched, which could spread germs.
Disadvantages of the iris scan are that some individuals' irises are difficult to capture. The
iris can also be easily obscured by eyelashes, eyelids, contact lenses and reflections from the
cornea. There is also a lack of existing data, which limits the ability to use it for background
or watch-list checks.
Face recognition also has disadvantages that come along with it. The face can be
obstructed by hair, glasses, hats, scarves, etc. Changes in lighting or facial expression can also
throw off the device. A third disadvantage of face recognition is that people's faces
change over time. For face recognition to be accurate, images must be taken facing the
acquisition camera and not at sharp angles, and "the user's face must be lit evenly,
preferably from the front" (SANS Institute, 2003). This is not always possible and can be very
hard to do in some environments.
Other biometric technologies with disadvantages linked to them include voice,
signature, and hand geometry verification. With voice verification there needs to be as little
background noise as possible or the spoken phrase will not be registered accurately. With
signature verification the problem arises because people's signatures change over time and are
not always consistent. And lastly, hand geometry verification is a high-cost service and needs
a large device to carry out the task.
Facial recognition uses a 2-D recognition system, which is susceptible to changes in lighting, the
person's hair, whether the person wears glasses, and age, as people's faces change over time. In
order for facial recognition to be accurate, the image of the user's face must be lit evenly and
preferably from the front, which is not always possible and can be very hard to do in some
environments.
Voice recognition has low accuracy, and a voice can be easily recorded and used for
unauthorized access. An illness such as a cold can alter the user's voice and make identification
more difficult or even impossible.
Retinal scanning is an expensive identification method and can cause delays, as the comparison
with stored templates can take up to 10 seconds.
Fingerprint reading can make mistakes due to dryness or dirt on the finger's skin, as well as
with age. Fingerprints are captured in an image format that requires a lot of memory to process.
Limitations of biometric devices and their system support
Although biometric devices provide opportunities in organizational settings, organisations
willing to implement a biometric system must face its limitations. The first disadvantage of a
biometric system is its high cost. Because a biometric system alone is not effective, it must be
combined with a system supporting smart cards. The cost of implementing both systems
together can reach hundreds of thousands of dollars. In addition, the cost of training
employees on the new system and the temporary loss of productivity due to the training program
add to the cost of implementation. Secondly, organisations will have to deal
with people's aversion to using a new system. In terms of privacy concerns, people will not likely
be willing to accept a system which records and stores their physical and personal traits.
Moreover, people often associate fingerprints and other physical records with criminal contexts,
so ordinary people are more likely to reject a biometric system, while real criminals would
refuse it for fear of being discovered.
Lack of reliability
The last but not least disadvantage of a biometric system is the lack of reliability of some of its
aspects. First, biometric devices can be fooled. As Russel Kay explains in his article "Testing the
limits of biometrics," "Japanese cryptographer Tsutomu Matsumoto at Yokohama National
University found that by making moulds out of gelatine he could reproduce a fingerprint that
would fool 80 percent of commercial readers". On the other hand, if fingerprints or any other
physical traits are compromised through falsification, they cannot simply be replaced like a
password or smart card. Finally, a major inconvenience of biometric systems is the lack of
durability of biometric devices. After frequent use, readers lose their reliability and accuracy,
leading to repeated false rejections and false acceptances.
However, a lot of things are possible with the information. It can be corrupted, taken, stolen, or
otherwise misused. People have concerns about the existing rules on the use of biometric data
and the appropriate safeguards for it. Because of data sharing, biometric data collected for
non-criminal purposes is shared and used for national security purposes with no transparency.
Security cameras are installed in many places that keep on recording and adding photographs to
the database. This adds to security, but it means that anyone could end up in the database, even
those not involved in a crime (Conan, "The Pros and Cons of Gathering Biometric Data," par.
75). DNA presents privacy issues different from those involved in other biometrics collection: a
sample can contain information about a person's entire genetic makeup. False acceptance and
false rejection by biometric systems can make a criminal appear innocent and an innocent person
appear guilty. Hence, careful use of biometrics and the incorporation of appropriate investigation
techniques are important for ensuring national security.

SYSTEM ADMINISTRATOR
A system administrator, or sysadmin, is a person who is responsible for the upkeep,
configuration, and reliable operation of computer systems, especially multi-user computers such
as servers.
The system administrator seeks to ensure that the uptime, performance, resources, and security of
the computers he or she manages meet the needs of the users, without exceeding the budget.
To meet these needs, a system administrator may acquire, install, or upgrade computer
components and software; provide routine automation; maintain security policies; troubleshoot;
train and/or supervise staff; or offer technical support for projects.

PHYSICAL ISSUES
5: Initial Configuration
The first problem that comes to mind is glitches that occur when configuring your network,
systems and resources for use. There are many components to a typical network, and as its size
and use grow, so do its complexities and the possibility for problems to arise. With the rise in
telecommuting over the past 10 years, and the growth of this market in terms of hardware and
software offerings, there are many people setting up systems and networking them together
without any formal education in the topics of systems, networking and security.
4: Credential, Permission and Rights Problems
So, if you configured everything correctly and connected all systems without issue, then what
could possibly go wrong? Anything and everything. The first problem that comes to mind with
Windows systems is credentials, permissions and rights. Most times, users try to access a host
and are not able to because (yep, you guessed it) they cannot log in, or they do not have
permissions to access resources once they are logged in.
Network Performance
This is by far the most common issue with networking in general. With Windows, performance
can be affected in many ways; for example, you might build or buy a computer system without
taking into consideration the applications you will run across the network. The most common
applications are any type that requires a client-to-server relationship, which means the client
installed on the Windows desktop must interface with the server and transmit data over the
network in order to function. If network performance is impacted, either the network is too slow
(a very common complaint), or the application was not developed with the network in mind. This
type of issue can be confusing to solve and normally requires advanced analysis of the problem,
usually with a tool such as a packet analyzer (known as a sniffer).

General Security Concerns

The #1 networking issue when dealing with Windows clients is the poor application of basic
security services and features - or the lack thereof. For example, your system may get a virus (or
other type of malware) that causes the network to fail or ties up your system's resources so
intensely that you cannot even browse a Web page. It is a fact that most of the intrusions on
your network come from within the network, or very easily over wireless connections. This is
seen more with home offices and small companies that cannot afford (or are oblivious to) the
enterprise security solutions used to control, monitor and lock down wireless usage. That does
not mean that your home PC or router cannot be secured. The benefit of most hardware and
software sold today is that almost everything now comes with some form of security features.
Routers now act as firewalls and intrusion detection systems (IDS), and provide detailed logs of
everything going through them. A common form of attack is intrusion. An example would be
someone surfing (or roaming) your neighborhood (or a neighbor themselves) jumping on an open
wireless connection and using your resources (such as the Internet).

LOGICAL ISSUES

A database administrator (DBA) maintains a database system, and is responsible for the
integrity of the data and the efficiency and performance of the system.

A network administrator maintains network infrastructure such as switches and routers,
and diagnoses problems with these or with the behavior of network-attached computers.

A security administrator is a specialist in computer and network security, including the
administration of security devices such as firewalls, as well as consulting on general security
measures.

A web administrator maintains web server services (such as Apache or IIS) that allow for
internal or external access to web sites. Tasks include managing multiple sites, administering
security, and configuring necessary components and software. Responsibilities may also
include software change management.

A computer operator performs routine maintenance and upkeep, such as changing backup
tapes or replacing failed drives in a redundant array of independent disks (RAID). Such tasks
usually require physical presence in the room with the computer, and while less skilled than
sysadmin tasks, may require a similar level of trust, since the operator has access to possibly
sensitive data.


VULNERABLE SECURITY AREAS IN EGERTON UNIVERSITY

