Computer security, also known as cyber security or IT security, is the protection of information
systems from theft of or damage to their hardware, software, and the information on them, as
well as from disruption or misdirection of the services they provide. It includes controlling
physical access to the hardware, as well as protecting against harm that may come via network
access, data and code injection, and malpractice by operators, whether intentional,
accidental, or the result of being tricked into deviating from secure procedures. Organizations
and people that use computers can describe their needs for information security under four major
headings: secrecy, controlling who gets to read information; integrity, controlling how
information changes or resources are used; accountability, knowing who has had access to
information or resources; and availability, providing prompt access to information and resources.
CRYPTANALYSIS
Cryptanalysis is the study of information systems undertaken to discover their hidden aspects.
It is used to breach cryptographic security systems and gain access to the contents of
encrypted messages, even if the cryptographic key is unknown. Cryptanalysis involves the
decryption and analysis of codes, ciphers, or encrypted text, and it uses mathematical
techniques to search for algorithm vulnerabilities and break into cryptographic or information
security systems. Cryptanalytic attack types include:
Man-in-the-Middle (MITM) Attack: this attack occurs when two parties exchange messages or keys
over a channel that appears secure but is actually compromised. The attacker uses this position
to intercept messages that pass through the communications channel. Authenticating both
endpoints of the channel is the standard defense against MITM attacks.
Adaptive Chosen-Plaintext Attack (ACPA): similar to a chosen-plaintext attack (CPA), this
attack selects new plaintexts to encrypt based on data learned from past encryptions.
STEGANOGRAPHY
It is the practice of concealing a file, message, image, or video within another file, message,
image, or video. Generally, the hidden messages appear to be (or be part of) something else:
images, articles, shopping lists, or some other cover text. For example, the hidden message may
be in invisible ink between the visible lines of a private letter. Some implementations of
steganography that lack a shared secret are forms of security through obscurity.
A classic ancient example: a messenger's head was shaved and a message tattooed on his scalp;
once his hair had grown back to hide the message, he could pass through enemy lines without
anyone being aware that the valuable message was right in front of them. The messenger would
have his head shaved again when he was ready to deliver the message to the intended recipient.
Luckily, the modern, technical equivalent does not require that you shave your
head. Instead, covert information can be embedded into standard file types.
One of the most common steganographic techniques is to embed a text file
into an image file. Anyone viewing the image file would see no difference
between the original file and the file with the message embedded into it. This
is accomplished by storing the message in the least significant bits of the data
file.
Steganography in Images
This type of steganography is very effective against discovery and can serve a variety of purposes,
including authentication, concealment of messages, and transmission of encryption keys. The
most effective method for this type of steganography is normally the least significant bit method,
which means that the hidden message alters the last bit of each byte in a picture. Altering that last
bit produces virtually no change to the color of that pixel within the carrier image, which keeps the
message from being easily detected. The best type of image file to hide information inside is a 24-bit
bitmap, due to its large file size and high quality.
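The least significant bit method described above can be sketched in a few lines. The code below is an illustrative toy, not a production tool: it treats the carrier as a flat list of pixel byte values rather than a real image file, which is enough to show why the change is invisible.

```python
# Least-significant-bit (LSB) steganography sketch: hide a short text
# message in the low bit of each byte of "pixel" data. A real image
# would be loaded with an imaging library; here the carrier is just a
# list of byte values.

def embed(carrier, message):
    """Overwrite the LSB of each carrier byte with one message bit."""
    bits = [(byte >> i) & 1 for byte in message.encode("utf-8")
            for i in range(7, -1, -1)]
    if len(bits) > len(carrier):
        raise ValueError("carrier too small for message")
    stego = list(carrier)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # clear low bit, set message bit
    return stego

def extract(stego, length):
    """Recover `length` bytes of hidden message from the LSBs."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for bit in stego[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (bit & 1)
        out.append(byte)
    return out.decode("utf-8")

carrier = [200, 113, 47, 255, 0, 128, 64, 33] * 8   # 64 fake pixel bytes
stego = embed(carrier, "Hi")
print(extract(stego, 2))                                # -> Hi
print(max(abs(a - b) for a, b in zip(carrier, stego)))  # -> 1
```

Because each byte value changes by at most one, the decoded colors of the stego image are visually indistinguishable from the original.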
Steganography in Audio
In audio files, the most prominent method for concealing information is the low-bit encoding method,
which is somewhat similar to the least significant bit method used in image files: the secret
information is embedded in the least significant bits of the audio samples. One issue with low-bit
encoding is that the alterations can be noticeable to the human ear, which makes it risky for someone
trying to hide information. The spread spectrum method is another method that has been used to
conceal information in audio files. It adds random noise to the audio signal, which enables the
information to be spread across the frequency spectrum and remain hidden under the random noise.
The last method seen in audio steganography is echo data hiding, which conceals information in the
echoes that occur naturally within sound files: extra sound is added to these echoes, and that extra
sound carries the concealed message. This is an effective way to hide information, especially since
it can even improve the sound of the original audio file in some cases.
Steganography in Video
Steganography in video is essentially the hiding of information in each frame of a video.
When only a small amount of information is hidden inside the video, it generally is not
noticeable at all; however, the more information that is hidden, the more noticeable it
becomes. This method is effective as well, but it must be done correctly or it will reveal
more information instead of hiding it.
Steganography in Documents
This is essentially the addition of white space and tabs to the ends of the lines of a document.
This type of steganography is extremely effective, because white space and tabs are not
visible to the human eye in most text and document editors.
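The scheme just described can be sketched as follows; the cover text and the one-bit-per-line layout are invented for the example.

```python
# Whitespace steganography sketch: encode each bit of a hidden message
# as a trailing space (0) or tab (1) appended to the lines of a cover
# text. Most editors render trailing whitespace invisibly.

def hide(cover_lines, message):
    bits = [(b >> i) & 1 for b in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(cover_lines):
        raise ValueError("not enough lines to hide the message")
    out = []
    for i, line in enumerate(cover_lines):
        if i < len(bits):
            line = line + (" " if bits[i] == 0 else "\t")
        out.append(line)
    return out

def reveal(lines):
    bits = []
    for line in lines:
        if line.endswith("\t"):
            bits.append(1)
        elif line.endswith(" "):
            bits.append(0)
    data = bytearray()
    for i in range(0, len(bits) - len(bits) % 8, 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        data.append(byte)
    return data.decode()

cover = ["Meeting notes, May 4"] + ["item %d" % i for i in range(15)]
stego = hide(cover, "ok")   # "ok" needs 16 bits -> 16 lines
print(reveal(stego))        # -> ok
```

Note the obvious limitation: any editor that strips trailing whitespace on save destroys the hidden message.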
CRYPTOGRAPHY
Cryptography or cryptology is the practice and study of techniques for secure communication in
the presence of third parties called adversaries. More generally, cryptography is about
constructing and analyzing protocols that prevent third parties or the public from reading private
messages;[3] various aspects of information security, such as data confidentiality, data
integrity, authentication, and non-repudiation,[4] are central to modern cryptography. Modern
cryptography exists at the intersection of the disciplines of mathematics, computer science,
and electrical engineering. Applications of cryptography include ATM cards, computer
passwords, and electronic commerce.
Cryptography is the science of providing security for information. It has been used historically as
a means of providing secure communication between individuals, government agencies, and
military forces. Today, cryptography is a cornerstone of the modern security technologies used to
protect information and resources on both open and closed networks.
Encryption is a modern form of cryptography that allows a user to hide
information from others. Encryption uses a complex algorithm called a cipher to
turn normal data (plaintext) into a series of seemingly random characters
(ciphertext) that is unreadable to those without the special key needed to
decrypt it. Those who possess the key can decrypt the ciphertext in order to view the
plaintext again, rather than the random character string of ciphertext.
Cryptography is closely related to the disciplines of cryptology and cryptanalysis.
Cryptography includes techniques such as microdots, merging words with images,
and other ways to hide information in storage or transit. However, in today's
computer-centric world, cryptography is most often associated with scrambling
plaintext (ordinary text, sometimes referred to as cleartext) into ciphertext (a
process called encryption), then back again (known as decryption). Individuals who
practice this field are known as cryptographers.
Cryptography is the art of protecting information by transforming it (encrypting it) into an unreadable
format called ciphertext. Only those who possess a secret key can decipher (or decrypt) the message
into plaintext. Encrypted messages can sometimes be broken by cryptanalysis, also called codebreaking,
although modern cryptographic techniques are very difficult to break.
As the Internet and other forms of electronic communication become more prevalent, electronic security
is becoming increasingly important. Cryptography is used to protect e-mail messages, credit card
information, and corporate data. One of the most popular cryptography systems used on the Internet is
Pretty Good Privacy, because it is effective and free.
Cryptography systems can be broadly classified into symmetric-key systems, which use a single key that
both the sender and recipient have, and public-key systems, which use two keys: a public key known to
everyone and a private key known only to the recipient of messages.
ASYMMETRIC ENCRYPTION
In cryptography, encryption is the process of encoding messages or information in such a way
that only authorized parties can read it. Encryption does not of itself prevent interception, but
denies the message content to the interceptor. In an encryption scheme, the intended
communication information or message, referred to as plaintext, is encrypted using an encryption
algorithm, generating ciphertext that can only be read if decrypted. For technical reasons, an
encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is
in principle possible to decrypt the message without possessing the key, but, for a well-designed
encryption scheme, large computational resources and skill are required. An authorized recipient
can easily decrypt the message with the key provided by the originator to recipients, but not to
unauthorized interceptors.
Quite simply, encryption is the process of taking information and transforming it using a
mathematical algorithm and an encryption key to make it unreadable to anyone who might come
across it inadvertently or illegitimately. When an authorized user of the information encounters
it, he or she decrypts the information using a similar mathematical algorithm and a key to take
the encrypted ciphertext and transform it back into the original plaintext.
Asymmetric encryption is a form of encryption where keys come in pairs: what one key
encrypts, only the other can decrypt. Frequently (but not necessarily), the keys are
interchangeable, in the sense that if key A encrypts a message, then key B can decrypt it, and if key B
encrypts a message, then key A can decrypt it.
Public key cryptography, or asymmetric cryptography, is any cryptographic system that uses
pairs of keys: public keys which may be disseminated widely, and private keys which are known
only to the owner. This accomplishes two functions: authentication, which is when the public key
is used to verify that a holder of the paired private key sent the message, and encryption,
whereby only the holder of the paired private key can decrypt the message encrypted with the
public key.
In a public key encryption system, any person can encrypt a message using the public key of the
receiver, but such a message can be decrypted only with the receiver's private key. For this to
work it must be computationally easy for a user to generate a public and private key-pair to be
used for encryption and decryption. The strength of a public key cryptography system relies on
the degree of difficulty (computational impracticality) for a properly generated private key to be
determined from its corresponding public key. Security then depends only on keeping the private
key private, and the public key may be published without compromising security.
Two of the best-known uses of public key cryptography are:
Public key encryption, in which a message is encrypted with a recipient's public key. The
message can be decrypted only by someone who possesses the matching private key, who
is thus presumed to be the owner of that key and the person associated with the public key.
This is used in an attempt to ensure confidentiality.
Digital signatures, in which a message is signed with the sender's private key and can be
verified by anyone who has access to the sender's public key. This verification proves that
the sender had access to the private key, and therefore is likely to be the person associated
with the public key. This also ensures that the message has not been tampered with, as any
manipulation of the message will result in changes to the encoded message digest, which
otherwise remains unchanged between the sender and receiver.
To overcome a lack of trust, a number of crypto-systems have been developed around encryption and
decryption algorithms based on fundamentally difficult problems, or one-way
functions, which have been studied extensively by the research community. In this way,
users can be confident that no trap door exists that would render their methods insecure.
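The key-pair relationship described in this section can be demonstrated with a toy RSA example. The primes below are deliberately tiny and the code omits the padding any real system needs, so this is only the arithmetic skeleton, not a secure implementation.

```python
# Toy RSA sketch: what the public key encrypts, only the private key
# decrypts, and a "signature" made with the private key verifies with
# the public key. Real RSA uses primes hundreds of digits long.

p, q = 61, 53
n = p * q                      # modulus, part of both keys (3233)
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

def apply_key(m, key):         # key is (exponent, modulus)
    exp, mod = key
    return pow(m, exp, mod)

public, private = (e, n), (d, n)

m = 65                         # a message, encoded as a number < n
c = apply_key(m, public)       # encrypt with the public key
assert apply_key(c, private) == m    # only the private key recovers it

sig = apply_key(m, private)    # "sign" with the private key
assert apply_key(sig, public) == m   # anyone can verify with the public key
print("ciphertext:", c)
```

The security of the scheme rests on the difficulty of recovering d from (e, n), which requires factoring n; with tiny primes like these, that is of course trivial.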
SYMMETRIC ENCRYPTION
Symmetric-key algorithms[1] are algorithms for cryptography that use the same cryptographic
keys for both encryption of plaintext and decryption of ciphertext. The keys may be identical, or
there may be a simple transformation to go between the two keys. The keys, in practice,
represent a shared secret between two or more parties that can be used to maintain a private
information link.[2] The requirement that both parties have access to the secret key is one of the
main drawbacks of symmetric-key encryption, in comparison to public-key encryption (also
known as asymmetric-key encryption).
Asymmetric Encryption
The problem with secret keys is exchanging them over the Internet or a large network while
preventing them from falling into the wrong hands. Anyone who knows the secret key can
decrypt the message. One answer is asymmetric encryption, in which there are two related keys:
a key pair. A public key is made freely available to anyone who might want to send you a
message. A second, private key is kept secret, so that only you know it.
Any message (text, binary files, or documents) that is encrypted using the public key can
be decrypted only by applying the same algorithm with the matching private key. Likewise, any
message that is encrypted using the private key can be decrypted only with the matching
public key.
This means that you do not have to worry about passing public keys over the Internet (the keys
are supposed to be public). A problem with asymmetric encryption, however, is that it is slower
than symmetric encryption. It requires far more processing power to both encrypt and decrypt the
content of the message.
Symmetric Encryption
Symmetric encryption is the oldest and best-known technique. A secret key, which can be a
number, a word, or just a string of random letters, is applied to the text of a message to change
the content in a particular way. This might be as simple as shifting each letter by a number of
places in the alphabet. As long as both sender and recipient know the secret key, they can encrypt
and decrypt all messages that use this key.
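The letter-shifting scheme just described is the classic Caesar cipher, which makes a minimal illustration of symmetric encryption: the same secret key (the shift amount) both encrypts and decrypts.

```python
# Caesar cipher sketch: shift each letter a fixed number of places in
# the alphabet. The shift is the shared secret key; applying the
# negative shift decrypts.

import string

def caesar(text, key):
    shifted = string.ascii_lowercase[key:] + string.ascii_lowercase[:key]
    table = str.maketrans(string.ascii_lowercase, shifted)
    return text.lower().translate(table)

secret_key = 3
ciphertext = caesar("attack at dawn", secret_key)
print(ciphertext)                       # -> dwwdfn dw gdzq
print(caesar(ciphertext, -secret_key))  # -> attack at dawn
```

With only 25 useful keys, a Caesar cipher falls instantly to the brute-force cryptanalysis discussed earlier, which is why real symmetric ciphers use keys of 128 bits or more.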
Symmetric-key encryption can use either stream ciphers or block ciphers.[4]
Stream ciphers encrypt the digits (typically bytes) of a message one at a time.
Block ciphers take a number of bits and encrypt them as a single unit, padding the
plaintext so that its length is a multiple of the block size. Blocks of 64 bits were commonly used;
the Advanced Encryption Standard (AES) algorithm, approved by NIST in December 2001,
and the GCM mode of operation built on it use 128-bit blocks.
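The stream-cipher idea can be sketched as follows: bytes are encrypted one at a time by XOR with a keystream derived from the shared key. The keystream here comes from Python's seeded PRNG purely for illustration; a real stream cipher such as ChaCha20 uses a cryptographic generator.

```python
# Minimal stream-cipher sketch. Because XOR is its own inverse,
# encryption and decryption are the same operation, which is
# characteristic of stream ciphers. A seeded general-purpose PRNG is
# NOT cryptographically secure; it only stands in for the keystream.

import random

def stream_xor(data, key):
    rng = random.Random(key)                      # deterministic keystream
    return bytes(b ^ rng.randrange(256) for b in data)

key = "shared secret"
ct = stream_xor(b"one byte at a time", key)
pt = stream_xor(ct, key)                          # the same call decrypts
print(pt)                                         # -> b'one byte at a time'
```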
Integrity
Integrity seeks to maintain resources in a valid and intended state. This might be important to
keep resources from being changed improperly (adding money to a bank account) or to maintain
consistency between two parts of a system (double-entry bookkeeping). Integrity is not a
synonym for accuracy, which depends on the proper selection, entry and updating of information.
The most highly developed policies for integrity reflect the concerns of the accounting and
auditing community for preventing fraud. A classic example is a purchasing system. It has three
parts: ordering, receiving, and payment. Someone must sign off on each step, the same person
cannot sign off on two steps, and the records can only be changed by fixed procedures, e.g., an
account is debited and a check written only for the amount of an approved and received order.
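The three-part purchasing rule above can be sketched as a simple separation-of-duty check; the order structure, step names, and people are invented for the example.

```python
# Separation-of-duty sketch: every step of an order needs a sign-off,
# and no person may sign off on more than one step of the same order.

def approve(order, step, person):
    if person in order["signoffs"].values():
        raise PermissionError(f"{person} already signed another step")
    order["signoffs"][step] = person

order = {"id": 42, "signoffs": {}}
approve(order, "ordering", "alice")
approve(order, "receiving", "bob")
try:
    approve(order, "payment", "alice")    # same person, second step
except PermissionError as err:
    print(err)                            # -> alice already signed another step

# Payment is still unsigned until a third person approves it.
complete = set(order["signoffs"]) == {"ordering", "receiving", "payment"}
print(complete)                           # -> False
```

As the text notes, the point of the rule is that subverting the control now requires collusion between at least two people.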
Accountability
In any real system there are many reasons why actual operation will not always reflect the
intentions of the owners: people make mistakes, the system has errors, the system is vulnerable
to certain attacks, the broad policy was not translated correctly into detailed specifications, the
owners change their minds, etc. When things go wrong, it is necessary to know what has
happened: who has had access to information and resources and what actions have been taken.
This information is the basis for assessing damage, recovering lost information, evaluating
vulnerabilities, and taking compensating actions outside the system such as civil suits or criminal
prosecution.
Availability
Availability seeks to ensure that the system works promptly. This may be essential for operating
a large enterprise (the routing system for long-distance calls, an airline reservation system) or for
preserving lives (air traffic control, automated medical systems). Delivering prompt service is a
requirement that transcends security, and computer system availability is an entire field of its
own. Availability in spite of malicious acts and environmental mishaps, however, is often
considered an aspect of security.
An availability policy is usually stated like this:
On average, a terminal shall be down for less than ten minutes per month.
A particular terminal (e.g., an automatic teller machine or a reservation agent's keyboard and
screen) is up if it responds correctly within one second to a standard request for service;
otherwise it is down. This policy means that the up time at each terminal, averaged over all the
terminals, must be at least about 99.977%.
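A quick check of the arithmetic behind this policy, assuming a 30-day month:

```python
# Ten minutes of allowed downtime out of a 30-day month's worth of
# minutes works out to roughly "three nines" of availability.

minutes_per_month = 30 * 24 * 60           # 43200
max_downtime = 10                          # minutes per month
uptime = 1 - max_downtime / minutes_per_month
print(f"{uptime:.6%}")                     # -> 99.976852%
```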
Such a policy covers all the failures that can prevent service from being delivered: a broken
terminal, a disconnected telephone line, loss of power at the central computer, software errors,
operator mistakes, system overload, etc. Of course, to be implementable it must be qualified by
some statements about the environment, e.g., that power doesn't fail too often.
A security policy for availability usually has a different form, something like this:
No inputs to the system by any user who is not an authorized administrator shall cause any
other user's terminal to be down.
Note that this policy doesn't say anything about system failures, except to the extent that they
can be caused by user actions. Also, it says nothing about other ways in which an enemy could
deny service, e.g., by cutting a telephone line.
Authentication
To answer the question "Who is responsible for this statement?" it is necessary to know what sorts
of entities can be responsible for statements. These entities are (human) users or (computer)
systems, collectively called principals. A user is a person, but a system requires some
explanation. A computer system consists of hardware (e.g., a computer) and perhaps
software (e.g., an operating system). Systems implement other systems: for example, a
computer implements an operating system, which implements a database management system,
which implements a user query process. As part of authenticating a system, it may be necessary
to verify that the system that implements it is trusted to do so correctly.
The basic service provided by authentication is information that a statement was made by some
principal. Sometimes, however, there is a need to ensure that the principal will not later be able to
claim that the statement was forged and he never made it. In the world of paper documents, this
is the purpose of notarizing a signature; the notary provides independent and highly credible
evidence, which will be convincing even after many years, that the signature is genuine and not
forged. This aggressive form of authentication is called non-repudiation.
Authorization
Authorization determines who is trusted for a given purpose. More precisely, it determines
whether a particular principal, who has been authenticated as the source of a request to do
something, is trusted for that operation. Authorization may also include controls on the time at
which something can be done (only during working hours) or the computer terminal from which
it can be requested (only the one on the manager's desk).
It is a well-established practice, called separation of duty, to insist that important operations
cannot be performed by a single person, but require the agreement of (at least) two different
people. This rule makes it less likely that controls will be subverted, because it means that
subversion requires collusion.
Auditing
Given the reality that every computer system can be compromised from within, and that many
systems can also be compromised if surreptitious access can be gained, accountability is a vital
last resort. Accountability policies were discussed earlier: e.g., all significant events should be
recorded, and the recording mechanisms should be nonsubvertible. Auditing services support
these policies. Usually they are closely tied to authentication and authorization, so that every
authentication is recorded, as well as every attempted access, whether authorized or not.
The audit trail is not only useful for establishing accountability. In addition, it may be possible to
analyze the audit trail for suspicious patterns of access and so detect improper behavior by both
legitimate users and masqueraders. The main problem, however, is how to process and interpret
the audit data. Both statistical and expert-system approaches are being tried.
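A simple statistical pass over an audit trail can be sketched as below; the log records and the threshold are invented for the example.

```python
# Audit-trail analysis sketch: scan recorded access attempts and flag
# principals with repeated denied requests as suspicious.

from collections import Counter

audit_trail = [
    ("alice",   "read payroll",   "authorized"),
    ("mallory", "read payroll",   "denied"),
    ("mallory", "read payroll",   "denied"),
    ("mallory", "write payroll",  "denied"),
    ("bob",     "read inventory", "authorized"),
]

failures = Counter(user for user, _, result in audit_trail
                   if result == "denied")
suspects = [user for user, count in failures.items() if count >= 3]
print(suspects)   # -> ['mallory']
```

Real systems apply far richer models (and expert-system rules) to the same data, but the principle of mining the recorded accesses for anomalies is the same.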
Biometrics is often combined with another factor, such as a password or PIN, in a multifactor
authentication system. For example, a smartphone user might log on with his personal
identification number (PIN) and then provide an iris scan to complete the authentication process.
Types of biometric authentication technologies:
Retina scans produce an image of the blood vessel pattern in the light-sensitive surface lining the
individual's inner eye.
Retina scanning is a biometric verification technology that uses an image of an individual's
retinal blood vessel pattern as a unique identifying trait for access to secure installations.
Biometric verification technologies are based on ways in which individuals can be uniquely
identified through one or more distinguishing biological traits. Unique identifiers include
fingerprints, hand geometry, earlobe geometry, retina and iris patterns, voice waves, DNA and
signatures.
Iris recognition is used to identify individuals based on unique patterns within the ring-shaped
region surrounding the pupil of the eye.
Iris recognition is a method of identifying people based on unique patterns within the ring-shaped
region surrounding the pupil of the eye. The iris usually has a brown, blue, gray, or
greenish color, with complex patterns that are visible upon close inspection. Because it makes
use of a biological characteristic, iris recognition is considered a form of biometric verification.
Fingerscanning, the digital version of the ink-and-paper fingerprinting process, works with
details in the pattern of raised areas and branches in a human finger image.
Fingerscanning, also called fingerprint scanning, is the process of electronically obtaining and
storing human fingerprints. The digital image obtained by such scanning is called a finger image.
In some texts, the terms fingerprinting and fingerprint are used, but technically, these terms refer
to traditional ink-and-paper processes and images.
Voice identification systems rely on characteristics created by the shape of the speaker's mouth
and throat, rather than more variable conditions.
Voice ID (sometimes called voice authentication) is a type of user authentication that uses
voiceprint biometrics. Voice ID relies on the fact that vocal characteristics, like fingerprints and
the patterns of people's irises, are unique for each individual.
Accurate Identification
If you have set up the system correctly, you can use biological characteristics like fingerprints and iris
scans, which offer unique and accurate identification methods. These features cannot be
easily duplicated, which means only the authorized person gets access and you get a high level of
security.
Accountability
Biometric log-ins mean a person can be directly connected to a particular action or event. In
other words, biometrics creates a clear, definable audit trail of transactions or activities. This is
especially handy in the case of security breaches, because you know exactly who is responsible.
As a result you get true and complete accountability, which cannot be duplicated.
Easy and Safe for Use
The good thing about using biometrics for identification is that modern systems are built and
designed to be easy and safe to use. Biometrics technology gives you accurate results with
minimal invasiveness, as a simple scan or a photograph is usually all that's required. Moreover,
the software and hardware can be easily used, and you can have them installed without the need
for excessive training.
Time Saving
Biometric identification is extremely quick, which is another advantage it has over other
traditional security methods. A person can be identified or rejected in a matter of seconds. For
business owners who understand the value of time management, this technology can only
benefit office revenue by increasing productivity and reducing the costs of fraud and waste.
User Friendly Systems
You can have biometric systems installed rather easily, and after that they do their job quickly,
reliably, and uniformly. You will need only a minimal amount of training to get the system
operational, and there is no need for expensive password administrators. High-quality systems
also keep the ongoing maintenance costs low.
Security
Another advantage of these systems is that they can't be guessed or stolen, so they will be
a long-term security solution for your company. The problem with strong password systems is
that they are often a sequence of numbers, letters, and symbols, which makes them difficult to
remember on a regular basis. The problem with tokens is that they can be easily stolen or lost,
and both these traditional methods involve the risk of credentials being shared. As a result you
can't ever be really sure who the real user is. That won't be the case with biometric
characteristics, and you won't have to deal with the problems of sharing, duplication, or fraud.
Convenience
Biometrics is considered a convenient security solution because you don't have to remember
passwords or carry extra badges, documents, or ID cards. You are saved the hassle of
having to change passwords frequently or replace cards and badges. People forget
passwords, and ID cards get lost, which can be a huge nuisance with traditional security methods.
Versatility
There are different types of biometric scanners available today, and they can be used for
various applications. Companies can use them at security checkpoints, including entrances,
exits, doorways, and more.
Moreover, you can make the most of biometric solutions to decide who can access certain
systems and networks. Companies can also use them to monitor employee time and attendance,
which raises accountability.
Scalability
Biometric systems can be quite flexible and easily scalable. You can use higher-end
sensors and security systems based on your requirements. At the lowest level you can use
characteristics that are not very discriminative; however, if you are looking for a higher level of
security for large-scale databases, then you can use systems with more discriminable features
or multimodal applications to increase identification accuracy.
ADVANTAGES OF BIOMETRIC OVER TRADITIONAL METHODS
Passwords and PINs have been the most frequently used authentication methods. Their uses
include controlling access to a building or a room, securing access to computers, networks, and
applications on personal computers, and many more. In some higher-security applications,
handheld tokens such as key fobs and smart cards have been deployed. Due to problems
related to these methods, the suitability and reliability of these authentication technologies have
been questioned, especially in the modern world with modern applications. Biometrics offers
some benefits compared to these authentication technologies.
INCREASED SECURITY
Biometric technology can provide a higher degree of security compared to traditional
authentication methods. Chirillo (2003 p. 2) stated that biometrics is preferred over traditional
methods for many reasons which include the fact that the physical presence of the authorized
person is required at the point of identification. This means that only the authorized person has
access to the resources.
The effort of managing several passwords has left many people choosing easy or common words,
with a considerable number writing them down in conspicuous places. This vulnerability leads to
passwords being easily guessed and compromised. Also, tokens can be easily stolen, as a token
is something you have. By contrast, it is almost impossible for biometric data to be guessed or
stolen in the same manner as a token or password. Nanavati (2002 p. 4) was of the opinion that
although some biometric systems can be broken under certain conditions, "today's biometric
systems are highly unlikely to be fooled by a picture of a face..." He further added that this is
based on the assumption that the impostor has been able to successfully gather these physical
characteristics, which he concluded is unlikely in most cases.
INCREASED CONVENIENCE
One major reason passwords are sometimes kept simple is that they can be easily forgotten.
To increase security, many computer users are required to manage several passwords, and this
increases the tendency to forget them. Cards and tokens can be stolen or forgotten as well, even
though attaching them to keyholders or chains can reduce the risk. Because biometric
technologies are based on something you are, they are almost impossible to forget or lose.
This characteristic allows biometrics to offer much more convenience than systems
based on keeping possession of cards or remembering several passwords.
Biometrics can greatly simplify the whole authentication process, which reduces the
burden on the user as well as the system administrator (for PC applications where biometrics
replaces multiple passwords).
Nanavati (2002 p. 5) stated that "Biometric authentication also allows for the association of
higher levels of rights and privileges with a successful authentication." He further explained that
information of high sensitivity can be made more readily available on a network which is
biometrically protected than one which is password protected. This can increase convenience as
a user can access otherwise protected data without any need of human intervention.
INCREASED ACCOUNTABILITY
Traditional authentication methods such as tokens, passwords and PINs can be shared thereby
increasing the possibility of unaccountable access, even though it might be authorized. Many
organizations share common passwords among administrators for the purpose of facilitating
system administration. Unluckily, because there is uncertainty as to who at a particular point in
time is using the shared password or token, accountability of any action is greatly reduced. Also,
the user of a shared password or token may not be authorized and sharing makes it even hard to
verify, the security (especially confidentiality and integrity) of the system is also reduced.
Increased security awareness in organizations and in the applications being used has led to the
need for strong and reliable auditing and reporting. Deploying biometrics to secure access to
computers and other facilities eliminates occurrences such as buddy-punching and therefore
provides a high level of certainty as to who accessed which computer at what point in time.
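The accountability gain described above comes down to tying every authentication event to exactly one enrolled person. A minimal sketch of such an audit trail, using only the standard library (the user and resource names are hypothetical):

```python
import datetime

# Hypothetical audit trail: each successful biometric match maps to exactly
# one enrolled identity, so every access event names an individual person.
audit_log = []

def record_access(user_id: str, resource: str) -> dict:
    """Append an access event; biometrics ensure user_id cannot be shared."""
    event = {
        "user": user_id,
        "resource": resource,
        "time": datetime.datetime.now().isoformat(timespec="seconds"),
    }
    audit_log.append(event)
    return event

record_access("alice", "payroll-server")
record_access("bob", "payroll-server")

# Unlike a shared admin password, each entry identifies an individual.
users = [e["user"] for e in audit_log]
print(users)  # ['alice', 'bob']
```

With a shared password, both entries would show the same anonymous account; with biometric authentication, the log answers "who accessed what, and when" directly.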
In order for facial recognition to be accurate, the image of the user's face must be lit evenly and
preferably from the front, which is not always possible and can be very hard to achieve in some
environments.
Voice recognition has low accuracy, and a voice can be easily recorded and used for
unauthorized access. An illness such as a cold can alter the user's voice and make identification
more difficult or even impossible.
Retinal scanning is an expensive identification method and can cause delays, as the comparison
with stored templates can take up to 10 seconds.
Fingerprint readers can make mistakes when the skin of the fingers is dry or dirty, and accuracy
also degrades with age. Fingerprints are captured in an image format that requires a lot of
memory to process.
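The accuracy problems above stem from the fact that biometric matching is never exact: a fresh sample is compared against a stored template and accepted only if the similarity clears a threshold. A simplified sketch with made-up bit-vector templates (not a real fingerprint algorithm) showing how a degraded capture causes a false rejection:

```python
def similarity(template: str, sample: str) -> float:
    """Fraction of matching bits between a stored template and a fresh sample."""
    matches = sum(t == s for t, s in zip(template, sample))
    return matches / len(template)

def authenticate(template: str, sample: str, threshold: float = 0.9) -> bool:
    """Accept the sample only if similarity clears the (hypothetical) threshold."""
    return similarity(template, sample) >= threshold

stored = "1011001110101100"          # enrolled template (toy example)
clean  = "1011001110101101"          # good capture: 1 of 16 bits differs
dry    = "1011011010001001"          # dry/dirty finger: 5 of 16 bits differ

print(authenticate(stored, clean))   # True  -> accepted
print(authenticate(stored, dry))     # False -> false rejection
```

Raising the threshold reduces false acceptances but increases false rejections like the one above; lowering it does the reverse. Real systems tune this trade-off per deployment.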
Limitations of biometric devices and their system support.
Although biometric devices provide opportunities in organizational settings, organisations
willing to implement a biometric system must face its limitations. The first disadvantage of a
biometric system is its high cost. Because a biometric system alone is not effective, it must be
combined with a system supporting smart cards. The cost of implementing both systems
together can reach hundreds of thousands of dollars. In addition, the cost of training employees
on the new system, and the temporary loss of productivity due to the training program, add to
the cost of implementation. Secondly, organisations will have to deal with people's aversion to
using a new system. In terms of privacy concerns, people will not likely be willing to accept a
system which records and stores their physical and personal traits. Moreover, people often
associate fingerprints and other physical records with criminal contexts, so ordinary people may
be inclined to reject a biometric system, while real criminals would refuse it for fear of being
discovered.
Lack of reliability.
The last but not least disadvantage of biometric systems is the lack of reliability of some of their
aspects. First, biometric devices can be fooled. As Russel Kay explains in his article Testing the
limits of biometrics, "Japanese cryptographer Tsutomu Matsumoto at Yokohama National
University found that by making moulds out of gelatine he could reproduce a fingerprint that
would fool 80 percent of commercial readers". On the other hand, if fingerprints or any other
physical traits are compromised through falsification, they cannot simply be replaced like a
password or smart card. Finally, a major inconvenience of biometric systems is the lack of
durability of biometric devices. After frequent use, readers lose their reliability and accuracy,
leading to repetitive false rejections and false acceptances.
However, a lot of things are possible with the information. It can be corrupted, taken, stolen, or
otherwise misused. People have concerns about the existing rules on the use of biometric data
and the appropriate safeguards for it. Because of data sharing, biometric data collected for
non-criminal purposes is shared and used for national security purposes with no transparency.
Security cameras are installed at many places that keep on recording and adding photographs to
the database. This adds to security, but it means that anyone could end up in the database, even
those not involved in a crime (Conan, "The Pros and Cons of Gathering Biometric Data," par.
75). DNA presents privacy issues different from those involved in other biometrics collection. A
sample can contain information about a person's entire genetic makeup. False acceptances and
false rejections by biometric systems can make a criminal appear innocent and an innocent
person appear to be a criminal. Hence, careful use of biometrics and incorporation of appropriate
investigation techniques are important for ensuring national security.
SYSTEM ADMINISTRATOR
A system administrator, or sysadmin, is a person who is responsible for the upkeep,
configuration, and reliable operation of computer systems; especially multi-user computers, such
as servers.
The system administrator seeks to ensure that the uptime, performance, resources, and security of
the computers he or she manages meet the needs of the users, without exceeding the budget.
To meet these needs, a system administrator may acquire, install, or upgrade computer
components and software; provide routine automation; maintain security policies; troubleshoot;
train and/or supervise staff; or offer technical support for projects.
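"Routine automation" typically means scripting recurring checks so they run without manual effort. A minimal sketch of a disk-usage check a sysadmin might schedule, using only the standard library (the 90% limit is a hypothetical threshold):

```python
import shutil

def disk_usage_percent(path: str = "/") -> float:
    """Return the percentage of used space on the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

def check_disk(path: str = "/", limit: float = 90.0) -> str:
    """Report OK or WARN depending on a hypothetical usage limit."""
    pct = disk_usage_percent(path)
    status = "WARN" if pct > limit else "OK"
    return f"{status}: {path} at {pct:.1f}% used"

print(check_disk("/"))
```

Scheduled via cron or Task Scheduler, a check like this turns a manual inspection into an automatic report, which is exactly the kind of routine work the paragraph above describes.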
PHYSICAL ISSUES
5: Initial Configuration
The first problem that comes to mind is glitches that occur when configuring your network, your
systems and resources for use. There are many components to a typical network, and as size and
use grow, so do its complexities and the possibility for problems to arise. With the rise in
telecommuting over the past 10 years, and the growth of this market in terms of hardware and
software offerings, many people are setting up systems and networking them together without
any formal education in systems, networking and security.
4: Credential, Permission and Rights Problems
So, if you configured everything correctly and connected all systems without issue, what could
possibly go wrong? Anything and everything. The first problem that comes to mind with
Windows systems is credentials, permissions and rights. Most times, you may try to access a
host and not be able to because, yes, you guessed it, users cannot log in, or they do not have
permission to access resources once they are logged in.
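Diagnosing such failures usually starts by checking, for the current account, whether the resource is visible, readable, or writable at all. A small sketch using Python's standard library (the missing share path is made up for illustration):

```python
import os
import tempfile

def describe_access(path: str) -> dict:
    """Report which basic permissions the current user holds on `path`."""
    return {
        "exists": os.access(path, os.F_OK),
        "read": os.access(path, os.R_OK),
        "write": os.access(path, os.W_OK),
    }

# Demonstrate on a temporary file we know we own: all checks pass.
with tempfile.NamedTemporaryFile() as f:
    print(describe_access(f.name))

# A missing or inaccessible resource fails every check.
print(describe_access("/no/such/share"))
```

On Windows, NTFS and share-level ACLs add further layers beyond these basic checks, but a quick probe like this separates "the resource does not exist" from "the account lacks rights."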
Network Performance
This is by far the most common issue with networking in general. With Windows, performance
can be affected in many ways: for example, if you build or buy a computer system without
taking into consideration the applications you will run across the network. The most common
applications are any type that requires a client-to-server relationship, which means the client
installed on the Windows desktop must interface and transmit data over the network in order to
function. If network performance is impacted, either the network is too slow (a very common
complaint), or the application was not developed with the network in mind. This type of issue
can be confusing to solve and normally requires advanced analysis, usually with a tool such as a
packet analyzer (known as a sniffer).
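Before reaching for a full packet analyzer, a quick first check is often to time the round trip to the server the client talks to. A rough sketch that times a plain TCP connection; the local throwaway listener stands in for a real application server:

```python
import socket
import threading
import time

def connect_time_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Measure how long a plain TCP connect to host:port takes, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

# Demonstrate against a throwaway local listener (stand-in for a real server).
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=lambda: server.accept(), daemon=True).start()

ms = connect_time_ms("127.0.0.1", port)
print(f"connect took {ms:.2f} ms")
server.close()
```

A consistently slow connect time points at the network path; a fast connect followed by a slow application points at the software, which narrows down where the sniffer should look.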
LOGICAL ISSUES
A database administrator (DBA) maintains a database system, and is responsible for the
integrity of the data and the efficiency and performance of the system.
A web administrator maintains web server services (such as Apache or IIS) that allow for
internal or external access to web sites. Tasks include managing multiple sites, administering
security, and configuring necessary components and software. Responsibilities may also
include software change management.
A computer operator performs routine maintenance and upkeep, such as changing backup
tapes or replacing failed drives in a redundant array of independent disks (RAID). Such tasks
usually require physical presence in the room with the computer, and while less skilled than
sysadmin tasks, may require a similar level of trust, since the operator has access to possibly
sensitive data.