
COSC1300 - Lecture Notes
Web Servers and Web Technology

"Computer insecurity is inevitable. Networks will be hacked. Fraud will be committed. Money will be lost. People will die."

Bruce Schneier, author of "Applied Cryptography" and CTO of Counterpane Security.

In this chapter, we cover some basic computer and communications security issues, then focus on web-specific security. We conclude the chapter with some examples of useful security software.

Table Of Contents

Introduction
Security Issues
Basic Terms
Typical Security Attacks
Web Specific Attacks
Web Server Security
Cryptography
Basic Encryption
Symmetric Key Encryption
Asymmetric Key Encryption
Authentication
Secure Web Servers
Firewalls
Privacy

Pre-lecture Reading Material.


1. Introduction
Data security is an old problem. Even in the days before computers, there were data
security issues in the military and commercial domains. For example:
A written signature on paper in the presence of a witness was considered adequate
proof that a person is who they say they are. The weakness is that a signature can
be forged.
Important documents such as financial records, banknotes and share certificates
can be kept in a safe. In an organisation, opening the safe may require more than
one key.
Military communications, sent by Morse Code, were often encrypted prior to
transmission to foil enemy intelligence. Even before electronic communications,
simple paper-based ciphers would be used to transmit sensitive information.
Banknotes, cheques and certificates are carefully prepared on watermarked paper to prevent forgery.
With the advent of modern mass communication systems, it is increasingly difficult to
prevent unauthorised access to sensitive data. Identity fraud, a crime which often
manifests itself in the form of credit card scams, is becoming more widespread.
Powerful computers make it possible to break strong encryption and probe a remote
computer system for security flaws. Crackers regularly violate the security of remote
systems for a variety of reasons - cash incentives, political agendas or ego gratification.
In order to prevent these types of attacks it is important to understand what kinds of
attacks are possible, then look at ways in which the risks of attack can be minimised
while still providing acceptable service to authorised parties.

2. Security Issues
2.1 Basic Terms
Protection - the measures taken to prevent security breaches
Security - the resistance of a system to unauthorised access
Authentication - proving the identity of a party
Authorisation - the level of access to data or actions permitted to a given party
Firewall - a service which screens access to a network
Encryption - technology allowing secure communications

2.2 Typical Security Attacks


Some typical security attacks include:
Virus: A self-replicating piece of code which attaches itself to another piece of data such as an executable program or an email. Viruses are often programmed specifically to exploit security holes in systems which have become subject to "feature bloat". Beware of any software which automatically runs foreign macros or scripts without asking the user.
Worm: Like a virus, only it is a standalone program.
Trojan: A malicious piece of code which poses as a legitimate program.
Undocumented feature exploitation: Some operating systems and programs have
undocumented debugging or management facilities which can be discovered by a
clever cracker and utilised to wreak havoc.
Buffer overflow: A data string is provided to a server program which overruns its
input buffer and corrupts its internal state, allowing abnormal operation of the
server program.
Shell modifier abuse: A data string is passed to a server program which has
embedded shell commands in it. These commands may be inadvertently executed
by the server program. This only tends to work on OSs which have shells (eg.
UNIX and NT but not MacOS).
Port scanning: Often used as a precursor to an attack in order to discover weaknesses in a remote host. A program such as nmap is used to find out which services are listening on which ports (a minimal sketch of the idea appears after this list).
Denial of Service (DoS): Very common attack which bombards the server with requests until the server machine crashes (the classic "blue screen of death" on NT) or is simply unable to handle actual client requests.


Distributed DoS (DDoS): A DoS attack mounted from several machines
simultaneously. This is often done by crackers after gaining access to several
hundred machines, then using these "innocent" systems to attack a target machine.
SYN flooding: Sending of SYN packets to a host until its TCP/IP port backlog limit is reached and it can't service real client requests. This is a form of DoS.
IP spoofing: Sending IP packets to a remote host which have the sender field
modified to a different IP address (ie. impersonating another machine). This is
often used immediately following a DoS on the machine being impersonated.
Packet sniffing: Somewhat uncommon, this requires placing a "sniffer" machine
with its Ethernet card set to "promiscuous" mode onto a network and
eavesdropping on the passing data. Interesting packets with passwords and other
sensitive data can be found this way. This may require physical access to the target
network.
Password cracking: Find out all about a user on the system, i.e. their login name, birthdate, pet's name, favourite football team, spouse's name, mother's maiden name, etc., then try to guess their password. Another variation is the "dictionary" attack, where dictionary words are used in the guesses, including substitution of zeros for Os and threes for Es, etc.
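To make the port scanning idea concrete, the following minimal Python sketch performs a simple TCP "connect" scan. This example is not part of the original notes, the target address is a hypothetical documentation address, and a real tool such as nmap is far more capable; only scan hosts you are authorised to test.

# Minimal TCP "connect" port scan: try to open a connection to each port and
# record which ones accept.
import socket

def scan(host, ports):
    open_ports = []
    for port in ports:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(0.5)                        # do not hang on filtered ports
        try:
            if sock.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                open_ports.append(port)
        finally:
            sock.close()
    return open_ports

print(scan("192.0.2.10", [21, 22, 25, 80, 110, 443]))   # hypothetical target

Each open port corresponds to a listening service, which an attacker can then probe for known weaknesses.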
The darker side of security attacks include techniques such as:
Bribery / blackmail: Unsurprisingly, the weakest link in the security chain is often
the staff of an organisation. Greed and fear are strong motivators to get someone to
reveal a password. A good reason for enforcement of the principle of minimum
privilege.
Garbage sifting: There's an amazing amount of information which can be gained by going through someone's trash - credit card dockets, passwords, account numbers, important documents. Invest in a paper shredder.
Van Eck Phreaking: Using a specially tuned directional antenna it is possible to
pick up electromagnetic signals from a monitor. An amplifier and some
phase-matching hardware can then be used to tune in on the video signal and
display it on a monitor. This may also be possible on keyboard cable signals. An
earthed, conductive "tent", known as a Faraday Cage, can prevent these signals
escaping.
Electromagnetic pulse attack: A large, sudden electromagnetic disturbance can
reboot or even destroy computer hardware when correctly deployed. In a
high-security environment this could have serious consequences. Proximity to the
target is a prerequisite.
The US Government produced a document in May, 2000, outlining the Top Ten Internet
Security Threats. They are:
1. BIND weaknesses: nxt, qinv and in.named allow immediate root compromise.
2. Vulnerable CGI programs and application extensions (e.g., ColdFusion) installed
on web servers.
3. Remote Procedure Call (RPC) weaknesses in rpc.ttdbserverd (ToolTalk), rpc.cmsd
(Calendar Manager), and rpc.statd that allow immediate root compromise
4. RDS security hole in the Microsoft Internet Information Server (IIS).
5. Sendmail buffer overflow weaknesses, pipe attacks and MIMEbo, that allow
immediate root compromise.

6. sadmind and mountd


7. Global file sharing and inappropriate information sharing via NFS and Windows
NT ports 135->139 (445 in Windows2000) or UNIX NFS exports on port 2049.
Also Appletalk over IP with Macintosh file sharing enabled.
8. User IDs, especially root/administrator with no passwords or weak passwords.
9. IMAP and POP buffer overflow vulnerabilities or incorrect configuration.
10. Default SNMP community strings set to "public" and "private".
Some strategies for dealing with these threats can be found at
http://www.sans.org/topten.htm.

Checkpoint Questions
1. Why is data security needed?
2. What factors have increased the importance of data security in recent times?
3. Generally speaking, what is the weakest point in a security-conscious computing
environment?
4. What is "feature bloat"? Give examples of how it can open security holes.
5. Why is a sender's IP address on a packet not enough information to establish trust?

HTTP & Apache


COSC1300 - Lecture Notes
Web Servers and Web Technology

Web Specific Attacks


Copyright 2000 RMIT Computer Science
All Rights Reserved

COSC1300
Introduction

Web Server Security

Table Of Contents

In this section we look at some specific techniques employed by crackers to compromise the security of web servers.


3. Web Specific Attacks


Web servers are often the weakest point on a network. The demand for functionality,
flexibility and usability as well as their public exposure makes them prime targets.
Politically motivated crackers will often place a message on the target site. The use of
web servers as e-commerce portals also allows the frightening possibility of credit card
theft or fraud. Common web server attacks include:
CGI abuse: Invalid data is sent to a CGI script, possibly causing a buffer overflow or running a command using shell metacharacters. Commonly the attacker wants the password file to be displayed, so that they can find the login names of users and attempt to guess their passwords. A very common form of attack (a sketch of the shell metacharacter problem appears after this list).
Brute force password attack: Some websites use the system password file to
authenticate web logins. The fact that a web server is optimised to serve requests
quickly means that it can also be used to guess passwords quickly, a lot quicker
than using telnet or ftp, which will usually provide a delay between subsequent
guesses and log you out after several unsuccessful guesses.
WebJacking: Instead of crippling your opponent's site, or defacing it, why not deprive it of visitors? Even better, bring your opponent's visitors to your site! There, you can tell them about how nasty your opponent is, and perhaps conduct credit card fraud, while earning advertising revenue at the same time. This is what webjacking is all about. All you need to do is change the domain name to IP address mapping in the domain registrar's records. For more information, look at Webjacking explained. This technique is alleged to have been used by a Brisbane pornographer to generate traffic to his websites, so that advertising revenue could be gained. Millions of unsuspecting websurfers were served pornography. You may find that The Sun Keeps Setting on the Microsoft Empire and Linux Security make interesting reading.
Remote site browsing: Badly configured web servers may allow browsing of
private parts of the document tree, such as directory listings which show
world-readable files not intended for the general public. A symbolic link to the
password file in your document tree is one potential risk.
Application server attack: Some application servers contain security holes. For example, any application server that comes with a default administration password is vulnerable. Diagnostic features left active can also allow crackers to gain clues about a site's possible security weaknesses.
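To illustrate the shell metacharacter problem mentioned in the CGI abuse item above, here is a small Python sketch. The "finger" lookup is a hypothetical CGI-style example, not code from these notes; the same principle applies in any CGI language.

# Illustration of the shell metacharacter problem and one way to avoid it.
import os
import subprocess

def lookup_unsafe(username):
    # DANGEROUS: if username is "alice; cat /etc/passwd", the shell runs both commands.
    os.system("finger " + username)

def lookup_safer(username):
    # Validate the input, then avoid the shell entirely: the argument-list form never
    # passes through a shell, so metacharacters are not interpreted.
    if not username.isalnum():
        raise ValueError("invalid username")
    subprocess.run(["finger", username], check=True)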
Even Australia's new electronic tax return system has security holes. According to a report from the Wiretapped team (online at http://www.wiretapped.net), our Tax Reform scheme needs some reform itself:
"www.etax.com.au is a Webcentral-hosted company that provides a website that
allows you to file your tax return online. They have a 3 part procedure for filing a tax
return online. The website is hosted on Microsoft Windows NT 4 running Internet
Information Server 4 and the back end of the site is programmed in VBScript with
ASP (Active Server Pages).
The primary security hole with this site exists as a result of sloppy system administration on either the part of www.etax.com.au or Webcentral, depending on the exact nature of the relationship between the two companies. Nonetheless, it's a serious one, as we'll see."
The problem description goes on to point out the security holes in detail, concluding
with the statement:
"The end result of all this is that knowing such information about filesystem structure,
database location and database composition makes a pretty much untraceable
hit-and-run breach of the site for insertion, modification, deletion or wholesale access
to data a 30 second exercise instead of a much longer one."

Checkpoint Questions
1. How can information about a badly configured site be revealed to attackers? How
can this information be used to attack the site?
2. What are shell metacharacters and how can they be used to infiltrate a website?
3. Why should you not use your system password file to authenticate web users?


In this section, some principles and techniques are presented for minimising the risk of a security breach on a web server.


4. Web Server Security


In order to minimise the risks associated with web servers, it is necessary to restrict the
capabilities of the web server itself. As a general rule, the principle of minimum privilege
applies, meaning that the web server should only be given access to the resources it needs
and nothing more. Typically, the web server is run as the user nobody on a UNIX system, a
user with minimal access privileges.
The web server software should be placed on a machine that is not used for many other
purposes. Executable scripts should not be located in the document tree, as this may allow a
cracker to analyse the scripts and find weaknesses in them. The correct permissions of a CGI
script running on a web server for security are "-rwx--x--x", which stops everyone except the
owner from viewing or modifying the contents of the script. Remember that a local user may
also attempt to breach the security of your web site, so local users should not be able to read
a script. Scripts may contain passwords used to access databases.
Be particularly wary of any function which allows the modification of files on your
webserver. Examples of this include an FTP upload facility or a public forum where users are
allowed to place text data on your server. Any data uploaded should be treated with
suspicion, as it may contain executable code. A badly configured web server may allow this
code to be executed, resulting in a possible security breach.

When interfacing a web server script with a DBMS, make sure that the username under
which the connection is established is one of minimum privilege. In this way, potential
damage can be minimised should someone discover the database password from a script.
Some application servers, such as ColdFusion, allow server side scripts to be encrypted with
a cipher, so that they are decrypted on-the-fly when executed. Use this feature.
Do not rely on client-side validation of your form data. A JavaScript on a web page which
restricts data content, no matter how cleverly written, can not prevent someone from crafting
their own version of your form and submitting it from their website. Client-side form
validation can provide convenient feedback for users before form submission but it does not
guarantee valid data input to your scripts and hence does not provide security.
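As a sketch of what server-side re-validation might look like (the field names and limits here are hypothetical, not taken from these notes):

# The server re-checks every submitted field, regardless of any JavaScript checks
# that may exist in the browser, because an attacker can POST arbitrary values.
import re

def validate_order(form):
    errors = []
    if not re.fullmatch(r"[0-9]{1,3}", form.get("quantity", "")):
        errors.append("quantity must be a number between 0 and 999")
    if not re.fullmatch(r"[A-Za-z0-9 .,'-]{1,60}", form.get("name", "")):
        errors.append("name contains illegal characters or is too long")
    return errors

# A crafted request that bypasses the client-side check is still rejected here.
print(validate_order({"quantity": "10; rm -rf /", "name": "Jill Citizen"}))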
Private "intranet" sites can be restricted with passwords but it is also advisable to restrict the
subnets from which an HTTP query can be sent. Using Apache, the httpd.conf file allows
these restrictions to be set. Numeric IP addresses are preferable to domain name restrictions
as it is more difficult to forge an IP address than a domain name. A packet filtering router can
further reduce the opportunity for unauthorised access.
A .htaccess file can be used to restrict access to certain directories on a site. This file can in
turn use the htpasswd authentication mechanism to provide each user with a login name and
password for the site, allowing fine-grained access control. The drawback of the htpasswd
scheme is that the passwords are sent across the network as plaintext. A suitable use would
be to protect your documents from plagiarism, or restrict access to a forum, but for
e-commerce or other high-security communication needs, an encrypted link is required.
Let's see how we can add authentication to a directory. First, we must create the file .htaccess in the directory we wish to protect. It might look something like this:
AuthType Basic
AuthUserFile /home/html/.htpasswd
AuthName SameAsForums
AuthGroupFile /dev/null
<Limit GET POST>
require valid-user
order deny,allow
deny from TomThumb
allow from all
</Limit>

This specifies that the user name and password provided by the user should be compared
with those stored in the
/home/html/.htpasswd

file. We wish to use HTTP Basic authentication, and don't want to allow any user called TomThumb. Of course, we must create the /home/html/.htpasswd file. To do this, we use the htpasswd command from the apache/bin directory. Let's add two users with the username/password pairs student/tigerland and silk/road:
/usr/local/apache/bin/htpasswd -cb /home/html/.htpasswd student tigerland
/usr/local/apache/bin/htpasswd -b /home/html/.htpasswd silk road

The -c switch is used the first time around in order to create the file; it should not be used
when adding more users, since it will overwrite the existing file with a new, empty one.

Looking at the contents of this file, we find the user name in plaintext followed by a hash of
the password.
student:Y56IMX83dtdTc
silk:Y5hk2AHT7jQqM

If we want to use HTTP Digest authentication, which is much more secure than Basic
authentication, we would use:
AuthType Digest
AuthDigestFile /home/html/.htdigest
AuthName SameAsForums

The .htdigest file needs to be created with the htdigest command from the apache/bin
directory:
/usr/local/apache/bin/htdigest -c /home/html/.htdigest SameAsForums student
Adding password for student in realm SameAsForums.
New password:
Re-type new password:

Let's see what's in the newly-created file, .htdigest:


student:SameAsForums:1989dff1d4e66e328ae57b79633e65d5
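The stored value is the MD5 hash of "username:realm:password" (the HA1 value defined for HTTP Digest authentication), so an entry can be reproduced as in the following sketch. The password shown here is hypothetical and will not reproduce the hash above.

# Derive an .htdigest entry: MD5 of "username:realm:password" in hexadecimal.
import hashlib

def htdigest_entry(user, realm, password):
    ha1 = hashlib.md5(f"{user}:{realm}:{password}".encode()).hexdigest()
    return f"{user}:{realm}:{ha1}"

print(htdigest_entry("student", "SameAsForums", "tigerland"))   # hypothetical password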

Unfortunately, we can't always use Digest authentication, since not all browsers support this method. More importantly, it is implemented in the Apache mod_auth_digest module, which is not compiled into the Apache web server by default.
Note in general that the more complex the functionality of a site, the more likely it is that you are going to have a security hole. Custom CGIs, application servers and online databases all present opportunities for clever crackers. The assertion "there is no reason why anyone would want to crack our site" is naive. Crackers will scan random IP addresses looking for possible security holes. Your machine could be compromised, then used in a politically motivated DDoS attack against another organisation.
Note again that the most secure place to run a webserver is a machine without a command shell. According to Lincoln D. Stein's "World Wide Web Security FAQ", online at http://www.w3.org/Security/Faq/www-security-faq.html, the most secure web server setup is "a bare-bones Macintosh running a bare-bones Web server". The FAQ also says: "Windows NT systems seem to be more vulnerable at the current time, partly because the OS is relatively new and the big bugs haven't been shaken out, and partly because the NT file system and user account system are highly complex and difficult to configure correctly". However, there is no substitute for expert administration, and the FAQ goes on to say that UNIX, while inherently more secure than NT, can be less secure in the hands of a novice administrator.
Of particular interest is the WN server, developed by John Franks at Northwestern University
in Illinois, USA. The WN server runs on UNIX and is supposedly very secure by default.
More information is available here: http://hopf.math.nwu.edu/.
The free OpenBSD operating system ( http://www.openbsd.org) claims to be a very secure
operating system, and is probably worth considering for a web server installation. The project
homepage proudly states: "Three years without a remote hole in the default install! Only one
localhost hole in two years in the default install!"

Checkpoint Questions
1. What is the principle of minimum privilege?
2. Imagine you administer the website of a non-profit organisation with no commercial or
political affiliations. Why would anyone want to crack your site?
3. Which web software products are considered more secure? How does the use of secure
web server software not guarantee security?


In this section we provide an introduction to cryptography and how it is relevant to web security.


5. Cryptography
The word cryptography comes from Greek origins, meaning "secret writing". From The Free On-line Dictionary of Computing (15Feb98) at http://www.foldoc.org/:

Cryptography
The practise and study of encryption and decryption - encoding data so that it can only
be decoded by specific individuals. A system for encrypting and decrypting data is a
cryptosystem. These usually involve an algorithm for combining the original data
("plaintext") with one or more "keys" - numbers or strings of characters known only to
the sender and recipient. The resulting output is known as "ciphertext".
The security of a cryptosystem usually depends on the secrecy of (some of) the keys
rather than with the supposed secrecy of the algorithm. A strong cryptosystem has a
large range of possible keys so that it is not possible to just try all possible keys (a
"brute force" approach). A strong cryptosystem will produce ciphertext which appears
random to all standard statistical tests. A strong cryptosystem will resist all known
previous methods for breaking codes ("cryptanalysis").
Cryptography is useful for several different applications in the context of web security:
Use of a cipher to encrypt sensitive data such as credit card numbers is advisable for e-commerce applications in order to protect the interests of your customers. If customers do not trust the security of your transmission medium, they will not patronise your site. One common method used to encrypt data over the transmission medium is the secure HTTP protocol (https), provided by most modern web servers including Apache SSL.
Digital certificates, signatures and digests can authenticate a website, emails
coming from the website and files downloaded from the website. Without digital
certificates it is possible for confidence tricksters (con men), IP spoofers and
webjackers to set up trojan sites, collecting information about users or hijacking
traffic. Signatures allow a website operator to email their patrons with a guarantee
of the authenticity of the sender. Digests of a message or file can guarantee that it
was not tampered with during transmission.
Stored data gathered from website patrons should be encrypted to protect their
privacy. Personal information, credit card numbers, email addresses and
passwords should be stored in an encrypted format so that if the data is stolen it
will be of no use to the thieves.
Data files and application server scripts can be stored in encrypted format to
obscure the implementation and configuration of interactive features of the site
from prying eyes. While obscurity is no substitute for proper security, it may
hamper attempts to discover weaknesses in the server software, should an intruder
be able to obtain access to these files.
It is almost inevitable that remote shell access to the server machine will be
required by certain people. Make sure a secure connection protocol (such as ssh
on UNIX) is used when connecting to the server machine remotely.
According to Bruce Schneier of Counterpane Internet Security Inc, a world-renowned expert in cryptography:
"the cryptography now on the market doesn't provide the level of security it advertises. Most systems are not designed and implemented in concert with cryptographers, but by engineers who thought of cryptography as just another component. It's not. You can't make systems secure by tacking on cryptography as an afterthought. You have to know what you are doing every step of the way, from conception through installation."
(from Schneier, B, "Why Cryptography Is Harder Than It Looks", available online at
http://www.counterpane.com/whycrypto.html).

Checkpoint Questions
1. How can cryptography be used in the daily operation of a web site which includes
e-commerce features?


In this section we give some definitions of cryptographic concepts and a description of how strong cryptography defies cryptanalysis.


6. Basic Encryption
Some definitions:
encrypt - to convert ordinary data into code
decrypt - to recover the original data from encoded data
plaintext - unencrypted data
ciphertext - encrypted data
key - a fixed-size chunk of data used to encrypt and / or decrypt data
cryptosystem - a known protocol for encrypting and decrypting data
cryptanalysis - the branch of cryptography concerned with decoding encrypted
data
symmetric cryptosystem - a cryptosystem which relies on the same key for
encryption and decryption of data
asymmetric cryptosystem - a cryptosystem which uses different keys for the
encryption and decryption of data
cipher - an algorithm for encrypting and decrypting data; symmetric ciphers often make heavy use of the XOR operation
public key cryptosystem - an asymmetric cryptosystem which consists of a public key, circulated freely to the general public for encryption, and a private key, used exclusively by the owner for decryption
white noise - a signal which has no distinguishable recurring patterns in it; it appears to be a purely random stream of data


A cryptanalyst looks for patterns in data in order to determine how it may have been
encrypted and recover the original data and possibly the decryption key as well. For
encryption to be effective against intuitive cryptanalysis, the ciphertext should be
indistinguishable from white noise. This makes the cryptanalysis problem intractable,
meaning that to recover the plaintext without knowledge of the key one has to test every
possible key combination. For a commercial-grade key of 512 bits, this means testing 2^512 = 1.34 x 10^154 keys. If you could test ten million keys every second, which would require extremely fast, parallel hardware, trying them all would take around 4 x 10^139 years, meaning that the data is not likely to be recovered any time soon.
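The arithmetic can be checked with a few lines of Python (an illustrative aside, not part of the original notes):

# Worked arithmetic for the brute-force estimate above.
keys = 2 ** 512                       # size of a 512-bit key space, about 1.34e154
keys_per_second = 10_000_000          # ten million trial decryptions per second
seconds_per_year = 60 * 60 * 24 * 365

years = keys / keys_per_second / seconds_per_year
print(f"{years:.1e} years")           # roughly 4e139 years to try every key

Even if the attacker only has to search half the key space on average, the conclusion does not change.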
At the other end of the scale is the simple substitution cipher, which maps every letter of
the alphabet to a different letter. This kind of cipher can be used to encrypt simple
messages "on the back of an envelope" and is relatively simple to crack. One can make
intuitive guesses based on the length of words and double letters appearing in the text to
guess the cipher.
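A toy implementation of such a cipher makes the idea concrete (an illustrative sketch, not part of the original notes):

# A monoalphabetic substitution cipher: every letter maps to a fixed substitute.
# It is easy to break by frequency analysis and by guessing short or doubled words.
import random
import string

alphabet = string.ascii_lowercase
shuffled = list(alphabet)
random.shuffle(shuffled)
key = dict(zip(alphabet, shuffled))           # the secret letter mapping
inverse = {v: k for k, v in key.items()}

def encrypt(text):
    return "".join(key.get(c, c) for c in text.lower())

def decrypt(text):
    return "".join(inverse.get(c, c) for c in text.lower())

message = "attack at dawn"
print(encrypt(message), "->", decrypt(encrypt(message)))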

Checkpoint Questions
1. What characteristic does data encrypted with strong cryptography exhibit?
2. What is cryptanalysis and how does one generally go about it?


In this section we examine symmetric key encryption, its advantages and disadvantages.


7. Symmetric Key Encryption


7.1 Introduction to Symmetric Key Encryption
Also known as "secret-key cryptosystems", symmetric key cryptosystems use a single key to encrypt and decrypt data. Symmetric cryptosystems have the advantage that they are significantly faster than asymmetric systems.

The major disadvantage is that both the sender and receiver need to know the key, so
some secret protocol for key exchange is needed. A typical situation for using a
symmetric system is to first use a strong asymmetric system to exchange keys, then use
symmetric encryption with a "session key" (used for only one session) for exchange of
data. This allows good security with good performance.

Several examples of symmetric key encryption systems include:


The Data Encryption Standard (DES) - described below.
Blowfish - an open source, faster drop-in replacement for DES designed by Bruce
Schneier of Counterpane. More information and source code is available here:
http://www.counterpane.com/blowfish.html
RC4 - a stream cipher designed by Rivest for RSA Data Security, Inc, used in
SecurPC and SSL.
International Data Encryption Algorithm (IDEA) - developed at the Swiss Federal
Institute of Technology in Zurich. More info available at:
http://www.mediacrypt.com/pages/fidea.html

7.2 The Data Encryption Standard (DES)


The Data Encryption Standard was developed by IBM and the NSA in the United States
during the early 1970s. It is the most widely used and best known symmetric algorithm
in the world. A variant known as triple-DES has been developed for better security, but
both DES and triple-DES will be replaced by the Advanced Encryption Standard (AES)
in the near future as the national standard in the USA. More information on this is
available at http://csrc.nist.gov/encryption/aes/.
The 56-bit DES is no longer considered secure, and even Triple-DES (which has an
equivalent key length of 112 bits) is beginning to show its age. A competition to design
a replacement for the DES, to be called the Advanced Encryption Standard (AES), was
won by Rijndael. This is a computation-friendly block cipher, using variable-length
blocks and variable length keys. The AES is due to be ratified as a US standard soon.
For more details, visit the NIST AES page.
DES was designed for efficient implementation in hardware. It uses a 64-bit block size
and a 56-bit encryption key during execution (8 of the 64 bits are used for parity). A
single key is used for encryption and decryption of a message, and for the generation
and verification of a Message Authentication Code (MAC). From the NIST document
on DES:
"In the encryption computation the 64-bit data input is divided into two halves each
consisting of 32 bits. One half is used as input to a complex nonlinear function, and the
result is exclusive ORed to the other half. After one iteration, or round, the two halves
of the data are swapped and the operation is performed again. The DES algorithm uses
16 rounds to produce a recirculating block product cipher. The cipher produced by the
algorithm displays no correlation to the input. Every bit of the output depends on every
bit of the input and on every bit of the active key."
The full text of this document, which details the suggested implementation of DES, is available here: http://www.itl.nist.gov/fipspubs/fip74.htm
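The two-halves-and-swap structure described in the NIST extract is known as a Feistel network. The following toy Python sketch shows the structure only; the round function and key schedule are deliberately trivial and bear no resemblance to the real DES tables.

# Toy Feistel network: each round feeds one half through a round function, XORs the
# result into the other half, and swaps the halves. Decryption runs the rounds in
# reverse, which works regardless of what the round function is.
def round_function(half, subkey):
    return (half * 31 + subkey) & 0xFFFFFFFF          # toy mixing, NOT the DES S-boxes

def feistel_encrypt(left, right, subkeys):
    for k in subkeys:
        left, right = right, left ^ round_function(right, k)
    return left, right

def feistel_decrypt(left, right, subkeys):
    for k in reversed(subkeys):
        left, right = right ^ round_function(left, k), left
    return left, right

subkeys = [0x0F0F, 0x3C3C, 0xA5A5, 0x5A5A]            # toy key schedule (DES derives 16)
ct = feistel_encrypt(0x01234567, 0x89ABCDEF, subkeys)
print(feistel_decrypt(ct[0], ct[1], subkeys) == (0x01234567, 0x89ABCDEF))   # True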
Several methods for DES cryptanalysis have been devised and it is no longer considered secure, as a 56-bit key space can now be effectively searched by brute force. The following extracts taken from the website of the Electronic Frontier Foundation (http://www.eff.org/) illustrate why DES is no longer considered secure:

"To prove the insecurity of DES, EFF built the first unclassified hardware for cracking
messages encoded with it. On Wednesday, July 17, 1998 the EFF DES Cracker, which
was built for less than $250,000, easily won RSA Laboratorys "DES Challenge II"
contest and a $10,000 cash prize. It took the machine less than 3 days to complete the
challenge, shattering the previous record of 39 days set by a massive network of tens
of thousands of computers."
"Six months later, on Tuesday, January 19, 1999, Distributed.Net, a worldwide
coalition of computer enthusiasts, worked with EFFs DES Cracker and a worldwide
network of nearly 100,000 PCs on the Internet, to win RSA Data Securitys DES
Challenge III in a record-breaking 22 hours and 15 minutes. The worldwide computing
team deciphered a secret message encrypted with the United States governments Data
Encryption Standard (DES) algorithm using commonly available technology. From the
floor of the RSA Data Security Conference & Expo, a major data security and
cryptography conference being held in San Jose, Calif., EFFs DES Cracker and the
Distributed.Net computers were testing 245 billion keys per second when the key was
found."

Checkpoint Questions
1. What is the major advantage of symmetric cryptosystems over asymmetric
cryptosystems?
2. What major logistical disadvantage do symmetric cryptosystems suffer from?


In this section we describe the concepts behind asymmetric encryption systems and give some examples of their practical use.


8. Asymmetric Key Encryption


8.1 Introduction to Asymmetric Key Encryption
Perhaps better known as "public-key cryptosystems", these cryptosystems represent the
state of the art in cryptography. Asymmetric schemes use two keys - one for encrypting
the data and one for decrypting. They are much slower than symmetric key techniques
but have certain logistic advantages which make them very convenient and difficult to
crack. In general, asymmetric key systems use much larger keys than their symmetric
counterparts.
Generally, a party that wishes to be sent data in encrypted format creates for themselves
a public key and a private key. The public key is freely available for use by anyone who
wishes to send a message to its owner. The private key is known only to its owner. Data
encrypted with the public key cannot be decrypted except with the private key, so that
once a message is encrypted for a particular party to read, only that party can decrypt it.
Given the public key, it is not possible to recover the private key easily, as the search
space is very large.
Likewise, data encrypted with the private key of a party can only be decrypted with that party's public key. This feature allows strong authentication. Anyone with access to the
public key can read the message and know with high certainty that it was encrypted by

the private key corresponding to that public key, even though they do not have access to
the private key.
This example scenario demonstrates the use of public key cryptography:
Alice wishes to send a message to Bob. She wants to be certain that the message will
not be readable by anyone except Bob. Bob wants to be certain that the message he
receives is from Alice and not someone impersonating Alice. Alice and Bob each have their own private key and each other's public keys.
To accomplish the goal, Alice takes the message and encrypts it with Bob's public key, then her own private key. She then transmits the message to Bob. Bob now uses Alice's public key and his own private key to recover the original message. He now knows with a degree of certainty that the message came from Alice, without knowing her private key.
This is illustrated in the following diagram:

The major difficulty in an e-commerce scenario is that we don't know if the public key we have received is from an honest, genuine merchant who is accountable for their actions, or a swindler who is collecting the carefully encrypted credit card numbers sent to them and skimming five cents off each to make a million dollars. For this reason we
need Digital Certificates and Certifying Authorities to ensure the authenticity and
accountability of online merchants and establish a relationship of trust. This will be
further explained later.
Although asymmetric encryption schemes on their own are slow, we can use an
asymmetric scheme to perform exchange of a secret key, valid only for one
communication "session". The randomly-generated secret key is used for the duration of
a session to encrypt the data using a symmetric algorithm to speed up the
communication process. The secret key is discarded at the end of the session between
the two parties.
"Pick a random number and go to gaol." Practitioners who use strong cryptography
should be aware that in some parts of the world its usage, export or import is explicitly
prohibited. Laws in some countries regard hiding a secret key from the authorities as
withholding evidence, and the penalties are severe. Strong cryptography is still classified
in many countries as a munition, so that the distribution of software and algorithms is
equivalent to selling military weapons.
Several examples of asymmetric cryptosystems include:
Rivest-Shamir-Adelman (RSA) - described below.
Pretty Good Privacy (PGP) - originally developed in 1991 by Phil Zimmermann of Boulder, Colorado, USA, this shareware software was intended to provide "strong cryptography for the masses". The felony charges levelled against the author for export of munitions were dropped in 1996. PGP is still widely used, worldwide.
Elliptic Curve Cryptography (ECC) - the current state of the art, resistant to search
techniques applied to RSA.
Diffie-Hellman - possibly the first ever public key cryptosystem, originally
published in 1976, uses the discrete logarithm problem as a basis.
ElGamal - invented in 1985, based on the discrete logarithm problem
LUCELG PK - first published in Dr Dobb's Journal in 1994, comparable speed to
RSA, uses a patented recursive algorithm involving "Lucas functions". The
technology is marketed by a New Zealand company, LUCENT (no relation to
Lucent Technologies, Inc of USA).

8.2 Rivest-Shamir-Adelman (RSA) Encryption


The patent on this algorithm, held by RSA Security, Inc, USA, expires on September 20,
2000. The RSA cryptosystem includes encryption and authentication. It was developed
by Ronald Rivest, Adi Shamir and Leonard Adelman in 1977. The system is free for
non-commercial and educational use.
The basis for RSA is to randomly select two large primes, p and q and generate their
product, n = pq. n is now the modulus of the system. Choose a number e, less than n and
relatively prime to (p - 1)(q - 1), meaning that e and (p - 1)(q - 1) have no common
factors except 1. Find another number d such that (ed - 1) is divisible by (p - 1)(q - 1).
The values e and d become the public and private exponents, respectively. The public
key is (n, e) and the private key (n, d). The factors p and q should be destroyed or kept
with the private key. If these factors are discovered, it is possible to find d from e and
the system has been cracked.
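A toy worked example with deliberately tiny primes follows the recipe above (an illustrative sketch, not part of the original notes; real keys use primes hundreds of digits long).

# Toy RSA key generation and round trip.
p, q = 61, 53
n = p * q                      # modulus: 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, relatively prime to phi
d = pow(e, -1, phi)            # private exponent 2753: (e * d) mod phi == 1 (Python 3.8+)

message = 65                   # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)         # 2790
recovered = pow(ciphertext, d, n)       # 65
print(ciphertext, recovered)

The public key here is (3233, 17) and the private key (3233, 2753); anyone can encrypt with the first, but only the holder of the second can decrypt.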
In practice the system is implemented slightly differently. Decoding an RSA-encrypted
message can be slow, so the message is encrypted as an RSA "digital envelope" as
follows (again, this is from the RSA Labs FAQ):
"Suppose Alice wishes to send an encrypted message to Bob. She first encrypts the message with DES, using a randomly chosen DES key. Then she looks up Bob's public key and uses it to encrypt the DES key. The DES-encrypted message and the
RSA-encrypted DES key together form the RSA digital envelope and are sent to Bob.
Upon receiving the digital envelope, Bob decrypts the DES key with his private key,
then uses the DES key to decrypt the message itself. This combines the high speed of
DES with the key management convenience of the RSA system."
Of course, if you've read the previous section on symmetric cryptosystems you will realise by now that Alice would be very naive to rely on the regular 56-bit DES scheme to encrypt her message. She would use triple-DES, RC4, RC5, RC6 or Blowfish as alternatives more resistant to brute force attack.
Note that in regard to brute force attack, the obvious method for attacking RSA is to try to factorise the number n into two numbers p and q, which for very large values of p
and q is impossible in any reasonable time on the fastest computers available. However,
the search space has been narrowed by Number Field Sieve (NFS) techniques, which
makes factoring large numbers easier and permits parallelisation on networks of

workstations.
Using this technology in 1996 it was possible to factor a 432-bit number in 750 MIPS-years (i.e. 750 machines running at one million instructions per second for one year). This means
that RSA with a 512-bit key provides only marginal security, so in practice 1024-bit
keys should be used. It should also be noted at this point that many cryptographic
implementations exported from the USA prior to January 2000 (when export controls
were lifted) have relatively short private keys and typically employ ciphers of no longer
than 40 bits. The Elliptic Curve Cryptosystem exhibits much stronger resistance than
RSA and may be the future of public key cryptography.

8.3 Digital Signatures


Encryption and decryption address the problem of eavesdropping, one of the three internet security issues mentioned at the beginning of this chapter. But encryption and decryption, by themselves, do not address the other two problems: tampering and impersonation.
"Digital Signatures" is a mechanism that addresses the problem of tampering. In this
mechanism, we use a technique called a "one-way hash", that can be easily generated
from the original message. This "one-way hash" can be used to test the integrity of the
message at the receiving end. The idea is to send an encrypted "one way hash" with the
original message. (The hashing algorithm used here is not important, and it has nothing
to do with security issues. It just generates a one-way hash from the original message).
At the receiving end, the encrypted hash is decrypted, and compared with the newly
calculated "one way hash". The following diagram illustrates the way a digital signature
can be used to validate the integrity of signed data.

Let's assume Alice wants to send a message to Bob, and they are concerned about the possibility of tampering. They agree to use a digital signature. Then there are two components: the original message and the digital signature, which is basically a one-way hash (of the original data) that has been encrypted with Alice's private key. To validate the integrity of the data, at the receiving end, Bob uses Alice's public key to decrypt the hash. He then uses the same hashing algorithm on the received message to generate a new one-way hash of the same data. Finally, Bob compares the new hash against the original hash. If the two hashes match, the data has not changed since it was signed. If they don't match, the data may have been tampered with since it was signed, or the signature may have been created with a private key that doesn't correspond to Alice's public key. (There are other simple approaches, like that discussed in the lecture, of using human-readable signatures. The principles are the same.)
In this way, we can use digital signatures to test the integrity of the received message, as well as to make sure that the public key and private key are a valid pair.
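A minimal sketch of the integrity check at the heart of this scheme (illustrative only; it shows the hash comparison, with the encryption of the hash by Alice's private key left out):

# The sender computes a one-way hash of the message and sends it (encrypted with
# the sender's private key) alongside the message. The receiver recomputes the hash
# and compares; any change to the message makes the comparison fail.
import hashlib

def digest(message):
    return hashlib.sha1(message.encode()).hexdigest()

original = "Pay Bob $100"
sent_hash = digest(original)            # travels with the message

received = "Pay Bob $900"               # tampered with in transit
print(digest(received) == sent_hash)    # False: tampering detected
print(digest(original) == sent_hash)    # True: an unmodified message verifies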

Checkpoint Questions:
1. What keys are involved in an asymmetric cryptosystem and how are they used?
2. How can encryption and authentication be performed using the same set of keys?
3. How are symmetric and asymmetric cryptosystems combined in a practical
implementation?


In this section we will examine the concept of authentication and how parties can establish trust.


9. Authentication
Authentication in the context of web security means quite simply proving that a party is
indeed who they claim to be. This proof is necessary to establish a relationship of trust
between parties so that commerce can occur. Authentication also implies some form of accountability. Accountability means that a party whose identity has been proven can be held responsible for their actions. This kind of accountability seeks to reduce the
In the previous section a description of a basic authentication process using RSA was
given. This system works in a mathematical sense, and one can prove irrefutably using a
public key supplied that the party they have contacted is indeed the one who supplied
the key. But one cannot prove that they are who they claim to be when they supplied the
public key without some outside interaction. Apart from that, you cannot practically keep the public keys of hundreds of thousands of web servers around the world.
This is where "Certifying Authorities" come in. A "Certifying Authority" (CA) is a
business that vouches for the identities of other individuals and organizations. Instead of
keeping everyones public key in your browser, you keep the public keys of a few
well-known and trusted CAs. When an organization (for example, a company doing
e-commerce) first wants to establish a digital signature, it presents documentary
evidence to a CA that it is who it says it is. When the CA is satisfied that the

organization is legitimate, it takes the organization's public key and encrypts it with the CA's private key, creating a "signed certificate" that it returns to the organization.

The organization can now present its clients with proof of legitimacy. When it needs to communicate with the clients, it sends a copy of its signed certificate, which the client attempts to decrypt with the CA's public key. If the certificate decrypts correctly, the client gets a copy of the organization's public key, which the client can then use to send messages to the organization or to verify the organization's digital signature.
In short, the encrypted communication between web servers and web browsers can be described as follows. Browser software ships with the public keys of a few trusted CAs. When a browser contacts an encrypting server, the server sends its signed certificate, which the browser decrypts with the CA's public key. Provided that the certificate decrypts correctly, the browser acknowledges it, the server and the browser exchange public keys, and then they begin the web transaction. In the following diagram, the steps of an encrypted server-client communication are illustrated.

In order to maintain their trustworthy reputation, CAs must ensure that the certificates
they issue are only sent to reputable customers. In general you must have a domain
name registered with a domain registry and proof that you have authority to transact
business under the organisation name appearing in your request. This is mainly for the
purposes of accountability, so that certificate owners who engage in fraudulent practices
can be identified.
The certificate must also be checked against the current time to make sure that it has not
expired. The "root CA" is an organization in which trust is assumed. The root CA signs
its own certificate. All certificates issued by the root CA are signed with the root CA's private key and thus can be verified with the root's public key. Rather than every single organization applying for a certificate from the root CA, it is possible to have a certificate issued by a subordinate of the root CA. The subordinate signs the certificates it issues. This leads to hierarchies of CAs and this phenomenon is known as
certificate chaining.
A digital certificate is a signed data structure issued by a CA that uniquely identifies a party. A commonly used certificate infrastructure is defined in standard X.509v3 of the
International Telecommunications Union (ITU) as well as RFC2459 of the Internet
Society.
A digital certificate (DC) is bound to a distinguished name (DN), representing a
"subject", which is either a person or entity. The DN consists of a series of
"name=value" pairs that uniquely identify the subject, for example:
uid=jcitizen,e=jcitizen@acme.com.au,cn=Jill Citizen,o=Acme Corp.,c=AU
where uid, e, cn, o and c are the user ID, email address, common name, organisation,
and country, respectively. Other name, value pairs may be used.
A typical certificate consists of:
the version of the X.509 standard supported
the serial number of the certificate, unique to the issuing CA
information about the subject's public key, including the algorithm used and a representation of the key itself
the DN of the CA that issued the certificate
the period in time for which the certificate is valid
the DN of the subject
optional certificate extensions, perhaps to define the type of certificate (email
signing, SSL client, SSL server, etc.)
the cryptographic algorithm used by the CA to create its own digital signature
the CA's digital signature, obtained by hashing all of the data in the certificate together and encrypting it with the CA's private key
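These fields can be inspected for any live HTTPS site. The following Python sketch (not part of the original notes; the host name is only an example) retrieves a server certificate and prints its subject DN, issuer DN and validity period:

# Fetch and inspect a server certificate over an SSL/TLS connection.
import socket
import ssl

host = "www.example.com"
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()

print(cert["subject"])                       # the subject's DN, as name/value pairs
print(cert["issuer"])                        # the DN of the issuing CA
print(cert["notBefore"], cert["notAfter"])   # the validity period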

9.1 RADIUS - Remote Authentication Dial-In User Service


RADIUS is a protocol for carrying authentication information between a network client, such as a modem bank, and an authentication server, where the authentication information is kept. Web-Access Authentication Using RADIUS presents a good discussion of this protocol.

Checkpoint Questions:
1. Why is trust needed in the online marketplace?
2. How is a digital certificate used to prove the identity of an online merchant to a
customer?


In this section we describe how a secure web server works and how to establish one.


10. Secure Web Servers


Secure web servers use encryption to prevent interception of data sent between client
and server. Secure web servers are generally implemented using the Secure Sockets
Layer (SSL). In the TCP/IP model, the SSL fits between the TCP layer and the
Application layer. You are probably viewing this document over an SSL-secured link right now!
SSL was originally developed by Netscape and has been established by the Internet
Engineering Task Force (IETF) as the standard in the Transport Layer Security (TLS)
Protocol Version 1.0. It is universally accepted as the standard for authentication and
encrypted communication between clients and servers over the Web.

10.1 The Secure Socket Layer


SSL is a stream encryption protocol which aims to provide:
reliable delivery of data
integrity
privacy through encryption
authentication through X.509

All information transported with SSL is organised into records of a certain maximum
size. Records are encrypted and their integrity assured using a Message Authentication Code (MAC) generated by a hash function such as MD5. SSL records come in four varieties:
application data
handshake messages
error messages
change cipher specification
The SSL handshake, performed when establishing a connection, agrees upon a cipher suite to use and performs a key exchange using public keys. Typical SSL implementations use some combination of an RSA or Diffie-Hellman asymmetric cryptosystem for key exchange, RC4 or triple-DES ciphers and MD5 or SHA-1 MAC digests. Session keys are exchanged using a 512-bit public key algorithm. The ciphers in use can be changed at any time during a session using SSL's dynamic renegotiation feature.
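As an illustration of the MAC idea used to protect each record (a sketch only; SSL 3.0 defines its own MD5/SHA-1 based MAC construction, whereas this uses the standard HMAC from Python's library, and the key is hypothetical):

# A keyed MAC over a record: the receiver, who shares the secret key agreed during
# the handshake, recomputes the MAC and rejects the record if it does not match.
import hashlib
import hmac

mac_key = b"secret agreed during the handshake"    # hypothetical shared secret
record = b"GET /cart HTTP/1.0\r\n\r\n"

mac = hmac.new(mac_key, record, hashlib.md5).hexdigest()
check = hmac.new(mac_key, record, hashlib.md5).hexdigest()
print(hmac.compare_digest(mac, check))              # True: the record is untampered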

10.2 The Apache-SSL Server


An SSL-enabled web server is commonly known as a secure server. An example of a
secure server is the Apache-SSL server, available for free download from
http://www.apache-ssl.org/. The package includes 128-bit encryption worldwide and full
source code, and is free for commercial and non-commercial users alike.
The Apache-SSL package consists of a series of patches applied to the popular Apache
webserver to build in SSL support. To build the latest version of the Apache-SSL server,
you need OpenSSL version 0.9.5a or better and Apache version 1.3.12. OpenSSL is an
Open Source toolkit implementing the SSL and TLS protocols as well as being a
full-strength general purpose cryptography library. It was derived from the SSLeay
library developed by two Australians, Eric Young and Tim Hudson, who now work for
RSA Australia and have no further involvement with SSLeay.
Full instructions for the installation of Apache-SSL can be found in the source
distribution, available from the project website. Once installed a certificate for the server
should be obtained from a certifying authority. To create a test certificate, the following
commands should be used:
1. Request OpenSSL to create a new key:
> openssl req -new > new.cert.csr

2. Optionally, remove the passphrase from the key:


> openssl rsa -in privkey.pem -out new.cert.key

3. Convert the request into a self-signed certificate with one year's validity:
> openssl x509 -in new.cert.csr -out new.cert.cert -req -signkey new.cert.key -days 365

4. To use the certificate, add the following lines to your httpd.conf file:
SSLCertificateFile /path/to/certs/new.cert.cert

SSLCertificateKeyFile /path/to/certs/new.cert.key

It is also possible to create client certificates, i.e. become a CA, like this (as a prerequisite, you must have a certificate and key to sign with, referred to below as my.CA.cert and my.CA.key):
1. Sign the client request using the CA key:
> openssl x509 -req -in client.cert.csr -out client.cert.cert -CA my.CA.cert -CAkey my.CA.key -CAcreateserial

2. Issue the certificate client.cert.cert to the client.


3. Add the following lines to your httpd.conf file so that you can validate clients' certificates:
SSLCACertificateFile /path/to/certs/my.CA.cert
SSLVerifyClient 2

Of course, if you want to be trusted in an e-commerce scenario you will need a certificate issued by a trusted root CA.
The secure HTTP protocol (https) is usually hosted on port 443, rather than the standard
port 80 used for http. To specify an https connection to a website from a browser, the
"https://" prefix is used before the domain name of the server. When SSL encryption is in
use over a link, the browser may offer a warning, such as "you have requested a secure
document". In Netscape, a small padlock icon in the bottom left corner of the browser
window appears locked when SSL is in use. However, it is possible to configure a server
to use a different (non-standard) port for SSL communication. Furthermore, a server can
be configured to communicate in both normal and SSL-encrypted forms by using two
different ports. For example, you can access this online lecture material from
http://yallara.cs.rmit.edu.au:8002/cs843 in unencrypted form using the HTTP protocol on
port 8002, and from https://yallara.cs.rmit.edu.au:8001/cs843 in encrypted form using the
HTTPS protocol on port 8001. If you use the HTTPS protocol, as above, clicking the
Security button in Netscape (Tools --> Internet Options --> Content --> Certificates in
Internet Explorer) lets you view the server's signed certificate.
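You can also inspect a server's certificate and the cipher negotiated for a session from the
command line with the OpenSSL toolkit described earlier. For example (a sketch only, using
the lecture server above; any https host and port will do):

> openssl s_client -connect yallara.cs.rmit.edu.au:8001

The command prints the certificate chain and the chosen cipher, then leaves the connection
open so that an HTTP request can be typed by hand.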
Note that a secure server does not guarantee absolute security against a determined
cracker. In the paper "Analysis of the SSL 3.0 Protocol", David Wagner of Berkeley and
Bruce Schneier of Counterpane, Inc. observe that although SSLv3 is resistant to passive
attacks (such as packet sniffing), the cipher renegotiation and key exchange features can
be spoofed in some implementations. Note also that traffic frequency and packet sizes can
give away a lot of information even when the content itself is encrypted.

Checkpoint Questions:
1. What mechanism used in SSL provides tamper-evident packaging for data?
2. What is the SSL handshake and what information is exchanged in it?


In this section we describe the function of a network firewall and some different approaches
to network firewall configuration.


11. Firewalls
11.1 Introduction to Firewalls
An effective way to protect a local network is to use a firewall. In architectural terms, a
firewall is a fire-resistant wall designed to prevent the spread of a fire through a
building. In a computer network, a firewall likewise acts as a preventive device, intended to
stop particular kinds of traffic from entering a local network in order to reduce security risks.
It can also prevent the machines on the LAN from accessing certain services outside
the LAN.
A firewall's job is to connect two networks: the local network and the outside world.
Typically a firewall is multi-homed, meaning that it has two or more network devices,
each with its own IP address. For most small LANs a single firewall is enough, with one
network device connected to the outside world and one to the LAN. The firewall decides
which traffic passes between these devices. Firewalls are configured to permit certain
essential traffic, such as smtp (email), ftp and http, while blocking other traffic, such as
particular database services which parties on the LAN may need to access but which the
outside world need not know about. The firewall examines the port number of the
destination to which each packet is going and decides whether the packet should be
allowed through. The same process is applied to packets entering the network
through the firewall.
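As a concrete illustration of this port-based decision making, the rules below use the Linux
iptables packet filter; this is only a sketch, and the interface names and services are
examples (eth0 is assumed to face the outside world, eth1 the LAN):

# Refuse to forward anything by default
iptables -P FORWARD DROP
# Let LAN machines reach outside web and mail servers
iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 25 -j ACCEPT
# Let replies to those connections back in
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT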

11.2 Firewall Implementation


Firewall implementation can take many forms. Some firewalls designed to service
high-traffic requirements efficiently can cost thousands of dollars. A dedicated machine
like this will often have an embedded processor, a real-time operating system and no
external interface other than some networking hardware and a serial plug for interfacing
with a terminal or another firewall. These types of machines are designed to deliver very
high throughput, keeping track of thousands of connections at a time. The software is
loaded onto an onboard firmware chip (often a Flash memory or EEPROM), so that it
can be upgraded or its parameters changed as required. The network devices on a firewall
can be Ethernet or ISDN interfaces, or even modems.
Another common firewall configuration for low-volume LANs on a budget is to use a
spare workstation and install two network devices and some firewall software. This sort
of firewall has the weakness that it usually runs on some UNIX variant and is subject to
all the normal security risks of a UNIX system. It is advisable to turn off all but the most
necessary features. Using a Linux system, the /etc/inetd.conf file should be edited (this
is where the Internet services are specified) and the services echo, discard, daytime,
chargen, ftp, gopher, shell, login, exec, talk, ntalk, pop-2, pop-3, netstat, systat, tftp,
bootp, finger, cfinger, time, swat and linuxconfig should all be turned off by placing a
hash in front of them. A kill -HUP <inetd_pid> will then make inetd re-read its configuration
and drop these services.
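For example, the relevant /etc/inetd.conf entries might be commented out like this (the
exact lines differ between distributions, so treat them as a sketch), after which inetd is told
to re-read its configuration:

# Disabled entries in /etc/inetd.conf - these services will no longer be offered
#daytime stream  tcp  nowait  root    internal
#finger  stream  tcp  nowait  nobody  /usr/sbin/tcpd  in.fingerd
#ftp     stream  tcp  nowait  root    /usr/sbin/tcpd  in.ftpd

> kill -HUP $(pidof inetd)

Here pidof is simply one way of finding the inetd process ID; any equivalent will do.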
For security reasons, root access to the firewall should only be permitted over a secure
shell (ssh) connection; better still, restrict it to the console only.

11.3 Firewall Configuration


Two alternative philosophies can be employed when configuring a firewall (a short sketch of
each follows the list):
1. That which is not expressly permitted is prohibited.
In this case, we start by completely blocking all types of traffic, then open a few
holes to allow certain services to proceed. This is a very rigorous form of security,
but may cause inconvenient service problems for users of the network. For
example, a user may wish to telecommute to a LAN from home using a protocol
which is not allowed by the firewall.
2. That which is not expressly prohibited is permitted.
This approach provides a more user-friendly firewall, where only those services
which are considered superfluous or potential security holes are blocked. Services
such as daytime and finger might be turned off. Common practice is to deny
access to telnet in favour of ssh for shell interfaces. This approach offers
flexibility to the LAN users, but it also gives crackers an opportunity to
discover security holes in services that are not yet known to contain any.
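Expressed as packet-filter defaults (again an iptables sketch with illustrative services; the
two stanzas are alternatives, not meant to be combined), the philosophies look like this:

# 1. That which is not expressly permitted is prohibited:
iptables -P FORWARD DROP
iptables -A FORWARD -p tcp --dport 22 -j ACCEPT    # punch a hole for ssh

# 2. That which is not expressly prohibited is permitted:
iptables -P FORWARD ACCEPT
iptables -A FORWARD -p tcp --dport 23 -j DROP      # block telnet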
There are several common topologies for firewalls and webservers in a network. In each
of the situations described below, packet filtering is performed by the router to disallow
traffic to machines inside the LAN. The router allows machines inside the LAN to
initiate connections to certain outside services, but not vice versa. We consider only
network topologies which use a "screened-host" type configuration. The other kind is the
dual-homed configuration, where the firewall machine itself provides a series of
"proxies", one for each service required by the LAN machines.
1. Server Access Inside Only
In this case the web server is just another machine on the LAN. Outside hosts
cannot initiate connections to the web server, so the content can only be viewed by
machines on the LAN. The bastion host may be used to provide other internet
services such as mail and public FTP.

2. Server Insecure
This configuration puts the web server outside the firewall in the so-called
"demilitarised zone" (the DMZ). The server is then freely accessible from outside
the LAN and appears to machines inside the LAN as just another Internet host, to be
treated with the appropriate level of distrust. A server in this configuration is more
vulnerable to cracking as packets sent to it are not screened, but hopefully this will
"draw the fire" away from the machines on the LAN.

3. Server is the Bastion Host


In this configuration the web server software is run on the bastion host, so that it is
accessible to the inside LAN and the outside world, but still protected by packet
screening. The disadvantage is that if the web server is compromised, the bastion
host is also in jeopardy, which could endanger the whole network. Also, having
the web server on the same machine as the bastion host may degrade the
performance of the bastion if the web server is subject to heavy traffic.

4. Server and Bastion Host Alongside


This configuration solves the problem of the previous one by putting the web
server on a separate machine. A hole is opened in the firewall so that requests can
be sent to the IP of the web server machine from outside, but only to the HTTP
port (usually 80); a sketch of such a rule follows this list. In this case the web
server machine should be made very secure,
with no access to other machines on the network including network mounted
filesystems.

5. Internal and External Servers


This setup extends the topology of case 2 above to place a second server inside the
packet-screened perimeter. A hole is opened in the firewall to allow the outer
server to fetch documents from the inner server. This communication can be done
by regular mirroring of certain public parts of the inner server content to the outer
server, or by having the two servers communicate via a proxy on the bastion.
Executable scripts should be located on the outer server to minimise risk. Besides
improved security, this configuration has the advantage that the inner server may
operate under a less restrictive security regime, making it more convenient for
authoring documents; one way to implement the mirroring is sketched after this list.
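To make cases 4 and 5 a little more concrete, here is a hedged sketch. The first rule opens
the single HTTP hole of case 4, with 203.0.113.10 standing in for the web server's address;
the rsync command shows one possible way to implement the mirroring of case 5 (the notes
do not prescribe a tool, so rsync, the host name and the paths are all assumptions):

# Case 4: outside hosts may reach the web server, but only on the HTTP port
iptables -A FORWARD -p tcp -d 203.0.113.10 --dport 80 -j ACCEPT

# Case 5: regular one-way mirror of the inner server's public documents to the outer server
# (run from cron on the inner server; hosts and paths are placeholders)
rsync -a --delete /var/www/public/ outer.example.com:/var/www/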

Checkpoint Questions:
1. What is a firewall and how does it improve network security?
2. What are five possible topologies for firewalls and webservers?


In this section we examine privacy.


12. Privacy
As you have probably discovered while programming web applications, the server can access
quite a bit of information about the client. Take a look at what the server knows about you:
http://www.privacy.net/analyze/
http://www.astalavista.net/new/network.php
To prevent the server from identifying you, or, for that matter, to prevent someone else from
logging your visit to a site under surveillance, you might want to make use of a proxy. The
server will then see the request as coming from the proxy; there is no direct contact between
you and the server. Most of the proxies available for public use are actually just
misconfigured; few people deliberately give away valuable bandwidth to all and sundry. Lists
of open proxies are published on the Web; one such list is available at Astalavista. Often, the
network administrator eventually notices the huge increase in bandwidth usage and
reconfigures the proxy to deny requests from external users.
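For example, many command-line web clients will send their requests via a proxy when the
http_proxy environment variable is set; a sketch, with a placeholder proxy address:

> export http_proxy=http://proxy.example.com:3128/
> wget http://www.privacy.net/analyze/

The analysis page should now report the proxy's address rather than your own. Graphical
browsers offer the same facility through their proxy settings dialogs.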
Several sites offer anonymous use of their proxy server. One such site is Anonymizer; another
is SafeWeb. Of course, this assumes you trust the proxy provider. But it is obvious that those
who want to keep tabs on your activities will try to play the role of proxy provider. For
example, it is public knowledge that the CIA has a stake in SafeWeb. It is also rumoured that
some "activist" sites are set up to attract interested parties, and thus identify them!

Many other Web-based services collect information about you as you surf; the Netscape
"What's Related" feature is the subject of a number of privacy concerns, although Netscape
naturally tries to downplay them.

Banner ads and Web bugs


Almost all Web sites carry some form of advertising; this helps to defray the costs of hosting
the site, and is probably reasonable. However, given the technical capabilities at their disposal,
it's not surprising that advertising companies try to leverage these ads to increase their profits.
Most banner ads are provided by a handful of companies, with DoubleClick a leading example.
Because these companies serve ads on a huge number of web sites, it is difficult for them to
resist linking the information gathered across those sites together in some way.
As you know, cookies are set in the HTTP response headers; when I visit a page with a banner
ad, my browser requests the banner image. This is often not on the visited site, but rather on a
server belonging to the advertising company. This allows the advertising company to set a
cookie in my browser for its own domain. Of course, it can also read any existing cookies for
this domain as well. Over a period of time, a detailed profile of my browsing habits can be
generated.
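A hedged illustration of the mechanism: fetching a banner image by hand shows the
advertiser's server setting a cookie for its own domain. The host name, image path and cookie
values below are invented for the example.

> telnet ads.example.com 80
GET /banner/468x60.gif HTTP/1.0
Host: ads.example.com

HTTP/1.0 200 OK
Content-Type: image/gif
Set-Cookie: ID=0123456789abcdef; domain=.example.com; path=/; expires=Fri, 01-Jan-2010 00:00:00 GMT

On every later visit to any site carrying this company's banners, the browser sends the same
ID back, which is what allows the separate requests to be tied together into a single profile.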
Take a look at your ~/.netscape/cookies file (Windows users should look in
Windows\Temporary Internet Files). In my cookies file, I see lots of cookies from
companies I don't remember visiting. For example:
# Netscape HTTP Cookie File
# http://www.netscape.com/newsref/std/cookie_spec.html
# This is a generated file! Do not edit.
.admonitor.net  TRUE  /  FALSE  1292827724  ID  72777070756999589942
.advertising.com  TRUE  /  FALSE  1757475525  ACID  ee820009997597000072!
.appserver-zone.com  TRUE  /  FALSE  2051227706  SITESERVER  ID=9a005626f557587e794aa6d24f2c87fe
.fastclick.com  TRUE  /  FALSE  1002224078  oatmeal  85:2287:5622:2464:0:999627922|||||||||||||||
.fastclick.net  TRUE  /  FALSE  1067522048  pluto  477609405|0|
.flycast.com  TRUE  /  FALSE  1292752600  EngaGlobID  CTG000F2EAD7D8F2B9050E740B8D8FBE576
.flycast.com  TRUE  /  FALSE  1292752600  atf  7_702707920672
.flycast.com  TRUE  /  FALSE  1292752600  ctime  999225084
.flycast.com  TRUE  /  FALSE  1292752600  ngt  7
by.advertising.com  TRUE  /  FALSE  1002257590  77475704  !ee820009997597000072!00000000-00006b4400007b28-2b977cfc-00000000-*727.770.70.756*
cookies.cmpnet.com  FALSE  /  FALSE  1046224676  Apache  727.770.70.756.78526999568652792
ehg-dig.hitbox.com  TRUE  /  FALSE  1027274642  DM570222G4FSV6  V7Xi(#Xz^^^e@iCiC%@rB^ez%zrzrz^^^e@iCiC"^^^e@iCiC"^^^e@iCiC%@rB^ezA6ca~auh2f2aF6Ga:G~a6haTahDFF:62T_aK|Ofm~zOffGz66kkk|W::W~a|c:m6FaIhcOtp.xBBhaTaxBrhDFF:xBB
esg.hitbox.com  TRUE  /  FALSE  1027800470  E70220JGDN  V7A62f>:KH:maz%rrrBeCBe^A6Vafz%rrrBeCXBX
hc2.humanclick.com  FALSE  /hc/88919926  FALSE  7027299029  22:0  HumanClickID  727.770.70.756-795007258-9997772

Once a company has a profile of my interests, it can tailor advertising to make it more likely I
will click on an ad. Read a bit about what DoubleClick does.
Web bugs work like banner ads, but are invisible to the user (typically a tiny transparent
image); they are used purely to build up a profile of the user.
You can find a lot of useful information about privacy issues on the Privacy Power! web site.

Content filtering
Controlling what pops up during normal surfing, and defining limits for what children or
others may see, is very difficult. The Web is a huge collection of pages that are being created
and removed all the time; it's impossible to define a list of off-limits sites once and leave it at that.
The main options are:
URL filtering:
make a list of permitted sites; only these sites can be visited. Look at the ChiBrow
- Children's Browser.
make a list of banned URLs yourself (a hopeless task!), or depend on a service
provider to do this for you; look at Surf on the safe side and Surf Monkey.
Text filtering: search the page for unwanted words, expunge unwanted words, or
refuse to load the page altogether (a naive sketch of this approach follows the list).
Content ratings: we need to trust someone to rate sites for us; some systems rely
on web sites declaring their own content, which is hardly a dependable way of
eliminating unwanted content. See Content Rating and Filtering.
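As a naive illustration of the text filtering approach (not a real product; the word list, URL and
tools are assumptions), a small shell script might fetch a page and refuse to show it if it
matches a ban list:

#!/bin/sh
# banned_words.txt contains one unwanted word per line
wget -q -O page.html "http://www.example.com/some/page.html"
if grep -q -i -f banned_words.txt page.html; then
    echo "Page blocked: it contains words on the ban list"
else
    cat page.html
fi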
Filtering is not foolproof, since filtering software is not intelligent enough to block all
unwanted pages without blocking innocuous pages as well. Some see any form of
filtering as an infringement on civil liberties, and provide software to circumvent it.
For further information on filtering, see:
the CPSR filters FAQ
Child Welfare filters
GetNetWise tools for families
Web filters: Which ones work?
Internet filtering software.

Checkpoint Questions:
Contributors:
Vaughan Shanks (shanks@cs.rmit.edu.au)
Saied Tahaghoghi (stahagho@cs.rmit.edu.au)

COSC1300 - Lecture Notes
Web Servers and Web Technology
Copyright 2000 RMIT Computer Science
All Rights Reserved
