Routing and Switching

As networks increase in size, powerful computers and sophisticated applications drive the need for greater
network bandwidth, performance, and scalability. Users are concerned with sending and receiving data
quickly and reliably.
This appendix first provides an overview of the routing and switching technologies. It then describes the
routing and switching equipment selected and installed at Terra Flora, the fictitious case study described in
Chapters 4 and 5 of this book, and gives the technical and business reasons for those choices.
Note: Terra Flora is a totally fictitious corporation. The names of companies, products, people, characters, and data mentioned herein are fictitious and are in no way intended to represent any real individual, company, product, or event, unless otherwise noted.

Routing

The routing process allows messages (or packets) to be delivered from a node on one network to a node on another network using the most appropriate, efficient path (or route). Routing environments employ routers, which function at the network layer of the OSI model (described in Chapter 1 of this book). Routers direct data packets to the proper network and deliver them to the appropriate node on that network. The router uses protocols, a set of rules governing the exchange of information between nodes, to direct packets to their destination. The sending node (referred to as the source) and the receiving node (referred to as the destination) must use either the same protocols or a protocol converter in order to communicate. Routers also allow data packets to be transmitted over dissimilar networks, such as Ethernet, Token Ring, and FDDI, without having to be translated.
Determining the most appropriate, efficient path is protocol-specific. Some protocols determine the path by hop count, which is the number of routers between the source and the destination; other protocols determine the path by analyzing the available bandwidth and calculating which route provides the best quality of service.
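As a minimal sketch of hop-count path selection (hypothetical router names and counts, not any particular routing protocol), the following Python fragment simply picks the candidate route with the fewest intervening routers:

    # Candidate routes between the same source and destination, expressed as
    # tuples of routers, with their hop counts (illustrative values only).
    routes = {
        ("R1", "R4"): 2,
        ("R1", "R2", "R3", "R4"): 4,
    }
    best = min(routes, key=routes.get)   # fewest routers wins
    print("Best route by hop count:", " -> ".join(best))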
In a routing environment, each physical destination must be uniquely identified. Most routing protocols are based on an addressing scheme that uses a network number and a node number to identify each node.
When a computer on an internetwork wants to communicate with a node on another network, the network
layer software creates a packet. The packet contains the data to be sent, the address of the sender (source
address), and the address of the destination (destination address).
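Conceptually, such a packet is a small record. The following is a minimal sketch in Python, with illustrative field names rather than an actual wire format:

    from dataclasses import dataclass

    @dataclass
    class Packet:
        source_address: str        # network and node number of the sender
        destination_address: str   # network and node number of the receiver
        data: bytes                # the information being delivered

    pkt = Packet(source_address="1.12", destination_address="3.7", data=b"hello")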
The following diagram illustrates how packets break down information, such as the sender, sender address,
destination address, and data.

[Figure: The parts of a packet: sender address, destination address, and data]

The packet is put inside the appropriate frame for transmission across the network.
The network layer software also determines whether the destination resides on the local network or another network. If the destination address is on another network, the data packet is sent to the router that is attached to the local network. When the router receives the data packet, it removes the frame encapsulating the data packet and examines the destination address of the network-layer packet to find the network on which the node resides.

[Figure: A packet routed from one network to another through routers]


The router then compares the network information to its routing table, which contains directions on how to
get to various places on the internetwork. There can be a number of networks and routers between a source
and destination. When the destination's network is found in the table, the router puts the network-layer
packet into the appropriate frame for the next network over which the packet will be transmitted, and sends
it to the next network in the direction of the destination node. Each router between the source and
destination network functions in this manner.
When the router attached to the network containing the destination node receives the packet, it removes
the frame encapsulating the network-layer packet and compares the destination network address to its
routing table.
When the router determines that it is directly attached to the destination network, it examines the packet
for the destination node number, which is compared to the router's Address Mapping Table. This table
correlates the packet's network-layer address to the device's Media Access Control (MAC) address. When a
match is found, the router puts the data from the packet into the appropriate frame and sends it to the
destination node.
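The two-stage lookup described above can be sketched as follows, with hypothetical table contents (real routers build and update these tables dynamically):

    # Hypothetical tables for one router (illustration only).
    routing_table = {"net3": "net2"}        # next network toward each remote network
    address_mapping_table = {"1.5": "00-A0-C9-0F-12-34"}  # node -> MAC on local nets
    local_networks = {"net1"}

    def forward(dest_network, dest_node):
        if dest_network in local_networks:
            # Directly attached: resolve the node's MAC and frame the data for it.
            return f"deliver in a frame addressed to {address_mapping_table[dest_node]}"
        # Not attached: re-frame the packet for the next network toward the destination.
        return f"re-frame and relay toward {routing_table[dest_network]}"

    print(forward("net3", "9.2"))   # relayed: net3 is reached via net2
    print(forward("net1", "1.5"))   # delivered: net1 is directly attached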


Switching

Switching uses temporary connections for routing information. This communications method is widely implemented in local area networks and provides better performance on the LAN than routers in two ways:

• First, switches provide a means of directing a frame to the appropriate output port, typically using data-link layer addressing. The hardware itself forwards the frame to the output port, whereas routers use software to route frames. Because the data-link layer does not use software to forward frames to the proper route, performance is improved.

• Second, switches improve performance by enabling communication between logical groupings of users. In a routing environment, only one communication at a time can take place, and the computers must wait until the line is free to communicate. Switches act as bridges, allowing messages to be routed from one port on the switch to another; there is no requirement to wait until another communication finishes before communicating.
Switching technology uses switches, which are multiport devices that create temporary paths to send frames directly to a device based on its MAC address. There are three basic switching technologies: configuration, frame, and cell switching. Another technology, frame-to-cell translation, enables the migration to cell-based backbones without changes to the host network's interface.

Configuration switching, or port switching, allows individual ports to be assigned to individual segments within a multi-segmented network hub. This provides the equivalent of an intelligent patch panel for network centers and wiring closets. Port assignments are performed when devices are initially attached to networks. This type of switching offers the ability to segment networks quickly for better performance. Configuration switching operates at the physical level and is transparent to end systems and upper-layer protocols.

Frame switching is a connectionless technology that provides cost-effective bandwidth to the workgroup and multiplexes low-speed traffic from the wiring closet onto high-speed down links. Operating at the data-link layer, frame switches "learn" the destination MAC addresses of each attached computer. Using the MAC address of each received frame, the switch forwards the packet to the output port attached to the computer with the same MAC address. The output port can be directly attached to the computer, to a shared access segment with multiple users, or to another frame switch, which, in turn, is connected to the destination. Frame switching supports both Ethernet switching and Token Ring switching. Ethernet frame switching supports shared or dedicated 10-Mbps or 100-Mbps connections. A Token Ring switch, also referred to as a Token Ring frame switch, is a high-capacity, high port-density frame-relaying engine that forwards data simultaneously, among all ports, at wire speed (4/16 Mbps).
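The learning behavior can be sketched in a few lines of Python; this is a generic illustration of MAC learning with made-up addresses and port numbers, not any vendor's implementation:

    # Forwarding table mapping MAC address -> output port, built as frames arrive.
    forwarding_table = {}

    def handle_frame(src_mac, dst_mac, in_port):
        forwarding_table[src_mac] = in_port        # learn where the sender lives
        out_port = forwarding_table.get(dst_mac)
        if out_port is None:                       # unknown destination:
            return f"flood to all ports except {in_port}"
        return f"forward on port {out_port}"       # known destination: one port only

    handle_frame("00-00-01", "00-00-02", in_port=1)         # learns 00-00-01 on port 1
    print(handle_frame("00-00-02", "00-00-01", in_port=2))  # forward on port 1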
Cell switching, also called Asynchronous Transfer Mode (ATM), is a high-speed technology that switches fixed-length, 53-byte cells at speeds of 155 Mbps and higher. Designed to carry voice, video, and data traffic, cell switching supports asynchronous traffic. Cell switching also supports different classes of traffic: loss-sensitive, delay-sensitive, delay-variance sensitive, and various combinations of these.
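A back-of-the-envelope calculation shows what those cell numbers imply, assuming the standard 48-byte payload in each 53-byte cell and ignoring lower-layer framing overhead:

    line_rate_bps = 155_000_000               # nominal 155-Mbps interface
    cell_bits = 53 * 8                        # fixed 53-byte cell
    cells_per_sec = line_rate_bps / cell_bits
    payload_bps = cells_per_sec * 48 * 8      # 48 of the 53 bytes carry payload
    print(f"{cells_per_sec:,.0f} cells/s, ~{payload_bps / 1e6:.0f} Mbps of payload")
    # -> 365,566 cells/s, ~140 Mbps of payload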

Backbone Link Node
Backbone Link Node (BLN) is the Bay Networks multiprotocol router/bridge. The symmetric multiprocessor
architecture of BLN satisfies the high-performance and availability requirements of backbone internetworks
using, for example, 100BASE-T, FDDI, HSSI, ATM, and SNA. Its symmetric multiprocessor architecture uses
multiple MC68060- and MC68040-based Fast Routing Engines (FRE), multiple dual PowerPC microprocessor-
based ATM Routing Engine (ARE) processor modules, and Bay Networks 1-Gbps Parallel Packet Express
(PPX). The BLN provides 300,000 pps forwarding performance when equipped with the MC68060-based FRE.

The dual PowerPC-based ARE supports 155 Mbps full-duplex virtual network routing to provide connectivity
between multiple virtual LANs over a single ATM interface.
The BLN supports 4 FRE/ARE processor modules, 16 LAN interfaces, and 32 serial interfaces. A BLN
configured with symmetric multiprocessors, redundant LAN interfaces, processor interconnects, power
supplies, and software image storage is completely fault resilient. A redundant router capability also
provides another level of network fault tolerance.

Bay Networks 28000 Series Switches
For bandwidth-intensive applications, the Bay Networks 28000 series of switches enhance existing Ethernet
LANs and provide 2 gigabits per second (Gbps) of internal switching and scalable 10/100 megabits-per-
second (Mbps) dedicated bandwidth to support high-demand networks. These features enable the switch to
provide scalable, dedicated bandwidth to attached end users, shared-media segments, servers, and high-
end workstations.
All 28000 series switches include two high-speed, front-panel expansion ports for supporting direct 200
Mbps, full-duplex connection to other switches. Up to seven 28000 switches can be linked in a single stack,
providing a scalable solution. The 28000 switches support redundant links between devices, in which one
link automatically assumes standby status to provide a backup data path in the event of a primary-link
failure.
Optivity Enterprise Network Management

All Bay Networks routers, intelligent hubs, and high-speed switches are fully manageable through the Bay Networks Optivity Enterprise family of integrated network management applications, which includes:

• Optivity LAN provides comprehensive management of switching hubs and remote troubleshooting of client/server problems.

• Optivity ATM delivers management services for Bay Networks-based ATM networks.

• Optivity Design & Analysis focuses on enterprise network planning, analysis, and reporting.

• Optivity Internetwork performs real-time router performance monitoring and status reporting.

Routing and Switching at Terra Flora

In Chapter 4 of this book, we proposed a plan for uniting the three independent and diverse networks of the
fictitious company, Terra Flora. One of the company's main goals was to centralize all administration.
Note: Terra Flora is a totally fictitious corporation. The names of companies, products, people, characters, and data mentioned herein are fictitious and are in no way intended to represent any real individual, company, product, or event, unless otherwise noted.
Terra Flora elected to use Bay Networks products, which combine a distributed management support foundation with SNMP-based tools for comprehensive router configuration, monitoring, and control. They will implement the Bay Networks Switched Internetworking Service (BaySIS) architecture. This extensible switched internetworking architecture comprises four basic services (transport, policy, operation, and design), which are implemented across the enterprise network. In this way, Terra Flora will integrate multiprotocol routing, switching, and shared-media and wide-area solutions into a cohesive, switched topology, all managed by a single network management system.

The Bay Networks Access Stack Node (ASN) router has a stackable architecture. Up to four ASN units are
supported in one stack. An ASN stack supports up to 40 network interfaces and 200,000 pps forwarding
performance, providing a superior path for growth. The MC68040 processor in the ASN's integrated design
maintains high forwarding and filtering rates, regardless of the number of protocols and network interfaces
used, even when processing Simple Network Management Protocol (SNMP) management inquiries.
The ASN meets the connectivity needs of the Terra Flora remote branch offices by offering modularity and
flexibility for building configurations. The ASN provides network connectivity through a selection of net
modules and adapter modules. An ASN can support up to four net modules, such as 100BASE-T, 10BASE-T
Ethernet, 4/16 Mbps Token Ring, FDDI, Synchronous, and ISDN BRI, to meet a wide variety of connectivity
requirements. Wide-area services, such as PPP, X.25, Frame Relay, SMDS, HDLC encapsulation, and ATM
DXI, are supported by the ASN synchronous interface.
The method used to accomplish this consolidation of resources and information is described next.
The following diagram of the network shows the NENTS40B0FO1, NENTS40DIV01, NENTS40ENT01,
EUNTS4ENT01, and EUNTS40DIV01 servers running multi-provider router (MPR) software. MPR passes
requests to the various network providers configured in the system. These servers are connected to a Bay
Networks Backbone Link Node (BLN) router through T1 links.

[Figure: The Terra Flora collapsed WAN and LAN backbones]

The BLN creates a multi-protocol, collapsed, WAN backbone that provides a centralized wide-area
infrastructure. The BLN is also connected to another BLN over a T3 link which is attached to two Bay
Networks 28000 Series Fast Ethernet switches to form a multiprotocol LAN backbone. These two switches
appear logically on the Terra Flora network diagram as three switches. One 28000 switch connects to the
Terra Flora Enterprise level; the second 28000 switch connects to the Terra Flora Division, Department, and
Desktop levels.
The ASN is connected to the first BLN by means of a dedicated leased line to provide access to branch
servers and corporate systems. The Bay Networks router is configured to support a dial-on-demand feature,
which is used when additional capacity is required between the corporate and remote regional offices. For
example, during monthly end processing, transmission of data frequently exceeds the 128K capacity of the
leased line. The Bay Network's dial-on-demand feature then establishes a second connection to provide
additional bandwidth.

Recovery from leased line failure is provided through the ASN dial backup feature, which provides a two-
point, fault-tolerance measure. Two lines are configured at Terra Flora. The second line is activated or used
only when the first line goes down. This means that although two lines are available, usage charges apply
only to the line that is being used. This reduces operational costs by delivering the connection only when
needed and ensuring continued operation in the event of a network failure.
The entire network infrastructure is managed by the Bay Networks Optivity network management tools.
These tools enable the network manager to configure the routers and switches, monitor and evaluate the
network, and react to problems on the network from a central console.
The collapsed WAN backbone interconnects the Terra Flora LANs and computing devices across long
distances by way of the T1 and T3 links. The LAN backbone ensures that traffic is directed to the destination
over the best possible route and, when coupled with the switches' dedicated high-speed connections,
ensures high data throughput. Because routing is employed on both the LAN and WAN backbones, all
resources, such as nodes, servers, and computers, can interoperate, regardless of the type of network they
are on and the distance between them.
Because of this connectivity, users at the New York retail store now have access to the applications and
records contained in servers NENTS40B0FO1, NENTS40DIV01, NENTS40ENT01, EUNTS4ENT01, and
EUNTS40DIV01, as well as to all information and applications at the Enterprise, Division, Department, and
Desktop levels.
Security measures can be put in place in the router to block access to particular portions of the network. For
example, filters can be configured in any of the BLNs to drop data destined for a network based on source
address or destination address. The data packet information is examined and, if it is being sent to a specific
server, the data will be discarded, thus limiting access to the network.
In the network diagram, the routers allow access from regional remote retail stores to corporate resources.
Data is transmitted from these remote locations over WAN links, ensuring that individuals in the corporate
and regional offices have needed information to satisfy current and future customer needs. This information
is also passed to the appropriate corporate offices to maintain records, ensuring that all stores have
adequate merchandise to sell to the consumer. Sales information from the retail stores can be sent in a
timely fashion to the Enterprise and Division computers for processing inventory management, billing
purposes, and so forth.
Stocking information, changes in products, sales incentive programs, advertising literature, and other
information can be easily transmitted from corporate headquarters to the retail stores and all other remote
Terra Flora sites over the internetwork.
The routers determine the best possible paths for sending data and avoiding breakdowns in
communications. Through the router's lookup table, changes in the network are reported to the routers and
the tables are updated with information about the new configuration. This enables IP traffic to be rerouted if,
for example, a network change prevents the sent message from reaching its destination. This ensures
continued communications.
Corporate data, such as employee lists, resource materials, and accounting facilities, are also accessible
from anywhere on the internetwork through routing. Authorized users can access this information from their
offices and interoperate with others around the corporation, using a variety of applications.
The Bay Networks Optivity console functions as a centralized point of administration for managing all
network resources. Management servers offer a number of services including Logon Authentication,
replicated User Account databases, centralized Network services, Name Resolution services, and Backup
Services at the Enterprise level. These provide a consistent, master copy of common information and
resources. A centralized architecture of file and print application servers is in place to provide heterogeneous
file and print interfaces, integration services, backup services, and intranet services. All of these servers are
accessible throughout the internetwork through the routers.
The configuration of the Bay Networks equipment is as follows:
BLN Base Unit with:
• Quad-port Ethernet Intelligent Link Module with 32 MB memory
• Single-port High-Speed Serial Interface (HSSI) Intelligent Link Module
• Quad-port Synchronous Intelligent Link Module with 32 MB memory
• Dual-port Multichannel T1 Intelligent Link Module
• Backbone Node Corporate Software Suite Version 10.0

BLN Base Unit with:
• Quad-port Ethernet Intelligent Link Module with 16 MB memory
• Single-port High-Speed Serial Interface (HSSI) Intelligent Link Module
• Quad-port Synchronous Intelligent Link Module with 16 MB memory
• Single-port Multichannel T1 Intelligent Link Module with 8 MB memory
• Backbone Node Corporate Software Suite Version 10.0

28000 Series Switch
• Model 28115 Fast Ethernet Switch
• 16-port 10/100 Ethernet Switch

ASN Base Unit
• Dual Ethernet Net Module
• Dual Synchronous Net Module
• Multimode FDDI Net Module
• 32 MB DRAM
• ASN Corporate Software Suite Version 10.0

Network Management Software
• Optivity for OpenView (MS-DOS-compatible version)

Speed Up Your NT Network

By Ed Woodrick

On This Page
Reduce or eliminate subnets and routers to increase your network's speed
To Subnet or Not to Subnet: That's the Real Question
Routers: A Path Not Taken?
Putting Thoughts to Paper: The Network Design
Designing the Network Services
Configuring the Network
After It Is Built

This article appeared first in Windows NT Magazine in September 1997
Reprinted with permission from Windows NT Magazine.
In "Techniques to Speed Up Your NT Network" (April 1997), Joel Sloss describes how to create a network to
connect clients to Windows NT servers. In this article, I'll show you another network design that can increase
network speeds even more. I always try to design networks that install easily, work fast, reduce costs, and
increase reliability. To create such networks, you need to avoid subnets and reduce the number of routers.
To Subnet or Not to Subnet: That's the Real Question
One major difference between Joel's design and mine is the use of IP subnets. Joel's design features four subnets. I
prefer to configure a network without subnets because their use increases complexity and usually decreases
performance.
Ethernet switching provides an easy way to segment a network without subnets. The difference between Ethernet
switching and IP routing is how deep the device must look into the packet to determine where to send it. In IP
routing, the device must look deep into the packet. In Ethernet switching, the device has to look at only the first
few bytes, enabling much faster traversal of the network. Ethernet switches are protocol independent and require
minimal, if any, configuration. In other words, they're almost Plug and Play.
You can also use IP switches (new devices that offer the speeds of switches and the ability to route packets
simultaneously), but their cost is high. As the price of IP switches drops, they will become viable for networks. But
at this time, you can justify them for only extremely high-speed TCP/IP networks.
In addition to speed and simplicity, switches offer versatility. With Ethernet switches, you can mix 10Base-T and
100Base-T adapters on the same network, decreasing costs. Many Ethernet adapters feature either 10 Megabits
per second (Mbps) or 100Mbps operation. The only difference is the port they plug into. Because the network
automatically detects the adapter's speed, you don't need any protocol or setup modifications.
With Ethernet adapters, you can connect 10Base-T hubs to 10/100 network cards in your workstations and servers.
Then as the need develops, you simply plug the 10/100 network card into a 100Base-T hub, increasing throughput
without touching the workstation.
Routers: A Path Not Taken?
Another difference between Joel's design and mine is how the network connects to other corporate networks or the
Internet. Joel suggests that you create a router-to-router connection to link networks. Routers, however, have too
many configuration parameters that you can inadvertently misconfigure. As a result, I suggest that you use a port
on the existing corporate router to link to other corporate networks. To connect to the Internet, you probably need
another router to link to your Internet Service Provider (ISP). You might also need a firewall to provide protection.
To minimize costs while keeping network throughput high, I use 10Base-T ports for standard workstations,
100Base-T ports for advanced workstations, and 100Base-T switched ports for servers. As Table 1 shows, ports for
10Base-T are inexpensive. But, if you have the money, you can connect everyone to 100Base-T, which will
increase throughput for network intensive applications.
Table 1: Price per port

Port type                             Price per port
10Base-T Hub                          $10
100Base-T Hub                         $60
10Base-T Switched                     $100
100Base-T Switched                    $600
10Base-T to 100Base-T Converter       $500

Although I try to avoid using routers and subnets, they do have their place. I can't always design a network
without subnets. Subnets work effectively for linking locations with low-speed connections, connecting large
numbers of computers, and setting up networks that have many protocols.
Similarly, you might need to use routers. But router configuration isn't for the beginner. With a few hundred
parameters to configure, you need to know what you are doing.
Putting Thoughts to Paper: The Network Design

With these considerations in mind, you can start putting the network design on paper. To begin, you must
determine the configuration of the hubs, switches, and routers. An average small office needs to provide for about
50 low-speed hub connections, 12 high-speed connections, and 6 high-speed switched connections for the servers.
You can use low-speed connections for a majority of the devices on the network, such as standard workstations,
printers, routers, and other instruments. You need to use high-speed connections for engineering workstations and
other devices that need high-speed access but usually talk to only one or two other devices. You must use high-
speed switched connections to servers or to any device that needs high-speed access and connects to many
different devices on your network.
You can configure the network in many different ways. I like to use a 10/100 switch as the central point and
connect the 10Base-T hubs, 100Base-T hubs, and servers to it. Figure 1 shows this configuration.

[Figure 1: The network design, with a 10/100 switch as the central point]

The amount of available bandwidth can help you determine whether to use a hub, switch, or router as the central
point in your network. As Figure 2 shows, using a hub is like using one garden hose to connect all the devices. All
information flows through the same line.

[Figure 2: With a hub, all devices share the bandwidth of a single connection]

Using a switch or router is like using several garden hoses to connect each device on the switch or router. Because
information flows through several lines, throughput increases significantly.

Routers and switches differ in their total speed capability. Routing imparts a significant penalty: Many smaller
routers have problems keeping up with 10Base-T. Some smaller routers can't even handle a T-1 line at 1.544Mbps.
Most switches provide full throughput between ports, providing a total of 400Mbps for an 8-port 100Base-T switch.
To provide the fastest connections to the application servers, I give the servers a dedicated 100Mbps port on the
switch. If the switch and the network adapter support full duplex, you can run both devices at 200Mbps with no
collisions. This configuration provides an extremely fast connection from the clients to the server. Collisions are
isolated to the user segments, and each server can obtain full 200Mbps throughput.
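A quick sketch of the arithmetic behind those figures (assuming wire-speed, non-blocking switching between port pairs):

    ports = 8
    port_speed_mbps = 100                 # 100Base-T
    pairs = ports // 2                    # simultaneous port-to-port conversations
    aggregate = pairs * port_speed_mbps   # 4 x 100 = 400 Mbps across the switch
    full_duplex = 2 * port_speed_mbps     # send + receive = 200 Mbps per server
    print(aggregate, full_duplex)         # 400 200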
Designing the Network Services
After designing the network, you need to create the network services design. Because you don't have multiple subnets to worry about, you might be tempted to put all the network services, such as Primary Domain Controller (PDC), Domain Name System (DNS), Windows Internet Name Service (WINS), and Dynamic Host Configuration Protocol (DHCP), on one server. But you don't want to build in a single point of failure.
Instead, you can use two servers, each capable of performing all the necessary services. Neither server needs to
be very large or fast, just reliable. Two 486 or small Pentium systems, for example, can easily fill the needs of up
to 100 users.
When designing the services, you first need to determine the domain controller architecture. An NT domain
controller provides security for a network. It lets you centralize user administration to provide a fairly secure
network. The NT domain system consists of a PDC and any number of Backup Domain Controllers (BDCs).
Although BDCs are optional, I strongly suggest having at least one. If your PDC fails and you don't have a BDC,
you will lose all security information and the ability to access most of the network. Thus, you need to configure
both a PDC and BDC. (For more information about how to configure PDCs and BDCs, see Ed Tittel and Mary
Madden, "PDCs, BDCs, and Availability," August 1996.)
Although not mandatory, I recommend including DHCP in your network because it can save you a lot of time. DHCP
lets you set up workstations and servers without worrying about TCP/IP addresses, gateways, or subnets. You
simply set up the package once and then forget about it. DHCP also makes using laptop computers on different
networks easy. You just plug the laptop into a new network and reboot the computer. NT will issue the laptop a
new IP address so that it can operate on the new network. (For more information about DHCP, see Mark Minasi,
"DHCP and Assigning IP Addresses," August 1996.)
Another network service that you can add is WINS, which is a lot like DNS for NetBIOS. On a basic NetBEUI
network, computers periodically broadcast who and where they are so that they know about each other's presence.
These broadcasts add a lot of traffic to a network and do not work across routers. WINS provides a centralized
database of computer names and associated IP addresses to reduce network traffic and enable connectivity across
routers.
When a computer running NetBIOS over TCP/IP starts, the computer looks in its configuration (which it probably
got from DHCP) and determines whether it knows where the WINS server is. If not, the computer reverts to
broadcasts. Otherwise, the computer registers with the WINS server. The registration process includes not only the
computer's name and IP address, but also a list of services available on that computer.
Workstations use the registration process to know where the domain controllers reside. Domain controllers register
as such with WINS. When other computers need the services of a domain controller, they query WINS for the
addresses of all domain controllers. WINS responds by providing those addresses. The clients then communicate
with the domain controllers. (For more information about WINS, see Mark Minasi, "NetBIOS Names and WINS,"
January 1997.)
One service that you might not need is DNS. Basic Microsoft TCP/IP does not require DNS because WINS satisfies
all the requirements for name resolution. But if your network needs to communicate with UNIX servers or the
Internet, you will probably want to install a DNS server. If you are running Exchange Server, you will want to
install a DNS server to decrease client load times. (For information about DNS, see Spyros Sakellariadis,
"Configuring and Administering DNS," August 1996, and "Integrating and Administering DNS," September 1996.)
NT 4.0 DNS has a feature not present in many other implementations: It can query WINS for name resolution.
Thus, NT 4.0 and WINS can effectively meet all your name resolution needs.
Of the PDC, BDC, WINS, DNS, and DHCP services, DHCP generates the least amount of traffic and requires the
least amount of processor time. The DHCP server operates only when a workstation boots and every 36 hours (the
default) after that. Thus, on most networks, DHCP works only once when employees turn on their workstations.
Because networks use WINS, PDCs, and BDCs whenever the network accesses a resource on a remote system, the
usage of these servers occurs at about the same frequency.
For my network services configuration, I typically put PDC on one system and DHCP, DNS, and WINS on the other.
This configuration is a good split of resources, especially when running on slower systems. Although most systems
can easily handle all the services, I use two systems for redundancy.
I put secondary versions of WINS, DNS, and installed-but-not-operational DHCP on the PDC. Why isn't this DHCP
operational? One quirk with DHCP is that it doesn't recognize secondary or backup servers. This quirk isn't a
problem, however, because with default settings, a DHCP server can be unavailable for up to three days and the
network will continue to operate. But for those of us with redundancy on the brain, this safety net isn't enough.
One alternative is to set aside a range of unused IP addresses on the primary DHCP server. If the primary server
fails, you can turn on the backup server and configure it with the IP addresses, enabling the network to continue its
operations.
Figure 3 shows a complete network and network services design. Server A and Server B are application or file
servers needing fast connections to clients, so they have 100Base-T ports. The PDC and BDC do not require fast
connections, so they have 10Base-T ports. To provide redundancy, the PDC and BDC are on separate hubs. Besides
user workstations, other devices on the 10Base-T hubs are printers and the router connection.

[Figure 3: The complete network and network services design]

Configuring the Network
The next task is to configure the network. Steps 1 through 5 summarize the configuration process.


Step 1: Set up the PDC and BDC. In an NT domain, you need to configure the PDC server first. You will
install WINS and DHCP on the PDC server, so you need to assign the server a static IP address. Before starting the
installation, you must determine the server name, domain name, and Administrator password. The server name
can consist of up to eight characters with no punctuation. The domain name can be the same length and contain a
dash. (These restrictions for the server and domain names are tighter than Microsoft's current requirements. I
added more restrictions to accommodate NT 5.0's enhancements to the domain system.) The Administrator
password, which must be hard for another person to guess, can be a combination of characters and numbers.
NT Setup will prompt you to set up the PDC during the installation. When NT Setup comes to the server function,
select Primary Domain Controller. You will then get the options to set the Administrator password and domain
name. After you configure the PDC, you can set up the BDC, selecting Backup Domain Controller as the system
function. You will need to enter the domain name and password during the setup so that the BDC can authenticate
the PDC and replicate the user database.

Step 2: Set up DHCP. Because DHCP doesn't recognize backup servers, you need to configure only the PDC. One DHCP server can provide services to a subnet or an entire network. If the network has multiple
subnets, you need to configure the routers so that they route the DHCP packets to the DHCP server. (Because the
network design discussed here has only one subnet, I will not explain how to do this task.)
To install DHCP, you need to go to the Network applet in Control Panel and add the DHCP Server service. In DHCP,
each subnet is known as a scope. A scope contains the information necessary for configuring a specific subnet.
DHCP has global and scope settings. A global setting is the default for all scopes on the server. An example of a
global setting is the domain name. The domain name is usually the same for all workstations in a company.
Defining the information once simplifies the administration of servers that have many scopes. If you have one
scope, you can define settings as either global or scope settings, but good procedure dictates that you put settings
in their correct place. With correct scope settings, you can avoid many problems if you have to expand the network
in the future.
One of the biggest problems on a TCP/IP network is the assignment of IP addresses and subnets. Two sources
assign IP addresses. If you are connecting to the Internet, you will receive the IP addresses from your ISP. If you
are connecting to private networks, use one (or a portion) of the address spaces set aside by Request for
Comments (RFC) 1918: 10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, and 192.168.0.0 to
192.168.255.255.
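As a quick sanity check, Python's standard ipaddress module can confirm whether an address falls within one of these RFC 1918 private blocks:

    import ipaddress

    private_blocks = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("172.16.0.0/12"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def is_rfc1918(address):
        ip = ipaddress.ip_address(address)
        return any(ip in block for block in private_blocks)

    print(is_rfc1918("192.168.1.25"))   # True
    print(is_rfc1918("172.32.0.1"))     # False: just outside 172.16.0.0/12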
Step 3: Set up WINS. WINS is one of the easiest network services to install and configure. (As a result,
it's the one that many people tinker with and mess up.) To install WINS, you need to go to the Network applet in
Control Panel and add the Windows Internet Naming Service. You don't have to set any properties during
installation. The system will request that you reboot when you finish.
Because WINS lets you set up secondary servers, you can replicate databases to other servers. The hardest part of
installing WINS is configuring the replication. To change the WINS configuration, select the WINS Manager in the
Administrative Tools Program Group. You don't need to change any of the operating parameters except to enable
replication. Then select the server on the Server Replication Partners window and enable the Push Partner and Pull
Partner selections at the bottom of the window. You must complete this task for each server.
Step 4: Set up DNS. With Microsoft networking, you do not need DNS. However, you will need DNS if you
are running UNIX systems, other non-Microsoft network platforms, or Exchange clients. For an internal network,
the parameters are fairly simple. (Internet or intranet DNS goes beyond the scope of this article.) Like WINS, just
select the Domain Name System service from the Network applet in Control Panel. Choose defaults whenever
possible.
Step 5: Connect to other networks and the Internet. I designed the network topology in Figure 3 to make the
integration with the rest of the corporate network as simple as possible. As long as the corporate network
administrator assigns the IP network address, the connection will be fairly easy. Just get the network administrator
to configure a port that will connect you to the rest of the network. To make the network run, all you need is a
10Base-T connection from one of the hubs to the corporate router.
Many methods are available to connect the network to the Internet. The easiest method to use is Microsoft's Proxy
Server and a dial-up modem connection. This method is not the most elegant or flexible solution, but it is
inexpensive and easy to configure.
After It Is Built
After you have configured the network, you're basically finished. The only maintenance you need to schedule is a
periodic compaction of the WINS database if you are not running NT 4.0. Many networks use the AT service to
perform this maintenance as a weekly process. The only other task you might need to do is add users and
workstations to the network. As you can see, ease of operation is the best feature of this network design.
About the Author

Ed Woodrick is a consultant with ED-COM, a networking and messaging organization. He is an MCSE and MVP
currently specializing in Exchange and Directory Services. You can reach him at ewoodrick@ed-com.com.
About Windows NT Magazine
Windows NT Magazine is the leading publication for corporate IS teams deploying Windows NT and related
applications and technologies. Each month you'll find timely, how-to articles, tips and techniques to help you get
the most out of your NT system. Keep up with the rapid pace of Windows NT technology changes and better equip
yourself to make those all-important technology decisions for your company. Subscribe today!
Over 100,000 paid subscribers receive Windows NT Magazine every month in over 110 countries. Don't delay, for
subscription information/to order, point your browser to:
http://www.winntmag.com/AboutUs/Index.cfm?Action=OurPubs
Call 970-203-2782 or 800-793-5712, or email subs@winntmag.com.
Network Load Balancing Technical Overview

Abstract
Network Load Balancing, a clustering technology included in the Microsoft Windows 2000 Advanced Server and
Datacenter Server operating systems, enhances the scalability and availability of mission-critical, TCP/IP-based
services, such as Web, Terminal Services, virtual private networking, and streaming media servers. This
component runs within cluster hosts as part of the Windows 2000 operating system and requires no dedicated
hardware support. To scale performance, Network Load Balancing distributes IP traffic across multiple cluster
hosts. It also ensures high availability by detecting host failures and automatically redistributing traffic to the
surviving hosts. Network Load Balancing provides remote controllability and supports rolling upgrades from the
Windows NT 4.0 operating system.
The unique and fully distributed architecture of Network Load Balancing enables it to deliver very high performance
and failover protection, especially in comparison with dispatcher-based load balancers. This white paper describes
the key features of this technology and explores its internal architecture and performance characteristics in detail.

On This Page
Introduction
Network Load Balancing Architecture
Network Load Balancing Performance
Summary
Glossary of Key Terms

Introduction
Internet server programs supporting mission-critical applications such as financial transactions, database access,
corporate intranets, and other key functions must run 24 hours a day, seven days a week. And networks need the
ability to scale performance to handle large volumes of client requests without creating unwanted delays. For these
reasons, clustering is of wide interest to the enterprise. Clustering enables a group of independent servers to be
managed as a single system for higher availability, easier manageability, and greater scalability.
The Microsoft® Windows® 2000 Advanced Server and Datacenter Server operating systems include two clustering
technologies designed for this purpose: Cluster service, which is intended primarily to provide failover support for
critical line-of-business applications such as databases, messaging systems, and file/print services; and Network
Load Balancing, which serves to balance incoming IP traffic among multi-node clusters. We will treat this latter
technology in detail here.
Network Load Balancing provides scalability and high availability to enterprise-wide TCP/IP services, such as Web,
Terminal Services, proxy, Virtual Private Networking (VPN), and streaming media services. Network Load Balancing
brings special value to enterprises deploying TCP/IP services, such as e-commerce applications, that link clients
with transaction applications and back-end databases.
Network Load Balancing servers (also called hosts) in a cluster communicate among themselves to provide key benefits, including:
• Scalability. Network Load Balancing scales the performance of a server-based program, such as a Web server, by distributing its client requests across multiple servers within the cluster. As traffic increases, additional servers can be added to the cluster, with up to 32 servers possible in any one cluster.

• High availability. Network Load Balancing provides high availability by automatically detecting the failure of a server and repartitioning client traffic among the remaining servers within ten seconds, while providing users with continuous service.

Network Load Balancing distributes IP traffic to multiple copies (or instances) of a TCP/IP service, such as a Web
server, each running on a host within the cluster. Network Load Balancing transparently partitions the client
requests among the hosts and lets the clients access the cluster using one or more "virtual" IP addresses. From the
client's point of view, the cluster appears to be a single server that answers these client requests. As enterprise
traffic increases, network administrators can simply plug another server into the cluster.
For example, the clustered hosts in Figure 1 below work together to service network traffic from the Internet. Each
server runs a copy of an IP-based service, such as Internet Information Services 5.0 (IIS), and Network Load
Balancing distributes the networking workload among them. This speeds up normal processing so that Internet
clients see faster turnaround on their requests. For added system availability, the back-end application (a
database, for example) may operate on a two-node cluster running Cluster service.

[Figure 1: A four-host Network Load Balancing cluster servicing Internet traffic, with a back-end two-node cluster running Cluster service]


Network Load Balancing is superior to other software solutions such as round robin DNS (RRDNS), which
distributes workload among multiple servers but does not provide a mechanism for server availability. If a server
within the host fails, RRDNS, unlike Network Load Balancing, will continue to send it work until a network
administrator detects the failure and removes the server from the DNS address list. This results in service
disruption for clients. Network Load Balancing also has advantages over other load balancing solutions (both hardware- and software-based) that introduce single points of failure or performance bottlenecks by using a
centralized dispatcher. Because Network Load Balancing has no proprietary hardware requirements, any industry-
standard compatible computer can be used. This provides significant cost savings when compared to proprietary
hardware load balancing solutions.
The unique and fully distributed software architecture of Network Load Balancing enables it to deliver the industry's
best load balancing performance and availability. The specific advantages of this architecture are described below
in the "Network Load Balancing Architecture" section.
Installing Network Load Balancing
Network Load Balancing is automatically installed and can be optionally enabled on the Advanced Server and
Datacenter Server versions of the Windows 2000 operating system. It operates as an optional service for local area
network (LAN) connections and can be enabled for one LAN connection in the system; this LAN connection is known as the cluster adapter. No hardware changes are required to install and run Network Load Balancing. Since
it is compatible with almost all Ethernet and Fiber Distributed Data Interface (FDDI) network adapters, it has no
specific hardware compatibility list.

Cluster IP Addresses

Once Network Load Balancing is enabled, its parameters are configured using its Properties dialog box, as described in the online help guide. The cluster is assigned a primary IP address, which represents a virtual IP address to which all cluster hosts respond. The remote control program provided as a part of Network Load Balancing uses this IP address to identify a target cluster. Each cluster host also can be assigned a dedicated IP address for network traffic unique to that particular host within the cluster. Network Load Balancing never load-balances traffic for the dedicated IP address. Instead, it load-balances incoming traffic from all IP addresses other than the dedicated IP address.
When configuring Network Load Balancing, it is important to enter the dedicated IP address, primary IP address,
and other optional virtual IP addresses into the TCP/IP Properties dialog box in order to enable the host's TCP/IP
stack to respond to these IP addresses. The dedicated IP address is always entered first so that outgoing
connections from the cluster host are sourced with this IP address instead of a virtual IP address. Otherwise,
replies to the cluster host could be inadvertently load-balanced by Network Load Balancing and delivered to
another cluster host. Some services, such as the Point-to-Point Tunneling Protocol (PPTP) server, do not allow
outgoing connections to be sourced from a different IP address, and thus a dedicated IP address cannot be used
with them.
Host Priorities

Each cluster host is assigned a unique host priority in the range of 1 to 32, where lower numbers denote higher priorities. The host with the highest host priority (lowest numeric value) is called the default host. It handles all client traffic for the virtual IP addresses that is not specifically intended to be load-balanced. This ensures that server applications not configured for load balancing only receive client traffic on a single host. If the default host fails, the host with the next highest priority takes over as default host.

Network Load Balancing uses port rules to customize load balancing for a consecutive numeric range of server ports. Port rules can select either multiple-host or single-host load-balancing policies. With multiple-host load balancing, incoming client requests are distributed among all cluster hosts, and a load percentage can be specified for each host. Load percentages allow hosts with higher capacity to receive a larger fraction of the total client load. Single-host load balancing directs all client requests to the host with the highest handling priority. The handling priority essentially overrides the host priority for the port range and allows different hosts to individually handle all client traffic for specific server applications. Port rules also can be used to block undesired network access to certain IP ports.
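As a rough sketch of how such a rule table could be evaluated (hypothetical rules and field names; the real rule matching happens inside the NLB driver):

    # Hypothetical port rules: each covers a consecutive range of server ports.
    port_rules = [
        {"ports": range(80, 81), "mode": "multiple-host"},               # balance HTTP
        {"ports": range(21, 22), "mode": "single-host", "priority": 1},  # FTP to one host
        {"ports": range(0, 21),  "mode": "disabled"},                    # block access
    ]

    def rule_for(port):
        for rule in port_rules:
            if port in rule["ports"]:
                return rule
        return {"mode": "default-host"}   # unmatched traffic goes to the default host

    print(rule_for(80)["mode"])   # multiple-host
    print(rule_for(23)["mode"])   # default-host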
When a port rule uses multiple-host load balancing, one of three client affinity modes is selected. When no client affinity mode is selected, Network Load Balancing load-balances client traffic from one IP address and different source ports on multiple cluster hosts. This maximizes the granularity of load balancing and minimizes response time to clients. To assist in managing client sessions, the default single-client affinity mode load-balances all network traffic from a given client's IP address on a single cluster host. The Class C affinity mode further constrains this to load-balance all client traffic from a single Class C address space. See the "Managing Application State" section below for more information on session support.

By default, Network Load Balancing is configured with a single port rule that covers all ports (0-65,535) with multiple-host load balancing and single-client affinity. This rule can be used for most applications. It is important that this rule not be modified for VPN applications and whenever IP fragmentation is expected. This ensures that fragments are efficiently handled by the cluster hosts.
Remote Control

Network Load Balancing provides a remote control program (Wlbs.exe) that allows system administrators to remotely query the status of clusters and control operations from a cluster host or from any networked computer running Windows 2000. This program can be incorporated into scripts and monitoring programs to automate cluster control. Remote control
operations include starting and stopping either single hosts or the entire cluster. In addition, load balancing for
individual port rules can be enabled or disabled on one or more hosts. New traffic can be blocked on a host while
allowing ongoing TCP connections to complete prior to removing the host from the cluster. Although remote control
commands are password-protected, individual cluster hosts can disable remote control operations to enhance
security.
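For day-to-day operation, a few representative local Wlbs.exe commands look like the following (an illustrative subset, not a complete reference; remote forms additionally take the cluster address and the remote-control password):

    wlbs query        (report the cluster's current state as seen from this host)
    wlbs drainstop    (block new traffic but let existing TCP connections complete)
    wlbs stop         (remove this host from the cluster immediately)
    wlbs start        (rejoin the cluster)
    wlbs disable 80   (stop handling new traffic for the port rule covering port 80)
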
Managing Server Applications
Server applications need not be modified for load balancing. However, the system administrator starts load-balanced applications on all cluster hosts. Network Load Balancing does not directly monitor server applications,
such as Web servers, for continuous and correct operation. Monitoring services are widely available for most
client/server applications. Instead, Network Load Balancing provides the mechanisms needed by application
monitors to control cluster operations²for example, to remove a host from the cluster if an application fails or
displays erratic behavior. When an application failure is detected, the application monitor uses the Network Load
Balancing remote control program to stop individual cluster hosts and/or disable load balancing for specific port
ranges.
Maintenance and Rolling Upgrades
Computers can be taken offline for preventive maintenance without disturbing cluster operations. Network Load
Balancing also supports rolling upgrades to allow software or hardware upgrades without shutting down the cluster
or disrupting service. Upgrades can be individually applied to each server, which immediately rejoins the cluster.
Network Load Balancing hosts can run in mixed clusters with hosts running the Windows NT® Load Balancing
Service (WLBS) under Windows NT 4.0. Rolling upgrades can be performed without interrupting cluster services by
taking individual hosts out of the cluster, upgrading them to Windows 2000, and then placing them back in the
cluster. (Note that the first port in the default port range has been changed for Windows 2000 from 1 to 0, and the
port rules must always be compatible for all cluster hosts.)
How Network Load Balancing Works
Network Load Balancing scales the performance of a server-based program, such as a Web server, by distributing
its client requests among multiple servers within the cluster. With Network Load Balancing, each incoming IP
packet is received by each host, but only accepted by the intended recipient. The cluster hosts concurrently
respond to different client requests, even multiple requests from the same client. For example, a Web browser may
obtain the various images within a single Web page from different hosts in a load-balanced cluster. This speeds up
processing and shortens the response time to clients.
Each Network Load Balancing host can specify the load percentage that it will handle, or the load can be equally
distributed among all of the hosts. Using these load percentages, each Network Load Balancing server selects and
handles a portion of the workload. Clients are statistically distributed among cluster hosts so that each server
receives its percentage of incoming requests. This load balance dynamically changes when hosts enter or leave the
cluster. In this version, the load balance does not change in response to varying server loads (such as CPU or
memory usage). For applications, such as Web servers, which have numerous clients and relatively short-lived
client requests, the ability of Network Load Balancing to distribute workload through statistical mapping efficiently
balances loads and provides fast response to cluster changes.
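The hashing algorithm itself is internal to Network Load Balancing, but the idea of statistical mapping can be sketched in Python: every host evaluates the same deterministic function over each incoming packet's source address, so exactly one host accepts the packet, in proportion to the configured load percentages. The host names, shares, affinity handling, and use of MD5 below are illustrative assumptions, not the actual algorithm:

    import hashlib

    hosts = {"host1": 50, "host2": 30, "host3": 20}   # load percentages, sum to 100

    def owner(client_ip, client_port, affinity="single"):
        if affinity == "single":
            key = client_ip                              # one client -> one host
        elif affinity == "classC":
            key = ".".join(client_ip.split(".")[:3])     # group a Class C range
        else:
            key = f"{client_ip}:{client_port}"           # no affinity: finest granularity
        bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % 100
        cumulative = 0
        for host, share in sorted(hosts.items()):
            cumulative += share
            if bucket < cumulative:
                return host                              # same result on every host

    print(owner("203.0.113.9", 1234))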
Network Load Balancing cluster servers emit a heartbeat message to other hosts in the cluster, and listen for the
heartbeat of other hosts. If a server in a cluster fails, the remaining hosts adjust and redistribute the workload
while maintaining continuous service to their clients. Although existing connections to an offline host are lost, the
Internet services nevertheless remain continuously available. In most cases (for example, with Web servers), client
software automatically retries the failed connections, and the clients experience only a few seconds' delay in
receiving a response.
Managing Application State

Application state refers to data maintained by a server application on behalf of its clients. If a server application (such as a Web server) maintains state information about a client session (that is, when it maintains a client's session state) that spans multiple TCP connections, it is usually important that all TCP connections for this client be
directed to the same cluster host. Shopping cart contents at an e-commerce site and Secure Sockets Layer (SSL)
authentication data are examples of a client's session state. Network Load Balancing can be used to scale
applications that manage session state spanning multiple connections. When its client affinity parameter setting is
enabled, Network Load Balancing directs all TCP connections from one client IP address to the same cluster host.
This allows session state to be maintained in host memory. However, should a server or network failure occur
during a client session, a new logon may be required to re-authenticate the client and re-establish session state.
Also, adding a new cluster host redirects some client traffic to the new host, which can affect sessions, although
ongoing TCP connections are not disturbed. Client/server applications that manage client state so that it can be
retrieved from any cluster host (for example, by embedding state within cookies or pushing it to a back-end
database) do not need to use client affinity.
To further assist in managing session state, Network Load Balancing provides an optional client affinity setting that
directs all client requests from a TCP/IP class C address range to a single cluster host. With this feature, clients
that use multiple proxy servers can have their TCP connections directed to the same cluster host. The use of
multiple proxy servers at the client's site causes requests from a single client to appear to originate from different
systems. Assuming that all of the client's proxy servers are located within the same 254-host class C address
range, Network Load Balancing ensures that the same host handles client sessions with minimum impact on load
distribution among the cluster hosts. Some very large client sites may use multiple proxy servers that span class C
address spaces.
In addition to session state, server applications often maintain persistent, server-based state information that is
updated by client transactions, such as merchandise inventory at an e-commerce site. Network Load Balancing should not be used to directly scale applications, such as Microsoft SQL Server (other than for read-only database
access), that independently update inter-client state because updates made on one cluster host will not be visible
to other cluster hosts. To benefit from Network Load Balancing, applications must be designed to permit multiple
instances to simultaneously access a shared database server that synchronizes updates. For example, Web servers
with Active Server Pages should have their client updates pushed to a shared back-end database server.
Network Load Balancing Architecture
To maximize throughput and high availability, Network Load Balancing uses a fully distributed software
architecture. An identical copy of the Network Load Balancing driver runs in parallel on each cluster host. The
drivers arrange for all cluster hosts on a single subnet to concurrently detect incoming network traffic for the
cluster's primary IP address (and for additional IP addresses on multihomed hosts). On each cluster host, the
driver acts as a filter between the network adapter's driver and the TCP/IP stack, allowing a portion of the incoming
network traffic to be received by the host. By this means incoming client requests are partitioned and load-
balanced among the cluster hosts.
Network Load Balancing runs as a network driver logically situated beneath higher-level application protocols, such
as HTTP and FTP. Figure 2 below shows the implementation of Network Load Balancing as an intermediate driver in
the Windows 2000 network stack.
Figure 2. Network Load Balancing runs as an intermediate driver between the TCP/IP protocol and the network
adapter drivers within the Windows 2000 protocol stack.

This architecture maximizes throughput by using the broadcast subnet to deliver incoming network traffic to all
cluster hosts and by eliminating the need to route incoming packets to individual cluster hosts. Since filtering
unwanted packets is faster than routing packets (which involves receiving, examining, rewriting, and resending),
Network Load Balancing delivers higher network throughput than dispatcher-based solutions. As network and
server speeds grow, its throughput also grows proportionally, thus eliminating any dependency on a particular
hardware routing implementation. For example, Network Load Balancing has demonstrated 250 megabits per
second (Mbps) throughput on Gigabit networks.
Another key advantage to Network Load Balancing's fully distributed architecture is the enhanced availability
resulting from (N-1)-way failover in a cluster with N hosts. In contrast, dispatcher-based solutions create an
inherent single point of failure that must be eliminated using a redundant dispatcher that provides only 1-way
failover. This offers a less robust failover solution than does a fully distributed architecture.
Network Load Balancing's architecture takes advantage of the subnet's hub and/or switch architecture to
simultaneously deliver incoming network traffic to all cluster hosts. However, this approach increases the burden
on switches by occupying additional port bandwidth. (Please refer to the "Network Load Balancing Performance"
section of this paper for performance measurements of switch usage.) This is usually not a concern in most
intended applications, such as Web services and streaming media, since the percentage of incoming traffic is a
small fraction of total network traffic. However, if the client-side network connections to the switch are significantly
faster than the server-side connections, incoming traffic can occupy a prohibitively large portion of the server-side
port bandwidth. The same problem arises if multiple clusters are hosted on the same switch and measures are not
taken to set up virtual LANs for individual clusters.
During packet reception, Network Load Balancing's fully pipelined implementation overlaps the delivery of incoming
packets to TCP/IP and the reception of other packets by the network adapter driver. This speeds up overall
processing and reduces latency because TCP/IP can process a packet while the network driver interface
specification (NDIS) driver receives a subsequent packet. It also reduces the overhead required for TCP/IP and the
NDIS driver to coordinate their actions, and in many cases, it eliminates an extra copy of packet data in memory.
During packet sending, Network Load Balancing also enhances throughput and reduces latency and overhead by
increasing the number of packets that TCP/IP can send with one NDIS call. To achieve these performance
enhancements, Network Load Balancing allocates and manages a pool of packet buffers and descriptors that it uses
to overlap the actions of TCP/IP and the NDIS driver.
Distribution of Cluster Traffic
Network Load Balancing uses layer-two broadcast or multicast to simultaneously distribute incoming network traffic
to all cluster hosts. In its default unicast mode of operation, Network Load Balancing reassigns the station address
("MAC" address) of the network adapter for which it is enabled (called the cluster adapter), and all cluster hosts
are assigned the same MAC address. Incoming packets are thereby received by all cluster hosts and passed up to
the Network Load Balancing driver for filtering. To ensure uniqueness, the MAC address is derived from the cluster's
primary IP address entered in the Network Load Balancing Properties dialog box. For a primary IP address of
1.2.3.4, the unicast MAC address is set to 02-BF-1-2-3-4. Network Load Balancing automatically modifies the
cluster adapter's MAC address by setting a registry entry and then reloading the adapter's driver; the operating
system does not have to be restarted.
If the cluster hosts are attached to a switch instead of a hub, the use of a common MAC address would create a
conflict since layer-two switches expect to see unique source MAC addresses on all switch ports. To avoid this
problem, Network Load Balancing uniquely modifies the source MAC address for outgoing packets; a cluster MAC
address of 02-BF-1-2-3-4 is set to 02-h-1-2-3-4, where h is the host's priority within the cluster (set in the
Network Load Balancing Properties dialog box). This technique prevents the switch from learning the cluster's
actual MAC address, and as a result, incoming packets for the cluster are delivered to all switch ports. If the cluster
hosts are connected directly to a hub instead of to a switch, Network Load Balancing's masking of the source MAC
address in unicast mode can be disabled to avoid flooding upstream switches. This is accomplished by setting the
Network Load Balancing registry parameter MaskSourceMAC to 0. The use of an upstream level three switch will
also limit switch flooding.
Network Load Balancing's unicast mode has the side effect of disabling communication between cluster hosts using
the cluster adapters. Since outgoing packets for another cluster host are sent to the same MAC address as the
sender, these packets are looped back within the sender by the network stack and never reach the wire. This
limitation can be avoided by adding a second network adapter card to each cluster host. In this configuration,
Network Load Balancing is bound to the network adapter on the subnet that receives incoming client requests, and
the other adapter is typically placed on a separate, local subnet for communication between cluster hosts and with
back-end file and database servers. Network Load Balancing only uses the cluster adapter for its heartbeat and
remote control traffic.
Note that communication between cluster hosts and hosts outside the cluster is never affected by Network Load
Balancing's unicast mode. Network traffic for a host's dedicated IP address (on the cluster adapter) is received by
all cluster hosts since they all use the same MAC address. Since Network Load Balancing never load balances traffic
for the dedicated IP address, Network Load Balancing immediately delivers this traffic to TCP/IP on the intended
host. On other cluster hosts, Network Load Balancing treats this traffic as load balanced traffic (since the target IP
address does not match the host's own dedicated IP address), and it may deliver it to TCP/IP, which will discard it.
Note that excessive incoming network traffic for dedicated IP addresses can impose a performance penalty when
Network Load Balancing operates in unicast mode due to the need for TCP/IP to discard unwanted packets.
Network Load Balancing provides a second mode for distributing incoming network traffic to all cluster hosts. Called
multicast mode, this mode assigns a layer two multicast address to the cluster adapter instead of changing the
adapter's station address. The multicast MAC address is set to 03-BF-1-2-3-4 for a cluster's primary IP address of
1.2.3.4. Since each cluster host retains a unique station address, this mode alleviates the need for a second
network adapter for communication between cluster hosts, and it also removes any performance penalty from the
use of dedicated IP addresses.
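The address derivations above are easy to reproduce. The following Python fragment is an illustrative sketch only, not Network Load Balancing code; the function name is ours, and the octets are written in this paper's own notation rather than as zero-padded hexadecimal bytes:

    # Sketch: derive the cluster MAC addresses described above from the
    # cluster's primary IP address and a host's cluster priority.
    def nlb_mac_addresses(primary_ip, host_priority):
        tail = "-".join(primary_ip.split("."))           # "1.2.3.4" -> "1-2-3-4"
        unicast = "02-BF-" + tail                        # shared station address (unicast mode)
        masked = "02-%d-%s" % (host_priority, tail)      # per-host source MAC used behind a switch
        multicast = "03-BF-" + tail                      # cluster MAC address in multicast mode
        return unicast, masked, multicast

    print(nlb_mac_addresses("1.2.3.4", 1))
    # ('02-BF-1-2-3-4', '02-1-1-2-3-4', '03-BF-1-2-3-4')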
Network Load Balancing's unicast mode induces switch flooding in order to simultaneously deliver incoming
network traffic to all cluster hosts. Also, when Network Load Balancing uses multicast mode, switches often flood
all ports by default to deliver multicast traffic. However, Network Load Balancing's multicast mode gives the system
administrator the opportunity to limit switch flooding by configuring a virtual LAN within the switch for the ports
corresponding to the cluster hosts. This can be accomplished by manually programming the switch or by using the
Internet Group Management Protocol (IGMP) or the GARP (Generic Attribute Registration Protocol) Multicast
Registration Protocol (GMRP). The current version of Network Load Balancing does not provide automatic support
for IGMP or GMRP.
Network Load Balancing implements the Address Resolution Protocol (ARP) functionality needed to ensure that the
cluster's primary IP address and other virtual IP addresses resolve to the cluster's multicast MAC address. (The
dedicated IP address continues to resolve to the cluster adapter's station address.) Experience has shown that
Cisco routers currently do not accept an ARP response from the cluster that resolves unicast IP addresses to
multicast MAC addresses. This problem can be overcome by adding a static ARP entry to the router for each virtual
IP address, and the cluster's multicast MAC address can be obtained from the Network Load Balancing Properties
dialog box or from the Wlbs.exe remote control program. The default unicast mode avoids this problem because
the cluster's MAC address is a unicast MAC address.
Network Load Balancing does not manage any incoming IP traffic other than TCP traffic, User Datagram Protocol
(UDP) traffic, and Generic Routing Encapsulation (GRE) traffic (as part of PPTP traffic) for specified ports. It does
not filter IGMP, ARP (except as described above), the Internet Control Message Protocol (ICMP), or other IP
protocols. All such traffic is passed unchanged to the TCP/IP protocol software on all of the hosts within the cluster.
As a result, the cluster can generate duplicate responses from certain point-to-point TCP/IP programs (such as
ping) when the cluster IP address is used. Because of the robustness of TCP/IP and its ability to deal with
replicated datagrams, other protocols behave correctly in the clustered environment. These programs can use the
dedicated IP address for each host to avoid this behavior.
Load Balancing Algorithm
Network Load Balancing employs a fully distributed filtering algorithm to map incoming clients to the cluster hosts.
This algorithm was chosen to enable cluster hosts to make a load-balancing decision independently and quickly for
each incoming packet. It was optimized to deliver statistically even load balance for a large client population
making numerous, relatively small requests, such as those typically made to Web servers. When the client
population is small and/or the client connections produce widely varying loads on the server, Network Load
Balancing's load balancing algorithm is less effective. However, the simplicity and speed of its algorithm allows it to
deliver very high performance, including both high throughput and low response time, in a wide range of useful
client/server applications.
Network Load Balancing load-balances incoming client requests by directing a selected percentage of new requests
to each cluster host; the load percentage is set in the Network Load Balancing Properties dialog box for each port
range to be load-balanced. The algorithm does not respond to changes in the load on each cluster host (such as
the CPU load or memory usage). However, the mapping is modified when the cluster membership changes, and
load percentages are renormalized accordingly.
When inspecting an arriving packet, all hosts simultaneously perform a statistical mapping to quickly determine
which host should handle the packet. The mapping uses a randomization function that calculates a host priority
based on the client's IP address, port, and other state information maintained to optimize load balance. The
corresponding host forwards the packet up the network stack to TCP/IP, and the other cluster hosts discard it. The
mapping does not vary unless the membership of cluster hosts changes, ensuring that a given client's IP address
and port will always map to the same cluster host. However, the particular cluster host to which the client's IP
address and port map cannot be predetermined since the randomization function takes into account the current
and past cluster membership to minimize remappings.
The load-balancing algorithm assumes that client IP addresses and port numbers (when client affinity is not
enabled) are statistically independent. This assumption can break down if a server-side firewall is used that proxies
client addresses with one IP address and, at the same time, client affinity is enabled. In this case, all client
requests will be handled by one cluster host and load balancing is defeated. However, if client affinity is not
enabled, the distribution of client ports within the firewall usually provides good load balance.
In general, the quality of load balance is statistically determined by the number of clients making requests. This
behavior is analogous to coin tosses where the two sides of the coin correspond to the number of cluster hosts
(thus, in this analogy, two), and the number of tosses corresponds to the number of client requests. The load
distribution improves as the number of client requests increases just as the fraction of coin tosses resulting in
"heads" approaches 1/2 with an increasing number of tosses. As a rule of thumb, with client affinity set, there
must be many more clients than cluster hosts to begin to observe even load balance.
As the statistical nature of the client population fluctuates, the evenness of load balance can be observed to vary
slightly over time. It is important to note that achieving precisely identical load balance on each cluster host
imposes a performance penalty (throughput and response time) due to the overhead required to measure and
react to load changes. This performance penalty must be weighed against the benefit of maximizing the use of
cluster resources (principally CPU and memory). In any case, excess cluster resources must be maintained to
absorb the client load in case of failover. Network Load Balancing takes the approach of using a very simple but
powerful load-balancing algorithm that delivers the highest possible performance and availability.
Network Load Balancing's client affinity settings are implemented by modifying the statistical mapping algorithm's
input data. When client affinity is selected in the Network Load Balancing Properties dialog box, the client's port
information is not used as part of the mapping. Hence, all requests from the same client always map to the same
host within the cluster. Note that this constraint has no timeout value (as is often the case in dispatcher-based
implementations) and persists until there is a change in cluster membership. When single affinity is selected, the
mapping algorithm uses the client's full IP address. However, when class C affinity is selected, the algorithm uses
only the class C portion (the upper 24 bits) of the client's IP address. This ensures that all clients within the same
class C address space map to the same cluster host.
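To make the role of these settings concrete, the sketch below shows the general shape of such a stateless mapping. The actual randomization function is internal to Network Load Balancing and is not published, so CRC-32 stands in for it here; a deterministic hash is essential because every host must reach the same answer independently:

    import zlib

    def map_to_host(client_ip, client_port, num_hosts, affinity="none"):
        # Select the mapping key according to the affinity setting.
        if affinity == "none":
            key = "%s:%d" % (client_ip, client_port)   # IP address and port
        elif affinity == "single":
            key = client_ip                            # full client IP address
        else:                                          # class C affinity
            key = ".".join(client_ip.split(".")[:3])   # upper 24 bits only
        # Deterministic hash: all hosts agree without exchanging messages.
        return zlib.crc32(key.encode()) % num_hosts

Each host runs the same computation on every incoming packet and keeps the packet only when the result selects its own index; the other hosts silently discard it.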
In mapping clients to hosts, Network Load Balancing cannot directly track the boundaries of sessions (such as SSL
sessions) since it makes its load balancing decisions when TCP connections are established and prior to the arrival
of application data within the packets. Also, it cannot track the boundaries of UDP streams, since the logical session
boundaries are defined by particular applications. Instead, Network Load Balancing's affinity settings are used to
assist in preserving client sessions. When a cluster host fails or leaves the cluster, its client connections are always
dropped. After a new cluster membership is determined by convergence (described below), clients that previously
mapped to the failed host are remapped among the surviving hosts. All other client sessions are unaffected by the
failure and continue to receive uninterrupted service from the cluster. In this manner, Network Load Balancing's
load-balancing algorithm minimizes disruption to clients when a failure occurs.
When a new host joins the cluster, it induces convergence, and a new cluster membership is computed. When
convergence completes, a minimal portion of the clients will be remapped to the new host. Network Load Balancing
tracks TCP connections on each host, and, after their current TCP connection completes, the next connection from
the affected clients will be handled by the new cluster host; UDP streams are immediately handled by the new
cluster host. This can potentially break some client sessions that span multiple connections or comprise UDP
streams. Hence, hosts should be added to the cluster at times that minimize disruption of sessions. To completely
avoid this problem, session state must be managed by the server application so that it can be reconstructed or
retrieved from any cluster host. For example, session state can be pushed to a back-end database server or kept in
client cookies. SSL session state is automatically recreated by re-authenticating the client.
The GRE stream within the PPTP protocol is a special case of a session that is unaffected by adding a cluster host.
Since the GRE stream is temporally contained within the duration of its TCP control connection, Network Load
Balancing tracks this GRE stream along with its corresponding control connection. This prevents the addition of a
cluster host from disrupting the PPTP tunnel.
Convergence
Network Load Balancing hosts periodically exchange multicast or broadcast heartbeat messages within the cluster.
This allows them to monitor the status of the cluster. When the state of the cluster changes (such as when hosts
fail, leave, or join the cluster), Network Load Balancing invokes a process known as convergence, in which the
hosts exchange heartbeat messages to determine a new, consistent state of the cluster and to elect the host with
the highest host priority as the new default host. When all cluster hosts have reached consensus on the correct
new state of the cluster, they record the change in cluster membership upon completion of convergence in the
Windows 2000 event log.
During convergence, the hosts continue to handle incoming network traffic as usual, except that traffic for a failed
host does not receive service. Client requests to surviving hosts are unaffected. Convergence terminates when all
cluster hosts report a consistent view of the cluster membership for several heartbeat periods. If a host attempts
to join the cluster with inconsistent port rules or an overlapping host priority, completion of convergence is
inhibited. This prevents an improperly configured host from handling cluster traffic.
At the completion of convergence, client traffic for a failed host is redistributed to the remaining hosts. If a host is
added to the cluster, convergence allows this host to receive its share of load-balanced traffic. Expansion of the
cluster does not affect ongoing cluster operations and is achieved in a manner transparent to both Internet clients
and to server programs. However, it may affect client sessions because clients may be remapped to different
cluster hosts between connections, as described in the previous section.
In unicast mode, each cluster host periodically broadcasts heartbeat messages, and in multicast mode, it
multicasts these messages. Each heartbeat message occupies one Ethernet frame and is tagged with the cluster's
primary IP address so that multiple clusters can reside on the same subnet. Network Load Balancing's heartbeat
messages are assigned an ether type-value of hexadecimal 886F. The default period between sending heartbeats is
one second, and this value can be adjusted with the AliveMsgPeriod registry parameter. During convergence, the
exchange period is reduced by half in order to expedite completion. Even for large clusters, the bandwidth required
for heartbeat messages is very low (for example, 24 Kbytes/second for a 16-way cluster).
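That figure can be checked with rough arithmetic; the maximum Ethernet frame size of 1,518 bytes is an assumption here:

    # 16 hosts, one heartbeat frame per host per second:
    hosts, frame_bytes = 16, 1518
    print(hosts * frame_bytes / 1024.0)   # ~23.7 Kbytes/second, matching the ~24 quoted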
Network Load Balancing assumes that a host is functioning properly within the cluster as long as it participates in
the normal heartbeat exchange among the cluster hosts. If other hosts do not receive a heartbeat message from
any member for several periods of message exchange, they initiate convergence. The number of missed heartbeat
messages required to initiate convergence is set to five by default and can be adjusted using the AliveMsgTolerance
registry parameter.
A cluster host will immediately initiate convergence if it receives a heartbeat message from a new host or if it
receives an inconsistent heartbeat message that indicates a problem in the load distribution. When receiving a
heartbeat message from a new host, the host checks to see if the other host has been handling traffic from the
same clients. This problem could arise if the cluster subnet was rejoined after having been partitioned. More likely,
the new host was already converged alone in a disjoint subnet and has received no client traffic. This can occur if
the switch introduces a lengthy delay in connecting the host to the subnet. If a cluster host detects this problem
and the other host has received more client connections since the last convergence, it immediately stops handling
client traffic in the affected port range. Since both hosts are exchanging heartbeats, the host that has received
more connections continues to handle traffic, while the other host waits for the end of convergence to begin
handling its portion of the load. This heuristic algorithm eliminates potential conflicts in load handling when a
previously partitioned cluster subnet is rejoined; this event is recorded in the event log.
Remote Control
Network Load Balancing's remote control mechanism uses the UDP protocol and is assigned port 2504. Remote
control datagrams are sent to the cluster's primary IP address. Since the Network Load Balancing driver on each
cluster host handles them, these datagrams must be routed to the cluster subnet (instead of to a back-end subnet
to which the cluster is attached). When remote control commands are issued from within the cluster, they are
broadcast on the local subnet. This ensures that all cluster hosts receive them even if the cluster runs in unicast
mode.
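As an illustration, typical operations with the Wlbs.exe control program look like the following (these command names are as documented for the tool; remote-control arguments vary by configuration):

    wlbs query       (report the state of the cluster and its current members)
    wlbs drainstop   (stop handling new connections, then leave the cluster once existing ones finish)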
Network Load Balancing Performance
The performance impact of Network Load Balancing can be measured in four key areas:
• CPU overhead on the cluster hosts, which is the CPU percentage required to analyze and filter network
packets (lower is better).

• Response time to clients, which increases with the non-overlapped portion of CPU overhead, called
latency (lower is better).

• Throughput to clients, which increases with additional client traffic that the cluster can handle prior to
saturating the cluster hosts (higher is better).

• Switch occupancy, which increases with additional client traffic (lower is better) and must not adversely
affect port bandwidth.
In addition, Network Load Balancing's scalability determines how its performance improves as hosts are added to
the cluster. Scalable performance requires that CPU overhead and latency not grow faster than the number of
hosts.
CPU Overhead
All load-balancing solutions require system resources to examine incoming packets and make load-balancing
decisions, and thus impose an overhead on network performance. As previously noted, dispatcher-based solutions
examine, modify, and retransmit packets to particular cluster hosts. (They usually modify IP addresses to retarget
packets from a virtual IP address to a particular host's IP address.) In contrast, Network Load Balancing
simultaneously delivers incoming packets to all cluster hosts and applies a filtering algorithm that discards packets
on all but the desired host. Filtering imposes less overhead on packet delivery than re-routing, which results in
lower response time and higher overall throughput.
Network Load Balancing's filtering algorithm runs in parallel on all cluster hosts. Its filtering overhead can be
measured as a percentage of the CPU used on all hosts. This overhead grows in proportion to the incoming packet
measured as a percentage of the CPU used on all hosts. This overhead grows in proportion to the incoming packet
rate (as does the routing overhead for dispatcher-based solutions), independent of the number of cluster hosts. For
example, if a two-host cluster experiences P percent CPU overhead per host to sustain a given incoming packet
rate, this percentage will remain the same for a 32-host cluster. If the packet rate doubles, the CPU overhead
grows to 2P percent.
In practice, hosts are added to a Network Load Balancing cluster in proportion to the request rate as the client load
increases. When this is the case, the CPU overhead grows linearly with both the packet rate and number of hosts.
In the example used above, a 32-host cluster would experience a per-host CPU overhead of 16P to sustain a 16-
fold increase in request rate and throughput over a two-way cluster with CPU overhead P.
Network Load Balancing's CPU overhead has been measured for Web loads. The chart in Figure 3 below shows CPU
overhead versus total throughput on a two-host cluster with 450-megahertz (MHz) CPUs. The Web load consisted
of HTTP GET requests, with each request pulling a 10-kilobyte (KB) static Web page. As an example, approximately
4.1 percent of each host's CPU (out of 45.7 percent total CPU per host) is required to filter 773 GET requests per
second with 64.6 megabits per second (Mbps) throughput. This overhead, which is used to analyze and filter
packets, was measured by setting the load percentage to zero on one of the two hosts and measuring the residual
CPU percentage while the other host handled the entire load.
Figure 3. CPU filtering overhead of Network Load Balancing versus total throughput, for a Web load of GET
requests pulling 10-KB static Web pages on a two-host cluster with 450-MHz CPUs.

The chart above shows that Network Load Balancing's filtering overhead grows linearly with packet rate as
expected. At peak Fast Ethernet capacity of 100 Mbps, Network Load Balancing would require about 5.8 percent of
a 450-MHz CPU to perform its packet filtering. Note that for multiprocessor hosts, this CPU percentage represents
only a portion of the total CPU available on the host. Extrapolating from this chart, a cluster handling 250 Mbps
total throughput would use 12.9 percent of a CPU on each host for packet filtering to serve 2,993 10-KB Web-page
requests per second (or more than 258 million Web hits per day).
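These figures can be cross-checked with simple arithmetic; the byte counts below ignore HTTP and TCP header overhead, which accounts for the small differences:

    # 773 requests/second, each pulling a 10-KB page:
    print(773 * 10 * 1024 * 8 / 1e6)   # ~63.3 Mbps of page data (64.6 Mbps measured)
    # 2,993 requests/second sustained for a full day:
    print(2993 * 86400)                # ~258.6 million requests per day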
The ratio of CPU filtering overhead to throughput varies with the size and type of client requests. For example, at a
given throughput level, GET requests for 1-KB Web pages will require more filtering overhead than do GET
requests for 10-KB Web pages. Since Network Load Balancing's filtering overhead for tracking client connections is
higher than for tracking packets within a connection, client requests that generate larger replies experience lower
filtering overhead.
A second type of CPU overhead due to Network Load Balancing is its packet-handling overhead during data
transfers, called transfer overhead. Because Network Load Balancing is implemented as an intermediate driver, it
forwards all network packets flowing through the cluster adapter. This packet handling is heavily pipelined and
adds very little latency to data transfers. Its CPU overhead is proportional to the packet rate for an individual host,
not the total packet rate for the cluster. Hence, it remains fixed as cluster hosts are added in proportion to
increases in client load.
Measurements of transfer overhead are shown in Figure 4 below. Transfer overhead was measured by comparing
the total CPU percentage for a system running Network Load Balancing to the total CPU percentage for a system
with Network Load Balancing disabled and subtracting out the filtering overhead. As an example, in a four-host
cluster with 80 Mbps total throughput, each host would handle 20 Mbps throughput, which would require about 2.6
percent of the CPU.
Figure 4. CPU transfer overhead of Network Load Balancing versus per-host throughput, for a Web load of GET
requests pulling 10-KB static Web pages on 450-MHz CPUs.

It is useful to regard a cluster as a set of CPU "containers" that are available to handle client load. Network Load
Balancing distributes incoming client load among these containers in order to distribute the CPU load as evenly as
possible. It also requires a small amount of CPU in each container to accomplish the load balancing, and this
amount increases in proportion to the packet rate (with filtering overhead increasing in proportion to the total
packet rate and transfer overhead increasing in proportion to the per-host packet rate). The total number of hosts
needed in the cluster depends on the speed of the hosts and the characteristics of the server application. CPU-
intensive applications, such as Web servers with Active Server Pages, may require relatively large amounts of CPU
per client request compared to Web servers pulling static Web pages. To meet CPU demand, they would handle
less network traffic per host and require more hosts than would applications with lower CPU demand per client
request. Figure 5 below shows a cluster of "CPU containers," each occupied in part by a server application load as
well as the filtering and transfer overhead loads imposed by Network Load Balancing.

Figure 5. A cluster regarded as a set of CPU "containers," each occupied in part by a server application load as
well as by the filtering and transfer overhead loads imposed by Network Load Balancing.

Throughput and Response Time
Network Load Balancing scales performance by increasing throughput and minimizing response time to clients.
When the capacity of a cluster host is reached, it cannot deliver additional throughput, and response time grows
non-linearly as clients awaiting service encounter queuing delays. Adding another cluster host enables throughput
to continue to climb and reduces queuing delays, which minimizes response time. As customer demand for
throughput continues to increase, more hosts are added until the network's subnet becomes saturated. At that
point, throughput can be further scaled by using multiple Network Load Balancing clusters and distributing traffic to
them using Round Robin DNS. For example, this approach is used by Microsoft's Web site, Microsoft.com, which
currently has approximately five six-host Network Load Balancing clusters.
In practice, cluster hosts are added until all hosts are using a moderate amount of CPU at the anticipated request
rate. Excess CPU capacity is maintained in each host to handle excess load after a failover. For example, the hosts
for Microsoft.com typically run at about 60 percent of capacity so that they can handle host outages without
difficulty.
All load balancers introduce overheads that impact throughput scaling and response time. For Network Load
Balancing to scale application performance, it must not introduce bottlenecks that would constrain throughput as
hosts are added. The overall throughput should increase linearly as hosts are added to the cluster and join in
serving the client load. Network Load Balancing uses a highly pipelined implementation that minimizes the increase
in response time, called   , due to filtering and transfer overheads. As a result, only a small portion of
Network Load Balancing's CPU overhead contributes to latency. The complexity of operating system interactions
makes latency difficult to calculate directly. Preliminary measurements made during the previously described tests
for accessing static, 10-KB Web pages indicated that the latency increase was approximately one percent of total
response time (to receive the first byte of a GET request).
Network Load Balancing's latency and related CPU filtering overhead affect performance by eventually limiting the
maximum possible throughput that can be obtained as packet rate increases. (CPU transfer overhead limits the
peak throughput of a single host in comparison to a host with Network Load Balancing disabled but does not affect
scalability.) Since a host's application service rate usually grows in proportion to its CPU usage, Network Load
Balancing's CPU overhead has the effect of constraining the host's total throughput. Assuming that the client
request rate grows with the number of hosts, throughput scaling for maximally loaded cluster hosts is limited by
growth in CPU filtering overhead. If L1 represents the maximum throughput on a single host, the maximum
throughput per host in an N-host cluster can be calculated as:

LN = K * (1 - OF * (LN / L1) * N), where L1 = K * (1 - OF)

where OF is the CPU percentage required for Network Load Balancing to filter client requests at rate L1, and K is a
constant relating service rate to its associated CPU usage. This filtering overhead grows with the increased request
rate in proportion to the number of hosts (but limited by the feedback from LN / L1) and reduces each host's
maximum service rate as hosts are added. Note that if OF = 0, the cluster produces a linear N-fold increase in
throughput, as expected. For applications with increasingly large CPU requirements per client request, Network
Load Balancing will have lower CPU percentage overhead to sustain a given request rate, thus improving scalability
as cluster hosts are added.
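Because the formula is linear in LN, it can be solved in closed form and tabulated. The sketch below normalizes K to 1 so throughput is expressed relative to a single overhead-free host; the 4.4 percent filtering overhead is the measured value discussed with Figure 6 below:

    # Closed-form solution of LN = K * (1 - OF * (LN / L1) * N), L1 = K * (1 - OF).
    def per_host_max_throughput(n, of, k=1.0):
        l1 = k * (1.0 - of)
        return k / (1.0 + k * of * n / l1)

    for n in (1, 2, 4, 8, 12, 16, 32):
        ln = per_host_max_throughput(n, of=0.044)   # 4.4% single-host filtering overhead
        print(n, round(n * ln, 2))                  # cluster-wide maximum, in units of K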
Figure 6 below shows throughput scaling by plotting the formula above for maximum throughput versus the
number of hosts for various filter overheads, including 0 percent overhead (which yields the ideal N-fold increase in
throughput). Measurements of GET requests for 10-KB static Web pages indicate that a 450-MHz CPU saturates at
an approximate request rate of 70.1 Mbps (836 requests/second) with about 4.4 percent CPU filtering overhead. As
shown in the graph, throughput rolls off rapidly as hosts running at this request rate are added to the cluster.
However, if the application consumes four times the CPU load per client request, Network Load Balancing's filtering
overhead drops to 1.1 percent, and throughput remains within 90 percent of the ideal N-fold throughput increase
for 12 cluster hosts. With the same hosts, the request rate would be lower by four-fold. To maintain the request
rate shown in the graph, four-way multiprocessor hosts could be used.
Figure 6. Network Load Balancing's calculated maximum total throughput versus the number of cluster hosts for
various CPU filtering overheads, for a Web load of GET requests pulling 10-KB static Web pages on 450-MHz
CPUs; 0 percent overhead yields the ideal N-fold increase in throughput.

This throughput model closely matches measurements recently made on a 30-host cluster used as a demonstration
during the launch of Windows 2000. This cluster ran Internet Information Server serving the home page for an
actual stock quotation Web site. The home page consisted of a mix of text, graphics, and calculations using Active
Server Pages. The cluster hosts consisted of dual processor Dell Power Edge 4350 systems with 512 MB of
memory; 500 Dell client systems were used to generate load. Throughput measurements (GET requests per
second) were taken for various cluster sizes. They are shown in Figure 7 below along with the calculated maximum
throughput curve using 1 percent CPU filtering overhead for one host. Network Load Balancing throughput remains
close to the ideal N-fold increase for 10 hosts. With more than 10 hosts, throughput stays within 85 percent of the
ideal for 20 hosts and 77 percent for 30 hosts. The maximum aggregate throughput for a 30-host cluster was
approximately 18,000 GET requests per second (400 Mbps), or 1.55 billion hits per day, which corresponds to
about ten times the client traffic rate that the actual Web site currently experiences. This demonstration showed
that Network Load Balancing provides the scaled performance needed to handle very large Web sites.

Figure 7. Network Load Balancing's measured maximum throughput versus the number of cluster hosts for a
stock quotation Web site's home page, shown with the calculated maximum throughput curve.

It is important to observe that the throughput curves above show the maximum throughput that can be achieved
when all cluster hosts are saturated. This situation rarely occurs in practice because excess CPU capacity is always
needed to handle extra client load after the loss of a cluster host. When the cluster runs at less than maximum
throughput, Network Load Balancing's latency and CPU overhead do not reduce delivered throughput, and the
cluster exhibits linear throughput growth. For example, Figure 8 below shows throughput measurements for a Web
load of GET requests for 1-KB static Web pages. The graph follows the ideal N-fold growth in throughput as both
packet rate and the number of hosts increases. The total CPU load for each cluster host was under 20 percent in
these measurements.

Figure 8. Measurements of Network Load Balancing's throughput scaling for a Web load of GET requests pulling
1-KB static Web pages on 450-MHz CPUs.

Switch Occupancy
Network Load Balancing's filtering architecture relies on the broadcast subnet of the LAN to deliver client requests
to all cluster hosts simultaneously. In small clusters, this can be achieved using a hub to interconnect cluster hosts.
Each incoming client packet is automatically presented to all cluster hosts. Larger clusters use a switch to
interconnect cluster hosts, and, by default, Network Load Balancing induces switch-flooding to deliver client
requests to all hosts simultaneously. It is important to ensure that switch-flooding does not use an excessive
amount of switch capacity, especially when the switch is shared with computers outside the cluster. (Note that
computers outside the cluster do not receive packets as a result of switch-flooding.) The percentage of switch
bandwidth consumed by Network Load Balancing's flooding of client requests is called its switch occupancy.
For most Network Load Balancing applications, the bandwidth used for request traffic is a small fraction of the total
bandwidth needed for client/server communication. Figure 9 below shows the percentage of each 100 Mbps switch
port consumed by client GET requests for 10-KB Web pages at various throughput rates. The chart confirms that
client requests use less than 2 percent of switch bandwidth for this Web load.

Figure 9. Percentage of switch port bandwidth consumed by client GET requests pulling 10-KB Web pages.
Switch-flooding can become a problem in applications with a significant percentage of network traffic directed to
the cluster (such as file uploads in FTP applications). It can also become a concern when multiple clusters share a
switch and their combined flooding becomes significant. Finally, if a switch is configured to use ports to connect to
the backbone network that have a higher speed than those used to connect to cluster hosts, switch occupancy can
become prohibitively high. For example, if the switch uses gigabit ports to connect to the backbone and 100 Mbps
ports for the cluster, switch occupancy can increase by a factor of ten. In the example above, at 500 Mbps total
bandwidth, switch occupancy would increase to 10 percent. In these cases, running Network Load Balancing in
multicast mode and setting up a virtual LAN in the switch can limit switch-flooding.
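The factor-of-ten arithmetic is straightforward; the 2 percent request fraction comes from the measurement above:

    # ~2% of 500 Mbps of backbone traffic is incoming request traffic...
    request_mbps = 500 * 0.02          # = 10 Mbps, flooded to every cluster port
    print(request_mbps / 100.0)        # = 0.10, i.e. 10% of each 100-Mbps port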
Summary
Internet technology has been widely embraced, serving as the foundation for delivering enterprise-wide and
frequently mission-critical applications such as Web, streaming media, and VPN servers. As an integral part of
Windows 2000 Advanced Server and Datacenter Server, Network Load Balancing provides an ideal, cost-effective
solution for enhancing the scalability and high availability of these applications in both Internet and intranet
contexts.
Network Load Balancing lets system administrators build clusters with up to 32 hosts among which it load-balances
incoming client requests. Clients are unable to distinguish the cluster from a single server, and server programs are
not aware that they are running in a cluster.
Network Load Balancing gives network administrators excellent control, including the ability to remotely manage
(with password protection) the cluster from any point on the network. Network administrators also have the ability
to tailor clusters to specific services, with control defined on a port-by-port level. Hosts can be added to or
removed from a cluster without interrupting services. In addition, they can upgrade software on cluster hosts
without interrupting services to clients.
Network Load Balancing uses a fully distributed algorithm to partition client workload among the hosts. Unlike
dispatcher-based solutions, this architecture delivers very high performance by reducing the overhead required to
distribute client traffic. It also offers inherently high availability with (N-1)-way failover in an N-host cluster. All of
this is accomplished without the expense and support burden of using special purpose hardware or software.
Network Load Balancing emits periodic heartbeat messages so that all members of a cluster can monitor the
presence of other hosts. Host failures are detected within five seconds, and recovery is accomplished within ten
seconds. Should a host be lost or a new one brought online, the workload is automatically and transparently
redistributed among the cluster hosts.
Performance measurements have shown that Network Load Balancing's efficient software implementation imposes
very low overhead on network traffic-handling and delivers excellent performance-scaling limited only by subnet
bandwidth. Network Load Balancing has demonstrated more than 200 Mbps throughput in realistic customer
scenarios handling e-commerce loads of more than 800 million requests per day.
For More Information
For the latest information on Windows 2000 Server, please visit our Web site at
http://www.microsoft.com/windows2000 and the Windows NT Server Forum in the MSN™ network of Internet
services at http://computingcentral.msn.com/topics/windowsnt.
Glossary of Key Terms
ARP: Address Resolution Protocol. This is a TCP/IP protocol that resolves IP addresses used by TCP/IP-based
software to media access control addresses used by LAN hardware.
availability: A measure of the fault tolerance of a computer and its programs. A highly available computer system
provides continuous service without interruptions due to software or hardware failures.
client affinity: A configuration option for the multiple-host filtering mode within a port rule that specifies whether
Network Load Balancing should direct multiple requests from the same client IP address or class C address space to
the same cluster host. Three affinity settings are possible: none, single client, and class C.
client request: A service request from a client computer to a server computer or a cluster of computers. Network
Load Balancing forwards each client request to a specific host within a cluster according to the system
administrator's load-balancing policy.
cluster: A set of computers that work together to provide a service. The use of a cluster enhances both the
availability and scalability of the service. Network Load Balancing provides a software solution for clustering
multiple computers running networked client/server applications.
cluster adapter: The network adapter in a Network Load Balancing cluster that handles the network traffic for
cluster operations (that is, the traffic for all hosts in the cluster). This adapter is assigned one or more virtual IP
addresses and, optionally, a dedicated IP address.
convergence: A process by which Network Load Balancing hosts exchange messages to determine a new,
consistent state of the cluster and to elect the host with the highest host priority, known as the default host.
During convergence, a new load distribution is determined for hosts that share the handling of network traffic for
specific TCP or UDP ports.
CPU filtering overhead: A performance measure defined here as Network Load Balancing's CPU percentage on a
cluster host required to analyze and filter network packets (lower is better).
CPU transfer overhead: A performance measure defined here as Network Load Balancing's CPU percentage on a
cluster host required to transfer network packets through the host.
dedicated IP address: A cluster host's unique IP address used for network traffic not associated with the cluster
(for example, Telnet access to a specific host within the cluster).
default host: The host with the highest host priority, which handles all of the network traffic for TCP and UDP
ports that are not otherwise covered by port rules.
filtering mode: A configuration setting for a port rule that determines load-balancing policy for the range of ports
covered by the port rule. There are three possible filtering modes: multiple host, single host, and disabled.
handling priority: A configuration setting in single-host filtering mode that specifies a host's priority for handling
all of the cluster's network traffic for that port rule. Handling priority overrides host priority for the range of ports
covered by the port rule.
heartbeat message: A network packet periodically broadcast by each cluster host to inform other hosts of its
health and configuration. Network Load Balancing initiates convergence when it fails to receive heartbeat messages
from another host or when it receives a heartbeat message from a new host.
high availability: See availability.
host: As used here, a computer that participates in a Network Load Balancing cluster. A host is also called a
cluster host.
host priority: A configuration setting that specifies a cluster host's priority for handling all of the cluster's network
traffic not covered by port rules. The host with the highest priority (lowest numerical value in the range of 1 to 32)
is called the default host.
latency: A performance measure defined here as the non-overlapped portion of Network Load Balancing's CPU
overhead (lower is better). Latency adds to the client response time.
load balancing: A technique for scaling performance by distributing load among multiple servers. Network Load
Balancing distributes load for networked client/server applications in the form of client requests that it partitions
across multiple cluster hosts.
load weight: A configuration setting for the multiple-host filtering mode within a port rule that specifies the
percentage of load-balanced network traffic that this host should handle; allowed values range from 0 (zero) to
100. The actual fraction of traffic handled by each host is computed as the local load weight divided by the sum of
all load weights across the cluster.
MAC address: A link-layer network address, called a media access control address, that is used to communicate
with other network adapters on the same subnet. Each network adapter has an associated MAC address.
multicast MAC address: A type of media access control address used by multiple, networked computers to
concurrently receive the same incoming network packets. In multicast mode, Network Load Balancing optionally
uses multicast MAC addresses to efficiently distribute incoming network traffic to cluster hosts.
multicast mode: A configuration setting which instructs Network Load Balancing to add a multicast MAC address
to the cluster adapters on all hosts in a cluster. The adapters' existing MAC addresses are not changed. See also
unicast mode.
multihomed: A computer that has multiple network cards or that has been configured with multiple IP addresses
for a single network card. Network Load Balancing supports multihomed servers by allowing multiple virtual IP
addresses to be assigned to the cluster adapter.
network adapter: A plug-in board that connects a computer to a local area network (LAN). A Network Load
Balancing cluster communicates using a cluster adapter over the LAN connecting the cluster's hosts to clients.
overhead: A performance measure defined here as the CPU percentage on a cluster host used by Network Load
Balancing, consisting primarily of (1) CPU filtering overhead and (2) CPU transfer overhead. A portion of overhead
contributes to latency.
port rule: A configuration setting within the Network Load Balancing Properties dialog box that specifies how
cluster network traffic for a range of ports is to be handled. The method by which a port's network traffic is handled
is called its filtering mode.
primary IP address: A virtual IP address used by Network Load Balancing to identify the cluster for remote
control operations and within heartbeat messages. See virtual IP address.
response time: A performance measure defined as the round-trip delay to process a client request. Response
time increases with the non-overlapped portion of CPU overhead, called latency (lower is better).
RRDNS: A type of domain name service, called Round Robin DNS, which distributes clients to multiple servers in a
round-robin manner. RRDNS scales performance but does not provide high availability.
scalability: A measure of how well a computer, service, or application can grow to meet increasing performance
demands. For clusters, scalability is measured by the ability to incrementally add one or more systems to an
existing cluster when the overall load of the cluster exceeds its capabilities.
session: In the context of load balancing TCP/IP traffic, a set of client requests directed to a server. These
requests can be invoked with multiple, possibly concurrent, TCP connections. The server program sometimes
maintains state information between requests. To preserve access to the server state, system administrators can
have Network Load Balancing direct all requests within a session to the same cluster host when load balancing by
setting client affinity.
state: As used here, data maintained by a server application on behalf of its clients. To scale a server application
by running multiple instances with Network Load Balancing, client state must be accessible and properly
coordinated by all instances of the application.
switch: A central network device that forwards packets to specific ports rather than, as in conventional hubs,
broadcasting every packet to every port. Switches can deliver higher total bandwidth than can hubs by supporting
multiple, simultaneous connections.
switch-flooding: A switch behavior induced by Network Load Balancing to simultaneously deliver all incoming
client packets to all switch ports. This behavior enables Network Load Balancing to deliver very high throughput,
but it may cause high switch occupancy.
switch occupancy: A performance measure defined here as the fraction of a switch port's bandwidth occupied by
Network Load Balancing's incoming client requests (lower is better). Switch occupancy increases with additional
client traffic and must not adversely impact port bandwidth.
TCP/IP: Transmission Control Protocol/Internet Protocol. A set of networking protocols that is used on the
Internet to provide communications among interconnected networks made up of computers with diverse hardware
architectures and various operating systems. TCP/IP includes standards for how computers communicate and
conventions for connecting networks and routing traffic.
throughput: A performance measure defined here as the number of client requests processed by a Network Load
Balancing cluster per unit time (higher is better). Throughput increases with additional client traffic that the cluster
can handle prior to saturating its hosts.
unicast mode: A configuration setting which instructs Network Load Balancing to change the MAC address of the
cluster adapters to the same value for all hosts in a cluster. This is the default mode of operation. See also
multicast mode.
virtual IP address: An IP address that is shared among the hosts of a Network Load Balancing cluster and used
by clients to address the cluster as a whole. A Network Load Balancing cluster supports multiple virtual IP
addresses, such as in a cluster of multihomed Web servers. One of the virtual IP addresses is the primary IP
address, which is used to identify the cluster for remote control operations.
Firewalls
Microsoft Internet Security and Acceleration (ISA) Server can be deployed as a dedicated firewall that acts as the
secure gateway to the Internet for internal clients. ISA Server protects all communication between internal
computers and the Internet. In a simple firewall scenario, the ISA Server computer has two network interface
cards, one connected to the local network and one connected to the Internet. The figure illustrates this scenario.
You can use Microsoft Internet Security and Acceleration (ISA) Server to configure the firewall, configuring policies and creating
rules to implement your business guidelines. By setting the security access policies, you prevent unauthorized
access and malicious content from entering the network. You can also restrict what traffic is allowed for each user
and group, application, destination, content type, and schedule. For more information, see Internet security.
ISA Server includes the following firewall and security features:
• Access policy. You can use ISA Server to configure site and content rules and protocol rules
that control how your internal clients access the Internet. Site and content rules specify which sites and
content can be accessed. Protocol rules indicate whether a particular protocol is accessible for inbound and
outbound communication. For more information, see Controlling outgoing requests and Access policy
rules.

• Intrusion detection. Integrated intrusion detection mechanisms can alert you when a specific attack is
launched against your network. For example, you can configure the ISA Server to alert you if a port
scanning attempt is detected. For more information, see Intrusion detection.

• System hardening. The ISA Server Security Wizard enables you to set the appropriate level of system
security, depending on how ISA Server functions in your network. For more information, see ISA Server
system security.

• Application filters. ISA Server controls application-specific traffic with data-aware filters. ISA Server
uses the filters to determine if packets should be accepted, rejected, redirected, or modified. For more
information, see Application filters.

• Authentication. ISA Server supports the following user authentication methods: Integrated Windows
authentication, client certificates, digest, and basic. For more information, see Authentication.

Multihoming
When a computer is configured with more than one IP address, it is referred to as a multihomed system.
Multihoming is supported in two different ways:

• Multiple network adapters or media types per physical network.


The network adapters can be for similar or dissimilar networks. For example, a host with one Ethernet and
one Token Ring adapter installed, each linked to a separate network, requires IP addresses to be bound to
both adapters. There are no restrictions other than hardware.

• Multiple IP addresses per network adapter.


A computer can access multiple subnets that are logically separated, but bound to a single network
adapter. Such a configuration might be used in an environment where a host requires access to different
divisions of a corporation network that are separated by different subnets.
Windows 2000 Professional allows an unlimited number of IP address/subnet mask pairs.
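Additional addresses are entered per adapter in the Advanced TCP/IP Settings dialog box. From the command line, a netsh invocation along the following lines can be used; the interface name and addresses shown here are placeholders:

    netsh interface ip add address "Local Area Connection" 192.168.2.10 255.255.255.0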
Figure 22.15 shows an example of a Windows 2000 Professional computer using multihoming to connect
to two subnets. These subnets can be physically separated by disparate or disconnected cabling, or
logically separated through subnetting.

Figure 22.15 A Windows 2000 Professional computer using multihoming to connect to two subnets
