Volume 1
Course Introduction 1
Overview 1
Prerequisites 1
Learner Skills and Knowledge 2
Course Goal and Objectives 3
Course Flow 4
Additional References 5
Cisco Glossary of Terms 5
Cisco Wide Area Application Services 1-1
Overview 1-1
Module Objectives 1-1
Cisco WAAS Overview 1-3
Overview 1-3
Objectives 1-3
Application Delivery Initiatives 1-4
Cisco WAAS Introduction 1-10
Cisco WAAS Optimizations 1-25
Cisco WAE Platforms and Software Licensing 1-29
Summary 1-36
WAN Optimization Technical Overview 1-37
Overview 1-37
Objectives 1-37
Application Performance Barriers 1-38
Introduction to TCP 1-42
Transport Flow Optimization 1-50
Advanced Compression 1-63
Summary 1-80
Application Acceleration Technical Overview 1-81
Overview 1-81
Objectives 1-81
The Need for Application Acceleration 1-82
CIFS Acceleration 1-85
Connectivity Directive 1-87
Integrated Print Services 1-99
Summary 1-103
Module Summary 1-105
Module Self-Check 1-107
Module Self-Check Answer Key 1-111
Designing Cisco WAAS Solutions 2-1
Overview 2-1
Module Objectives 2-1
Network Design, Interception, and Interoperability 2-3
Overview 2-3
Objectives 2-3
Physical Inline Deployment 2-4
Off-Path Network Deployment 2-16
Interception Using WCCPv2 2-29
IOS Routing Platforms 2-29
Switching Platforms 2-30
Security Platforms 2-30
WCCPv2 2-32
Interception Using Policy-Based Routing 2-38
Data Center Deployment Using ACE 2-43
Automatic Discovery 2-47
Asymmetric Routing 2-58
Network Transparency 2-66
Summary 2-71
Performance, Scalability, and Capacity Sizing 2-73
Overview 2-73
Objectives 2-73
Cisco WAAS Design Fundamentals 2-74
Cisco WAE Device Positioning 2-91
WCCPv2 Design Considerations 2-94
Summary 2-99
Module Summary 2-101
Module Self-Check 2-103
Module Self-Check Answer Key 2-105
ii Cisco Wide Area Application Services Technical Training (WAAS) v4.0.7 2007 Cisco Systems, Inc.
WAAS
Course Introduction
Overview
Cisco Wide Area Application Services Technical Training (WAAS) is a three-day course that
introduces Cisco WAAS to pre-sales and post-sales engineers, as well as server, storage,
application, and network IT managers. This course teaches the business value of WAN
optimization and application acceleration technologies. You will learn how to design a basic
WAAS deployment and configure Cisco WAN Application Engine (WAE) devices, including
WAN optimization and application acceleration components.
In the lab, you will install and configure WAE devices, configure and test application traffic
policies, and configure acceleration for CIFS.
Prerequisites
You should have a fundamental knowledge of data networking and Microsoft Windows
networking technologies. This course includes appendices with key supplemental information
about Microsoft Windows Networking, the Common Internet File System (CIFS) protocol, and
other relevant course information.
Learner Skills and Knowledge
Course Goal and Objectives
This topic describes the course goal and objectives.
Course Goal:
After you complete this course, you will be able to design and deploy a WAAS configuration
that includes Transport Flow Optimizations, application traffic policies, compression, and file
and print services.
Upon completing this course, you will be able to meet these objectives:
Explain the business value of WAN optimization and application acceleration technologies
and understand the technologies employed by Cisco Wide Area Application Services to
enable consolidation while improving application performance over the WAN
Design Cisco WAAS solutions
Describe Cisco WAAS implementation, integration, and management
Troubleshoot Cisco WAAS installations, including platform and network connectivity
issues, network interception issues, WAN optimization issues, and application acceleration
issues
Explain Microsoft Windows Networking concepts, and describe the training lab and
challenge topologies
Course Flow
(Schedule table, Days 1-3, AM/PM sessions: Course Introduction, Introduction to Cisco WAAS, Implementation and Integration, Lab: Configuring Application Acceleration, with a lunch break each day.)
The schedule reflects the recommended structure for this course. This structure allows enough
time for the instructor to present the course information and for you to work through the lab
activities. The exact timing of the subject materials and labs depends on the pace of your
specific class.
Additional References
This topic presents the Cisco icons and symbols that are used in this course, as well as
information on where to find additional technical references.
(Icon legend: NAS filer, Ethernet switch.)
Module Objectives
Upon completing this module, you will be able to explain the business value of WAN
optimization and application acceleration technologies and understand the technologies
employed by Cisco Wide Area Application Services to enable consolidation while improving
application performance over the WAN. This includes being able to meet these objectives:
Describe the business value of WAN optimization and application acceleration
technologies
Explain Cisco WAAS WAN optimization features
Explain how Cisco WAAS application-specific acceleration improves performance for file
and print protocols
Lesson 1
Objectives
Upon completing this lesson, you will be able to describe the business value of WAN
optimization and application acceleration technologies. This includes being able to meet these
objectives:
Describe the infrastructure and application challenges faced by IT organizations today
Define how Cisco WAAS technologies help to enable application delivery and
infrastructure consolidation
Describe the optimization technologies provided by Cisco WAAS
Describe the Cisco WAE family of appliances and the router-integrated network module,
along with the software licenses that are necessary to enable WAAS functionality
Application Delivery Initiatives
This topic explains the key drivers and application challenges that customers face when
consolidating infrastructure.
IT faces two opposing forces. One force is the need to provide high performance access to files,
content, and applications to a globally distributed workforce, which is necessary to ensure
productivity and job satisfaction. This force causes IT to push resources, such as servers,
storage, and applications, out to the enterprise edge. The other force is driving consolidation,
because maintaining an IT infrastructure in a distributed deployment model is an expensive
task.
One force drives the movement of resources out to the enterprise edge to support the remote
users, while the other force drives the movement of resources inward toward the data center in
an effort to control costs, simplify management, protect data, and improve compliance with
regulations.
One factor that is commonly overlooked by both forces is the overworked WAN. In most
cases, the WAN is running beyond capacity with voice traffic and other business-critical
applications. The WAN presents a variety of challenges in meeting the needs of remote users
and in fulfilling the requirements of the IT organization when consolidating the remote office
infrastructure.
Another challenge for most global organizations is that the rise in the use of web-based
applications happened just as there was a major push toward data center and management
consolidation. The web is well suited for centralized deployment, and it allows organizations
to manage large-scale operations from a single site. The downside is that it quickly puts
significant strain on the network, especially the WAN. Many users might work in branch or
remote offices, but it is just as likely that they work from home or on the road, or that
business partners need to interact with your applications as well. The vast majority of
enterprise-class applications were not built to operate in this diffuse environment. Simply put,
application interactions that work fine on a campus LAN are wildly inefficient across the
WAN. It is not that these are poorly written applications; it's just that they are being asked to
serve in an environment for which they were never meant. And while there are well-established
solutions to help distribute website content and cache data, applications are filled with dynamic
and changing information that requires a new approach.
The implications of poor performance are profound. Users refuse to access corporate portals
and continue to use outdated information. Expectations are set that the application is slow,
causing employee stress over the inability to maintain productivity levels and eroding
job satisfaction. Major application rollouts are stalled because managers refuse to give up on
outdated systems that at least work. Complaints to the help desk escalate, and IT becomes the
problem rather than the solution. Who suffers when new systems aren't adopted, aren't used,
and aren't productive?
Typical Distributed Enterprise
(Figure: expensive distributed IT infrastructure, including file and print servers, e-mail servers, and tape backup; application delivery woes across a congested WAN.)
Many organizations have infrastructure silos in each of their remote, branch, and regional
offices. These silos are typically carbon copies of the infrastructure in the data center,
including file servers, print servers, backup servers, application servers, e-mail servers, web
servers, storage infrastructure, and more. In any location where storage capacity is deployed
with active data, that data must be protected with disk drives, tape drives, tape libraries,
backup software, service with an off-site vaulting company, and perhaps even replication. The
remote office infrastructure is costly to maintain.
The goal of the typical distributed enterprise is to consolidate as much of this infrastructure as
possible into the data center, without overloading the WAN, and without compromising the
performance expectations of remote office users who are accustomed to working with local
resources.
The WAN Is a Barrier to Consolidation
(Figure: a LAN offers high bandwidth, low latency, and reliability; WAN characteristics hinder consolidation: already congested, low bandwidth, latency, packet loss, and a round-trip time (RTT) of many milliseconds across the routed network between client and server.)
Applications often do not perform well in WAN environments because they are developed in a
utopian environment like the one shown at the top of the figure. Application developers and
operating system vendors tend to put servers adjacent to the clients on the same LAN when
developing their products.
challenges associated with a WAN are not encountered. LANs tend to have high bandwidth,
high reliability, low latency, low congestion, and low packet loss. There are few performance
barriers on the LAN.
In contrast, the WAN is generally low in bandwidth, high in latency, unreliable, and high in
packet loss. Separating the user from the server with a low performance long-distance
unreliable network can wreak havoc with application performance. In comparison to the
utopian LAN environment, WAN applications tend to fall apart for a variety of reasons,
including insufficient bandwidth, unreliable connections or lack of connection stability, packet
loss or congestion, retransmission, transport latency, and application latency. Some applications
require many hundreds of round trips to complete a trivial operation, such as a Common
Internet File System (CIFS) file open. Web applications are also affected, requiring
the exchange of hundreds of operations. With each message that is exchanged, the user
application pays a roundtrip penalty of the WAN latency per operation. With 1,000 operations
that must ping-pong in a 40 millisecond WAN, this equates to about 40 seconds of response
time.
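The round-trip arithmetic above can be sketched directly. This is a simple illustrative calculation, not WAAS-specific code: serialized request/response operations each pay one WAN round trip.

```python
def response_time_s(operations: int, rtt_ms: float) -> float:
    """Total response time when each operation must complete a WAN round trip
    before the next one can begin (no pipelining)."""
    return operations * rtt_ms / 1000.0

# 1,000 ping-pong operations over a 40 ms WAN: ~40 seconds of response time
print(response_time_s(1000, 40))
```

The same workload on a LAN with a sub-millisecond RTT completes in well under a second, which is why latency, not bandwidth alone, dominates chatty-protocol performance.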
Unsatisfactory WAN application performance is the primary reason that most organizations do
not consolidate costly remote office infrastructures.
Addressing the WAN Challenge
The most silent yet largest detractor of application performance over the WAN is latency.
Latency is problematic because of the volume of message traffic that must be sent and received.
Some messages are very small, yet even with substantial compression and flow optimizations,
these messages must be exchanged between the client and the server to maintain protocol
correctness, data integrity, and so on. The best way to mitigate latency is to deploy intelligent
protocol optimizations, otherwise known as application acceleration, in the remote office. This
is done on a device that understands the application protocol well enough to make decisions on
how best to handle application traffic as it occurs and can closely mimic the performance of a
local server in many cases. On a per-message basis, this application accelerator could examine
messages to determine whether they can be suppressed or handled locally. If the request is for data, the
application accelerator could determine if the data is best served from cache (if the object is
valid, the user is authenticated, and the appropriate state is applied against the object on the
origin server), or if a message must be sent to the origin server to maintain proper protocol
semantics.
Bandwidth utilization is another application performance killer. Transferring a file multiple
times can consume significant WAN bandwidth. If a validated copy of a file or other object is
stored locally in an application cache, it can be served to the user without using the WAN.
Application caching is typically tied to an application accelerator and is specific to that
application, but there are compression techniques that can be applied at the transport layer that
are application agnostic. One of these techniques is standards-based compression. Another
technique is called Data Redundancy Elimination (DRE), which is an advanced form of
suppressing the transmission of redundant network byte streams. Compression and application
caching provide another way to improve application performance by minimizing the amount of
data that must traverse the network. Minimizing the amount of data on the network improves
response time and leads to better application performance, while also freeing up network
resources for other applications.
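Cisco's DRE implementation is proprietary; the following is a minimal, hypothetical sketch of the general idea behind redundancy elimination, namely replacing previously seen data chunks with short signatures that both peers can resolve from a shared cache. Function names, the fixed chunk size, and the SHA-1 signature choice are illustrative assumptions, not the actual WAAS design.

```python
import hashlib

def dre_encode(data: bytes, cache: dict, chunk_size: int = 256):
    """Replace chunks already in the shared cache with their short signatures.

    Returns a list of ('ref', digest) or ('raw', chunk) tokens. Both peers
    maintain synchronized caches, so 'ref' tokens transmit only the digest.
    """
    tokens = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha1(chunk).digest()
        if digest in cache:
            tokens.append(('ref', digest))   # redundant data: send signature only
        else:
            cache[digest] = chunk
            tokens.append(('raw', chunk))    # first sight: send the data itself
    return tokens

def dre_decode(tokens, cache: dict) -> bytes:
    """Rebuild the original byte stream, learning new chunks as they arrive."""
    out = b''
    for kind, value in tokens:
        chunk = cache[value] if kind == 'ref' else value
        if kind == 'raw':
            cache[hashlib.sha1(chunk).digest()] = chunk
        out += chunk
    return out
```

On a second transfer of the same or similar data, nearly every token is a short 'ref', which is why repeated transfers consume a small fraction of the original WAN bandwidth.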
The next application performance barrier in a WAN environment is transport throughput.
Application protocols run on top of a transport mechanism that provides connection-oriented or
non-connection-oriented delivery of data. In many cases, enterprise applications use TCP for its
inherent reliability. Although it is reliable, TCP presents performance barriers of its own. If
TCP could be optimized to perform better in WAN environments, then application throughput,
response time, and the user experience can all show improvement, due to better utilization of
existing network capacity and better response to network conditions.
Two factors should be considered with all consolidation-enabling solutions. The first factor is
network integration. Consolidation solutions should not disrupt the operation of existing
network features such as QoS, access lists, NetFlow, firewall policies, and others. By
integrating with the network in a logical manner, that is, maintaining service transparency
(preserving information in packets that the network needs to make intelligent feature-oriented
decisions), fundamental network-layer optimizations can continue to operate in the face of
application acceleration or WAN optimization. Physical integration allows such technology to
be directly integrated into existing network devices, thereby providing a far more effective
Total Cost of Ownership (TCO) and Return On Investment (ROI) model.
Administrative services such as print services should be centrally managed but locally
deployed in remote sites when possible. This keeps such administrative traffic from needing
to traverse the WAN.
The network should be aligned with business priority and application requirements to ensure
the appropriate handling of traffic. Quality of Service, or QoS, for instance, allows network
administrators to configure network behavior in specific ways for specific applications. As all
applications are not created equal, the network must be prepared to handle traffic in different
ways based on how the application needs to be handled. This involves classification of data
(identifying the application and who is talking to whom, among other metrics), pre-queuing
operations (immediate actions, such as marking, dropping, or policing), queuing and scheduling
(ensuring the appropriate level of service and capacity are assigned to the flow), and post-
queuing optimizations (link fragmentation and interleaving, packet header compression). This
set of four functions is known as the QoS Behavioral Model, which relies on visibility (service
transparency) to function fully when acceleration technology is deployed. Also, the network
should be able to make on-the-fly path routing decisions (advanced routing) to ensure that the
right path is taken for the right flows. This includes policy-based routing (PBR), optimized
edge routing (OER), and more.
Finally, the network should be visible. That is, administrators need to know how the network is
performing, how it is being used, and whether network characteristics are as expected.
Technologies such as NetFlow and collection and analysis tools allow administrators to see how
the network is being utilized, identify top talkers, and more. Functions such as IP Service Level
Agreements (IP SLAs) allow the network to alert administrators when conditions exceed
thresholds, and furthermore allow the network to even react when such events occur.
Cisco WAAS Introduction
This topic defines the market positioning for Cisco WAAS technologies, and how the
technologies within Cisco WAAS help IT organizations to consolidate infrastructure and
improve application performance for centralized applications.
(Figure: Cisco WAAS integrated with Cisco IOS. Network capabilities include NetFlow, performance visibility and monitoring, QoS with queuing, shaping, and policing, OER, and IP SLAs; Cisco WAAS adds application acceleration, auto-discovery, and network transparency. Together they help IT monitor and provision, easily manage the WAN, and ensure applications meet goals and compliance.)
Cisco WAAS provides the cornerstone for a holistic solution that provides an optimized
framework for application delivery and infrastructure consolidation. While Cisco WAAS is one
piece of this framework, the other piece is the network infrastructure itself. When combining
powerful technologies from both Cisco WAAS and the network together, IT organizations find
themselves well poised to better leverage existing network resources, improve performance,
consolidate infrastructure, simplify, and control costs. Each of the items shown in the solution
diagram above directly correlates to a problem area mentioned in the previous slide relative to
application performance challenges over the WAN.
Network technologies that help address performance barriers created by the WAN (not an
exhaustive list):
NetFlow: NetFlow gathers data about flows on the network from specific points within the
network. This data can then be sent to a collector and analyzed. NetFlow provides the
foundation for visibility into how the network is performing and how the network is being
used.
IP Service Level Agreements: IP SLAs are measurements taken from within the network
and actions taken when metrics are violated. For instance, IP SLAs can be used to monitor
the latency from point-to-point and trigger a reaction should the latency exceed a specific
threshold.
Optimized Edge Routing: OER is a function employed at the edge of the network
whereby a specific network path can be selected based on metrics such as latency,
bandwidth, and loss. OER can be combined with other optimized routing capabilities, such
as PBR, to ensure that high priority flows utilize the best available network path, and can
also be combined with high availability technologies such as Hot Standby Router Protocol
(HSRP), Virtual Router Redundancy Protocol (VRRP), and Gateway Load Balancing
Protocol (GLBP).
Quality of Service (QoS): QoS is an architecture rather than a feature. QoS provides a
series of components that allow the network to identify incoming flows and determine
what they are and how to respond (classification); to perform immediate actions
against flows before network resources are consumed, such as dropping packets, policing,
marking packets, and estimating bandwidth requirements (pre-queuing operations); to
queue traffic according to application requirements and business priority, schedule queue
service, and shape traffic (queuing); and to apply optimization to packets being serviced to
ensure performance (post-queuing operations).
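As an illustration of the pre-queuing "policing" action described above, a generic single-rate token-bucket policer can be sketched as follows. This is a textbook model, not Cisco's implementation; the class name and parameters are illustrative.

```python
class TokenBucketPolicer:
    """Generic single-rate token bucket: conforming packets pass,
    excess packets are dropped (or could be re-marked)."""

    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes          # bucket depth (committed burst size)
        self.tokens = float(burst_bytes)  # bucket starts full
        self.last = 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket depth
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes   # conform: charge the bucket
            return True
        return False                      # exceed: police the packet
```

A burst up to the bucket depth is admitted immediately; sustained traffic is then limited to the configured rate, which is exactly the behavior a pre-queuing policer enforces before packets consume queue capacity.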
Cisco WAAS technologies that help address performance barriers created by the WAN:
WAN optimization: Cisco WAAS overcomes performance limitations caused by WAN
bandwidth, latency, packet loss, and TCP through a series of optimizations that includes
Data Redundancy Elimination (DRE), Persistent LZ Compression, and Transport Flow
Optimization (TFO). WAN optimization is generic in nature and benefits a broad range of
applications concurrently.
Application Acceleration: Cisco WAAS not only overcomes unruly WAN conditions, but
also overcomes unruly conditions that exist in applications and application protocols
themselves as well. This capability of latency reduction, bandwidth utilization reduction,
and response time improvement is called application acceleration. Whereas WAN
optimization is application-agnostic and can benefit almost any application, application
acceleration is specific to a given application protocol.
Wide Area File Services (WAFS): WAFS is in many ways a component of application
acceleration in that it helps to improve user performance when accessing remote file
servers over the WAN; however, WAFS also has other distinctive characteristics that
fall outside the realm of application acceleration. These include a disconnected mode of
operation, which helps to ensure some level of productivity during network outages,
as well as local services such as print services, which help to keep unruly
administrative traffic off of the WAN.
Cisco WAAS Overcomes the WAN
(Figure: optimized connections between the data center and remote office across the WAN.)
Cisco WAAS is a powerful new solution that overcomes the challenges presented by the WAN.
Cisco WAAS is a software package that runs on the Wide Area Application Engine (WAE) that
transparently integrates with the network to optimize applications without client, server, or
network feature changes.
A WAE is deployed in each remote office, regional office, and data center of the enterprise.
With Cisco WAAS, flows to be optimized are transparently redirected to the Cisco WAE,
which overcomes the restrictions presented by the WAN, including bandwidth disparity, packet
loss, congestion, and latency. Cisco WAAS enables application flows to overcome restrictive
WAN characteristics to enable the consolidation of distributed servers, save precious WAN
bandwidth, and improve the performance of applications that are already centralized.
As shown in the diagram, Cisco WAAS can be deployed using appliances that attach to the
network as nodes, or can be deployed within the Cisco Integrated Services Router.
Alternatively, Cisco WAAS can be deployed on devices that are deployed physically in-path.
The Problem with Tunneling
Other WAN optimization solutions use GRE tunnels to encapsulate optimized WAN connections. Tunneling hides Layer 3-7 headers, which breaks many traffic monitoring and optimization applications, like IDS and QoS.
Competing WAN optimization solutions use GRE tunnels to encapsulate optimized WAN
connections. The problem with this approach is that tunneling prevents the network from
having visibility into Layer 3-7 data. Without visibility into Layer 3-7 data, functions like
Quality of Service (QoS), Network-Based Application Recognition (NBAR), NetFlow, and
Intrusion Detection Systems (IDS) cannot function. This means that the existing traffic
monitoring and optimization infrastructure is broken.
WAAS Network Transparency
Cisco WAAS provides a transparent framework for accelerating applications. Cisco WAAS
uses a TCP proxy architecture that preserves information critical to network feature operation,
such as Layer 3-7 packet header information. With its transparent optimization architecture,
Cisco WAAS is uniquely positioned to offer feature compatibility with functions that are
already used within the network, including:
Quality of Service: QoS policies provide traffic classification, prioritization, scheduling,
queuing, policing, and shaping.
Network-Based Application Recognition (NBAR): NBAR provides protocol and
application discovery.
Access Control Lists (ACLs): ACLs permit or deny traffic based on identification.
NetFlow: NetFlow provides statistics on nodes communicating on the network, and the
applications being used.
Firewall policies: Firewall policies are used to prevent malicious traffic from entering or
exiting a network.
By providing a transparent architecture, Cisco WAAS is able to provide interoperability with
end-to-end performance analysis systems such as Cisco Network Application Performance
Analysis (NAPA) and Performance Visibility Manager (PVM), as well as other third-party
products.
Cisco WAAS Enables Consolidation
Cisco WAAS is designed to help consolidate infrastructure from remote offices into the data
center. Cisco WAAS is characterized by its ability to:
Integrate transparently into the existing infrastructure
Understand application protocols and how to optimize those applications
Provide compression and flow optimizations to improve delivery of data that must traverse
the WAN
Simplify consolidation by providing policy-based configuration and automatic discovery
Aside from cost savings, the primary goal of infrastructure consolidation is to give users the
same level of access that is available with a local infrastructure.
Maintaining performance while enabling consolidation entails a number of services:
Application-specific acceleration (file and print services)
WAN optimizations such as Transport Flow Optimization (TFO), DRE, and Persistent
Lempel-Ziv (LZ) compression
With Cisco WAAS, WAEs automatically discover each other to minimize the administrative
burden.
WAAS Accelerates a Broad Range of Applications
* Performance improvement varies based on user workload, compressibility of data, and WAN characteristics and utilization. Actual numbers are case-specific and results might vary.
The table lists typical improvements that can be expected with specific applications and related
protocols. Note that the typical improvements shown are subjective and can be relative to a
variety of factors including, but not limited to:
Response time improvement: how long an operation or series of operations takes
without Cisco WAAS as opposed to with Cisco WAAS
Bandwidth savings: how much bandwidth capacity is consumed on the WAN without
Cisco WAAS as opposed to with Cisco WAAS
Examples of application performance improvements are shown in the next series of slides.
Cisco WAAS Performance: File Services
(Figure: operations over a T1 (1.544 Mbps) line with 80-ms RTT: opening a 5 MB PowerPoint file, saving a 5 MB PowerPoint file, and downloading an 8 MB Microsoft SMS package; time scale of 20-80 seconds. Legend: operation over native WAN; first operation with WAAS, no preposition; first operation with WAAS, with preposition; future operation with WAAS.)
WAAS acceleration for file services protocols leads the industry. By coupling latency-reduction
and bandwidth-savings techniques, such as read-ahead, message and operation
batching, prediction, data caching, metadata caching, and local protocol handling, with a
powerful WAN optimization architecture (consisting of DRE, LZ, and TFO), Cisco WAAS can
provide up to a 400X performance improvement for CIFS file sharing over the WAN. The top graph in
the figure shows performance improvements when working with a 5MB PowerPoint file. The
bottom graph shows performance improvements when working with Microsoft Systems
Management Server (SMS) for package download using CIFS. With Cisco WAAS, file and
software distribution servers can be consolidated without compromising end-user performance
expectations.
Cisco WAAS Performance: Exchange
(Figure: sending and receiving e-mail with a 1 MB or 5 MB attachment over a T1 (1.544 Mbps) line with 80-ms latency, using Microsoft Exchange with and without cached mode; time scales of 16-48 seconds for 1 MB and 40-120 seconds for 5 MB. Legend: send and receive over WAN; first send and receive with Cisco WAAS; future send and receive with Cisco WAAS.)
Email is another application that can be accelerated with Cisco WAAS. Microsoft Exchange
and Outlook (5.5, 2000, 2003, with or without cached mode), Lotus Notes, or other mail servers
using TCP-based protocols such as IMAP, POP3, or Simple Mail Transfer Protocol
(SMTP) can all be accelerated. As shown in the figure, Cisco WAAS can dramatically reduce
bandwidth consumption and improve response time for email users. The example in the figure
uses a send and receive function, as the user sends an email with a large attachment to himself.
Cisco WAAS Performance: SharePoint
(Figure: file save operations over a 256 kbps line with 120 ms RTT and 0.5 percent packet loss.)
SharePoint and other collaboration portal applications can be optimized by Cisco WAAS. The
results in this figure show that response time is significantly improved with Cisco WAAS.
Bandwidth savings are also significant, thanks to Cisco WAAS DRE and persistent LZ
compression.
Cisco WAAS Performance: Data Protection
Cisco WAAS can dramatically improve performance and minimize bandwidth for replication
applications such as Network Appliance SnapMirror; EMC Symmetrix Remote Data Facility
(SRDF), native over IP or over Fibre Channel over IP (FCIP, a fabric bridging protocol that
leverages TCP/IP as an interconnect between Storage Area Network (SAN) fabrics); EMC
SANCopy (over FCIP); EMC Celerra IP Replicator; and any other replication applications that
use FCIP or native TCP/IP as a transport.
Cisco WAAS also accelerates backup and restore operations performed over the WAN. The
example in the figure shows how Cisco WAAS can improve the performance of Microsoft
NTBackup. With Cisco WAAS compression history through DRE, a previously-seen backup
provides huge performance increases for a file or system restore.
Cisco WAAS Performance: Citrix
Cisco WAAS can also provide acceleration for remote desktop applications such as Citrix
Presentation Server or Microsoft Terminal Services. With Cisco WAAS, client connections use
less bandwidth, experience greater connection stability, and benefit from improved application
responsiveness. For the best results with Citrix and other remote desktop applications, such as
Terminal Services, it is recommended that native compression be disabled and encryption be
enabled only for login traffic.
Cisco WAAS Deployment Architecture
The figure shows WAE appliances at a remote office, a branch office (as a network module in an ISR), and a regional office, connected across the WAN to a data center that hosts WAE appliances and primary and standby WAAS Central Manager appliances.
WAEs are deployed at network entry and exit points of WAN connections. If multiple entry
and exit points exist, you can deploy a single WAE that optimizes both connections by sharing
the interception configuration across those entry and exit routers. To provide and support
optimizations, WAAS requires that devices be deployed in two or more sites. To support
redundancy, more than one WAE is typically deployed in the data center. WAEs must also be
deployed to host the Central Manager application, which can be made highly available by using
two WAEs. To provide transparent optimizations, WAAS requires two devices in the path of
the connection to be optimized.
As shown in the figure, Cisco WAE devices can either be standalone appliances or network
modules that integrate physically into the Integrated Services Router (ISR).
Seamless, Transparent Integration
Cisco WAAS integrates transparently with the network through WCCPv2, policy-based
routing, or the CSM and ACE modules. WAEs provide compliance with network value-added
features:
Preserves packet headers: as shown in the figure, an optimized connection retains the
original source and destination values (Src MAC BBB, Src IP 1.1.1.10, Src TCP 15131;
Dst MAC AAA, Dst IP 2.2.2.10, Dst TCP 80)
Supports QoS, Network-Based Application Recognition (NBAR), queuing, policing, and
shaping classifications
Supports firewall policies and Access Control Lists (ACLs)
Supports NetFlow, monitoring, and reporting
The WAE relies on network interception and redirection to receive packets to be optimized.
Cisco WAAS can leverage a variety of options:
Physical inline: The WAE can be deployed physically in-path within the network when
configured with the inline card. This allows the WAE to see all traffic traversing a network
path.
WCCPv2: The WCCPv2 protocol (Web Cache Communication Protocol version 2) allows
the WAE devices to be deployed virtually in-path but physically off-path.
PBR: Policy Based Routing allows the network to treat a Cisco WAE as a next-hop router
to automatically route traffic through it for optimization. Like WCCPv2, the WAEs are
virtually in-path but physically off-path.
CSM/ACE: The Content Services Module (CSM) or Application Control Engine (ACE)
modules for the Catalyst 6500 series switch can be used for enterprise data center
integration and scalability. Like WCCPv2 and PBR, the WAEs are virtually in-path, but
physically off-path.
After the packets are redirected to the WAE, the WAE applies the appropriate optimization as
determined by the application policy. For all traffic that is optimized (with some exceptions),
Cisco WAAS retains the packet header information to ensure that upstream network features
are not impaired. The source IP, destination IP, source port, and destination TCP port are fully
maintained. This is called service transparency and helps to ensure compatibility with existing
network features.
Scalable, Secure Central Management
Comprehensive management:
Central configuration
Device grouping
Monitoring, statistics
Alerts, reporting
Easy-to-use interface:
GUI, wizards
IOS CLI
Roles-based administration
Proven scalability and
security:
Up to 2500 managed nodes
Redundancy and recovery
SSL-encrypted
WAAS Central Manager is a secure, robust, and scalable management platform with many
years of lineage from the Application and Content Networking System (ACNS). WAAS
Central Manager provides all of the features necessary to provide enterprise-wide system
management, including inter-device secure communications, secure access, roles-based
administration, global and local policy configuration, group configuration, and more. With
roles-based administration, a user can be configured to see only specific features or specific
devices or groups, thus facilitating a common management framework for all users working
with services that are consolidated by WAAS.
Central Manager scales to support up to 2500 nodes and can be configured for high availability
using active-standby clustering.
The Cisco WAE provides a command-line interface (CLI) that is similar to the IOS CLI.
Cisco WAAS Optimizations
This section describes the application-specific and application-agnostic optimizations that are
provided by Cisco WAAS.
Application Acceleration
Read-ahead optimization: Multiple forms of read-ahead are employed within Cisco
WAAS to overcome performance challenges. Read-ahead is used when accessing a file that
is not fully cached or cannot be cached.
Write-behind optimization: Write-behind optimization examines blocks being written to
suppress redundant write operations. Only changed blocks are propagated to the server.
Message prediction: Message prediction relies on a programmatic understanding of the
way applications leverage the protocols, and dynamic learning of how protocols are used.
This knowledge can then be used to perform operations ahead of time on behalf of the user,
thereby mitigating latency.
Integration with WAN optimization: Cisco WAAS application acceleration capabilities
can also take advantage of the WAN optimization capabilities provided by WAAS. For
instance, DRE, TFO, and persistent LZ compression can also be employed to minimize
bandwidth consumption and better leverage available network capacity.
The acceleration capabilities provided by Cisco WAAS not only help to improve performance,
but also offload the origin file server to provide better scalability using existing hardware.
Application-specific acceleration offers other tangible benefits:
Disconnected modes of operation: Examples include disconnected and guest printing, and
read-only CIFS file server access.
Cache prepopulation: Cache prepopulation schedules the movement of data that is
frequently used, or for which first access must be high performance. Note that the DRE
cache is also populated.
WAN and origin server offload
Cisco WAAS fully encompasses the features provided by the original Cisco WAFS product
family. Procedures to migrate from a WAFS v3.0 installation to a WAAS v4.0 solution are
available. For more information, please contact your Cisco account team or product specialist
team.
DRE and LZ Manage Bandwidth Utilization
The figure shows a file (FILEDOC) crossing the WAN between two WAEs, each holding a DRE cache, with LZ compression applied on the WAN side of each device.
DRE allows the WAE to maintain a local database of TCP segments that have already been
seen by the device. Those same segments can be safely suppressed if they occur again. When
redundant segments are identified, the WAE sends a small instruction set to the other WAE on
how to rebuild the message with zero loss and 100% coherency. As traffic comes into the
WAE, it is compared against the DRE database context, which is a partition of the DRE
database that is reserved for the peer WAE. If the segments are identified as new, they are
added to the context. If segments are identified as redundant, they are removed and replaced
with a lightweight signature (an instruction set) that instructs the other device on which context
entries to reference in its DRE database context to accurately rebuild the message.
Along with DRE, WAAS also uses persistent LZ compression. DRE signatures (instruction
sets) are heavily compressible, meaning an additional 2:1, 3:1, or 4:1 compression can be
applied in addition to the 2:1 to 100:1 compression applied by DRE. Persistent LZ compression
provides strong compression for DRE instruction sets, as well as strong compression for data
that has not been identified by DRE before.
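The two-stage pipeline described above, chunk-level deduplication against a per-peer context followed by LZ compression of the resulting instruction stream, can be sketched in a few lines. This is an illustrative model only, not Cisco's implementation: the fixed chunk size, SHA-1 signatures, and record format are invented for the example.

```python
import hashlib
import zlib

class DreContext:
    """Toy model of a per-peer DRE context that maps chunk signatures to data.

    Illustrative only: real Cisco WAAS DRE uses its own chunking, signature,
    and storage formats, none of which are reproduced here.
    """
    CHUNK = 256  # bytes per segment (arbitrary for the example)

    def __init__(self):
        self.store = {}  # signature -> chunk bytes

    def encode(self, data: bytes) -> bytes:
        """Replace previously seen chunks with signatures, then LZ-compress."""
        out = []
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            sig = hashlib.sha1(chunk).digest()
            if sig in self.store:
                out.append(b"R" + sig)  # redundant chunk: send signature only
            else:
                self.store[sig] = chunk  # new chunk: add to context, send data
                out.append(b"N" + len(chunk).to_bytes(2, "big") + chunk)
        return zlib.compress(b"".join(out))  # persistent LZ stage

    def decode(self, wire: bytes) -> bytes:
        """Rebuild the original message from the instruction stream."""
        buf = zlib.decompress(wire)
        out, i = [], 0
        while i < len(buf):
            tag, i = buf[i:i + 1], i + 1
            if tag == b"R":
                sig, i = buf[i:i + 20], i + 20
                out.append(self.store[sig])  # look up redundant chunk locally
            else:
                n = int.from_bytes(buf[i:i + 2], "big"); i += 2
                chunk = buf[i:i + n]; i += n
                self.store[hashlib.sha1(chunk).digest()] = chunk
                out.append(chunk)
        return b"".join(out)
```

With a sender-side and receiver-side context, the first transmission of a message populates both contexts, and a repeat transmission shrinks to a stream of signatures, mirroring the 2:1 to 100:1 behavior described above.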
TFO Improves Efficiency and Utilization
The figure shows WAEs at each edge of the WAN presenting LAN TCP behavior to the local nodes while applying window scaling, large initial windows, congestion management, and improved retransmit across the WAN segment.
TFO is a mechanism that shields communicating nodes from WAN conditions. Running a TCP
proxy and optimized TCP stacks between devices prevents many of the problems that occur in
a WAN from propagating back to the communicating nodes. With this approach,
communicating nodes experience LAN-like TCP response times because the WAE is
terminating TCP locally. From a client or server perspective, packet loss is rare. When packet
loss is encountered on the WAN, the recovery is performed at LAN speed. As packets are lost
or congestion is encountered, the WAEs create a buffer or a boundary at the border of the
WAN to keep problematic situations from bleeding over and impacting the TCP stacks of the
communicating nodes.
The TCP proxy and TFO are not applied until after device auto-discovery, which occurs during
the TCP connection establishment. In the figure, the first round-trip is completed natively with
no optimization.
In addition to buffering the WAN condition, Cisco WAAS also provides performance
optimizations to improve the throughput and responsiveness of TCP connections that traverse
the WAN. These optimizations include:
Window scaling: Window scaling allows communicating nodes to better use the available
WAN capacity.
Large initial windows: Large initial windows helps TCP connections to exit the slow-start
phase and progress more quickly to receiving ample throughput.
Improved retransmission: Improved retransmission minimizes the amount of data that
must be retransmitted during packet loss or congestion scenarios.
Advanced congestion management: Advanced congestion management algorithms safely
maximize throughput in lossy scenarios, thereby providing bandwidth scalability and
fairness with other application flows attempting to leverage the network.
Cisco WAE Platforms and Software Licensing
This section introduces Cisco WAE hardware appliances and describes the router-integrated
network module. Performance and scalability characteristics of these components are briefly
discussed here and covered in more detail in the design module. This section also
introduces the Cisco WAAS software licenses, which are necessary to provide WAAS
functionality.
The figure positions the Cisco WAE family along axes of performance and scalability: the NME-WAE-302 and NME-WAE-502 network modules and the WAE-512 appliance serve the branch or remote office, the WAE-612 and WAE-7326 appliances serve the regional office or small data center, and ACE sits at the top of the scalability axis.
The ACE module itself does not provide WAN optimization or application acceleration; rather,
it acts as a scalability mechanism for a large number of WAE devices within a data center.
NME-WAE: Router Integrated Module
NME-WAE (router-integrated network module for the Cisco Integrated Services Router):
Provides the lowest CapEx and OpEx, integrates within the ISR, and addresses the
majority of remote branch offices
Single-processor system; can be clustered with WCCPv2 or PBR
Supported in ISR models 2811, 2821, 2851, 3825, and 3845
NME-WAE-302: 512MB of RAM and 80GB of disk; supports up to 4Mbps WAN
connections and up to 250 optimized TCP connections; Transport license only
NME-WAE-502: 1GB of RAM and 120GB of disk; supports up to 4Mbps WAN
connections and up to 500 optimized TCP connections; Enterprise license capable
WAE Appliance Family
The WAE-512 appliance (remote office): 1RU, single processor, 1 or 2GB of RAM;
supports up to 20Mbps WAN connections and up to 1500 optimized TCP connections;
supports 250GB of RAID-1 disk capacity.
WAE-7326 (a 2RU appliance): This dual-processor system succeeds the FE-7326 (and CE-
7325), and is RoHS compliant. Supports two to six 300GB SCSI drives (running RAID-1
or JBOD), and can be configured with either the Transport or Enterprise license. Supports
WAN links up to 310Mbps and up to 7500 concurrently-optimized TCP connections.
Note Any Cisco WAE platform can be used as a branch office or data center platform, as long as
appropriate performance and sizing guidelines are followed per the design module.
Enterprise Data Center Integration
Cisco ACE provides transparent integration, interception, load balancing, and failover, with up
to 16Gbps of throughput.
Deploying Cisco WAAS using the Cisco ACE linecard for the Catalyst 6500 series of
intelligent switches provides a solution that meets the needs of the most demanding enterprise
data center environments in terms of performance, scalability, and availability. The ACE
module can scale to 4 million TCP connections, with a setup rate of 350 thousand TCP
connections per second, and up to 16Gbps of throughput. Additionally, ACE represents the
industry's most scalable, high-performance, secure, and feature-rich solution for server load
balancing, network device load balancing, virtualization, and application control. With physical
integration into the Catalyst 6500, operational costs are minimized through simplified
deployment and management. The ACE module supports a variety of features, including
virtualization and virtual partitions, contexts and domains, flexible resource assignment,
granular security, and control.
Cisco WAAS Licensing
Enterprise: Includes all of the features of the Transport license, plus CIFS acceleration
(file and print). Provides optimizations for all TCP-based applications and protocol
acceleration for CIFS (file and print). Used for deployments where applications need to be
optimized and file servers are being consolidated.
Central Manager: Enables a WAE to act as the Central Manager for Cisco WAAS
deployments and includes the Central Manager GUI. Required for each deployment of
Cisco WAAS; deployments without a Central Manager are not supported under any
circumstance. Order two for active/standby deployments.
Note Cisco WAAS licensing is not enforced, and no license files or keys need to be added to the
WAEs running Cisco WAAS. This enforcement level might change in a future release of the
product.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Lesson 2
WAN Optimization Technical Overview
Objectives
Upon completing this lesson, you will be able to explain Cisco WAAS WAN optimization
features. This includes being able to meet these objectives:
Identify the performance barriers created by the WAN
Describe the basic characteristics of TCP
Describe the functions provided by Cisco WAAS TFO
Identify the compression capabilities provided by Cisco WAAS
Application Performance Barriers
This topic examines the performance barriers created by the WAN, including bandwidth,
packet loss, throughput, and latency.
Bandwidth
In low bandwidth environments such as the WAN, traffic encounters a point of bandwidth
disparity at the network element that connects the disparate networks. In most cases, this
network element is the router. The router has to negotiate the transmission of data from a high
speed network to a low speed network. Given the difference in bandwidth, data might need to
be queued for transmission for long periods of time before the next set of data can be sent. As
shown in the figure, data leaves the node on a high bandwidth network and enters the network
element managing the bandwidth disparity, and a fraction of the data is passed across the long-
distance, low bandwidth WAN.
The challenge of the low bandwidth environment is two-fold. First, only a small amount of data
can be sent. Second, when the flow reaches another high bandwidth network, it is not able to
fully leverage the capacity of that network. From the perspective of the server in this example,
the client is only sending small amounts of data. The same is true in the reverse direction,
where the server response is throttled at the WAN router, and the client sees the server as
responding slowly.
Other challenges arise with bandwidth constraints, including congestion and packet loss. As
packets are queued on each router in the network, and these queues become full, some packets
are lost. For applications using a reliable connection-oriented transport such as TCP, the data
must be retransmitted. Additionally, the amount of data outstanding on the wire is significantly
reduced when packet loss is encountered. These characteristics force the communicating nodes
to send less data, and application performance is derailed.
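The disparity is easy to quantify with an ideal serialization-time calculation. The sketch below ignores protocol overhead, queuing, and latency, and the specific link rates are just illustrative:

```python
def transfer_seconds(size_bytes: float, link_bps: float) -> float:
    """Ideal time to serialize a payload onto a link, ignoring overhead."""
    return size_bytes * 8 / link_bps

ONE_MB = 2**20

lan = 100e6    # 100Mbps Fast Ethernet LAN (illustrative)
t1 = 1.544e6   # T1 WAN link

# The same 1MB payload: a fraction of a second on the LAN,
# several seconds once it must cross the T1.
print(f"1MB over LAN: {transfer_seconds(ONE_MB, lan):.2f} s")
print(f"1MB over T1:  {transfer_seconds(ONE_MB, t1):.2f} s")
```

The router sitting between these two rates must queue roughly 98 percent of each burst, which is exactly the bandwidth disparity the paragraph describes.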
Latency
Packet Loss, Congestion, and
Retransmission
When detected, packet loss and congestion cause transmitting nodes to retransmit data and
adjust transmission throughput. These events signal to the transmitting node that there is
contention in the network and that the available bandwidth capacity needs to be shared with
another node. As such, the transmitting node slows its transmission rate to allow other
network nodes to consume bandwidth as well. In this way, when a packet is lost, the transmitter
must not only retransmit the lost data, but also slow down to accommodate a network that
is being shared.
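This retransmit-and-slow-down behavior is the classic additive-increase, multiplicative-decrease pattern of TCP congestion avoidance. A toy model (the RTT count and loss timing are arbitrary, and real TCP adds slow start and other refinements):

```python
def aimd(rtts: int, loss_rtts: set, cwnd: float = 1.0) -> list:
    """Model congestion avoidance: +1 segment per loss-free RTT, halve on loss."""
    history = []
    for t in range(rtts):
        if t in loss_rtts:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on loss
        else:
            cwnd += 1.0                # additive increase per loss-free RTT
        history.append(cwnd)
    return history

# A single loss at RTT 10 halves the window; linear growth then resumes,
# so it takes many RTTs to regain the lost transmission rate.
h = aimd(rtts=20, loss_rtts={10})
```

The sawtooth this produces is why even occasional loss keeps average throughput well below link capacity on long paths.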
The Impact of Latency and Packet Loss
R = (MSS × 1.2) / (RTT × √p)

R: average throughput
MSS: packet size
RTT: round-trip time
p: packet loss probability

In the figure, the expected throughput of a 1.544Mbps T1 is never reached; with 80 ms of latency, the actual throughput is only about 500Kbps.
The formula in the top-right corner of this slide shows how latency (RTT) and packet loss (p)
negatively impact overall throughput. Given that both variables are in the lower half of the
equation, as they increase, the overall throughput will decrease. Packet loss and latency cause
an exponential drop in overall throughput, causing most long-distance networks to never reach
their utilization potential.
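The formula is straightforward to evaluate. The sketch below assumes the 1.2 constant shown on the slide and the common 1460-byte MSS; the loss rate chosen is purely illustrative:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes: float, rtt_s: float, p: float) -> float:
    """Approximate TCP throughput: R = (MSS * 1.2) / (RTT * sqrt(p)), in bps."""
    return (mss_bytes * 8 * 1.2) / (rtt_s * sqrt(p))

# 1460-byte segments, 80 ms RTT, 2 percent loss: the achievable rate falls
# below a T1's 1.544Mbps regardless of how fast the physical link is.
r = mathis_throughput_bps(1460, 0.080, 0.02)
print(f"{r / 1e6:.3f} Mbps")
```

Because RTT and √p sit in the denominator, doubling the latency or quadrupling the loss rate each halve the ceiling, which is the exponential collapse the paragraph describes.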
Introduction to TCP
This topic describes the basic characteristics of TCP and examines its behavior in WAN
environments.
TCP Overview
The figure shows TCP transmit (Tx) and receive (Rx) buffers sitting between the application
data and the unreliable network infrastructure. As the network is able to handle transmission,
TCP drains data from the application buffer and sends it through the network layer over IP.
TCP provides reliable and guaranteed delivery of data from one application buffer to another.
After a TCP connection is established, TCP receives data from applications and the operating
system and places it into a transmit buffer. The TCP process then manages the transmission of
this data through the IP network by packetizing the data with control information, including
port numbers, TCP sequence (SEQ) numbers, and acknowledgement (ACK) numbers.
When the data is received by the distant node and drained from the receive buffer by the
application process, an ACK is sent to the sender, to tell the sender that the data was
successfully received and that it can be safely flushed from the send buffer. The data is then
removed from the recipient's receive buffer, and after the ACK is received, the data is removed
from the sender's transmit buffer.
TCP Connection Establishment
Attempt connection (TCP SYN): source port, destination port, sequence number, window size,
checksum, options (MSS, SACK, and so on)
Acknowledge connection and attempt the reverse connection (TCP SYN, ACK): source port,
destination port, sequence number, acknowledgement number, window size, checksum,
options (MSS, SACK, and so on)
Acknowledge connection (TCP ACK): sequence number, acknowledgement number, window
size, checksum, options (MSS, SACK, and so on)
Before transmitting TCP-based application data, two communicating nodes must first establish
a connection through a process called a three-way handshake. The establishment of the
connection determines the transmission and acknowledgement characteristics of the two
communicating nodes.
The synchronize/start (SYN) message is used to initiate the connection. This SYN message
includes information such as:
Source TCP port: This is a unique port number on the sender that logically maps to an
application process on the sender.
Destination TCP port: This is a unique port number on the recipient that logically maps to
an application process on the receiver.
SEQ number: SEQ numbers provide a cumulative count of the data sent on the connection,
starting from an initial value chosen at the beginning of the connection.
Window size: This figure is the advertised TCP window size supported by the client, that
is, the amount of data the client can safely hold in receipt.
Checksum: The checksum is a 16-bit summation of all data being transmitted. It is used to
verify data integrity at the receiver.
Options: Options are additional TCP settings, such as segment size, selective
acknowledgement, and window scaling.
For example, if a client is talking to a web server using HTTP, the source port would likely be a
random high-value port number (greater than 1024) and the destination port would be 80, the
well-known port for HTTP.
The SYN ACK packet is used to respond to the SYN packet to establish connectivity in the
reverse direction from the receiver to the sender. Characteristics of the SYN ACK packet
include the following:
The SEQ number is incremented based on the amount of data received
The ACK number acknowledges receipt of the SYN packet
After the SYN ACK packet is received by the original sender, an ACK packet is returned to
confirm the connection.
After the connection is established, the application processes on the two connected nodes can
exchange application data. For example, an HTTP/1.1 GET request can be issued for a web
server object.
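In practice, the handshake is performed by the operating system: connecting a socket triggers the SYN / SYN-ACK / ACK exchange described above, and the kernel manages sequence numbers, windows, and options. A minimal sketch of the HTTP example (the hostname and the use of Connection: close are illustrative):

```python
import socket

def http_get(host: str, port: int = 80, path: str = "/") -> bytes:
    """Open a TCP connection (three-way handshake done by the OS) and issue
    an HTTP/1.1 GET, returning the raw response bytes."""
    with socket.create_connection((host, port), timeout=5) as s:
        request = (f"GET {path} HTTP/1.1\r\n"
                   f"Host: {host}\r\n"
                   "Connection: close\r\n\r\n")
        s.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:          # peer closed: response is complete
                break
            chunks.append(data)
    return b"".join(chunks)

# Example (requires network access): http_get("example.com")
```

The ephemeral source port and the well-known destination port 80 discussed above are both visible in the connection this code creates.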
Maximum Window Size
The figure shows a window of four outstanding segments in flight.
A window is used for managing the data that is in-flight and the data to be exchanged. A TCP
window is the amount of outstanding (unacknowledged by the recipient) data a sender can
transmit before it gets an acknowledgment back from the receiver saying that data is being
received.
For example, if a pair of hosts are exchanging data using a TCP connection that has a TCP
maximum window size (MWS) of 64 KB, the sender can only send up to 64 KB of data. It must
then stop and wait for an acknowledgment from the receiver saying that some or all of the data
has been received. If the receiver acknowledges that all the data has been received, then the
sender is free to send another 64 KB. If the sender receives an acknowledgment from the
receiver that it received the first 32 KB (which happens when the second 32 KB segment is still
in-transit or lost), the sender can only send another 32 KB without exceeding the maximum
limit of 64 KB of unacknowledged outstanding data.
The primary benefit of TCP windows is congestion control. The network connection consists of
hosts, routers, switches, and associated physical links, and usually has a bottleneck somewhere
that limits the speed of data passage. Bottlenecks cause transmissions to occur at a rate that the
network is not prepared to handle, often resulting in data that is lost in-transit. The TCP
window attempts to throttle the transmission speed down to a level where congestion and data
loss do not occur.
The challenge with the MWS is its relatively small size, which is commonly 64KB or 256KB.
On long fat networks (LFNs), or elephants, the small window size limits throughput because
TCP does not allow the communicating nodes to drive available network capacity to higher
levels of utilization.
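The window-imposed ceiling follows directly from dividing the window by the round-trip time; the window and RTT values below are illustrative:

```python
def window_limited_bps(mws_bytes: float, rtt_s: float) -> float:
    """Max TCP throughput when the window, not the link, is the bottleneck.

    The sender can have at most one window of data in flight per round trip.
    """
    return mws_bytes * 8 / rtt_s

# A 64KB window over a 100 ms RTT path caps throughput near 5.2Mbps,
# no matter how fast the underlying link is.
print(f"{window_limited_bps(64 * 1024, 0.100) / 1e6:.2f} Mbps")
```

On an LFN this ceiling, not the link rate, is what the communicating nodes actually experience, which is why window scaling matters.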
TCP Acknowledgements
The figure shows a single ACK returned after a full window of four segments is received.
Acknowledgements are sent after an entire TCP window has been received and the receiver's
application process has removed the data from the receive buffer. Standard TCP
implementations acknowledge entire windows, which can be smaller than the MWS. If any of
the data is lost in-transit, the entire window must be retransmitted, which leads to poor
performance in high packet loss environments or on low-speed links.
TCP Window Management
The receiver advertises a 4KB receive buffer. The exchange in the figure proceeds as follows:

Application performs 2KB write: sender transmits seq=1, ack=1, win=4096 with 2048B of
data; receiver replies seq=1, ack=2048, win=2048
Application performs 2KB write: sender transmits seq=2049, ack=2048, win=2048 with 2048B
of data; receiver replies seq=2049, ack=4096, win=0, and the sender is blocked
Application reads first 2KB: receiver sends seq=2049, ack=4096, win=2048
Application performs 1KB write: sender transmits seq=4097, ack=4096, win=2048 with 1024B
of data
The figure shows how TCP buffers and the TCP window are used to throttle the amount of data
that can be exchanged between two application processes.
In this example, the receiver has a receive buffer MWS of 4KB. As the client application writes
2KB, starting at sequence 1, the 2KB of data are placed into the receive buffer, leaving 2KB of
empty space. The 2KB of data does not leave the receive buffer until requested by the
application process. After the data is placed safely in the receive buffer, an ACK is sent
with the value of 2048 (2KB of data received) and a window (WIN) value of 2048, specifying
that 2048 bytes remain empty in the receive buffer. At this point, the client can safely send 2KB
more of data.
The client sends another 2KB of data, this time with a SEQ of 2049, specifying that the new
2KB of data is appended to the end of the previous 2048 (SEQ numbers identify the
placeholder for data in-transit). When the receiver receives this 2KB of data, it is added to the
receive buffer, leaving no free space. An ACK is sent with a value of 4096 (4096 bytes have
been placed into the receive buffer) with a WIN value of 0 (no more data can be received). At
this point, the data has been safely placed in the receive buffer, but the client is blocked from
transmitting more data, because the receiver has nowhere to put it: the receive buffer is
full.
The receiver application then reads 2KB of data from the receive buffer. This 2KB of data is
then freed from the receive buffer, and an ACK is sent to the client with a value of 4096
(acknowledging all 4096 bytes) and a WIN value of 2048. The receiver can handle 2KB more
of data in the receive buffer.
The client then sends another 1KB of data, with a SEQ number of 4097. The incremented
sequence number specifies the placeholder for the new data, which is appended to the end of
the first 4096.
If data is lost and retransmitted, the SEQ number is used to identify the original location of the
data being transmitted.
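The buffer accounting in this walkthrough can be modeled in a few lines. This is a toy model of the advertised-window arithmetic only; real TCP carries the window in every segment alongside the sequence and acknowledgement numbers shown above:

```python
class ReceiveWindow:
    """Toy model of the receive-buffer accounting in the walkthrough."""

    def __init__(self, size: int):
        self.size = size  # total receive buffer, e.g. 4096 bytes
        self.used = 0

    def receive(self, nbytes: int) -> int:
        """Buffer arriving data; return the WIN value advertised in the ACK."""
        assert nbytes <= self.size - self.used, "sender exceeded the window"
        self.used += nbytes
        return self.size - self.used  # remaining buffer space

    def app_read(self, nbytes: int) -> int:
        """Application drains the buffer, reopening the window."""
        self.used -= min(nbytes, self.used)
        return self.size - self.used

rw = ReceiveWindow(4096)
print(rw.receive(2048))   # first 2KB write  -> advertises WIN 2048
print(rw.receive(2048))   # second 2KB write -> WIN 0, sender blocked
print(rw.app_read(2048))  # application reads 2KB -> WIN 2048 again
```

The WIN 0 step is exactly the "sender blocked" point in the figure: the sender may not transmit again until the application drains the buffer.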
Bandwidth Delay Product
The figure contrasts a link with a 10 ms RTT against a link with a 200 ms RTT.
Every network is able to store a certain amount of data in-transit. This data is constantly in
motion, and it takes time to travel from the entry point of the network to the exit point. This
storage capacity is called the Bandwidth Delay Product (BDP), which defines how much data
can be in-transit on a link at any given time. The BDP is calculated by converting the rate of
transmission to bytes (divide by eight) and then multiplying the resultant value by the latency
of the link. Multiplying by the one-way latency defines how much data can be sent by one node
to another node. Multiplying by the round-trip latency defines how much data can be
exchanged between the two nodes.
For example, an Optical Carrier-3 (155Mbps) link with 10 ms of latency has a BDP of
(155Mbps/8 = 19.375MBps * 10 ms) = 193KB. This means that a maximum of 193KB of data
can be in-flight on the network link at any given time. Given that the link is only 10 ms long,
data exits the
network quickly, allowing higher levels of throughput that are closer to the maximum link
capacity. For larger and longer-distance links, as shown in the bottom figure, the BDP
continues to increase to a capacity that is difficult for a pair of communicating nodes to fully
leverage.
When the BDP of the network is higher than the MWS of the communicating nodes, a
percentage of the network capacity is left unused.
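The BDP arithmetic from the example, plus the comparison against a 64KB maximum window, can be sketched directly:

```python
def bdp_bytes(link_bps: float, rtt_s: float) -> float:
    """Bandwidth Delay Product: rate in bytes/s multiplied by the delay."""
    return link_bps / 8 * rtt_s

# OC-3 at 10 ms, per the example in the text (~193KB in flight):
oc3 = bdp_bytes(155e6, 0.010)
print(f"{oc3:.0f} bytes in flight")

# When the BDP exceeds the MWS, the leftover capacity goes unused.
mws = 64 * 1024
utilization = min(1.0, mws / oc3)
print(f"A 64KB window fills about {utilization:.0%} of this link")
```

Roughly two-thirds of this link sits idle for a 64KB-window sender, and the fraction only worsens as the delay grows toward the 200 ms case in the figure.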
1-48 Cisco Wide Area Application Services Technical Training (WAAS) v4.0.7 2007 Cisco Systems, Inc.
Challenge
The challenge with standard TCP implementations is their inability to handle the network
connections that are available today. TCP/IP was developed almost 20 years ago and the
network landscape has changed significantly:
Longer distance links, including satellite, are more frequently used, with higher levels of
packet loss.
Larger levels of oversubscription and congestion are common in broadband environments.
High capacity links now span large geographic boundaries.
Cisco WAAS TFO uses standards-based TCP optimizations to remove TCP as a barrier to
application performance over the WAN.
Transport Flow Optimization
This topic provides a high-level overview of Cisco WAAS TFO, describes the optimizations
provided, and outlines the process of implementation.
WAAS TFO provides significant optimizations for TCP connections that traverse the WAN.
These optimizations enable applications to better use available network capacity and shield the
communicating nodes from problems that are encountered in the WAN such as packet loss and
congestion.
Starting a TCP Proxy after auto-discovery allows communicating nodes to experience LAN-
like TCP behavior through local acknowledgement and TCP handling. Reliable delivery is still
guaranteed through the Wide Area Application Engine (WAE). After the TCP proxy is started,
TFO optimizations can be applied, along with other optimizations such as DRE.
Note Cisco WAAS uses TFO as the data path for optimized connections. The TCP proxy service
is the foundation for other optimizations that are applied by the system. The TCP proxy
service is automatically restarted when clearing the DRE cache, causing connections and
sessions to be broken. Clients and servers automatically regenerate these connections.
Without TCP Proxy
If no TCP proxy infrastructure is in place, no LAN TCP features are available over a WAN. In
this example, each ACK sent has to traverse the WAN completely. For a 100 ms WAN, after 10
ACKs, TCP has added up to 1 second of additional overhead to the application experience. If a
packet is lost, the entire window of data must be retransmitted after the timeout period has
passed.
Also note the slow ramp-up of throughput after the establishment of the connection.
Throughput is severely limited until after the connection enters congestion avoidance. This is
called the slow-start phase.
TCP Proxy and TFO
(Figure: with WAEs deployed, the client and the server each receive LAN TCP behavior. Across
the WAN, the WAEs apply window scaling, large initial windows, congestion management, and
improved retransmit; no end-node retransmission is necessary because packet loss is handled
by the WAE.)
A successful completion of automatic discovery initializes the TCP proxy service. TCP proxy
is a mechanism by which WAEs are inserted in the TCP connection to provide localized
handling of TCP buffering and control. With a TCP proxy in place, each WAE provides a TCP
termination and generation point. This allows the WAE to locally acknowledge TCP data to
keep the nodes communicating. By using larger buffers and WAN optimizations, the WAE can
more efficiently use the WAN and more effectively handle situations such as packet loss. Due
to the large windows, selective acknowledgements, and advanced congestion avoidance
functions of Cisco WAAS TFO, packet loss has minimal impact on overall throughput, and the
vast majority of WAN latency associated with TCP control handling is mitigated.
WAAS TFO is designed to overcome challenges common to standard TCP implementations:
Window scaling: Scaling allows you to capitalize on available bandwidth.
Selective acknowledgement: This mechanism provides efficient packet loss recovery and
retransmission mechanisms.
Large initial windows: A larger size maximizes transmission after connection
establishment.
Advanced congestion management: This function provides an adaptive return to
maximum throughput upon encountering congestion based on packet loss history, thereby
allowing bandwidth scalability and fairness.
Link Utilization and MWS, BDP
The MWS determines the maximum amount of data that can be in-transit and unacknowledged
at any given time. The BDP (Bandwidth-delay product) defines the amount of data that can be
contained within a network at any given time:
If MWS > BDP, then the application is not window-limited and can fill the pipe.
If BDP > MWS, then the application cannot fully use the network capacity and cannot fill
the pipe.
MWS does not account for application-layer (L7) latency such as that experienced with
protocol-specific messaging.
Window Scaling, MWS, and BDP
Without window scaling, high-BDP networks cannot be fully used by nodes that have an MWS
smaller than the BDP. Significant improvement can be achieved by virtually scaling the
TCP MWS and handling TCP on the WAE. In this manner, the communicating nodes can fully
use the available WAN capacity.
Link Utilization After Window Scaling
WAAS TFO window scaling is based on RFC 1323 and scales the TCP window up to 2MB to
overcome the difficulty of filling long fat networks (LFNs). Window scaling applies a binary
shift to the value supplied in the window field. For instance, if the advertised window is
64KB and a window scale factor of 2 (a two-bit binary shift) is employed, the effective TCP
window is 256KB.
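The shift arithmetic can be sketched as follows (illustrative only; scaled_window is our own helper, not a WAAS API):

```python
def scaled_window(advertised: int, scale_factor: int) -> int:
    """RFC 1323: the effective window is the 16-bit advertised window
    left-shifted by the negotiated window scale factor."""
    return advertised << scale_factor

# A 64 KB advertised window with a scale factor of 2 yields 256 KB:
print(scaled_window(64 * 1024, 2))  # 262144, that is, 256 KB

# A scale factor of 5 reaches the 2 MB ceiling mentioned above:
print(scaled_window(64 * 1024, 5))  # 2097152, that is, 2 MB
```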
Selective Acknowledgement
With standard TCP implementations, ACKs are sent to acknowledge receipt of the entire
window of data. If any piece of that data is lost, the entire window must be retransmitted by the
sender. In cases of high-latency or low-bandwidth links, the propagation delay of the
acknowledgement could be very large, and the amount of data that needs to be retransmitted
could also be quite large. This combination of factors leads to degraded application
performance.
With standard TCP, the loss of a single segment cuts application throughput in half, because
the congestion window is reduced by fifty percent. Add in the increased management overhead
and a high-latency network, and TCP becomes a barrier to application performance over the
WAN.
Selective Acknowledgement (Cont.)
Cisco WAAS uses selective acknowledgement (SACK) and extensions to minimize the amount
of data that must be retransmitted in the case of data loss. With selective acknowledgement, the
recipient is able to stream acknowledgements back to the sender for segments of data that have
been successfully received. These SACKs also free up capacity of the window to allow the
sender to continue transmission. If a segment is lost, only that segment is retransmitted. Add
Cisco WAEs to the path to handle the retransmission, and the communicating nodes never
know that a packet was lost.
SACK allows a receiver to identify the blocks of data within a window that have been
received. The sender is required to retransmit only the missing blocks. SACK is defined in
RFC 2018.
Forward acknowledgement (FACK) is an extension to SACK that is used to aggressively
request retransmission of blocks that are missing from within a window after later blocks have
been received.
Duplicate SACK (DSACK) is another extension to SACK. DSACK allows a receiver to notify
the sender that duplicate blocks of data have been received.
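The retransmission savings can be illustrated with a small sketch (our own simplification; plain byte offsets rather than real TCP state): given the SACK blocks a receiver has reported, the sender retransmits only the gaps rather than the entire window.

```python
def missing_ranges(window_start: int, window_end: int, sack_blocks) -> list:
    """Return the byte ranges the sender must retransmit, given the
    (start, end) SACK blocks the receiver has already acknowledged."""
    gaps, cursor = [], window_start
    for start, end in sorted(sack_blocks):
        if start > cursor:
            gaps.append((cursor, start))   # a hole before this SACK block
        cursor = max(cursor, end)
    if cursor < window_end:
        gaps.append((cursor, window_end))  # tail of the window not yet SACKed
    return gaps

# One lost segment in a 3000-byte window: only bytes 1000-2000 are resent,
# instead of the entire window as with standard TCP.
print(missing_ranges(0, 3000, [(0, 1000), (2000, 3000)]))  # [(1000, 2000)]
```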
Cisco WAAS Large Initial Windows
TCP slow-start is a mechanism that initially throttles a connection until it can determine the
optimum window size to allocate based on network conditions. With every successful
roundtrip, the congestion window is doubled, starting from a single segment size (1460 bytes).
At this rate, it can take a large number of successful roundtrips before a connection reaches the
maximum window size, or a packet is lost, and the connection is transitioned into congestion
avoidance mode. Congestion avoidance mode allows the connection to operate at high levels of
throughput as long as congestion or packet loss is not encountered.
Short-lived connections that complete in a small number of roundtrip exchanges are typically
starved of bandwidth because they do not live long enough to exit the slow-start phase.
Cisco WAAS Large Initial Windows (Cont.)
Cisco WAAS Large Initial Windows increases the initial congestion window from one segment
(1460 bytes) to three segments (4380 bytes) to help connections exit slow-start more quickly.
With each successful roundtrip, the congestion window (cwnd) is still doubled, but the
starting size is three times the unoptimized starting size. This feature allows connections
to more quickly take advantage of available WAN bandwidth during the congestion avoidance
phase.
Note The congestion window is the amount of data that can be outstanding and unacknowledged
in the network at any given time between two connected nodes. It is also referred to as the
number of segments that can be sent per roundtrip.
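The benefit can be estimated with a quick sketch (our own illustration, assuming ideal doubling per roundtrip with no loss):

```python
def roundtrips_to_reach(target_segments: int, initial_segments: int = 1) -> int:
    """Count the slow-start roundtrips needed to grow the congestion
    window from the initial size to the target size, doubling each RTT."""
    cwnd, trips = initial_segments, 0
    while cwnd < target_segments:
        cwnd *= 2
        trips += 1
    return trips

# Reaching a 64 KB window (about 44 segments of 1460 bytes each):
print(roundtrips_to_reach(44, 1))  # 6 roundtrips with a standard initial window
print(roundtrips_to_reach(44, 3))  # 4 roundtrips with Large Initial Windows
```

Two roundtrips saved is significant on a high-latency WAN, and for short-lived connections it can be the difference between finishing inside slow-start and reaching useful throughput.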
Standard TCP Congestion Avoidance
(Figure: segments per round trip (congestion window) over round trips 1 through 32. During
exponential slow-start, cwnd doubles with each RTT, so throughput is low during this period.
On packet loss, cwnd is dropped by 50 percent and the connection enters linear congestion
avoidance, adding +1 cwnd per ACK, so the return to maximum throughput can take a very long
time.)
Cisco WAAS TFO Congestion Avoidance
(Figure: Cisco WAAS TFO applies an adaptive increase to cwnd, cwnd = cwnd + f(cwnd, history),
and decreases cwnd by only 1/8 on packet loss versus 1/2 with standard TCP.)
Cisco WAAS employs the Binary Increase Congestion (BIC) congestion avoidance system as
part of TFO to improve throughput and enable bandwidth scalability in environments that
experience higher levels of packet loss. WAAS TFO, and specifically the BIC congestion
avoidance system, maintains a history of packet loss encountered in the network to determine
the best rate at which to return to maximum throughput. TFO uses a binary search to adaptively
increase the size of the congestion window, resulting in a stable and timely return to higher
levels of throughput. Unlike standard TCP, TFO decreases the congestion window by only one-
eighth (rather than one-half) when packet loss is encountered, to mitigate the majority of the
performance penalty.
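A highly simplified sketch of these two behaviors, in units of segments (this illustrates the idea only and is not Cisco's BIC implementation):

```python
def tfo_on_loss(cwnd: int) -> int:
    """On packet loss, TFO reduces cwnd by one-eighth,
    versus the one-half reduction of standard TCP."""
    return cwnd - cwnd // 8

def bic_increase(cwnd: int, w_max: int) -> int:
    """Binary-search increase: grow cwnd toward the window size at which
    loss last occurred, halving the remaining distance each step."""
    if cwnd < w_max:
        return cwnd + max(1, (w_max - cwnd) // 2)
    return cwnd + 1   # past the old maximum, probe for new capacity

print(tfo_on_loss(64))       # 56: a modest penalty on loss
print(bic_increase(56, 64))  # 60: a quick climb back toward the old maximum
```

The binary search converges rapidly while cwnd is far from the loss point and cautiously as it approaches, which is what gives the stable, timely return to throughput described above.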
Comparing TCP and TFO
Comparing TCP to TFO with advanced congestion management shows that TFO offers a more
timely return to maximum levels of throughput. The area shown in red in the figure is
potentially unused network capacity that TFO is able to fully use. With WAAS TFO and
advanced congestion management, the WAN link is used efficiently to its maximum potential.
Advanced Compression
This topic examines the need for WAN compression and describes the benefits of Cisco WAAS
advanced compression, including DRE.
As applications grow to require additional network bandwidth resources, the network itself can
become a bottleneck, especially if insufficient bandwidth is available to support the
applications. Network compression functions compress the data in transit and then rebuild the
original messages at the other side of the link. This compression minimizes the amount of data
that must traverse the network while still maintaining message and data validity and integrity.
Some data sets are not good candidates for compression, unless adaptation is first performed:
Previously-compressed data: No additional compression can be provided by
computational compression. Previously-compressed data provides a good opportunity for
data suppression.
Previously-encrypted data: Minimal additional compression is provided by computational
compression. This data also provides a good opportunity for data suppression, as long as
session-based encryption is not being used.
Adaptation could include the local termination of encryption, the application of compression,
followed by a re-encryption of the transmission.
Data Transfer Without Compression
(Figure: without compression, full-size transfers cross the WAN and cause congestion.)
Data Transfer With Compression
Network compression not only minimizes the amount of data that must use the network, but
also allows for a larger degree of throughput as experienced by the recipients of the
transmission. As traffic enters the WAE and is DRE encoded and compressed, it takes up less
network capacity. When the encoded and compressed message reaches the other side of the
network, it is decompressed and decoded. This process allows traffic flows to take advantage of
the high bandwidth LAN on the other side of the link.
Cisco WAAS Advanced Compression
(Figure: peer WAEs each apply DRE and LZ compression over a synchronized context.)
Cisco WAAS employs two distinct forms of compression, each addressing different bandwidth
requirements.
The first form of compression is DRE. DRE is a process that allows WAEs deployed on either
side of the WAN to maintain a history of previously-seen data segments from TCP messages,
based on the configured application policy. By keeping a history of previously-seen data
segments, the WAE can check to see if a piece of data being transmitted has been seen before
or not. Assuming the data is a repeated segment, instructions can be sent to the distant device,
defining how to rebuild the data segment, rather than sending the data segment itself. DRE
maintains data and message integrity without compromise by ensuring that the rebuilt message
maintains one-hundred percent coherency with the original message. DRE can dramatically
reduce network bandwidth consumption by ensuring that data only traverses the network once
within a history. With DRE, as much as 100:1 compression can be realized.
The second form of compression is persistent Lempel-Ziv (LZ) compression, which is a
standards-based compression method found in many applications, including zip technologies.
The Cisco WAAS implementation of LZ leverages a long-lived session-oriented history per
TCP connection to better improve compression ratios. Persistent LZ extrapolates patterns and
redundancy within the dataset being transferred, using a smaller connection-specific history
than DRE, and is useful in providing compression for application data that has never been seen
by DRE. With persistent LZ, as much as 10:1 compression can be realized, and persistent LZ
can even compress encoded messages that are generated by DRE.
DRE Overview
4. The Edge WAE adds the chunks and signatures to its local DRE database.
5. The Edge WAE forwards the file to Client A.
DRE Overview (Cont.)
6. Client B requests the same file. However, the file was changed on the server after the
previous transaction.
7. The Core WAE chunks the file, and locates signatures in the local DRE database for the part
of the file that was not changed.
8. The Core WAE sends signatures with no data for those chunks that are in the DRE database,
and sends data with new signatures for the remaining chunks.
9. The Edge WAE receives the signatures and data. For signatures received without data, the
WAE retrieves the matching chunks from local DRE database. Chunks received with
signatures are added to the database.
10. The Edge WAE reassembles the file and forwards it to Client B.
DRE Block Diagram
DRE and LZ compression are applied based on the configuration of the application traffic
policy that identifies the incoming flows. As data is buffered in the WAE's TCP proxy, it is
shifted into DRE, where the following processes are applied:
Chunk identification: This process breaks the data set into smaller, more manageable
chunks by using a fingerprinting function.
Signature generation: All identified chunks are assigned a signature.
Signature matching: Generated signatures are compared against the existing DRE context,
which is a shared database between two connected WAE peers. If redundant signatures are
identified, the chunk is removed and the signature remains, instructing the distant WAE to
replace the signature with the associated chunk from its context. If redundancy is not
identified, the chunk and signature are added to the local context, and remain in the
message to instruct the distant WAE to also update its context.
The output of this function is an encoded message which can then be compressed by LZ and
sent back to the TCP proxy for forwarding across the WAN.
When received by the other WAE, the message is first uncompressed (LZ) and then passed to
DRE to identify signatures and chunks. Signatures received with no data are replaced by data
segments from the DRE context, and chunks received with signatures are added to the local
DRE context. After the message is rebuilt and verified as accurate, the message is forwarded
toward the destination.
Following is a list of DRE terminology:
Signature: This is a 5-byte label that references a chunk of data, generated per identified
chunk.
Chunk: A chunk is a segment of data identified within a transmission. It is identified by a
fingerprint function.
Fingerprint: The fingerprint function is a sliding window operation used to identify chunk
boundaries within a transmission.
FIFO: This term identifies a first-in-first-out database for chunks that have been identified
between communicating DRE peers (WAEs).
FIFO clock: This term identifies the timestamp associated with FIFO data. The FIFO clock
is used to synchronize communicating DRE peers (WAEs).
DRE and persistent LZ compression are configurable through application policies. The
application policy is negotiated between two WAEs upon establishment of the TCP proxy. Part
of this exchange includes synchronization of the DRE database, which is discussed later.
Fingerprinting and Chunk Identification
(Figure: a sliding window moves through the data until a boundary is identified. A single
pass identifies chunks at multiple levels: basic chunks and chunk aggregation (nesting).
After chunks are identified and each is assigned a 5-byte signature, DRE begins pattern
matching, looking for the largest chunks first and for smaller chunks only if necessary.)
The encode function is applied to traffic entering a WAE that is configured to leverage DRE.
TCP proxy temporarily buffers data to provide DRE with a large amount of data to analyze. Up
to 32KB of data can be analyzed at one time. The fingerprinting and chunk identification
process is accomplished in the following steps:
First, the encode function generates a 16-byte message validity signature in Message-
Digest algorithm 5 (MD5) format to be used by the distant WAE to validate that the
message, after decoding, is an identical match to the encoded message. This step ensures
data and message integrity. If a hash collision is detected (that is, the same signature for
two different pieces of data), a synchronous instruction is sent to the peer to flush the
relevant entries in the DRE context. The calculation of this message validity signature also
includes device-specific key data to protect against hash vulnerabilities.
Second, the encode function identifies chunks, using a sliding window to analyze data to
locate content-based break-points within the data. All content between break-points is
considered a chunk.
Finally, a 5-byte signature identifier is generated for each chunk identified.
DRE attempts to match chunks on four different levels in a process called chunk aggregation:
Level-0 chunk: This is a basic chunk. Level-0 chunks can be identified for data segments
that are as small as 32 bytes, but are commonly found approximately every 256 bytes.
Level-1 chunk: Level-1 chunks are commonly found approximately every 1024 bytes, and
reference multiple level-0 chunks.
Level-2 chunk: Level-2 chunks are commonly found approximately every 4096 bytes, and
reference multiple level-1 chunks.
Level-3 chunk: Level-3 chunks are the largest chunk. Level-3 chunks are commonly found
approximately every 16K bytes, and reference multiple level-2 chunks.
DRE pattern matching and chunk identification is done efficiently, and only one pass is made
against the entire data stream being analyzed.
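A toy version of content-defined break-point detection is sketched below. This is an illustration only: the checksum here is a simple byte sum, not Cisco's actual fingerprint function, and the window and mask values are arbitrary.

```python
def chunk_boundaries(data: bytes, window: int = 16, mask: int = 0x3F) -> list:
    """Slide a window over the data and declare a chunk boundary wherever
    the window checksum matches a chosen bit pattern. Because boundaries
    depend only on content, identical data always breaks at the same
    places, even when the surrounding bytes change."""
    boundaries = []
    for i in range(window, len(data) + 1):
        checksum = sum(data[i - window:i])   # toy rolling checksum
        if checksum & mask == mask:          # boundary roughly every mask+1 bytes
            boundaries.append(i)
    return boundaries

# A window of fifteen 0x00 bytes plus one 0x3F sums to 63, matching the mask,
# so a boundary is declared at that offset regardless of what precedes it:
data = b"A" * 40 + b"\x00" * 15 + b"\x3f" + b"A" * 40
print(56 in chunk_boundaries(data))  # True
```

Because break-points are derived from content rather than position, an inserted change shifts or invalidates only the chunk (or the pair of chunks) around the edit, exactly as described above.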
A Fully Chunked Message
A fully-chunked message is shown in the figure. Notice that for the original data, many level-0
chunks are identified. Multiple level-1 chunks are identified, and each is a representation of all
of the level-0 chunks that make up the level-1 chunk. Multiple level-2 chunks are identified,
and each is a representation of all of the level-1 chunks that make up the level-2 chunk.
Multiple level-3 chunks are identified, and each represents all of the level-2 chunks that make
up the level-3 chunk.
Each identified chunk receives a 5-byte signature. For redundant data that has been
identified, the chunk is replaced with its signature. A pattern match on a level-3 chunk
replaces roughly 16KB of data with 5 bytes, approximately 3000:1 compression for that chunk.
Overall, DRE can provide a compression ratio from 2:1 to as much as 100:1.
Chunk Identification
After all of the level-0 through level-3 chunks are identified and signatures are assigned to
each, DRE begins to pattern match against the identified chunks. Given that the chunk
boundaries are found based on the content being transferred, the break-points are always
identified at the same location within the transmission. Changes to data are contained within a
chunk and do not affect neighboring chunks. If a change occurs on a chunk boundary, only the
two neighboring chunks are affected. Changes are isolated to the chunk or chunks where the
change was inserted:
Typical: A new chunk must be added to the local context and transmitted in full for each
chunk experiencing a change.
Worst case: A change inserted at the location of a chunk break-point only invalidates the
two adjacent chunks. The new chunk must be added to the local context and transmitted in
full.
After the message is fully chunked, DRE begins a top-down pattern match. It first searches for
matches on the largest chunks, and continues to smaller chunk sizes as necessary. Notice that as
each chunk is identified, the chunk of data is removed from the message and only the signature
for the data remains.
Chunk Identification (Cont.)
After the fully chunked message completes the pattern matching process, the resulting output is
an encoded message containing the following:
Signatures and chunks for non-redundant data to be added to the peer WAE's DRE context.
Signatures with no chunks for redundant data; the peer WAE inserts the data on behalf of
the chunk upon receipt.
The original 16 byte MD5 message validity signature, which is used by the peer WAE to
ensure that the rebuilt message is an identical representation of the original message.
The message is then passed to persistent LZ compression, based on the configured application
policy, for an additional layer of compression.
The decoding WAE first decompresses the message received using LZ compression, and then
begins a process of identifying:
Standalone signatures with no chunk attached
Signatures with attached chunks
Message validity signatures
After these components are identified, the following processes are performed:
Standalone signatures with no chunk attached are replaced by the corresponding chunk of
data contained in the local context.
Signatures with attached chunks are added to the local context and the signature is removed
from the message.
A new message validity signature is generated.
If a standalone signature is received and no matching chunk is stored in the local context, a
negative acknowledgement (NACK) is sent to the encoding WAE, indicating that the signature
and chunk must be resent. This allows the WAE to update its context. If a standalone signature is
successfully replaced with a chunk, an asynchronous ACK is sent to the other WAE to notify
that the decoding for that chunk was successful.
After the message is fully rebuilt, a new message validity signature is generated and compared
against the message validity signature that was sent by the encoding WAE. If the two
signatures match, the message is considered completely valid and is forwarded toward the
intended destination. If the message validity signatures do not match, a synchronous NACK is
sent to the encoding WAE indicating that the message was rebuilt using a chunk that was
inaccurate. This message instructs the encoding WAE to resend all signatures and chunks
related to that message to the receiving WAE to update its local context, and to ensure the
message can be rebuilt safely.
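The validity check can be sketched as follows. This is our own simplification: MD5 over the message plus a stand-in for the device-specific key data, with the key assumed to be shared by the WAE pair.

```python
import hashlib

DEVICE_KEY = b"example-device-key"   # hypothetical stand-in for device-specific key data

def validity_signature(message: bytes) -> bytes:
    """16-byte MD5 message validity signature, mixed with key material."""
    return hashlib.md5(DEVICE_KEY + message).digest()

def verify_rebuilt(original_signature: bytes, rebuilt_message: bytes) -> bool:
    """Decoding side: recompute the signature over the rebuilt message and
    compare. A mismatch triggers a synchronous NACK to the encoding WAE."""
    return validity_signature(rebuilt_message) == original_signature

sig = validity_signature(b"original payload")
print(verify_rebuilt(sig, b"original payload"))   # True: rebuilt correctly
print(verify_rebuilt(sig, b"corrupted payload"))  # False: resend all chunks
```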
DRE Synchronization
Upon connection establishment, DRE peers compare FIFO clock information from their
respective contexts. This includes the head and tail of the context, described as follows:
Head: The head is the oldest entry contained in the context, and the first to be evicted if
additional capacity is needed.
Tail: The tail is the newest entry contained in the context, and the last to be evicted if
additional capacity is needed.
FIFO clock timestamps are not related to actual system time; instead, they are relative to
the connection time itself. Generally, FIFO tail and head times are nearly identical, because
the context is maintained per connection. However, there are circumstances where these
timestamps might diverge, for example, when a Core context is partially or fully flushed to
support other, more active contexts, or when a disk failure is encountered.
FIFO clocks are synchronized as follows:
Upon connection establishment: An exchange of head and tail FIFO values occurs to
identify areas within the context that are still valid.
Upon eviction: Eviction causes synchronization to again exchange head and tail FIFO
values to identify areas within the context that are still valid.
Intermittently: Intermittent synchronization assumes that no eviction has taken place, and
DRE contexts continue to resynchronize every 5 minutes.
The DRE context is bidirectional and inherently cross-protocol: a single context is used
regardless of the direction of traffic flow. For example, the download of a new file
populates the context, which then provides significant compression when the same file travels
in the reverse direction, even over a different application protocol.
Successful Synchronization
In circumstances where the two communicating WAEs can successfully negotiate a range of
entries that is common to both, the area of the context containing those common entries is
maintained and used on both WAEs, because the signatures and data exist on both. Entries that
are outside of the negotiated range are flushed.
After the DRE contexts are synchronized, the two have identified areas from within the local
context that are still considered valid (data exists on both WAEs) and can be used for
transmissions involving this WAE pair. The areas within the context that cannot be safely
used are immediately flushed from the context.
Failed Synchronization
In circumstances where two communicating WAEs cannot negotiate a range of entries to
maintain, the negotiated FIFO clock value is set to zero, and both contexts are flushed.
From here, the WAEs must rebuild the context with new data. No redundancy can be found
until chunks and signatures are populated in the context.
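The head/tail comparison behind both outcomes can be sketched as follows (our own simplification of the negotiation, using plain FIFO clock values):

```python
def negotiate_range(head_a: int, tail_a: int, head_b: int, tail_b: int):
    """Return the FIFO-clock range of entries present in both peers'
    contexts. Entries outside the range are flushed. Returns None when
    there is no overlap, in which case both contexts are flushed entirely
    and must be rebuilt from new data."""
    low, high = max(head_a, head_b), min(tail_a, tail_b)
    return (low, high) if low <= high else None

print(negotiate_range(10, 100, 40, 150))  # (40, 100): common entries survive
print(negotiate_range(0, 10, 20, 30))     # None: failed synchronization
```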
Combining Cisco WAAS TFO and DRE
As shown in this slide, DRE can be combined with TFO to provide substantial increases in
overall network throughput. TFO helps transmitting nodes effectively fill the pipe. When
coupled with DRE, which applies high levels of compression, TFO fills the pipe with
compressed data, resulting in even higher levels of effective throughput.
Summary
This topic summarizes the key points that were discussed in this lesson.
Lesson 3
Application Acceleration
Technical Overview
Overview
This lesson discusses the application-specific acceleration capabilities provided by Cisco Wide
Area Application Services (WAAS) for the Common Internet File System (CIFS) protocol to
enable server and storage consolidation while maintaining end-user performance expectations.
Objectives
Upon completing this lesson, you will be able to explain how Cisco WAAS application-specific
acceleration improves performance for file and print protocols. This includes being able to meet
these objectives:
Identify the need for application-specific acceleration
Identify the acceleration capabilities provided by Cisco WAAS for the CIFS file services
protocol
Discuss print services capabilities of Cisco WAAS
The Need for Application Acceleration
This topic examines the need for application-specific acceleration and explains why standard
compression, flow, and data suppression optimizations alone are insufficient to ensure
high-performance access to centralized file servers over a WAN.
Many application protocols cannot be adequately optimized through simple compression and
transport optimizations alone. Application protocols are commonly developed in idealized
environments where the client and the server are located on the same LAN or are positioned
close to each other. As a result, application-induced or protocol-induced latency and
unnecessary data transfers hinder overall end-user performance.
CIFS is an example of a protocol that requires protocol-specific acceleration. By making a
portion of a file system available on the network, a node takes on the responsibility of
ensuring that all of the same file system semantics are maintained. These include:
Authentication: Authentication verifies that the requesting user, process, or node is a valid
entity. Authentication can be performed locally or through an authentication provider such
as Active Directory.
Authorization: Authorization ensures that the requesting user, process, or node has the
appropriate privileges to access the requested information.
Security: Security includes auditing and authentication, authorization and accounting
(AAA).
Access control and locking: These functions ensure that files or portions of files are
locked properly to enable collaboration and maintain data integrity.
Input and output operations: These operations include open requests, create requests,
close requests, read operations, write operations, seek, and find.
The Need for Application-Specific Acceleration (Cont.)
(Figure: a client/server message ladder showing the many sequential operations, such as write data and close file, required by the CIFS protocol.)
File system protocols are notoriously chatty and require a large number of ping-pong
operations. These operations are typically very small, highly uncompressible (commonly with a
zero-byte length), and must occur in sequence before the next process can begin.
Applying compression to communications between the client and server certainly minimizes
the amount of bandwidth consumed by each protocol message, and applying transport
optimizations to these communications improves the ability of each message to efficiently and
fully use available network capacity. However, many hundreds or thousands of messages must
still traverse the WAN in sequence.
The Need for Application-Specific Acceleration (Cont.)
(Figure: opening a 2-MB Word document exchanges over 1,000 messages; over a 40-ms RTT WAN, this equates to over 52 seconds of wait time before the document is usable.)
This example shows the impact of accessing a file over a WAN. With a 40-ms round-trip time
(RTT) WAN, opening a 2-MB Microsoft Word document requires the exchange of over 1,000
messages, the vast majority occurring in sequence. This process directly impacts the response
time of the application that the user is accessing: performance is hindered by application-induced
latency caused by the CIFS protocol.
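The wait time follows directly from the serialization of the exchanges. A minimal sketch of the arithmetic (the 1,300-message count is an illustrative value consistent with the 52-second figure, not a measured trace):

```python
def serialized_wait_seconds(messages, rtt_seconds):
    # Each sequential request/response exchange costs one full round trip,
    # so total wait time grows linearly with both message count and RTT.
    return messages * rtt_seconds

# ~1,300 sequential CIFS messages over a 40-ms RTT WAN:
wait = serialized_wait_seconds(1300, 0.040)  # ~52 seconds before the document opens
```

Note that compression cannot help here: the cost is round trips, not bytes, which is why latency mitigation requires protocol-level acceleration.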
CIFS Acceleration
This topic describes the protocol-specific acceleration capabilities applied by Cisco WAAS to
provide LAN-like performance to centralized file servers.
(Figure: the Edge WAE combines a file cache with an advanced WAN optimization layer; the disconnected mode of operation allows R/O access to fully-cached content when the server is unreachable, DRE eliminates redundant network data, and TCP optimizations improve the protocol's ability to fully use the network.)
Cisco WAAS provides the most innovative and robust file services optimizations in the industry:
Application protocol interface for CIFS to handle protocol message workload at the Edge
to mitigate the impact of latency through message suppression, local response handling,
protocol caching, operation batching, message prediction, read-ahead, and pre-fetch
Application data and meta data cache to serve usable content at the Edge to mitigate
unnecessary data transfers when safe; validate-on-open to verify that file data has not
changed; global locking to ensure coherency and enable global collaboration scenarios
Network compression through DRE and Lempel-Ziv (LZ) persistent compression to
minimize bandwidth usage during data transfer situations
Transport Flow Optimizations (TFO) to improve utilization of the available network
capacity
Download the Wide Area File Services (WAFS) Benchmark Tool from Cisco Connection
Online (CCO). This utility stages data to a file server and then executes a script that makes calls
against these files, including OPEN, READ, WRITE, SAVE, and CLOSE operations. The
amount of time taken to perform these tests can then be saved to a comma-separated value
(CSV) file for viewing and graphing. The results shown in the figure represent the typical
performance improvement provided by Cisco WAAS in CIFS environments.
Cisco WAAS acceleration is safe and requires no coherency configuration. The level of
optimization applied is directly related to the type of file being opened and the state of the
opportunistic lock that is granted to the user. For single-user situations, Cisco WAAS can
employ the full breadth of its optimizations to dramatically improve performance. For
multi-user or no-oplock situations, Cisco WAAS can still safely apply many optimizations to
improve performance.
For example, when a single user is editing a Microsoft Word file, WAAS employs all available
optimizations to improve performance. The same applies to Microsoft Access database files and
other collaborative data sets: when a single user is working with an object, Cisco WAAS
employs the full optimization suite. When multiple users are working with the same file,
WAAS downgrades its level of CIFS acceleration as necessary to accommodate the multi-user
scenario and preserve data integrity, coherency, and safety.
Cisco WAAS is effective for the most common CIFS applications including Microsoft Office
(Word, PowerPoint, Excel), MS Access (and other database applications that use CIFS),
computer-aided design/computer-aided manufacturing (CAD/CAM) applications, My
Documents storage, desktop backup and restore, and other applications such as imaging.
WAAS CIFS Acceleration Services
(Figure: remote-office WAEs running the Edge Service connect over optimized connections, per connectivity directives, to a Core Cluster of WAEs running the Core Service in front of the centralized file servers and NAS.)
To perform acceleration for CIFS, a Wide Area Application Engine (WAE) must be configured
with the appropriate WAFS service:
WAFS Edge Service: The WAE with this service is deployed in close proximity to the
remote users that need high performance access to the centralized file server. This WAE
performs protocol optimizations, such as latency mitigation, read-ahead, operation
batching, message prediction, metadata caching, and safe file caching (with file coherency
validation). Note that this WAE always propagates messages that are critical to data
integrity and protocol correctness (authentication, authorization, file OPEN requests, file or
region LOCK requests, synchronous WRITE requests, flushes) to the origin server.
WAFS Core Service: The WAE with this service is deployed in close proximity to the file
servers that remote users need to access. The WAFS Core WAE performs termination of
protocol acceleration and provides aggregation and access to the centralized file servers. A
fan-out ratio of 50:1 is enforced on WAFS Core devices, meaning that a WAFS Core WAE
can support a maximum of 50 WAFS Edge devices.
A core cluster must be configured, even if only one WAFS Core WAE is deployed. Edge
WAEs are mapped to Core Clusters and not directly to the Core WAEs themselves. The
Core Cluster component enables high availability, load-sharing, and fail over between all
Core WAEs within the cluster.
A WAE configured with 2 GB of memory or more can be configured to run both the WAFS
Edge and WAFS Core services concurrently. Running these services concurrently on a
WAE that has less than 2 GB of memory is not supported or recommended. It is also
recommended that any device participating as a WAFS Core have a minimum of 2 GB of
memory.
Configure a WAE with both the WAFS Edge and WAFS Core services concurrently only in
scenarios where the WAE is both:
Adjacent to file servers that remote users want to access; in this case, configure the Core
service
Adjacent to users that want to access remote file servers that have nearby WAEs; in this
case, configure the Edge service
Data Caching and Integrity
(Figure: on an OPEN of FILE.DOC, the Edge WAE propagates AAA, OPEN, and LOCK messages across the network to the Core WAE and origin file server; the cached copy is served only after the request is approved, locked, and validated.)
The WAE running the Edge file services is able to cache previously seen files and
metadata so that a file can be served locally on the next request. Each time a file is
opened, the WAE validates the state of the file with the origin file server to make sure that
cached contents are identical to those stored on the file server. The file server remains
responsible for authentication, authorization, and file locking, and the WAE propagates the
related control messages synchronously using the WAN optimization capabilities of WAAS.
Prepositioning can also be used to prepopulate an Edge WAE file cache to improve the
performance of first user access.
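The validate-on-open behavior can be sketched as follows. This is a hypothetical illustration, not WAAS internals: the dictionaries and the single version value stand in for the WAE's cache structures and the metadata comparison against the origin server.

```python
def validated_open(path, cache, server):
    """Validate-on-open sketch (hypothetical structures). `cache` and
    `server` map path -> (version, data); the version check stands in
    for the metadata comparison the Edge WAE performs against the
    origin file server on every open. Authentication, authorization,
    and locking are always handled by the origin server itself."""
    srv_version, srv_data = server[path]
    entry = cache.get(path)
    if entry is not None and entry[0] == srv_version:
        return entry[1], "served from edge cache"
    cache[path] = (srv_version, srv_data)   # refresh missing/stale entry
    return srv_data, "fetched from origin"
```

A stale cached copy is silently refreshed on open, so the user always sees content identical to the origin server, which is why no coherency configuration is required.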
Integration with WAN Optimization
(Figure: CIFS transfers of FILE.DOC between the Edge and Core WAEs pass through transport flow optimization, the DRE cache, and LZ compression on each side of the WAN.)
The file services optimizations provided in Cisco WAAS can leverage the WAN optimization
components of WAAS in any scenario where information must be transferred over the WAN.
Such examples include:
Cache miss: The file does not exist in the cache; in this case, DRE suppresses redundant data
segments during the transfer based on the data contained in the DRE history, persistent
Lempel-Ziv (LZ) compression compresses any unsuppressed segments that must be
transferred, and TFO is employed to improve the behavior of TCP. For example, if the file
has been accessed previously but has been evicted from the file cache or otherwise
considered invalid (the file has changed on the server), data from the file might still be
resident in the DRE compression history. Otherwise, data from other files can be used if it
is identical to data patterns found in the file being transferred.
Cache invalidation: In this situation, a user accesses a file that has changed on the origin
server after the file was cached. Assuming the segments that comprise the file are in the
DRE cache, the transfer of the data to rebuild the new version of the file in cache happens
very quickly and consumes very little bandwidth.
Write operations: The process of saving a file is accelerated due to the DRE cache having
cached contents from the file. Only the changed data must traverse the WAN, assuming the
segments that comprise the file are still contained in the DRE cache.
Other control messages: Other control messages such as authentication, authorization, and
locking are accelerated through DRE, TFO, and LZ.
Note Clearing the DRE cache on a WAE also restarts the CIFS acceleration services. This
causes the sessions and connections that are established to be torn down. Clients
automatically regenerate these sessions and connections.
Intelligent File Prepositioning
(Figure: a preposition directive scheduled for 3 a.m. causes the Core WAE to fetch FILE.DOC from the file server and distribute it to the Edge WAE cache.)
File prepositioning allows an administrator to schedule the distribution of a set of files and
directories to an Edge cache. This function transfers the contents of the file to the edge cache
and also populates the DRE cache. This method significantly improves performance for the
first user access, and is helpful in situations such as:
Software distribution environments: These environments require the installation of
service packs, hotfixes, and antivirus definition files.
Software development environments and CAD/CAM: These environments typically
involve large packages containing many small or large files. Objects that have not
changed can be served from the local cache; objects that have changed are delivered in an
accelerated manner through DRE and LZ. Only a handful of these objects tend to change
on a daily basis, and the write-back of updated files is accelerated by the DRE cache.
Imaging environments: It is helpful to prepopulate an Edge device with any images that
are relevant to the operations of the day or week, for example, patient medical images.
Transfer of new images is accelerated through DRE, LZ, and TFO.
File Prepositioning Job Flow
The Core WAE scans the file server and filters the file set according to the directive criteria.
The results of the filtered scan are sent to the Edge WAE, and the Edge WAE again filters the contents of the cache to determine what is necessary to preposition.
The Edge WAE requests each required file or file segment from the Core WAE, which leverages DRE and LZ compression for the data transfer.
When the administrator defines a preposition directive in Central Manager, the following
processes are executed:
Step 1 The Edge WAE connects to the Core WAE and sends preposition parameters. These
include:
File server to connect to
Share to gather data from
Root path to search from
File pattern to attempt to match
Whether or not to search subdirectories from the root path
Time filters
File size filters
Step 2 The Core WAE performs the scan against the server based on the criteria provided
and returns a match list, representing the results of a filtered scan, to the Edge WAE.
Step 3 The Edge WAE compares the match list against the current state of the file cache
and creates a delta list. Any file that does not exist in the cache or has been changed
is added to the delta list.
Step 4 The Edge WAE then submits requests sequentially to the Core WAE based on the
files contained within the delta list.
Step 5 The Core WAE fetches the file and stores it in the preposition staging area. The
Core WAE then instructs the Edge WAE to download the file.
Steps 4 and 5 are repeated until the delta list has been exhausted or the limitation parameters of
the preposition directive have been met.
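Step 3 above, where the Edge WAE derives the delta list from the match list, can be sketched as follows. The dictionary shapes are assumptions for illustration, not the actual WAAS data structures:

```python
def build_delta_list(match_list, cache_state):
    """Hypothetical sketch of the Step 3 comparison: match_list maps
    each matched file path to its last-modified time as reported by
    the Core WAE scan; cache_state holds the same for the Edge WAE
    file cache. Any file missing from the cache, or whose timestamp
    differs from the server's, is added to the delta list for the
    sequential fetches of Steps 4 and 5."""
    return [path for path, mtime in match_list.items()
            if cache_state.get(path) != mtime]
```

Filtering against the local cache first means unchanged files never cross the WAN at all; only the delta list drives the DRE- and LZ-accelerated transfers.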
Note Prepositioning is only available for CIFS file servers. Prepositioning populates the DRE
cache on both WAEs involved in the transaction. This is useful when users access files that
have changed, as the rebuild of the cache is efficient and high-performance, assuming the
segments that made up the original transfer of the file still exist in the DRE context.
Prepositioning can also be used as a mechanism for warming the DRE context for other
applications, including web, email, video, database, and others.
Integration Example: Software Distribution
(Figure: a preposition directive distributes ServicePak.msi from the \\Pluto\SWUpdates share in the data center to the branch-office Edge WAE, so branch clients download the service pack locally from \\pluto.)
Prepositioning is a useful feature in environments where large amounts of data must be readily
available in the remote office. Using prepositioning helps consolidate not only file servers, but
also software distribution servers.
File Blocking
(Figure: a branch user's attempt to save SONG.MP3 to the data center file server is blocked at the Edge WAE.)
The Edge WAE can be configured to block operations against specific types of files, for
example, MP3 and JPG files. This ability helps control the types of files that can be stored
on the data center file server and minimizes the transfer of unsanctioned data, eliminating
wasted network resources. In many cases, this kind of functionality can also be enforced on
file servers or network-attached storage (NAS) devices.
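A file-blocking check of this kind reduces to an extension match against a configured deny list. The sketch below is purely illustrative; the actual policy is configured on the WAE, and the extension set shown is a hypothetical example:

```python
import os

# Hypothetical blocked-extension policy; in practice this list is
# administrator-configured on the Edge WAE.
BLOCKED_EXTENSIONS = {".mp3", ".jpg"}

def is_blocked(filename):
    # Match on the file extension, case-insensitively, since Windows
    # filenames are case-insensitive.
    return os.path.splitext(filename.lower())[1] in BLOCKED_EXTENSIONS
```

For example, `is_blocked("SONG.MP3")` is true while `is_blocked("report.doc")` is false, so the save operation in the figure would be rejected at the Edge before consuming any WAN bandwidth.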
File Services Flexible Integration Options
(Figure: non-transparent integration using published names: the Core WAE (Core1) fronts the \\Pluto\Demo share, and the Edge WAE (BR1Cache) publishes it in the branch as \\BR1-Pluto\Demo.)
The WAE file services optimizations can be integrated into the network in either a transparent
or non-transparent fashion.
With non-transparent mode, the WAE takes on the personality of a file server and appears as a
node on the remote office LAN. Client computers map network drives to shares that appear to
be located on the network name that the WAE publishes, so the WAE appears as a local file
server. This published name can also be added to the Microsoft Distributed File System (DFS)
as a link target, allowing organizations that have already deployed DFS for global namespace
capabilities to continue to leverage that investment. In the example shown in the figure, the
Pluto server in the data center has a share called Demo. When a Windows client browses the
network, the client sees a BR1-Pluto server (the WAE) with the same share when the local
WAE is configured to publish names. All of the same security, authorization, and auditing
settings apply, because WAAS simply passes through messages critical to security and data
integrity. The exported file server shares are available in the client's network neighborhood
just like any other server on the network, and multiple servers represented by a single WAE
appear as multiple servers to the user. When using name publishing, the published name can be
the original name with a prefix or a suffix, or an alias name.
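The naming options can be summarized in a small sketch. The helper function is hypothetical, written only to illustrate the prefix/suffix/alias choices described above:

```python
def published_name(origin_server, prefix="", suffix="", alias=None):
    """Hypothetical illustration of WAE name publishing: the published
    name is either a standalone alias, or the original server name
    decorated with a prefix and/or suffix."""
    return alias if alias else prefix + origin_server + suffix
```

In the figure's example, `published_name("Pluto", prefix="BR1-")` yields `"BR1-Pluto"`, the name the branch clients see; an alias such as `published_name("Pluto", alias="BranchFiler")` would hide the origin name entirely.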
In transparent mode, the WAE does not appear as a file server on the remote office LAN, and
clients map network drives to shares that are being made accessible by file servers or NAS
devices in the data center. In transparent mode, the WAE relies on the network, via Web Cache
Communication Protocol version 2 (WCCPv2) or Policy-Based Routing (PBR), to provide the
WAE with flows to optimize, and the WAE provides the optimizations transparently.
CIFS traffic that is accessed through WAAS file services optimizations receives the additional
benefit of DRE, TFO, and LZ. Additionally, CIFS traffic that does not use the file services
optimizations can still leverage the benefit of DRE, TFO, and LZ, but this does not provide
performance similar to using the file services optimization capabilities of WAAS.
R/O Disconnected Mode of Operation
WAAS is designed to be resilient during periods of WAN disconnection. Two types of WAN
outages are identified by Cisco WAAS, and each is handled in a different manner:
Intermittent disconnection: This term refers to periods of loss of WAN connectivity
lasting less than 90 seconds, in which case the WAFS Edge WAE buffers user operations.
If the WAN returns to service and the WAFS Edge WAE is able to successfully reconnect
to the WAFS Core WAE, the user sees no impact. The WAFS Edge WAE always attempts
to reconnect to the Core WAE to which it was originally connected. If the WAFS Edge
WAE is unable to reconnect to the original WAFS Core WAE, the session is broken and
regenerated. The user might see a disconnection from the file server in this case. If this
happens, the user can save data locally and merge the changes back into the document on
the file server after reconnection.
Prolonged disconnection: This term refers to periods of loss of WAN connectivity lasting
longer than 90 seconds, in which case the WAE enters a prolonged disconnection mode,
and all state is cleaned up on the Edge WAE and the Core WAE. At this point, the Edge
WAE can enter into read-only disconnected mode, assuming the file server is configured
for this mode in Central Manager. If this mode is not configured, the file server is no longer
accessible through WAAS, although offline files and folders within Windows can be
configured.
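The two-way classification above hinges on a single 90-second threshold. A minimal sketch of that decision (the function and constant name are illustrative; only the 90-second value comes from the text):

```python
INTERMITTENT_LIMIT_SECONDS = 90  # threshold described above

def classify_outage(outage_seconds):
    """Sketch of the WAN-outage classification: intermittent outages
    are buffered and masked from the user; prolonged outages tear down
    Edge and Core state, after which R/O disconnected mode may apply
    if it is configured for the file server."""
    if outage_seconds < INTERMITTENT_LIMIT_SECONDS:
        return "intermittent"
    return "prolonged"
```

So a 30-second WAN blip is classified as intermittent and absorbed by buffering, while a 2-minute outage forces the prolonged-disconnection cleanup path.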
WAASv4 provides a R/O disconnected mode of operation that allows users to have read-only
access to fully-cached files during periods of prolonged WAN disconnection. A series of
functions is implemented specifically to support servers and shares defined for R/O
disconnected access:
Aggressive file caching of files accessed on-demand (read-ahead and file read-ahead):
This function ensures that files are fully cached in the Edge WAE so they remain available
if the WAE enters a prolonged disconnection mode lasting more than 90 seconds.
Metadata and access list prefetch: This function ensures that access control information
is cached by the Edge WAE for the purposes of authorization during disconnection.
Preposition: This optional function is used to continually update the Edge WAE cache and
ensure that files are available in the Edge WAE cache if the WAE enters a prolonged
disconnection mode.
When WAAS file services enters prolonged disconnected mode, all CIFS sessions are
disconnected in both the remote office and the data center. If read-only disconnected mode is
not configured, the user does not have access to the file server. Windows Offline Files and
Folders can be configured as an alternative to read-only disconnected mode, providing users
with the ability to continue working during the period of disconnection, and resynchronizing
changes back to the origin file server when the connection is re-established.
If read-only disconnected mode is configured, the WAEs still enter prolonged disconnected
mode, which destroys user sessions in the remote office and in the data center. User sessions
must be restarted, which requires authentication with a domain controller that is reachable on
the network. The WAFS Edge WAE can then self-authorize the user based on cached
ACLs from the origin file server. After the user re-authenticates successfully, the Edge WAE
exports the server and acts on its behalf, providing read-only access to cached files and folders
based on the cached access control information. With read-only disconnected mode, the last set
of cached files and last set of cached ACLs is used. If the file server is unreachable through
WAAS for a long period of time and files or access control information has changed, the
contents in the Edge WAE will not be the same as those on the origin file server.
For read-only disconnected mode to work properly, the WAE must be configured for Windows
authentication and be successfully joined to the domain.
Integrated Print Services
This topic provides a high-level description of Cisco WAAS print services, which help to
minimize the administrative traffic that must traverse the WAN.
Cisco WAAS provides Windows-compatible print services to keep print jobs local to the
branch office. This keeps bandwidth-intensive print traffic from consuming precious WAN
capacity and ensures that print performance is predictable. Almost any printer is supported, as
the WAE does not require special software to support a particular printer due to the use of Raw
mode queues (client handles the rendering). Cisco WAAS printing provides printing to any user
regardless of whether the WAN is connected or disconnected, as it does not need to integrate
into a Windows domain. Cisco WAAS print services uses guest mode printing and does not
allow for the definition of access control for print queues. Each user is able to manage their
own jobs, and the print administrator is able to manage any job. This is identical to the behavior
of interacting with a Windows print server.
Cisco WAAS allocates 1 GB of storage to the PRINTSPOOL file system. This storage capacity
cannot be manually allocated and is shared by all of the print queues. There is no hard limit or
enforced maximum on the number of queues that can be defined; however, 20-25 queues is the
recommended number for adequate storage allocation per queue, with a recommended
maximum of 100 concurrent queues. Cisco WAEs acting as print servers support a maximum
of 100 concurrent printing users and a maximum of 500 concurrent print jobs, and the print job
timeout is 60 seconds. Cisco WAAS print services requires that the WAFS Edge service be
configured.
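The queue-count recommendation follows from dividing the fixed spool across queues. The arithmetic below is an illustrative average only; the spool is shared dynamically, not partitioned per queue:

```python
SPOOL_BYTES = 1 * 1024**3  # fixed 1-GB PRINTSPOOL file system, shared by all queues

def per_queue_mb(queue_count):
    # Average spool space available per queue, in MB.
    return SPOOL_BYTES / queue_count / 1024**2

# At the recommended 20-25 queues, each queue averages ~41-51 MB of spool
# space; at the recommended maximum of 100 queues, only ~10 MB per queue.
```

This makes the 20-25 figure concrete: beyond that, large jobs on busy queues start competing for spool space.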
Cisco WAAS print services eliminates the need to leave a server in the branch office to provide
local printing capabilities. Cisco WAAS print services leverages SAMBA and Common Unix
Printing System (CUPS) to provide print services to the branch office. By using Cisco WAAS,
Windows-compatible print services can remain in the branch, keeping print jobs from needing
to traverse the WAN.
WAAS print services rely on users configured through the command-line interface (CLI) for
print queue administration and print driver repository administration. No authentication or
authorization is required to print (users still attempt to authenticate and authorize, but every
request is answered as successful), so any user in the remote office can print to a print queue
that is configured on the WAE regardless of whether the WAN is connected or disconnected.
Cisco WAAS self-authenticates users that are attempting to print, and usernames are
maintained with the active job set. As such, a user can only modify or manipulate their own
jobs using standard Windows printer management tools. Users that authenticate to the print
server using administrative credentials can manipulate any job running on the WAE.
Acceleration for Centralized Print Services
(Figure: print jobs spooled from branch clients to a centralized data center print server traverse the WAN over optimized connections.)
Cisco WAAS also provides optimizations to improve performance for print jobs that traverse
the WAN from the branch office client workstation to a data center print server, as shown in the
slide. By employing CIFS acceleration, DRE, TFO, and persistent LZ compression, access to
centralized print servers and spooling of jobs consumes far less WAN bandwidth capacity and
completes in a much shorter period of time. While Cisco WAAS can replace branch print
servers by running print services on branch-deployed WAEs, IT organizations that prefer to
centralize print services find that Cisco WAAS provides a solution for that environment as
well.
Printer Driver Distribution
(Figure: print drivers are uploaded to Central Manager in the data center and then distributed over the WAN to branch-office Edge print server WAEs, where clients download the driver and print locally.)
Central Manager can be configured as a repository for print drivers. After it is configured as a
repository, Central Manager can be accessed directly and print drivers can be uploaded to it.
After the drivers have been uploaded, they can then be distributed to Edge print server WAEs.
Printer driver distribution need be employed only when branch office WAEs are acting as print
servers in remote offices.
Summary
This topic summarizes the key points that were discussed in this lesson.
Module Summary
This topic summarizes the key points that were discussed in this module.
In this module, you learned how Cisco WAAS can bridge the gap between centralized IT
infrastructures and the service needs of remote users. You learned how Cisco WAAS provides
powerful WAN optimization and application acceleration technologies to optimize application
performance over the WAN, and to allow IT to consolidate costly remote office infrastructure.
You also learned about the WAE appliance platform and router-integrated network module, and
the licensing model for Cisco WAAS.
Module Self-Check
Use the questions here to review what you learned in this module. The correct answers and
solutions are found in the Module Self-Check Answer Key.
Q1) What are three factors that hinder application performance over the WAN? (Choose 3.)
(Source: Applications and the WAN)
A) QoS
B) Latency
C) Bandwidth
D) Packet Loss
E) TCP fragmentation
Q2) Which network features can be crippled by technologies that use tunnels between
acceleration devices? (Source: Cisco WAAS Introduction)
A) QoS
B) Firewall policies
C) NetFlow
D) Access Lists
E) All of the above
Q3) Which Cisco WAAS hardware platform would be recommended for a data center
deployment supporting 7,500 concurrent optimized TCP connections? (Source: Cisco
WAE Performance and Scalability)
A) network module content engine (NM-CE)
B) WAE-512
C) WAE-612
D) WAE-7326
Q4) Why are application-specific optimizations needed to complement WAN optimization?
(Source: Application Specific Optimizations)
A) Application-specific latency
B) Bandwidth constraints
C) Sensitivity to WAN conditions
D) All of the above
Q5) What optimizations for file services protocols does WAAS provide? (Source: WAAS
Optimizations for File Protocols)
A) Protocol proxy
B) Data cache
C) Meta data cache
D) Intelligent read-ahead
E) Operation batching
F) Message prediction
G) Preposition
H) Disconnected mode
I) All of the above
Q6) How does WAAS handle a brief WAN outage of less than 90 seconds? (Source:
Disconnected Mode of Operations)
A) The user session is disconnected and reconnected immediately
B) The user session is disconnected and reconnected after the document change
C) The user session is disconnected and reconnected manually
D) The disruption is masked and the user is not impacted
Q7) What level of access does WAAS file server disconnected mode provide? (Source:
Disconnected Mode of Operations)
A) Read-only
B) Read-write
C) Local file server
D) Asynchronous write-back
Q8) What is a common usage scenario for WAAS file preposition? (Source: Using
Prepositioning)
A) Software distribution environments
B) CAD/CAM environments
C) Medical imaging environments
D) Software development environments
E) All of the above
Q9) How can Cisco WAAS optimize print services?
A) Local print services
B) Centralized driver distribution
C) Optimize remote print server access
D) All of the above
Q10) Which three of the following messages are exchanged during establishment of a TCP
connection? (Choose 3.) (Source: Introduction to TCP)
A) SYN
B) ACK
C) Finish (FIN)
D) Synchronize acknowledge (SYN-ACK)
Q11) What is the BDP of a DS3 with 100 ms of latency? (Source: Introduction to TCP)
A) 562 Kb
B) 450 Kb
C) 562 KB
D) 4.5 MB
E) 4.5 Mbps
Q12) What are the primary optimizations provided by Cisco WAAS TFO? (Source:
Introduction to TFO)
A) Window Scaling
B) Selective Acknowledgement
C) Large Initial Windows
D) Binary Increase Congestion Avoidance
E) All of the above
Q13) In what scenarios is window scaling effective? (Source: Window Scaling)
A) When MWS < BDP
B) When MWS > BDP
C) When MWS = BDP
D) When MWS > 2X BDP
Q14) After packet loss is detected, which TCP extension helps minimize the amount of data
that must be retransmitted? (Source: Selective Acknowledgements)
A) Selective Acknowledgement
B) Slow-start
C) Congestion avoidance
D) Large initial windows
Q15) What feature circumvents bandwidth starvation for mouse connections? (Source: Large
Initial Windows)
A) Selective Acknowledgement
B) Slow-start
C) Congestion avoidance
D) Large initial windows
Q16) Which statement accurately describes the purpose of WAAS TFO? (Source: Binary
Increase Congestion)
A) Improves packet loss
B) Improves congestion
C) Masks problematic WAN conditions
D) Compresses data
Q17) What is the purpose of WAN compression? (Source: Need for WAN Compression)
A) To minimize bandwidth
B) To consume bandwidth
C) To make links virtually larger
D) To improve latency
Q18) What is the function of DRE? (Source: WAAS Compression Architecture)
A) Suppress unnecessary TCP control messages
B) Suppress unnecessary TCP data messages
C) Suppress unnecessary User Datagram Protocol (UDP) control messages
D) Suppress unnecessary UDP data messages
Q19) Which three of the following are components of DRE? Choose three. (Source: Data
Redundancy Elimination)
A) Fingerprint and chunk identification
B) Pattern matching
C) LZ compression
D) Synchronization
E) TCP proxy
Q20) Which two of the following represent primary functions of a DRE encoder? Choose
two. (Source: DRE Encoding)
A) Identify chunks
B) Synchronize contexts
C) Eliminate redundancy
D) Verify message validity
Q21) Which two of the following represent primary functions of a DRE decoder? (Source:
DRE Decoding)
A) Rebuild messages
B) Synchronize contexts
C) Eliminate redundancy
D) Verify message validity
Q22) Which two of the following are forms of DRE data integrity verification? (Choose
two.) (Source: DRE Decoding)
A) Signature ACK/NACK
B) Message validity verification
C) Context synchronization
D) Shared context
Module Self-Check Answer Key
Q1) B,C,D
Q2) E
Q3) D
Q4) D
Q5) I
Q6) D
Q7) A
Q8) E
Q9) D
Q10) A,B,D
Q11) C
Q12) E
Q13) A
Q14) A
Q15) D
Q16) C
Q17) C
Q18) B
Q19) A,B,D
Q20) A,C
Q21) A,D
Q22) A,B
Module 2
Designing Cisco WAAS Solutions
Module Objectives
Upon completing this module, you will be able to design Cisco WAAS solutions, including
network design, interception method, and solution sizing. This includes being able to meet
these objectives:
Describe how Cisco WAAS is integrated into an existing network infrastructure
Describe how to size Cisco WAAS solutions based on performance, scalability, and
capacity sizing metrics
Lesson 1
Network Design, Interception, and Interoperability
Objectives
Upon completing this lesson, you will be able to describe how Cisco WAAS is integrated into
an existing network infrastructure. This includes being able to meet these objectives:
Describe how Cisco WAEs can be deployed physically in-path within the network to
provide a simple method of network integration
Describe how Cisco WAAS can be deployed in an off-path configuration
Describe WCCPv2 and explain how it functions as a network interception option
Describe how Cisco WAAS can leverage PBR as a network interception option
Describe how Cisco WAAS can be integrated in the enterprise data center with the
Application Control Engine (ACE) line card for the Catalyst 6500 series switch
Define how devices automatically discover each other in the optimization path
Discuss how Cisco WAAS handles situations where asymmetric routing is encountered
Discuss how Cisco WAAS maintains network transparency and minimizes impact to
features that rely on packet header visibility
Physical Inline Deployment
This topic describes how Cisco WAEs can be deployed physically in-path within the network
to provide a simple method of network integration.
Cisco WAE devices can be integrated into the network using either an off-path deployment
mechanism (relying on the network to redirect flows) or an in-path deployment mechanism,
whereby the WAE itself is physically in the network path of all traffic. A physical in-path
deployment is generally preferable only in environments where an off-path deployment is not
possible.
With physical in-path deployments, the Cisco WAE sits between the LAN switch and the next
adjacent device (generally a firewall or a router) and selectively optimizes flows based
on the configured policy and the traffic being seen. For instance, all non-TCP traffic is
immediately passed through the device without modification. For TCP traffic that is traversing
the device, the WAE examines the configured policy to see if there is a match and if
optimization is configured.
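The pass-through and policy-match decision described above can be sketched as follows. This is a simplified illustration only; the policy structure and function names are hypothetical, not the actual WAE implementation:

```python
# Simplified sketch of inline traffic handling: non-TCP traffic is bridged
# through unmodified; TCP traffic is checked against the configured policy.
# The policy map and action names are illustrative, not WAAS internals.

def handle_flow(protocol, dst_port, policies):
    """Return the action applied to an intercepted flow."""
    if protocol != "tcp":
        return "pass-through"          # non-TCP traffic passes unmodified
    action = policies.get(dst_port)    # look for a matching traffic policy
    if action is None:
        return "pass-through"          # no policy match: no optimization
    return action                      # e.g. "optimize-full"

policies = {445: "optimize-full", 80: "optimize-full"}  # example policy map
print(handle_flow("udp", 53, policies))   # pass-through
print(handle_flow("tcp", 445, policies))  # optimize-full
```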
Cisco WAE in-path deployment requires use of the four-port in-path network card. This in-path
network card provides fail-to-wire functionality to ensure that if the WAE fails (loss of power,
hardware failure, software failure, other conditions), then the path between the two adjacent
network devices is effectively bridged. In this way, a WAE failure does not block traffic
between the LAN and the WAN.
Cisco WAE Physical Inline Deployment
Physical inline interception allows the WAE to be physically deployed in the path between the
LAN switch and the next upstream device, which is generally a WAN router or firewall. In this
position, all traffic passes through the WAE, and optimizations can be applied based on the
configured policy. The WAE physical inline card includes a mechanical fail-to-wire that is
triggered upon any hardware error, nonrecoverable software error, or power failure, thereby
ensuring that network connectivity is not permanently interrupted in the event of a WAE failure.
The WAE inline card provides four 1000BaseTX copper Gigabit Ethernet interfaces. These
interfaces are grouped in two two-port groups, and each two-port group represents an inline
pair. If a WAE is deployed between a single router and a single switch, for example, only one
inline pair is used. With two two-port groups, the WAE can be deployed between two switches
and two routers (or two firewalls). Additionally, WAEs can be serially clustered back to back in
the physical path to provide load sharing and failover.
The Cisco WAE inline card supports 802.1q and allows for the explicit configuration of
VLANs to be examined for optimization. The card is supported in all WAE appliance models.
Cisco WAAS is transparent to the underlying network, and WAEs have the ability to
automatically discover each other.
For high-availability environments, WAEs with inline cards can also be clustered in a serial
(daisy-chain) fashion. Serial clustering is compatible with WAE automatic discovery; during
automatic discovery, the outermost WAE endpoints take on ownership of optimization. In a
serially clustered pair of WAEs, the outermost WAE is always the first to take ownership of
optimization; only when the outermost WAE reaches its TCP connection capacity does the
inner WAE begin applying optimization. At that point, the outermost WAE cannot take on
additional connections to optimize and handles them as pass-through connections, and the
inner WAE optimizes the overflow. This form of clustering and load sharing is commonly
referred to as spill-over load balancing.
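Spill-over behavior can be modeled simply: the outermost WAE owns new connections until it reaches capacity, then new connections spill over to the inner WAE. The capacities and cluster model below are illustrative only, not real WAE limits:

```python
# Sketch of spill-over load balancing in a serial WAE cluster: each new
# connection is taken by the outermost WAE that still has capacity; when
# every WAE is full, the connection is handled as pass-through.
# Capacity figures are illustrative, not actual WAE connection limits.

def assign_connection(cluster):
    """Pick the WAE that optimizes the next new connection, outermost first."""
    for wae in cluster:                 # ordered outermost to innermost
        if wae["active"] < wae["capacity"]:
            wae["active"] += 1
            return wae["name"]
    return "pass-through"               # every WAE is full

cluster = [
    {"name": "outer-WAE", "capacity": 2, "active": 0},
    {"name": "inner-WAE", "capacity": 2, "active": 0},
]
print([assign_connection(cluster) for _ in range(5)])
# ['outer-WAE', 'outer-WAE', 'inner-WAE', 'inner-WAE', 'pass-through']
```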
[Figure: two inline deployments: a single WAE (WAE1), and a serial cluster of two WAEs (WAE1 and WAE2), each with its MGMT port attached to the LAN and inline ports toward the WAN]
This figure shows two examples of inline deployments; one with a single WAE device, and the
other with a serial cluster of two WAE devices. In both configurations, each WAE's
management port (generally GigabitEthernet 1/0) must also be attached to the LAN switch and
assigned an IP address in a routable VLAN for management and other purposes. The WAEs
must be deployed between two LAN-capable devices, that is, between a switch and a router, or
between a switch and a firewall.
If a firewall is present in the network and the firewall is providing VPN tunnel termination,
then the WAE is deployed between the switch and the firewall. If a firewall is present in the
network, and the firewall is providing security services only (no tunnel termination), then the
WAE is deployed between the firewall and the router.
Inline Interception Deployment Modes (Cont.)
[Figure: the same two inline deployments (single WAE1, and serial cluster WAE1 and WAE2), each with two WAN connections]
This figure shows the same deployment situations as the previous slide, but in this example,
two WAN connections are present. In this example, both inline port groups on each WAE's
inline card are used, one inline port group per LAN-WAN connection.
[Figure: basic in-path configuration with a single WAE between the router LAN interface and the LAN, using one inline group]
The figure shows a basic in-path configuration. Notice that in this basic configuration, only one
inline group is used.
When connecting a WAE in between two devices using GigabitEthernet, either a straight-through
cable or a crossover cable can be used on either side, in any combination.
When connecting a WAE in between two devices using FastEthernet, a straight-through cable
is used on one side, and the other cable is the opposite of the cable type used to connect the two
devices natively without a WAE in between them. For example, if connecting a WAE with
FastEthernet between a router and a switch, a straight-through cable is used on one side, and a
crossover cable is used on the other, because the crossover cable is the opposite of what is
normally used to connect the switch to the router.
If a firewall appliance is physically in-path between the router and the switch, the WAE inline
card WAN interface is connected to the firewall LAN interface.
In-path WAE Configurations
[Figure: in-path configuration with redundant WAN connections and redundant LANs; each WAE inline adapter is inserted between a router LAN interface and the LAN]
This example shows a basic in-path configuration with redundant WAN connections and
redundant LAN. Notice that both inline groups are used, and the original network paths are
retained by injecting the WAE into the path with the same inline group for each of the original
connections.
[Figure: basic serial cluster: two WAE inline adapters connected back-to-back between the router LAN interface and the LAN]
WAE appliances can also be clustered inline. This kind of clustering is called serial clustering
or spill-over clustering. The WAEs must be connected back-to-back. With serial
clustering, if one WAE fails, or otherwise becomes overloaded, then the other is able to provide
optimization. The figure shows a basic serial cluster configuration.
In-path WAE Configurations (Cont.)
[Figure: serial cluster with redundant WAN connections, routers, and switches; each WAE inline adapter sits between a router LAN interface and the LAN]
The example in the figure shows a serial cluster with redundant WAN connections, routers, and
switches. For single-switch situations, the two LAN interfaces of the inline card of the WAE
closest to the LAN can both be connected to the same LAN switch.
WAEs should be deployed in a location conducive to allowing Cisco WAAS to provide a high
degree of optimization. LANs are typically designed with three hierarchical layers, each
serving a different purpose:
Core: The Core Layer provides high-speed forwarding between Distribution Layers and
the aggregation of multiple-Distribution Layers. The Core Layer often provides a gateway
to the Internet edge, which is generally another Distribution Layer. Network services are
typically not implemented in the Core, because high-speed forwarding is desired.
Distribution: The Distribution Layer provides aggregation for connected Access Layer
networks. The Distribution Layer is commonly responsible for routing and other network
services, such as security, server load balancing, and wireless switching. End nodes such as
servers, workstations, and wireless access points are rarely attached to the Distribution
Layer.
Access: The Access Layer is typically departmentalized or deployed within a small
physical boundary and provides network connectivity to end nodes, including servers,
workstations, and wireless access points.
In single subnet or small network scenarios, WAEs are typically deployed in proximity to the
network boundary router. In multisubnet or large network situations, WAEs are typically
deployed in the network Distribution Layer.
WAE deployment in the Access Layer is not recommended unless WAAS is being used for a
specific set of nodes or for a specific application. Access Layer deployment makes termination
of optimization difficult for nodes connected to other Access or Distribution Layers. If a WAE
is deployed in the Access Layer, modifications must be made to the routing topology to
intentionally send traffic to the WAE if it is desired that traffic to other Access Layer nodes be
optimized.
This level of optimization relative to deployment locale is called locality. High locality to a
reference point means that the device in question is deployed very close (generally, directly
attached) to that reference point. Low locality to a reference point means that the device in
question is deployed in a way that is not in proximity (generally, not directly attached to) of
that reference point. Generally speaking, a high degree of locality to the WAN entry-exit point
is preferred, because it provides the broadest level of optimization.
High locality to the core yields a more global level of optimization:
Intercept traffic going to and coming from the WAN exclusively (based on placement of
interception).
Closer to the WAN entry-exit prevents intrasite access from traversing the WAEs.
Provides optimization for all attached Distribution and Access Layers.
High locality to the Access Layer yields a more focused level of optimization:
Optimization is restricted to a specific Access Layer unless significant changes to network
routing are introduced.
Can cause intrasite access to traverse the WAEs, which causes unnecessary WAE resource
utilization.
The same principles regarding locality and physical location in the network hold true for both
methods of Cisco WAAS integration (in-path and off-path). For in-path deployments, it is
recommended that Cisco WAEs be deployed as close to the WAN boundary point as possible,
as shown in the example in this figure.
WAE Placement with Respect to Firewalls
[Figure: two placements: the WAE between the switch and the firewall when the firewall terminates VPN tunnels, and the WAE between the firewall and the WAN router when the firewall provides security services only]
If a firewall is present in the network, and the firewall is providing VPN tunnel termination,
then the WAE should be deployed between the switch and the firewall. If a firewall is present
in the network, and the firewall is providing security services only (no VPN tunnel
termination), then the WAE should be deployed between the firewall and the router.
Off-Path Network Deployment
This topic describes how Cisco WAAS can be deployed in an off-path configuration.
Cisco WAE devices can be deployed as in-path (using the inline card) or off-path devices. As
off-path devices, the Cisco WAE device attaches to the LAN just like any other node on the
network. Packet interception and redirection methods within the network itself are used to
forward packets to the WAE for examination and optimization. Network interception must be
established for both directions of traffic flow and be available in two or more locations between
communicating nodes. This positioning is necessary to facilitate WAE optimizations, which
require decoding at the distant end of the network. The optimizations applied by the WAE are
transparent to ensure compatibility and compliance with features that are already configured in
the network routers, switches, and firewalls.
Network Integration Overview: Integrated
[Figure: NME-WAE network module integrated in the branch router, attached to the IP network]
Another form of off-path deployment is physical integration, that is, the NME-WAE (network
module enhanced WAE) router-integrated network module. The NME-WAE is actually an off-
path device deployed physically in the router and leverages WCCPv2 or PBR for traffic
interception and redirection. Other than the physical integration aspects of the service module,
the NME-WAE acts and behaves like any off-path appliance does.
Note The NME-WAE must be configured with WCCPv2 or PBR for traffic interception. Inline
interception is not possible with the NME-WAE, and the ACE module cannot be used to
integrate a large number of NME-WAEs.
[Figure: NME-WAE architecture: the router's LAN and WAN interfaces, the router's internal interface to the service module, and the service module's own interface]
The internal architecture of the NME-WAE router-integrated network module is identical to the
architecture of an off-path appliance. The module itself has two Gigabit Ethernet interfaces
(GigabitEthernet1/0, which is internal, and GigabitEthernet2/0, which is external). The internal
Ethernet interface connects directly to an internal interface on the router itself.
When configuring the router to support the NME-WAE, a separate subnet is required. An IP
address within this subnet must be assigned to the router's internal interface (to which the
NME-WAE directly connects), and the NME-WAE's internal interface must also be assigned an
IP address within this subnet.
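The addressing requirement can be illustrated with a quick check; the subnet and addresses below are hypothetical examples, not NME-WAE defaults:

```python
# The router's internal interface and the NME-WAE's internal interface
# must share a dedicated subnet. Addresses here are hypothetical examples.
import ipaddress

nme_subnet = ipaddress.ip_network("10.10.10.0/30")     # dedicated subnet
router_internal = ipaddress.ip_address("10.10.10.1")   # router side
nme_wae_internal = ipaddress.ip_address("10.10.10.2")  # NME-WAE side

# Both addresses must fall inside the same dedicated subnet.
assert router_internal in nme_subnet
assert nme_wae_internal in nme_subnet
print("valid NME-WAE addressing")
```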
Off-Path Deployment
[Figure: off-path WAE device attached to the router; without redirect exclusion, redirected traffic could loop infinitely between the router and the WAE]
Cisco WAAS is a transparent solution for WAN optimization and application acceleration,
meaning that key information, including packet header information, is preserved so that
network functions can continue to operate. Because the router has no way to distinguish whether
traffic has already been redirected to a WAE for optimization, the WAE must be deployed on a
subnet that is separate from the nodes that are optimized by the WAE (separate physical
interface or separate logical interface, that is, a separate VLAN). Traffic interception is
configured on the router interface adjacent to the client or server, which redirects traffic to the
WAE. After optimization, the WAE returns traffic on an interface where interception is not
configured and explicitly configured to be excluded from future redirection operations. This
process prevents redirection of the traffic that has already been handled by a WAE and
eliminates the possibility of infinite interception loops. Generally, the WAE is deployed on a
routable subnet that is shared only by routers with no clients or servers.
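The loop-avoidance rule can be modeled as a simple per-interface decision; the interface names follow the figure, but the data model and function are illustrative, not a real router configuration:

```python
# Model of per-interface redirection with an exclusion on the WAE-facing
# interface: traffic returning from the WAE is never redirected again,
# which prevents an infinite interception loop.

interfaces = {
    "Fa0/0.10": {"redirect": True,  "exclude": False},  # client VLAN
    "Fa0/0.20": {"redirect": True,  "exclude": False},  # server VLAN
    "Fa1/0":    {"redirect": False, "exclude": True},   # WAE subnet
}

def should_redirect(ingress_interface):
    """Redirect only traffic arriving on interfaces with interception enabled."""
    intf = interfaces[ingress_interface]
    return intf["redirect"] and not intf["exclude"]

print(should_redirect("Fa0/0.10"))  # True: client traffic goes to the WAE
print(should_redirect("Fa1/0"))     # False: WAE return traffic is excluded
```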
[Figure: off-path deployment with PBR or WCCPv2: redirection is configured on subinterfaces Fa0/0.10 and Fa0/0.20, while the WAE-facing interface Fa1/0 is excluded from redirection]
Note The NME-WAE uses a tertiary interface internally, because the router's internal interface is
dedicated to connectivity to the NME-WAE.
One-Arm Off-Path Deployment
[Figure: one-arm off-path deployment; numbered steps show traffic redirected from the router to the WAE and returned over the same single interface]
Pros: Simplicity; Single interface
Cons: Performance constrained; Higher router utilization
Each WAE (appliance or NME-WAE) has a minimum of two Gigabit Ethernet
interfaces. The simplest form of off-path deployment, called one-arm, uses only one of those
two interfaces. In one-arm off-path deployments, one of the WAE's Gigabit Ethernet interfaces
is attached directly to the router or to the dedicated WAE VLAN on the switch. The second
interface is not used.
In one-arm off-path deployments, all traffic to and from the WAE must pass through the router,
even if the traffic is destined for a user in the same site. While one-arm off-path deployment is
the simplest deployment mode (one interface to configure), it can create additional workload on
the router, which may translate to additional CPU utilization. The amount of extra workload
and CPU utilization is subjective and based on throughput and number of packets per second.
Note One-arm off-path is the deployment mode in use when only the internal interface of the
NME-WAE network module is configured.
[Figure: on-router one-arm deployment. The WAE interface must be routable; it is the primary interface, and the default gateway is the attached router interface IP.]
The example in the figure shows the details and requirements of the on-router one-arm
deployment mode. In this mode, the WAE has an interface directly attached to an interface on
the router. A single interface is used for both management and optimization traffic.
The WAE must have its default gateway configured as the adjacent router interface IP address,
and the primary interface must be set to the interface that is attached to the router. PortChannels
are generally not possible in this situation, because most routers do not support PortChannels.
In all configurations, the WAE VLAN must be routable and the WAE must be reachable
throughout the network.
This is the default configuration for the NME-WAE. The internal interface is always on router,
that is, always directly connected to the router through the backplane to the internal router
interface dedicated to the NME-WAE.
One-Arm Off-Path Deployment (Cont.)
[Figure: off-router one-arm deployment with the WAE attached to the LAN switch in a dedicated VLAN]
An equally common one-arm deployment mechanism, as shown in the figure, is with the WAE
connected off-router, that is, attached to the LAN switch. In this mode, the WAE has one or both
interfaces attached to a switch port in a VLAN that is connected through an 802.1q trunk to the
router. The VLAN that the WAE is attached to is separate from the VLAN that the users or
servers are attached to. In this mode, a single interface (or PortChannel of both interfaces) is
used for both management and optimization traffic.
The WAE must have its default gateway configured as the adjacent router VLAN subinterface
IP address, and the primary interface must be set to the interface (or PortChannel) that is
attached to the optimization VLAN. PortChannels are permitted in this scenario, assuming the
switch that the WAE is attached to supports PortChannels.
This configuration is not applicable to the NME-WAE, as the internal interface is directly
connected to the router via the backplane.
Two-Arm Off-Path Deployment
[Figure: two-arm off-path deployment with the WAE's second interface in the user or server VLAN]
Pros: Better performance; Lower router utilization; Interface adjacency to node
Cons: Additional switch port consumed; Additional configuration; Usually feasible in branch only
An alternative to the one-arm off-path deployment mode is to use the second interface (the
external interface on the NME-WAE). The first interface is connected directly to the router or
to the switch in the WAE VLAN, and the second interface connects to the switch in the VLAN
with the users or the servers. With this deployment mode, all traffic from the WAE has to go
through the router, with the exception of any traffic going to a node adjacent to the WAE's
second interface (users or servers that are in the same VLAN). Two-arm off-path mode can
provide a slight improvement in performance and less overall router CPU workload.
Note The NME-WAE can support this configuration if the external interface is used. The internal
interface is always directly connected to the router, so there is only one cable going from the
NME-WAE to the switch.
Two-Arm Off-Path Deployment (Cont.)
[Figure: on-router two-arm deployment. The secondary interface resides in the client or server access VLAN for improved performance: the VLAN must be routable, and no default gateway is needed on it. The primary WAE interface carries management and optimization traffic; its default gateway is the router WAE VLAN IP, and the WAE must be reachable.]
The figure shows a configuration that is on-router in two-arm deployment mode. In this mode,
the WAE has an interface directly attached to an interface on the router to support optimization
and management, and an interface attached to the client VLAN. This mechanism is commonly
used to enable improved performance, assuming users or servers are Layer 2 adjacent to the
WAE's secondary interface, because only one default gateway can be configured on the WAE.
Performance is improved because traffic can be returned to the user or server directly (Layer 2
adjacency is required) without crossing the router.
As with all deployment situations, the WAE interfaces must be attached to routable VLANs
and reachable throughout the entire network. The WAE must have its default gateway
configured as the directly connected router interface IP address, and the primary interface must
be set to that interface as well. PortChannels are not permitted in this situation, because the
interfaces are connected to two separate devices.
This configuration is applicable to the NME-WAE when using the external interface (shown as
the secondary interface in this figure). The internal interface is always directly connected to the
router through the backplane, and the default gateway of the NME-WAE always points back to
the router's internal interface IP address.
[Figure: off-router two-arm deployment. The secondary interface resides in the client or server access VLAN for improved performance: the VLAN must be routable, and no default gateway is needed on it. The primary interface sits in the WAE VLAN, carries management and optimization traffic, and uses the router WAE VLAN IP as its default gateway.]
This configuration is off-router in two-arm deployment mode. In this mode, the WAE has both
interfaces directly attached to the LAN switch, but both interfaces are in separate VLANs. One
interface provides optimization and management, and is configured as the primary interface.
The other interface is used to improve performance, assuming users or servers are Layer 2
adjacent to the WAE's secondary interface, because only one default gateway can be
configured on the WAE. In this mode, traffic can be returned directly to the user or server
without crossing the router.
This deployment situation is common when higher levels of performance are desired and the
WAE cannot physically attach one of its interfaces to the router. As with all deployment
situations, the WAE interfaces must be attached to routable VLANs and reachable throughout
the entire network. The WAE must have its default gateway configured as the IP address of the
router's WAE VLAN subinterface, and the primary interface must be set to that interface as
well. PortChannels are not permitted in this situation, because the interfaces are connected to
two separate VLANs.
The WAE does not support 802.1q trunking in off-path mode.
This configuration is not possible with the NME-WAE, because the internal interface is always
directly attached to the router through the backplane.
Hierarchical Network Placement: Off-Path
For off-path deployments, it is generally recommended that WAEs be deployed with high
locality to the WAN routers or the device performing interception. Generally, this device is one
of the following:
Integrated Services Router in the branch office (WCCPv2, PBR)
A Catalyst 6500 in the data center (WCCPv2, PBR, ACE)
A data center WAN router (WCCPv2, PBR)
WAEs are deployed in a location conducive to allowing Cisco WAAS to provide a high degree
of optimization. LANs are typically designed with three hierarchical layers, each serving a
different purpose:
Core: The Core Layer provides high-speed forwarding between Distribution Layers and
the aggregation of multiple Distribution Layers. The Core Layer often provides a gateway
to the Internet edge, which is generally another Distribution Layer. Network services are
typically not implemented on the Core Layer, because high-speed forwarding is desired.
Distribution: The Distribution Layer provides aggregation for connected Access Layer
networks. The Distribution Layer is commonly responsible for routing and other network
services, such as security, server load balancing, and wireless switching. End nodes, such
as servers, workstations, and wireless access points, are rarely attached to the Distribution
Layer.
Access: The Access Layer is typically departmentalized or deployed within a small
physical boundary and provides network connectivity to end nodes, including servers,
workstations, and wireless access points.
In single subnet or small network scenarios, WAEs are typically deployed in close proximity to
the network boundary router. In multisubnet or large network scenarios, WAEs are typically
deployed in the network Distribution Layer.
WAE deployment in the Access Layer is not recommended unless WAAS is being used for a
specific set of nodes or for a specific application. Access Layer deployment makes termination
of optimization difficult for nodes connected to other Access or Distribution Layers.
Interception Using WCCPv2
This section introduces Web Cache Communication Protocol version 2 (WCCPv2) and
discusses redirection, load balancing, and failover. WCCPv2 represents the preferred method of
network interception for WAAS.
WCCPv2 interception:
Out-of-path interception with redirection of flows to be optimized (all flows, or selective flows via a redirect-list)
Automatic load balancing, load redistribution, failover, and fail-through operation
Scalability and high availability: up to 32 WAEs and up to 32 routers within a service group, with a linear performance and scalability increase as devices are added
Seamless integration
[Figure: a service group intercepts the original flow, redirects it to a WAE, and forwards the optimized flow across the WAN]
Switching Platforms
The recommended switching platforms are:
Catalyst 6500 with Supervisor 1a, Supervisor 2, or Supervisor 32 with IOS 12.1(27)E
Catalyst 6500 with Supervisor 720 (Native mode) with IOS 12.2(18)SXF5
Catalyst 6500 with Supervisor 720 (Hybrid mode) with IOS 12.2(18)SXF5 and CatOS 8.5
Catalyst 4500 or 4900 with IOS 12.2(31)SG
Security Platforms
The recommended security platform is:
PIX 515/535 or ASA 5500 with version 7.1.2.4
Note Earlier software versions can work; however, they are not recommended due to bug,
stability, or compatibility concerns.
Introduction to WCCPv2
[Figure: up to 32 routers (R1 through R32) and up to 32 WAEs (WAE1 through WAE32) participating in a WCCPv2 service group across the IP network]
Cisco developed WCCP within Cisco IOS software to enable routers and switches to
transparently redirect packets to nearby caching devices. WCCP does not interfere with normal
router or switch operations. Using WCCP, the router redirects requests to configured TCP ports
and sends the redirected traffic to caching devices instead of the intended destination. WCCP
also balances traffic loads across multiple local caches while ensuring fault-tolerant and fail-safe
operation. As caches are added to or deleted from a cluster, the WCCP-aware router or
switch dynamically adjusts its redirection map to reflect the currently available caches,
resulting in maximized performance and content availability.
WCCP allows content caching devices to join a service group with a router. A service group
defines the types of protocols that can be optimized by a cache and the types of traffic that can
be processed. The router manages the service group and determines how to forward appropriate
traffic to another service group member (cache). Because WCCP is only an interception
mechanism, nodes must be able to reach one another natively.
WCCPv2 is used with Cisco WAAS to provide the WAE with messages that can be optimized
and to enable high availability, load balancing, and failover to provide an enterprise class
solution. With WCCPv2, the WAE is configured with a router list, and the appropriate services
are enabled. In the case of Cisco WAAS, the TCP promiscuous mode services are enabled
(service groups 61 and 62), and the WAE is instructed to join these service groups with the
router. Assuming the service groups are also configured on the router, the WAE joins the
service groups with the router, and the router begins forwarding packets that match the service
group criteria to the WAE for local handling.
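As a minimal sketch of this configuration (the interface names and the router address 10.10.10.1 are hypothetical, and exact syntax varies by IOS and WAAS software version), the TCP promiscuous service groups might be enabled as follows:

```
! Router: enable the TCP promiscuous service groups
ip wccp 61
ip wccp 62
!
! Intercept client-to-server traffic entering the LAN interface (61)
interface FastEthernet0/0
 ip wccp 61 redirect in
!
! Intercept server-to-client traffic entering the WAN interface (62)
interface Serial0
 ip wccp 62 redirect in

! WAE: point at the router and join service groups 61 and 62
wccp router-list 1 10.10.10.1
wccp tcp-promiscuous router-list-num 1
wccp version 2
```

Once both sides agree on the service groups, the router begins redirecting matching TCP traffic to the WAE.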
With WCCPv2, up to 32 routers can participate in a service group as service group servers, and
up to 32 WAEs can join a service group as service group clients, thereby allowing for
scalability and fault tolerance. WAEs can join service groups without disruption. The function
of the server routers is to manage the service group and examine ingress or egress traffic, based
on redirection configuration, to see if that traffic matches the criteria of the running service
groups. When matching traffic is identified, the server forwards that traffic to a client according
to the load distribution mechanism that is specified. The job of the client WAEs is to specify
the configuration parameters for the service group and receive traffic from the server routers.
Note Routers are the most commonly used service group servers, but many switches and
firewalls can also run WCCPv2.
WCCPv2
WCCPv2 is an Internet draft and not a standard. The Internet draft is authored by Cisco and has
been widely accepted as the de facto standard for content routing and network interception. A
copy of the WCCPv2 Internet draft can be found at
http://www.wrec.org/Drafts/draft-wilson-wrec-wccp-v2-00.txt.
The following chronology identifies the messages sent and received in a WCCPv2 service
group connection:
1. The cache transmits a WCCP2_HERE_I_AM message to each defined router or multicast
address. This message contains details about the cache including the IP address and the
service groups that the cache wants to participate in. After receiving this message, the
router responds with a WCCP2_I_SEE_YOU message if the cache meets group
membership criteria that were specified by the shared-secret message digest algorithm 5
(MD5) authentication password or access list. Upon receipt of the WCCP2_I_SEE_YOU
message from the router, the cache responds with another WCCP2_HERE_I_AM message
with the Receive ID field matching that of the router message.
2. At this point, the cache becomes usable, and the router begins redirecting traffic to the
cache based on service group assignments. WCCP2_HERE_I_AM and
WCCP2_I_SEE_YOU messages are sent every 10 seconds as a service heartbeat. WCCP
forwards traffic to an available cache using either Layer 2 redirection or generic routing
encapsulation (GRE), the default. One of the components of the WCCP2_I_SEE_YOU
message is the advertisement of supported forwarding mechanisms. If a method is not
listed, it is assumed that GRE tunneling is to be used by default. Redirection negotiation is
done per service. A cache and a router can use different redirection mechanisms for
different services. Layer 2 redirection specifies that the redirecting router is to rewrite the
Ethernet frame address and forward the updated frames to the cache. GRE tunneling
specifies that a GRE tunnel is to be built between the router and the cache, and the original
packets are encapsulated in this tunnel and delivered to the cache.
WCCPv2 Interception
Interception is the process of examining packets that are traversing an interface and comparing
them to criteria defined for active service groups. When a packet matches the relevant criteria
(protocol), it is redirected to a service group client (WAE).
Interception is configured on the router in one of two ways:
Ingress redirection: The router examines traffic as it enters an interface from an external
network. Generally, ingress redirection is less resource intensive than egress redirection.
Egress redirection: The router examines traffic as it is leaving an interface toward an
external network. Generally, the traffic has already entered an interface prior to reaching
the interface where egress redirection is configured, and a load has already been placed on
the router resources.
Note Ingress redirection is recommended to minimize the impact of WCCPv2 on router resources.
Redirection occurs after traffic has been identified and intercepted. Redirection occurs when a
router moves an intercepted packet to a service group client WAE, based on the load-balancing
mechanism defined for the service group. WCCPv2 redirection is performed using one of two
redirection mechanisms:
GRE: GRE redirection is the more commonly used redirection mechanism when working
with routers and firewalls as service group servers. With GRE redirection, the WCCP
router maintains a GRE tunnel connection to the WAE and forwards packets that match the
service group criteria to the WAE. With GRE redirection, the WAE does not need to be
adjacent to the network router. Instead, the WAE can be one or more hops away. The
tunnel provides the connectivity and plumbing necessary to carry packets.
Layer 2 redirect: Layer 2 redirect is more commonly used when working with LAN
switches. With Layer 2 redirect, the WCCP router does not maintain a tunnel with the
WAE. Instead, the packets are intercepted and the WCCP router rewrites the Ethernet
frame header and forwards the packets to the WAE. With Layer 2 redirect, the WAE must
be Layer 2 adjacent to the WCCP router.
As a best practice, use GRE encapsulation when working with routers for WCCPv2, as most
routers do not support Layer 2 redirection. Layer 2 redirect is used when working with switches, as
redirection can be performed in hardware. This approach minimizes the CPU workload on the
WAE and the switch, and can improve performance.
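On WAE software versions that support it, the WAE can be asked to negotiate Layer 2 forwarding with a WCCP-capable switch instead of the default GRE; a sketch (the router list number is assumed from an earlier configuration, and the exact keyword should be verified against the WAAS version in use):

```
! WAE: request Layer 2 forwarding from a WCCP-capable switch
! (omit l2-redirect to fall back to the default GRE encapsulation)
wccp tcp-promiscuous router-list-num 1 l2-redirect
```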
WCCPv2 Load Balancing
WCCPv2 provides a configurable assignment method that allows load balancing across service
group clients within the service group. This load-balancing scheme can use a hash algorithm of
the parameters specified (IP address, port, and so on) and distribute the load across the service
group clients, either evenly or based on configurable weighting.
With Cisco WAAS, two service group numbers are used to make up the TCP promiscuous
service group. These two service group numbers are 61 and 62. Both of these service groups
notify the router that any traffic that matches IP Protocol 6 (TCP) is to be intercepted and
redirected to a service group client WAE. Service group 61 performs load balancing based on
source IP address, while service group 62 performs load balancing based on destination IP
address.
With WAAS, both service groups must be in the path of traffic flow. For example, if a client is
communicating with a server, one of the two must be in the path for packets traveling from the
client to the server, and the other must be in the path for packets traveling from the server to the
client.
This approach ensures that the same WAE is used as the redirection destination regardless of
direction. As node-to-node communications involve a source IP address and a destination IP
address, which are generally reversed in the reverse path, the source IP address going one way
is the destination IP address for traffic going in the reverse direction.
WCCPv2 uses HERE_I_AM and I_SEE_YOU messages as heartbeat messages. These
messages are exchanged every 10 seconds to verify that nodes are available. If a router does not
see a HERE_I_AM message for 25 seconds, it begins the process of removing the WAE from
the service group. This process begins with a unicast query sent from the router to the WAE to
check if it is ready to be removed from the group. If no response is received, the WAE is
removed. If the WAE chooses to remain in the service group, then it notifies the router of this
determination.
The heartbeat is considered stateful in that the service group server and clients are in constant
communication through processes that are tightly coupled with the WAE core optimization
function. If a software fault prevents a WAE from optimizing traffic, its WCCP process is
unlikely to keep running even though the device remains on the network, so the router quickly
detects the failure. In this manner, WCCPv2 heartbeat and availability monitoring is considered
very stable and reliable.
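The state of the heartbeat and the service group membership can be checked from both sides. A sketch of commonly used verification commands (output formats vary by software version):

```
! Router: display service group status and the number of clients
show ip wccp
show ip wccp 61 detail

! WAE: confirm that the routers in the service group are seen
show wccp routers
```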
WCCPv2 Failover
If a WAE in a service group fails, the portion of the load that it was
handling is automatically distributed to other WAEs in the service
group.
If no additional WAEs are available, the service group is taken
offline, and packets are not redirected.
If a WAE fails, or is otherwise removed from a service group, the portion of the load
represented by the buckets that the WAE is handling is distributed to the remaining WAEs in
the service group. If no other WAEs are available, the service group is taken offline, and any
configured instances of interception are negated. These instances of interception still appear in
the router configuration but are not used, because no service group client devices are available.
PBR:
Out-of-path deployment with redirection of flows to be optimized (all flows or selective
flows with an access list); the WAE is treated as a next-hop router.
High availability: Failover capability allows a secondary WAE to be used if the primary
WAE fails. IP SLAs ensure availability by tracking WAE liveliness.
Seamless integration: Transparency and automatic discovery.
Supported on all WAE platforms.
PBR is another interception mechanism that can be used with Cisco WAAS. With PBR, one or
more WAEs is configured as a next-hop router for TCP traffic. PBR supports failover, but it
does not support load sharing. WAE availability can be tracked through IP Service Level
Agreements (IP SLAs) in Cisco IOS.
PBR is supported on all Cisco WAE platforms.
Introduction to PBR
While WCCPv2 is the preferred network interception and redirection mechanism for WAAS,
PBR provides an alternative for situations where WCCPv2 cannot be used. With PBR, the
WAE is configured as a next-hop router for specific types of traffic as defined by access lists.
To use PBR with WAAS, the route criteria are based on access lists that specify TCP traffic
coming from or going to specific IP subnets.
PBR provides support for an unlimited number of WAEs, but it is generally limited to two
because of the active-standby nature of PBR. An unlimited number (limited only by address
space) of WAN routers can be deployed that use the WAEs as next-hop routers.
While WCCPv2 provides load balancing and scalability, PBR can use only one next-hop route
at a time per route map. In this way, multiple WAEs can be deployed in a location, and each
can be listed as a next-hop router. However, the first WAE is the only one receiving packets,
and the remainder are unused until the first WAE is taken offline or otherwise fails.
If all WAEs go offline, the policy route is considered invalid, and the routing tables built by
routing protocols or static routes are used.
PBR Availability Monitoring
PBR can use IP SLAs to monitor the availability of a next-hop router WAE to ensure that the
next hop is available before sending packets to that device. IP SLAs can monitor the
availability of a WAE with one of three mechanisms:
Cisco Discovery Protocol (CDP) neighbor relationship: This method requires that the
WAE is directly attached to the router with no intermediary device. The router checks the
CDP database to verify that the next-hop router WAE is online before considering it as a
valid next hop.
Internet Control Message Protocol (ICMP) echo: This method does not require a direct
attachment to the router. With this method, the router periodically sends ICMP echo
messages to the WAE to see if the WAE responds. ICMP echo is the preferred mechanism
for next-hop router tracking when using WAAS. The time between ICMP echoes is
configurable and should be set to the lowest available interval for checking WAE availability
(20 seconds).
TCP connection attempts: This method does not require a direct attachment to the router.
With this method, the router periodically attempts TCP connections to the next-hop device
on a specific TCP port using a specific source TCP port. The period of time between
connection attempts is not configurable.
If the WAE is unresponsive to the configured tracking mechanism, the router considers the
next hop to be unavailable and proceeds to the next next-hop router in the list. If all next-hop
routers are offline, the router considers the policy route invalid and uses the routing tables built
by standard routing protocols or static routes.
Note IP SLAs that use ICMP echo are recommended because the tracking interval is 20
seconds, whereas the intervals for CDP and TCP tracking are much longer.
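Putting these pieces together, a sketch of a PBR deployment with ICMP-echo tracking might look like the following (all addresses, access list numbers, and interface names are hypothetical, and older IOS releases use the `ip sla monitor` command family instead):

```
! Track the WAE next hop 10.10.20.5 with ICMP echo every 20 seconds
ip sla 10
 icmp-echo 10.10.20.5
 frequency 20
ip sla schedule 10 life forever start-time now
!
track 100 ip sla 10 reachability
!
! Match TCP traffic from the branch subnet
access-list 101 permit tcp 10.10.10.0 0.0.0.255 any
!
! Use the WAE as next hop only while the track object is up
route-map WAAS-REDIRECT permit 10
 match ip address 101
 set ip next-hop verify-availability 10.10.20.5 1 track 100
!
interface FastEthernet0/0
 ip policy route-map WAAS-REDIRECT
```

If the track object goes down, the policy route is bypassed and normal destination-based routing takes over.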
WCCPv2 is the preferred mechanism for traffic interception and redirection. PBR is used only
in cases where WCCPv2 cannot be used. WCCPv2 is process based and provides stateful
monitoring of WAE availability, scalability to 32 nodes and 32 routers within a service group,
load distribution, failover, and high availability. PBR provides basic failover capabilities but
relies on nonstateful mechanisms to verify next-hop availability, whereas WCCPv2 failover is
handled by a running process that is closely tied to the optimization framework of the WAE.
Data Center Deployment Using ACE
This section describes how Cisco WAAS can be integrated in the enterprise data center with the
Application Control Engine (ACE) line card for the Catalyst 6500 series switch.
Industry-leading functionality:
Solution for scaling servers, appliances, and
network devices
Virtual partitions, flexible resource assignment,
security, and control
Cisco WAAS can be deployed by installing ACE or CSM line cards for the Catalyst 6500
Series of intelligent switches. This solution meets the needs of the most demanding enterprise
data center environments in terms of performance, scalability, and availability. The ACE
module can scale to 4 million TCP connections, with a setup rate of 350,000 TCP
connections per second, and provides up to 16 Gbps of throughput. Additionally, ACE represents
the industry's leading solution for server load balancing, network device load balancing,
virtualization, and application control.
Note Cisco WAAS can also be deployed by using the Cisco Content Services Module (CSM) line
card for the Catalyst 6500 Series switch. The CSM is functionally similar to the ACE module
in the case of Cisco WAAS integration, but it does not provide the same level of
performance or functionality as the ACE module.
The Cisco ACE module for the Cisco Catalyst 6500 series switch is recommended as an
interception option in the enterprise data center. While WCCPv2 can scale to support 32
devices, the ACE module can scale to support thousands of devices.
The ACE module can be deployed in one of two configurations:
Data center entry (DCE): The DCE configuration provides the highest degree of locality
to the WAN boundary, which allows Cisco WAAS to be used to optimize a broad degree of
applications within the data center. Generally, the DCE configuration is applied in the
Distribution or Core Layer of the data center LAN.
Server Load Balancing (SLB): The SLB configuration provides the highest degree of
locality to a set of servers, which allows Cisco WAAS to be used to optimize those servers
only, or traffic to any location that happens to pass through that particular Access Layer.
Generally, the SLB configuration is applied in the distribution or Access Layer of the data
center LAN.
The figure shows the DCE configuration of ACE for Cisco WAAS.
Note It is generally recommended that the DCE configuration be used, as it provides a broader
degree of optimization applicability. The SLB configuration is generally only recommended in
environments where asymmetric routing cannot be overcome in the network and the
optimization needs to be moved as close to the endpoints as possible. Alternatively, source
NAT can be used on the ACE module to circumvent asymmetric routing, but this does not
provide source IP transparency.
ACE Server Load Balancing Configuration
The figure shows the SLB configuration of ACE for Cisco WAAS. This configuration is useful
in environments where asymmetric routing cannot be overcome within the network by moving
optimization closer to the nodes being optimized.
Number of active WAEs supported by each interception method: physical inline, 2 (serial
cluster, tested limit); PBR, 1; WCCPv2, 32; ACE, 16,000 (not practical but possible).
Automatic Discovery
This topic examines how Cisco WAE devices automatically discover each other in the
optimization path and describes situations where more than two, or fewer than two, WAE devices
are located in the optimization path.
Cisco WAAS provides an automatic discovery mechanism that allows WAE devices to
automatically identify each other during the course of a TCP connection setup. With auto-
discovery, administrators do not need to define the subnets that can be optimized, the devices
that can be optimized, the devices that can terminate optimizations, or the communication
between WAEs.
With auto-discovery, each WAE uses TCP packets to identify itself and the other WAE devices
that are in the network path. This is accomplished with a TCP option during the setup of the TCP
connection between the communicating nodes. The TCP option allows the WAEs that are
closest to the communicating nodes to establish a peering relationship and then negotiate the
level of optimization to apply to the connection. The TCP option specifies the first node to see
the option (the WAE that marked the packet) and the last node to see the option (which is
constantly overwritten as each additional WAE in the network path sees the marked packets).
The auto-discovery mechanism occurs on a connection-by-connection basis, which allows the
WAE to use different levels of optimizations for different nodes or different applications.
The figure shows the TCP three-way handshake between two nodes. The TCP SYN, SYN ACK,
and ACK packets each carry the source and destination ports, sequence and acknowledgement
numbers, window size, checksum, and options (MSS, SACK, and so on); application data (for
example, an HTTP GET) follows once the connection is established.
Cisco WAEs automatically discover each other during the establishment of TCP connections
between communicating nodes as shown in the figure.
A TCP connection setup is called a three-way-handshake. The three-way-handshake establishes
connection parameters between two communicating nodes to ensure guaranteed, reliable
delivery of application data. The TCP synchronize and start (SYN) message is issued when a
node wants to establish a connection with another node. The SYN packet contains information
about the source and destination TCP ports, window size, and TCP options. The receiving node
responds with a TCP SYN acknowledgement (ACK), which establishes the connection for the
reverse direction. After the SYN ACK has been received by the initiating node, it responds with
a TCP ACK, and applications using the connection are able to exchange application data.
Auto-Discovery: TCP SYN
When the WAE receives a TCP-SYN packet, it adds a 12-byte Cisco-unique TCP option. This
TCP option includes a request to optimize the connection and a definition of the requested
optimization. In addition to the requested optimizations, the TCP option also includes the
device ID of the WAE that first received the TCP-SYN packet. This device ID is added to a
field that identifies the last WAE to receive the TCP-SYN packet. The packet is then returned
to the network for delivery to the destination. At this point, no optimizations have yet been
applied.
When the next WAE in the path receives the marked TCP-SYN packet, it knows about the first
WAE in the path and its request to optimize the connection. That WAE then adds its device ID
to the last WAE field in the TCP Options field and forwards the packet to the intended
destination. Notice that the TCP-SYN packet is still marked in case additional WAEs are closer
to the intended destination.
Auto-Discovery: TCP SYN ACK
When the server responds with the TCP SYN ACK, WAE2 marks
TCP options to acknowledge the optimization and to identify itself
to WAE1.
The marked TCP SYN-ACK packet is forwarded toward the client
and intercepted on the other side.
When WAE2 receives an unmarked TCP SYN-ACK message, it knows that it is the closest
WAE to the server. WAE2 then applies a Cisco-specific 12-byte option to the TCP SYN-ACK
packet with its device ID listed as the first WAE and also the last WAE to see the SYN-ACK
packet. WAE2 then specifies the optimizations that it wants to apply to the connection based on
the configured policy and forwards the packet to the intended recipient.
After WAE1 receives the TCP SYN ACK with the optimization
confirmation and details about WAE2, the defined policy (or
negotiated optimizations) can be acknowledged.
The TCP SYN-ACK packet is forwarded to the client.
When WAE1 receives the marked TCP SYN-ACK packet, it discovers that WAE2 is the
closest WAE to the destination of the connection. Because WAE1 was the first to see the TCP-
SYN message (it was unmarked), it strips the marking and forwards the TCP SYN-ACK packet
to the client.
Auto-Discovery: TCP ACK
After the SYN ACK is received, the TCP proxy is initiated for the
connection, and WAE1 sends a TCP ACK to WAE2 to
acknowledge optimizations.
WAE2 can then send a TCP ACK to server B.
Client A sends a TCP ACK to WAE1.
To confirm that the connection should be accelerated, WAE1 sends a marked TCP-ACK packet
back toward the original destination of the connection. When this marked ACK packet is
forwarded to the destination, intercepted, and then sent to WAE2, WAE2 then learns that the
connection is to be optimized. The least common denominator of the application policy
configuration on both WAEs is used. From there, the TCP proxy is started, and separate
connections are maintained between the client and WAE1, between WAE1 and WAE2, and
between WAE2 and the server.
The figure shows the auto-discovery message exchange between client A and server D, with
two WAEs (B and C) in the path:
4. When the TCP-SYN packet with options is received by the destination, the destination
drops the options and responds back with a TCP SYN-ACK message.
5. The SYN ACK is intercepted and redirected to WAE C, and as the SYN-ACK packet is
unmarked, WAE C knows that it is the closest WAE in proximity to the server. WAE C
then applies TCP options to the TCP SYN-ACK packet, identifying itself and the level of
optimizations it wants to apply, and it forwards the packet to the intended recipient.
6. As the SYN ACK is intercepted and redirected to WAE B, WAE B learns about WAE C
and the level of optimizations that it can apply to the connection. WAE B then forwards the
SYN-ACK packet (without options because it knows that it is the closest WAE in the path)
to the client, and then sends a TCP ACK, which is marked with options confirming the
level of acceleration that is to be applied, to WAE C.
7. When the TCP ACK is sent, the TCP proxy service is started, and the WAEs begin
applying optimizations that are based on traffic policy.
Auto-Discovery: Three or More WAEs
The figure shows the connection setup exchange between client A and server E with three
WAEs (B, C, and D) in the path.
If more than two WAEs are in the path, the intermediary WAEs go into pass-through mode for
the connection and allow the two outer WAEs to manage the connection and optimization.
If there is only one WAE in the path, auto-discovery fails, and the WAE goes into bypass (pass-
through) mode for the connection. No optimizations are applied to this connection. This is
common in deployments where one of the two locations involved in the TCP connection does
not have a WAE.
Auto-Discovery and WAE Failure
The WAE TCP proxy independently manages TCP sequence (SEQ) ACK numbers. The SEQ-
ACK numbers used between the client and the WAE are different than those used between the
two WAEs, and different than those being used between the distant WAE and the server. If a
WAE fails, the receiving node sees segments with SEQ-ACK numbers that it was not expecting,
which causes a TCP connection reset (RST) to be sent. The application can then choose to
reestablish the connection, which begins the auto-discovery process again.
In this example, the client request to the server takes the path of
R1-R2, and the server response takes the path of R3-R4-R5-R6:
Asymmetric routing occurs when traffic takes one path between communicating nodes and a
different path for traffic flowing in the reverse direction. For application acceleration
technologies that do not provide auto-discovery, asymmetric routing makes connection
management difficult. In the figure, three sites are shown with three separate WAN
connections: One connection runs from site 1 to site 2, one connection runs from 2 to 3, and
one connection runs from 3 to 1. A client connection to the server uses one path, and the return
traffic uses an alternate path. Cisco WAAS supports environments that have asymmetric
routing as long as each of the boundary routers within a site (R6 and R1, R2 and R3, and R4
and R5) share the same network interception and redirection configuration. WCCPv2 is highly
recommended in environments where asymmetric routing is possible, because it is more
stateful and process-based than PBR, and because there is a higher likelihood of multiple
WAEs in locations where multiple WAN links exist.
Asymmetric Routing and WCCPv2
This example shows how WAAS supports asymmetric routing when network boundary routers
share a common interception and redirection configuration. In these cases, the WAEs are
configured to register against both routers as TCP promiscuous devices.
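As a sketch (addresses are hypothetical, and exact syntax varies by software version), the shared configuration might look like this:

```
! Both boundary routers (R1 and R2) carry identical WCCP configuration
ip wccp 61
ip wccp 62
interface GigabitEthernet0/1
 ip wccp 61 redirect in

! WAE: register against both routers in a single router list
wccp router-list 1 10.10.10.1 10.10.10.2
wccp tcp-promiscuous router-list-num 1
```

Because both routers compute the same load distribution, a flow is redirected to the same WAE no matter which router it traverses.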
As the interception configuration is shared and identical across boundary routers, network load-
balancing and distribution processes are the same regardless of the router the traffic was
forwarded to or through. Thus, either router can hash the load and forward to the same WAE.
WAAS Asymmetric Routing Deployment
In this example, sites are not well connected (low bandwidth, high
latency), and routes between client and server are asymmetric
(R1-R2 vs R3-R4-R5-R6):
The figure shows WAAS integration with asymmetric routing. Each pair of boundary routers
(R1-R6, R2-R3, R4-R5) shares a common network interception and redirection configuration.
Regardless of the router encountered, the same WAE is chosen. When the client sends data to
the server, it takes the R1-R2 network path. The return path to the client is R3-R4-R5-R6.
In auto-discovery and asymmetric environments where one direction has more WAEs than the
other direction, the TCP options used by Cisco WAAS also include the first device ID and the
last device ID. During the auto-discovery process, the first WAE to see an unmarked TCP
connection setup packet (SYN, SYN ACK, ACK) adds its device ID to the first device ID and
last device ID fields of the option. At each WAE that is traversed by the packet, the last device
ID field is overwritten with the device ID of the local WAE. Regardless of the number of
intermediary WAEs, the connection setup message contains only two WAE device IDs: the
first WAE to see the message and the last WAE to see the message.
In this example, two WAEs are in the path between the client and the server for one direction,
and three WAEs are in the path between the server and the client. Auto-discovery and shared-
interception and redirection configurations enable WAAS to support such topologies.
Sites that are well connected using links with high bandwidth (100 Mbps or greater), low
latency (less than 2 ms), and low packet loss can be treated as a single site from a network
interception and redirection configuration perspective. This approach helps to minimize the
challenges associated with deploying WAAS in environments where two well-connected data
centers are located in a given geographical region. In these situations, you can have
router-to-WAE GRE traffic traversing the high-speed, low-latency interconnect without causing
problems, unless the network is already saturated and a large volume of WAN traffic is to be
redirected to a WAE on the other side of the link.
Note This configuration is not recommended for sites that are not well connected or provide low
bandwidth, high latency, or high packet loss.
Asymmetric Routing and Well-Connected Sites
(Cont.)
The example in the figure shows a deployment using network interception and redirection
configuration sharing across well-connected sites. In some cases, using WCCP can have an
impact on the route taken by return traffic. In the example, the server is trying to use R4 as its
return path to the client. When WCCPv2 on R4 intercepts and redirects the traffic to WAE3,
WAE3 uses R2 as its default gateway.
With physical inline interception, the WAE can sit in-path for up to
two WAN connections concurrently, and thereby address
asymmetric routing in most situations.
Similar to asymmetric routing environments where WCCPv2 or PBR is used, physical inline
interception can also be deployed in such a way that asymmetric routing is supported. In the
example in the figure, each WAE is deployed physically inline between the local switch and the
two WAN routers. If traffic takes a divergent path on the return, then it still passes through the
same WAE, assuming that the WAE is physically inline for both WAN connections.
Asymmetric Routing and ACE
The Cisco Application Control Engine (ACE) behaves in a similar way to WCCPv2 in terms of
common asymmetric routing situations. ACE can be configured in a high-availability cluster
such that traffic handled by a cluster member on a divergent path can be forwarded to the
appropriate ACE module and then redirected to the appropriate WAE based on the flow state
maintained by the WAE. The WAE then handles the flow accordingly and places the flows
back on the network toward the intended destination with the configured default gateway. In
the example in the figure, if the WAE default gateway is the router on the right, the router on
the right is used, thereby bypassing the configuration on the server, which uses the router on the
left as its default gateway. If the default gateway is an HSRP or VRRP virtual router address,
then the physical router that owned the HSRP or VRRP virtual router address at that time is
used.
Note In complex asymmetric routing environments, the ACE module can be configured to use
source IP address network address translation (source NAT). By using source NAT, packets
sent out by the ACE through the WAE have the original IP address masqueraded, replaced
by the IP address of the ACE in that VLAN. In this way, when the end node responds to the
packet, the response goes to the IP address of the ACE as opposed to the IP address of the
other end node.
Network Transparency
Cisco WAAS supports all three facets of transparency: client transparency (no client software
or configuration changes), server transparency (no server software or configuration changes),
and network transparency (no feature reconfiguration). Network transparency is a feature
unique to Cisco. Network transparency enables Cisco to maintain compliance and compatibility
with the largest set of features already configured in the packet network. Many features require
visibility into Layer 3 (IP) and Layer 4 (TCP) packet header information to make
determinations on how specific types of traffic are to be handled.
Many non-Cisco application acceleration products use a dynamically configured or statically-
defined IP tunnel between devices. Tunnels can masquerade critical packet header information
from the network and force network administrators into drastic feature reconfigurations. For
instance, if TCP port information is masqueraded by a non-Cisco accelerator, the upstream
router is unable to then determine what application the traffic is related to, which can cause
problems for things such as access lists, firewall policies, or quality of service (QoS). If IP
address information is masqueraded, then similar problems can occur.
Only those network features that require visibility into the application payload can be impeded by WAAS. If Cisco WAAS is configured to compress and suppress redundant data from a flow, and a network feature is configured to examine the traffic after the WAE optimization, that feature will not be able to locate the strings it needs to perform its assigned function.
Network Feature Compatibility
Feature | Supported? | Importance
Layer 3 and Layer 4 header visibility | Yes | Provides visibility to network features, monitoring, and reporting.
DSCP and TOS preservation | Yes | Preserves QoS markings on packets received by the WAE.
QoS | Yes | Traffic classification, prioritization, and marking in the router is fully supported. WAN QoS is not supported, but it is also not necessary.
Router queuing configuration | Yes | Use different queuing mechanisms for different traffic types as necessary.
Policing, shaping, and rate limiting | Yes | Rate limiting and policing are fully supported; can police or rate limit based on compressed data depending on where deployed. Shaping is supported.
MPLS | Partial | MPLS decapsulation occurs before packets are forwarded to the WAE. WCCP and the WAE are not VRF-aware.
The goal of WAASv4 with transparency is to maintain compatibility with a broad set of IOS
capabilities.
The following defines features from the Network Feature Compatibility table:
Layer 3 and Layer 4 header visibility: Cisco WAAS provides full visibility to IP and
TCP header information. This information is critical to the proper operation of network
value-added features such as those found in this table. Non-transparent accelerators do
NOT provide IP and TCP header preservation, which defeats many of these features.
DSCP and TOS preservation: Cisco WAAS preserves existing DiffServ Code Point (DSCP) and Type of Service (TOS) markings to comply with QoS configurations. For any packet received by a WAE containing a pre-existing marking in the ToS byte, the ToS byte is applied to the outgoing (optimized or pass-through) packet to preserve end-to-end QoS semantics.
QoS: Cisco WAAS preserves DSCP and TOS markings, and the L3/L4 header. As Cisco
WAAS is a transparent acceleration solution, it can leverage existing QoS capabilities in
the network, including classification (identifying the flow based on its characteristics), pre-
queuing operations (such as policing, marking, dropping), advanced queuing architectures
(LLQ, CBWFQ, PQ, shaping), and post-queuing optimizations (link fragmentation and
interleaving, packet header compression). Non-transparent accelerators break QoS
classification, which renders the remainder of router-based QoS nearly useless.
CoS: Cisco WAAS does not need to support class of service (CoS), as this is commonly
used in WAN Layer 2 protocols. Cisco WAE appliances use Ethernet for Layer 2.
NBAR: Cisco WAAS provides partial support for Network Based Application Recognition
(NBAR), because there can be situations where NBAR needs to find application strings
within optimized (heavily compressed) data. This occurrence would likely be infrequent, as
application strings are generally found at the beginning of the connection before
optimizations are applied.
Summary
This topic summarizes the key points that were discussed in this lesson.
Performance, Scalability, and Capacity Sizing

Objectives
Upon completing this lesson, you will be able to describe how to size Cisco WAAS solutions
based on performance, scalability, and capacity sizing metrics. This includes being able to meet
these objectives:
Describe the key areas of understanding required to adequately design a Cisco WAAS
solution
Describe the performance and scalability characteristics of each Cisco WAE platform and
understand the positioning of each
Describe how the WCCPv2 configuration can be manipulated to achieve design
requirements
Cisco WAAS Design Fundamentals
This topic describes the characteristics of the IT operating environment that need to be
understood before designing a Cisco WAAS solution, and which product capabilities must be
examined to ensure proper alignment of the correct platforms in a Cisco WAAS design.
Designing a Cisco WAAS solution is simple and straightforward; however, designers are best positioned if they have an intimate understanding of their organization's IT infrastructure.
Before any design exercise, be sure to examine the business challenges being experienced to
ensure that Cisco WAAS can help overcome the challenge:
Application performance over the WAN: Cisco WAAS employs powerful WAN
optimization and application acceleration technologies that help to improve performance
for applications that are already centralized. Cisco WAAS also helps to ensure that
performance metrics are maintained for applications associated with infrastructure that is
being consolidated in such a way that users can access the application over the WAN.
WAN bandwidth upgrades and network upgrades: Cisco WAAS compression
technologies, such as DRE and persistent LZ compression, help to minimize bandwidth
consumption on the WAN and can help mitigate the need for a bandwidth or network
upgrade. Many users find that deploying Cisco WAAS also helps to alleviate bandwidth
utilization to provide network capacity to support other applications, such as VoIP or video.
Server, storage, and application consolidation: Cisco WAAS provides performance
improvement capabilities that allow IT organizations to consolidate costly server, storage,
and application infrastructure from distributed locations back to the data center. This
provides significant cost relief and minimizes management overhead of distributed
systems.
Global collaboration: Environments that share data across locations (for example, marketing collateral, software development, or computer-aided design/computer-aided manufacturing (CAD/CAM) files) benefit from Cisco WAAS, which enables consolidation of infrastructure and performance improvements that make global collaboration mimic local collaboration.
Desktop management, software distribution, patch management: Cisco WAAS enables
consolidation of these services in the data center. By leveraging WAN optimization and
application acceleration capabilities, Cisco WAAS can help provide near-LAN response
times for delivery of packages from such systems while also helping to offload the servers
in the data center.
Disaster recovery/business continuance (DR/BC) and replication: Cisco WAAS helps
to improve performance of replication applications, and backup and restore over WAN
functions. This helps to ensure that geographically distributed storage repositories are kept
more closely synchronized, which helps improve recoverability if a failure is encountered.
Compliance: Cisco WAAS also helps to ensure that compliance regulations focused on data availability and replication are met, by enabling centralization and improving performance for data movement facilities.
The following data about the underlying network infrastructure should be considered when
undergoing a Cisco WAAS design:
Number of locations and connectivity to each: Understand how many locations (branch
offices, regional hubs, data center locations) require optimization capabilities; how much
bandwidth is available to each site; the amount of latency between locations; and the
amount of expected packet loss.
Network architecture: Understand what devices are responsible for moving and
manipulating data throughout the network including switches, routers, firewalls, and other
appliances. Understand the software version on devices as well as the configuration of
these devices.
Per-location network characteristics: Understand how the network is used at each location, including traffic patterns and workload, and whether multiple network paths exist (load-balanced or active/standby links). Understand path selection mechanisms in the network and the symmetry or asymmetry of network routing.
Ownership and management of network elements: Understand how the network is
managed and what role the network plays in end-to-end application performance and
visibility. For example, quality of service (QoS) configurations and requirements, network
management requirements, and monitoring and analysis requirements.
High-availability requirements: Understand which locations have high-availability
configurations, for example, redundant WAN links, redundant routers, redundant LAN
switches, and redundant connections.
The following data about the applications using the network and the nodes that are using the
applications is also important to understand:
Workstations and servers: Understand what operating systems are in use, patch revision
levels, and configuration.
Applications: Understand what applications are in use and how they are used, and the size and frequency of use of application objects or content. Understand application protocols and workload patterns. Identify which applications are candidates for optimization and acceleration and how Cisco WAAS can provide benefit to these applications, potentially even enabling centralization and consolidation.
Collaboration requirements: Understand which applications or data sets are shared
among users locally within a site or among users within distributed sites and how Cisco
WAAS can improve collaboration through performance improvements or minimize
opportunity for coherency challenges with distributed data through centralization and
consolidation.
Per-user storage capacity: Understand how users distributed throughout the enterprise use
storage capacity per user, per workgroup, and per location. Understand how much capacity
each user requires from an active working set perspective and how much history is
retained, as well as any user quotas in place.
Desktop management: Understand which IT services are in use to manage distributed
workstations, such as desktops and laptops, including software distribution systems, patch
management systems, antivirus systems and definition file updates, and remote
administration. These systems can benefit from the capabilities provided by Cisco WAAS
and might become candidates for consolidation.
User count: Understand how many users are in each location and the application workload
and productivity characteristics.
WAE Sizing Guidelines
Optimized TCP connections
WAN bandwidth capacity
Disk capacity
License
Number of peers
High availability
Designing a Cisco WAAS solution requires understanding of the six components shown in the
figure:
Optimized TCP connections: Each Cisco Wide Area Application Engine (WAE) has an
optimization capacity that is defined by the number of TCP connections that can be
optimized concurrently.
WAN bandwidth capacity: Each Cisco WAE has a WAN throughput capacity.
Disk capacity: Each Cisco WAE has disk storage capacity and sizing should account for
storage requirements for each location.
License: Three licenses are available, and each Cisco WAE should be configured with the
license that is required to support its function on the WAAS topology.
Number of peers: Each Cisco WAE has a fan-out capacity defined by the number of
currently connected peers.
High availability: Any location requiring high availability (for example, data center
locations) should be designed with N+1 design principles.
Cisco WAAS solutions must be designed based on a number of factors, including the number of TCP connections that are going to be optimized by each WAE within the topology. For WAEs within branch offices, this is generally a simple calculation; however, for a WAE within a data center serving as an aggregation point for multiple WAEs in multiple branch offices, more consideration must be given.
Most productive users tend to have anywhere between 15 and 20 open TCP connections at any given time. This includes connections for instant messaging, e-mail, web browsing, enterprise applications, file shares, stock tickers, weather bugs, and many other processes running on the user's PC. Of these 15 to 20 open TCP connections, generally only 4 to 7 require optimization or are related to an application that can be optimized by Cisco WAAS. In many cases the open connection count per user is lower; however, 4 to 7 is a conservative estimate that can be used safely in most cases.
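As a rough illustration, the 4-to-7-connections-per-user guideline can be turned into a simple per-site estimate. The sketch below is not a Cisco tool; the function name, the 100-user site, and the default of 7 connections per user are assumptions taken from the conservative end of the guideline.

```python
# Sketch: estimate concurrent optimized TCP connections for a site,
# using the 4-7 optimizable-connections-per-user guideline.
# Names and defaults are illustrative, not from Cisco WAAS.

def estimate_optimized_connections(users: int, conns_per_user: int = 7) -> int:
    """Conservative estimate: users x optimizable connections per user."""
    return users * conns_per_user

# A hypothetical 100-user branch office:
print(estimate_optimized_connections(100))     # 700 (conservative, 7 per user)
print(estimate_optimized_connections(100, 4))  # 400 (low end of the guideline)
```

The result is compared against the maximum optimized TCP connections rating of a candidate WAE platform.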
For an accurate view of how many connections a user has open, two tools can be used. The first
is NetFlow, which can provide the number of completed flows in a given location. The second,
which provides real-time and historical views of connection data, is Microsoft Performance
Monitor, which is included in each Microsoft operating system available today.
Optimized TCP Connections
Microsoft Performance Monitor: TCP performance object > Connections Established counter
The figure shows the output of Microsoft Performance Monitor on a user workstation. To use
Microsoft Performance Monitor (also known as perfmon) to view the real-time count of open
TCP connections, open perfmon in the Microsoft Management Console (MMC) and examine
the TCP Performance object, Connections Established counter. This shows all of the
connections open at a given time on a client workstation. The netstat utility also shows a list of
all of the open connections, including the four-tuple (source IP, destination IP, source TCP
port, destination TCP port), which allows you to discern which connections can and should be
optimized:
C:\Documents and Settings\Administrator>netstat -n
Active Connections
Proto Local Address Foreign Address State
TCP 2.1.1.207:389 2.1.1.207:9804 TIME_WAIT
TCP 2.1.1.207:389 2.1.1.207:9805 TIME_WAIT
TCP 2.1.1.207:3389 10.21.81.179:49154 ESTABLISHED
TCP 2.1.1.207:9782 171.70.145.48:80 ESTABLISHED
TCP 10.10.10.100:135 10.10.10.100:9801 ESTABLISHED
TCP 10.10.10.100:1025 10.10.10.100:1259 ESTABLISHED
TCP 10.10.10.100:1025 10.10.10.100:1261 ESTABLISHED
TCP 10.10.10.100:1025 10.10.10.100:1461 ESTABLISHED
TCP 10.10.10.100:1025 10.10.10.100:9802 ESTABLISHED
TCP 10.10.10.100:1259 10.10.10.100:1025 ESTABLISHED
TCP 10.10.10.100:1261 10.10.10.100:1025 ESTABLISHED
TCP 10.10.10.100:1461 10.10.10.100:1025 ESTABLISHED
TCP 10.10.10.100:9800 10.10.10.100:445 TIME_WAIT
TCP 10.10.10.100:9801 10.10.10.100:135 ESTABLISHED
TCP 10.10.10.100:9802 10.10.10.100:1025 ESTABLISHED
TCP 10.10.10.108:9806 10.10.10.108:445 TIME_WAIT
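To tally such output without counting by hand, the lines can be filtered by state. The following is a minimal sketch that assumes the four-column Windows netstat -n layout shown above (Proto, Local Address, Foreign Address, State); the sample lines are abbreviated from that output.

```python
# Sketch: count connections by state in saved "netstat -n" output.
# Assumes the four-column Windows layout shown above.

def count_state(netstat_output: str, state: str = "ESTABLISHED") -> int:
    count = 0
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) == 4 and fields[0] == "TCP" and fields[3] == state:
            count += 1
    return count

sample = """\
TCP  2.1.1.207:3389     10.21.81.179:49154  ESTABLISHED
TCP  2.1.1.207:389      2.1.1.207:9804      TIME_WAIT
TCP  10.10.10.100:1025  10.10.10.100:1259   ESTABLISHED
"""
print(count_state(sample))               # 2
print(count_state(sample, "TIME_WAIT"))  # 1
```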
Cisco WAAS solutions need to be designed with WAN bandwidth capacity per location in
mind. Each WAE device has a recommended WAN bandwidth capacity that can be supported.
It is only necessary to size the Cisco WAAS solution based on the maximum amount of optimized WAN throughput. For instance, if sizing for a location with a T3 (45 Mbps) where only 20 Mbps of that capacity needs to be optimized, a device supporting 20 Mbps can be used for that location. Tools found in the network, such as Network Based Application Recognition (NBAR) and NetFlow, provide an accurate view of link utilization per application to help guide the design. If redundant links that are used in a load-balanced (active/active) configuration are employed in the network, the sum total of these links should be considered when designing for that location. If redundant links that are used in a failover (active/passive) configuration are employed, the capacity of the largest link in the location should be considered when designing for that location.
Cisco WAE devices have recommended WAN bandwidth capacity; however, the Cisco WAAS
software does not restrict the optimized output of these devices. For instance, a device rated for
a 20 Mbps link can actually drive more than 20 Mbps of optimized throughput over the WAN.
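The link-redundancy guidance above reduces to a small rule: sum the links for active/active designs, and take the largest link for active/passive failover designs. A sketch, with illustrative link speeds:

```python
# Sketch: derive the WAN bandwidth figure to size a WAE against.
# Active/active (load-balanced) links are summed; for active/passive
# failover only the largest link matters. Values are illustrative.

def sizing_bandwidth_mbps(links_mbps, load_balanced):
    return sum(links_mbps) if load_balanced else max(links_mbps)

print(sizing_bandwidth_mbps([20.0, 20.0], load_balanced=True))   # 40.0
print(sizing_bandwidth_mbps([45.0, 10.0], load_balanced=False))  # 45.0
```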
Storage Capacity Requirements: Edge
(Figure: the sizing components list, with disk capacity broken down into dynamic capacity and utility capacity.)
Each Cisco WAE device has some amount of storage capacity. This capacity should be aligned
with the storage requirements for a given location. Storage requirements for a given location
can be broken down into four categories: dynamic capacity, utility capacity, static capacity, and
total storage capacity.
Dynamic capacity is defined as the amount of disk capacity necessary to support a history of
interactive user operation. This is primarily relevant to the traffic that is traversing a given
network connection over a period of time that impacts the compression history for that location.
In most cases, one week of compression history is adequate to ensure high performance access
to the most recently accessed applications, content, and other data.
Utility capacity is defined as the amount of disk capacity necessary to support IT infrastructure
and services at the remote office. These services include software distribution (application
installation files), patch management (hotfix files, service pack files), antivirus definition file
updates, and desktop management functions.
Static capacity is defined as the amount of disk capacity necessary to support interactive user
access to historical data such as home directories. Most branch offices today have file servers,
and these file servers are commonly configured in such a way that users are allowed to store
only a fixed amount of data on them (disk quotas enforce this capacity utilization limit). It is
best to provide up to two weeks of history of previously accessed files for such data in the
remote office depending on the size, type, and nature of the files being stored.
Total capacity is defined as the amount of storage capacity that was immediately available to
the users prior to deploying WAAS. It is not realistic to try and size a location based on the
amount of capacity that was readily available to users prior to deploying WAAS given that
users need interactive access only to the most frequently used pieces of data. Most data that ages beyond one week is accessed exponentially less often than data that is only one to two days old.
Storage Capacity Requirements: Edge (Cont.)
While it is generally not necessary to embark on a study of the exact amount of disk required to
support the dynamic capacity requirements (compression history) of a given location, this slide
is provided as a reference. For instance, for a location with a T1 connection (1.544Mbps) where
the WAN is commonly 75 percent used for five days a week for 50 percent of the day, you can
calculate the amount of storage capacity required to support one week worth of compression
history data.
1.544 Mbps is equal to 192 KB/s. Conversion from bits per second (bps) to bytes per second
(Bps) requires that the value in bits per second be divided by 8. This means that the WAN
connection is able to support up to 192 KB per second of data throughput. To convert this value
to bytes per day, multiply by 60 (seconds to minutes), then by 60 again (minutes to hours), and
then by 24 (hours per day). In this example, 192 KB/s x 60 x 60 x 24 becomes 16.6 GB per day.
Using the above metrics (75 percent utilized for five days per week for 50 percent of each day),
you can calculate the disk capacity requirements to support a week assuming all data is net
new. 16.6 GB/day x 5 days per week equates to 83 GB for the five-day week. 83 GB x 75
percent utilization per day x 50 percent of each day equates to 31.3 GB of disk capacity
required to support a given week of data.
The last piece of this equation is making assumptions about the amount of redundancy found in that data, because the DRE feature of Cisco WAAS removes redundant pieces of data from the network. Assuming that the data is 4:1 compressible (75 percent redundant, which in many cases is very conservative), 31.3 GB x (1 - 0.75) yields approximately 7 GB of disk capacity for compression history (dynamic capacity) to support that location.
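The arithmetic above can be captured in one function. This is a sketch of the worked example, not a Cisco sizing tool; fed the T1 figures, it reproduces the roughly 7-8 GB weekly dynamic capacity result.

```python
# Sketch of the dynamic-capacity (compression history) calculation above.
# link_mbps -> KB/s -> GB/day, scaled by utilization, days per week, and
# fraction of day, then reduced by the assumed DRE-removable redundancy.

def dynamic_capacity_gb(link_mbps, utilization, days_per_week,
                        fraction_of_day, redundancy):
    kbytes_per_sec = link_mbps * 1000 / 8       # Mbps -> KB/s
    gb_per_day = kbytes_per_sec * 86400 / 1e6   # KB/day -> GB/day
    gb_per_week = gb_per_day * days_per_week * utilization * fraction_of_day
    return gb_per_week * (1 - redundancy)       # DRE removes redundant data

# T1 link, 75% utilized, 5 days/week, 50% of each day, 4:1 compressible:
print(round(dynamic_capacity_gb(1.544, 0.75, 5, 0.5, 0.75), 1))  # 7.8
```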
Utility capacity, which is the amount of disk capacity needed to support IT infrastructure
services, such as software distribution, should also be examined. It is best to have the last one
month of previous software patches, services packs, applications, and other infrastructure files
readily available in the edge device. This kind of data is generally not changed by the remote
user; rather it is read and installed on the user workstation.
Storage Capacity Requirements: Edge (Cont.)
Static capacity is defined as the amount of disk capacity necessary to support interactive user access to historical data, such as home directories; generally two weeks of file history:
Each user likely has capacity on a file server, with some form of quotas being enforced.
Files that are stored are typically significantly less likely to be accessed after they have been idle for over one week.
This content is generally dynamic at the beginning of its life and is interactively accessed, or otherwise changed or updated, by a user at the edge, and commonly consumes capacity in the CIFS acceleration cache at the edge of the network.
After a week of being idle, the content is unlikely to be accessed in a read/write fashion again.
The static capacity required at the edge of the network is defined as the amount of disk storage
necessary to support interactive user access to historical files. It is best to provide capacity to
support the last two weeks worth of files accessed from the user home directory. These files,
unlike the utility capacity, are likely accessed in a read/write fashion.
Cisco WAAS core devices must also be sized appropriately. Unlike edge devices, where disk
sizing takes into account factors such as utility capacity and static capacity, the core Cisco
WAAS devices need to be sized only according to dynamic capacity. Cisco WAAS core
devices do not cache files; they cache only segments of TCP data identified by DRE.
It should also be assumed that not every peer is active at the exact same time. In this way, some level of oversubscription can be designed into the core device storage capacity sizing. For instance, if each of the branch office locations requires 7 GB per week, and the assumption is made that only one in two peer (edge) devices is active at a given moment, then 3.5 GB of capacity per peer is required on the core device.
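That oversubscription rule is easy to sketch. The 50-branch deployment below is hypothetical; the 7 GB per peer and the one-in-two active ratio come from the example above.

```python
# Sketch: core WAE compression-history sizing with peer oversubscription.

def core_capacity_gb(peers, gb_per_peer, active_ratio):
    # active_ratio: fraction of edge peers assumed active at any moment
    return peers * gb_per_peer * active_ratio

# Hypothetical 50-branch deployment, 7 GB per branch, 1-in-2 active:
print(core_capacity_gb(50, 7.0, 0.5))  # 175.0
```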
License Requirements
The appropriate license should be purchased for each Cisco WAE in the WAAS topology:
Transport: A WAE configured with the Transport license provides only WAN optimization capabilities (DRE, TFO, LZ) and no application-specific acceleration (that is, CIFS/print).
Enterprise: A WAE configured with the Enterprise license provides WAN optimization capabilities (DRE, TFO, LZ) along with application-specific acceleration (that is, CIFS/print).
Central Manager: A WAE configured as a Central Manager provides centralized management services for WAEs within the topology.
The appropriate memory configuration should also be purchased based on license and services:
Enterprise with WAFS edge: minimum 1 GB memory
Enterprise with WAFS core: minimum 2 GB memory
Enterprise with both services: minimum 2 GB memory
Fan-out and Number of Peers
Each WAE device, when acting as a core device, has a set of fan-out limits that must be considered:
Devices acting as an edge device only do not need to be examined; arbitrary cross-site access does not impact fan-out scalability metrics.
Devices acting as both an edge and a core need to be examined as a core device.
Each WAE platform, when deployed as a Central Manager, has a limit on the number of child WAEs it can manage within the topology:
It is recommended that the Central Manager be clustered (active/standby) in environments with more than 25 managed WAEs.
Central Manager clustering does not change the number of WAEs that can be managed: it provides only failover capabilities and not management load balancing.
Each WAE device that is acting as an aggregation point for remote WAEs has a limit in terms of the number of peers it can manage. This is called the fan-out, that is, the number of edge devices that a core device can support. When examining fan-out, arbitrary cross-site interactive access does not need to be considered, unless an edge device is also serving as a dedicated core for a large number of remote users who are themselves sitting behind WAEs in their respective locations.
The Central Manager also has a fan-out limit based on the hardware platform and memory configuration of the WAE that is acting as a Central Manager. It is best, but not required, to dedicate two WAEs to the Central Manager role. It is recommended that redundant Central Manager WAEs be deployed in any medium-to-large scale deployment (25 locations or more) or where high availability is a requirement.

Using two Central Manager WAEs does not increase the number of WAEs that can be managed, because the Central Manager is clustered in an active/standby configuration.
High Availability
Any locations that require high availability (generally large branch offices, regional offices, and
data center locations) should be configured with the appropriate number of WAEs and one
extra WAE. This configuration is called N+1, where N is the number of WAEs necessary to
support the workload requirements of the location and +1 is an extra WAE device that provides
enough extra headroom to continue providing adequate levels of service should a single device
fail. When integrated with load-sharing interception mechanisms, such as WCCPv2, ACE, or
inline, the load is shared among all WAEs in some capacity. When integrated with non-load-
sharing interception mechanisms, such as PBR, only the first WAE is used until it fails.
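The N+1 rule can be expressed as a one-line calculation. The connection figures below are hypothetical; in practice the per-device rating comes from the platform tables later in this lesson.

```python
# Sketch: N+1 WAE count for a high-availability site. N devices cover
# the workload; the +1 provides headroom if a single device fails.
import math

def waes_required(peak_optimized_conns, conns_per_wae):
    n = math.ceil(peak_optimized_conns / conns_per_wae)
    return n + 1

# Hypothetical site: 1400 peak optimized connections on devices rated
# for 500 optimized connections each -> N = 3, deploy 4.
print(waes_required(1400, 500))  # 4
```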
Cisco WAE Device Positioning
This topic describes the Cisco WAE platform metrics that must be examined when designing a
Cisco WAAS solution and where each Cisco WAE platform is intended to be deployed.
(Figure: platform positioning plotted on scalability versus performance axes, from the NME-WAE-302 and NME-WAE-502 for the branch or remote office, through the WAE-512 and WAE-612 for the regional office or small data center, up to the WAE-7326 and ACE.)
The Cisco WAE family comprises devices that range from the small branch office all the way up to the enterprise data center:
NME-WAE-302: A router-integrated network module for Cisco Integrated Services
Router (ISR) models 2811, 2821, 2851, 3825, and 3845. This module is targeted to the
small branch office where only WAN optimization capabilities are required.
NME-WAE-502: A router-integrated network module for the Cisco ISR models 2811,
2821, 2851, 3825, and 3845. This module is targeted at the small and medium branch
offices where WAN optimization and application acceleration capabilities are required.
WAE-512: 1RU appliance for small or medium branch office locations, regional office
locations, or small data center deployments.
WAE-612: 1RU appliance for medium or large branch office locations, regional office
locations, or data center deployments.
WAE-7326: 2RU appliance for the very large branch, regional office, or enterprise data
center.
The Cisco ACE module provides exceptional scalability in the data center for Cisco WAAS.
Platform | Mem (GB) | Max Drives | Drive Capacity/Max Capacity (GB) | Max Optimized TCP Conns | Max Edge CIFS Sessions | WAN Link Capacity (Mbps) | Max Optimized Throughput (Mbps) | CM Scalability (Devices Managed) | Core Fan-Out (Number of Peers)

Current Platforms
NME-WAE-302 | 0.5 | 1 | 80/80 | 250 | n/a | 4 | 90 | n/a | 1
This table shows the performance, scalability, and capacity metrics necessary for designing a
WAAS solution. Each WAE platform has a variety of static system limits that should be
considered, including:
Memory: This figure represents the memory configuration of each of the WAEs. Memory
directly impacts each of the performance and scalability metrics of the platform and also
defines which services can be run on a given platform.
Drive unit capacity, system capacity, and maximum number of drives: This is the size
of the drives that can be installed in the WAE and the maximum usable system capacity
(RAID-1 protected for systems with two or more drives). The amount of disk capacity
determines the size of the compression history and cached data on each platform.
Maximum optimized TCP connections: This is the maximum number of TCP
connections that can be optimized by a given WAE. When this value is exceeded, the WAE
passes through new connections without optimizations until the connection count falls
below 95 percent of maximum system capacity.
Maximum CIFS sessions: This is the maximum number of Common Internet File System
(CIFS) sessions that can be accelerated by a given WAE acting as a Wide Area File
Services (WAFS) Edge, where CIFS sessions count against the TCP connection count. A
WAE acting as a WAFS Core must have 2 GB of memory or more installed; on the Core,
CIFS sessions count as optimized TCP connections and do not need to be counted
separately.
WAN link capacity: This is the recommended amount of WAN link capacity that the
WAE can fully optimize. This is not an enforced number, and in many cases, the WAE can
drive beyond the capacity limits shown in the figure. With compression and DRE disabled
(that is, using TFO only), the WAE can drive far larger amounts of capacity, up to 450 Mbps.
Maximum optimized throughput: This is the maximum amount of optimized throughput
that can be driven through the WAE platform. This figure represents perceived application
performance, not WAN bandwidth. For very high bandwidth deployments, Cisco WAAS
can provide reduction of bandwidth utilization but might not show throughput
improvement if the amount of WAN bandwidth is larger than the maximum optimized
throughput of the system. Maximum throughput is probably not reachable through a single
client connection and likely requires multiple concurrent high-performance flows
traversing the WAE.
Central Manager scalability: This is the maximum number of devices a WAE can
manage when acting as a WAAS Central Manager.
Core fan-out: This figure represents the maximum number of peer WAEs that a WAE can
support when acting as an aggregation device. Fan-out numbers below 50:1 are not
enforced limits, but 50:1 is an enforced limit. The numbers here are recommendations to
accommodate a beneficial compression history per connected peer. Fan-out numbers do not
account for arbitrary cross-site access.
The numbers shown in this table are static system limits. When the WAE reaches these static
system limits, the following occurs:
Overload on number of concurrent TCP connections: After the optimized connection
count reaches 100 percent of the maximum system value, the WAE begins passing through
new TCP connections. After the value falls below 95 percent, the WAE again accepts new
connections to optimize. WAEs have been tested with hundreds of thousands of TCP
connections beyond those that are being optimized with little effect on performance.
Overload on number of concurrent CIFS sessions for WAFS Edge WAEs: The WAE
blocks additional CIFS sessions.
Cisco SEs and Cisco partners have access to a WAE sizing tool that can help simplify the
design of a WAAS solution for your network. Please consult with your Cisco SE or partner SE
on the use of this tool.
Note Some legacy platforms, such as the File Engine (FE) and Content Engine (CE) support
Cisco WAAS. These include the FE/CE-511 (identical to WAE-511), FE/CE-611 and CE-566
(identical to WAE-611), and FE/CE-7326 (identical to WAE-7326).
Note The NME-WAE-302 is a transport-only platform. It does not support CIFS acceleration.
WCCPv2 Design Considerations
WCCPv2 is a powerful interception protocol that allows users to deploy Cisco WAAS in an
off-path mode while providing fail-through operation, failover, load balancing, and warm
removal and insertion of devices. WCCPv2 is also flexible in how it is configured, giving
users a means of ensuring optimal system operation.
Cisco WAAS uses two WCCP service groups: 61 and 62. The service groups are identical in
that they instruct the router to promiscuously intercept and redirect all TCP traffic to one of the
WCCP child devices (WAEs). They differ in which predictor is used to determine how
traffic is load balanced across WAEs within a given location: service group 61 uses the
source IP address as the predictor for load balancing, and service group 62 uses the
destination IP address.
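As a minimal sketch, enabling the two service groups involves configuration on both the router and the WAE; the router IP address and router-list number below are placeholders for illustration:

```
! Router side (IOS): enable both WAAS service groups
ip wccp 61
ip wccp 62

! WAE side (WAAS CLI): register with the intercepting router
! and enable promiscuous TCP interception
wccp router-list 1 10.10.10.1
wccp tcp-promiscuous router-list-num 1
wccp version 2
```

The router does not begin redirecting traffic until the redirect statements are also applied to the appropriate interfaces, as described next.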
Given that one service group needs to be in the path of traffic for each direction of traffic flow,
placement of these services determines how traffic is pinned to a WAE. For instance, in a
branch office, if service group 61 is applied on the LAN side of the router for ingress traffic
(traffic going toward the WAN from the branch office), the router load balances the flows
across the child WAE devices based on the source IP address of the flow. With that in mind,
traffic from a particular user is redirected to the same WAE each time that user has traffic
leaving the network. In the reverse direction, when using service group 62 on the WAN side of
the router for ingress traffic (traffic coming into the branch office from the WAN), the router
load balances based on the destination IP address. This means that traffic destined to a
particular user is always redirected to the same WAE.
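The branch-office placement described above can be sketched in IOS as follows; the interface names are assumptions for illustration:

```
! Enable the WAAS service groups on the branch router
ip wccp 61
ip wccp 62
!
interface FastEthernet0/0
 description LAN interface toward branch clients
 ! Load balance on source IP for traffic leaving the branch
 ip wccp 61 redirect in
!
interface Serial0/0/0
 description WAN link toward the data center
 ! Load balance on destination IP for traffic entering the branch
 ip wccp 62 redirect in
```

With this placement, both directions of a given client's traffic hash to the same WAE, preserving that client's compression history.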
Ingress interception is always preferred over egress interception. Ingress interception on a
network device performs interception and redirection as traffic is entering an interface from the
outside. Using ingress interception equates to lower utilization of network device resources
such as CPUs. Egress interception, on the other hand, performs interception after the traffic has
passed through the route processor and is getting ready to exit an interface. This means that the
traffic has already consumed additional cycles on the device, and in some cases can lead to
higher CPU utilization.
[Figure: WCCPv2 configuration example. Applying service group 61 inbound on the LAN interface and service group 62 inbound on the WAN interface keeps flows from a particular client pinned to the same WAE in both directions of traffic flow, yielding a better likelihood of compression per client. Load balancing is based on nodes within the location. Most routers support only GRE redirect, GRE return, and hash assignment, which are the default WCCP service configuration parameters.]
This figure shows two examples of the placement of service groups 61 and 62. In the top
example, traffic going to a particular server is pinned to the
same WAE for each direction of traffic flow. This effectively yields higher overall compression
when users access a common server. In the lower example, traffic from a particular client is
pinned to the same WAE for each direction of traffic flow. This effectively yields higher
overall compression for the user if data being accessed across multiple servers has
commonality.
In general, it is recommended that the top example be employed in the data center and the
bottom example be employed in the branch office.
WCCPv2 Configuration: Service Isolation
[Figure: In the branch, 62/in on the LAN and 61/in on the WAN keep flows to a particular server pinned to the same WAE in both directions of traffic flow, yielding a better likelihood of compression per server; load balancing is based on nodes outside of the location. In the data center, 62/in and 61/out on WAN1 keep flows to a particular server pinned to the same WAE in both directions of traffic flow; no ACLs are required to avoid redirecting flows to and from the unoptimized branch on WAN2.]
It is generally recommended that only ingress interception and redirection be employed when
using WCCPv2; however, there are times when it is desirable to use egress interception and
redirection, as well. The figure shows a configuration in the data center where a combination of
ingress and egress interception and redirection is used. In this case, the primary reason for
configuring WCCPv2 this way is to prevent traffic from the second branch office (bottom left)
from having its traffic intercepted and redirected to a Cisco WAE device. This type of
configuration is applicable in environments where the routers have multiple WAN interfaces
and is commonly referred to as service isolation mode. With service isolation mode, all
interception and redirection is confined to a single interface, and only traffic traversing that
interface is intercepted and redirected to Cisco WAAS.
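A minimal sketch of the service isolation configuration described above, assuming WAN1 is Serial1/0 (the interface name is an assumption):

```
ip wccp 61
ip wccp 62
!
interface Serial1/0
 description WAN1 - link to the optimized branch office
 ! Intercept traffic arriving from the optimized branch
 ip wccp 62 redirect in
 ! Intercept traffic departing toward the optimized branch
 ip wccp 61 redirect out
!
! No WCCP statements are applied to the WAN2 interface, so
! traffic to and from the unoptimized branch is never
! intercepted or redirected.
```

Because both interception points are confined to the single WAN1 interface, traffic traversing any other interface bypasses Cisco WAAS entirely.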
When configuring WCCPv2 in the data center on the data center switches, it is important to
remember that interception and redirection must be configured on routed interfaces (that is,
Layer 3 interfaces such as switched virtual interfaces, or SVIs, otherwise known as VLAN
interfaces). WCCPv2 cannot be configured on a Layer 2 interface. Much like
with a router configuration, WCCPv2 must be configured such that traffic going out toward the
WAN and traffic coming in from the WAN is intercepted and redirected to the local WAE
devices. If multiple Layer 3 paths exist (for instance, if the connections between the switches
are Layer 3) then each path should be evaluated to ensure that flows do not bypass redirection.
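On a Catalyst switch, the redirect statements are therefore applied to SVIs rather than to physical ports; the following sketch assumes hypothetical VLAN numbers for the server-facing and WAN-facing sides:

```
ip wccp 61
ip wccp 62
!
interface Vlan10
 description Routed SVI facing the server farm
 ! Pin flows per server using the source IP (the server)
 ip wccp 61 redirect in
!
interface Vlan20
 description Routed SVI facing the WAN edge
 ! Pin flows per server using the destination IP (the server)
 ip wccp 62 redirect in
```

This placement follows the data center recommendation given earlier: both directions of traffic for a given server hash to the same WAE.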
An alternative to using WCCPv2 configuration on the switches is to employ the Cisco Content
Services Module (CSM) or the Application Control Engine (ACE) module for the Catalyst
6500 series. These modules allow for data center scalability and integration without requiring
Layer 3 interfaces.
Summary
This topic summarizes the key points that were discussed in this lesson.
Module Summary
Cisco WAE appliances and network modules integrate into the network
using in-path interception or off-path interception techniques such as
WCCPv2, PBR, or ACE.
Cisco WAE devices automatically discover one another and negotiate
optimization policy.
Cisco WAAS provides service transparency, which supports any network
feature that requires visibility into packet header information, and also
supports environments with asymmetric routing.
Designing a Cisco WAAS solution involves understanding of the business
challenges, network environment and configuration, and applications and
utilization characteristics.
WAN bandwidth capacity, disk capacity, and fan-out ratios should be
considered when designing a Cisco WAAS solution.
Cisco WAEs have static system limits, such as the supported number of
optimized TCP connections.
Module Self-Check Answer Key
Q1) B
Q2) B, C
Q3) A
Q4) E
Q5) A
Q6) B
Q7) D
Q8) A, C, D, F
Q9) C
Q10) B
Q11) B
Q12) A
Q13) B