
Cisco Support Community


ASA oversubscription / interface errors Troubleshooting


Document

Panagiotis T Ka... Aug 5th, 2010


Table of Contents

1 Introduction
2 Identification
2.1 Problem nature
2.2 CPU
2.3 Interfaces
2.4 Load
3 Mitigation / Alleviation
3.1 Processes
3.2 Traffic
3.3 Optimize throughput
3.4 Flow Control
3.5 Active/Active failover
3.6 More hardware

1 Introduction
There are many times when indications of oversubscription or excessive load on a firewall or
another network device are not enough to prove that oversubscription is really happening. It is
also often unclear how to identify and solve such issues. This document presents the basic
troubleshooting steps that someone needs to take in order to pinpoint an oversubscription
problem on a Cisco ASA firewall and proposes potential solutions to overcome it. The
corresponding document for the FWSM is located here.

2 Identification
The most important aspect of solving an oversubscription issue is its identification. Network
engineers will often incorrectly attribute network problems to excessive traffic, which leads
devices like firewalls to be wrongly considered the bottleneck. Other times they will
focus on other parts of the network in cases where the firewall's processing power is not enough
to handle the traffic. There can be multiple indications of load problems on firewall devices, and
putting them together will help us understand whether traffic is indeed the reason for the problem
or whether we should focus elsewhere. That is what this section will try to describe.

2.1 Problem nature


Oversubscription almost never presents by itself. Most of the time it will show up as
another network problem that results from it, such as packet loss, slow response or
drops. In general, an oversubscribed device that can't handle the load will inevitably drop
some packets. Packet drops will affect sensitive applications or will cause TCP retransmissions
and affect the user experience by making transactions look as if they are taking more time to
complete. If we wanted to summarize the problems that occur due to excessive load, we would
describe them as network degradation. Of course, one must be careful and NOT attribute
all problems that fall under the "degradation umbrella" to load issues. The indications we
present below will help in identifying whether such issues should be attributed to excessive
load.

2.2 CPU
A "busy" firewall device will almost always show it on its CPU. We can check the CPU use with
the command "show cpu".

ASA# show cpu


CPU utilization for 5 seconds = 14%; 1 minute: 10%; 5 minutes: 10%

A CPU consistently in the 80%-90% range or above could indicate high traffic load. As a side
note, the output of "show cpu profile" can also be provided to TAC so that they can identify
the processes on which the CPU time is being spent.
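As a rough illustration (not part of the original document), the "show cpu" line can be parsed
programmatically and checked against the 80%-90% rule of thumb, for example when collecting
outputs periodically. A minimal Python sketch; the sample string mirrors the output above:

import re

def parse_show_cpu(output):
    # Extract the 5-second, 1-minute and 5-minute utilization percentages.
    m = re.search(r"CPU utilization for 5 seconds = (\d+)%; "
                  r"1 minute: (\d+)%; 5 minutes: (\d+)%", output)
    if not m:
        raise ValueError("unexpected 'show cpu' format")
    return tuple(int(x) for x in m.groups())

sample = "CPU utilization for 5 seconds = 14%; 1 minute: 10%; 5 minutes: 10%"
five_sec, one_min, five_min = parse_show_cpu(sample)
# Sustained (1- and 5-minute) values above ~80% are a stronger
# oversubscription signal than a brief 5-second spike.
if one_min >= 80 or five_min >= 80:
    print("Sustained high CPU - investigate load")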
Also, CPU hogs can show when the CPU is too busy to pull packets off the line:
ASA# show process cpu-hog

Process:      telnet/ci, NUMHOG: 1, MAXHOG: 12, LASTHOG: 12
LASTHOG At:   20:18:08 EST Nov 8 2010
PC:           888c7e5 (suspend)
Call stack:   888c7e5  92f6581  92d65de  92d6d71  80cbaf7
              80cbfcb  80c2d1f  80c3e66  80c4910  80626e3
              80c2575

CPU hog threshold (msec):  3.47
Last cleared: None

In the above example, the telnet process is hogging the CPU. During this time, the CPU is not
available to pull packets off the NIC and route them through the firewall.

2.3 Interfaces
Another important indicator of oversubscription is interface errors. Two useful commands
for checking the interfaces are "show interface" and "show interface | i errors".

ASA# show interface | i errors

        0 input errors, 0 CRC, 0 frame, 1567 overrun, 0 ignored, 0 abort
        0 output errors, 0 collisions, 0 interface resets
        0 input errors, 0 CRC, 0 frame, 124 overrun, 0 ignored, 0 abort
        0 output errors, 0 collisions, 0 interface resets
        0 input errors, 0 CRC, 0 frame, 987564 overrun, 0 ignored, 0 abort
        0 output errors, 0 collisions, 0 interface resets
...
ASA#
ASA# show interface
...
Interface Ethernet0/1 "", is up, line protocol is up
  Hardware is 88E6095, BW 100 Mbps, DLY 100 usec
        Auto-Duplex(Full-duplex), Auto-Speed(100 Mbps)
        MAC address ffff.ffff.ffff, MTU 1500
        IP address 10.10.10.1, subnet mask 255.255.255.0
        2050839 packets input, 133555759 bytes, 0 no buffer
        Received 2044728 broadcasts, 0 runts, 0 giants
        0 input errors, 0 CRC, 0 frame, 3276 overruns, 0 ignored, 0 abort
        0 L2 decode drops
        6364 packets output, 2970714 bytes, 332 underruns
        0 output errors, 0 collisions, 0 interface resets
        0 babbles, 0 late collisions, 0 deferred
        0 lost carrier, 0 no carrier
        input queue (curr/max packets): hardware (4/13) software (0/0)
        output queue (curr/max packets): hardware (0/2) software (0/0)
...

Interface overruns, no buffer drops and underruns often show that the firewall cannot process all
the traffic it is receiving on its NIC. Overruns and no-buffer drops indicate excessive input
traffic on a given interface. The interface maintains a receive ring where packets are stored
before they are processed by the ASA. If the NIC is receiving traffic faster than the ASA can
pull packets off the receive ring, packets will be dropped and either the no buffer or the overrun
counter will increment. Underruns behave similarly but concern the transmit ring instead.
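To tell whether these counters are still climbing (rather than being leftovers from an old
incident), they can be sampled twice during the problem window and compared. A minimal Python
sketch, assuming two "show interface" captures saved to text files (the file names are
illustrative):

import re

def total_overruns(path):
    # Sum the overrun counters across all interfaces in one capture.
    total = 0
    with open(path) as f:
        for line in f:
            m = re.search(r"(\d+) overruns?\b", line)
            if m:
                total += int(m.group(1))
    return total

# Two captures taken a few minutes apart (hypothetical file names).
delta = total_overruns("show_interface_t1.txt") - total_overruns("show_interface_t0.txt")
print("overruns grew by", delta, "between the two captures")
# A counter that grows while the problem is happening points to ongoing
# oversubscription rather than a historical event.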

2.4 Load
Next, it is worth checking the traffic that the device is seeing. We need to clear the traffic
statistics ("clear traffic" command) before checking them ("show traffic" command). We do
that because we want to see the traffic while the problem is occurring, and thus be able
to tell if load is related to the problem being investigated. The aggregate traffic output from
"show traffic" carries information since the last reload or the last time the counters were
cleared, so on its own it will not tell us how much traffic the box is seeing during the time we
are troubleshooting. After the "clear traffic" we let the box collect statistics for 5 minutes and then do "show traffic" to get the traffic the interfaces saw.

ASA# clear traffic
...
...5 minutes go by...
...
ASA# show traffic
...
----------------------------------------
Aggregated Traffic on Physical Interface
----------------------------------------
Ethernet0/0:
        received (in 1137.180 secs):
                8985 packets            773519 bytes
                7 pkts/sec              680 bytes/sec
        transmitted (in 1137.180 secs):
                3946 packets            276317 bytes
                3 pkts/sec              242 bytes/sec
  1 minute input rate 243555 pkts/sec, 731777777 bytes/sec
  1 minute output rate 3434534 pkts/sec, 291777777 bytes/sec
  1 minute drop rate, 0 pkts/sec
  5 minute input rate 35435353 pkts/sec, 792545454444 bytes/sec
  5 minute output rate 343423 pkts/sec, 3614444444 bytes/sec
  5 minute drop rate, 0 pkts/sec
...
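The clear/wait/collect sequence can also be scripted. A hedged sketch using the netmiko
library (the host and credentials are placeholders for your environment):

import time
from netmiko import ConnectHandler  # pip install netmiko

asa = ConnectHandler(device_type="cisco_asa",
                     host="192.0.2.1",      # placeholder management address
                     username="admin",
                     password="***",
                     secret="***")
asa.enable()
asa.send_command("clear traffic")  # reset the per-interface counters
time.sleep(300)                    # let 5 minutes of statistics accumulate
print(asa.send_command("show traffic"))
asa.disconnect()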

Monitoring tools and NetFlow can also help in identifying traffic and connection rates.
We can then calculate the aggregate throughput the device is passing by examining the traffic
that all physical interfaces saw (output of "show traffic"), and we will be able to tell whether
it is being pushed to its limits. In order to do that, we need to check the device specs.
For the ASA, we can read from the ASA model comparison document:

Cisco ASA 5500 Series

Model/License             Maximum firewall       Maximum firewall   Maximum firewall     Packets per
                          throughput             connections        connections/second   second (64 byte)
5505 Base/Security Plus   150 Mbps               10,000 / 25,000    4,000                85,000
5510 Base/Security Plus   300 Mbps               50,000 / 130,000   9,000                190,000
5520                      450 Mbps               280,000            12,000               320,000
5540                      650 Mbps               400,000            25,000               500,000
5550                      1 Gbps (real-world     650,000            36,000               600,000
                          HTTP), 1.2 Gbps
                          (jumbo frames)
5580-20                   5 Gbps (real-world     1,000,000          90,000               2,500,000
                          HTTP), 10 Gbps
                          (jumbo frames)
5580-40                   10 Gbps (real-world    2,000,000          150,000              4,000,000
                          HTTP), 20 Gbps
                          (jumbo frames)

Long discussions could start over whether a firewall or any other device is hitting its traffic
processing limits or not. Experience has shown that there is controversy over what the numbers
show and what engineers consider as being close to the limits. It is worth clarifying a few
points. Let's use the ASA5510 as an example. Its rated throughput is 300 Mbps, as we see in the
table above. So the question is: "if my ASA5510 sees about 280 Mbps, should it be at 100% CPU or
not?". A quick answer would be "No". However, we must not forget that there are many factors
involved. In the network industry, the rated speeds of devices come out of specific tests. These
tests are repeated and an average is presented as the maximum speed. However, "real-world"
traffic is not always the same as the traffic used in those tests. Take the aforementioned
ASA5510: rated speed tests usually involve stateless protocols with big packets. For a TCP web
browsing application, though, the packets are much smaller, and TCP uses ACKs and is a
"synchronized" protocol by nature. That adds more load to the firewall itself, lowering its
maximum achievable throughput. On top of that, if the ASA has HTTP inspection configured (which
does deep packet inspection for HTTP), its maximum processing throughput will drop even further.
So even though 300 Mbps is indeed a throughput the device can achieve, its real-world
throughput, depending on applications, traffic nature and configuration, could practically be
less. That is why our performance documents also try to provide other metrics, such as "packets
per second" (pps) and what is often labeled "real-world HTTP". For example, in the ASA table we
can see that the 5510 can do 190K pps (small 64-byte packets). These metrics can be checked
against the interface statistics collected from the device in order to decide whether the box is
pushed to its limits.
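As an illustration of that comparison (datasheet numbers from the table above; the "observed"
values are placeholders for what "show traffic" would report on a real device):

# ASA 5510 datasheet figures from the comparison table above.
RATED_MBPS = 300
RATED_PPS = 190000

# Hypothetical aggregate rates taken from "show traffic".
observed_bytes_per_sec = 30000000
observed_pkts_per_sec = 85000

observed_mbps = observed_bytes_per_sec * 8 / 1e6
print("throughput: %.0f of %d Mbps (%.0f%% of rated)"
      % (observed_mbps, RATED_MBPS, 100.0 * observed_mbps / RATED_MBPS))
print("packet rate: %d of %d pps (%.0f%% of rated)"
      % (observed_pkts_per_sec, RATED_PPS,
         100.0 * observed_pkts_per_sec / RATED_PPS))
# With small average packet sizes, the pps ceiling is typically hit
# well before the Mbps ceiling.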
Another consideration on top of traffic load for firewall devices is connection counts and
connection rates. That is another field that can trigger various disagreements. The commands
we would use to see the connections on our firewall are "show conn count" and "show
resource usage".

ASA5510# show conn count
2 in use, 86 most used
ASA5510# show resource usage

Resource          Current    Peak    Limit  Denied  Context
Telnet                  1       1        5       0  System
Syslogs [rate]          1     293      N/A       0  System
Conns                  86      86    10000       0  System
Xlates                116     116      N/A       0  System
Hosts                  49      49      N/A       0  System

ASA5510-multi-context# show resource usage

Resource          Current    Peak      Limit  Denied  Context
SSH                     1       1         15       0  admin
Syslogs [rate]        118     348  unlimited       0  context1
Conns                  89     893  unlimited       0  context1
Xlates                150    1115  unlimited       0  context1
Hosts                  15      18  unlimited       0  context1
Conns [rate]          103    4694  unlimited       0  context1
...

Now, let's ask one more question about the output from our ASA5510 above: "In the peak
connection rate I see about 5K connections/second, and in the specifications I read that the
maximum supported rate is 9K conns/second. 5K is much less than 9K, so is the ASA exceeding its
limits?". To answer that question, one needs to keep in mind that the rate mentioned in the
specifications is an average rate per second. To explain it better, here are a few examples:

Let's say we have a stable rate of 9K new connections per second. This connection rate conforms
to the ASA5510 limits.
Now let's say we have 90K new connections per 10 seconds. That is also a rate of 9K per
second and conforms to the ASA5510 limits.
Now let's say we have 81K new connections in 1 second and 1K in each of the next 9 seconds.
That totals 90K per 10 seconds, which equals an average of 9K per second and conforms to the
9K conns/second specification. But the ASA was oversubscribed for 1 second, while it was
seeing a rate of 81K/second.

So, it is obvious that bursts of traffic or connections can affect the performance of a firewall
even if the averages over time do not seem to exceed the limits.
Additionally, having few connections through the box does not necessarily mean that traffic is
low. Theoretically speaking, someone could have 10 connections pushing 1 Gbps each and
thus oversubscribe an ASA with very few conns.
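The burst example above is easy to verify with a little arithmetic; a short Python sketch
(the per-second connection counts are the hypothetical figures from the examples):

RATED_CPS = 9000  # ASA5510 maximum connections/second from the spec table

bursty = [81000] + [1000] * 9  # 81K in the first second, then 1K/sec
steady = [9000] * 10           # a flat 9K/sec

for name, trace in (("bursty", bursty), ("steady", steady)):
    avg, worst = sum(trace) / len(trace), max(trace)
    print("%s: average %.0f/sec (within limit: %s), "
          "worst second %d/sec (within limit: %s)"
          % (name, avg, avg <= RATED_CPS, worst, worst <= RATED_CPS))
# Both traces average 9K/sec, but only the steady one stays within the
# per-second limit the whole time.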

3 Mitigation / Alleviation
Now, it is equally important to mention options for overcoming an oversubscription issue. We
would suggest the reader keep in mind that if a device is oversubscribed, it is usually
best to add more processing power by using more, or more powerful, devices. However, there
might be cases where we can get away with implementing some workarounds after
identifying the root cause and the traffic profiles. Determining the causes of
oversubscription/excessive load should rely on external tools and traffic analysis.

3.1 Processes
When the CPU is high, we can try to see where it is being spent, and we may then be able to
relieve it of the processes that consume most CPU cycles. We can collect the output of the
"show process" command, wait for 1 minute, and collect it once more.

ASA# show process

    PC       SP       STATE       Runtime    SBASE     Stack Process
Lwe 0805510c d52a0cf4 09fbeed8          0 d529edf0  7544/8192 block_diag
Mrd 081beaa4 d52d087c 09fbe438        873 d52b0a38 123848/131072 Dispatch Unit
Msi 08f6348f d5784f8c 09fbde4c         13 d5783088  7792/8192 y88acs06 OneSec Thread
Mwe 08068bc6 d578938c 09fbde4c          0 d57874e8  7576/8192 Reload Control Thread
Mwe 08070976 d5794314 09fc07f8          0 d5790760 12496/16384 aaa
Mwe 08d094ed d60111ec 09fbde4c          4 d57948e8  6872/8192 UserFromCert Thread
Mwe 08c331eb d57987f4 d57d47d0          0 d5796a70  6920/8192 Boot Message Proxy Process
Mwe 080a49f6 d579d37c 09fc0854        107 d5799488  8968/16384 CMGR Server Process
Mwe 080a4f05 d579f4a4 09fbde4c         20 d579d610  7696/8192 CMGR Timer Process
Lwe 081bdecc d57a8b9c 09fceba8          0 d57a6c98  7216/8192 dbgtrace
Mwe 08498525 d57b11c4 09fbde4c        172 d57af440  4712/8192 eswilp_svi_init
Msi 0861af45 d57c4734 09fbde4c         28 d57c2850  6952/8192 MUS Timeout Check Thread
Mwe 08d094ed d5a3845c 09fbde4c          0 d57cb0e0  7016/8192 netfs_thread_init
Mwe 09378625 d57d952c 09fbde4c          0 d57d76d8  7612/8192 Chunk Manager
Msi 0894d40e d57dbcdc 09fbde4c         22 d57d9df8  7560/8192 PIX Garbage Collector
Mwe 08932ea4 d57eadfc 09ebdb4c          0 d57e8ef8  7904/8192 IP Address Assign
Mwe 08b41146 d597d8dc 09f02838          0 d597b9d8  7904/8192 QoS Support Module
Mwe 089c501f d597faa4 09ebebd0          0 d597dba0  7904/8192 Client Update Task
Lwe 093c1dba d5984404 09fbde4c        685 d5980570 15888/16384 Checkheaps
Mwe 08b44e65 d598c86c 09fbde4c       1535 d5988bf8  5648/16384 Quack process
Mwe 08b9e1f2 d5994bf4 09fbde4c          1 d598cd80 31888/32768 Session Manager
Mwe 08cb45b5 d599aae4 d7cbd3b0          4 d5997090 14312/16384 uauth
Mwe 08c52475 d599d11c 09f0f884          0 d599b218  7376/8192 Uauth_Proxy
Msp 08c893ce d59a35b4 09fbde4c          2 d59a16b0  7792/8192 SSL
Mwe 08cb1f46 d59a5754 09f15434          0 d59a3870  7272/8192 SMTP
Mwe 08caac96 d59a98dc 09f15398         30 d59a59f8 15096/16384 Logger
Mwe 08cab4c5 d59ab9f4 09fbde4c          0 d59a9b80  7728/8192 Syslog Retry Thread
Mwe 08ca511e d59adb9c 09fbde4c          0 d59abd08  7192/8192 Thread Logger
Mwe 08e9c492 d59d83a4 09f492e8          0 d59d64c0  7040/8192 vpnlb_thread
...

Then we can diff the "Runtime" column for all the processes (keeping in mind that a
process name might show up twice or more). By sorting the diffs from maximum to minimum we can
see the processes that take most of the CPU. Starting with ASA 8.2, the command "show
processes cpu-usage non-zero sorted" can be used instead.
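A minimal Python sketch of that diff (not from the original document; it assumes the two
"show process" captures were saved to the text files named below):

import re
from collections import defaultdict

# Matches: flags, PC, SP, STATE, Runtime, SBASE, Stack(used/total), Process
LINE = re.compile(r"^\S+\s+\S+\s+\S+\s+\S+\s+(\d+)\s+\S+\s+\S+/\S+\s+(.+)$")

def runtimes(path):
    # Sum Runtime per process name; a name can appear more than once.
    totals = defaultdict(int)
    with open(path) as f:
        for line in f:
            m = LINE.match(line.strip())
            if m:
                totals[m.group(2).strip()] += int(m.group(1))
    return totals

before = runtimes("show_process_t0.txt")
after = runtimes("show_process_t1.txt")
deltas = {name: after.get(name, 0) - rt for name, rt in before.items()}
for name, delta in sorted(deltas.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print("%8d  %s" % (delta, name))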
There are cases where, for example, we might see an inspection process or the logging
process taking most of the CPU. In such cases we can disable the inspections, if they are not
needed, or turn down the logging level and save some CPU for the device. Please note that
processes like "Dispatch Unit" and "interface polling" relate to regular packet processing, and
there is not much that can be done to relieve the CPU from them.

3.2 Traffic
If the traffic hitting the firewall is excessive, we can also try to send only the necessary
traffic through it. Although this solution is not practical in most setups, there might be cases
where someone has alternate routes for their traffic and does not need to "firewall" all packets.
In such scenarios they can use policy based routing (PBR) to divert to the firewall only the
traffic that needs to be "firewalled".

3.3 Optimize throughput


For the ASA5550 and ASA5580, by leveraging the I/O bridges appropriately, one might be
able to optimize the maximum throughput of the box. Further information on how to do that on the
ASA 5550 and 5580 is located here.

3.4 Flow Control


In instances where traffic is extremely bursty (e.g. 5 Gbps for bursts of 5 ms), packets can be
dropped if the burst exceeds the buffering capacity of the FIFO buffer on the NIC and the
receive ring buffers. Enabling pause frames for flow control can alleviate this issue by asking
the upstream device to "hold on" during the bursts. More information on how to enable flow
control can be found under the corresponding model sections here.

3.5 Active/Active failover


When using two firewalls in failover in Active/Standby mode, if the active unit cannot
handle the traffic, you might be able to temporarily use an Active/Active setup to share it
between both units. You would need to have the firewalls in multiple-context mode and have one
or more contexts active on the primary unit and one or more contexts active on the
secondary. That way both firewalls will be passing traffic (for the contexts they are
active for) and might not be oversubscribed. However, you need to remember that if one of the
units fails, all contexts (thus all traffic) will be running on one unit, and then you will be
back to an oversubscribed scenario. Active/Active failover for oversubscription cases should
only be used (if used at all) as a temporary, carefully monitored solution until a permanent
one is put in place.

3.6 More hardware


Finally, the ultimate solution is to add more hardware to the network or use more firewalls.
That way traffic can be divided into shares that each device can handle, and there will be no
oversubscription.

Comments


grim Wed, 05/30/2012 - 07:23


Hi. Great document! Thanks. I suggest you add something around L2 decode drops. This is
especially useful where the upstream switch and firewall are configured for 802.1q. If the
VLANs are not pruned between the switch and firewall, the firewall will drop all frames
received tagged with an unrecognised (ie not configured on the firewall) VLAN number. This
can cause additional load on the firewall NIC. Pruning the VLANs to only those configured on
the firewall physical interface reduces interface load. Also worth noting that CPU HOG
messages are Syslog'd as tracebacks (%ASA-7-711002). Regards.


Pavel Pokorny Thu, 05/31/2012 - 00:43


Hi, not only tagged frames (which are not pruned), but also BPDU packets and other things you can
see in the FP L2 rule drop (l2_acl). That also makes a lot of mess....


Eric Rising Thu, 05/17/2012 - 12:53


Thank you PK, this has been very useful to us.


wuhao_xiaotong Thu, 06/23/2011 - 08:33


What a great job, it helps me know much more than before about troubleshooting these issues on
the ASA!


sean.wa@gmail.com Mon, 10/18/2010 - 11:23


Hello Pkampana,
Nice work. It is a good document. It really helps a lot. But regarding the overrun and input
errors, it is different from other Cisco documents. In your example about overrun errors, input
errors are 0 and overrun errors are 3276. However, based on other documents, Input errors =
Runts + Giants + CRC + Frame + Overrun + Ignored + Abort. According to your document, high
overrun errors may be due to oversubscription on an interface. According to other Cisco
troubleshooting guides, high input and overrun errors may be due to mismatched speed and duplex.
We have an ASA 5550 which has a lot of input and overrun errors and L2 decode drops. I
opened TAC SR 615730925. In our case, I do not see mismatched speed and duplex, as both sides
are configured as auto/auto and "show interface" shows correct speed and duplex. But "show
traffic" did not show excessive traffic on the interface either.
The response I got from TAC was that we might have an overloaded interface.
Can you help us clarify what is the cause of these high input and overrun errors?
Thanks,
"
I do apologize for the confusion on the input errors; the L2 decode drops are not part of the
output.

Input errors = Runts + Giants + CRC + Frame + Overrun + Ignored + Abort

We rely on Cisco official documentation; what you see on the Cisco Support Community are
general guidelines and examples that may vary from the real output.
Could you please send me a copy of the "show traffic" output and a screenshot of the ASDM?
Is this affecting any traffic on your network?

I hope to hear from you soon.


Regards,
Godfrey Corrales
Security Team
Cisco TAC Support Engineer
Email: gcorrale@cisco.com
(407) 241-2965 ext 3704
"

Image version is 8.0(4)

here is interface info


Interface GigabitEthernet0/1 "", is up, line protocol is up
Hardware is i82546GB rev03, BW 1000 Mbps, DLY 10 usec
Auto-Duplex(Full-duplex), Auto-Speed(1000 Mbps)
Description: link to g9/3 PrivateNet Inside
Available for allocation to a context
MAC address 001e.1312.ccf1, MTU not set
IP address unassigned
3884020376 packets input, 4224599980546 bytes, 0 no buffer
Received 22382183 broadcasts, 0 runts, 0 giants
48422943 input errors, 0 CRC, 0 frame, 48422943 overrun, 0 ignored, 0 abort
31947155 L2 decode drops
2705229596 packets output, 792268977657 bytes, 0 underruns
0 output errors, 0 collisions, 1 interface resets
0 late collisions, 0 deferred
0 input reset drops, 0 output reset drops, 0 tx hangs
input queue (curr/max packets): hardware (8/33) software (0/0)
output queue (curr/max packets): hardware (0/202) software (0/0)

GigabitEthernet0/1:
received (in 381268.650 secs):
3890040853 packets
4231288232604 bytes
10000 pkts/sec 11097005 bytes/sec
transmitted (in 381268.650 secs):
2709519449 packets
793464758386 bytes
7005 pkts/sec 2081004 bytes/sec
1 minute input rate 15200 pkts/sec, 16836427 bytes/sec
1 minute output rate 10902 pkts/sec, 3050236 bytes/sec
1 minute drop rate, 0 pkts/sec
5 minute input rate 16043 pkts/sec, 17778454 bytes/sec
5 minute output rate 11426 pkts/sec, 3209117 bytes/sec
5 minute drop rate, 0 pkts/sec
GigabitEthernet0/2:
...

Resource          Current      Peak      Limit  Denied  Context
Telnet                  1         2          5       0  ad
Syslogs [rate]        241     18952  unlimited       0  ad
Conns                5223      7005  unlimited       0  ad
Xlates                236       591  unlimited       0  ad
Hosts                 734      1214  unlimited       0  ad
Conns [rate]          111      1185  unlimited       0  ad
Inspects [rate]        23       999  unlimited       0  ad
Syslogs [rate]          3         8  unlimited       0  A
Conns                 112       896  unlimited       0  A
Xlates                 14        14  unlimited       0  A
Hosts                  35        47  unlimited       0  A
Conns [rate]           23       297  unlimited       0  A
Inspects [rate]         8        34  unlimited       0  A
Syslogs [rate]         89       982  unlimited       0  PE
Conns                1760      2250  unlimited       0  PE
Xlates                612      1822  unlimited       0  PE
Hosts                 378       497  unlimited       0  PE
Conns [rate]           26       330  unlimited       0  PE
Inspects [rate]        15       280  unlimited       0  PE
Syslogs [rate]          3     91389  unlimited       0  Pr
Conns               13725     15705  unlimited       0  Pr
Xlates                117       117  unlimited       0  Pr
Hosts                 528       770  unlimited       0  Pr
Conns [rate]          709      1843  unlimited       0  Pr
Inspects [rate]       638      1743  unlimited       0  Pr
Syslogs [rate]        189      3262  unlimited       0  Pu
Conns                1961      4608  unlimited       0  Pu
Xlates                336      3897  unlimited       0  Pu
Hosts                1005      1923  unlimited       0  Pu
Conns [rate]           69      1892  unlimited       0  Pu
Inspects [rate]       101      1888  unlimited       0  Pu
Syslogs [rate]        148      8627  unlimited       0  U
Conns                3513      4292  unlimited       0  U
Xlates                 19        19  unlimited       0  U
Hosts                  72       115  unlimited       0  U
Conns [rate]           73       266  unlimited       0  U
Inspects [rate]         6       106  unlimited       0  U

5 sec 1 min 5 min Context Name


8.9% 9.1% 9.5% system
2.3% 2.3% 2.3% ad
0.4% 0.6% 0.6% A
11.6% 4.4% 4.0% Pu
14.5% 15.8% 16.4% Pr
1.8% 2.1% 2.1% PE
2.4% 2.5% 2.5% U


Panagiotis T Ka... Mon, 10/18/2010 - 12:23

Hello Pkampana,
Nice work. It is a good document. It really helps a lot. But regarding the
overrun and input errors, it is different from other Cisco documents. In your
example about overrun errors, input errors are 0 and overrun errors are 3276.
However, based on other documents, Input errors = Runts + Giants + CRC + Frame +
Overrun + Ignored + Abort. According to your document, high overrun errors may
be due to oversubscription on an interface. According to other Cisco
troubleshooting guides, high input and overrun errors may be due to mismatched
speed and duplex.
We have an ASA 5550 which has a lot of input and overrun errors and L2 decode
drops. I opened TAC SR 615730925. In our case, I do not see mismatched speed and
duplex, as both sides are configured as auto/auto and "show interface" shows
correct speed and duplex. But "show traffic" did not show excessive traffic on
the interface either.
The response I got from TAC was that we might have an overloaded interface.

Can you help us clarify what is the cause of these high input and overrun
errors?
Thanks,

...

Thank you for the feedback, Sean. The outputs you see in the snippets are not real. I arbitrarily
chose the numbers that you see, so they are a little inaccurate. I was trying to convey what
the counters mean.
I would suggest eliminating the duplex mismatch case (you already did), checking for a
bad cable, and then looking at the traffic the ASA sees. You would need to "clear traffic" and
"show traffic" as the doc explains. Check how close the overall throughput is compared to the
5550 rated speeds.
I hope it helps.
PK


golly_wog Wed, 09/08/2010 - 15:14


Hey - please sort this doc out, it's like reading a thriller and then getting to the end and
finding pages missing from your book!!!
Thanks
;-)


Panagiotis T Ka... Wed, 09/08/2010 - 16:04


I apologize. This doc was published by mistake. I am still in the process of writing it.
Please come back soon...


Pavel Pokorny Wed, 03/09/2011 - 02:40


Hi,
Great doc.
Is it complete now?
BR
Pavel


Panagiotis T Ka... Wed, 03/09/2011 - 05:57

Hi,
Great doc.
Is it complete now?
BR
Pavel

Hi Pavel,
Yes, it covers more or less everything that can be done to investigate and try to solve ASA
oversubscription.
Feedback welcome.
Take care,
PK


https://supportforums.cisco.com/document/47506/asa-oversubscription-interface-errrs-troubleshooting
