
IN CONFIDENCE

Juniper Networks M40 BGP Layer 3 MPLS VPN


Scalability Test Report
Tony Dann, Arnaud Gibier, David Hay, Stuart Prevost
Futures Testbed
April 11th 2002

Permission is given for this publication to be reproduced provided it is reproduced in its entirety and that a similar
condition, including these conditions, is included in the reproduction. Reproduction of parts of this publication is
permitted provided the source is clearly acknowledged. For further details, please contact the publisher.
Neither party shall make any announcement concerning the use of the BT Service or use the other's name for
external promotional or marketing purposes without the other Party's prior written agreement.
BTexact Technologies maintains that all reasonable care and skill has been used in the compilation of this
publication. However, BTexact Technologies shall not be under any liability for loss or damage (including
consequential loss) whatsoever or howsoever arising as a result of the use of this publication by the reader, his
servants, agents or any third party.


CONTENTS

1  Executive Summary
2  Introduction
3  Test Methodology
   3.1  Testing Topology
   3.2  Router Configuration
   3.3  Testing process
   3.4  Version control
4  VPN Scalability Test Results
   4.1  Test Case 1 - Routes per VRF
   4.2  Test Case 2 - Routes per VRF plus filter and counter
   4.3  Test Case 3 - Routes per VRF plus filter, counter and policer
5  Network Scenario Test
   5.1  Test Topology
   5.2  Test Results
6  Conclusion
7  References
Appendix A  Detailed Test Results
Appendix B  Juniper Hardware


1 Executive Summary

Network and service providers are currently deploying large national networks to support next
generation IP services targeted at high revenue generating business markets. Delivery of IP VPNs
capable of carrying enriched multimedia content is a key objective. To achieve this, carriers require
service edge routers that are scalable, reliable and stable.
The latest generation of IP virtual private network (IP VPN) services is based upon MPLS
technology, following the IETF RFC 2547 solution. Carriers appreciate the benefits of this
technology but need to be convinced that vendor implementations of the solution can scale as
the number of customers and size of customer networks increases. The Virtual Routing and
Forwarding (VRF) tables used to build MPLS VPNs increase in size in proportion to the
number of sites within a VPN. The number of VRF instances on a device increases with the
number of VPNs. There is therefore a need to ensure that memory management within a
router is handled in a stable and reliable manner as the number and size of VRF instances
increases.
This report presents the results and analysis on the testing of the Layer 3 MPLS VPN
scalability undertaken by BTexact Technologies for Juniper Networks. The aim of the testing
was to define the scalability characteristics of the M40 router with regard to the number of
routes that can be held in a VRF table versus the number of VRFs configured on the router.
The results show that the M40 provides a very stable platform for the scaling of VRFs. When
the limit of the M40 is reached and more routes are advertised than can be stored in the
routing table, error messages are generated by the router, but no instability, traffic loss or
increase in latency for the existing traffic occurs. Such stability is one vital consideration for
operators deploying the device to support MPLS VPNs.
This report shows that when using BGP to insert routes into each VRF, approximately 450,000
routes can be stored by the M40 router. These routes can be split almost linearly across the
VRFs, for example if 1000 VRFs are configured it was proved that each could hold
approximately 440 routes with no errors or instability in the M40. Similar linear results were
found using OSPF, RIP or Static protocols to insert the routes into the VRF. This behaviour is
important to carriers as they seek to deploy scalable networks to support MPLS virtual private
networks. The limitation of 450,000 routes gives a balance to the number of VRF instances
and the size of individual VPN membership that is suitable for the current deployment of
services. We believe this value will satisfy the scalability requirements for a single MPLS PE
router for the next few years.
It is very important that the edge MPLS node can support the required classification, policing
and accounting for each end VPN customer, so that service differentiation can be effected at
large scale. The addition of policers, counters and filters is shown to have a minimal impact on
the number of routes and on the traffic latency figures.
It is also important that carriers can expect stability with scalability under operational
conditions, not just in isolated assessments of performance. The scalability results were shown
to be unchanged when the M40 was placed under a heavy processor load, supporting 50
MP-IBGP sessions and 50 MPLS LSPs while connected to an OSPF grid of 50 simulated routers.
The overall conclusion of this evaluation programme is that in a test environment, as detailed
in this report, the M40 router proves to be a highly scalable and stable platform in relation to
the scalability of its VRF routing tables.


2 Introduction

The layer 3 MPLS VPN scalability testing was performed by BTexact Technologies at its
independent test facility located at Adastral Park in the UK. The results presented in this
report were obtained during the period 11th February to 8th March 2002.
The tests were performed to satisfy the requirements agreed between BTexact Technologies
and Juniper Networks[1]. The main aim of these tests was to verify the maximum number of
routes that could be held in a VRF on the M40 router. This is one important consideration to
carriers as they deploy edge routers to support the latest generation of IP services for
business. Any network solution must scale linearly as the number of customers, and the size
of customer networks, increases. The device behaviour must include graceful, reliable and
predictable performance as the network becomes overloaded.
Three different test scenarios were defined as follows:

Test Case 1:
Initially a baseline set of tests was performed which determined the number of routes which
could be held in a VRF table relative to the number of VRFs that were configured on the
router. Four different protocols were used to put the routes into the routing table to determine
whether or not the number of routes that could be held was related to the protocol being used.
Test Case 2:
Once the baseline set of tests had been performed, the configuration on the router was altered
so that each VRF had a firewall filter and a counter on its outgoing interface. Once this
configuration had been established, a subset of the baseline tests was performed to evaluate
whether or not the addition of filters and counters had an impact on the number of routes that
can be held per VRF. This subset of tests was performed with 5, 10 and 15 filter statements
and counters per VRF. The filters were defined in such a way that when there were N VRFs
and M terms in each firewall filter, MxN unique counters were created on the packet forwarding
engine.
Test Case 3:
The third set of tests involved altering the configuration again by adding a policer to each VRF
in addition to the filter and counter added in test case 2. Once this was established a similar
subset of tests to test case 2 was performed.

In addition to the three scenarios defined above a final scenario was tested to verify the results
from the previous cases when the router was subject to typical parameters that may be seen in
a deployed VPN network:
Network Scenario Test:
This test used exactly the same configurations as Test Case 3, with each VRF subject to a
filter, counter and policer, but the router under test was also configured to run OSPF, support
MPLS LSPs and maintain VPN MP-IBGP sessions.

In all of the test cases the number of routes that could be put into the VRF was recorded in
addition to the latency of one traffic stream across the router.
In all test cases Agilent RouterTesters were used to generate the routes to be inserted into the
VRFs and also to measure the latency of the traffic across the test configuration.

3 Test Methodology

This section details the Juniper Networks M Series routers that were used to perform the
scalability testing and also the test equipment used. Firstly the test topology is defined and
then the process that was used to perform the tests is detailed. This includes how the router
configurations were generated and also the iterative process that was used to obtain the
results.
Also included in this section are any limitations that were discovered during the testing period.
This includes limitations on both the routers and the test equipment.
3.1 Testing Topology

Figure 1 shows the topology that was used to perform the testing for the first three test cases.
The topology used for the Network Scenario test is detailed in section 5. The M40 is the router
under test and is a PE router in the BGP Layer 3 MPLS VPN model. A Gigabit Ethernet (GE)
port on the M40 is split into a number of VLANs (sub-interfaces) each one of which goes to a
simulated CE router. The M20 router acts as the CE routers in that it inserts routes into the
VLANs and hence into the VRFs on the M40. The routes themselves are injected into the M20
via an E-BGP session to the RouterTester and then redistributed into every VLAN.

[Diagram omitted: the Agilent RouterTester injects E-BGP routes over an OC-48 link into a Juniper M20 acting as the simulated CE routers (one VLAN per CE on a GE link, up to 1000 CEs); the GE link connects the M20 to the Juniper M40 PE router under test; a second OC-48 link on the M40 carries the traffic flow.]

Figure 1 - VPN Scalability testing configuration


The advantage of using the M20 to redistribute the routes is that when n VRFs are defined, the
routing table on the M20 only needs to hold 1/n of the total number of routes held by the M40.
Hence it is guaranteed that the M40, and not the M20, will be the router that is stressed. The
consequence of this method is that exactly the same routes are put in each VRF on the M40.
However, since RFC 2547 allows overlapping addresses where two sites in different VPNs use
the same private address space, this still provides a good test. Additionally, as this is a
scalability test, the composition of the routes is not a critical factor.
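As an illustration of this arrangement, redistribution of the E-BGP routes into one of the PE-CE protocols (OSPF is shown) could be achieved on the M20 with an export policy broadly as follows. This is a hedged sketch only; the policy name, interface and area are invented, not taken from the test configurations:

    policy-options {
        /* accept everything learned over the E-BGP session to the RouterTester */
        policy-statement rt-routes-to-ce {
            term bgp-routes {
                from protocol bgp;
                then accept;
            }
        }
    }
    protocols {
        ospf {
            export rt-routes-to-ce;       /* redistribute the E-BGP routes */
            area 0.0.0.0 {
                interface ge-0/0/0.1;     /* one VLAN sub-interface per simulated CE */
            }
        }
    }

An equivalent export policy would apply for the RIP and BGP cases, with the static case carried directly in the generated configuration.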
The original and ideal scenario would have been to use a GE port on the RouterTester straight into the
M40 and inject the routes directly. However, although 4 GE ports were available on the
RouterTester, the current limitation of these ports is that only a total of approximately 25
OSPF sessions can be configured per port. Similar limitations are found when using BGP. It
should also be noted that the RouterTester software does not currently support RIP. To
achieve the target of 1000 VRFs, 40 GE ports would be required on the RouterTester, and
hence on the M40. This was not considered a practical solution, and no workaround for these
limitations was found during the timescales of the testing.
3.2 Router Configuration

The configurations for the router were generated using PERL scripts that were modelled on a
basic outline provided by Juniper Networks. For each test two configuration files were
required, one for the PE and one for the CE router. Prior to the commencement of each test,
a baseline configuration would first be loaded onto the M20 and M40, and then the PERL-generated
configuration files would be merged onto the relevant router. All of the PERL scripts
used were saved and hence can be used to repeat any of the testing as required.
The configurations for the static route cases were very large as they contained all of the routes
for the VRFs. Hence the set system compress-configuration file command was used to
reduce their size.
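For illustration, the per-VRF stanzas generated by the scripts for the PE would have taken broadly the following form for the BGP case. This is a minimal sketch: the instance name, interface, route distinguisher, policy names, addresses and AS numbers are invented, not taken from the test configurations:

    routing-instances {
        vpn-0001 {                            /* one instance per simulated CE */
            instance-type vrf;
            interface ge-2/3/0.1;             /* the VLAN sub-interface to that CE */
            route-distinguisher 65000:1;
            vrf-import vpn-0001-import;       /* policy matching the route target */
            vrf-export vpn-0001-export;       /* policy attaching the route target */
            protocols {
                bgp {
                    group ce {                /* PE-CE E-BGP session */
                        type external;
                        peer-as 65001;
                        neighbor 10.1.0.2;
                    }
                }
            }
        }
    }

One such stanza, with incremented names, sub-interface and values, would be repeated for each of the up to 1000 VRFs, which is why script generation of the configurations was essential.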
3.3 Testing process

Once the configurations had been loaded and committed onto the two routers, a TCL script
was used to control the RouterTester. This script enabled the user to insert a baseline set of
routes and then to slowly add more until the limit of the M40 was found. The last stable value
was recorded, along with the step size of the number of routes being added, to give the error
range. For the total period of testing a single traffic stream was passed across the router by
the RouterTester and monitored for anomalies while the routes were being injected. This
stream ran at approximately 20% load of 64-byte packets on the OC-48 port, which is equivalent
to 416 Mbit/s (or approximately 50% load on the GE link).
A number of indicators were used to determine when the M40 had reached its limit:

- The number of routes in the forwarding table when it is not fully populated (show route
  forwarding-table summary)
- The monitor start messages command, which displays the SCB errors on the telnet login
- Free kernel memory and system messages on the SCB card. The system message
  seen on the SCB card when no more routes can be added is as follows:

  Feb 27 14:53:18 M40-PE scb RT: Failed prefix add IPv4:152 - 160.8.1.20/30
  (no memory)
The following graph (Figure 2) shows how the used memory increases linearly as routes are added,
until it reaches 100% (results from the OSPF test case 1 with 500 CEs). With no routes the
free memory is 28%, and with no VRFs configured it was found to be 18%.


[Graph omitted: SCB used kernel memory (%) against number of routes added (0 to 1000), rising linearly to 100%.]

Figure 2 - Used Kernel Memory increase with added routes


Once it was determined that the M40 had reached its limit, the maximum number of
routes that could be stored in the router without errors or instability was recorded. In addition
the step size was recorded; this is the value that was added to the stable route count when the
limit of the M40 was reached. The average latency of the traffic going through the router
was also recorded at this time.
For test case 3 the policer was tested to ensure that it would limit the traffic, but the
measurements were made without any traffic limitations.
3.4 Version control

Juniper Router:
The original JUNOS version under test was 5.1R2.4, as stated by Juniper Networks in their
testing requirements document [1]. This image was found to support 700 BGP VRFs.
Following discussion with Juniper Networks a new image was obtained (5.1-20020207-Y49638)
which allowed 1000 BGP VRFs to be created on the M40. All tests beyond this point were
performed with this new JUNOS image.
The changes incorporated in the new image are due to be released by Juniper Networks in the
next JUNOS maintenance release, 5.1R4, due April 2002.
Appendix B contains the outputs of the show chassis routing-engine and show chassis
hardware detail commands for both the M40 and M20 used for testing.
Agilent RouterTester:
The RouterTester version used for all of the testing was 4.1.0.16c, which is the latest build
from Agilent.


4 VPN Scalability Test Results

This section presents the results from the three test cases. The full test result tables are
detailed in Appendix A.
It should be noted that all latency values quoted in this section are the total latency for the
packet across the test configuration, i.e. across both the M20 and M40 routers.
4.1 Test Case 1 - Routes per VRF

The following graph shows the maximum number of routes that can be held in a VRF given the
number of VRFs that have been configured on the router.

[Graph omitted: maximum number of routes per VRF against number of VRFs (25 to 1000), one curve each for BGP, RIP, Static and OSPF.]

Figure 3 - Graph of maximum number of routes held by VRFs


It can be seen from these results that the protocol that is used to put the routes into the VRF is
irrelevant.
The next graph shows the latency values for the traffic across the test configuration.


[Graph omitted: packet latency (us) against number of VRFs (25 to 1000), one curve each for BGP, RIP, Static and OSPF.]

Figure 4 - Graph of Latency vs No. of VRFs


From the above graph it can be seen that as the number of VRFs is increased, the latency
values actually decrease. This is expected behaviour: the number of routes in each VRF
routing table decreases as the number of VRFs is increased, and therefore the lookup time
required by the router to find the destination for the traffic packet is reduced.
It can also be seen that there is no significant difference in latency between the protocols, with
the exception of static routing. In that case the latency is consistently about 1 microsecond
less. This difference is most likely accounted for by the fact that different prefix lengths were
used for the static case (/24) than for the other protocols (/30).
4.2 Test Case 2 - Routes per VRF plus filter and counter

This test case measures the impact of the presence of firewall filters and counters on the
number of routes that can be held in a VRF. In this test a multi-term filter was configured on
each of the 500 VRF interfaces, with each term containing a counter. The filters were
configured in such a way that for M terms in a filter, 500xM unique counters would be created
on the forwarding engine. A sketch of one such filter is given below, followed by the results (Figure 5).
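The following is a minimal, hedged illustration of that filter style; the filter and counter names, match prefixes and interface shown are invented rather than taken from the test configurations:

    interfaces {
        ge-2/3/0 {
            unit 1 {
                family inet {
                    filter {
                        output vrf-0001-out;        /* filter on the VRF's outgoing interface */
                    }
                }
            }
        }
    }
    firewall {
        filter vrf-0001-out {
            term t1 {
                from {
                    destination-address {
                        10.1.1.0/24;
                    }
                }
                then {
                    count vrf-0001-t1;              /* unique counter per term per VRF */
                    accept;
                }
            }
            /* ...terms t2 to t15 follow the same pattern... */
            term last {
                then accept;                        /* pass traffic not matched above */
            }
        }
    }

Because each counter name carries the VRF and term identity, N VRFs with M terms yield MxN distinct counters on the packet forwarding engine, as described above.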


[Graph omitted: routes per VRF against number of terms in the filter (0 to 15) for 500 configured VRFs with filter and counter, one curve each for BGP, RIP, Static and OSPF.]

Figure 5 - Graph of Routes per VRF vs Filter terms for 500 configured VRFs each with filter and counter
From this graph it can be seen that the addition of the filters causes a slight drop in the number
of routes that can be held in the VRF routing table. For 15 terms per filter the number of
routes that can be stored is reduced by the order of 5-6%. In this case there would be 15x500
unique counters configured on the forwarding engine.
The following graph shows how the latency varies as the number of terms in the filter is
increased.

[Graph omitted: latency (us) against number of terms in the filter (0 to 15) for 500 configured VRFs with filter and counter, one curve each for BGP, RIP, Static and OSPF.]

Figure 6 - Graph of Latency vs Filter terms for 500 configured VRFs each with filter and counter
From this graph an increase in latency is seen as the number of terms in the filter is increased.
However, for up to 15 terms in the filter the increase is less than 1 microsecond and is not
considered a significant impact on the traffic.
4.3 Test Case 3 - Routes per VRF plus filter, counter and policer

This final set of tests repeats the previous section, but with a policer added onto each
interface to limit the traffic if it exceeded a set value.

The following graph (Figure 7) shows the effect on the number of routes that can be held in each VRF
when each VRF instance has a filter containing a single policer, plus one counter per term as
before. A hedged configuration sketch is given below, followed by the results.
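Illustratively, the Test Case 2 filter would have been extended along these lines; the policer rate, burst size and names are invented, not taken from the test configurations:

    firewall {
        policer vrf-0001-policer {
            if-exceeding {
                bandwidth-limit 10m;            /* the set value above which traffic is limited */
                burst-size-limit 15k;
            }
            then discard;
        }
        filter vrf-0001-out {
            term t1 {
                from {
                    destination-address {
                        10.1.1.0/24;
                    }
                }
                then {
                    policer vrf-0001-policer;   /* the single policer in the filter */
                    count vrf-0001-t1;          /* one counter per term, as before */
                    accept;
                }
            }
            /* ...further terms as in Test Case 2... */
        }
    }

The policer is defined once per filter and referenced from a term, so its addition changes the forwarding-engine state very little, consistent with the results below.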

[Graph omitted: routes per VRF against number of terms in the filter (0 to 15) for 500 configured VRFs with filter, counter and policer, one curve each for BGP, RIP, Static and OSPF.]

Figure 7 - Graph of Routes per VRF vs. Filter terms for 500 configured VRFs each with filter, counter and policer
By comparing this graph to the one in the previous section it can be seen that adding a policer
has no additional impact beyond that of the filter and counter.
The following graph shows how the latency varies as the number of terms in the filter is
increased.

[Graph omitted: latency (us) against number of terms in the filter (0 to 15) for 500 configured VRFs with filter, counter and policer, one curve each for BGP, RIP, Static and OSPF.]

Figure 8 - Graph of Latency vs Filter terms for 500 configured VRFs each with filter, counter and policer
Again, by comparing this graph to the one in the previous section it can be seen that adding a
policer has no significant additional impact beyond that of the filter and counter.
Examination of the results shows that an increase of approximately 0.2 microseconds is
incurred with the addition of a policer.


After this testing additional counters were added onto the interfaces that directly connected to
the RouterTester ports and the following table shows the latency values that were measured.
Description                               Latency (us)
One counter (Case 3)                      15.04
Additional counter on M40 input only      15.36
Additional counter on M20 output only     15.68

Table 1 - Latency values with additional counters


From the results it can be seen that the addition of just a counter to an interface adds
approximately 0.32 microseconds to the latency of a stream passing through that interface
under the test conditions as described in section 3.3.


5 Network Scenario Test

This section details the modifications made to the test topology to enable the network scenario
test and the results that were obtained from these tests. The full results are listed in Appendix
A.
5.1 Test Topology

The following diagram shows the general principle behind the layer 3 VPN network. The CE
routers run a routing protocol session to a PE router to advertise the routes from their private
networks. These routes are then passed across the provider's core network using MP-IBGP to
all other PEs which are connected to a CE within the same VPN. The diagram also shows
how the Agilent RouterTester can be used to simulate both the CE and the provider network.

[Diagram omitted: CE routers run routing protocol sessions to PE routers at the edge of the provider core network; the PEs exchange routes over an MP-IBGP session; the Agilent RouterTester, attached via a POS interface, simulates both a CE and the provider network around the device under test.]

Figure 9 - BGP Layer 3 MPLS VPN network


The current version of RouterTester is only able to support one MP-IBGP session per OC-48
port, so this port was used solely to simulate a 50 router P network. One of these simulated
routers was configured as a PE router to send traffic into the VPNs and across the network.
Additionally, an MPLS LSP (using RSVP) was configured between the M40 under test and
each simulated router. Another port on the M40 was then connected to an M160, and
sub-interfaces were used to configure 50 MP-IBGP sessions to simulate connection to other
PEs. This is all illustrated in Figure 10, after the configuration sketch below.
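A hedged sketch of the additional protocol configuration this implies on the M40 follows; the interface names, addresses and group name are invented, not taken from the test configuration:

    protocols {
        rsvp {
            interface so-4/0/0.0;                 /* OC-48 towards the simulated P network */
        }
        mpls {
            label-switched-path to-p-router-01 {  /* one LSP per simulated router */
                to 10.255.1.1;
            }
            interface so-4/0/0.0;
        }
        bgp {
            group pe-mesh {                       /* 50 sessions over the sub-interfaces to the M160 */
                type internal;
                local-address 10.255.0.1;
                family inet-vpn unicast;          /* carry the VPN-IPv4 routes */
                neighbor 10.255.2.1;
                /* ...one neighbor statement per simulated PE... */
            }
        }
        ospf {
            area 0.0.0.0 {
                interface so-4/0/0.0;             /* connection to the simulated OSPF grid */
            }
        }
    }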


[Diagram omitted: the Figure 1 set-up (RouterTester E-BGP routes into a Juniper M20 simulating up to 1000 CEs; GE link with one VLAN per CE into the Juniper M40), extended with an OC-48 link carrying 50 sub-interfaces and 50 MP-IBGP sessions to a simulated provider network, a simulated PE sourcing the traffic flow, and a RouterTester-simulated OSPF grid of 50 routers on a further OC-48 link.]

Figure 10 - Network Scenario Testing configuration


The outcome of this configuration is that, in addition to the previous tests, the M40 under test
now has to support the following:

- 50 MP-IBGP sessions
- 50 MPLS RSVP LSPs
- an OSPF connection to a grid of 50 routers

5.2 Test Results

Once this topology had been configured, the methodology of testing was exactly the same as
for the previous tests. Test Case 3 was repeated for 500 BGP CEs, and the following graph
(Figure 11) shows the number of routes that the M40 was able to hold compared to that
originally recorded in Test Case 3.
During the testing the following error messages were seen on the telnet session being used to
interrogate the M40:
Mar 7 12:12:30 M40-PE rpd[545]: RPD_OS_MEMHIGH: using 807932 kB of memory, 113
percent of available
This did not affect the performance of the router, which continued to forward traffic and
participate in the routing protocols. All that is indicated by this message is that the
routing engine is temporarily paging to the hard disk to circumvent a shortage of physical
memory.


[Graph omitted: routes per VRF against number of terms in the filter for 500 configured VRFs (with filter, counter and policer), comparing Case 3 against the Network Scenario.]

Figure 11 - Graph of Routes per VRF for 500 BGP CEs for the Network Scenario test vs Case 3
The next graph (Figure 12) shows how the latency differs between the results measured for Case 3 and
the results measured under the Network Scenario test.
[Graph omitted: latency (us) against number of terms in the filter for 500 configured VRFs (with filter, counter and policer), comparing Case 3 against the Network Scenario.]

Figure 12 - Graph of Latency vs Filter Terms for 500 BGP CEs for the Network Scenario test vs Case 3
The conclusion that is reached from these results is that even under significant additional load
the number of routes that can be held in a VRF is not significantly changed.
The one major difference between the Network Scenario test and the previous testing was
found to be the time taken to install the routes into the VRFs. This is illustrated in the following
graph (Figure 13), which shows the average and maximum latency values, firstly for installing 100 routes
per VRF (50,000 routes in total) without any MP-IBGP sessions, and then with 50 MP-IBGP
sessions configured to the M160. It was found that monitoring the maximum latency
of all the packets crossing the VPN network gave a very good indication of the time taken for
routes to be installed into the VRFs. The RouterTester was set to sample every second and to
report both the average latency of all of the packets in that sample and the maximum value for
that one-second sample.


[Graph omitted: latency (us, 0 to 30) against time (1 to 251 seconds) during route injection for 500 CEs with 100 routes per VRF, showing the average latency, the maximum latency with 50 MP-IBGPs and the maximum latency with 0 MP-IBGPs.]

Figure 13 - Graph of Latency variation with and without MP-IBGP sessions
As can be seen from the graph, the average latency is constant over the whole test period,
indicating that only a very small number of packets are affected by the increase in load on the
processors during the installation of the routes. The major point to be noted is that when the
M40 has to support 50 MP-IBGP sessions the time taken to install routes can increase by a
factor of three or four.
In conclusion, it can be seen from the results in this section that, although the M40 has to
perform much more processing under the Network Scenario test, the number of routes which
can be held by the VRFs, and hence the scalability figures, remain unchanged.


6 Conclusion

A service provider must consider many factors when selecting a platform to support MPLS
VPN services. BTexact Technologies has a great deal of experience in the assessment and
evaluation of technology, and believes that issues such as equipment stability, scalability and
raw performance are very important factors in the selection of a suitable platform. The big issue
facing network operators today is how to achieve the return on investment from IP services
that they once enjoyed from voice services. Platform cost is a big factor, and IP VPNs allow the
cost of the infrastructure to be shared by a number of customers. It is therefore vital that the
platform deployed can support a large number of customers, and can support customers with
both large and small network requirements.
From the test results shown in this report it can be concluded that the M40 router is capable of
scaling up to the hardware limitations of the router with no instability or traffic loss seen.
When using BGP to insert the routes into the VRFs, approximately 450,000 routes can be
stored by the M40 router. These can be split almost linearly across the VRFs, for example if
1000 VRFs are configured it was proved that each could hold approximately 440 routes with
no errors or instability in the M40. Similar linear results were seen for the other protocols
tested.
When using a single VRF with BGP or static routes, many more routes can be added into the
routing table than with OSPF and RIP, as would be expected. For 25 or more VRFs on the
M40 there is no difference between BGP, RIP, OSPF or static routes in relation to adding
routes into the routing table.
The addition to each PE-CE interface of a 15 term filter with each term containing a counter
causes approximately a 5-6% drop in the number of routes which can be held in the VRF
routing table and causes a slight increase in the latency across the test configuration (less
than 1 microsecond). Note that the total number of counters invoked was 15xN where N is the
number of VRFs.
The addition of a policer has no significant additional impact on the results obtained when
using just a filter and counter.
Finally, it has been shown that even with the increased processing load of supporting 50
MP-IBGP sessions and 50 MPLS RSVP LSPs while connected to a 50 router OSPF grid, the
scalability figures for the M40 remain unchanged.

7 References

[1] Testing Requirements, BGP L3 MPLS VPN Scalability Independent Testing at BT
    Futures Labs, Gary Tate.


Appendix A Detailed Test Results

Test Case 1 - Maximum number of routes per VRF

Protocol BGP:

No. of VRFs    No. of Routes    Error Margin (+)    Latency (us)
25             17900            100                 15.32
50             9000             100                 15.16
75             5900             100                 15.13
100            4400             100                 15.07
200            2200             50                  15.06
300            1400             100                 15.06
400            1050             25                  14.95
500            830              10                  14.92
600            680              20                  14.93
700            560              20                  14.87
800            490              20                  14.98
900            440              20                  15.01
1000           440              20                  14.66

Protocol OSPF:

No. of VRFs    No. of Routes    Error Margin (+)    Latency (us)
25             15000            1000                15.33
50             9200             100                 15.39
75             5500             100                 15.36
100            4200             100                 15.39
200            2000             100                 15.36
300            1250             100                 15.32
400            1000             100                 15.32
500            800              10                  15.12
600            660              10                  15.12
700            550              10                  15.12
800            470              10                  15.02
900            410              10                  15.02
1000           350              50                  14.87

Protocol RIP:

No. of VRFs    No. of Routes    Error Margin (+)    Latency (us)
25             15000            1000                15.33
50             8700             100                 15.39
75             5800             100                 15.39
100            4200             100                 15.37
200            2100             100                 15.2
300            1400             100                 15.19
400            1000             100                 15.24
500            810              10                  15.08
600            660              10                  15.08
700            550              10                  15.08
800            470              10                  14.92
900            410              10                  14.92
1000           350              50                  14.72

Protocol Static:

No. of VRFs    No. of Routes    Error Margin (+)    Latency (us)
25             17000            1000                14.41
50             8600             100                 14.41
75             5700             100                 14.41
100            4300             100                 14.41
200            2100             100                 14.4
300            1400             100                 14.25
400            1000             100                 14.25
500            820              10                  14.25
600            700              10                  14.25
700            600              10                  14.25
800            520              10                  14.25
900            460              10                  14.24
1000           410              10                  14.25

Test Case 2 - Adding firewall filters and counters

Protocol   No. of terms   No. of Routes   Error (+)   Latency (us)
BGP        0              830             10          14.92
BGP        5              800             10          15.55
BGP        10             780             10          15.65
BGP        15             780             10          15.73
RIP        0              810             10          15.08
RIP        5              800             10          15.76
RIP        10             780             10          15.81
RIP        15             780             10          15.91
OSPF       0              800             10          15.12
OSPF       5              790             10          15.75
OSPF       10             770             10          15.82
OSPF       15             770             10          15.87
Static     0              820             10          14.25
Static     5              790             10          14.87
Static     10             780             10          14.87
Static     15             780             10          15.04

Test Case 3 - Adding firewall filters, counters and policers for 500 configured VRFs

Protocol   No. of terms   No. of Routes   Error (+)   Latency (us)
BGP        0              830             10          14.92
BGP        5              790             10          15.71
BGP        10             780             10          15.82
BGP        15             780             10          15.88
RIP        0              810             10          15.08
RIP        5              790             10          15.93
RIP        10             780             10          16.03
RIP        15             770             10          16.09
OSPF       0              800             10          15.12
OSPF       5              770             10          15.93
OSPF       10             760             10          16.03
OSPF       15             760             10          16.09
Static     0              820             10          14.25
Static     5              780             10          15.04
Static     10             770             10          15.04
Static     15             770             10          15.2


Network Test Scenario - Test Case 3 plus 50 MP-IBGP sessions, 50 LSPs and OSPF

Protocol   No. of terms   No. of Routes   Error (+)   Latency (us)
BGP        0              810             10          14.67
BGP        5              780             10          15.6
BGP        10             770             10          15.62
BGP        15             770             10          15.65


Appendix B Juniper Hardware

M20 Hardware:

lab@platinum> show chassis hardware detail
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                53572             M20
Backplane        REV 09   710-001517   AY3257
Power supply A   REV 05   740-001465   002901            AC
Power supply B   REV 05   740-001465   002631            AC
Display
Host 0           REV 04   740-003239   9001026655
Host 0                                 33000007c8452401  Present
SSB slot 0       REV 01   710-001951   AS8087            Internet Processor II
  SSRAM bank 0   REV 02   710-001385   341516            2 Mbytes
  SSRAM bank 1   REV 02   710-001385   340794            2 Mbytes
  SSRAM bank 2   REV 02   710-001385   341149            2 Mbytes
  SSRAM bank 3   REV 02   710-001385   340395            2 Mbytes
SSB slot 1       N/A      N/A          N/A               backup
FPC 2            REV 03   710-003308   BB4648            E-FPC
  SSRAM          REV 02   710-001385   240455            2 Mbytes
  SDRAM bank 0   REV 04   710-000099   354090            64 Mbytes
  SDRAM bank 1   REV 04   710-000099   354148            64 Mbytes
  PIC 0          REV 06   750-002558   BB4379            1x OC-48 SONET, SMSR
FPC 3            REV 03   710-003308   BB4612            E-FPC
  SSRAM          REV 02   710-001385   354703            2 Mbytes
  SDRAM bank 0   REV 04   710-000099   354153            64 Mbytes
  SDRAM bank 1   REV 04   710-000099   354147            64 Mbytes
  PIC 0          REV 06   750-002558   BB4328            1x OC-48 SONET, SMSR

lab@platinum> show chassis routing-engine
Routing Engine status:
  Slot 0:
    Current state       Master
    Election priority   Master (default)
    Temperature         33 degrees C / 91 degrees F
    DRAM                768 Mbytes
    CPU utilization:
      User              0 percent
      Background        0 percent
      Kernel            0 percent
      Interrupt         0 percent
      Idle              100 percent
    Serial ID           33000007c8452401
    Start time          2001-11-12 11:36:51 UTC
    Uptime              85 days, 22 hours, 46 minutes, 4 seconds
    Load averages:      1 minute   5 minute  15 minute
                            0.00       0.00       0.00
Routing Engine status:
  Slot 1:
    Current state       Empty

lab@platinum>


M40 Hardware:
lab@ap1cor1> show chassis hardware detail
Hardware inventory:
Item             Version  Part number  Serial number     Description
Chassis                                00158             M40
Backplane        REV 08   710-000073   AA2133
Power supply A   Rev A1   740-000235   000157            DC
Power supply B   Rev 03   740-000234   000702            AC
Maxicab          REV 08   710-000229   AS0060
Minicab          REV 04   710-001739   AR0842
Display          REV 07   710-000150   AA5133
Host             REV 04   740-003239   9001026631
Host                                   44000007c8646501  Present
SCB              REV 02   710-001838   AN0469            Internet Processor II
  SSRAM bank 0   REV 02   710-001385   339903            2 Mbytes
  SSRAM bank 1   REV 02   710-001385   339976            2 Mbytes
  SSRAM bank 2   REV 02   710-001385   339918            2 Mbytes
  SSRAM bank 3   REV 02   710-001385   339998            2 Mbytes
FPC 2            REV 09   710-000175   AA4809
  SSRAM          REV 01   710-000077   100315            1 Mbyte
  SDRAM bank 0   REV 01   710-000099   100789            64 Mbytes
  SDRAM bank 1   REV 01   710-000099   100782            64 Mbytes
  PIC 0          REV 02   750-000613   AA3176            1x OC-12 SONET, SMIR
  PIC 1          REV 04   750-000603   AA4278            4x OC-3 SONET, SMIR
  PIC 2          REV 05   750-000614   AA3212            1x OC-12 ATM, SMIR
  PIC 3          REV 02   750-002785   AN3262            1x G/E, 1000 BASE-SX
FPC 4            REV 03   710-003308   BB4633            E-FPC
  SSRAM          REV 02   710-001385   354384            2 Mbytes
  SDRAM bank 0   REV 04   710-000099   354966            64 Mbytes
  SDRAM bank 1   REV 04   710-000099   354942            64 Mbytes
  PIC 0          REV 04   750-000612   AP6237            2x OC-3 ATM, MM
  PIC 1          REV 02   750-002785   AL9714            1x G/E, 1000 BASE-SX
  PIC 3          REV 07   750-002303   AN8432            4x F/E, 100 BASE-TX
FPC 6            REV 01   710-001197   AA8631
  SSRAM          REV 01   710-000077   100965            1 Mbyte
  SDRAM bank 0   REV 01   710-000099   501390            64 Mbytes
  SDRAM bank 1   REV 01   710-000099   501391            64 Mbytes
  PIC 0          REV 03   750-000617   AA4545            1x OC-48 SONET, SMIR

lab@ap1cor1> show chassis routing-engine
Routing Engine status:
  Temperature         34 degrees C / 93 degrees F
  DRAM                768 Mbytes
  CPU utilization:
    User              0 percent
    Background        0 percent
    Kernel            0 percent
    Interrupt         0 percent
    Idle              100 percent
  Start time          2002-02-01 16:50:17 UTC
  Uptime              4 days, 17 hours, 36 minutes, 17 seconds
  Load averages:      1 minute   5 minute  15 minute
                          0.03       0.01       0.00

lab@ap1cor1>
