
Campus Network Report

JANUARY 2013 – JUNE 2013

Information Technology Services


University Networking and Infrastructure, NEA
June 20, 2013

Table of Contents
Executive Summary............................................................................................................................................... 3
Network Design .................................................................................................................................................... 4
Service Provider Edge ....................................................................................................................................... 4
Edge Distribution .............................................................................................................................................. 5
Campus Core ......................................................................................................................................................... 6
Campus Distribution and Access Layer ............................................................................................................. 6
Wireless Core .................................................................................................................................................... 7
VPN/Firewalls ................................................................................................................................................... 7
DSL Network ..................................................................................................................................................... 7
Management Network ..................................................................................................................................... 8
Frey Data Center ............................................................................................................................................... 8
CYMRU .............................................................................................................................................................. 9
ResNet .............................................................................................................................................................. 9
Network Infrastructure ................................................................................................................................... 10
IP Subnet and Addressing Plan ........................................................................................................................... 12
VLAN Scheme ...................................................................................................................................................... 13
Routing Review ................................................................................................................................................... 14
OSPF/BGP ....................................................................................................................................................... 14
IPv4/IPv6 ......................................................................................................................................................... 15
Network Performance Review ............................................................................................................................ 16
Campus Network Backbone............................................................................................................................ 16
External Connectivity ...................................................................................................................................... 17
ResNet ............................................................................................................................................................ 17
Wireless Network ........................................................................................................................................... 19
Notable Accomplishments .................................................................................................................................. 21
Data Center ..................................................................................................................................................... 21
Core Network.................................................................................................................................................. 21
Wireless Network ........................................................................................................................................... 21
Switching Infrastructure ................................................................................................................................. 21
Performance Monitoring ................................................................................................................................ 22
Areas of Concern................................................................................................................................................. 23
VPN/Firewall Module and PCI Compliance..................................................................................................... 23

Management Network ................................................................................................................................... 23


DMZ Network across Distribution Sites .......................................................................................................... 23
IPv6 Management........................................................................................................................................... 24
Wireless: 802.11b ........................................................................................................................................... 24
Network Management Systems ..................................................................................................................... 24
Appendix A .......................................................................................................................................................... 25
Campus Core Bandwidth Utilization ............................................................................................................... 25
Appendix B .......................................................................................................................................................... 29
Distribution Sites with Sustained Throughput Rates above Normal .............................................................. 29
Appendix C .......................................................................................................................................................... 31
Appendix D.......................................................................................................................................................... 33
Wireless Client Count by Authentication, spring 2013 ................................................................................... 33

Executive Summary
This network report provides a review of the LSU network, describing the details of its design
and configuration and highlighting metrics that describe its size, performance, and progress. The goal
of this exercise is to provide transparency to University Networking and Infrastructure (UNI) peers and other
Information Technology Services (ITS) units so they can better understand how the network
works, how it grows, how it performs, and what it takes to maintain it. Another goal of
this report is to establish a baseline of the network environment against which data from
future reports can be compared. Finally, the report serves as a mechanism to track the progress of the
network as it evolves, identify areas that need improvement or changes in design, and develop sound life cycle
plans based on identified needs.
This report is intended to be produced biannually, with the first report being delivered before the end of
June, and a second report delivered before the end of December. The report is strictly focused on systems
that are directly or indirectly the responsibility of the Network Engineering and Architecture team, a
department of the University Network and Infrastructure unit.
The report is broken down into eight sections. A description of the network topology is provided in order to
give the reader a clear understanding of all the network components that make up the LSU network and how
they interact with each other. Details on the network infrastructure are provided in terms of the number of
devices and network nodes; this puts into perspective the size of the network and what it
takes to manage it. Specifics on Internet Protocol (IP) addressing, the network subnet structure, and the virtual
local area network (VLAN) scheme explain how address allocations are made and
distributed, and how they are routed to the local and global network. A routing review explores the internal
and external gateway protocols used on the LSU network, explains their functionality, and documents their
current state. Performance measurements analyze network usage in various locations of
the network and highlight significant changes. Throughout the year, multiple changes take place on the
network in the form of upgrades, redesigns, and the addition of new services; a compilation of the most notable
accomplishments is included in this report. Finally, areas of concern are listed to indicate the major
challenges that are currently faced.
The data presented in this report is gathered from various sources such as Cacti graphs, OpenNMS, MySoft
reports, Cisco Network Control System (NCS) statistical data, and network switches. In terms of bandwidth
usage, it should be noted that the reported averages and percentages were calculated using recorded
averages, which are daily average samples and take into account weekends, holidays, and non-business
hours. This is significant in that the averages might not reflect busy hour patterns, which on average are
much higher than the reported values. However, all the graphs provided in this report give enough visual
information to provide a good idea as to what those values are.
Overall, the network was found to be in good standing with significant improvements that have enabled
better performance and provided additional capacity. Investments have been made in key areas of the
network such as the data center, core network, life cycle, and wireless. Certain areas of concern have been
identified for some time, but limited staff and various responsibilities have prevented these concerns from
being addressed. The rest of the year will continue to see additional work addressing network life cycle
replacements, data center and firewall design, and IPv6 research, provided we have the personnel to do this.

Network Design
The LSU network is modeled for robustness and redundancy. It includes four core layer 3 switches in a full
mesh: each core switch connects to the other three at 10Gbps. This design utilizes Equal Cost
Multipath (ECMP) with Open Shortest Path First (OSPF) to provide a full 30Gbps of load-balanced, full-duplex
interconnect capacity.
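The load balancing described above relies on per-flow hashing: a router hashes each flow's 5-tuple to pick one of its equal-cost links, so packets of a single flow stay in order while different flows spread across the mesh. The sketch below is illustrative only; the link names and hash scheme are hypothetical, not LSU's actual configuration.

```python
import hashlib

# Three equal-cost 10Gbps links from one core switch to its peers
# (link names are illustrative).
LINKS = ["core2-te1/1", "core3-te1/1", "core4-te1/1"]

def pick_link(src_ip, dst_ip, src_port, dst_port, proto="tcp"):
    """Hash a flow's 5-tuple so every packet of the flow takes the
    same link, while distinct flows spread across all three links."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return LINKS[int.from_bytes(digest[:4], "big") % len(LINKS)]

# A given flow is always pinned to one link (no packet reordering):
flow_link = pick_link("130.39.1.10", "8.8.8.8", 51515, 443)
assert flow_link == pick_link("130.39.1.10", "8.8.8.8", 51515, 443)
```

Real switches use a similar hash in hardware, which is why a single flow can never exceed one link's 10Gbps even though the aggregate is 30Gbps.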
The majority of the buildings on the LSU campus are connected to the network core via layer 3 distribution
switches. These distribution switches connect to two of the four core switches at either 10Gbps or 1Gbps
speeds via single mode fiber. The distribution switches then downlink to each access layer switch at either
10Gbps, 1Gbps, or 100 Mbps. These access layer switches provide the copper ports that user's computers
plug directly into.
From a conceptual standpoint, the LSU network can be visualized as a number of separate modules that
provide the necessary flexibility for ease of implementation and the required focus to address specific
requirements. The following is a brief description of each of these modules.

LaTech

Campus Core

ResNet

Edge Distribution

ISB

Service Provider Edge


MPLS

Management Network
DBYD

CMB

CYMRU (Foo Router)


Bogon route filtering

CSC

CSC

Main
Firewall

Main
Router

ATHA

ATHA

Backup
Firewall

Backup
Router

LONI

Frey Data Center


CSC

DSL Network

ATHA
or

Campus Distribution and Access Layers


Wireless Core

COX

VPN and Remote


Access

Building
Distribution

CAMD

VPN/Firewalls
...

Metro-E/
AT&T

Building
Access

FETI

HRPO
...

10 Gig
1 Gig

Wireless Access

LEVEL 3

South
Campus

Figure 1. LSU Network: Modular Network Design.

Service Provider Edge


The infrastructure in this module provides connectivity to the internet for the entire LSU campus. The
Louisiana Optical Network Initiative (LONI) is LSU's primary Internet Service Provider (ISP). In addition, LSU
contracted with Cox Communications for backup internet connectivity until the end of spring 2013. On June
21st, the contract with Cox came to an end and LSU contracted with LONI for internet backup connectivity.
The diagram below represents the updated topology:

[Figure omitted: updated Service Provider Edge and Edge Distribution topology, with LONI providing both the main (10 Gig) and backup (1 Gig) internet connections through the main firewall/router pair at CSC and the backup firewall/router pair at ATHA/DBYD.]

Figure 1-2. Updated Topology with LONI Backup Internet

In addition to our main service providers, the LSU network utilizes various other services, such as Metro-E,
Multiprotocol Label Switching (MPLS), virtual private network (VPN), and dark fiber, to allow external LSU
entities to have direct connectivity to the LSU network.

Edge Distribution
This module provides both the capabilities to protect the LSU network, via firewall policies and deep packet
inspection, and the distribution capabilities to connect to the rest of the world. As depicted in figure 1, the
edge distribution is composed of a pair of Juniper SRX firewalls and two Cisco Catalyst layer 3 switches, all
configured in a fully redundant design. From an internet reliability standpoint, the edge distribution is multihomed to two ISPs, with LONI being the primary connection and Cox Communications the backup [1].
From the edge distribution module, LSU peers with their ISPs via Border Gateway Protocol (BGP). The
network prefixes below are advertised to our ISPs, which in turn they advertise to the Internet:

130.39.0.0/16 (LSU)
192.16.176.0/24 (management)
199.190.250.0/23 (CAMD)
199.190.252.0/24 (CAMD)
204.90.32.0/20 (LOUIS)
204.90.48.0/22 (LOUIS)
173.253.128.0/17 (LSU Wireless)
96.125.0.0/17 (LSU Wireless)
2620:105:B000::/40 (LSU)
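As a sanity check, the advertised blocks above can be validated with Python's standard ipaddress module. The script below is an illustrative aid, not part of any production tooling; it confirms that every prefix parses as a proper network address and that no two IPv4 blocks overlap.

```python
import ipaddress

# Prefixes advertised to LSU's ISPs, as listed above.
ADVERTISED = [
    "130.39.0.0/16", "192.16.176.0/24", "199.190.250.0/23",
    "199.190.252.0/24", "204.90.32.0/20", "204.90.48.0/22",
    "173.253.128.0/17", "96.125.0.0/17", "2620:105:B000::/40",
]

# ip_network() with the default strict=True rejects prefixes whose
# host bits are set, catching typos in the advertisement list.
networks = [ipaddress.ip_network(p) for p in ADVERTISED]

# No IPv4 block should overlap another (each is advertised separately).
v4 = [n for n in networks if n.version == 4]
for a in v4:
    for b in v4:
        if a is not b:
            assert not a.overlaps(b)

total_v4_addresses = sum(n.num_addresses for n in v4)
```

A routing engineer could run a check like this before each advertisement change to catch malformed or overlapping prefixes early.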

[1] Refer to figure 1-2 for the updated topology with LONI as internet backup.

The edge distribution module also contains a VPN and remote access component. This allows LSU users to
connect remotely to the LSU campus via a VPN client. In addition, this component provides the means to
connect LSU-affiliated sites to our campus. Examples of these sites are the LSU Golf Course, the Chancellor's
house, and the Rural Life Museum.

Campus Core
The campus core is the backbone of the LSU network. Its primary function is to provide the necessary
capacity, performance, and resiliency to interconnect all distribution sites.

Campus Distribution and Access Layer


In this module all distribution devices dual connect to the campus core [2]. In addition, the distribution layer
aggregates all devices from the access layer. It is within the distribution layer that virtual local area network
(VLAN) definitions and routed interfaces are defined, as well as policies for quality of service (QoS), security
definitions, and segmentation. Even though there is uplink redundancy provided by the dual core
connectivity, the distribution layer does not have device redundancy, except in critical locations.
It should also be noted that even though the majority of the campus buildings have a dedicated distribution
layer device, there are exceptions. The figure below highlights buildings that rely on other distribution layer
devices for connectivity to the network. This design is driven by factors such as a lack of fiber infrastructure. It
should be understood that the distribution sites in these locations act as a single point of failure for all other
buildings connected to them.

[2] Dual connectivity provides connection redundancy only. The LSU campus has limited fiber geo-diversity per
building, meaning that a physical disturbance of a building's fiber optic cable can bring down both distribution
uplinks, since they take the same physical path.

[Figure omitted: a diagram of distribution sites (CSC, ATHA-3750, ATHA-FOB with a trunk for the XOS video system, DBYD-L2, FOB, and others) showing the access-layer buildings, residential halls, and the DSLAM network that depend on another site's distribution device for connectivity to the campus core.]

Figure 2. Campus Buildings without Dedicated Distribution Devices.

Wireless Core
The wireless core is made up of four distribution layer devices each with Cisco Wireless LAN Controller
modules (WiSMs). The network design centralizes all wireless LANs into an intelligent system that enables
ease of deployment, facilitates management, and is both highly available and scalable. In addition, it provides
all the features of a true enterprise system such as seamless mobility, advanced security, high performance,
and intelligent RF control.

VPN/Firewalls
As depicted in figure 1, this module contains security devices that are implemented for very specific
applications. Examples of those applications include PCI compliant vendor networks, departmental firewalls,
and VPN tunnels to external sites.

DSL Network
For locations where the campus fiber infrastructure is not available, this module provides the means to
connect to the main campus via a digital subscriber line (DSL) modem. The table below lists all the sites that
are currently being serviced via DSL.

1. Women's Soccer for wireless
2. Parking Garage Trailer at OPH
3. ENGR: Old Eng Shops RM 246
4. DLAB Back Shed
5. North Stadium Gate 1
6. Motor Pool Building
7. ENGR: Old Eng Shops RM 248
8. ENGR: Old Eng Shops RM 124
9. Building Services Shop
10. Ag Center Warehouse Gourrior
11. Facilities Landscape Office
12. SREC
13. L-Club temp
14. Old Forestry
15. Dairy Farm - Student Apts.
16. Hazardous Waste Fac.
17. Blowout School
18. Dairy Manager's House
19. Active; No description
20. SREC Adventure Complex
21. Gennex LSU LAN
22. WN-1 (Res Life)
23. Friends of the Library
24. Old Surplus Warehouse (Movers)
25. Active; No description
26. LADDL Construction Trailer
27. Green House Bldg 440-8
28. Control Utility
29. UNIO WebCam on LSA
30. Horticulture Teaching Facility
31. SUGA (Old Credit Union)
32. Natatorium Office
33. TIGS Drug Testing
34. Natatorium Score Booth
35. Golf Course
36. APPO (ATH Physical Plant Office)
37. Gennex VPN
38. Dairy Farm - Main Bldg.
39. LA House
40. Poultry Science
41. Dean French House

Table 1. Campus Buildings on DSL Network.

Management Network
The management network consists of a pair of distribution devices located in Frey Computing Services and
David Boyd. Its original intent was to allow for the connection of devices used for Secure
Shell (SSH) and Simple Network Management Protocol (SNMP) management of network devices. At some
point, the Network Operations Center (NOC) was created with the intention of having it monitor the LSU
network and the LONI network, and provide specific services for the management of Internet2 as a backup to
the NOC at Indiana University. A couple of years later, the first border firewall (Juniper ISG-2000) was
installed at LSU. Once the firewall was in place, it became increasingly difficult to manage all of the
exceptions that had to be made to allow management devices to connect to LSU network devices, as well as
to manage devices off campus on LONI or Internet2. As a result, network management resources
such as NMS, STRM, Splunk, and Cisco LMS, which are used for the day-to-day management of the LSU, LONI, and
Internet2 networks, were moved to a dedicated LONI connection that is not subject to firewall rules. This move also
allowed for expanded use of routing protocols, such as OSPF and BGP, to enhance the capabilities of the
management network.
Two specific network prefix advertisements to the internet are made from the management network:

130.39.2.0/24
192.16.176.0/24

As already mentioned, these two advertisements are made specifically from the management network, and
as such, they do not traverse the perimeter firewall.

Frey Data Center


The data center module is made up of high performance switches designed specifically for mission-critical
data center environments. Some of the most important features implemented in this module are high
10Gbps Ethernet density, resiliency, scalability, and high availability. In addition, the module acts as a unified
server access platform by providing connectivity to rack and blade servers, as well as converged fabric
deployments.
[Figure omitted: the Frey data center topology, spanning David Boyd Hall and the Frey Computing Center. Paired Nexus 5548UP switches, interconnected with the LSU campus core over 10G SMF/MMF links and joined by vPC/port-channel pairs, serve non-UCS servers, a legacy DC switch, the UCS system (dual 6248 fabric interconnects), up to eight Nexus 2248 FEXs with attached servers, and a NAS storage network with EMC Isilon NAS.]

Figure 3. LSU Frey Data Center.

CYMRU
LSU utilizes a free service provided by Team Cymru (pronounced "kum-ree") in an effort to black-hole bogon
routes. A bogon is a route that should never appear in the Internet routing table. A packet routed over the
public Internet (not including VPN or other tunnels) should never have a source address in a bogon
range. Bogon addresses are commonly found as the source addresses of distributed denial of service (DDoS)
attacks.
In the LSU network there is a single layer 3 switch, named Foo Router, which peers with two Cymru bogon
routers. Over this peering, Cymru advertises a list of bogon prefixes (IPv4/IPv6) to LSU, and these prefixes are
injected into the LSU network via an internal BGP (iBGP) peering with each of the four core routers.
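The idea behind bogon filtering can be illustrated with a few well-known reserved ranges and Python's standard ipaddress module. The list below is a small static sample for illustration only; the actual Cymru feed is much longer, changes over time, and is consumed via BGP rather than by a script.

```python
import ipaddress

# A handful of well-known IPv4 bogon ranges (sample only; the real
# Team Cymru feed is far more complete and updated continuously).
BOGONS = [ipaddress.ip_network(p) for p in (
    "0.0.0.0/8", "10.0.0.0/8", "100.64.0.0/10", "127.0.0.0/8",
    "169.254.0.0/16", "172.16.0.0/12", "192.168.0.0/16",
    "224.0.0.0/4", "240.0.0.0/4",
)]

def is_bogon(addr: str) -> bool:
    """Return True if addr falls inside a known bogon range, i.e. a
    source address that should never appear on the public Internet."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BOGONS)

assert is_bogon("192.168.95.128")   # RFC 1918, internal use only
assert not is_bogon("130.39.0.1")   # publicly routable LSU address
```

In production the same membership test happens in the forwarding plane: traffic whose source matches an advertised bogon prefix is dropped (black-holed) rather than inspected in software.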

ResNet
The Residential Network module contains all the residential halls that are managed by the Department of
Residential Life. Its network design differs from the rest of the campus: the ResNet network is made up
of a single distribution device located in Frey Computing Services, and all residential halls connect to this
device via layer 2 trunks.
From a routing perspective and strictly speaking about wired connectivity, the ResNet network is part of the
LSU routing domain, but takes a different path to the Internet via a dedicated LONI connection. This gives the
ResNet unrestricted access, inbound and outbound, to the internet. Traffic bound to the LSU network is
restricted via an access control list (ACL) at the distribution site, with the exception of essential services such
as DNS, DHCP, web, and secure web. Any other services require the use of VPN client connectivity.
From a wireless perspective, ResNet users are offered the same set of Service Set Identifiers (SSID) that are
offered to the rest of the campus. As such, wireless traffic for the ResNet is routed via the LSU core and
ultimately via the border router.
Four network prefix advertisements to the internet are made from the ResNet network:

76.165.224.0/19(LONI)
130.39.164.0/24(LSU)
130.39.142.0/23(LSU)
2620:105:B040::/44(LSU)

Network Infrastructure
In terms of device and cabling infrastructure, the LSU network is a complex network made up of a multitude
of networking components.
The primary components that provide connectivity to the network are distribution layer devices (multilayer
switches) and access layer devices (switches and access points). In addition, security devices provide filtering,
inspection, and remote connectivity capabilities. The numbers shown in table 2 represent the network
device infrastructure at the end of spring 2013.

Device Type                            Details                    Qty
Building Distribution Layer            Cisco Catalyst             106
Data Center Distribution Layer         Nexus 7009                 2
Main Core Devices                      Cisco Catalyst 6500E       4
Access Layer Devices                   Cisco/HP                   1030
Wireless Access Points All (NCS)       Cisco                      2427
Wireless Controllers                   Cisco WiSM                 12
Wireless Distribution Layer            Cisco Catalyst 6500E       4
Perimeter Firewall                     Juniper SRX                2
Main Campus Remote Connectivity/VPN    Cisco ASA                  3
Departmental Firewalls                 Cisco ASA                  6
DSL Access Multiplexer                 Adtran DSLAM               4

Table 2. Network Device Count on the LSU Network.

Even though the majority of the network device infrastructure is Cisco, a number of legacy layer
2 switches from Hewlett-Packard (HP) remain part of the LSU network. These devices, however,
are scheduled to be replaced as part of a multi-year network replacement plan, starting in fiscal year
2013-2014, with the goal of having all of them replaced by fiscal year 2014-2015.
Model No.    Qty
2810         40
2600         8
2610         77
2626         17
2900         13
2910         2
3500         1
4208         14
5406         1
5412         1
6200         4
TOTAL:       178

Table 3. HP Switches on the LSU Network.

Active network nodes represent the data nodes that departments pay for and that are enabled on the LSU
network infrastructure. The table below displays the total number of active network nodes, as well as their
breakdown by location and connection type.
Wired Nodes                   29148
Wired Nodes ResNet            5709
WAP Nodes                     1672
WAP Nodes ResNet              657
Active Network Nodes Total    37186

Table 4. Active Network Nodes Breakdown and Total. [3]

[3] Note that the total number of active WAP nodes, 2329, differs from the total number of APs shown in table 2,
2427. The difference is 98 WAP nodes that are not being billed.

IP Subnet and Addressing Plan


The LSU network has a number of public IP address allocations, some of which are owned by the university
and others by other entities:
CIDR                  Description    OrgName
130.39.0.0/16         General Use    LSU
192.16.176.0/24       Management     LSU
199.190.250.0/23                     CAMD
199.190.252.0/24                     CAMD
204.90.32.0/20                       LOUIS
204.90.48.0/22                       LOUIS
173.253.128.0/17      Wireless       LSU
96.125.0.0/17         Wireless       LSU
76.165.224.0/19       ResNet         LONI
2620:105:B000::/40    IPv6           LSU

Table 5. Public IP Address Blocks.


The two IP blocks for wireless are fully reserved for the LSU wireless network only, and they are almost at
capacity. The 130.39.0.0/16 block is the primary space used for the rest of the campus. Based on the table
below, 76% of the available space is in use.

Block Size    Available    Subtotal
1024          9            9216
512           3            1536
256           10           2560
128           10           1280
64            12           768
32            12           384
16            9            144
8             6            48
TOTAL:                     15936

Table 6. Available Free IP Space in 130.39.0.0/16.
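The 76% figure can be reproduced from the table: subtracting the free-space total from the 65,536 addresses in a /16 gives the utilization. A quick check, using the subtotals listed above:

```python
import ipaddress

campus = ipaddress.ip_network("130.39.0.0/16")

# Free-space subtotals from the table above.
free = 9216 + 1536 + 2560 + 1280 + 768 + 384 + 144 + 48

used_fraction = (campus.num_addresses - free) / campus.num_addresses
print(f"{used_fraction:.0%}")  # 76%
```

This matches the in-use figure quoted for the 130.39.0.0/16 block.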

There are currently no statistics on the amount of IP address space that is utilized on a continuous basis.
However, on average, a typical new building will utilize a block of approximately 256 to 512 IP addresses from
the 130.39.0.0/16 block.

VLAN Scheme
On average, every distribution site has a total of 9 VLANs defined. The main purpose of these VLANs is to
segment the network in a way that provides enough flexibility from an IP addressing standpoint, as well as for
security and routing reasons.
VLAN ID    Description         IP Space
70         Security-cameras    private
100        Users               public
300        devices-static      private
301        devices-dhcp        private
315        VoIP                private
400        Video Conference    public
500        DMZ                 public
915        Management          private
920        Wireless APs        private

Table 6. Typical IPv4 VLAN Scheme.


Because every distribution site can have different needs, additional VLANs are added on an as-needed basis.
Due to the complexity of the network, adding VLANs to the current scheme is highly discouraged. VLANs 300
and 301 have been identified as having similar needs, and as such, are currently being collapsed into a single
network prefix under VLAN 300. In addition, because VLAN 400 utilizes public IPv4 space and its usage is
very low, it is under consideration to remove it from all distribution sites and leave it in place only in locations
where it is actually being used. Finally, it should be noted that table 6 refers only to IPv4 and contains a
combination of public and private (RFC 1918) IP address spaces. The corresponding IPv6 prefixes are all public
addresses.

Routing Review
OSPF/BGP
Open Shortest Path First is the dynamic routing protocol that is used to exchange routing information within
the LSU network. In order to provide a level of hierarchy, the OSPF network is subdivided into two main
routing areas. The main area is called the core or backbone, and all other areas are directly connected to the
core.
Following Cisco's best practices, there are a number of factors that influence OSPF scalability and
performance:

Number of adjacent neighbors for any one router: any one router should have no more than 60 neighbors.
Number of adjacent routers in an area: generally, an area should have no more than 50 routers.
Number of areas supported by any one router: to maximize stability, one router should not be in more than three areas.
Designated router (DR) selection: it is a good idea to select routers that are not already heavily loaded with CPU-intensive activities to be the DR and backup DR (BDR). In addition, it is generally not a good idea to select the same router to be the DR on many multi-access links simultaneously.
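These guidelines lend themselves to a simple automated check. The sketch below is illustrative only; the router names, neighbor counts, and area sets are hypothetical inputs, not taken from the LSU topology.

```python
def check_ospf_guidelines(routers):
    """Check a topology against the scalability guidelines above.

    routers maps router name -> {"neighbors": int, "areas": set}.
    Returns a list of human-readable warnings.
    """
    warnings = []
    area_members = {}
    for name, info in routers.items():
        if info["neighbors"] > 60:       # neighbor-count guideline
            warnings.append(f"{name}: more than 60 neighbors")
        if len(info["areas"]) > 3:       # areas-per-router guideline
            warnings.append(f"{name}: in more than three areas")
        for area in info["areas"]:
            area_members.setdefault(area, set()).add(name)
    for area, members in area_members.items():
        if len(members) > 50:            # routers-per-area guideline
            warnings.append(f"area {area}: more than 50 routers")
    return warnings

# Illustrative data only:
sample = {
    "dist-a": {"neighbors": 2, "areas": {0, 12}},
    "core-1": {"neighbors": 25, "areas": {0, 12, 13, 14}},
}
print(check_ospf_guidelines(sample))  # core-1 flagged for four areas
```

A script like this could be fed from SNMP or configuration exports to flag guideline violations as the network grows.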
Area Number          12    13    14    23    24    34
Number of routers    22    18    20    25

Table 7. Number of Routers per Major OSPF Area.

In addition, performance also depends on the amount of routing information exchanged among routers
within the same area and the entire OSPF autonomous system. Routing information in OSPF depends on the
number of routers and links to adjacent routers in an area.
On average, every distribution site on campus announces about 10 network prefixes. Because of a lack of
summarization in the IPv4 space, all of these prefixes are announced individually. Table 8 contains the
number of network prefixes announced per OSPF/BGP process ID, broken down by core devices
and border switches. It also contains the total number of OSPF areas that each switch belongs to.
A couple of observations can be made from the table below. There is a clear difference in the number of
prefixes advertised via IPv4 versus IPv6: the IPv6 prefixes are nearly half of those advertised via IPv4. This is
due to the flexibility that IPv6 provides in terms of its much larger address allocation. In addition, the large
number of prefixes advertised via BGP is expected, as these are part of the Cymru service and we accept a
full list of bogons.

Protocol           CSC Core    DBYD Core    CMB Core    ATHA Core    CSC Border
IPv4 OSPF 2055     1400        1446         1467        1485         1478
IPv4 BGP 2055      5337        5337         5337        5337         1
IPv6 OSPF 2055     754         699          711         764          338
IPv6 BGP 2055      75986       75986        75986       75986        1

OSPF areas per device:
CSC Core:   0, 12, 13, 14, 130.39.0.3, 130.39.0.110, 130.39.3.5, 130.39.254.28, 130.39.254.36, 192.168.95.128, 130.39.0.0
DBYD Core:  0, 12, 23, 24, 130.39.0.110, 130.39.254.33, 192.168.95.128, 130.39.0.0
CMB Core:   0, 14, 24, 34, 130.39.254.28, 130.39.254.36, 130.39.0.0
ATHA Core:  0, 13, 23, 34, 130.39.0.110, 130.39.254.33, 130.39.0.0

Table 8. OSPF IPv4/IPv6 Subnet Prefixes and Areas per Router.

IPv4/IPv6
The LSU network uses a dual-stack implementation of IPv6, which means that the IPv6 network runs
alongside the IPv4 network on the same infrastructure. From a routing perspective, IPv4 requires OSPF
version 2 and IPv6 requires OSPF version 3. Even though these two versions are essentially independent
protocols, the OSPF network design and structure for both, as they apply to LSU, are identical.
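As an illustration of what dual-stack means for a host, the short sketch below uses Python's socket module to resolve both address families and orders IPv6 results first, mirroring the usual dual-stack preference (the example hostname in the comment is a placeholder):

```python
import socket

def resolve_dual_stack(host, port=80):
    """Return (family, address) pairs for a name, IPv6 first when available."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    # Sort so AF_INET6 results come before AF_INET, mirroring dual-stack preference
    infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
    return [(family.name, sockaddr[0]) for family, _, _, _, sockaddr in infos]

# Example (placeholder hostname): resolve_dual_stack("www.example.edu")
```

A dual-stack client that prefers IPv6 in this way will use the IPv6 path whenever one exists, which is why every IPv4-facing change on the network needs a matching IPv6 counterpart.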


Network Performance Review


Campus Network Backbone
There were no major campus-wide backbone outages year-to-date. Core upgrades during the semester
caused minor outages in specific locations, but those issues have been addressed. In addition, the QoS
configuration applied across the entire network is under review, as it has been recognized as not properly
handling traffic prioritization and, in some instances, has been found to be detrimental to overall network
performance.
The minimum uplink connectivity from a distribution site to the campus core is 1 gigabit, with the exception
of locations where there is no fiber infrastructure. As 10 gigabit continues to expand on the LSU network,
more and more sites are being uplinked at 10 gigabit. The chart below represents the distribution of
uplink speeds on the LSU campus.

Figure 4. Building Uplink Connection Speeds (1 Gig: 66% of buildings; 10 Gig: 34%).

All network traffic traverses the campus core. Bandwidth utilization statistics for all core uplinks can be
reviewed in Appendix A. All core uplinks experienced normal traffic patterns with relatively comparable
traffic load distributions. CPU utilization remained at an average of 10% across all cores, which highlights the
benefits and efficiency of the hardware switching technology utilized on the network infrastructure.
Network uplinks from the core to distribution sites averaged between 10Mbps and 40Mbps of bandwidth
usage for the majority of sites, with few exceptions. Three distribution sites were observed to have
sustained throughput rates above normal. One of those sites, the Residential Network, justifies those rates
because a single distribution device serves all residential halls. For the other two sites, Semmes Parking
Garage and University Public Safety, attention will need to be paid to ensure that the sustained throughput
does not increase or cause network resource constraints. The bandwidth usage graphs for these three sites
can be seen in Appendix B.
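Utilization figures like these are typically derived from periodic samples of interface octet counters (for example, the 64-bit SNMP ifHCInOctets counter): the average rate is the counter delta, converted to bits, divided by the sampling interval. A minimal sketch of that computation, with illustrative sample values:

```python
def avg_rate_mbps(prev_octets, curr_octets, interval_s, counter_bits=64):
    """Average bit rate in Mbps between two counter samples, handling wraparound."""
    delta = curr_octets - prev_octets
    if delta < 0:  # counter wrapped between samples
        delta += 2 ** counter_bits
    return delta * 8 / interval_s / 1_000_000

# Illustrative samples taken 300 seconds (a common polling interval) apart
print(avg_rate_mbps(1_500_000_000, 2_250_000_000, 300))  # 20.0 (Mbps)
```

Monitoring systems apply this calculation to every polled interface; the per-uplink graphs in the appendices are plots of exactly these interval averages.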


External Connectivity
There were no major issues related to external connectivity; two minor issues were experienced. The
first manifested as slow Internet connectivity and was traced to the Intrusion Detection and Prevention
engine on the perimeter firewall. This issue was resolved and the condition remains stable. The second was
a system crash on the supervisor engine of the main border router. The supervisor engine was found to be
in good condition and continues to be in operation; to prevent a recurrence, a second supervisor engine was
installed and should provide adequate redundancy.
In September of 2012, LSU's commodity Internet bandwidth increased from 500Mbps to 1.849Gbps. This was
in part a result of the new Internet usage and charge policy recommended by the LONI Management
Council and approved by the Board of Regents. The new policy established two service level standards, of
which LSU signed on for the Multiple Provider Guarantee (MPG) level.
The average Internet bandwidth usage for spring 2013 was 982.6Mbps, compared to fall 2012, which saw
an average of 815.89Mbps. The percentage increase in Internet bandwidth usage from fall 2012 to spring
2013 can be seen in table 9. Detailed statistics on Internet bandwidth utilization at the border router are
provided in Appendix C.

Percentage Increase in Bandwidth

Total In    Total Out    IPv4 In    IPv4 Out    IPv6 In    IPv6 Out
18.68%      26.87%       23.06%     17.40%      -2.25%     80.56%

Table 9. Bandwidth Consumption Percentage Increase from fall 2012 to spring 2013.
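The semester-over-semester figures follow from the standard percentage-change formula. Applied to the overall averages cited earlier (815.89Mbps in fall 2012 versus 982.6Mbps in spring 2013), it yields roughly 20.4%; the per-direction percentages in table 9 differ slightly because they are computed separately from the inbound and outbound averages:

```python
def pct_increase(old, new):
    """Percentage change from an old value to a new value."""
    return (new - old) / old * 100

# Overall average Internet usage, fall 2012 -> spring 2013
print(round(pct_increase(815.89, 982.6), 2))  # 20.43
```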

Recent network improvements have increased 10 gigabit capacity at the core, allowing the flexibility to
provide 10 gigabit to new construction and to existing buildings in need of additional bandwidth.

ResNet
The traffic distribution for the ResNet network is split between wireless and wired. As seen in table 10, the
bandwidth utilization for Internet-bound traffic is comparable to that of LSU-bound traffic. It should be
noted that the majority of the LSU-bound traffic is wireless traffic.

Destination    in (Mbps)    out (Mbps)
LSU            105.36       83.48
Internet       21.77        160.41

Table 10. ResNet Bandwidth Utilization for spring 2013.


Figure 5. ResNet to Internet, Total Bandwidth Utilization, spring 2013.

Figure 6. ResNet to Internet, Total Bandwidth Utilization, fall 2012.

Figure 7. ResNet to LSU, Total Bandwidth Utilization, spring 2013.


Figure 8. ResNet to LSU, Total Bandwidth Utilization, fall 2012.

Wireless Network
As mobile devices and wireless technology proliferate, wireless usage continues to grow rapidly. In terms of
usage by user type, students continue to be the primary consumers of the LSU wireless network.
Figure 9. Average Number of Wireless Users by User Type (Staff: 250 / 1.84%; Faculty: 479.34 / 3.54%; Guest: 774.9 / 5.72%; Students: 12,050 / 88.90%).4

In terms of bandwidth usage, the exact averages could not be determined for technical reasons, but
wireless traffic is estimated at roughly 33% of the average Internet bandwidth, or about 324Mbps.
The protocol distribution chart below indicates that 802.11g continues to be the prevalent wireless protocol
in use. 802.11n in both the 5GHz and 2.4GHz bands continues to show an even distribution. The averages
for 802.11b indicate that this protocol is barely used.
4

The numbers in this graph come from the aggregate of two RADIUS servers. Unfortunately, the scripts that
enabled data gathering for one of the RADIUS servers stopped working in March of 2012, so the numbers
shown in this graph do not represent the actual number of users seen on the wireless network. This issue
was corrected as of June 19th, 2013.


Figure 10. Wireless Client Count by Protocol.


Notable Accomplishments
The following are notable accomplishments that UNI made in the area of networking:

Data Center Upgrade


Redesigned and upgraded the Frey Data Center. The new architecture uses Cisco Nexus 7K and Nexus 5K
switches and provides enhanced performance and resiliency.

Core Network Upgrade


Upgraded all core network switches with new supervisor modules and installed new linecards to add more
10Gbps capacity. The upgrade made use of the latest Cisco Supervisor Engine 2T, which provides a
two-terabit (2,080Gbps) crossbar switch fabric. This translates into optimized switching performance for
the LSU core. The figure below shows the additional 10Gbps capacity that was realized as a result of this
improvement.

Figure 11. Available 10 Gigabit Port Capacity at Core Locations (DBYD, ATHA, CMB, CSC).

Wireless Network Upgrade


The wireless network has been undergoing a major upgrade since last year. As part of UNI's multi-year
network replacement plan, 633 legacy wireless access points were upgraded at the end of spring 2013. The
upgrade replaced 802.11a/g access points (APs) with dual-band 802.11n APs, providing increased data
rates, performance, and intelligence. In addition, three new-generation wireless controllers, as well as two
high availability controllers, were purchased. These components have been installed, but the actual
migration won't occur until the fall of 2013.

Switching Infrastructure Life Cycle


Aging layer 2 switching devices have also been replaced as part of UNI's multi-year network replacement
plan. The focus for fiscal year 2012-2013 was the replacement of Cisco 2950 switches, as these devices
are no longer supported and only provide 100Mbps on user-facing ports. The replacement switches are
1Gbps-capable with Power over Ethernet (PoE) and 10Gbps uplink capability. At the end of spring 2013, only
46 switches remained to be replaced.

Performance Monitoring Enhancements


There are many factors that can affect network performance. Without adequate tools, the detection,
analysis, and troubleshooting of such factors can be very challenging. In the past year, UNI implemented two
perfSONAR nodes that were obtained as part of an Internet2 grant. These nodes are part of a global
infrastructure that enables performance measurement and eases troubleshooting. Their implementation in
the LSU network has enabled UNI to provide better service to the High Performance Computing (HPC) and
research community. In addition, UNI is working with the Réseaux IP Européens Network Coordination
Centre (RIPE NCC) to join their Internet measurement network and gain a better understanding of the state
of the Internet. Finally, UNI, in collaboration with v6Sonar, has implemented IPv6 agents throughout the
campus to enable additional IPv6 visibility and ultimately provide a better experience for its users and the
rest of the world.


Areas of Concern
VPN/Firewall Module and PCI Compliance
As depicted in figure 1, the VPN/Firewall module addresses specific networking needs for different entities.
One of those needs is PCI compliance. To address it, firewall appliances are installed at every distribution
site that has a PCI requirement; these firewalls provide both network segmentation and stateful inspection.
Currently only 9 distribution sites have a firewall appliance for this purpose. The challenge presented by
this solution involves scalability, maintenance, and troubleshooting. The firewall appliances must be
managed individually, and their configuration is complex: each must run OSPF with the distribution site,
advertise a private IP network prefix that is NATed at the perimeter firewall, and provide remote access
capabilities. As this solution expands, it has the potential to add more complexity to the overall network
and prevent UNI from providing adequate service. A solution must be found that simplifies the setup, such
as a more centralized design that provides easier management and control.

Management Network
Through different events that have taken place over time, the management network has evolved into a
network that today serves multiple purposes; as such, it can no longer be accurately described as simply
the management network. Below is a list of applications that run on the management network:

- Network gateway for LONI's staff
- Peering router for CYMRU
- Network gateway for management resources such as NMS, STRM, SPLUNK, and Cisco LMS
- Card swipe system
- VPN remote access for UNI staff
- DNS server OSPF peer

Due to the number of applications that run on the management network, the complexity of its setup needs
to be addressed and simplified. A determination needs to be made as to which services should be moved
and where they should be moved.

DMZ Network across Distribution Sites


DMZ networks as implemented in the LSU network do not follow the traditional convention of a true DMZ.
On the LSU campus, DMZ networks are simply configured as a segmented network at every distribution site.
They are routed throughout the entire LSU network and have specific security policies at the perimeter
firewall that give them greater visibility to the outside world. This is in contrast to the traditional convention
of a DMZ network, whose main purpose is to be an isolated network with public access but segmented or
restricted from the LAN. The concern is that if any device on a DMZ network were compromised, that device
would have access to the entire LSU network. The solution would be a design that centralizes DMZ
resources and provides adequate measures to protect the rest of the LSU campus network.


IPv6 Management
LSU has been an early adopter of IPv6 and has made great progress in the past years. However, a challenge
remains: the current dual-stack implementation means that for every network action that involves IPv4, a
corresponding IPv6 action must take place. Attention must be paid so that IPv6 is as carefully managed as
IPv4, including network definitions, security policies, troubleshooting, and network design.

Wireless: Eliminating 802.11b


The wireless medium is a shared service. In areas where wireless clients with different data rate capabilities
coexist, the slowest performing client can have an impact on overall performance. As wireless has
progressed over the years, utilization of 802.11b, which only provides data rates of up to 11Mbps, has
dropped dramatically. Based on figure 10, disabling 802.11b would be of great advantage to the LSU
wireless network, as it would greatly improve overall performance. Continuing to support this protocol will
not only make it challenging for UNI to provide quality service, but will also have a direct impact on the
experience of LSU users.
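The cost of keeping slow clients on the air can be sketched with a simplified airtime model: under per-frame fairness, aggregate throughput approaches the harmonic mean of the client data rates. This is a rough model that ignores protocol overhead and retransmissions, and the rates used are nominal 802.11g/b maximums:

```python
def shared_airtime_throughput_mbps(rates_mbps):
    """Aggregate throughput when clients take turns sending equal-sized frames.

    Each frame's airtime is inversely proportional to the sender's data rate,
    so the aggregate works out to the harmonic mean of the rates. Simplified:
    no protection overhead, management traffic, or retransmissions.
    """
    n = len(rates_mbps)
    return n / sum(1 / r for r in rates_mbps)

# Two 802.11g clients at 54Mbps vs. one 54Mbps client sharing with an
# 802.11b client at 11Mbps
print(round(shared_airtime_throughput_mbps([54, 54]), 1))  # 54.0
print(round(shared_airtime_throughput_mbps([54, 11]), 1))  # 18.3
```

Under this model, a single 11Mbps client drags the cell's aggregate throughput from 54Mbps down to roughly 18Mbps, which is why removing 802.11b support benefits every user on the cell.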

Network Management Systems


Two areas in the realm of network management have been of concern for UNI. For years UNI has relied on
ZipTie for device configuration backups; unfortunately, this open source project is no longer supported.
Backups are extremely important, and a reliable solution is a must. In addition, UNI does not have the
ability to deploy bulk network configuration changes. Such changes are usually required as a result of
network changes or security concerns; without an automated tool, they must be made manually, a process
that is time consuming and error-prone. Appropriate tools are needed to allow the UNI group to address
these issues efficiently and reliably.
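In the interim, part of the bulk-change problem can be addressed by generating per-device command sets from a template, leaving the actual push to an SSH automation library. The sketch below covers only the generation step; the device names, community strings, and ACL number are hypothetical examples, not actual LSU configuration:

```python
# Hypothetical bulk change: rotate the SNMP community string on access switches.
# Generation only -- pushing the lines would use an SSH automation library.

TEMPLATE = [
    "snmp-server community {new_community} ro {acl}",
    "no snmp-server community {old_community}",
]

def render_change(devices, old_community, new_community, acl="10"):
    """Return {device: [config lines]} for a campus-wide SNMP community rotation."""
    values = {
        "old_community": old_community,
        "new_community": new_community,
        "acl": acl,
    }
    return {dev: [line.format(**values) for line in TEMPLATE] for dev in devices}

cfgs = render_change(["acc-sw-01", "acc-sw-02"], "public", "n3wc0mm")
print(cfgs["acc-sw-01"][0])  # snmp-server community n3wc0mm ro 10
```

Because every device receives the same reviewed template, this removes the per-device transcription step where manual changes typically introduce errors.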


Appendix A
Campus Core Bandwidth Utilization


Appendix B
Distribution Sites with Sustained Throughput Rates above Normal

Figure X. ResNet CSC Core Uplink

Figure X. UPSB DBYD Core Uplink

Figure X. Parking Garage, DBYD Core Uplink


Figure X. Parking Garage, CSC Core Uplink


Appendix C

Figure X. Total Bandwidth Utilization, spring 2013

Figure X. Total Bandwidth Utilization, fall 2012

Figure X. IPv4 Bandwidth Utilization, spring 2013


Figure X. IPv4 Bandwidth Utilization, fall 2012

Figure X. IPv6 Bandwidth Utilization, spring 2013

Figure X. IPv6 Bandwidth Utilization, fall 2012


Appendix D
Wireless Client Count by Authentication, spring 2013

