By Dinesh Malhotra
Introduction: Evo Controller 8200/Multi
• The Evo Controller 8200/Multi introduces a common building practice: the same cabinet, processing boards and switches are used for both the GSM BSC and the WCDMA RNC. It contains one subrack of a WRAN Evo Controller 8200/RNC and two subracks of a GSM Evo Controller 8200/BSC in one cabinet.
• From a radio and transport network point of view, the RNC and BSC functions are completely separated and treated as separate logical nodes. However, the GSM BSC and WCDMA RNC applications run on the same type of hardware (HW). Because the same type of HW is used throughout the cabinet, fewer spare parts are needed and floor space is saved. The Evo Controller 8200/Multi cabinet can also be expanded freely later, becoming either a pure RNC or a pure BSC cabinet.
• The common HW is as follows:
• Cabinet (BYB 501)
• EGEM2 subrack
• Evo Processor Board (EPB)
• Ethernet switches (SCXB and CMXB)
• Power and Fan Module (PFM)
• The EPB is the key enabler of the multi-controller concept. It can be moved from a BSC subrack to an RNC subrack, or the reverse, without needing to be reprogrammed in any way; simple plug-and-play is sufficient.
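To make the plug-and-play idea concrete, here is a minimal sketch (plain Python with invented names; not Ericsson software) in which moving an EPB between a BSC and an RNC subrack is nothing more than re-slotting the board object, with no reprogramming step:

    from dataclasses import dataclass, field

    @dataclass
    class EPB:
        serial: str                        # the board itself never changes

    @dataclass
    class Subrack:
        application: str                   # "BSC" or "RNC"
        boards: list = field(default_factory=list)

    def move_board(board: EPB, src: Subrack, dst: Subrack) -> None:
        """Re-slot a board; note the absence of any reprogramming step."""
        src.boards.remove(board)
        dst.boards.append(board)

    bsc = Subrack("BSC", [EPB("ROJ-0001")])
    rnc = Subrack("RNC")
    move_board(bsc.boards[0], bsc, rnc)
    print([b.serial for b in rnc.boards])  # ['ROJ-0001']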
Capacity and Benefits of the Evo Controller
• Evolved Controller: RNC or BSC
• Higher capacity (more signaling and more data traffic, approximately 5 times that of the RNC3820 R1)
• Low cost relative to performance
• All IP (Evo-ET for ATM to be introduced later)
• W11B is the software release that supports EVO
• RNC and BSC can be housed together in one cabinet
• HSPA Evolution with a 100 Mbps peak rate for downlink and up to 12 Mbps for uplink
• 2304 cells and 738 Iub links (16592 cells in theory)
• 155448 active users (in theory)
• EVO-BSC capacity of 4095 TRXs; the HW is ready to support up to 8000 TRXs
Evo Controller 8200/RNC Cabinet
RNC Modules
RNC modules divide the EvoC 8200/RNC into smaller resource units. An RNC module consists of a processing unit on a Module Controller (MC) and a number of associated devices within an Evo Processor Board (EPB). The number of RNC modules can differ between the Main Subrack (MS) and the Extension Subracks (ES), as the node configuration is customized.
Subrack Equipment
The MS and ES can each house up to 28 boards. The subracks can contain EPBs, System Control Switch Boards (SCXBs) and Common Main Switch Boards (CMXBs), as well as some Dummy Boards (DBs). The EGEM2 subrack has 28 slots, each 15 mm wide. Each slot has a duplicated 1 Gbps and 10/40 Gbps Ethernet connection. The total backplane switching capacity per subrack is 960 Gbps. Up to 3 EGEM2 subracks can be used, which gives a total of 84 slots for plug-in units. Each subrack is self-contained and includes temperature-controlled fans.
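The slot arithmetic above is easy to sanity-check; the short sketch below (plain Python, figures taken from this section) also subtracts the two SCXB and two CMXB positions per subrack that are described later, leaving the positions available for EPBs:

    SLOTS_PER_SUBRACK = 28        # EGEM2 subrack, 15 mm slots
    SWITCH_SLOTS = 4              # 2 x SCXB + 2 x CMXB per subrack
    MAX_SUBRACKS = 3              # one MS plus up to two ES

    print(SLOTS_PER_SUBRACK * MAX_SUBRACKS)                   # 84 plug-in slots
    print((SLOTS_PER_SUBRACK - SWITCH_SLOTS) * MAX_SUBRACKS)  # 72 non-switch positions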
Main Subrack Software Configuration
(Figure: Main Subrack software configuration, showing the EISL connections.)
EVO-C Hardware Building Blocks
(Figure: hardware building blocks - the GPB and SPB boards of the RNC 3820 are consolidated into the EPB.)
EPB1
– Generic processor board used for all RNC processing tasks. Each EPB1 board is equipped with two multi-core processors.
• The EPB combines the roles of the GPB, SPB and ET-IPG (which are used in the RNC 3820) on the same board.
– The four (‘c1’ and ‘c2’) EPB boards located in the Main Subrack are used only for central main processing tasks (both pairs 1+1 redundant).
– The rest of the EPB boards (up to 20 in the main subrack and up to 24 in each extension subrack) are used for the Blade (traffic processing) role.
EPB1 (cont.)
• The EPB1 board has 2 processors with 8 cores each, for a total of 16 cores.
• In the Evo Controller 8200, the roles on each core are pre-defined, making it easy to predict capacity expansions. The capacity scales with each EPB blade installed, as the sketch after this list illustrates. The pre-defined core configuration is:
• 3 cores for MC
• 8 cores for DC
• 1 core for CC
• 1 core for PDR
• 1 core for CPP programs
• 2 cores for IP termination
– Every EPB1 blade is configured with the MC, CC, DC and PDR roles all on the same board. Thus an individual call is handled within the same EPB board, reducing signaling between boards in the Evo Controller.
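A minimal scaling sketch (plain Python; the per-blade counts come from the list above, and the helper name is ours) that reproduces the maximum-configuration figures quoted in the hardware comparison at the end of this section (68 blades giving 204 MCs, 544 DC devices, 68 CC devices and 68 PDR devices):

    # Pre-defined roles of the 16 cores on each EPB1 blade (see list above).
    CORES_PER_BLADE = {
        "MC": 3,     # Module Controllers
        "DC": 8,     # DC devices
        "CC": 1,     # CC device
        "PDR": 1,    # PDR device
        "CPP": 1,    # CPP platform programs
        "IP": 2,     # IP termination
    }

    def device_totals(blades: int) -> dict:
        """Scale the fixed per-blade core layout to a whole node."""
        return {role: count * blades for role, count in CORES_PER_BLADE.items()}

    # Maximum configuration: 68 EPB blades (20 in the MS + 24 in each ES).
    print(device_totals(68))
    # {'MC': 204, 'DC': 544, 'CC': 68, 'PDR': 68, 'CPP': 68, 'IP': 136}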
RNC Software Deployment
• C1 (1+1)
– Equal to the 3820 C1 + 2 x SCTP Front End (FE)
– Load balanced between cores (except JVM and bare metal)
• C2 (1+1)
– 2 x SCTP Front End
– Central device handling & UE registration
– RFN server (moved from the TUB)
• “Blade” (boards 3 - 68), core layout per EPB1:

  Primary processor     Secondary processor
  BDH + CPP             PDR device + CPP
  IP (bare metal)       IP (bare metal)
  CC device             DC device
  RNC Module + RANAP    DC device
  RNC Module + RNSAP    DC device
  RNC Module + PCAP     DC device
  DC device             DC device
  DC device             DC device
SCXB3
• System Control Switch Board 3 (SCXB3) ROJ 208 395/1
– The SCXB carries node-internal control signaling, manages system clock distribution, and provides the connections between EGEM2 subracks for control traffic.
– There are two SCXBs in every subrack, belonging to two physically separated LANs.
• Each subrack contains a redundant SCXB pair, in slots 1 and 27.
• All device boards and switch boards in a subrack are connected to both SCXBs through a 1 Gbps backplane connection.
– The MS SCXBs are connected to the ES SCXBs by 10 Gbps front-panel Control Inter Switch Links (CISLs) - slot 1 to slot 1 and slot 27 to slot 27.
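The cabling rule is regular enough to generate; below is a small sketch (plain Python, invented names) that emits the CISL cable list for a node with one MS and two ES, one 10G link per control-LAN plane:

    SCXB_SLOTS = (1, 27)                       # redundant SCXB pair per subrack

    def cisl_links(extension_subracks):
        """MS-to-ES control links: slot 1 to slot 1, slot 27 to slot 27."""
        return [(("MS", slot), (es, slot))
                for es in extension_subracks
                for slot in SCXB_SLOTS]

    for link in cisl_links(["ES1", "ES2"]):
        print(link)
    # (('MS', 1), ('ES1', 1)), (('MS', 27), ('ES1', 27)), ...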
SCXB Cabling
(Figure: SCXB front-panel connections between the MS and the ES subracks; device boards connect to each SCXB over 1G backplane links.)
MS - Cabling recommendations:
A: to ES1 (A) (10G)
B: to ES2 (A) (10G)
C-G: Spare (10G)
H-I: Not used (1G)
J: 2048 kHz or 10 MHz sync ref
K: Not used
L: Management - RS232
M: DBG - ETH 1G
N: 2048 kHz or 10 MHz sync ref (QMA connector)
ES1 and ES2 - Cabling recommendations:
A: to MS (A & B) (10G)
B: Spare (10G)
C to H: Not used
CMXB3
• Common Main Switch Board 3 (CMXB3) ROJ 208 392/1
– Carries user-plane data and node-external traffic.
– Each subrack contains a redundant CMXB pair, in slots 2 and 28. All CMXB boards are interconnected in a double-star topology with Ethernet Subrack Links (ESLs) - slot 2 to slot 28, slot 2 to slot 2 and slot 28 to slot 28.
– The board has 24 backplane ports (10 Gbps) which reach all (non-switch) device boards within the same subrack.
– It also has 8 front-panel ports (1 or 10 Gbps) that can be used to connect to other CMXBs inside the node or as external ports for outgoing traffic. Ports A-D are HW-prepared for 40 Gbps.
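In the same spirit as the CISL sketch, here is a hedged sketch (plain Python, invented names) of the double-star ESL topology just described: a slot-2-to-slot-28 cross-link inside each subrack, plus star links from the MS CMXB pair to the matching slots in each ES:

    CMXB_SLOTS = (2, 28)

    def esl_links(subracks):
        """subracks[0] is the MS; the rest are extension subracks."""
        links = [((sr, 2), (sr, 28)) for sr in subracks]      # cross-links
        links += [(("MS", slot), (es, slot))                  # double star
                  for es in subracks[1:]
                  for slot in CMXB_SLOTS]
        return links

    for link in esl_links(["MS", "ES1", "ES2"]):
        print(link)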
CMXB Cabling
(Figure: CMXB front-panel connections between the MS, the ES subracks and the APPs; the APPs attach with 4 x 10G links.)
MS - Cabling recommendations:
A: to APP (Iub) (10G, HW prepared for 40G)
B: to ES1 (10G, HW prepared for 40G)
C: to ES2 (10G, HW prepared for 40G)
D: Cross-link in MS (10G, HW prepared for 40G)
E: to APP (IuPS/CS, Iur) (10G)
F: to APP (10G)
G: to APP (10G)
H: to APP (10G)
ES1 and ES2 - Cabling recommendations:
A: to MS (10G, HW prepared for 40G)
B: Cross-link in ES (10G, HW prepared for 40G)
D to H: Not used
EvoET
• The EvoET board is an optional board that handles ATM transport for Iub, Iur and Iu-CS in the RNC, and Packet Abis over TDM transport in the BSC.
• The EvoET logically terminates the node-external transmission interfaces. Different versions exist for different standards and transmission speeds, e.g. STM-1/VC4 and STM-1/VC12. The board uses a 1+1 redundancy principle.
• There are 8 ports per EvoET. The number of EvoETs required and the number of activated ports per board depend on the transport requirements.
• The EvoET uses the same board positions as the EPBs. The maximum node capacity is therefore reduced when ATM or TDM transport is required.
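As a hedged illustration of that dimensioning trade-off (8 ports per board and a 1+1 board redundancy assumption; the helper name is ours), the sketch below estimates how many board positions a given ATM/TDM port demand consumes, and therefore how many positions are no longer available for EPB blades:

    import math

    PORTS_PER_EVOET = 8

    def evoet_positions(active_ports):
        """Working boards for the port demand, doubled for 1+1 redundancy."""
        working = math.ceil(active_ports / PORTS_PER_EVOET)
        return 2 * working

    # e.g. 20 active STM-1 ports -> 3 working boards -> 6 board positions
    # taken from the pool otherwise usable for EPB blades.
    print(evoet_positions(20))   # 6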
Evo-C Provides 10G and 1G Ethernet to All Slot Positions
APP - White Fronts and New LEDs
– The Active Patch Panel (APP) is the main physical point of connection for transmission. There are two APP units placed at the bottom of the cabinet, below the Main Subrack (MS).
– The APP provides O&M connections for a service terminal, where one Ethernet connection and one RS232 connection can be used to connect to the Evo Controller.
APP – EXTERNAL IP INTERFACES
• The EVO external interfaces are very similar to those of the RNC3820 with regard to IP/cabling for OAM and bearers.
– The only differences relate to the slots used for the internal connection points in the subracks.
(Figure: APP external connections for IuCS, IuPS & Iur, Iub and OAM.)
1. One pair of OAM Ethernet connections
2. One pair of 1 or 10 GigE connections to carry Iub over IP traffic (User Plane and Control Plane)
3. One pair of 1 or 10 GigE connections to carry IuCS, IuPS and Iur over IP traffic (bearer and signaling)
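A minimal sketch (plain Python; structure and names are ours, not from Ericsson documentation) of the three redundant external pairs listed above, as they might appear in a cabling plan:

    # (interface, traffic carried, link speed options in Gbps; None = not stated)
    EXTERNAL_INTERFACES = [
        ("OAM",           "O&M",                                     None),
        ("Iub",           "Iub user plane and control plane",        (1, 10)),
        ("IuCS/IuPS/Iur", "IuCS, IuPS and Iur bearer and signaling", (1, 10)),
    ]

    for name, carries, speeds in EXTERNAL_INTERFACES:
        speed = "Ethernet" if speeds is None else " or ".join(f"{s} GigE" for s in speeds)
        print(f"{name}: one redundant pair, {speed} - {carries}")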
BYB 501 with BFD 538 and CAS/CAXB Cabinet Switch
SW Description - Overview
› W11B EVO (W11.2) software changes:
– The HW platform is completely new. Between the feature SW (which is the same as in W11B) and the HW there is a middleware layer (e.g. in device & resource handling and user-plane handling) that adapts the HW to the feature SW layer. That layer has been updated to adapt the SW to the blade concept and the IP infrastructure.
– Platform: CPP9 (vs. CPP8 for the 3820)
› Features:
– Same features as W11B
– Dynamic Iu/Iur Signaling - a new feature required for the EVOc
› Maintenance/Merging:
– W11.1 (W11B) & W11.2 (W11B EVO) merge in W12B
– Standard TR mapping process between releases
Hardware Comparison

                      RNC 3820                             EVO-C 8200
Subrack               High Capacity Subrack                EGEM2 subrack
Subrack switching     200 Gbps                             960 Gbps
Capacity              8 Gbps Iub throughput,               20 Gbps max. Iub throughput,
                      HW prepared for 20 Gbps              HW prepared for 50 Gbps
Capacity (2)          Max. 32 module MPs and 40 SPBs       In a max. configuration of 68 EPB
                                                           blades: 204 MCs, 68 PDR devices,
                                                           68 CC devices and 544 DC devices
Control plane,        One dual-core processor with         Three Module Controllers (MCs) on
Iub links             one RNC module on each core          the primary processor (PIU)
User plane; PDR,      Three dual-core processors,          Two DC devices and one CC device
CC, DC devices        with devices arranged into           on the primary processor (PIU);
                      four different SPB_TYPES             six DC devices and one PDR device
                                                           on the secondary processor
                                                           (PIU Device)