tolerance is provided by the two-layered ESWT approach, which provides two independent planes with no inter-connection.

Spanning tree

According to the switching matrix principle, no spanning tree protocol (IEEE 802.1d) needs to be provided in order to prevent looping packets in the network, because the staged ESWT topology contains no loops.
Addressing Capabilities
Within the ITCE domain (UNIX/LINUX) the IP protocol is the standard communication protocol on top of the MAC layer. The call engine domain still uses the legacy call engine message communication principles, while inter-communication between both domains uses the MiddleWare communication layer and its mechanisms. The Middleware layer uses UDP on top of IP/MAC as its communication transport mechanism. Application or service addressing is subject to Middleware utilization. The addressing capability makes it possible to address a CE and the final receiver application inside a CE (e.g. a state machine process of an FMM). For the call engine it is the msg_id; in the ESWT environment it is the service port id. In the Ethernet switch environment:
Physical CE addressing is based on the MAC address. Logical CE addressing is based on the IP address. On top, a port service address is used to select the appropriate receiver application. For the Dual Processor Board (CPCB) as well as for the Jaluna Virtualisation approach the addressing scheme has been extended. A PBA or blade may be composed of:

- one single processor only (CPCA)
- multiple processors; in Alcatel 5020 MGC dual-P is implemented on the CPCB board.
In the Alcatel 5020 MGC a maximum of 4 are foreseen. In principle several secondary LINUX instances may run in such an environment as well. The addressing scheme supports up to 16 VMs for the call engine domain as well as for the LINUX domain.

Computing MAC Addresses

The PCE addresses of ITCEs and call engine-CEs or call engine-VMs must be unique. For the cPCI hardware release 1 a dedicated rack clock distribution system had been invented, with a tree-like cabling structure and SFI signal injection with a dedicated numbering scheme, in order to identify a processor blade inside the system. The processor blade position was identified via the back panel. For the cTCA hardware (cTCA from CCPU) this SFI signal and the proprietary clock distribution cabling are no longer provided. The location reference is retrieved from a chassis id, which has an impact on the applied principles.

In order to guarantee a unique MAC address in the ESWT transmission media, each PBA (CE) must evaluate its own MAC address value according to a defined rule. In the cTCA environment, CEs or VMs of the ITCE domain and the call engine domain evaluate their MAC address from the programmable vendor Chassis Id (= chassis_id) stored in the EEPROM on the backpanel, plus the cPCI slot position_#, extended by the CPU_id information and the virtual machine identity. The chassis id itself is retrieved via IPMI from the FlexManager. The slot position is retrieved via the IPMI port interface reference. In addition, with the introduction of dual-Pentium boards, the CPU-Id is evaluated via the IPMI address.

Note: The VM-Id must now also be considered, as it is needed to distinguish between the various virtual machines running on the same processor.

Retrieval of this information is done in a boot-loader sequence; it yields the correct own location ID, which is then used to set up a configuration file name. Loading of this configuration file from the OAM-CE to the target CE is then requested.
The retrieved information is also used to compose a PCE_Id (pseudo NA - a 16-bit value compliant with the call engine Network Address range), which is call engine-compliant in the case of call engine-CEs (ITCE domain bit set in the case of the ITCE domain).
For call engine-CEs/VMs the call engine Network Address assigned at loading time must correspond to the data defined off-line. This input is used to compute a call engine-compliant PCE_Id, and from this value the final CE MAC_a address and the second MAC_b = MAC_a + 40h are assembled. The address composition principle was introduced in order not to jeopardise existing data tools or application implementations relying on traditional number ranges. The principle is described in TRS-ESWT Communication.

The PCE_Id is usually treated as a Network Address, although the value represents a port address of a switching media: a processor blade usually has one physical board reference but may have several port addresses connected to a switching network. With the Virtualisation approach, one port address of the board is used to enter the blade, while each VM running on the CPU of that blade has an individual PCE_Id. This PCE_Id may act as an individual virtual Ethernet switch address assigned to a (virtual) ESWT port.

The PCE_Id/NA forms the lower two bytes of the composed MAC address value, while the upper 3 bytes define the vendor id. The remaining byte is set = 2 for historical reasons; in principle it is random. In effect the MAC address is always unique in an Ethernet network. The domain distinction must be made within the lower two bytes of the MAC because call engine tools and online software (kernel) allow only a 16-bit address value in the range of an integer, where only these lower two bytes are used in routing data. Therefore bit 12 (not used in the real DSN environment) is set for ITCEs.
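The composition rule above can be sketched as follows. Only the structural rule comes from the text (3-byte vendor id, historical constant byte 2, 16-bit PCE_Id in the lower two bytes, bit 12 set for the ITCE domain, MAC_b = MAC_a + 40h); the concrete vendor OUI value used below is an illustrative assumption.

```python
# Sketch of the MAC address composition rule; the OUI value in the usage
# example is illustrative, not taken from the document.

ITCE_DOMAIN_BIT = 1 << 12          # bit 12 marks the ITCE domain in the PCE_Id

def compose_pce_id(network_address: int, is_itce: bool) -> int:
    """Build the 16-bit PCE_Id from a call engine Network Address."""
    pce_id = network_address & 0xFFFF
    if is_itce:
        pce_id |= ITCE_DOMAIN_BIT
    return pce_id

def compose_mac(vendor_oui: int, pce_id: int) -> bytes:
    """Vendor OUI (upper 3 bytes) + historical constant 0x02 + 2-byte PCE_Id."""
    return vendor_oui.to_bytes(3, "big") + bytes([0x02]) + pce_id.to_bytes(2, "big")

def second_mac(mac_a: bytes) -> bytes:
    """MAC_b = MAC_a + 0x40 (address for the second plane)."""
    return (int.from_bytes(mac_a, "big") + 0x40).to_bytes(6, "big")
```

For example, a Network Address of 0123h in the ITCE domain yields PCE_Id 1123h, and with an (assumed) OUI of 00809Fh the two MAC addresses 00:80:9F:02:11:23 and 00:80:9F:02:11:63.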
In Alcatel 5020 MGC this results in the following sequence of actions, listed below and visualized in the two figures that follow:
Figure 3. MAC Address composition for call engine-Domain / ITCE-Domain (except PLDA)
Physical CE Address and Physical Blade Reference

In the past, the physical call engine control element identity was also equivalent to the physical location where a call engine blade was connected to the switch. Since only Ethernet connectivity existed, this location reference could not be correlated to the (DSN) switching port address. The location reference is now retrieved via the IPMI / FlexManager interface (cTCA hardware). One physical blade will have several physical addresses, one per individual VM/call engine-CE. From a maintenance point of view, a correlation must therefore be done afterwards to identify a blade by a physical location reference in terms of a CCPU rack/chassis/blade location identity.

As described above, the VM identity is part of the MAC address computed for each individual VM running on a blade as a physical address. The blade Ethernet port does not need an individual physical address such as an individual MAC. It must be operated like an uplink port in a staged Ethernet switch topology, accepting packets for several individual MAC addresses learned to be reachable in the next switch stage behind it. The port therefore has to be operated in promiscuous mode.
Figure 5. MAC Address composition in the call engine-JALUNA context

Addressing aspects for the Dual Processor blade approach

For the CPCB blade (Dual Pentium) a 6-port Ethernet switch (ASIC) is mounted, while the up/downlink ports towards the board boundary are also operated in promiscuous mode. The onboard ESWT is configured in VLAN mode in order to compose a two-layered switch approach with no interconnections amongst the grouped ports.

LCE_Id and IP address composition

For hardware release 1 the IP addressing scheme was derived from the LCE-id, split into two ranges: a range up to 1024 was defined for the call engine domain, while the range from 1024 upwards was reserved for ITCEs. For Alcatel 5020 MGC a range for call engine-CEs and one for ITCEs shall be reserved. The LCE id is used to calculate the IP address.
Therefore a domain bit (call engine or ITCE) is used to distinguish two sets of IP addresses. In addition, some external servers shall be connected with IP addresses that do not conflict with each other; for this an extra indication bit is foreseen. This additionally allows a multicast mechanism for loading call engine-CEs/VMs without overloading LINUX-based ITCEs: LINUX-based Ethernet controller drivers are operated in interrupt-driven mode on receipt of an Ethernet frame, while call engine-OSN ALWAYS performs an active polling principle in order to keep control in any situation. LINUX CEs would be flooded, and unfortunately killed, by the number of interrupts generated at loading time. For the server domain and the hardware management domain some IP addresses need to be reserved as a sub-range and shall be retrieved via the DHCP service from OAMCE.

The IP address composed of the LCE_Id is defined as a private IP address in the range (172.16/17.x.x - according to the standard) and is therefore not routed to any public domain. If the external packet network is not a public domain but again a private customer network, the IP addresses used there must be outside the private IP address range and must not conflict with these private addresses and subnet address ranges (172.16/17/18.x.x). The lowest 12 bits (1 1/2 bytes) are equivalent to the relevant 12 bits of the LCE_Id. Note: the lowest 4 bits of the 16-bit LCE_Id cannot be used; they were used in the beginning of the call engine to identify child TCEs reachable via a DataLink.
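A minimal sketch of this derivation, for illustration only: the text specifies the ingredients (a private 172.16/17 range, a domain bit, and the relevant 12 bits of the LCE_Id with the lowest 4 bits unusable), but the exact bit placement and the mapping of the domain bit onto the second octet below are assumptions.

```python
# Illustrative LCE_Id -> private IP derivation; bit placement is assumed.

def lce_to_ip(lce_id: int, itce_domain: bool) -> str:
    bits12 = (lce_id >> 4) & 0x0FFF        # lowest 4 bits of LCE_Id are unusable
    second_octet = 17 if itce_domain else 16   # assumed: domain bit selects 172.16 vs 172.17
    return f"172.{second_octet}.{bits12 >> 8}.{bits12 & 0xFF}"
```

With these assumptions, the same LCE_Id yields two non-conflicting addresses depending on the domain, which is the point of the domain bit.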
Intra-call engine-Communication
Transport Protocol Connectivity
The Transport Protocol Connectivity enables two CEs to transfer data (secured or not secured). Both CEs talk the same Transport protocol, e.g. TCP or only IP.
The figure above shows a logical view of which PBA type is connected to which transmission media and which protocol stacks are applicable. All call engine-PBAs already have Ethernet connectivity and are connected to an Ethernet switch only (two planes). This is used to reach ITCEs as well as for fast communication between call engine-CEs.

Ethernet transmission flow

For call engine message communication via ESWT there is a handling that deviates from the rule defined in chapter Ethernet transmission flow: in this case the Virtual Path (VP) protocol is used, which provides flow control (handshake), overload control and windowing with re-sequencing. The current implementation supports/is configured for a window size of 1, therefore no re-sequencing is required at the receiver side. This allows both ports to be used randomly; the port is toggled for each packet, even for a distinct communication relationship between two CEs. The window size of 1 restricts the communication flow to a destination in terms of number of packets, but shall be evaluated in combination with the transmission capacity of the switch as well as the processing capacity of the CE. Effective windowing is important for high-performance processor boards on slow transmission networks, where packets need a long time for transmission.
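The window-size-1 behaviour described above amounts to stop-and-wait with per-packet plane toggling. The sketch below illustrates that behaviour only; class and method names are made up for illustration and do not reflect the real VP implementation.

```python
# Stop-and-wait sketch: window size 1 (each packet must be acknowledged
# before the next is sent) and alternation between the two Ethernet planes.

class VpSender:
    def __init__(self, planes):
        self.planes = planes       # two transmit callables, one per plane
        self.next_plane = 0
        self.log = []              # (sequence number, plane used)

    def send(self, packets):
        for seq, payload in enumerate(packets):
            plane = self.next_plane
            self.next_plane ^= 1                       # toggle plane per packet
            acked = self.planes[plane](seq, payload)   # blocks until ack (window = 1)
            if not acked:
                raise TimeoutError(f"no ack for seq {seq} on plane {plane}")
            self.log.append((seq, plane))
```

Because the window is 1, no re-sequencing logic is needed at the receiver, at the cost of throughput on long-latency paths, which matches the trade-off stated above.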
Inter-Domain Interfaces
Middleware (call engine CE ITCE)
A Middleware layer exists in call engine call control CEs and in ITCEs and is used for communication between the call engine domain and the ITCE domain. For any communication within the call engine domain the existing message communication mechanism is used. For intra-ITCE communication, messages are sent via IP without making use of any middleware functionality. Middleware provides location transparency and a routing function for message delivery. Communication between applications mapped on a call engine CE and applications mapped on an ITCE uses the UDP/IP protocol with a limited middleware layer on top. The middleware layer provides the following functions:
Dynamic Registration:
ITCEs become known to the call engine-CE middleware via pre-configured IP addresses according to ITCE-LCE-ids. A command handler is used to update this configuration data, i.e. to add or remove ITCEs. Each call engine-CE (middleware) uses this list of configured IP addresses to register itself at all of these known ITCEs. Within the registration the call engine-CE passes the information about its CE-type and the available (externally addressable) functions/services to the ITCEs. ITCEs confirm the registration from call engine-CEs with the information about their ITCE-type and the already registered, and thus addressable, functions/services. If a UNIX process is started on the ITCE side, it registers with its function/service identification at the middleware. The middleware is responsible for propagating this function/service identification to all known/registered call engine-CEs.
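The registration handshake described above can be sketched as follows. All class, method and field names are illustrative placeholders; the sketch only mirrors the exchange (CE passes its type and services, the ITCE confirms with its own type and the services already known to it).

```python
# Sketch of the dynamic registration handshake between call engine-CEs and
# ITCEs; names are illustrative, not the real middleware API.

class Itce:
    def __init__(self, itce_type):
        self.itce_type = itce_type
        self.services = {}                      # service id -> providing CE ids

    def register(self, ce_id, ce_type, ce_services):
        already_known = {s: list(p) for s, p in self.services.items()}
        for svc in ce_services:
            self.services.setdefault(svc, []).append(ce_id)
        return self.itce_type, already_known    # confirmation back to the CE


class CallEngineCe:
    def __init__(self, ce_id, ce_type, services):
        self.ce_id, self.ce_type, self.services = ce_id, ce_type, services
        self.known_itces = {}                   # ITCE address -> (type, known services)

    def register_at(self, configured_itces):
        """configured_itces: pre-configured IP address -> ITCE endpoint."""
        for addr, itce in configured_itces.items():
            itce_type, known = itce.register(self.ce_id, self.ce_type, self.services)
            self.known_itces[addr] = (itce_type, known)
```

A CE registering later thus also learns, via the confirmation, which services earlier CEs have already made addressable.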
Functional addressing

The sending middleware user shall be able to address the destination (i.e. the requested functionality/service) via a functional Service ID only. The sender shall neither need to know any individual service provider instance nor on which destination CE(s) such a service provider is located. The destination CE shall be selected by the source CE based on registration information received from other CEs. If multiple instances have registered to provide the same service, then functionally addressed messages shall be distributed among them (load sharing). Therefore there is no guarantee that subsequent functionally addressed messages are delivered to the same service provider instance. Note: This addressing principle (but not the semantics and syntax) corresponds to call engine Basic Msg. addressing. Functional addressing is expected to be used for simple one-shot request-response queries or to establish a communication relation (i.e. a session) with one (of several possible) service provider instances. Further communication within a session uses instance addressing.

Qualified functional addressing

Qualified functional addressing allows functional addressing of a service provider on a specific CE by providing the Service ID and an LCE (or PCE) ID as address information. Note: This addressing principle (but not the semantics and syntax) corresponds to call engine Basic Into or call engine Basic Onto addressing, depending on whether the given qualifier is an LCE-ID or a PCE-ID. In the call engine subsystem the LCE-IDs and PCE-IDs are provided by the OSN. In the Unix subsystem the LCE-IDs shall be configured as (middleware) configuration data. In order to guarantee system-wide unique LCE-IDs, a value range shall be used which is not used by the call engine. The Unix PCE-IDs shall be generated from the physical board location, using the same algorithm as the call engine.
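Functional addressing with load sharing can be sketched as a round-robin dispatch over the registered providers of a Service ID. Round-robin is only one possible distribution policy and is an assumption here; the text specifies distribution without naming a policy.

```python
# Sketch of functional addressing with load sharing: messages addressed by
# Service ID alone are distributed over the registered provider instances.
# Round-robin via itertools.cycle is an assumed policy for illustration.

import itertools

class FunctionalRouter:
    def __init__(self):
        self._registry = {}     # service id -> list of (ce_id, instance)
        self._providers = {}    # service id -> round-robin iterator

    def register(self, service_id, ce_id, instance):
        self._registry.setdefault(service_id, []).append((ce_id, instance))
        # restart the rotation whenever the provider set changes
        self._providers[service_id] = itertools.cycle(self._registry[service_id])

    def route(self, service_id):
        """Pick the next provider for a functionally addressed message."""
        return next(self._providers[service_id])
```

This also demonstrates the caveat stated above: two consecutive `route` calls for the same Service ID may return different provider instances.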
Instance addressing

Instance addressing provides the addressing of a specific service provider instance on a specific CE. Instance addressing shall support both a logical and a physical addressing option.
Logical instance addressing allows addressing a specific service provider instance on the active CE, the standby CE or both CEs of an active/standby CE pair identified by an LCE ID. Physical instance addressing shall allow addressing a specific service provider instance on a dedicated physical CE identified by a PCE_ID. Note: The addressing principle (but not the semantics and syntax) of physical instance addressing corresponds to call engine directed-to addressing. Logical instance addressing has no correspondence in the call engine. The aim of logical instance addressing is to make an active/standby switchover within an LCE (active/standby CE pair) transparent to the sender.
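The switchover transparency above can be sketched with a resolver that maps an LCE ID to the currently active PCE, so the sender's address never changes across a switchover. Names and the data layout are illustrative assumptions.

```python
# Sketch of logical instance addressing: the sender addresses an LCE ID and
# the map resolves it to whichever PCE of the pair is currently active.

class LogicalAddressMap:
    def __init__(self):
        self._pairs = {}   # lce_id -> {"active": pce_id, "standby": pce_id}

    def configure(self, lce_id, active_pce, standby_pce):
        self._pairs[lce_id] = {"active": active_pce, "standby": standby_pce}

    def switchover(self, lce_id):
        pair = self._pairs[lce_id]
        pair["active"], pair["standby"] = pair["standby"], pair["active"]

    def resolve(self, lce_id, target="active"):
        """target: 'active', 'standby' or 'both' (the three options above)."""
        pair = self._pairs[lce_id]
        return list(pair.values()) if target == "both" else pair[target]
```

A sender that always calls `resolve(lce_id)` keeps working unchanged before and after a switchover, which is exactly the stated aim.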
- in charge of CCPU and outside of ALCATEL's responsibility; translates a set of actions given from outside (e.g. via operator ITF) into a set of (standard) commands for hardware management according to PICMG 2.9 or 2.1 (IPMI or hot swap signals)
- an ALCATEL software product using the API provided by FlexManager, providing the "intelligence" of the hardware management system; in this module we make the decisions and define the actions to be taken
- the interface for all hardware-related actions / reporting, which will all go via the Chassis-Controller

The figures below will show...
System Maintenance
The ALCATEL 5020 MGC consists of several domains, which are all implemented on the same hardware-platform.
- the legacy call engine-domain
- the ITCE-domain
- the server-domain
- the hardware-management-domain
This is also reflected in System Maintenance. For the customer, some of the differences are combined and unified in the CMC view. This means that internally we have different system maintenance parts. For details see the chapters below:
call engine-domain
The principles of the system maintenance functionality of the call engine legacy domain are still valid, although there is no longer call engine legacy hardware; some hardware-related alarms will now come from the ITCE-domain via IPMI / FlexManager / CHACO. This is possible because these slots are visible to CHACO due to some hardware-CAE data provisioning. The call engine legacy domain is, besides its other functionality, still responsible for the master alarm panel function (there is no physical Master Alarm Panel any more - see chapter Master Alarm Panel indication for x-domain alarms below).

With Alcatel 5020 MGC we introduce a Virtualisation concept for call engine-CEs. For call engine Maintenance handling there is no difference between a virtual CE and a CE running exclusively on one CPU. To avoid too many individual alarm reports for virtualised CEs in case of failing hardware, a correlation is done per CPU. For rare cases where some virtualised CEs can no longer be handled/accessed by call engine Maintenance Commands, a Hard Reset command is provided. It triggers, via CHACO, a hard reset of the related CPU (see chapter hardware Reset API). The command consists of the following actions:
- disable all CEs of the CPU
- trigger the HW reset via CHACO
- wait for reply from OAMCE (completion code)
- initialise the CEs of the CPU (if successful, a reload of the Jaluna platform was performed)
ITCE-domain
Like the call engine domain, the ITCE domain provides supervision and alarming for its hardware and software. The related alarm reports are sent from the OAM towards the CMC. Alarming via the "master alarm panel" is done via the call engine domain.
Server-domain
For some specific events there is alarming via the "master alarm panel", which is done via the call engine domain. Additionally, the call engine domain passes these events to the CMC.
For the EAxSs except EABS these events are visible over the WBEM interface. For EABS the events are only visible locally on an X-Window layout. For EABS: the above-mentioned standard alarming chain can be switched (during installation) to direct alarming to the CMC via SNMP (except HW-related alarms, which come from the ITCE-domain via IPMI / FlexManager / CHACO - this is possible because these slots are visible to CHACO due to some hardware-CAE data provisioning). For other servers: in addition to the standard alarming chain, the complete Server Platform software is available in order to manage the hardware.
hardware-management-domain
The FlexManagers are supervised by the ITCE-domain. Single or double faults will be reported by CHACO (see chapter System Maintenance).
To replace the master alarm panel the following approach is now selected:
- Errors of the call engine domain are sent directly to the CMC as standard call engine alarms using the ROMA interface towards the CMC.
- Errors and alarms detected by the system management of the ITCE domain are reported to the CMC via SNMP events.
- Alarms of the call engine domain and the ITCE domain can be displayed via the Alarm view application. By accessing this application via GoGlobal from the OWP, the alarms and their severity can be viewed at the OWP.
For more info w.r.t. hardware-management see as well chapter System Maintenance.
- The OAMCEs are synchronized with external NTP servers in the customer IP network.
- All diskless ITCEs are synchronized with the OAMCEs, which act as relay servers.
- The PLDAs of the call engine part are synchronized with the OAMCEs using the Simple Network Time Protocol (SNTP) as described in RFC 2030, based on the standard NTP protocol.

The local time is calculated from the received UTC time inside the active PLDA and broadcast to all other call engine control elements via a proprietary protocol called "Local Day Time Protocol" (LDTP), periodically and on call engine control element request. As the call engine part is no longer the master of the time, no MMC commands are provided to display, modify or adjust the time and date. N.B. As the OAMCEs do not have an accurate time source either, they act as relay servers, which themselves are synchronized with highly accurate NTP servers in the customer network. Due to this, no GUI is provided on the OAMCE to set the date or time information.
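The local-time derivation performed inside the active PLDA can be sketched as a fixed offset applied to the UTC time received via SNTP. How the offset is actually configured is not specified in the text; the function below is an illustration only.

```python
# Sketch of the LDTP local day time derivation: received UTC plus a
# configured offset. The offset handling is an assumption for illustration.

from datetime import datetime, timedelta, timezone

def local_day_time(utc_now: datetime, offset_minutes: int) -> datetime:
    """Local time as broadcast by LDTP = received UTC + configured offset."""
    return utc_now.astimezone(timezone(timedelta(minutes=offset_minutes)))
```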
ITCE domain
For the ITCE domain only rudimentary mechanisms exist.

Complete BACKUP / RESTORE

The complete BACKUP / RESTORE was a 1-to-1 copy of the magnetic disc to tape in release 1. During this task the OAMCE must run in maintenance mode (e.g. ORACLE database inactive). For Alcatel 5020 MGC we use the disk of the collocated local OWP as backup medium, which replaces the tape from a functional point of view. A backup cannot be taken from a running OAM (OS/DB restrictions) but only from an OAM in maintenance mode. Therefore, to avoid maintenance mode as far as possible...
- A full Backup will be taken once after installation, before startup of the OAM.
- The Backup will be put on the OWP disc and then further to DVD if required; copying to CD/DVD at the local OWP requires SW to write more than one CD/DVD because of the size of the Backup.
- The Backup has to be taken from OAM A and OAM B.
- A Data-Backup has to be taken periodically via the partial backup method (no maintenance mode required).

Restore of original status after OAM crash...

- A USB stick is needed to run Linux.
- Copy the Backup from the OWP disc to OAM A resp. OAM B (restore from DVDs/CDs to the OWP disc is to be done locally at the OWP, if the Backup is not available on disc already).
- Put the actual FRB on top (FRBs are available on CD, loaded at the OWP, copied to OAM A and OAM B in Maintenance Mode (via SCRIPTs)).

RESTORE uses the ORACLE import function and copies configuration data onto the OAMCE's magnetic disc. A partial BACKUP is allowed during normal operation. A partial RESTORE is done via SCRIPTs to both OAMs in maintenance mode.
The figures below will show the storage media in Alcatel 5020 MGC and their assignment to CEs. In addition there is a description of the SCSI connectivity of the storage media:
Loading
Principles
In Alcatel 5020 MGC there are the following parts to be loaded:

- hardware-management domain - loading independent from the other domains
- ITCE-domain - loading independent from the other domains
- ... - loading independent from the other domains
For the call engine- and ITCE-domain there is now a common first loading step. In a second step the call engine part and the UNIX part are loaded independently from each other. Note that the diskless instances of the server-domain are treated like ITCEs from the platform point of view. Loading of server-domain instances that have their own disk is totally independent from the rest of the system and is considered server-domain loading. Note that the call engine part can only be loaded when the OAM agent is working. Loading of the HW-manager is totally independent from the rest of the system.

Out of scope for this release are:

- An automatic process for simultaneous SW replacement on all domains to introduce context-dependent packages (not supported). A manual procedure describes the sequence of tasks to be done for the update of single and multiple domains.
- Package replacement (not required). Package replacement will be required for follow-on projects based on this release.
two-step network-loading concept. The loading of the other domains is totally decoupled from the call engine- and ITCE-domain.
Figure 14. Disk-node loading and First step of ITCE/call engine network-loading
Figure 15. Second step of ITCE/call engine network-loading

The initial network-loading of the call engine- and ITCE-domain requires that the OAMCE is loaded and operational. The BIOS of the PLDA ensures that for these initial steps the behaviour is the same as for any other call engine-CE on cTCA. This means that the attempt to start from the colocated HD (which comes first in the boot sequence) will fail because the file contents on the PLDA disk are invalid. The BIOS will continue its boot sequence and will end at network boot. The first step of the network-boot sequence (which is common for all diskless nodes of the call engine domain, including the JALUNA type, and all diskless nodes of the ITCE-domain, including diskless server nodes) is the following:
- do some Board Diagnosis (hardware test, memory tests, debug interface, etc.)
- start Board initialization (here PXE)
CE Loader will
- execute a board firmware upgrade if required (in flash memory)
- perform the "geographical location identification", i.e. determine the physical location (shelf/slot) with IPMI commands
- build the MAC address according to the predefined assignment/algorithm (related to the position)
- set up a configuration file name according to the evaluated geographical location
- read the configuration file from the OAMCE
- if there is no configuration file for the request, it is implicitly assumed that the request comes from a call engine-CE for which no individual configuration files have been created per slot; therefore the default configuration file for call engine-CEs is selected in this case, which defines call engine bootstrapping; this will trigger
- if the request identifies a configuration file for a valid LINUX position, then

The second step of the network-boot sequence (which is different for the nodes of the call engine- and ITCE-domain) is the following:
- remains in memory to support loading, initialization and online software (call engine Kernel)
- initializes the hardware and software configuration (init GDT, LDT, IDT, TLB, etc.; execute fast test (CD40 pattern), init ROM Data area)
- initiates Bootstrap

Valid NAs for this loading procedure are non-PLDA NAs only; for them the adapted HdS performs the well-known call engine loading procedure, which covers:

- Send LoadBID to PCE 0xC and PCE 0xD (with the resp. MAC address)
- Receive load packets (= a loader agent)
- In case of a system start-up (STUP) or partial STUP, the STUP slave loader is loaded into memory; the loader slave executes (STUP)
- In case of a single CE loading (CE INIT), the CE INIT slave loader is loaded into memory; the loader slave executes (CE INIT)
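The configuration-file selection in the first boot step can be sketched as follows: the file name is derived from the evaluated geographical location, and if the OAMCE has no file for that location, the default call engine configuration (call engine bootstrapping) is assumed. The file-name pattern below is an invented placeholder, not the real naming scheme.

```python
# Sketch of the first-step configuration-file selection with fallback to the
# default call engine configuration; the naming pattern is an assumption.

DEFAULT_CONFIG = "config_callengine_default"

def select_config(available_files, shelf: int, slot: int) -> str:
    name = f"config_shelf{shelf:02d}_slot{slot:02d}"   # assumed naming pattern
    return name if name in available_files else DEFAULT_CONFIG
```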
The following three figures show the two loading packages that we have to consider for the JALUNA-based approach and a comparison to the non-JALUNA approach:
Figure 18. Call Engine-JALUNA based approach compared to the pure call engine approach

The following two figures show a scenario of the first step of the network boot and an overview of the possible branches in the two-step network-boot scenario.
Initial disk preparation for OAMCEs

For the initial A5020 MGC 12 installation, OAMCE disks are prepared at the ALCATEL site and delivered to the customer. Follow-on disk preparations (e.g. in case of disk damage) are done as described below:

- The OAMCE is booted from a USB stick (this stick contains a LINUX operating system and disk preparation scripts)
- The USB stick is then removed and the OAMCE boots up.
Disk preparation for PLDAs

If neither the system disk nor the backup disk is prepared, the PLDAs cannot be loaded. The procedure to build PLDA disks is the same for initial disk preparation and for follow-on software upgrades. A DVD is delivered to the customer; this DVD contains the call engine build. PLDA disk preparation consists of the following steps:

- Insert the call engine-DVD into the local OWP's drive
- Boot one (isolated) PLDA from a USB stick (this stick contains a LINUX operating system and disk preparation scripts)
- The USB stick is then removed and the PLDA will boot (according to the mechanisms defined in chapter Loading)
- When executing its loaded HdS, the PLDA will ask the operator for the load medium (which has to be the backup HD in this case!)
Error correction loading for the call engine-domain

One DVD contains the new call engine GLS/PLS files and a script. This DVD is inserted into the OWP's DVD drive. The script can be executed on the OAMCE after the device is mounted. The script copies all call engine files via FTP to partition 1 of both PLDAs' system disks. Afterwards, Error Correction loading is started on the PLDAs. A further step could be to transfer the call engine files to the OAMCE via FTP directly.
Server-Domain
EABS: For initial disk preparations and follow-on deliveries, DVDs are delivered to the customer. These DVDs contain the SW to be loaded. The procedure to be followed is similar to the OAMCE follow-on disk preparation activities:

- The DVD is mounted in the local OWP DVD drive
- The server is booted from a USB stick (this stick contains a LINUX operating system and disk preparation scripts)
hardware-management-Domain
The HW-management-domain (FlexManager) is loaded independently from the local Flash Disk. A procedure to upgrade the Flash Disk of the FlexManager is described inside the documentation of the chassis supplier.
- cold start-up
- warm start-up (pre-loading of a new package while the old package still provides call handling) in order to meet the maximum 3 minutes outage time requirement

No first-step loading is done in this task.
Package Replacement / Context Dependent Loading

A package replacement procedure is not supported within this release. The server-domain (e.g. EABS) must be able to handle an old and a new package in parallel.
OA&M
This chapter deals with the OA&M concept for the ALCATEL 5020 MGC. There will be other NEs besides this one in the NGN that have to be operated, but they are out of scope of the description below. This chapter deals with OA&M as provided for the multiple domains available inside the ALCATEL 5020 MGC. The ALCATEL 5020 MGC NE consists of four domains (from the OAM point of view - see figure below):

1. call engine domain
2. ITCE domain
3. Server domain (EABS)
4. hardware-management (ChassisManager)
Figure 21. ALCATEL 5020 MGC OAM concept

1. call engine domain: The CMC can access the call engine-domain via the OAMCEs only. LINUX kernel functionality provides the translation of the internal IP addresses known by the PLDAs to the external IP addresses known by the CMC.
- to be operated remotely from the CMC
- to be operated from the OWP as a remote CMC terminal
- the features are handled via TOFs and TPs
2. ITCE domain: The ITCE-domain is operated via the WBEM interface on the OAMCEs. ITCEs can only be addressed via the OAM agent (cTCA rack).
- for the operator's interface, GUIs have been developed
- the Web interface of the OAM agent can be addressed by the OWP (via the integrated browser) or by the CMC
3. Server domain: Access to the server-domain (EABS) is done directly via its RTM. EABS: Access to the EABS from the CMC is done directly via its RTM.
- to be operated remotely from the CMC
- to be operated locally from the OWP via EXCEED to the EABS
- the Ethernet ports at the EABS's own RTM are also used for handling the CDRs
- for the EABS an X-Window surface (EXCEED) is used

Others: Access to other servers is done, like to the MGC in general, via the OAM network.
- for the other servers the WBEM surface is used

4. hardware-management domain:
- to be operated remotely via a TELNET session (also from the OWP)
The local OWP is directly connected to the ALCATEL 5020 MGC via 2 Ethernet interfaces (one to each plane of the internal fabric switch, in different chassis).
Note: A third Ethernet interface is directly connected to the customer IP network to access applications on the CMC. This interface can also be used to access the OAMCEs via their RTM Ethernet interface. As COM3 of both OAMCEs is connected to the FlexConsole, software installation can be controlled without serial interfaces. The Ethernet interfaces directly connected to the ALCATEL 5020 MGC are used for:
- maintenance and installation activities on the ALCATEL 5020 MGC
- BACKUP server functionality on the local OWP for the call engine and MGC part
The figure below shows the connectivity of local OWP and CMC to ALCATEL 5020 MGC.
Figure 22. Access of CMC and OWP to ALCATEL 5020 MGC - cabling
OAM-features
Mixed mode TPs

There are some tasks which require OA&M activities on both the call engine- and the ITCE-domain. However, inside TPs the CMC cannot trigger GUIs in the ITCE domain.

Consistency of data provisioned via ORJs
The eDoc of the Alcatel 5020 MGC is an integrated part of the OAM agent. It
describes the ITCE part only.
OAM interfaces
The CMC will support ROMA over TCP/IP to communicate with Alcatel 5020 MGC (OAMCE).
TOF / TP package
The call engine part of the MGC is operated via TOFs and TPs remotely from the CMC only. The local OWP accesses the CMC application via "GoGlobal". The CMC is able to handle several call engine releases in parallel.
CMC functionality
The CMC inherits the functionality of the A1360 SMC. Several applications (e.g. Alarm view, network element) were enhanced to deal with NGN network elements. Software management and remote backup are closely related to a customer's network and are therefore not part of the generic Alcatel 5020 MGC.
Platform-aspects of N7-solution
The following figure shows the principal hardware configuration of the N7 solution in the Alcatel 5020 MGC. Note that up to 4 E1 links can be connected, on which in total up to 32 N7 signalling links can be handled according to actual requirements.
More details on the hardware items can be found in the chapter Hardware components used in the Flex21 Chassis.
Figure 23. Hardware configuration for the N7 solution in the Alcatel 5020 MGC

The grouping of functions related to hardware parts can be found in the figure below. Note that MTP1 and MTP2 come with the PMC from ADAX (firmware), while the MTP3 functionality is ported from the MCCM module (the call engine high-speed N7 hardware module running VxWorks) onto the LINUX-based SLN7S.
Figure 24. Functional grouping related to hardware components for the N7 solution

From the call engine software point of view this software is considered OBC software, which means that the SLN7S will have an application part for loading and data synchronisation. Initially the SLN7S uses the standard loading procedure for LINUX CEs as defined in Loading. Afterwards the SLN7S will be contacted by a call engine CE (which is logically the "TCE" of the legacy world, but now implemented simply as a function on any call engine processor) that will give it additional data (the other way round compared with legacy call engine OBC loading). The call engine CE will have a new data table that indicates the OBCs via their addresses. This table (which is independent of the call engine routing tables) translates the LCE-ID of the target "TCE" (whose function may be mapped onto any of the new call engine processor types) into the PCE-ID of the logically co-located OBC. The "TCE" will see from this table's contents which OBCs it has to care about (see the following figure for an abstraction):
Figure 25. TCE-OBC view - comparison of traditional legacy and new configuration

The new translation table is also used for the specific IPPoE communication between the MTP3 part on the SLN7S and the logical "TCE" part that represents the communication partner. For more details on IPPoE see the chapter IPPoE (call engine CE ITCE).
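The LCE-ID to PCE-ID translation table can be illustrated with a minimal shell sketch. All identifiers below are invented for illustration and are not taken from real call engine data tables.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the LCE-ID -> PCE-ID translation table:
# the "TCE" function looks up the PCE-ID of its logically
# co-located OBC. All identifiers are invented for illustration.
declare -A OBC_TABLE=(
  [lce_0041]="pce_0107"   # target "TCE" -> co-located OBC (SLN7S)
  [lce_0042]="pce_0108"
)

# Resolve an LCE-ID to the PCE-ID of the co-located OBC,
# or "unknown" if the table holds no entry.
lookup_obc() {
  local lce_id="$1"
  echo "${OBC_TABLE[$lce_id]:-unknown}"
}

lookup_obc lce_0041   # prints pce_0107
```

In this sketch the table is a flat associative array, mirroring the fact that the real table is independent of the call engine routing tables.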
Debugging
The call engine and ITCE domains do not interfere with each other; no common tool is foreseen. Access to the softswitch for ALCATEL staff is granted locally via the local OWP and remotely via MODEM lines (attached to the local OWP) or via WAN to the local OWP.
If two sessions in parallel are necessary, additional access is possible via a MODEM attached directly to the Flexmanager, or via WAN (OAM network) directly to the OAM or EABS. Caution: the MODEM at the Flexmanager works only on the active one. From the local OWP, OAM or EABS the chassis manager (Flexmanager) of the OAM chassis can be accessed via a Telnet session (see picture).
Figure 26. Remote debugging concept in Alcatel 5020 MGC

For the call engine domain the following tools are available in this project:
MPTMON
Note: A new variant of this manual will be created to reflect the commands that are no longer applicable (e.g. treatment of legacy HW)
Server Platform Trace Subsystem Tool: This tool enables tracing on RM, COCO and IPACC nodes. The output is written into log files on the OAM. Detailed tracing has to be enabled first by setting the trace level per node and process of interest. The executable resides on the OAM.
tcpdump: This tool allows the tracing of IP packets on Linux CEs (OAMCEs and diskless ITCEs).
Ethereal Tool: This tool allows the tracing of IP packets on the IP network and has to be installed on the local OWP or OAM.
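As an illustration, a capture on a Linux CE with tcpdump might be invoked as follows; the interface name, UDP port and output path are assumptions for this sketch, not values from the actual configuration. The capture file can afterwards be opened in Ethereal on the local OWP or OAM.

```shell
# Capture UDP traffic (e.g. Middleware messages) on a Linux CE and
# write it to a file for later inspection in Ethereal.
# Interface (eth0), port (5000) and path are illustrative assumptions.
tcpdump -n -i eth0 -w /tmp/ce_trace.pcap udp and port 5000
```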
The operating system (iptables) does the address translation from the external IP address of the OAMCE, port 102, to the internal RCDS IP address, port 102.
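This port-102 translation corresponds to a destination-NAT rule in iptables. A hedged sketch follows, with invented addresses; the real external OAMCE and internal RCDS addresses are site-specific configuration.

```shell
# Sketch of the DNAT rule implied above: traffic arriving at the
# external OAMCE address on TCP port 102 is redirected to the
# internal RCDS address, same port. Both addresses are invented
# for illustration.
iptables -t nat -A PREROUTING -p tcp -d 10.0.0.10 --dport 102 \
  -j DNAT --to-destination 192.168.1.20:102
```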