
Foxboro Evo™

Process Automation System

The Foxboro Evo System


Planning and Sizing


B0700AX

Rev S
March 12, 2018
Schneider Electric, Foxboro, Foxboro Evo, FoxCAE, FoxPanels, FoxView, and I/A Series are trademarks of
Schneider Electric SE, its subsidiaries, and affiliates.
All other brand names may be trademarks of their respective owners.

Copyright 2004-2018 Schneider Electric. All rights reserved.

SOFTWARE LICENSE AND COPYRIGHT INFORMATION


Before using the Schneider Electric Systems USA, Inc. supplied software supported by this documentation, you should read and understand the following information concerning copyrighted software.
1. The license provisions in the software license for your system govern your obligations and usage rights to the software described in this documentation. If any portion of those license provisions is violated, Schneider Electric Systems USA, Inc. will no longer provide you with support services and assumes no further responsibilities for your system or its operation.
2. All software issued by Schneider Electric Systems USA, Inc., and copies of the software that you are specifically permitted to make, are protected in accordance with Federal copyright laws. It is illegal to make copies of any software media provided to you by Schneider Electric Systems USA, Inc. for any purpose other than those purposes mentioned in the software license.
Contents
Figures................................................................................................................................... vii

Tables..................................................................................................................................... ix

Safety Information ................................................................................................................. xi

Preface................................................................................................................................. xiii
Purpose .................................................................................................................................. xiii
Who This Document Is For ................................................................................................... xiv
Be Aware Of This ................................................................................................................... xiv
Revision Information ............................................................................................................. xiv
Reference Documents ............................................................................................................ xiv
Glossary ................................................................................................................................. xvi

1. Overview ........................................................................................................................... 1
Introduction .............................................................................................................................. 1
Accessing Spreadsheets .............................................................................................................. 3
Accessing Spreadsheets from the Electronic Documentation Media ...................................... 3
Object Manager Multicast Optimization (OMMO) ................................................................. 4

2. System Planning................................................................................................................ 5
Workstations ............................................................................................................................. 5
Virus Scanning ..................................................................................................................... 6
Virus Scan Software on Windows Platforms .................................................................... 6
Virus Scan Software on Solaris Platforms ......................................................................... 6
SMONs ............................................................................................................................... 6
SMONs on I/A Series Software v8.4.x and Earlier ........................................................... 7
SMONs on I/A Series Software v8.5-v8.8 and Foxboro Evo
Control Core Services v9.0 or Later ................................................................................. 7
Limitations on Number of Switches Assigned to a Single System Monitor ...................... 7
OS Configurable Parameters ................................................................................................ 7
FoxView ............................................................................................................................. 11
Alarming ............................................................................................................................ 14
Historians ........................................................................................................................... 15
Printers ............................................................................................................................... 17
Application Object Services ................................................................................................ 17
Applications ....................................................................................................................... 17
Number of Self-Hosting or Auto-Checkpointing FDC280s, FCP280s, or CP270s Supported By a
Boot Host .......................................................................................................................... 18

Control ................................................................................................................................... 18
Alarming ............................................................................................................................ 19
Control Distribution .......................................................................................................... 20
Peer-to-Peer Relationships .................................................................................................. 20
OM Scan Load ................................................................................................................... 20
AIM*API Application Examples .................................................................................... 21
Peer-To-Peer Examples .................................................................................................. 22
FoxView Application Examples ..................................................................................... 23
Control Processor Load Analysis ........................................................................................ 24
Block Processing Cycle ....................................................................................................... 24
Running with 100 ms and 50 ms Block Processing Cycle (BPC) ................................... 24
Phasing .......................................................................................................................... 24
CNI ........................................................................................................................................ 25
CNI Data Throughput Limits and CPU Usage .................................................................. 25
Limitation When Working With Maximum Connections Through a CNI ........................ 26
Inter-CNI traffic (Bytes Per Second) .................................................................................. 26
Broadcast Considerations ................................................................................................... 27
Remote Compound List Configuration on the Remote CNI ......................................... 27
Retry Sink Connections ..................................................................................................... 28
Connection Time Considerations ....................................................................................... 29
Recovery After a Detected Failure of All Communication Between CNIs .......................... 30
Limitation of Sink Points in a Workstation to Help Ensure Reconnection Following CNI Reboot ... 31
I/O Points ............................................................................................................................... 32
Network .................................................................................................................................. 32
System Planning Summary Tables .......................................................................................... 34
Data Access to Control Core Services Objects ......................................................................... 36

3. System Sizing .................................................................................................................. 39


Workstations with Multiple CPU Cores Enabled .................................................................... 39
Workstation Summary Worksheet ..................................................................................... 39
Workstations with Single CPU Core Enabled ......................................................................... 42
Workstation Summary Worksheet ..................................................................................... 43
Control Processors .................................................................................................................. 48
Maximum Loading Table ................................................................................................... 49
Inter-Network Traffic ............................................................................................................. 52
Example – Gradual Migration ............................................................................................ 53
LI Traffic Rates ....................................................................................................................... 57

Appendix A. Upgrading SMONs on I/A Series Software v8.4.x and Earlier to v8.5-v8.8 or Control Core Services v9.0 or Later .......................................... 59
Overview ................................................................................................................................. 59
System Definition v2.9 ....................................................................................................... 59
Quick Fixes Needed ................................................................................................................ 59


Installation Sequence ............................................................................................................... 60


Known Detected Issues/Workarounds .................................................................................... 61

Appendix B. Site Planning Worksheets ............................................................................... 65

Index .................................................................................................................................... 69

Figures
2-1. CNI Data BiDirectional Throughput and CPU Usage ............................................... 25
2-2. Inter-CNI traffic (Bytes Per Second) ........................................................................... 26
2-3. Compound Broadcast Transmission Density .............................................................. 27
2-4. Compound Broadcast Density Plot for 10K Compounds ........................................... 28
2-5. Retry Sink Connections Operation Time .................................................................... 29
2-6. Number of Points vs. Time to Connect the First and Last Points ................................ 30
3-1. Original Nodebus Traffic Rates ................................................................................... 54
3-2. Adding an ATS in Extender Mode .............................................................................. 55
3-3. Migrate Nodebus 1 ..................................................................................................... 55
3-4. Migrate Nodebus 4 and Nodebus 5 ............................................................................ 56
3-5. Final Migration ........................................................................................................... 56
A-1. New Nodebus Test - System Monitors ........................................................................ 61
A-2. HOME Display .......................................................................................................... 62
A-3. “Greyed” SMON Selected ........................................................................................... 63
A-4. Selecting NETWORK Button .................................................................................... 64

Tables
2-1. OS Configurable Parameters ......................................................................................... 8
2-2. CPs vs. % CPU Load Per Alarms Per Second, For # of Alarm Destinations ................ 15
2-3. OM Scan Load: AIM*API Application Examples ........................................................ 21
2-4. OM Scan Load: Peer-To-Peer Examples ..................................................................... 22
2-5. OM Scan Load: FoxView Application Examples ......................................................... 23
2-6. Windows Multicore Workstation System Planning Summary ..................................... 34
2-7. Windows Workstation System Planning Summary ..................................................... 35
2-8. Solaris Workstation System Planning Summary .......................................................... 35
2-9. Data Access to CIO Objects on CP270 With OMMO Feature Disabled .................... 37
3-1. Windows Workstation with Multiple CPU Core Specification ................................... 39
3-2. Workstation with Multiple CPU Cores Summary Worksheet Example ...................... 40
3-3. Alarm Manager for Workstation with Multiple CPU Cores Worksheet Example ........ 40
3-4. FoxView Worksheet for Workstation with Multiple Cores Enabled Example ............. 41
3-5. AIM*Historian for Workstation with Multiple Cores Enabled Worksheet Example ... 41
3-6. Windows Workstation With Single CPU Core Specification ...................................... 42
3-7. Solaris Workstation Specification ................................................................................ 43
3-8. Workstation with Single CPU Core Summary Worksheet Example ............................ 44
3-9. Alarm Manager for Workstation with Single CPU Core Worksheet Example ............. 45
3-10. Default AST Table for Number of Alarm Managers on a Windows-Based Workstation
with Single CPU Core (Local and Remote) ................................................................. 46
3-11. Default AST Table for Number of Alarm Managers on a Solaris-Based Workstation
(Local and Remote) ..................................................................................................... 46
3-12. FoxView Worksheet for Workstation with Single Core Enabled Example ................... 47
3-13. AIM*Historian for Workstation with Single Core Enabled Worksheet Example ......... 47
3-14. Loading Summary ....................................................................................................... 49
3-15. Station Free Memory (Bytes) ...................................................................................... 50
3-16. Peer-to-Peer Data ........................................................................................................ 51
3-17. Resource Table ............................................................................................................ 51
A-1. Quick Fixes for Upgrading SMON from I/A Series Software v8.4 or Earlier ............... 60
B-1. Workstation Summary Worksheet .............................................................................. 65
B-2. Alarm Manager Worksheet ......................................................................................... 66
B-3. FoxView Worksheet .................................................................................................... 67
B-4. AIM*Historian Worksheet .......................................................................................... 68

Safety Information

Important Information
Read these instructions carefully and look at the equipment to become familiar with the device before trying to install, operate, service, or maintain it. The following special messages may appear throughout this manual or on the equipment to warn of potential hazards or to call attention to information that clarifies or simplifies a procedure.

The addition of either symbol to a "Danger" or "Warning" safety label indicates that an electrical hazard exists which will result in personal injury if the instructions are not followed.

This is the safety alert symbol. It is used to alert you to potential personal injury hazards. Obey all safety messages that follow this symbol to avoid possible injury or death.

DANGER
DANGER indicates a hazardous situation which, if not avoided, will
result in death or serious injury.

WARNING
WARNING indicates a hazardous situation which, if not avoided, could
result in death or serious injury.

CAUTION
CAUTION indicates a hazardous situation which, if not avoided, could
result in minor or moderate injury.

NOTICE
NOTICE is used to address practices not related to physical injury.
Please Note
Electrical equipment should be installed, operated, serviced, and maintained only by qualified personnel. No responsibility is assumed by Schneider Electric for any consequences arising out of the use of this material.

A qualified person is one who has skills and knowledge related to the construction, installation, and operation of electrical equipment and has received safety training to recognize and avoid the hazards involved.
Preface

Purpose
This document provides system planning and sizing guidelines for Foxboro Evo™ Control Core
Services (hereinafter referred to as the Control Core Services) systems with the Foxboro Evo
Control Network for the platforms residing on the control network:
♦ Windows® stations (with multiple CPU cores enabled) with Control Core Services
v9.1 or later
♦ Windows® stations (running single CPU core) with I/A Series software v8.4.1-v8.8
or Control Core Services v9.0 or later
♦ Solaris® stations with I/A Series software v8.3
Additional performance and sizing guidelines for the Windows Server 2016 platform can be found in these documents:
♦ The Hardware and Software Specific Instructions document is included with your Windows Server 2016 hardware. For information on the current list of these documents for supported servers, refer to the latest Control Core Services installation guide.
For a Windows Server 2016 platform running on a virtual machine, refer to Virtualization User's Guide for Windows Server 2016 (B0700HD).
♦ Model H90 Workstation Server for Windows Server 2016 Operating System
(PSS 31H-4H90-16)
Additional performance and sizing guidelines for the Windows Server 2008 R2 Standard platform can be found in these documents:
♦ The Hardware and Software Specific Instructions document is included with your Windows Server 2008 R2 hardware. For information on the current list of these documents for supported servers, refer to the latest Control Core Services installation guide.
For a Windows Server 2008 R2 Standard platform running on a virtual machine, refer
to Virtualization User’s Guide (B0700VM).
♦ Model H90 Workstation Servers for the Windows Server® 2008 R2 Operating System
(PSS 31H-4H90)
♦ Model H91 Workstation Servers for the Windows Server® 2008 R2 Operating System
(PSS 31H-4H91)
♦ Workstation Server for Windows 2008 Software Overview Microsoft Windows Server
2008 Operating System (PSS 21S-1B11 B3).
Additional performance and sizing guidelines for the Windows Server 2003 platform can be found in these documents:
♦ Model H90, H91, P90, and P91 System Administration Guide (Windows Server 2003,
R2 with Service Pack 2) (B0700BX)
♦ Workstation Server for Windows 2003 Software Overview Microsoft Windows Server
2003 Operating System (PSS 21S-1B10 B3).


Who This Document Is For


This document is for engineers, application managers, plant managers, and other qualified and
authorized personnel involved in the planning and sizing of the implementation of the Foxboro
Evo Control Network in the Foxboro Evo system. It is also suitable for Foxboro personnel.

Be Aware Of This
Prior to using this document, you need to be familiar with Control Core Services process control
principles and strategies. Detailed information about the various Control Core Services software
and hardware elements is available in the “Reference Documents” on page xiv.
You need to also be familiar with Microsoft® Excel™ operating principles and procedures prior
to using spreadsheets.

Revision Information
For this revision of this document (B0700AX, Rev. S), these changes were made:
Global
♦ Updated the document to implement new corporate and product branding.
♦ Rewrote all safety messages.
♦ Updated terminology to meet safety standards.
Chapter 2 “System Planning”
♦ In “Virus Scan Software on Windows Platforms” on page 6, added first paragraph
regarding virus protection software on platforms with Windows Server 2016.
♦ Added note after Table 2-1 “OS Configurable Parameters” on page 8.
♦ Updated “Alarming” on page 14.
Chapter 3 “System Sizing”
♦ Added last two notes to “Maximum Loading Table” on page 49.

Reference Documents
♦ Address Translation Station User’s Guide (B0700BP)
♦ AIM*AT Suite AIM*API User's Guide (B0193YN)
♦ Alarm and Display Manager Configurator (ADMC) (B0700AM)
♦ Application Object Services User’s Guide (B0400BZ)
♦ Control Processor 270 (CP270) and Field Control Processor 280 (FCP280) Integrated
Control Software Concepts (B0700AG)
♦ Control Processor 270 (CP270) On-Line Image Update (B0700BY)
♦ Standard and Compact 200 Series Subsystem User's Guide (B0400FA)
♦ Enclosures and Mounting Structures Site Planning and Installation User's Guide
(B0700AS)
♦ K-Series Enclosures Overview - Site Planning and Installation User's Guide (B0700GN)


♦ Field Device Controller 280 (FDC280) User's Guide (B0700GQ)


♦ Field Control Processor 280 (FCP280) Sizing Guidelines and Excel® Workbook
(B0700FY)
♦ Field Control Processor 280 (FCP280) User’s Guide (B0700FW)
♦ Field Control Processor 270 (FCP270) Sizing Guidelines and Excel Workbook
(B0700AV)
♦ Field Control Processor 270 (FCP270) User’s Guide (B0700AR)
♦ Control Network Interface (CNI) User's Guide (B0700GE)
♦ Field Device System Integrator (FBM230/231/232/233) User’s Guide (B0700AH)
♦ FOUNDATION™ fieldbus User’s Guide for the Redundant FBM228 Interface (B0700BA)
♦ FoxPanels Configurator (B0700BB)
♦ FoxView Software V10.4 (B0700FC)
♦ I/A Series Information Suite AIM*Historian User’s Guide (B0193YL)
♦ Implementing FOUNDATION Fieldbus on the I/A Series System (B0700BA)
♦ Integrated Control Block Descriptions (B0193AX)
♦ Integrated Control Configurator (B0193AV)
♦ Model H90, H91, P90, and P91 System Administration Guide (Windows Server 2003,
R2, with Service Pack 2) (B0700BX)
♦ Model H90 Workstation Server for Windows Server(R) 2016 Operating System
(PSS 31H-4H90-16)
♦ Model H90 Workstation Servers for the Windows Server® 2008 R2 Operating System
(PSS 31H-4H90)
♦ Model H91 Workstation Servers for the Windows Server® 2008 R2 Operating System
(PSS 31H-4H91)
♦ Object Manager Calls (B0193BC)
♦ Power, Earthing (Grounding), EMC and CE Compliance (B0700AU)
♦ Process Operations and Displays (B0700BN)
♦ Software Utilities (B0193JB)
♦ System Definition: A Step-By-Step Procedure (B0193WQ)
♦ System Administration Guide (Solaris 10 Operating System) (B0700CT)
♦ System Definition Release Notes for Windows 7 and Windows Server 2008 (B0700SH)
♦ System Management Displays (B0193JC)
♦ Transient Data Recorder and Analyzer User’s Guide (B0700AL)
♦ The Foxboro Evo Control Network Architecture Guide (B0700AZ)
♦ Control Core Services V9.x System Error Messages (B0700AF)
♦ Virtualization User’s Guide (B0700VM)
♦ Workstation Alarm Management (B0700AT)
♦ Workstation Server for Windows 2008 Software Overview Microsoft Windows Server
2008 Operating System (PSS 21S-1B11 B3)


♦ Workstation Server for Windows 2003 Software Overview Microsoft Windows Server
2003 Operating System (PSS 21S-1B10 B3)
♦ Z-Module Control Processor 270 (ZCP270) User’s Guide (B0700AN)
♦ Z-Module Control Processor 270 (ZCP270) Sizing Guidelines and Excel Workbook
(B0700AW).
Most of these documents are available on the Foxboro Evo Electronic Documentation media (K0174MA). The latest revisions of these documents are also available through our Global Customer Support at https://pasupport.schneider-electric.com.

Glossary

AIM*API AIM* Product Line Application Programming Interface


AIM*Historian AIM*API application that collects Control Core Services Objects.
AMS Alarm Management System
AO API The Application Object API that is part of the OM and is used by AOS to
access (for example, create, locate) AO objects.
AO Objects The hierarchically named objects created and managed by either the Application Object Services (AOS) or by the AO API on an OM station. They are similar to CIO compounds, blocks, and parameters but provide increased flexibility. You define the data parameters, attributes, and so forth. AO objects take the form application:object.attribute.
API Application Programming Interface
AST Alarm Server Task
ATS Address Translation Station
CBL Carrierband LAN
CIO Control & Input/Output
CIO Objects The hierarchically named process control and input/output objects created
through the control configurator and managed by the control software. CIO
objects take the form compound:block.parm or compound.parm.
Control Core Services See “Foxboro Evo Control Core Services” on page xvii.
Control Core Services Objects The set of AO, CIO, and OM objects on Foxboro Evo systems for which the OM provides applications with OM Services.
CP Control Processor. The control processor performs any mix of integrated first-level automation functions such as continuous, sequential, or discrete logic functions.
FCP270 Field Control Processor 270
FCP280 Field Control Processor 280
FDC280 Field Device Controller 280
FDSI Field Device System Integrator


Foxboro Evo Control Core Services Core software environment, formerly known as “I/A Series (Intelligent Automation Series) software”. A workstation which runs this software is known as a “Foxboro Evo Control Core Services workstation”.
Foxboro Evo Control Editors Formerly known as “FCS Configuration Tools”, “InFusion Engineering Environment”, or “IEE”, these are the Control software engineering and configuration tools built on the ArchestrA Integrated Development Environment (IDE).
Foxboro Evo Control Network Formerly known as “The Mesh control network”, a network of Foxboro-qualified switches which enable communications between workstations, control processors, and other similar stations. Subsequent references to this network use “the control network”.
Foxboro Evo Control Software Formerly known as “Foxboro Control Software (FCS)” and “InFusion”, a suite of software built on the ArchestrA Integrated Development Environment (IDE) to operate with the Foxboro Evo Control Core Services.
Foxboro Evo Process Automation System An overall term used to refer to a system which may include either, or both, Foxboro Evo Control Software and Foxboro Evo Control Core Services.
I/O Input/Output
IPC Inter-Process Communications: a proprietary, Foxboro communications
layer for applications.
IPC Connection When two applications in different stations need a permanent connection between them, an IPC connection is formed. The number of IPC connections is fixed based on station type except on workstations, where it is an OS configurable parameter. For change-driven data access through OM open lists, the OM uses one IPC connection on each station (sink and source) regardless of how many applications open lists on the sink station.
LI LAN Interface
Multicast An IPC communication mechanism that sends messages to a group of destinations. Since OM resides in every Foxboro station, multicast sends messages to every station. Therefore the terms broadcast and multicast are used interchangeably in this document.
Multicore The multiple CPU core feature, allowing current-gen workstations and servers to run Foxboro software with multiple CPU cores.
Nucleus Plus An embedded real-time operating system that is used on the FDC280, FCP280, FCP270, ZCP270, and Address Translation Station (ATS) control stations.
OM Object Manager: a proprietary, Foxboro OS extension that supports data
access to Foxboro Evo objects.
OM API The Object Manager API that provides OM Services.

xvii
B0700AX – Rev S

OM List An OM list is a set of points for which an application wants to receive change-driven data access. These data points can consist of CIO objects, AO objects, and OM objects that can reside locally or in remote stations. OM lists can be opened by user applications using AIM*API or by Foxboro applications using OM API. When an operator on a workstation brings up a new display, the connected data points on this display are requested from the station containing these points through an OM list. When the AIM*Historian asks for data collection points, it also uses an OM list. When a CP block has peer-to-peer block connections, it uses an OM list. While an OM list is open, it exists in the station that has requested the data (sink side) and a subset of the list exists in the station that contains remote data (source side).
OMMO Object Manager Multicast Optimization - A feature which reduces the network processing overhead on the Foxboro Evo Control Network and I/A Series Nodebus, as well as the OM processing overhead on the Foxboro stations (Application Workstations, Control Processors, etc.). When enabled, OMMO reduces the number of multicast network communications initiated by the Object Manager.
OM Objects The flat named shared objects created and managed by OM Services, including shared objects of the types: variable, alias, process, device, letterbug, and socket.
OM Scanner An OM process that monitors the database of a station and sends data on an exception basis (change-driven basis) to other stations that have requested the data. Examples of stations that request change-driven data are CPs (for peer-to-peer connections) and workstations (for displays, historians, and other applications). The OM scanner task always sends the change-driven data to the OM server task of the station that requested the data through an OM list.
OM Server An OM process that services all OM message requests. This includes change-driven data updates, get/set requests, object location requests, etc.
OM Services The object manager services used by applications for creating and deleting
the OM objects and locating and accessing the OM objects, the AO objects,
and the CIO objects.
Peer-to-Peer Connection The control block mechanism that uses OM lists to refresh its block inputs with data from a remote station. That data can be from CIO, OM, or AO objects. For many control strategies, peer-to-peer connections are between CIO objects. The block that requests data is referred to as the sink of the block connection, and the block that has the requested remote data is referred to as the source of the block connection. A block connection is normally local to another block that exists in the same CP. However, the full path name defined for a block parameter may be to a CIO object that is in another CP. This remote type of connection is referred to as a peer-to-peer block connection.
SMDH System Management Display Handler (Graphical User Interface for Systems
Management)
The control network See “Foxboro Evo Control Network” on page xvii.

xviii
B0700AX – Rev S

The Control Software See “Foxboro Evo Control Software” on page xvii.
Unicast An IPC communication mechanism that sends messages to a single destina-
tion.
Workstations Stations that connect to bulk storage devices and optionally to information
networks to allow bi-directional information flow. These processors perform
computation intensive functions as well as process file requests from tasks
within themselves or from other stations. They also interface to a CRT and
the input devices associated with it. These may be alphanumeric keyboards,
mice, trackballs, touchscreens, or up to two modular keyboards. Each proces-
sor manages the information on its CRT and exchanges data with other pro-
cessor modules.
ZCP270 Z-Format Control Processor 270
1. Overview
This chapter explains the subject of sizing and the sizing spreadsheets and worksheets.

Introduction
This document is the top level user’s guide for planning and sizing the Foxboro Evo Control Core
Services (referred to as the Control Core Services) elements of the Foxboro Evo Control Network
for:
♦ (Workstations with multiple CPU cores enabled) Control Core Services v9.1 or later
for the Windows operating system.
♦ (Workstations running a single CPU core) I/A Series software v8.4.1-v8.8 or Control
Core Services v9.0 or later for the Windows operating system.
♦ I/A Series software v8.3 for the Solaris operating system.
Lower level documents are referenced to provide detailed specific descriptions, suggestions, and
procedures for major areas such as Control, the control network, and I/O communications. Sys-
tem planning is described with respect to the overall performance and sizing of your Foxboro Evo
system, and does not take into consideration factors such as cost, environment, installation, and
configuration. These factors are described in sales guidelines, sales tools, and other user docu-
ments.
Spreadsheets and worksheets are provided to make a variety of sizing calculations for Control
Core Services workstations and control stations. Control Core Services sizing spreadsheets are
Microsoft Excel® application software packages that execute on a Windows PC and provide auto-
mated calculations based on user input. Worksheets are provided for manual calculations if
spreadsheets are not available.
Use the spreadsheets and worksheets before you configure the final system to expedite the config-
uration process and avoid reconfiguring. They can also be used to assist in developing a process
control strategy that allows for optimum usage of all stations while providing for expedient
throughput for process control blocks.
Determine the distribution of equipment and software to plan and size the control network for
performance. This enables the system to respond well to user actions, control the process in real
time, and meet published performance and sizing specifications for control, alarming, AIM*His-
torian data collection, and so forth.
Additional planning and sizing is needed if the control network is connected through an Address
Translation Station (ATS) to a Nodebus network. Chapter 3 “System Sizing” describes the sizing
calculations for inter-network traffic between the control network and the Nodebus network.
Refer to the “Standard I/A Series Migration Strategies” section in V8.3 Software for the Solaris
Operating System Release Notes and Installation Procedures (B0700RR) for planning recommenda-
tions regarding the ATS usage.
This document, along with the lower level reference documents, sizing spreadsheets, and
worksheets, will help you plan and size your system. They provide information and data calculations
about control station loading, workstation loading, and network traffic. Here are some fre-
quently asked questions.
Control Stations:
♦ How many control stations do I need to support the number and type of I/O points
in my system?
♦ How do I distribute my control process load between control stations?
♦ How many peer-to-peer connections can my system support?
♦ What is the estimated Field Bus Scan Load percentage for each control station?
♦ What is the estimated Control Block Load percentage for each control station?
♦ What is the estimated Sequence Block Load percentage for each control station?
♦ What is the estimated Total Control Cycle Load percentage for each control station?
♦ What is the estimated OM Scan Load percentage for each control station?
♦ What is the estimated CPU Load percentage for each control station to support my
AIM*Historian application?
♦ What is the estimated CPU Load percentage for each control station to support my
FoxView displays?
♦ What is the estimated CPU Load percentage for each control station to support my
workstation applications?
♦ What is the estimated CPU Load percentage for each control station if I choose to use
the default Aprint services for alarm notification?
♦ What is the estimated Idle Time percentage for each control station to support sus-
tained alarm rates, alarm bursts, and alarm destinations?
♦ What is the estimated memory consumption for each control station?
♦ Do the sizing estimates for any control station exceed the recommended control sta-
tion CPU loading guidelines?
Workstations:
♦ Do the default OS configurable parameter settings for each workstation satisfy the
number of connections I need between the workstation and control stations?
♦ What is the estimated CPU Load percentage for each workstation to support my
AIM*Historian application?
♦ What is the estimated CPU Load percentage for each workstation to support my Fox-
View displays?
♦ What is the estimated CPU Load percentage for each workstation if I choose to use
the default Aprint services for alarm notification?
♦ Do the sizing estimates for any workstation exceed the recommended workstation
CPU Load percentage loading guidelines?
♦ Does the workstation support the multiple CPU core feature (refer to the Hardware
and Software Specific Instructions document shipped with the workstation), and if so,
should I implement it?
♦ How many OM objects have to be created for the unique objects that need to be
monitored by this workstation?
Network Traffic:
♦ What is my estimated control network traffic flow and can my network configuration
handle the estimated sustained and peak traffic rates?
♦ If connecting to a Nodebus system using ATSs, do I need to do a total replacement of
LAN Interfaces (LIs) or can I do a gradual migration using an ATS running in
Extender mode?

NOTE
All references to workstations apply to both Windows and Solaris workstations,
unless explicitly referred to as either a Windows workstation or a Solaris
workstation.

Accessing Spreadsheets
Spreadsheets can be accessed on the Foxboro Evo Electronic Documentation media (K0174MA)
or from the Global Customer Support web site (https://pasupport.schneider-electric.com). These
spreadsheets can be run on any personal computer that has Microsoft Excel software. You need to
use Microsoft Office 97 or a later version of MS-Excel.
For hardware and software requirements for your workstation, refer to documentation for the
Excel spreadsheet. Also, refer to the MS-Excel help for general principles of operation.

Accessing Spreadsheets from the Electronic Documentation Media
To access the spreadsheets on the Foxboro Evo Electronic Documentation media (K0174MA):
1. Insert the CD-ROM disk or DVD-ROM disk in the CD drive.
2. Install the software.
3. Access the file for the spreadsheet desired.
Object Manager Multicast Optimization (OMMO)
For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core Services
v9.0 or later, a new option called the Object Manager Multicast Optimization (OMMO) feature
is available to reduce the network processing overhead on the control network and I/A Series
Nodebus, as well as to reduce the OM processing overhead on the Foxboro stations (Application
Workstations, Control Processors, etc.). When enabled, OMMO reduces the number of multicast
network communications initiated by the Object Manager.
OMMO is disabled by default. For technical details on enabling and configuring this feature,
refer to “Object Manager Multicast Optimization (OMMO)” in Object Manager Calls
(B0193BC).
2. System Planning
This chapter describes system recommendations and guidelines that you need to follow to help
ensure your Foxboro Evo system does not exceed published Control Core Services performance
and sizing specifications.
The system planning phase results in a Foxboro Evo system that:
♦ Provides fast response to user actions.
♦ Provides real-time control with no overruns.
♦ Handles sustained alarm rates and alarm bursts.
♦ Supports customer applications and data access.
You need to be familiar with the various sizing guidelines related to the configuration of a system
prior to system definition/configuration. For planning control network traffic rates in the Fox-
boro Evo Control Network, refer to The Foxboro Evo Control Network Architecture Guide
(B0700AZ). If you are connecting the control network to a Nodebus network using an ATS in
Extender mode, you need to size traffic rates for the LI associated with the ATS in Extender
mode. Refer to the “Standard I/A Series Migration Strategies” section in V8.3 Software for the
Solaris Operating System Release Notes and Installation Procedures (B0700RR) for planning inter-
network communications between the control network and Nodebus network. System planning
also requires that you determine:
♦ Workstation Loading
♦ Control Station Loading
♦ Distribution of I/O
♦ OS Configurable Parameters

Workstations
According to the general workstation CPU loading guideline, you need to keep the sustained
workstation idle time at or above the recommended value of Reserved CPU Load:
♦ Windows (single CPU core)=40%
♦ Windows (multiple CPU core)=25%
♦ Solaris=40%
Reserved CPU Load percentage helps ensure that the workstation has a reserve performance
capacity to support temporary peak loads for virus scanning, alarm bursts, alarm recovery, histo-
rian data reduction, historian archiving, large application startups, end of shift reports, file print-
ing, file copies, network backup/restore, and so forth.
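The headroom guideline above can be sketched as a simple check. This is a minimal sketch, not part of the product software; the function and platform key names are illustrative assumptions:

```python
# Reserved CPU Load guidelines from the text; names here are illustrative.
RESERVED_CPU_LOAD_PCT = {
    "windows_single_core": 40.0,
    "windows_multi_core": 25.0,
    "solaris": 40.0,
}

def has_required_headroom(platform: str, sustained_cpu_load_pct: float) -> bool:
    """True if the sustained idle time meets or exceeds the Reserved CPU Load."""
    idle_pct = 100.0 - sustained_cpu_load_pct
    return idle_pct >= RESERVED_CPU_LOAD_PCT[platform]

# A single-core Windows workstation at 55% sustained load keeps 45% idle: OK.
print(has_required_headroom("windows_single_core", 55.0))  # True
# At 65% sustained load, only 35% idle remains, below the 40% guideline.
print(has_required_headroom("windows_single_core", 65.0))  # False
```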
Workstation planning requires that you consider:
♦ Virus Scanning
♦ System Monitor configuration
♦ OS Configurable Parameters
♦ FoxView displays
♦ Alarming
♦ AIM*Historian
♦ Application Object Services
♦ Customer applications.
The CPU Load percentage varies significantly depending on platform type. For platforms with
the multicore feature enabled, McAfee scan times are reduced by 40% and the CPU utilization is
reduced by 50%, resulting in improved responsiveness during these scans. Platforms with the
multicore feature enabled have their CPU utilization reduced by 25% during BESR backups,
resulting in improved responsiveness when creating backups.

Virus Scanning
Virus scan protection is needed even if there are no external network connections, because it helps
to protect against files transferred from local devices.

Virus Scan Software on Windows Platforms
For stations with the Windows Server 2016 operating system, Schneider Electric recommends
that you install virus protection software on your server. You are responsible for determining
which virus protection software fits your system best, and acquiring, installing, and maintaining
this software.
For stations with Windows Server 2008 or earlier operating systems, McAfee® Virus Protection
Software V8.5i, 8.7i, 8.8i or later has been qualified and is recommended for use on V8.2 or later
Windows workstations on the control network. These versions are not supported on Windows
workstations running previous versions of I/A Series software. Additionally, V8.2 and later Win-
dows workstations do not support versions of McAfee Virus Scan software prior to V8.5i.
McAfee virus protection software contains a file exclusion protection option that allows you to
exclude virus checking for selected files. Enabling the file exclusion protection feature can consid-
erably reduce peak load usage and is recommended for all the workstations with single CPU cores
enabled. (For workstations with multiple CPU cores enabled, McAfee’s CPU impact should be
negligible with regard to the peak CPU load.)
Because the sustained CPU Load percentage for virus protection is extremely low and the work-
station has a Reserved CPU Load of 40% (for single CPU core workstations) or 25% (for multi-
ple CPU core enabled workstations) that can handle temporary peak loads, the workstation
summary sizing worksheet does not need a separate line entry for virus scanning; virus scan load-
ing is absorbed in the line entry for Reserved CPU Load percentage.

Virus Scan Software on Solaris Platforms
Currently there is no virus scan software available for workstations running I/A Series software
v8.3 for the Solaris operating system.

SMONs
System Monitor (SMON) is used to monitor the status of stations and devices on the control net-
work.
SMONs on I/A Series Software v8.4.x and Earlier
For I/A Series software v8.4.x and earlier, the System Definition software allows up to 30
SMONs per system.

NOTE
If you want to upgrade SMONs on I/A Series software v8.4.x and earlier, refer to
Appendix A “Upgrading SMONs on I/A Series Software v8.4.x and Earlier to v8.5-
v8.8 or Control Core Services v9.0 or Later”.

SMONs on I/A Series Software v8.5-v8.8 and Foxboro Evo Control Core Services v9.0 or Later
For I/A Series software v8.5-v8.8 or Control Core Services v9.0 or later, SMON support on Win-
dows workstations is expanded from 30 to 128 System Monitors on stations and switches with
I/A Series software v8.5-v8.8 or Control Core Services v9.0 or later. However, pre-V8.5
stations still only support 30 SMONs. In order to interoperate, the System Monitor name of the
host workstation has to appear in the first 30 names.
If you want to use more than 30 SMONs on a Foxboro Evo system, keep these caveats in mind:
♦ The current limit of 64 stations per System Monitor Domain remains the same.
♦ SysDef 2.10 or later is needed to configure an instance of the control network with
more than 30 System Monitors.
♦ SysDef 2.10 or later can be used to configure up to 128 System Monitors in a Foxboro
Evo system that consists of a Windows-based control network connected to a Node-
bus I/A Series system. Legacy I/A Series systems retain their limit of 30 System
Monitors.
♦ Versions of SMDH running on the Nodebus may need Quick Fixes; contact the
Global Customer Support Center at https://pasupport.schneider-electric.com.

Limitations on Number of Switches Assigned to a Single System Monitor
In order to optimize the performance of System Monitors on a system with I/A Series software
v8.x or Control Core Services v9.0 or later, the total number of switches assigned to a single
System Monitor cannot exceed fifteen (15), and the total number of switch ports for the switches
assigned to a single System Monitor cannot exceed five hundred (500).
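The two limits above can be checked together. This is a minimal illustrative sketch (the function name is an assumption, not part of the product software):

```python
# Limits stated above: at most 15 switches, and at most 500 total switch
# ports, assigned to a single System Monitor. Name is illustrative.
def smon_switch_assignment_ok(ports_per_switch):
    """ports_per_switch: one entry per switch assigned to the System Monitor."""
    return len(ports_per_switch) <= 15 and sum(ports_per_switch) <= 500

print(smon_switch_assignment_ok([48] * 10))  # 10 switches, 480 ports -> True
print(smon_switch_assignment_ok([48] * 11))  # 11 switches, 528 ports -> False
```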

OS Configurable Parameters
Workstations support OS configurable parameters that enable you to fine tune OS extension
resources for a particular application. These OS configurable parameters consist mainly of Object
Manager shared memory resources. They include:
♦ Number of OM lists for change-driven data access
♦ Number of Foxboro Evo objects that can be imported to minimize system multicast
messages
♦ Number of OM objects which also supports the number of Application Objects.
Default values have been set for a typical workstation that supports the recommended guidelines
for workstation applications such as FoxView, Alarm Manager, AIM*Historian, and so forth. You
need not modify the default settings. The OS configurable parameter usage can be viewed using
the /usr/local/show_params utility. Refer to Application Object Services User’s Guide (B0400BZ)
for information on setting OS configurable parameters.
Table 2-1 contains a list of OS configurable parameters with default, minimum, and maximum
values, followed by a brief description of each parameter and typical usage of the control network
in a Foxboro Evo system:

Table 2-1. OS Configurable Parameters

                                Default Value     Min Value         Max Value
OS Parameter Name               Windows  Solaris  Windows  Solaris  Windows  Solaris
CMX_NUM_CONNECTIONS             200      200      96       96       255      255
URFS_NUM_CONNECTIONS            200      200      96       96       200      255
OM_NUM_OBJECTS                  4000     4000     450      450      6000     6000
OM_NUM_CONNECTIONS              200      200      30       30       255      255
OM_NUM_IMPORT_VARS              1000     1000     1000     1000     10000    10000
OM_NUM_LOCAL_OPEN_LISTS         100      100      50       50       300      300
OM_NUM_REMOTE_OPEN_LISTS        100      100      50       50       100      100
IPC_NUM_CONN_PROCS              200      200      96       96       255      255
IPC_NUM_CONNLESS_PROCS          250      250      96       96       255      250
GET_SET_TIMEOUT                 4        4        2        2        12       12
OM_MULTICAST_OPTIMIZATION (1)   0        -        0        -        1        -
IMP_SAVE_PERIOD (1)             30       -        0        -        1440     -
OMMO_MULTICAST_DELAY (1)        200      -        50       -        12000    -
OMMO_UNICAST_DELAY (1)          100      -        0        -        12000    -

1. Parameter available only on Windows systems with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later.

NOTE
Current workstations size the OM Scanner Database for the maximum number of
local lists (OM_NUM_LOCAL_OPEN_LISTS) and maximum number of remote
lists (OM_NUM_REMOTE_OPEN_LISTS), with the maximum of 255 points
per list. Thus, it is not possible to run out of OM Scanner Database entries.

NOTE
For Solaris workstations with I/A Series software v8.4.2 or later, or Windows work-
stations with I/A Series software v8.6-v8.8 or Control Core Services v9.0 or later,
the values in the “Default Value” column are configured in the \usr\fox\exten\config\loadable.cfg
file, and the values in the “Min Value” and “Max Value” columns are configured in the
\usr\fox\exten\config\user_rules.cfg file.

CMX_NUM_CONNECTIONS
♦ Maximum number of concurrent connections allowed by the workstation.
♦ CMX_NUM_CONNECTIONS ≥ OM_NUM_CONNECTIONS.
URFS_NUM_CONNECTIONS (Solaris Only)
♦ Number of connections used by uRFS.
OM_NUM_OBJECTS
♦ Total number of OM objects that can be created by applications. The number of OM
objects is also used to support the number of Application Objects because they share
OM memory space.
♦ You can use the /usr/local/show_params utility to view the usage of OM objects.
♦ You can use the /opt/fox/bin/tools/som utility (“list” command) to view the names of
OM objects created.
♦ Each FoxView creates ~65 OM objects.
♦ Each Alarm Manager Subsystem creates ~10 OM objects.
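The per-application figures above (~65 OM objects per FoxView, ~10 per Alarm Manager Subsystem) can be combined into a rough lower-bound estimate against the default OM_NUM_OBJECTS of 4000. This is an illustrative sketch; other applications also create OM objects, so treat the result as a lower bound:

```python
# Rough lower-bound OM object estimate from the figures above.
# The function name is illustrative, not part of the product software.
OM_NUM_OBJECTS_DEFAULT = 4000

def om_objects_lower_bound(num_foxviews, num_alarm_managers, app_objects=0):
    return num_foxviews * 65 + num_alarm_managers * 10 + app_objects

used = om_objects_lower_bound(num_foxviews=16, num_alarm_managers=4)
print(used, used <= OM_NUM_OBJECTS_DEFAULT)  # 1080 True
```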
OM_NUM_CONNECTIONS
♦ Total number of station connections used by OM Server for local OM change-driven
lists. The number of connections determines how many stations can source data for
workstation displays, AIM*Historian, and user applications.
♦ You can use the /usr/local/show_params utility to view the usage of station
connections.
OM_NUM_IMPORT_VARS
♦ Total number of entries used to save station addresses for Foxboro Evo objects to min-
imize message multicasts.
♦ You can use the /usr/local/show_params utility to view the usage of Foxboro Evo
objects imported.
♦ You can use the /opt/fox/bin/tools/som utility (“imp” command) to view the names of
imported Foxboro Evo objects (for example, compounds).
OM_NUM_LOCAL_OPEN_LISTS
♦ Total number of workstation OM lists that can be opened for change-driven data
access.
♦ Each FoxView opens one list per 1-75 display points.
♦ Each AIM*Historian opens (through AIM*API) one list per 1-255 points sampled.
♦ Each user application opens (through AIM*API) one list per 1-255 points requested
for change-driven access.
♦ You can use the /usr/local/show_params utility to view the usage of local OM lists.
♦ You can use the /opt/fox/bin/tools/som utility (“opdb” command) to view local OM
lists.
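The list-consumption rules above (one FoxView list per 1-75 display points; one AIM*API list per 1-255 points for AIM*Historian or user applications) can be sketched against the default OM_NUM_LOCAL_OPEN_LISTS of 100. The function name is an illustrative assumption:

```python
import math

# Sketch of local OM list consumption from the figures above:
# FoxView opens one list per 1-75 display points; AIM*Historian and
# user applications open (through AIM*API) one list per 1-255 points.
def local_open_lists_needed(points_per_display, num_displays, historian_points):
    foxview_lists = num_displays * math.ceil(points_per_display / 75)
    historian_lists = math.ceil(historian_points / 255)
    return foxview_lists + historian_lists

# Eight 200-point displays (3 lists each) plus 5,000 historian points (20 lists).
print(local_open_lists_needed(200, 8, 5000))  # 44
```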
OM_NUM_REMOTE_OPEN_LISTS
♦ Total number of remote OM lists that source data (for example, remote shared vari-
ables) for corresponding local OM lists (for example, FoxView displays) opened on
other workstations.
♦ You can use the /usr/local/show_params utility to view the usage of remote OM lists.
♦ You can use the /opt/fox/bin/tools/som utility (“opdb” command) to view remote
OM lists on workstations.
♦ You can use the /opt/fox/bin/tools/rsom utility (“opdb” command) to view remote
OM lists on control stations.
IPC_NUM_CONN_PROCS
♦ Maximum number of workstation software processes that register for IPC connected
services.
♦ Control Core Services baseline software running on a Windows workstation con-
sumes approximately 35 Control Core Services processes registered for IPC connected
services.
♦ Control Core Services baseline software running on a Solaris workstation consumes
approximately 35 Control Core Services processes registered for IPC connected
services.
♦ You can use the /usr/local/show_params utility to view the usage of IPC connected
services.
♦ You can use the /opt/fox/bin/tools/sipc utility (“list dt” command) to view the names
of the processes registered for IPC connected services.
IPC_NUM_CONNLESS_PROCS
♦ Maximum number of workstation software processes that register for IPC connection-
less services.
♦ Control Core Services baseline software running on a Windows workstation con-
sumes approximately 65 Control Core Services processes registered for IPC
connectionless services.
♦ Control Core Services baseline software running on a Solaris workstation consumes
approximately 70 Control Core Services processes registered for IPC connectionless
services.
♦ You can use the /usr/local/show_params utility to view the usage of IPC connection-
less services.
♦ You can use the /opt/fox/bin/tools/sipc utility (“list cdt” command) to view the names
of the processes registered for IPC connectionless services.
GET_SET_TIMEOUT
♦ The timeout value (in seconds) that a station waits before timing out GETVAL
and SET_CONFIRM operations.
OM_MULTICAST_OPTIMIZATION
♦ (For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later only)
Enables (1) or disables (0) the Object Manager Multicast Optimization (OMMO)
feature during workstation boot up, as discussed in Object Manager Calls
(B0193BC).
♦ Reduces the number of multicast network communications initiated by the Object
Manager to reduce the network processing overhead on the control network and
I/A Series Nodebus control network hardware (such as a switch, Address Translation
Station (ATS), etc.) and the OM processing overhead on the Foxboro stations, to
increase the efficiency and robustness of Control Core Services network
communications.
♦ Available on Windows-based workstations with I/A Series software v8.6-v8.8 or Con-
trol Core Services v9.0 or later only.
IMP_SAVE_PERIOD
♦ (For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later only)
For the Object Manager Multicast Optimization (OMMO) feature, this parameter
determines how often (in minutes) a workstation saves its import table and address
table to local files, which the workstation uses to populate its OM database when
rebooting, as discussed in Object Manager Calls (B0193BC).
♦ Helps to reduce network load as the last known locations of OM objects are loaded
from this file instead of having to be requested from stations on the network through
multicast messages.
OMMO_MULTICAST_DELAY
♦ (For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later only)
Delay time (in milliseconds) between the opening of OM lists that are sent using
multicast messages and those that are sent to stations on the Nodebus. Related to the
Object Manager Multicast Optimization (OMMO) feature discussed in Object
Manager Calls (B0193BC).
OMMO_UNICAST_DELAY
♦ (For Windows-based workstations with I/A Series software v8.6-v8.8 or Control Core
Services v9.0 or later only)
Delay time (in milliseconds) between the opening of OM lists that are sent through
unicast messages to stations on the control network. Related to the Object Manager
Multicast Optimization (OMMO) feature discussed in Object Manager Calls
(B0193BC).

FoxView
In general, FoxView displays affect Control Core Services control network systems as follows:
♦ Each FoxView display consumes a workstation CPU Load percentage for updating
display values, bar graphs, trend lines, and so forth.
♦ Each FoxView display consumes one workstation OM Server connection per remote
station that sources display points.
♦ Each FoxView instance and its display consumes these workstation OS configurable
parameters:
♦ OM_NUM_OBJECTS
♦ OM_NUM_LOCAL_OPEN_LISTS
♦ OM_NUM_CONNECTIONS
♦ Each FoxView display causes a control station OM Scan Load percentage based on the
number of display points the control station scans each second.
♦ Each FoxView display causes each control station that sources display points to con-
sume one OM Scanner connection.
FoxView display updates are based on the display scan rate and the fast scan option configured
when building a display using FoxDraw. The display configurable scan rate (which has a default
of 1 s) applies to all stations sourcing display points. It determines how often the source stations
scan the display points and send updated values to the workstation. The fast scan option applies
only to control stations with a BPC of 100 ms or faster that are configured by SysDef to allow the
OM fast scan option. A display configured with the fast scan option, coupled with a control sta-
tion configured for OM fast scan, causes a control station sourcing display points to scan the
points every 100 ms and send updated values to the workstation.
The default display scan rate of 1 second coupled with the default no fast scan option
provides:
♦ Display call-up with initial data values within 1 to 2 seconds.
♦ Display updates of data sourced by Foxboro stations within 1 second.

NOTE
The fast scan option increases the OM Scan load on each control station that
sources display data to approximately ten times the normal rate for the display points.
We recommend that you use the FoxView display fast scan option only if you have
control stations running at 100 ms BPC or faster and need an initial display call-up
time less than 1 second or if your data source is external to I/A and the display
update time needs to be faster.

A workstation can support multiple FoxViews (Windows 1-16, Solaris 1-16) and each worksta-
tion worksheet calculates a CPU Load percentage based on a 200-point display with all the dis-
play points changing value every scan cycle. When building FoxView displays, you need to
consider these system impacts:
♦ Displays consume workstation OM Server connections equal to the number of sta-
tions that source the display points. If the number of stations sourcing display points
exceeds the number of OM Server connections, the display will not connect to all
source stations and update all the points. The number of OM Server connections is an
OS configurable parameter (OM_NUM_CONNECTIONS) and can be increased to
correct this condition. The OM multiplexes station connections for all OM lists on a
single workstation. You need not modify the default value (Windows=200,
Solaris=200).
♦ Displays use one local OM list on the workstation for each of 1 to 75 unique display
points. The number of OM local lists is an OS configurable parameter
(OM_NUM_LOCAL_OPEN_LISTS). The default value (Windows=100, Solaris=100)
does not require modification.
♦ Displays should contain no more than 200 points to help ensure an initial call-up
time of less than 2 seconds.
♦ A 200-point display with the default scan rate of 1 second consumes a workstation
CPU Load of:
♦ 1.6% for Windows workstations with multiple CPU cores enabled
♦ 2.0% for Windows workstations with a single CPU core enabled
♦ 5.2% for Solaris workstations.
♦ A 200-point display configured to use the fast scan option consumes a workstation
CPU Load of:
♦ 2.0% for Windows workstations with multiple CPU cores enabled
♦ 4.0% for Windows workstations with a single CPU core enabled
♦ 10% for Solaris workstations.
♦ Displays configured to use the fast scan option increase the workstation CPU Load
percentage because they cause FoxView software to update the display values at a
faster rate and the workstation to process ten times the number of OM Scanner
update messages sent by source control stations every 100 ms.
♦ Displays cause a FCP270/ZCP270 control station that sources display points to have
an OM Scan Load of 1.8% per 1,000 points/second changing every scan cycle.

NOTE
For OM scan loading for the FCP280, refer to the Field Control Processor 280
(FCP280) Sizing Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280
(FDC280) Sizing Guidelines and Excel Workbook (B0700GS).

♦ A 200-point display with the default scan rate of 1 second that has all the display
points sourced by a single FCP270/ZCP270 control station increases the control sta-
tion’s OM Scan Load by 0.4%.
♦ Displays configured to use the fast scan option increase (by ten times) the OM Scan
Load percentage on each 100 ms control station that sources display points and is
configured for the fast scan option. Each source control station scans display points
every 100 ms rather than every 1 second and sends 10 times the number of update
messages.
♦ A 200-point display with the display fast scan option that has all the display points
sourced by a single OM fast scan control station causes a FCP270/ZCP270 control
station OM Scan Load of 4.0%.
♦ To configure displays with the fast scan option, consider the number of FoxView dis-
plays that can simultaneously access data from the same control station. This factor is
covered in the OM Scan Loading section of the Control Station spreadsheets.
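The FCP270/ZCP270 arithmetic above can be sketched as a small function. This assumes the 1.8% per 1,000 points/second figure and the ten-times fast-scan multiplier from the text; the function name is illustrative, and note that the text rounds a 200-point display to 0.4% (4.0% with fast scan):

```python
# Sketch of FCP270/ZCP270 OM Scan Load: about 1.8% per 1,000 points/second
# changing every scan cycle. With fast scan, a 100 ms control station sends
# updates ten times as often as the default 1 second display scan rate.
def om_scan_load_pct(points_changing_per_scan, fast_scan=False):
    points_per_second = points_changing_per_scan * (10 if fast_scan else 1)
    return 1.8 * points_per_second / 1000.0

print(om_scan_load_pct(200))                  # 0.36
print(om_scan_load_pct(200, fast_scan=True))  # 3.6
```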
Alarming
In a control network, configure alarm destinations for control station alarms. APRINT services
on each control station send control process alarm messages to the Alarm Management Subsys-
tem (AMS) for configured alarm destinations such as workstations, printers, and AIM*Historian
workstations. It sends multiple alarm messages (1 per destination) for each process alarm occur-
rence.
When planning alarm handling for your system, you need to consider:

NOTE
Be aware that the CP270, FCP280 and FDC280 sizing workbooks allow the adjust-
ment of load assumptions related to block load for alarming; e.g., "Average
BLNALM Block inputs used" and "Percentage of PID blocks using alarm options"
in CP_Load_Assumptions sheet in the sizing workbook. However, there is addi-
tional, unestimated load, to communicate alarms to their destinations. This is part
of the reason for the 70% limit on Core 1 CPU load, for FDC280, and Overall Sta-
tion Load, for CP270 and FCP280, to allow time for this additional alarm process-
ing. This section helps you estimate that additional alarm processing load.
For help adjusting the estimate for block load for alarming for the FCP280, refer to
the Field Control Processor 280 (FCP280) Sizing Guidelines and Excel® Workbook
(B0700FY).
For help adjusting the estimate for block load for alarming for the FDC280, refer to
the Field Device Controller 280 (FDC280) Sizing Guidelines and Excel Workbook
(B0700GS).

♦ How do I create an estimate for the expected sustained alarm rate?
For example, if:
The controller has 5000 blocks, with on average five blocks per control loop, giving
5000/5 = 1000 control loops total, and 5% of the control loops are generating an
alarm per second, then the estimated sustained alarm rate =
1000 * 0.05 = 50 alarms per second.
♦ How many alarm destinations do I need for each controller?
♦ How many alarm messages per second do I need for each controller? Alarm messages
per second = sustained alarm rate * alarm destinations
♦ How heavily loaded are my controllers; how much Idle Time is needed to support the
number of alarm messages?
♦ APRINT increases the control station alarming load significantly for each alarm destina-
tion configured. For an estimate of the % CPU Load per alarm message per second,
refer to Table 2-2. Note that you count one "alarm message" for each process alarm
message for each destination, not the number of APRINT messages on the wire
(which can contain multiple process alarm messages concatenated together). For
example, if a compound generates 100 alarms per second and the compound has five
alarm destinations, that counts as 500 alarm messages per second for the purposes of
this estimation.


Table 2-2. CPs vs. % CPU Load Per Alarms Per Second, For # of Alarm Destinations1

            % CPU Load Per Alarms Per Second, For # of Alarm Destinations
CP Type     6 or more      5        4        3        2        1
CP270         0.035      0.036    0.038    0.042    0.047    0.065
CP280         0.022      0.023    0.025    0.028    0.033    0.049
FDC280        0.026      0.027    0.029    0.033    0.038    0.057

1. If your system has more than 16 alarm destinations, add 10% to the resulting estimate to
account for the possibility of slightly less efficient message throughput. For example, if the
original estimate is 12% CPU load, add 10% of 12 for a final estimate of 12 + 12 * 0.10 =
12 + 1.2 = 13.2% CPU load.

♦ Example: On an FCP280, Compound 1 generates 100 process alarms per second
with 5 alarm destinations, using 0.023 * (100 * 5) = 11.5% CPU, and Compound
2 generates 40 process alarms per second with 2 alarm destinations, using 0.033 *
(40 * 2) = 2.64% CPU, for a total of 14.14% CPU.
♦ The Alarm Management Services (AMS) on a Windows workstation with a single
CPU core enabled consumes 2.5% CPU Load for every 100 alarm messages per
second processed.
♦ The Alarm Management Services (AMS) on a Windows workstation with multiple
CPU cores enabled consumes 1.6% CPU Load for every 100 alarm messages per
second processed.
♦ The Alarm Management Services (AMS) on a Solaris workstation consumes 5% CPU
Load for every 100 alarm messages per second processed, plus an additional 5% base
load.
♦ The AMS base load on a workstation also depends on the number of Alarm Managers
and the refresh rate of each Alarm Manager.
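The alarm-load arithmetic above can be sketched as a small calculation. The per-message coefficients are copied from Table 2-2; the function names, the compound figures, and the loop-count example are illustrative assumptions, not part of the sizing workbooks:

```python
# Sketch: estimate control-station alarm CPU load from Table 2-2.

# % CPU load per (alarm message per second), keyed by CP type and number
# of alarm destinations; 6 or more destinations use the "6 or more" column.
TABLE_2_2 = {
    "CP270":  {1: 0.065, 2: 0.047, 3: 0.042, 4: 0.038, 5: 0.036, 6: 0.035},
    "CP280":  {1: 0.049, 2: 0.033, 3: 0.028, 4: 0.025, 5: 0.023, 6: 0.022},
    "FDC280": {1: 0.057, 2: 0.038, 3: 0.033, 4: 0.029, 5: 0.027, 6: 0.026},
}

def sustained_alarm_rate(blocks, blocks_per_loop, fraction_alarming):
    """Expected sustained alarm rate: loops x fraction alarming per second."""
    return (blocks / blocks_per_loop) * fraction_alarming

def alarm_cpu_load(cp_type, alarms_per_sec, destinations, system_destinations=1):
    """% CPU for one compound: one message per alarm per destination."""
    coeff = TABLE_2_2[cp_type][min(destinations, 6)]
    load = coeff * alarms_per_sec * destinations
    if system_destinations > 16:     # footnote 1: add 10% above 16 destinations
        load *= 1.10
    return load

# Worked examples from the text (the FCP280 uses the CP280 row):
print(sustained_alarm_rate(5000, 5, 0.05))          # 50.0 alarms per second
c1 = alarm_cpu_load("CP280", 100, 5)                # 0.023 * 500 = 11.5
c2 = alarm_cpu_load("CP280", 40, 2)                 # 0.033 * 80  = 2.64
print(round(c1 + c2, 2))                            # 14.14
```

The lookup table simply mirrors Table 2-2, so any revision to the published coefficients should be carried into the dictionary.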

Historians
In general, AIM*Historian affects the control network as follows:
♦ AIM*Historian consumes workstation CPU Load percentage based on data collection
rates, data reduction, and data archiving.
♦ AIM*Historian consumes workstation Disk Load Time percentage and physical disk
space based on Real-Time Point (RTP) file sizes.
♦ AIM*Historian increases control station CPU Load percentage for OM scanning of
data collection points sourced by the control station.
♦ AIM*Historian consumes these workstation OS configurable parameters:
♦ OM_NUM_LOCAL_OPEN_LISTS
♦ OM_NUM_CONNECTIONS
♦ AIM*Historian consumes one workstation OM Server connection for every station
that sources collection points.


♦ AIM*Historian causes a control station to consume one OM Scanner connection for


collection points it sources.
The data collection rate determines how much data is collected in a particular time frame and is
controlled by:
♦ Delta – specifies the minimum change of a data collection point relative to the previ-
ously collected value. A delta is assigned to each data collection point and should be
the controlling variable for the data collection rate.
♦ Collection Frequency – specifies how often the data collection points are scanned for
changes on the stations (for example, control stations) that source the data. AIM*His-
torian supports a slow and fast frequency option. By default, the fast frequency is in
effect, and in most of the implementations there is little need to change to the slow
frequency. In many cases, making the collection frequency the controlling variable can
be used as a convenient alternative to fine-tuning the individual deltas of data collec-
tion points.
♦ Max Time Between Samples – ensures that a value is placed in the database at least at
the specified interval, regardless of whether the value has changed more than the delta
since the last collection. This parameter can be considered to have the opposite
functionality to the Collection Frequency.
Data collection points are combined and collected into RTP files. A new RTP file is started when-
ever the active one is full or when the specified Real-Time Retention Time parameter (RTTIME)
has expired. The finished files are eventually repacked, which shrinks them to about one-third of
their original size without compression, and by an additional 40% to 60% when compression is
on. Data retrieval is less efficient from a large number of small files than from a small number of
large files; however, if the RTP files are too large, performance problems can occur because the
RTP files may not fit comfortably into physical memory, in which case disk activity increases and
system performance may decrease.
A workstation can support multiple AIM*Historian instances, each capable of collecting in excess
of 20,000 points. When using AIM*Historian software, you need to consider these system
impacts:
♦ AIM*Historian software consumes an average workstation CPU Load (Windows=
1.05% (multiple CPU cores) or 2% (single CPU core), Solaris=3%) for data collec-
tions that change at a rate of 1,000 points/second.
♦ AIM*Historian software consumes a workstation CPU Load (Windows=1%, Solaris=
1.5%) for every 1,000 data collection points reduced at a rate ≥ 15 minutes.
♦ AIM*Historian software consumes a workstation CPU Load (Windows=5%, Solaris=
4%) for every 5,000 data collection points archived at the default rate.
♦ AIM*Historian software consumes workstation OM Server connections based on the
number of stations that source the data collection points. If the number of stations
sourcing the data collection points exceeds the number of OM Server connections,
not all the data is collected. The number of OM Server connections is an OS configu-
rable parameter (OM_NUM_CONNECTIONS) and can be increased to correct this
condition. Typically, you do not need to modify the default value, because the OM
multiplexes station connections for all OM lists on a single workstation.


♦ AIM*Historian software uses 1 local OM list for each of 1 to 255 data collection
points. The number of OM local lists is an OS configurable parameter
(OM_NUM_LOCAL_OPEN_LISTS) and can be increased if necessary.
♦ AIM*Historian software causes a control station OM Scan Loading of 1.7% per
1,000 collection points/second changing every scan cycle.
♦ AIM*Historian software causes a control station OM Scanner connection to be used
by each control station that sources data collection points.
♦ The ARCHSIZE parameter controls the size of the RTP files and experience shows
that a good compromise is to configure ARCHSIZE to a value that results in about
one RTP file per day, or very few RTP files per day.
♦ The maximum size of the RTP is estimated to be 1/8 of the physical memory size. For
example, if your Windows computer has 512 MB of RAM, ARCHSIZE need not be
configured greater than 64 MB; if your Solaris computer has 1.0 GB of RAM,
ARCHSIZE need not be configured greater than 128 MB.
♦ A good value for RTTIME is usually 86,400 seconds (1 day).
Refer to I/A Series Information Suite AIM*Historian User’s Guide (B0193YL) for information
regarding the AIM*Historian configuration parameters that you may need to configure based on
your system requirements and constraints. The AIM*Historian Excel spreadsheet HistSize.xls can
be used to estimate the Historian Configuration Parameters.
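The ARCHSIZE and repacking guidelines above can be sketched as a quick sizing check. The 1/8-of-RAM cap and the one-third repack ratio come from the text; the helper names are illustrative, and the 50% compression midpoint is an assumption (the text gives a 40% to 60% range):

```python
# Sketch of the RTP file sizing guidelines.

def max_archsize_mb(ram_mb):
    """Largest recommended ARCHSIZE (MB): about 1/8 of physical memory."""
    return ram_mb // 8

def repacked_size_mb(raw_mb, compressed=False):
    """Approximate finished RTP file size: about one-third after repack,
    reduced by roughly half again (assumed midpoint) when compression is on."""
    size = raw_mb / 3.0
    return size * 0.5 if compressed else size

print(max_archsize_mb(512))     # 64, the 512 MB Windows example
print(max_archsize_mb(1024))    # 128, the 1.0 GB Solaris example
```

A reasonable workflow is to pick ARCHSIZE so one RTP file covers about a day (RTTIME = 86,400 seconds), then confirm it stays under the 1/8-of-RAM cap.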

Printers
The Reserved CPU Load percentage in the workstation sizing spreadsheet is set to include han-
dling printer operations such as system messages and alarm messages. When deciding which
workstations need to host local printers, consider:
♦ Local printers consume approximately 10% of the CPU Load for printing alarms.
♦ All alarm printers need to be operated in the HSD (High Speed Draft) mode. This
allows better system performance when printing alarms and documents.
♦ Printing reports has about the same CPU Load percentage effect as printing alarms
at a rate of 30 alarms/minute, or about 10% load.

Application Object Services


Application Object Services (AOS) provides hierarchical application objects that are similar to
control objects but are under the control of the user application. When using AOS, consider:
♦ AOS is not a small application. Its size is approximately 25 MB.
♦ AOS uses Object Manager shared memory. Run calcAppSize to determine the correct
size of the OS configurable parameter OM_NUM_OBJECTS.
♦ The workstation CPU Load percentage is low; optimized, buffered AOA accesses
using the AO API can exceed 10,000 per second on a workstation.
For detailed information about AOS, refer to Application Object Services User’s Guide (B0400BZ).

Applications
It is the responsibility of the user to determine the system impact of customer application pack-
ages or third-party applications installed on the control network. Consider these points when
installing application packages on the workstation:


♦ The general workstation CPU Loading guideline is that you need to keep the
Reserved Overhead percentage (Windows (single CPU core)=40%, Windows (multi-
ple CPU cores enabled)=25%, Solaris=40%) to help ensure enough reserve capacity to
support peak loads for process upsets, large application startups, end of shift reports,
printing, file copies, network backup/restore, and so forth.
♦ Customer applications that access Control Core Services data need to estimate the
workstation CPU Load percentage based on AIM*API performance guidelines.
♦ Third-party applications’ specifications for minimal system requirements (for exam-
ple, RAM size) may affect Control Core Services applications like AIM*Historian.
♦ The number of application software packages.
♦ The size of user-developed applications and programs.
♦ The frequency of application executions.
♦ Simultaneous application executions.
♦ Minimizing system broadcasts and multicasts.
For Windows platforms, you can determine the effect an application has on the workstation by
using the Windows Performance Meter (Programs > Administrator Tools > Performance).
The Windows Performance Meter provides metrics for the system, processor, processes, memory,
physical disk, and so forth.
For Solaris platforms, you can determine the effect an application has on the workstation using
the ps command and the perfmeter utility (click Launch > Applications > Utilities >
Performance Meter).
Depending on the number and types of applications being run at the same time, increasing the
workstation memory may improve system performance. Increased memory usually reduces the
amount of paging and swapping to the physical hard disk.

Number of Self-Hosting or Auto-Checkpointing FDC280s,


FCP280s, or CP270s Supported By a Boot Host
If the auto-checkpointing option is enabled along with the self-hosting option, the loading
requirements are based partially on the CP idle time and the CP database size. The resulting
maximum total time from checkpoint start to the display of an "installed into flash memory" mes-
sage is 15 minutes per FDC280, FCP280, or CP270. With an auto-checkpoint rate of two hours
(15 minutes x 8 = 2 hours), this results in a maximum of eight FDC280s, FCP280s, or CP270s
that can be hosted by a single boot host when both functions are enabled.
If you slow the auto-checkpoint rate or run small databases, you can host more FDC280s,
FCP280s, or CP270s on a single boot host. Even if the self-hosting option is disabled, the auto-
checkpoint timing is roughly the same. The recommended maximum number of FDC280s,
FCP280s, or CP270s that can be hosted also remains unchanged.
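The boot-host calculation above can be sketched as follows. It assumes the stated worst-case 15-minute checkpoint window per controller; the function name is illustrative:

```python
# Sketch: controllers one boot host can serve when self-hosting and
# auto-checkpointing are both enabled.

def max_hosted_cps(checkpoint_interval_min, minutes_per_cp=15):
    """Hosted controllers whose checkpoints fit within the interval."""
    return checkpoint_interval_min // minutes_per_cp

print(max_hosted_cps(120))   # 8, the two-hour auto-checkpoint example
print(max_hosted_cps(240))   # 16, a slower rate permits more controllers
```

The second call illustrates the text's point that slowing the auto-checkpoint rate allows more controllers per boot host.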

Control
This section provides an overview of the system planning and sizing guidelines needed for you to
adequately plan your control strategy on the control network. For detailed specifications regard-
ing these control processors, refer to:
♦ Field Device Controller 280 (FDC280) User's Guide (B0700GQ)


♦ Field Control Processor 280 (FCP280) User’s Guide (B0700FW)


♦ Field Control Processor 270 (FCP270) User’s Guide (B0700AR)
♦ Z-Module Control Processor 270 (ZCP270) User’s Guide (B0700AN)
In general, you have to determine the number of control locations desired and properly size each
control station. The key performance measures associated with sizing a control station are:
♦ Verify the control station has enough execution time to read and write the I/O.
♦ Verify the control station has enough execution time to execute the installed function
blocks.
♦ Verify the control station has enough memory to hold the control database and
sequence code.
♦ Verify the control station has enough Idle Time to send alarm messages to all config-
ured alarm destinations.
♦ Verify the control station has enough execution time to scan all data points it sources
for FoxView displays, AIM*Historian, Peer-to-Peer connections, and workstation
application packages.
♦ Verify the control station has enough OM Scanner connections for data it sources for
FoxView displays, AIM*Historian, Peer-to-Peer connections, and workstation appli-
cation packages.
♦ Verify the control station can support the number of scanner update messages it sends
based on the number of OM Scanner connections and the OM scan rate. Each scan-
ner update message takes an OM Scan Load of 0.03% for the FCP270/ZCP270 (see
B0700FY for the FCP280 or B0700GS for the FDC280). The OM scanner update
messages are per list, with up to 100 scan points per message. An application that
opens a list of 225 points needs 3 update messages per scan cycle if the list points
change every scan cycle. A control station scanning at the fast scan option rate of 100
ms sends ten times the number of scanner update messages it would send if it scanned
at 1 second.
♦ Verify the control station has enough OM Server connections for peer-to-peer sink
data.
For detailed operations on sizing these control processors, refer to these manuals:
♦ FDC280 - Field Device Controller 280 (FDC280) Sizing Guidelines and Excel Work-
book (B0700GS)
♦ FCP280 - Field Control Processor 280 (FCP280) Sizing Guidelines and Excel® Work-
book (B0700FY)
♦ FCP270 - Field Control Processor 270 (FCP270) Sizing Guidelines and Excel Workbook
(B0700AV)
♦ ZCP270 - Z-Module Control Processor 270 (ZCP270) Sizing Guidelines and Excel
Workbook (B0700AW)

Alarming
Alarms and status messages are generated by an Alarm block or by alarm options in selected
blocks. Consider these options:
♦ Number of points with alarm indication


♦ Priority of alarms
♦ Criticality of alarms within each compound
♦ Devices and applications to be notified of process alarms
♦ Use of the compound alarm inhibit parameter
♦ Frequency of alarms.
The frequency of spontaneous alarms impacts the devices configured to be notified of alarms,
communication traffic on the network, and operator responsiveness. Alarming strategies include:
♦ Providing a significant delta to stop nuisance alarming caused by the process drifting
in and out of alarm when it is near a high or low limit
♦ Using the compound alarm inhibit function to stop alarms on a priority level basis.

Control Distribution
Distribution of the various control schemes among the process control hardware (control proces-
sors and Fieldbus Modules) requires you to consider:
♦ CP storage memory needed
♦ CP compound or block throughput
♦ Interprocess communication (IPC) connections
♦ Peer-to-peer relationships
♦ FBMs supported per CP.

Peer-to-Peer Relationships
Peer-to-peer connections between stations are established when a compound:block.parameter in a
source (supplier of data) station is connected to a compound:block.parameter in a sink (receiver
of data) station. An IPC connection is formed in each station. Multiple peer-to-peer connections
between two stations result in only one IPC connection for each station.
For the sink points and connections supported per station type, refer to Table 3-16 “Peer-to-Peer
Data” on page 51.
The change-driven update rate is 0.5 sec even when the BPC and the OM Scanner are running at
a period shorter than 0.5 sec.

OM Scan Load
The OM Scan Load percentage is based on:
♦ The number of data points scanned/second.
♦ The number and size of scanner update messages sent each second for OM list
updates.
Table 2-3 to Table 2-5 have examples of OM Scan Load for CP270 sourcing data.


NOTE
For OM scan loading for the FCP280, refer to the Field Control Processor 280
(FCP280) Sizing Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280
(FDC280) Sizing Guidelines and Excel Workbook (B0700GS).

AIM*API Application Examples

Table 2-3. OM Scan Load: AIM*API Application Examples

                        Points per Second    % Points Changing    Source CP270 OM
Description             Scanned              per Second           Scan Load %
AIM*API Application 10,000 100 16.4
AIM*API Application 10,000 50 11.7
AIM*API Application 10,000 25 9.4
AIM*API Application 9,000 100 14.8
AIM*API Application 9,000 50 10.5
AIM*API Application 9,000 25 8.5
AIM*API Application 8,000 100 13.2
AIM*API Application 8,000 50 9.4
AIM*API Application 8,000 25 7.5
AIM*API Application 7,000 100 11.5
AIM*API Application 7,000 50 8.3
AIM*API Application 7,000 25 6.6
AIM*API Application 6,000 100 9.9
AIM*API Application 6,000 50 7.1
AIM*API Application 6,000 25 5.6
AIM*API Application 5,000 100 8.2
AIM*API Application 5,000 50 5.9
AIM*API Application 5,000 25 4.7
AIM*API Application 4,000 100 6.6
AIM*API Application 4,000 50 4.7
AIM*API Application 4,000 25 3.8
AIM*API Application 3,000 100 5.0
AIM*API Application 3,000 50 3.5
AIM*API Application 3,000 25 2.8
AIM*API Application 2,000 100 3.3
AIM*API Application 2,000 50 2.4
AIM*API Application 2,000 25 1.9
AIM*API Application 1,000 100 1.7
AIM*API Application 1,000 50 1.2
AIM*API Application 1,000 25 0.9


NOTE
AIM*Historian software is an application that uses AIM*API software. The default
list scan rate for AIM*API software is 0.5 seconds. Scanning 5,000 points every 0.5
seconds is equivalent to scanning 10,000 points/second.
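The equivalence stated in the NOTE above can be sketched as a one-line conversion; the function name is illustrative:

```python
# Sketch: convert an AIM*API list to its equivalent points-scanned-per-
# second figure for use with Table 2-3 (default list scan rate 0.5 s).

def equivalent_points_per_second(points_in_list, scan_rate_s=0.5):
    """Points-per-second-scanned figure to look up in Table 2-3."""
    return points_in_list / scan_rate_s

print(equivalent_points_per_second(5000))   # 10000.0, the top row of Table 2-3
```

The same conversion applies to Peer-to-Peer lists, which also scan at 0.5 seconds.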

Peer-To-Peer Examples

Table 2-4. OM Scan Load: Peer-To-Peer Examples

                              Number      Points per       % Points Changing   Source CP270 OM
Description                   of Points   Second Scanned   per Second          Scan Load %
Peer-To-Peer Source Station 5,000 10,000 100 17.7
Peer-To-Peer Source Station 5,000 10,000 50 12.4
Peer-To-Peer Source Station 4,500 9,000 100 15.9
Peer-To-Peer Source Station 4,500 9,000 50 11.1
Peer-To-Peer Source Station 4,000 8,000 100 14.2
Peer-To-Peer Source Station 4,000 8,000 50 9.9
Peer-To-Peer Source Station 3,500 7,000 100 12.4
Peer-To-Peer Source Station 3,500 7,000 50 8.7
Peer-To-Peer Source Station 3,000 6,000 100 10.6
Peer-To-Peer Source Station 3,000 6,000 50 7.4
Peer-To-Peer Source Station 2,500 5,000 100 8.9
Peer-To-Peer Source Station 2,500 5,000 50 6.2
Peer-To-Peer Source Station 2,000 4,000 100 7.1
Peer-To-Peer Source Station 2,000 4,000 50 5.0
Peer-To-Peer Source Station 1,500 3,000 100 5.3
Peer-To-Peer Source Station 1,500 3,000 50 3.7
Peer-To-Peer Source Station 1,000 2,000 100 3.6
Peer-To-Peer Source Station 1,000 2,000 50 2.5
Peer-To-Peer Source Station 500 1,000 100 1.8
Peer-To-Peer Source Station 500 1,000 50 1.3

NOTE
The number of Sink stations does not affect the OM Scan Load percentage on the
Source station. The list scan rate for Peer-To-Peer is 0.5 seconds. Scanning 5,000
points every 0.5 second is equivalent to scanning 10,000 points/second.


FoxView Application Examples

Table 2-5. OM Scan Load: FoxView Application Examples

              Number of WSTA70s,     Average Active
              WSVR70s, & AWs         FoxViews per        Average Unique   Points per   % Points     Source CP270
              with FoxView           WSTA70, WSVR70,     Points per       Second       Changing     OM Scan
Description   Connections to CP270   or AW               Display          Scanned      per Second   Load %
FoxView Application 25 2 200 10,000 100 19.0
FoxView Application 25 1 200 5,000 100 9.5
FoxView Application 25 2 200 10,000 50 15.0
FoxView Application 25 1 200 5,000 50 7.5
FoxView Application 25 2 100 5,000 100 11.5
FoxView Application 25 1 100 2,500 100 5.8
FoxView Application 25 2 100 5,000 50 7.5
FoxView Application 25 1 100 2,500 50 3.8
FoxView Application 10 2 200 4,000 100 7.6
FoxView Application 10 1 200 2,000 100 3.8
FoxView Application 10 2 200 4,000 50 6.0
FoxView Application 10 1 200 2,000 50 3.0
FoxView Application 10 2 100 2,000 100 4.6
FoxView Application 10 1 100 1,000 100 2.3
FoxView Application 10 2 100 2,000 50 3.0
FoxView Application 10 1 100 1,000 50 1.5
FoxView Application 5 2 200 2,000 100 3.8
FoxView Application 5 1 200 1,000 100 1.9
FoxView Application 5 2 200 2,000 50 3.0
FoxView Application 5 1 200 1,000 50 1.5
FoxView Application 5 2 100 1,000 100 2.3
FoxView Application 5 1 100 500 100 1.2
FoxView Application 5 2 100 1,000 50 1.5
FoxView Application 5 1 100 500 50 0.8
FoxView Application 1 2 200 400 100 0.8
FoxView Application 1 1 200 200 100 0.4
FoxView Application 1 2 200 400 50 0.6
FoxView Application 1 1 200 200 50 0.3
FoxView Application 1 2 100 200 100 0.5
FoxView Application 1 2 100 200 50 0.3
FoxView Application 1 1 100 100 100 0.2
FoxView Application 1 1 100 100 50 0.2


NOTE
The OM Scan Load percentage for the CP270 is based on the number of unique
display points, the list scan rate (default 1.0 second), and the percentage of display
points changing every second. The examples above are for the default 1.0-second
scan rate. Displays configured for the fast scan option rate have an OM Scan Load
percentage ten times that of the default list rate.
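The "Points per Second Scanned" column of Table 2-5 is the product of the workstation connections, the active FoxViews per workstation, and the unique points per display, at the default 1-second list scan rate. That derivation can be sketched as follows; the function name is illustrative:

```python
# Sketch: reproduce the Table 2-5 points-scanned column.

def foxview_points_scanned(connections, foxviews_per_ws, points_per_display):
    """Unique display points the source CP270 scans each second."""
    return connections * foxviews_per_ws * points_per_display

print(foxview_points_scanned(25, 2, 200))   # 10000, the first row of Table 2-5
print(foxview_points_scanned(10, 1, 100))   # 1000
```

For fast-scan displays, multiply the resulting OM Scan Load percentage by ten, per the NOTE above.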

Control Processor Load Analysis


The estimated number of CPs needed is based on:
♦ Number of FBMs needed (both remote and local)
♦ Number and type of control loops consisting of compounds of related blocks which
perform a single control scheme
♦ Scan rate per block and compound
♦ CP storage memory capacity
♦ CP throughput based on scan rates and block value point counts
♦ Need for fault-tolerant CPs for critical processes
♦ IPC connections available per CP (supporting various application requests for values)
♦ Peer-to-peer connections (affecting CP storage memory and IPC connections).

Block Processing Cycle


Running with 100 ms and 50 ms Block Processing Cycle (BPC)
Consider the number of blocks to be scanned within a compound and the number of compounds
to be scanned per control station. Control station processing throughput per second is dependent
upon control station type; however, some control blocks count as more than one block and FBM
equipment control blocks (ECBs) have to be factored in as well. An ECB is a software block cre-
ated for FBM data storage and is scanned at the fastest scan rate assigned to any control block
connected to any point on the FBM.
Careful planning is necessary to help prevent control blocks or compounds from being unable to
process within a single scan cycle.

Phasing

NOTICE
POTENTIAL LOSS OF DATA

• Be careful when phasing.


• Do not put the blocks at different phases in the same
control loop.

Failure to follow these instructions can result in data loss.


Phasing of blocks, which is the staggering of scan periods, should be used to help prevent block
processor overload. Refer to Control Processor 270 (CP270) and Field Control Processor 280
(FCP280) Integrated Control Software Concepts (B0700AG) before attempting to phase a station.

CNI
These subsections discuss sizing for the CNI.
For more details on the CNI, refer to Control Network Interface (CNI) User's Guide (B0700GE).

CNI Data Throughput Limits and CPU Usage


The CNI supports a bi-directional update rate of 2000 points per second. The CPU usage when
running at this throughput can be seen at point A on the graph in Figure 2-1.
The CNI has to exceed this throughput in some instances, such as during startup and recovery
from connection loss. To enforce the CPU usage ceiling for burst load conditions, such as the pro-
cessing overhead caused by the Object Manager opening and closing lists, the CNI restricts its
update throughput to 2500 updates per second (each way). This is indicated at point B on the
graph in Figure 2-1. Beyond this point, updates can be received by a local CNI from data sources
faster than they can be published to its remote CNI partner. Under these circumstances, previous
values are superseded by the latest data and, as a result of these lost updates, a system message is
displayed.

Figure 2-1. CNI Data BiDirectional Throughput and CPU Usage

Up to around 5000 updates per second (2500 in each direction), the CPU usage is about 5% per
1000 updates. At greater change rates, the graph flattens off due to throttling; updates are folded,
and the latest value received is sent as soon as possible.


Limitation When Working With Maximum Connections Through


a CNI
For a CNI, up to 10,000 connections can be made to remote points in another Foxboro Evo Con-
trol Network. Although it is possible to connect 10,000 points, a typical system may reach a limit
before this count is reached.
The number of connections that can be made depends on how efficiently they can be repre-
sented in the scan table on the CNI. The scan table consists of a number of rows, each containing
twenty items. When Object Manager lists are opened on the CNI, they are stored starting with a
new row. If the number of points in a list is not divisible by twenty, some entries in the scan table
are not utilized, and thus the maximum number of connections that can be made is reduced. A
system message is displayed when the maximum capacity of the scan table has been reached. Typ-
ically, across a mixture of peer-to-peer and HMI connections, this message is not displayed until
more than 9000 connections have been made.

Inter-CNI traffic (Bytes Per Second)

Figure 2-2. Inter-CNI traffic (Bytes Per Second)

The measurement of bytes transmitted shows the same flattening at around 2500 items per sec-
ond in one direction. Each update costs 24 bytes, plus packet headers; because the rate does not
exceed 2500 items, the steady-state total traffic does not exceed approximately 24 * 2500 =
60,000 bytes per second. The traffic increases if the system is bombarded with subscription
changes, for example, an action such as removing thousands of peer-to-peer connections.
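The steady-state traffic estimate above can be sketched directly from the stated figures: 24 bytes per update (packet headers excluded, as in the text's approximation) and a throttle of about 2500 updates per second in one direction. The function name is illustrative:

```python
# Sketch: one-direction steady-state inter-CNI payload traffic.

def steady_state_bytes_per_sec(updates_per_sec, bytes_per_update=24,
                               throttle=2500):
    """Payload bytes per second after throttling; excess updates fold."""
    return min(updates_per_sec, throttle) * bytes_per_update

print(steady_state_bytes_per_sec(2500))   # 60000
print(steady_state_bytes_per_sec(4000))   # 60000; excess updates are folded
```

The flat response above the throttle matches the flattening visible in Figure 2-2.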


Broadcast Considerations
If broadcast traffic on the control network is a concern, there are a number of aspects related to
the CNI sizing that impact this.

Remote Compound List Configuration on the Remote CNI


During start-up, the CNI broadcasts each of the compounds contained in the remote
Compound List. This broadcast operation is rate limited, so the number of compounds config-
ured does not increase the level of broadcast traffic but simply extends the time period over
which it occurs. Figure 2-3 provides an estimate of the broadcast traffic that is experienced.

Figure 2-3. Compound Broadcast Transmission Density

The first three complete broadcasts are at a density of two messages per second, with a 10-second
and then a 20-second gap between them. After the third complete broadcast, there is a four-minute
gap, and the final broadcast of all compound names is at a density of only one message per two
seconds:


Figure 2-4. Compound Broadcast Density Plot for 10K Compounds
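The broadcast sequence shown in Figures 2-3 and 2-4 can be sketched as a duration estimate. This assumes one compound name per broadcast message, which is a simplification (actual message packing may differ), and the function name is illustrative:

```python
# Sketch: elapsed time for the start-up compound broadcast sequence.

def broadcast_sequence_seconds(n_compounds):
    """Three fast passes at 2 messages/second separated by 10 s and 20 s
    gaps, a four-minute pause, then a final pass at 1 message per 2 s."""
    fast_pass = n_compounds / 2.0    # 2 messages per second
    slow_pass = n_compounds * 2.0    # 1 message per 2 seconds
    return 3 * fast_pass + 10 + 20 + 240 + slow_pass

print(broadcast_sequence_seconds(1000))   # 3770.0 seconds
```

As the text notes, configuring more compounds does not raise the broadcast traffic density; it only stretches this time line.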

Retry Sink Connections


Some disconnections and reconfigurations can result in list connections not being made or not
being repaired between the CNI and the source CP, or between another client and its CNI. In
these cases, initiate the “Retry Sink Connections” operation from the System Manager, as
described in System Manager (B0750AP). This operation is similar in function to the “Check-
point” operation on a CP, except that the CNI has no control strategy to save.
The “Retry Sink Connections” operation performs two functions:
♦ The re-broadcast of compounds for which this CNI is proxy for the remote control
network, and
♦ The retry of open lists subscribed by the remote control network, for which this CNI
is the sink.
The compounds broadcast in this case is the same as the last broadcast at startup; see Figure 2-3
and Figure 2-4. The broadcast message density is one message per two seconds.
The compounds broadcast is executed prior to the open list retry; therefore the time to execute
open list retry depends on the number of proxy compound names to be broadcast.


Figure 2-5. Retry Sink Connections Operation Time

Connection Time Considerations


If a significant number of connections need to be established to points on the remote control net-
work, there is an elapsed time period between the first and last point being connected. As
described, establishing connections requires broadcast operations, and the rate limiting of these
operations has an impact. Figure 2-6 shows the elapsed time that may occur as connection counts
increase.
The CNI does not differentiate connection requests from different consumer stations, so condi-
tions where concurrent connection requests from multiple stations could occur need to be con-
sidered if connection time is important. If better performance is needed, you can employ more
than one CNI to load-share the connections between two control networks.
The graph in Figure 2-6 shows the time to return good updates from connected points when, for
example, the local CNI is rebooted and has to re-open all requested lists with the source CPs. In
Figure 2-6, the software restart consumes most of the first 40 seconds, before the list-open opera-
tions start.


Figure 2-6. Number of Points vs. Time to Connect the First and Last Points

NOTE
This graph is taken from one example measurement.

The actual time until the first update is received (blue plot) depends on the distribution of
connections across source CPs. If one source CP is very heavily subscribed and all list-open
requests are directed at that one CP, it can take longer to send its first update because it is busy. If
connections are distributed across a number of CPs, the first-made lists may start updating while
other lists are still being opened on other CPs. The only certain data in the graph shown in
Figure 2-6 is that the last point to be connected sends its first update when it is connected, and
not before.

Recovery After a Detected Failure of All Communication


Between CNIs
When communications become unavailable between CNIs, all points are reported Out of Service
(OOS). On recovery of communications, the data updates for all points are delivered from the
local CNI to the remote CNI in order to help ensure that consumers have the latest data. The
throughput of updates is rate limited in the CNI to manage CPU utilisation and the elapsed time
taken to deliver the latest data depends on the number of connections. Another factor that occurs
when the CNIs reconnect relates to synchronisation of the state in the local and remote CNIs. If
there have been changes to connection subscriptions since the communication unavailability, it is
necessary to resynchronize the subscription state between the two CNIs. This can take an elapsed
period of time.
On reconnection, if there is no significant difference between the two CNIs' views of the subscriptions, there is no resynchronization work to do, and updates flow almost immediately, subject only to the throttling at about 2500 updates per second. For example, 10,000 subscriptions take about four seconds to complete their initial updates after reconnection.
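As a rough sketch of this arithmetic (assuming only the ~2500 updates/second throttle described above, and no resynchronization work), the initial-update time after reconnection can be estimated as:

```python
def reconnect_update_time_s(num_subscriptions, throttle_per_s=2500):
    """Estimate seconds to deliver all initial updates after CNI reconnection.

    Assumes updates are delivered at the ~2500 updates/second throttle
    noted above and that no subscription resynchronization is needed.
    """
    return num_subscriptions / throttle_per_s

# 10,000 subscriptions at ~2500 updates/second
print(reconnect_update_time_s(10_000))  # -> 4.0 seconds
```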
On reconnect, if there is a significant difference between the two CNIs and the subscriptions have to be resynchronized by the remote CNI sending all its requirements to the local CNI again (for example, if either CNI has been restarted), the time taken is, for all intents and purposes, the same as the reboot times shown in Figure 2-6. Since all lists have to be remade, the resynchronization does not add significantly to the time already being taken.

Limitation of Sink Points in a Workstation to Help Ensure Reconnection Following CNI Reboot
The CNI supports up to 10,000 compounds and up to 10,000 sink points, but the supported combination of compounds and sink points in a workstation depends:
♦ On the load on the workstation, especially the change update load, and,
♦ On other broadcasts that may be coming into the workstation from stations other
than the CNI.
When a CNI reboots, it re-broadcasts the compounds1 for which this CNI is proxy for the remote
Foxboro Evo control network. These received compound names are queued in the sink stations -
for example, the workstation - for reconnection to the CNI that just rebooted, and then, by proxy,
to the remote control network. With large numbers of compounds in the CNI’s remote Compound List and large numbers of sink points in the workstation, it is possible for the workstation's queue of compound names for reconnection to fill up and for some compound names to be dropped, resulting in points for those compounds failing to reconnect.
Depending on other activity in the system, it is possible, though not guaranteed, that a “Retry Sink Connections” operation (discussed in System Manager (B0750AP)) could successfully reconnect those points that initially did not reconnect upon a CNI reboot; for example, if the “Retry Sink Connections” operation is performed during a period in which fewer broadcasts are being sent by other stations.
The workstation queue for holding compound names for reconnect processing can hold up to
200 names at a time.
To help ensure that the points reconnect following a CNI reboot or a “Retry Sink Connections”
operation:
♦ Limit the number of sink points in the workstation or,
♦ Limit the number of compounds for the CNI.
The following examples illustrate this trade-off.
♦ If CNI’s compound namespace is 200 compounds or less, all sink points eventually
reconnect, assuming no compound broadcasts from other stations.
♦ If CNI’s compound namespace is greater than 200 compounds, the reconnect out-
come depends on the number of sink points and other workstation load. These
examples involve change update loads into the workstation:

1. At CNI reboot, the CNI also broadcasts its diagnostic shared variables prior to broadcasting its compound names.


♦ Case 1: If 10,000 compounds, 10,000 sink points, and no change update load,
3200 of the 10,000 sink points have been observed to successfully reconnect using
a “Retry Sink Connections” operation.
♦ Case 2: If 10,000 compounds, 3000 sink points, and no change update load,
2950 of the 3000 sink points have been observed to successfully reconnect using a
“Retry Sink Connections” operation.
♦ Case 3: If 10,000 compounds, and 2500 sink points with 1000 changes per sec-
ond, all the 2500 sink points are observed to successfully reconnect using a “Retry
Sink Connections” operation.
The success of broadcast messages depends on other broadcast activity occurring on the control
network.
If repeated “Retry Sink Connections” attempts prove unsuccessful at reconnecting all the workstation's sink points, recover by closing and re-opening the workstation's sink lists; for example, close and re-open displays and restart the Historian collection as needed.

I/O Points
The control station user guides and control station spreadsheets provide recommendations and
sizing guidelines for the I/O:
♦ Legacy Y-module (100 Series) FBMs
♦ DIN Rail Mounted (200 Series) FBMs
♦ FOUNDATION Fieldbus (FF)
♦ PROFIBUS
♦ HART
♦ Modbus
♦ FDSI
♦ DeviceNet.

Network
Understand the details of the network traffic flow to plan and implement a control network. A reasonable qualitative analysis of traffic profiles can be obtained without performing a rigorous quantitative analysis, provided reasonable estimates are made. Normally, the designer needs to know:
♦ The traffic characteristics (traffic volume and rates)
♦ Device throughput
♦ Which devices are talking to each other (the traffic flows across the network)
♦ The physical and logical location of all these devices
♦ The traffic volumes by device type and/or technology:
♦ peak and average sustained load
♦ packet/frame size
♦ The network percent capacity used


♦ How much of the bandwidth is being used
♦ Whether there is any evidence of network congestion:
♦ number of packet discards
♦ detected error rates
♦ traffic overhead
♦ If there are nodes connected through ATSs, the ambient multicast rate in the system
If you do not adequately understand these traffic flows, you can end up with a slow or non-working network. Answering these basic questions and performing some planning allows for a well-balanced, redundant system.
This Control Core Services functionality contributes to the network traffic rate:
♦ FoxView – display updates
♦ AIM*Historian – data collection
♦ Alarming – messages to Alarm Managers, Printers, and Historians
♦ Peer-to-Peer
♦ Application Packages – change-driven data access and get/set operations.
However, network traffic rates are not typically a gating issue for the control network, because traffic rate calculations for the Control Core Services functionality yield a network bandwidth utilization of less than 2%. The control network traffic rates need to be considered only in the case of third-party applications or user applications that generate high packet rates.
Workstation-to-workstation operations on the control network, such as copying extremely large files, can result in temporary bandwidth usage of up to 50% of the switch port utilization. These types of operations also need to be considered.
The baseline data that is assumed for the control network bandwidth usage (<2%):
♦ Average Control Core Services packet size is 150 bytes.
♦ Average Control Core Services sustained packet rate will not normally exceed 1500
packets/second (based on <2% bandwidth and 150-byte packets).
The table contains packet size and packet rate data for the 100 Mb device port speeds:
Packet Size (bytes)    Max Packets/Second
64                     148,810
150                    73,529 (2% = 1470)
1518                   8,127

NOTE
When measuring switch port utilization on the control network, a given measure-
ment applies only to a given link and the conversations on that link.
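The maximum packet rates in the table can be reproduced from the 100 Mb/s line rate if 20 bytes of per-frame overhead (preamble plus inter-frame gap) are assumed. This sketch shows the calculation; the 20-byte overhead figure is an inference from the table values, not stated in this document:

```python
def max_packets_per_second(frame_bytes, link_bps=100_000_000, overhead_bytes=20):
    """Theoretical maximum frames/second on an Ethernet link.

    overhead_bytes models the preamble and inter-frame gap (assumed to be
    20 bytes, which reproduces the table values above).
    """
    return link_bps / ((frame_bytes + overhead_bytes) * 8)

for size in (64, 150, 1518):
    print(f"{size} bytes: {round(max_packets_per_second(size)):,} packets/second")

# 2% utilization at the 150-byte average Control Core Services packet size:
print(int(0.02 * max_packets_per_second(150)))  # -> 1470 (as in the table)
```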

Refer to The Foxboro Evo Control Network Architecture Guide (B0700AZ) for planning the control
network, and Switch Configurator Application Software Guide for the Foxboro Evo Control Network
(B0700CA) for configuring the control network.


I/A Series software v8.1 introduced the feature for connecting the control network to a Nodebus network using ATSs. If using an ATS in Extender mode, calculate the inter-network traffic rates through the ATS to help ensure that the corresponding Nodebus LI traffic rate does not exceed the maximum recommended sustained rate of 220 packets/second. All stations that migrate to the control network and continue to communicate with stations on the Nodebus have to maintain their original Nodebus communication limits.
Copying a large data stream from a Nodebus through an ATS to the control network is not rec-
ommended. Refer to the “Standard I/A Series Migration Strategies” section in V8.3 Software for
the Solaris Operating System Release Notes and Installation Procedures (B0700RR) for specific details
regarding data transfers between the Nodebus and the control network.

System Planning Summary Tables


The specifications listed in Table 2-6 apply to workstations with the new multiple CPU core feature enabled.

Table 2-6. Windows Multicore Workstation System Planning Summary

Application: FoxView, 200-point display at default scan rate of 1 second
    Workstation CPU Load: 1.56%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 3 lists; 1 list for each 1-75 points
    OM Objects: 65 per FoxView

Application: FoxView, 200-point display at fast scan rate of 0.1 second
    Workstation CPU Load: 1.89%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 3 lists; 1 list for each 1-75 points
    OM Objects: 65 per FoxView

Application: Alarm Manager, 100 messages per second
    Workstation CPU Load: 1.57%
    OM Server Connections: Refer to AMS User’s Guide (B0700AT)
    OM Local Lists: Refer to AMS User’s Guide (B0700AT)
    OM Objects: 10 per Alarm Manager

Application: AIM*Historian, data collection change rate of 1000 points per second
    Workstation CPU Load: 1.05%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 4 lists; 1 list for each 1-255 points
    OM Objects: N/A

The specifications listed in Table 2-7 and Table 2-8 apply to workstations with a single CPU core enabled connected to FCP270s and ZCP270s.
For OM scan loading for the FCP280, refer to the Field Control Processor 280 (FCP280) Sizing
Guidelines and Excel® Workbook (B0700FY).


For OM scan loading for the FDC280, refer to the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS).

Table 2-7. Windows Workstation System Planning Summary

Application: FoxView, 200-point display at default scan rate of 1 second
    Workstation CPU Load: 2%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 3 lists; 1 list for each 1-75 points
    OM Objects: 65 per FoxView
    Control Station OM Scan Load1: 0.4%
    OM Scanner Connections: 1 for each station sourcing data
    Control Station Idle Time: N/A

Application: FoxView, 200-point display at fast scan rate of 0.1 second
    Workstation CPU Load: 4%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 3 lists; 1 list for each 1-75 points
    OM Objects: 65 per FoxView
    Control Station OM Scan Load1: 4.0%
    OM Scanner Connections: 1 for each station sourcing data
    Control Station Idle Time: N/A

Application: Alarm Manager, 100 messages per second
    Workstation CPU Load: 2.5%
    OM Server Connections: Refer to AMS User’s Guide (B0700AT)
    OM Local Lists: Refer to AMS User’s Guide (B0700AT)
    OM Objects: 10 per Alarm Manager
    Control Station OM Scan Load1: N/A
    OM Scanner Connections: Refer to AMS User’s Guide (B0700AT)
    Control Station Idle Time: Decreases 0.1% per alarm message generated

Application: AIM*Historian, data collection change rate of 1000 points per second
    Workstation CPU Load: 2%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 4 lists; 1 list for each 1-255 points
    OM Objects: N/A
    Control Station OM Scan Load1: 1.7%
    OM Scanner Connections: 1 for each station sourcing data
    Control Station Idle Time: N/A

Application: AIM*Historian, data reduction of 1000 points reduced >= 15 minutes
    Workstation CPU Load: 1%
    (All other columns: N/A)

Application: AIM*Historian, data archiving of 5000 points at default rate
    Workstation CPU Load: 5%
    (All other columns: N/A)

1. FCP270/ZCP270 control station OM scan load percentage is 2% for 1000 points changing every
second and 0.03% for every scanner update message.
For OM scan loading for the FCP280, refer to the Field Control Processor 280 (FCP280) Sizing
Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS).

Table 2-8. Solaris Workstation System Planning Summary

Application: FoxView, 200-point display at default scan rate of 1 second
    Workstation CPU Load: 5.2%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 3 lists; 1 list for each 1-75 points
    OM Objects: 65 per FoxView
    Control Station OM Scan Load1: 0.4%
    OM Scanner Connections: 1 for each station sourcing data
    Control Station Idle Time: N/A

Application: FoxView, 200-point display at fast scan rate of 0.1 second
    Workstation CPU Load: 10%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 3 lists; 1 list for each 1-75 points
    OM Objects: 65 per FoxView
    Control Station OM Scan Load1: 4.0%
    OM Scanner Connections: 1 for each station sourcing data
    Control Station Idle Time: N/A

Application: Alarm Manager, 100 messages per second
    Workstation CPU Load: 5% plus base load 5%
    OM Server Connections: Refer to AMS User’s Guide (B0700AT)
    OM Local Lists: Refer to AMS User’s Guide (B0700AT)
    OM Objects: 10 per Alarm Manager
    Control Station OM Scan Load1: N/A
    OM Scanner Connections: Refer to AMS User’s Guide (B0700AT)
    Control Station Idle Time: Decreases 0.1% per alarm message generated

Application: AIM*Historian, data collection change rate of 1000 points/second
    Workstation CPU Load: 3%
    OM Server Connections: 1 per station sourcing data
    OM Local Lists: 4 lists; 1 list for each 1-255 points
    OM Objects: N/A
    Control Station OM Scan Load1: 1.7%
    OM Scanner Connections: 1 for each station sourcing data
    Control Station Idle Time: N/A

Application: AIM*Historian, data reduction of 1000 points reduced >= 15 minutes
    Workstation CPU Load: 1.5%
    (All other columns: N/A)

Application: AIM*Historian, data archiving of 5000 points at default rate
    Workstation CPU Load: 4%
    (All other columns: N/A)

1. FCP270/ZCP270 control station OM scan load percentage is 2% for 1000 points changing every second and 0.03% for every scanner update message.
For OM scan loading for the FCP280, refer to the Field Control Processor 280 (FCP280) Sizing
Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS).

Data Access to Control Core Services Objects


Applications can use the Object Manager (OM) API to access Foxboro Evo objects that can be
either OM Objects, Application Objects (AO), or Control and I/O (CIO) Objects. OM API
library functions for getting and setting Foxboro Evo objects are better suited for situations in
which you want a single transfer of data. These functions use IPC connectionless services and per-
form a message multicast (by default) or unicast (if the Object Manager Multicast Optimization
feature is enabled) operation if the Foxboro Evo object’s address is not provided by the application
(for example, om_getval) or if the Foxboro Evo object’s address had not been imported by the
OM during a previous data access (for example, getval with import option). Control Core Services multicasts are limited by the ATS to 40/second to Nodebus stations. Nodebus systems that have CP10s or other gateways based on the Comm15 architecture (such as FoxGuard or Allen-Bradley stations) must not exceed system multicast rates of 10/second. Systems with only the control network should not exceed 5,000 (5K) multicast messages per second, but can also be limited by control station CPU load.


NOTE
If you estimate that the number of multicast messages on your Nodebus and/or the
control network may exceed these specifications, we recommend that you enable the
Object Manager Multicast Optimization (OMMO) feature, which reduces the
number of multicast messages made and enables many of them (global find, get and
set, etc.) to be performed as unicast messages which have a much lower impact on
the networks. OMMO is available on stations with I/A Series software v8.6-v8.8 or
Control Core Services v9.0 or later. Refer to Object Manager Calls (B0193BC) for
more information on this feature.

If a Foxboro Evo object’s address is known, the OM API performs direct connectionless send mes-
sages to the station that sources the data. The maximum data access rates for CIO objects are gov-
erned by the access method (multicast versus direct send) and the control station load. Table 2-9
lists the maximum data access rates to CIO objects for a system with the control network on
which the stations with I/A Series software v8.6-v8.8 or Control Core Services v9.0 or later have
the OMMO feature disabled.

Table 2-9. Data Access to CIO Objects on CP270 With OMMO Feature Disabled

                      Multicasts (per second)                       Direct Sends (per second)
CP2701 Idle Time   Set    Set_Confirm    Get    Global Find (GF)    Set    Set_Confirm    Get
75%                50     50             50     50                  75     75             75
65%                50     50             50     50                  75     75             65
55%                50     50             50     50                  75     70             60
45%                50     50             50     50                  75     60             50
35%                50     40             45     50                  75     45             40
25%                50     35             30     40                  75     35             35
15%                50     25             25     25                  75     25             25

1. For OM scan loading for the FCP280, refer to the Field Control Processor 280 (FCP280) Sizing
Guidelines and Excel® Workbook (B0700FY).
For OM scan loading for the FDC280, refer to the Field Device Controller 280 (FDC280) Sizing
Guidelines and Excel Workbook (B0700GS).

Notes:
1. When using multicasts, the load on a single control station is the sum of all the get/set
operations performed by all the applications in the entire system because each station
has to process each message.
2. Sequence code generates get/set requests using the OM API. Refer to High Level Batch
Language (HLBL) User’s Guide (B0400DF) for sequence code guidelines.


3. System Sizing
These sections present sizing information for workstations, control stations, and I/O points. All data values presented in tables and worksheets have been rounded to one decimal place.

Workstations with Multiple CPU Cores Enabled


Table 3-1 shows the initial Workstation Specification (Windows and Solaris platforms) for the
Foxboro Evo control network with Control Core Services v9.1 or later, for workstations with mul-
tiple CPU cores enabled. The performance and sizing metrics are based on each workstation’s
specifications and thus each worksheet Workstation CPU Factor is 1.0.
Table 3-1. Windows Workstation with Multiple CPU Core Specification

Description                Value
System                     Microsoft Windows 7 Professional Service Pack 1
Computer                   Pentium® 4 CPU 3.2 GHz (H92);
                           Intel® Xeon® CPU E5-1603 0 @ 2.80 GHz (2.79 GHz);
                           8 GB of RAM
Hard Disk Drives           Windows (C:) 48.8 GB; IA (D:) 416 GB
Workstation CPU Factor     1.0

The CPU Load percentage varies significantly depending on platform type. For platforms with the multicore feature enabled, McAfee scan times are reduced by 40% and CPU utilization is reduced by 50%, resulting in improved responsiveness during these scans. Platforms with the multicore feature enabled also have their CPU utilization reduced by 25% during BESR backups, resulting in improved responsiveness when creating backups.

Workstation Summary Worksheet


This section describes a workstation summary worksheet that totals the CPU Load percentage for
the workstation based on the detailed worksheets that follow (for example, FoxView, AIM*Histo-
rian). The CPU Load percentage may not exceed 100%; otherwise, you have to kill the processes
of selected applications or transfer them to another workstation.
First, calculate the CPU Load percentage for each of the detailed worksheets that follow and then
enter their corresponding values into the summary worksheet as shown in Table 3-2. The Work-
station Summary Worksheet normalizes the CPU Load percentage based on the Workstation
CPU Factor for Control Core Services v9.1 or later. Entries for Base CPU Load and Reserved
CPU Load are fixed values and need to be entered in the Workstation Summary Worksheet based
on Windows platform guidelines.


NOTE
Values in the summary worksheet are based on the Windows 7 workstation exam-
ples from the worksheets that follow in this section. For example, the values for the
“Alarm Manager” entries in the summary worksheet (Table 3-2) are derived from
the Total CPU Load percentage of “1.57” for the Windows workstations in
Table 3-3.

Table 3-2. Workstation with Multiple CPU Cores Summary Worksheet Example

Description                                                        Value (%), Windows Workstations
1) Base Control Core Services/I/A Series CPU Load
   (Windows=1.0)                                                   1.0
2) Reserved CPU Load:
   Windows (multiple CPU core enabled)=25.0                        25.0
3) Alarm Manager                                                   1.57
4) FoxView                                                         5.01
5) AIM*Historian                                                   1.05
6) Other Applications1 (for example, TDR, Application
   Object Services, and so forth)                                  5.0
7) Total CPU Load % =
   Items 1 + 2 + ((Sum of Items 3-6)*CPU Factor)                   38.63

1. Refer to the reference document specific to the application.

Examples:
1. Total CPU Load % for a Windows 7 Workstation:
(1.0+25.0) + ((1)*(1.57+5.01+1.05+5.0)) = 38.63
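The summary-worksheet formula can be expressed as a small helper. This is a sketch only; the application load figures come from the detailed worksheets that follow:

```python
def total_cpu_load(base, reserved, app_loads, cpu_factor=1.0):
    """Workstation Summary Worksheet total:
    items 1 + 2 + ((sum of items 3-6) * CPU factor)."""
    return base + reserved + cpu_factor * sum(app_loads)

# Windows 7 multicore example from Table 3-2:
# Alarm Manager, FoxView, AIM*Historian, other applications
apps = [1.57, 5.01, 1.05, 5.0]
print(round(total_cpu_load(1.0, 25.0, apps), 2))  # -> 38.63
```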
Table 3-3 shows an example calculation for a workstation on the control network with 100
alarms/second and five Alarm Managers.
Table 3-3. Alarm Manager for Workstation with Multiple CPU Cores Worksheet Example

Description                                                        Value (%), Windows Workstations
1) Number of alarm messages per second from Aprint Services
   for all CPs
   Example: 100 alarms/second                                      100
2) Workstation CPU load % for every 100 alarms/second:
   Windows Formula = 1.57% * (number of alarms / 100)
   Windows Example: 1.57% * (100/100) = 1.57%                      1.57%
3) Total CPU Load percentage = item 2 above                        1.57%

NOTE
CPU load for Matching and Filtering is not considered for sizing calculations, as it is negligible.

NOTE
The Sustained Alarm Rate measures the time to process the alarm message traffic
and is independent of the AST refresh rate.

Table 3-4. FoxView Worksheet for Workstation with Multiple Cores Enabled Example

Description                                                        Value (%), Windows Workstations
1) Number of FoxViews only using displays with default
   configuration values at (Windows=1.56%) per FoxView
   Windows Example: 2 FoxViews = 2 * 1.56 = 3.12%                  3.12%
2) Number of FoxViews using any displays with Fast Scan
   Option at (Windows=1.89%) per FoxView
   Windows Example: 1 FoxView with Fast Scan =
   1 * 1.89 = 1.89%                                                1.89%
3) Total CPU Load % = sum of items 1 and 2 above                   5.01%

NOTE
Actual Total CPU Load percentage is the sum of all FoxView loads.

Table 3-5. AIM*Historian for Workstation with Multiple Cores Enabled Worksheet Example

Description                                                        Value (%), Windows Workstations
1) CPU Load for data collection change rate
   (Windows=1.05%) per 1000 points/second
   Refer to “Data Collection Rate Example”.                        1.05%
2) Total CPU Load % = item number 1.                               1.05%

NOTE
Data archiving and data reduction are not considered for sizing calculations as they
are not sustained loads.

Data Collection Rate Example


Compute the configured data collection rate:
♦ 1000 points at 0.5 seconds = 2000 points/second
♦ 1000 points at 1.0 seconds = 1000 points/second
♦ 1000 points at 10.0 seconds = 100 points/second
♦ Configured data collection rate = 1000 points/second.
Calculate the data collection change rate (All points changing):
♦ = (configured data collection rate) * (% of points changing)
♦ = (1000 points/second) * (1)
♦ = 1000 points/second.
Calculate the CPU Load for data collection change rate:
♦ CPU Load = (data collection change rate / 1000) * CPU Load for 1000/second
♦ = (1000/1000) * CPU Load % for 1000 pts/second
♦ Windows = 1 * 1.05 = 1.05%
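The three steps above can be sketched as follows, assuming the Windows multicore figure of 1.05% per 1000 changing points/second:

```python
def historian_collection_load(configured_rate, fraction_changing, load_per_1000=1.05):
    """AIM*Historian data-collection CPU load, per the steps above.

    configured_rate is the configured data collection rate in points/second;
    load_per_1000 defaults to the Windows multicore figure (1.05%).
    """
    change_rate = configured_rate * fraction_changing  # points/second actually changing
    return (change_rate / 1000) * load_per_1000

# 1000 points/second configured, all points changing:
print(historian_collection_load(1000, 1.0))  # -> 1.05 (%)
```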

Workstations with Single CPU Core Enabled


Table 3-6 shows the initial Workstation Specification tables (Windows and Solaris platforms) for
the Foxboro Evo control network with I/A Series software v8.3-v8.8 or Control Core Services
v9.0 or later, for workstations with a single CPU core enabled. The performance and sizing met-
rics are based on each workstation’s specifications and thus each worksheet Workstation CPU Fac-
tor is 1.0.
Table 3-6. Windows Workstation With Single CPU Core Specification

Description Value
System Microsoft Windows XP Professional Version 2002 Service Pack
2
Computer Pentium® 4 CPU 3.2 GHz (PW380, P92)
512 MB of RAM
Hard Disk Drives XP (C:) 15.6 GB
IA (D:) 217 GB
Workstation CPU Factor 1.0


NOTE
The legacy Windows workstations (for example, PW340, PW360, PW370) have a
Workstation CPU Factor of 1.5 based on performance and sizing specifications for
I/A Series software releases prior to v8.2.

Table 3-7. Solaris Workstation Specification

Description Value
System Solaris 10 Operating System (6/06 distribution)
Computer UltraSPARC IIIi® (Ultra 25® workstation, P82)
1.34 GHz
1 GB of RAM
Hard Disk Drives 160 GB SATA
Workstation CPU Factor 1.0

NOTE
The Workstation CPU Factors for each Solaris workstation that can be migrated
from V7.x to V8.3 software for the Solaris operating system are:
- P79 workstation, SunBlade 150 (550 MHz) = 2.5
- P80 workstation, SunBlade 2000 (900 MHz) = 1.5
- P81 workstation (silver model), SunBlade 1500/S (1.5 GHz) = 1.0
- P81 workstation (red model), SunBlade 1500/R (1.03 GHz) = 1.3

NOTE
The CPU Load percentage varies significantly depending on platform type.

Workstation Summary Worksheet


This section describes a workstation summary worksheet that totals the CPU Load percentage for
the workstation based on the detailed worksheets that follow (for example, FoxView, AIM*Histo-
rian). The CPU Load percentage may not exceed 100%; otherwise, you have to kill the processes
of selected applications or transfer them to another workstation.
First, calculate the CPU Load percentage for each of the detailed worksheets that follow and then
enter their corresponding values into the summary worksheet as shown in Table 3-8. The Work-
station Summary Worksheet normalizes the CPU Load percentage based on the Workstation
CPU Factor for pre-v8.3 stations migrated to I/A Series software v8.3-v8.8 or Control Core Ser-
vices v9.0 or later. Entries for Base CPU Load and Reserved CPU Load are fixed values and need
to be entered in the Workstation Summary Worksheet based on Windows and Solaris platform
guidelines.


NOTE
Values in the summary worksheet are based on the Windows and Solaris worksta-
tion examples from the worksheets that follow in this chapter. For example, the val-
ues for the “Alarm Manager” entries in the summary worksheet as shown in
Table 3-8 are derived from the Total CPU Load percentage of “4.5%” and “16.5%”
calculated for the Windows and Solaris workstations in Table 3-9, “Alarm Manager
for Workstation with Single CPU Core Worksheet Example” on page 45.

Table 3-8. Workstation with Single CPU Core Summary Worksheet Example

Description                                                  Value (%), Windows   Value (%), Solaris
1) Base Control Core Services/I/A Series CPU Load
   (Windows=1.0, Solaris=3.0)                                1.0                  3.0
2) Reserved CPU Load:
   Windows (single CPU core)=40.0, Solaris=40.0              40.0                 40.0
3) Alarm Manager                                             4.5                  16.5
4) FoxView                                                   8.0                  10.4
5) AIM*Historian                                             3.1                  4.7
6) Other Applications1 (for example, TDR, Application
   Object Services, and so forth)                            5.0                  5.0
7) Total CPU Load % =
   Items 1 + 2 + ((Sum of Items 3-6)*CPU Factor)             61.6                 79.6

1. Refer to the reference document specific to the application.

Examples:
1. Total CPU Load % for a Sun Blade 1500/R Workstation:
(3.0+40.0) + ((1.3)*(16.5+10.4+4.7+5.0)) = 90.6
2. Total CPU Load % for a Sun Blade 2000 Workstation:
(3.0+40.0) + ((1.5)*(16.5+10.4+4.7+5.0)) = 97.9
3. Total CPU Load % for a Sun Blade 150 Workstation:
(3.0+40.0) + ((2.5)*(16.5+10.4+4.7+5.0))= 134.5
This configuration exceeds CPU 100% capacity.
4. Total CPU Load % for a PW340 Workstation:
(1.0+40.0) + ((1.5)*(4.5+8.0+3.1+5.0)) = 71.9
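These examples follow the same summary formula, with the Workstation CPU Factor scaling the application loads. A sketch that reproduces them and flags over-capacity configurations:

```python
def total_cpu_load(base, reserved, app_loads, cpu_factor):
    """Summary worksheet total: items 1 + 2 + ((sum of items 3-6) * CPU factor)."""
    return base + reserved + cpu_factor * sum(app_loads)

# Solaris application loads: Alarm Manager, FoxView, AIM*Historian, other
solaris_apps = [16.5, 10.4, 4.7, 5.0]
for name, factor in [("Sun Blade 1500/R", 1.3),
                     ("Sun Blade 2000", 1.5),
                     ("Sun Blade 150", 2.5)]:
    load = total_cpu_load(3.0, 40.0, solaris_apps, factor)
    note = "  <-- exceeds 100% CPU capacity" if load > 100 else ""
    print(f"{name}: {round(load, 1)}%{note}")
```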


Table 3-9 shows an example calculation for a workstation on the control network with 100
alarms/second and five Alarm Managers.
Table 3-9. Alarm Manager for Workstation with Single CPU Core Worksheet Example

Description                                                  Value (%), Windows   Value (%), Solaris
1) Number of alarm messages per second from Aprint
   Services for all CPs
   Example: 100 alarms/second                                100                  100
2) Workstation CPU load % for every 100 alarms/second:
   Windows Formula = 2.5% * (number of alarms / 100)
   Solaris Formula = 5.0% * ((number of alarms / 100) + 1)
   Windows Example: 2.5% * (100/100) = 2.5%
   Solaris Example: 5.0% * ((100/100)+1) = 10.0%             2.5%                 10.0%
3) Base load for Number of Alarm Managers
   Windows Formula = Default Windows AST Table 3-10
   Lookup * (default refresh rate / actual refresh rate)
   Solaris Formula = Default Solaris AST Table 3-11
   Lookup * (default refresh rate / actual refresh rate)
   Windows Example, 5 AMs at default (3.0) AST refresh
   rate, 32K database: 1 * (3.0/3.0) = 1.0%
   Solaris Example, 5 AMs at default (3.0) AST refresh
   rate, 32K database: 4 * (3.0/3.0) = 4.0%                  1.0%                 4.0%
4) CPU Load for Matching and Filtering (per Alarm Manager)
   CPU Load % = (Matching and Filtering Load Coefficient
   [Windows=0.2%, Solaris=0.5%]) * (Number of
   Matches/Filters) * (Number of Alarm Managers)
   Windows Example: 5 matches for 1 Alarm Manager:
   CPU Load = 0.2% * 5 * 1 = 1.0%
   Solaris Example: 5 matches for 1 Alarm Manager:
   CPU Load = 0.5% * 5 * 1 = 2.5%                            1.0%                 2.5%
5) Total CPU Load % = sum of items 2, 3, and 4 above         4.5%                 16.5%

NOTE
The Sustained Alarm Rate measures the time to process the alarm message traffic
and is independent of the AST refresh rate.
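The worksheet rows above can be combined into one sketch. The AST lookup value comes from Table 3-10 or Table 3-11, and the platform coefficients are taken directly from the worksheet:

```python
def alarm_manager_load(alarms_per_s, ast_lookup_pct, num_matches, num_ams,
                       refresh_s=3.0, platform="windows"):
    """Alarm Manager CPU load for a single-CPU-core workstation,
    per items 2-4 of the worksheet above."""
    if platform == "windows":
        message_load = 2.5 * (alarms_per_s / 100)      # item 2
        match_coeff = 0.2                              # item 4 coefficient
    else:  # "solaris"
        message_load = 5.0 * (alarms_per_s / 100 + 1)
        match_coeff = 0.5
    base_load = ast_lookup_pct * (3.0 / refresh_s)     # item 3
    match_load = match_coeff * num_matches * num_ams   # item 4
    return message_load + base_load + match_load

# Worksheet examples: 100 alarms/s, 32K database (5-AM lookup), 5 matches on 1 AM
print(alarm_manager_load(100, 1.0, 5, 1, platform="windows"))  # -> 4.5
print(alarm_manager_load(100, 4.0, 5, 1, platform="solaris"))  # -> 16.5
```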


Table 3-10. Default AST Table for Number of Alarm Managers on a Windows-Based Workstation
with Single CPU Core (Local and Remote)

# Alarm AST Refresh CPU Load % CPU Load % CPU Load % CPU Load %
Managers Rate 1K Database 5K Database 10K Database 32K Database
1 3.0 seconds 0.2 0.2 0.2 0.4
5 3.0 seconds 0.2 0.2 0.4 1.0
10 3.0 seconds 0.2 0.4 0.8 1.7
15 3.0 seconds 0.2 0.7 1.0 2.2
20 3.0 seconds 0.2 0.7 1.0 3.0
25 3.0 seconds 0.2 0.8 1.5 3.5

Table 3-11. Default AST Table for Number of Alarm Managers on a Solaris-Based Workstation
(Local and Remote)

# Alarm AST Refresh CPU Load % CPU Load % CPU Load % CPU Load %
Managers Rate 1K Database 5K Database 10K Database 32K Database
1 3.0 seconds 0.9 0.9 0.9 0.9
5 3.0 seconds 4.0 4.0 4.0 4.0
10 3.0 seconds 8.0 8.0 8.0 8.0
15 3.0 seconds 12.0 12.0 12.0 12.0
20 3.0 seconds 16.0 16.0 16.0 16.0
25 3.0 seconds 20.0 20.0 20.0 20.0

Notes:
1. Each of the Number of Alarm Managers tables measures the time to process alarm
changes and is dependent on the AST refresh rate and independent of the sustained
alarm rate, as long as at least one alarm changes per refresh cycle.
2. CPU load is linear based on AST refresh rate. CPU Load formula is based on refresh
rate in table entry lookup.
Formulas:
Windows CPU Load percentage = (Default Windows AST Table 3-10 Lookup
Value for default AST 3.0 second refresh rate) * (default refresh rate/actual refresh
rate)
Solaris CPU Load percentage = (Default Solaris AST Table 3-11 Lookup Value
for default AST 3.0 second refresh rate) * (default refresh rate/actual refresh rate)
Example 1: 1 AM at default 3.0 second refresh rate for 32K database
Windows Workstation: 0.4 * (3.0/3.0) = 0.4%
Solaris Workstation: 0.9 * (3.0/3.0) = 0.9%
Example 2: 5 AMs at 0.5 second refresh rate for 32K database
Windows Workstation: 1.0 * (3.0/0.5) = 6.0%


Solaris Workstation: 4.0 * (3.0/0.5) = 24.0%


Example 3: 25 AMs at default 3.0 second refresh rate for 32K database
Windows Workstation: 3.5 * (3.0/3.0) = 3.5%
Solaris Workstation: 20.0 * (3.0/3.0) = 20.0%
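The linear refresh-rate scaling can be sketched with the 32K-database columns of Tables 3-10 and 3-11 serving as lookup tables:

```python
# 32K-database columns of Tables 3-10 (Windows) and 3-11 (Solaris) above
WINDOWS_AST_32K = {1: 0.4, 5: 1.0, 10: 1.7, 15: 2.2, 20: 3.0, 25: 3.5}
SOLARIS_AST_32K = {1: 0.9, 5: 4.0, 10: 8.0, 15: 12.0, 20: 16.0, 25: 20.0}

def ast_cpu_load(table, num_ams, refresh_s, default_refresh_s=3.0):
    """CPU load scales linearly with the AST refresh rate, per the formula above."""
    return table[num_ams] * (default_refresh_s / refresh_s)

print(ast_cpu_load(WINDOWS_AST_32K, 5, 0.5))   # Example 2, Windows -> 6.0
print(ast_cpu_load(SOLARIS_AST_32K, 5, 0.5))   # Example 2, Solaris -> 24.0
print(ast_cpu_load(SOLARIS_AST_32K, 25, 3.0))  # Example 3, Solaris -> 20.0
```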

Table 3-12. FoxView Worksheet for Workstation with Single Core Enabled Example

Description                                                  Value (%), Windows   Value (%), Solaris
1) Number of FoxViews only using displays with default
   configuration values at (Windows=2.0%, Solaris=5.2%)
   per FoxView
   Windows Example: 2 FoxViews = 2 * 2.0 = 4.0%
   Solaris Example: 2 FoxViews = 2 * 5.2 = 10.4%             4.0%                 10.4%
2) Number of FoxViews using any displays with Fast Scan
   Option at (Windows=4.0%, Solaris=10.0%) per FoxView
   Windows Example: 1 FoxView with Fast Scan =
   1 * 4.0 = 4.0%
   Solaris Example: 0 FoxViews with Fast Scan =
   0 * 10.0 = 0%                                             4.0%                 0%
3) Total CPU Load % = sum of items 1 and 2 above             8.0%                 10.4%

NOTE
Actual Total CPU Load percentage is the sum of all FoxView loads.

Table 3-13. AIM*Historian for Workstation with Single Core Enabled Worksheet Example

Description                                                  Value (%), Windows   Value (%), Solaris
1) CPU Load for data collection change rate
   (Windows=2.0%, Solaris=3.0%) per 1000 points/second
   Refer to “Data Collection Rate Example”.                  3.1%                 4.7%
2) CPU Load for data reduction
   (Windows=1.0%, Solaris=1.5%) per 1000 points,
   rate >= 15 minutes
   Windows Example: 3100 pts at a rate >= 15 minutes =
   3100/1000 * 1.0% = 3.1%
   Solaris Example: 3100 pts at a rate >= 15 minutes =
   3100/1000 * 1.5% = 4.7%                                   3.1%                 4.7%
3) CPU Load for data archiving
   (Windows=5%, Solaris=4%) per 5000 points at a
   default rate
   Windows Example: 3100 pts at a rate >= 15 minutes =
   3100/5000 * 5.0% = 3.1%
   Solaris Example: 3100 pts at a rate >= 15 minutes =
   3100/5000 * 4.0% = 2.5%                                   3.1%                 2.5%
4) Total CPU Load percentage = item number 1.                3.1%                 4.7%

NOTE
Items 2 and 3 are encapsulated in the
workstation reserve CPU load because they
are not sustained loads.

Data Collection Rate Example:


1. Compute configured data collection rate.
♦ 1000 points at 0.5 seconds = 2000 points/second
♦ 1000 points at 1.0 seconds = 1000 points/second
♦ 1000 points at 10.0 seconds = 100 points/second
♦ Configured data collection rate = 3100 points/second.
2. Calculate data collection change rate (½ the points changing).
♦ = (configured data collection rate) * (% of points changing)
♦ = (3100 points/second) * (0.5)
♦ = 1550 points/second.
3. Calculate CPU Load for data collection change rate.
♦ CPU Load = (data collection change rate / 1000) * CPU Load for
1000/second
♦ = (1550/1000) * CPU Load % for 1000 pts/second
♦ Windows = 1.55 * 2 = 3.1%
♦ Solaris = 1.55 * 3 = 4.7%.
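The three steps above can be sketched as one small function. A hedged illustration: the function and parameter names are illustrative, not product terminology, and the percent-changing value of 0.5 follows step 2's assumption.

```python
# Hedged sketch of steps 1-3 of the Data Collection Rate Example.
def collection_change_load(points_by_period, pct_changing, load_per_1000_pct):
    """points_by_period maps collection period (s) -> point count; returns CPU load %."""
    configured = sum(n / period for period, n in points_by_period.items())  # step 1
    change_rate = configured * pct_changing                                 # step 2
    return (change_rate / 1000.0) * load_per_1000_pct                       # step 3

cfg = {0.5: 1000, 1.0: 1000, 10.0: 1000}   # 3100 configured points/second
print(collection_change_load(cfg, 0.5, 2.0))            # Windows: 3.1
print(round(collection_change_load(cfg, 0.5, 3.0), 2))  # Solaris: 4.65 (rounded to 4.7 above)
```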

Control Processors
This section summarizes important information about resources and loading for the various
control processors. For detailed sizing guidelines, refer to the document and workbook specific to
your desired control processor:


♦ Field Control Processor 280 (FCP280) Sizing Guidelines and Excel Workbook
(B0700FY)
♦ FDC280 Sizing Guidelines and Excel Workbook (B0700GS)
♦ Field Control Processor 270 (FCP270) Sizing Guidelines and Excel Workbook
(B0700AV)
♦ Z-Module Control Processor 270 (ZCP270) Sizing Guidelines and Excel Workbook
(B0700AW)

NOTE
In the event of a conflict between the information provided in this manual and the
sizing guidelines for a CP/FDC280, the sizing guidelines for the CP/FDC280 have
precedence.

Maximum Loading Table


Foxboro does not recommend exceeding any of the following maximums (in any phase, where applicable).
The control station is considered to be fully loaded with respect to a parameter when that
upper limit is reached. Observing these limits helps ensure that adequate time remains for functions that are above
and beyond the routine processing load, such as checkpointing, alarm message handling, display
call-up, and so forth.

Table 3-14. Loading Summary

Station Block Field FDC280 FCP280 FCP270 ZCP270


Maximum % for “Fieldbus Scan” (% of BPC) N/A 60.0% 60.0% 60.0%
Maximum % for “Control Blocks” (% of BPC) 70.0% 70.0% 70.0% 70.0%
Maximum % for “Sequence Blocks” (% of BPC) 70.0% 70.0% 70.0% 70.0%
Maximum % for “Control Blocks” + “Sequence 70.0% 70.0% 70.0% 70.0%
Blocks” (% of BPC)
Maximum % for “Total Control Cycle”1 (% of BPC) 70.0% 90.0% 90.0% 90.0%
Maximum OM points per second that can be 18000 18000 12000 12000
scanned.
♦ Actual OM Scan Load percentage will vary based
on the number of points per OM list, but is
expected to be roughly 0 to 20+%.
♦ OM Scan Load has to be considered together
with other loads so the Station Idle Time thresh-
old in this table is not violated.


Minimum percentage for “Station Idle Time” (percentage of CPU Load) 30.0% 10.0% 10.0% 10.0%

NOTE
If Station Idle Time is less than 30%, it significantly affects the ability to
generate bursts of alarm messages, perform large-scale OM updates (such as a
100% value change), and run non-scheduled activity such as checkpoints,
self-hosting updates, and database uploads.

In the absence of the above non-scheduled activities, the Station Idle Time
can go as low as 10.0%.
1. The "Total Control Cycle" for the FDC280 consists of the sum of the % of BPC loads for “Control
Blocks”, “Sequence Blocks”, and “Foreign Device Data Load” (FDLOAD).
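A simple limit check against the Table 3-14 ceilings can be sketched as below. A hedged illustration: only the FCP280 column is encoded here, and the key and function names are illustrative, not part of any product tooling.

```python
# Hedged sketch: flag Station block loads that exceed the Table 3-14 ceilings.
FCP280_LIMITS = {
    "fieldbus_scan_pct": 60.0,          # % of BPC
    "control_blocks_pct": 70.0,         # % of BPC
    "sequence_blocks_pct": 70.0,        # % of BPC
    "control_plus_sequence_pct": 70.0,  # % of BPC
    "total_control_cycle_pct": 90.0,    # % of BPC
    "om_points_per_sec": 18000,
}
MIN_IDLE_PCT = 10.0  # absolute floor; 30% preferred per the note above

def over_limit(measured):
    """Return the names of any measured parameters beyond their ceiling."""
    return [name for name, ceiling in FCP280_LIMITS.items()
            if measured.get(name, 0) > ceiling]

print(over_limit({"control_blocks_pct": 72.0, "om_points_per_sec": 9000}))
# ['control_blocks_pct']
```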

Notes on Station Block display:


1. You can check the Control Loading for the last 10 block processing cycles (BPCs) by
selecting “Control Loading”, which also displays the number of control overruns.
2. You can check the OM Scanner Loading for the last 12 BPCs by selecting “OM
Scanner Loading”, which also displays the number of scanner overruns.

Table 3-15. Station Free Memory (Bytes)

Station Block Field FDC280 FCP280 FCP270 ZCP270


Minimum Free Memory Available 1 1 500,000 500,000
in Station
1.
OM list and control database restrictions help prevent the use of too much memory.
The control database is limited to 22 MB for the FDC280 and 15.75 MB for the
FCP280.

NOTE
Do not load the CP270 so that the “Total Free” memory available is less than the
number of bytes specified in Table 3-15.


NOTE
Verify that the largest contiguous area of memory available to applications
(MAXMEM in the Station block) does not go below 14000 bytes for FDC280s,
FCP280s, and CP270s, or errors may be detected in the Control HMI. If
MAXMEM goes below this amount, you can either reboot the control processor or
perform an Online Upgrade operation on the non-FDC280 control processor.

Table 3-16. Peer-to-Peer Data

Station Block Field FDC280 FCP280 FCP270 ZCP270


Total sink points 11250 11250 7500 7500
OM sink connections 30 30 30 30
OM source connections 200 200 100 100

Table 3-17. Resource Table

Hardware Resources | Blocks Per Second | Block Names | User Database Size | OM Scanner Capacity | OM Lists | Initialized Station Memory¹
FDC280 | 16,000 | 8262² | 22 MB | 18,000 | 75 | 24000 KB (will vary based on image version)
FCP280 | 16,000 | 8000 | 15.75 MB | 18,000 | 75 | 18700 KB
FCP270 | 10,000 | 4000 | 5000 KB | 12,000 | 50 | 5700 KB
ZCP270 | 10,000 | 4000 | 5000 KB | 12,000 | 50 | 5700 KB

1. This is the approximate maximum memory available. The actual value for an
initialized CP varies slightly depending on the specific station software revision and
configuration.
2. The FDC280 supports up to 256 field devices and up to 8000 I/O points. For exam-
ples of valid block count combinations, refer to Field Device Controller 280 (FDC280)
Sizing Guidelines and Excel Workbook (B0700GS).

NOTE
The “OM Scanner Capacity” is only reached if the remote OM lists opened on the
controller all contain an even multiple of 20 variables. The OM Scanner database is
organized into entries of up to 20 variables, where all the variables in a scanner entry
are for the same OM list. For example, if you open a remote list of 75 variables on a
controller, four (4) scanner entries are used, with variable counts of 20, 20, 20, and
15, with five (5) variables unused in the fourth scanner entry.
Workstations are an exception and contain up to 255 points per entry and therefore
have a one-to-one relationship between a list and scanner entry; that is, you cannot
run out of OM Scanner entries in a workstation.
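The entry arithmetic in the note above can be sketched as follows. A hedged illustration: each controller scanner entry holds up to 20 variables from a single OM list, and the helper names are illustrative.

```python
import math

# Hedged sketch of the OM Scanner entry arithmetic for controllers.
ENTRY_CAPACITY = 20  # variables per controller scanner entry

def scanner_entries(list_size):
    """Entries consumed on a controller by one remote OM list."""
    return math.ceil(list_size / ENTRY_CAPACITY)

def unused_slots(list_size):
    """Variable slots left empty in the last entry used by the list."""
    return scanner_entries(list_size) * ENTRY_CAPACITY - list_size

print(scanner_entries(75))   # 4 entries (20 + 20 + 20 + 15)
print(unused_slots(75))      # 5 unused slots in the fourth entry
```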


NOTE
For sizing the Foxboro Evo control network, refer to The Foxboro Evo Control Network
Architecture Guide (B0700AZ).

Inter-Network Traffic
I/A Series software v8.3-v8.8 or Control Core Services v9.0 and later supports inter-network
traffic between the control network and Nodebus network using ATSs. The preferred method of
migration is to replace all Nodebus LIs with ATSs in LI mode at one time. When using the
preferred method, you only need to help ensure that stations that migrate to the control network and
continue to communicate with stations on the Nodebus maintain their original Nodebus
communications limits.
If you perform a gradual migration using an ATS in Extender mode followed by ATSs in LI
mode, you have to size the LI traffic rates. The LI with the ATS in Extender mode can become a
bottleneck as each Nodebus migrates to the control network using an ATS in LI mode. Below is a
description of the gradual migration process, with the sizing calculations needed to help ensure
acceptable inter-network traffic rates. Figure 3-1 depicts a five-node Foxboro Evo system showing the
traffic rates between various LI modules. For example, Figure 3-1 shows a 75 packet per second
traffic rate between Node 4 and Node 5.
1. Determine traffic rates for all Nodebus LIs. Refer to Figure 3-1. Refer to “LI Traffic
Rates” on page 57 for information on computing LI traffic rates on the web.
2. Add a connection to the control network by adding an ATS in Extender mode to an LI
(consider using the LI with the lowest traffic rate). The LI will assume an additional load based on
the ATS traffic rate. Refer to Figure 3-2.
3. Determine the traffic rate for the ATS in Extender mode (traffic between the control
network and Nodebus stations). Compute the new traffic rate for the LI with the ATS
in Extender mode. The new traffic rate for the LI with the ATS in Extender mode =
LI rate + ATS rate to Nodebus stations that are not on the Nodebus that has the ATS
in Extender mode. Refer to Figure 3-2. You can optionally measure traffic rates using
LIPDUS30 shared variable - see Helpful Hint 960.
4. All remaining LIs can be replaced whenever you wish with ATSs in LI mode, as long
as their traffic rates can be added to the LI with the ATS in Extender mode and the LI
does not exceed the maximum recommended sustained traffic rate. Refer to
Figure 3-3. If two or more nodes have high traffic rates between them, migrate the
nodes at the same time. This will not increase the traffic rate through the LI with the
ATS in Extender mode because the traffic between them is routed through the ATSs
in LI mode on the control network. Refer to Figure 3-4.
5. When migrating a node using an ATS in LI mode causes the LI with the ATS in
Extender mode to exceed the maximum recommended sustained traffic rate, you have
to perform a total replacement using ATSs in LI mode (which includes converting the
ATS in Extender mode to LI mode). Refer to Figure 3-5.


NOTE
IP communications cannot transmit across both an ATS and a LAN Interface sta-
tion due to filtering implemented within the LI modules. There is an IP address
limit of 64 stations per node. If full IP communication support is needed, the net-
work migration plan need to be the preferred method of a replacement of all LAN
Interface modules.

Example – Gradual Migration


1. Determine traffic rates for all Nodebus LIs. Refer to Figure 3-1.
♦ LI1 – 100 packets/sec, LI2 – 50 packets/sec, LI3 – 75 packets/sec, LI4 – 100
packets/sec, LI5 – 75 packets/sec
2. Add connection to the control network by adding an ATS in Extender mode to LI
(consider using LI with lowest traffic rate). Refer to Figure 3-2.
♦ ATS in Extender mode is added to LI2, which has the lowest traffic rate.
3. Determine the traffic rate for the ATS in Extender mode (new traffic between stations
on the control network and Nodebus). Compute the new traffic rate for the LI with
the ATS in Extender mode. Compute LI traffic rates for all LIs that have control net-
work traffic. Refer to Figure 3-2.
♦ ATS in Extender mode traffic rate = 50 packets/sec (between stations on the con-
trol network and Nodebus 2)
♦ LI2 traffic rate = LI2 (Nodebus traffic) + ATS in Extender mode traffic rate =
50 (N1↔N2) + 50 (M↔N3) = 100 packets/second
♦ LI3 traffic rate = LI3 (Nodebus traffic rate) + LI control network traffic to
Nodebus 3 = 50 (N1↔N3) + 25 (N3↔N4) + 50 (M↔N3) = 125 packets/second
4. Migrate Nodebus 1 to the control network using an ATS in LI mode (ATS LI1). This
decreases the Nodebus traffic rate for the LI with the ATS in Extender mode by
transferring all traffic between the migrated Nodebus (Nodebus 1) and the Nodebus
with the ATS in Extender mode (Nodebus 2) to the ATS in Extender mode. However,
it does increase the traffic rate for the LI with the ATS in Extender mode by the
Nodebus traffic rates between the migrated Nodebus (Nodebus 1) and all LIs with no ATS
in Extender mode (N1↔N3, N1↔N4, N1↔N5). Refer to Figure 3-3.
♦ LI2 traffic rate = LI2 - LI1 (N1↔N2) + LI1 (N1↔N3) + LI1 (N1↔N4) +
LI1 (N1↔N5) = 100 - 50 (N1↔N2) + 50 (N1↔N3) + 0 (N1↔N4) +
0 (N1↔N5) = 100 packets/second
♦ ATS in Extender mode traffic rate = ATS in Extender mode + LI1 (Nodebus traf-
fic) = 50 + 100 = 150 packets/second
♦ ATS LI1 traffic rate = LI1 (Nodebus traffic) = 100 packets/second
5. Migrate Nodebus 4 and Nodebus 5 to the control network using ATSs in LI mode
(ATS LI4 and ATS LI5 respectively). Refer to Figure 3-4. Both nodes are migrated at
the same time because they have significant traffic between them, and you do not
want to impact LI2 with the ATS in Extender mode.
♦ LI2 traffic rate = LI2 + LI4 + LI5 = 100 + 25 (N3↔N4) + 0 = 125 packets/second.


NOTE
N4↔N5 traffic is routed through the control network with no impact on LI2.

♦ ATS in Extender mode traffic rate = ATS in Extender mode + LI4 (N3↔N4) =
150 + 25 = 175 packets/second
♦ ATS LI4 traffic rate = LI4 (Nodebus traffic) = 100 packets/second
♦ ATS LI5 traffic rate = LI5 (Nodebus traffic) = 75 packets/second
6. Migrate Nodebus 3 to the control network and change the ATS connected to Node-
bus 2 from Extender mode to LI mode (ATS LI2). Refer to Figure 3-5.
♦ ATS LI1 traffic rate = original LI1 Nodebus traffic rate = 100 packets/second
♦ ATS LI2 traffic rate = original LI2 Nodebus traffic rate = 50 packets/second
♦ ATS LI3 traffic rate = original LI3 Nodebus traffic rate + new control network to
Nodebus 3 traffic rate = 75 + 50 = 125 packets/second
♦ ATS LI4 traffic rate = original LI4 Nodebus traffic rate = 100 packets/second
♦ ATS LI5 traffic rate = original LI5 Nodebus traffic rate = 75 packets/second
The migration from all LI modules to all ATS modules is now complete.
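The step-by-step arithmetic above can be checked with a small model. A hedged sketch: the pairwise traffic values come from Figure 3-1 plus the 50 packet/second M↔N3 control-network load, "M" stands for the Foxboro Evo control network, and the function names are illustrative, not part of any product tooling.

```python
# Hedged sketch of the gradual-migration sizing arithmetic (steps 1-6 above).
NODES = {"N1", "N2", "N3", "N4", "N5"}
PAIRS = {("N1", "N2"): 50, ("N1", "N3"): 50, ("N3", "N4"): 25,
         ("N4", "N5"): 75, ("M", "N3"): 50}  # packets/second per station pair

def crossing(side_a, side_b):
    """Total traffic for pairs with one endpoint on each side."""
    return sum(rate for (a, b), rate in PAIRS.items()
               if (a in side_a and b in side_b) or (b in side_a and a in side_b))

def li_extender_load(host, migrated):
    """Load on the LI whose Nodebus hosts the ATS in Extender mode."""
    cn_side = set(migrated) | {"M"}          # control-network side
    remaining = NODES - cn_side - {host}     # Nodebuses still on the LAN
    return crossing(cn_side | {host}, remaining)

def ats_extender_load(migrated):
    """Load on the ATS in Extender mode itself."""
    cn_side = set(migrated) | {"M"}
    return crossing(cn_side, NODES - cn_side)

print(li_extender_load("N2", []))                  # step 3: 100
print(ats_extender_load(["N1"]))                   # step 4: 150
print(li_extender_load("N2", ["N1", "N4", "N5"]))  # step 5: 125
print(ats_extender_load(["N1", "N4", "N5"]))       # step 5: 175
```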

[Figure: five Nodebus nodes (N1-N5), each connected to the Carrierband LAN through its LI.
LI traffic breakdown (packets/second):
LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
LI2: 50 (N1↔N2) = 50 Total
LI3: 50 (N1↔N3) + 25 (N3↔N4) = 75 Total
LI4: 25 (N3↔N4) + 75 (N4↔N5) = 100 Total
LI5: 75 (N4↔N5) = 75 Total]

Figure 3-1. Original Nodebus Traffic Rates


[Figure: as Figure 3-1, with an ATS in Extender mode added between Nodebus 2 and
the Foxboro Evo Control Network (M). Traffic breakdown (packets/second):
LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
LI2: 50 (N1↔N2) + 50 (M↔N3) = 100 Total
LI3: 50 (N1↔N3) + 25 (N3↔N4) + 50 (M↔N3) = 125 Total
LI4: 25 (N3↔N4) + 75 (N4↔N5) = 100 Total
LI5: 75 (N4↔N5) = 75 Total
ATS (Extender mode): 50 (M↔N3) = 50 Total]

Figure 3-2. Adding an ATS in Extender Mode

[Figure: Nodebus 1 migrated to the control network via ATS LI1. Traffic breakdown
(packets/second):
LI2: 50 (M↔N3) + 50 (N1↔N3) = 100 Total
LI3: 50 (N1↔N3) + 25 (N3↔N4) + 50 (M↔N3) = 125 Total
LI4: 25 (N3↔N4) + 75 (N4↔N5) = 100 Total
LI5: 75 (N4↔N5) = 75 Total
ATS LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
ATS (Extender mode): 50 (N1↔N2) + 50 (M↔N3) + 50 (N1↔N3) = 150 Total]

Figure 3-3. Migrate Nodebus 1


[Figure: Nodebus 4 and Nodebus 5 also migrated via ATS LI4 and ATS LI5. Traffic
breakdown (packets/second):
LI2: 50 (M↔N3) + 50 (N1↔N3) + 25 (N3↔N4) = 125 Total
LI3: 50 (M↔N3) + 50 (N1↔N3) + 25 (N3↔N4) = 125 Total
ATS LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
ATS (Extender mode): 50 (N1↔N2) + 50 (M↔N3) + 50 (N1↔N3) + 25 (N3↔N4) = 175 Total
ATS LI4: 25 (N3↔N4) + 75 (N4↔N5) = 100 Total
ATS LI5: 75 (N4↔N5) = 75 Total]

Figure 3-4. Migrate Nodebus 4 and Nodebus 5

[Figure: all five Nodebuses connected to the Foxboro Evo Control Network (M)
through ATSs in LI mode. Traffic breakdown (packets/second):
ATS LI1: 50 (N1↔N2) + 50 (N1↔N3) = 100 Total
ATS LI2: 50 (N1↔N2) = 50 Total
ATS LI3: 50 (N1↔N3) + 25 (N3↔N4) + 50 (M↔N3) = 125 Total
ATS LI4: 25 (N3↔N4) + 75 (N4↔N5) = 100 Total
ATS LI5: 75 (N4↔N5) = 75 Total]

Figure 3-5. Final Migration


LI Traffic Rates
The procedure for computing LI traffic rates using the web is:
1. Go to the Global Customer Support web site (https://pasupport.schneider-electric.com).
2. Log in.
3. Select Support > Foxboro > Trouble Shooting Guides.
4. Select Tokenbus/Nodebus Troubleshooting Guide.
5. Select Next until the LAN Traffic Rates screen appears.
6. Also view Helpful Hint 960.


Appendix A. Upgrading SMONs on
I/A Series Software v8.4.x
and Earlier to v8.5-v8.8 or Control
Core Services v9.0 or Later
This appendix describes the Quick Fixes needed to upgrade System Monitors (SMONs) on
I/A Series software v8.4.x and earlier to I/A Series software v8.5-v8.8 or Control Core Services
v9.0 or later.

Overview
System Monitor (SMON) support on workstations with I/A Series software v8.4.x and earlier,
which supports up to 30 SMONs, can be upgraded to support up to 128 SMONs on Windows
stations with I/A Series software v8.5-v8.8 or Control Core Services v9.0 or later, plus up to 508
switches on the Foxboro Evo Control Network. The monitored domains may be a combination of
domains from the control network and/or the Nodebus. No other quantities or limits for system
sizing are changed as part of this capability. All other system sizing guidelines and constraints still apply.
If adding this capability to a system that is interconnected to the Nodebus, it is necessary to apply
corrections to the SMDH on the Nodebus side.
Configuration of 128 domains is done through the version of SysDef v2.9 distributed as part of
Quick Fix (QF) 1009574 or through the InFusion Engineering Environment (IEE) v1.2 (or
later).

System Definition v2.9


SysDef v2.9 or later provides the support for these SMON upgrades. SysDef v2.9 is provided in
QF 1009574 or through the InFusion Engineering Environment (IEE) v1.2 (or later).

NOTE
If the system is a combination of both the control network and Nodebus, it is
necessary to modify the smonlst.cfg files of the Nodebus stations to control which
domains are shown to the operator. QF 1009574 contains instructions and scripts
to help with the modifications.

Quick Fixes Needed


Several applications have to be upgraded to support the new SMON capability. The Quick Fixes
listed in Table A-1 are needed to upgrade each of these applications.
Quick Fixes can be downloaded from the Global Customer Support website:
https://pasupport.schneider-electric.com.


Table A-1. Quick Fixes for Upgrading SMON from I/A Series Software v8.4 or Earlier

Application Baseline Version Quick Fix


SMON I/A Series software v8.3 QF 1009879 V1.2
SMDH (the control net- I/A Series software v8.4 QF 1011794
work)
SMDH (Nodebus) I/A Series software v6.5 QF 1011797
I/A Series software v7.1 QF 1011798
System Definition 2.9 QF 1009574 Bld.3
(SysDef )
Alarm Management I/A Series software v8.2 QF 1009910
Subsystem (AMS)
HPSTK - QF 1011666
(Optional component)

Installation Sequence
Perform the steps in the given sequence:
1. Read all QF memos which come with the Quick Fixes listed in Table A-1.
2. If necessary, determine the layout of the Nodebus domains.
3. Using System Definition layout, create the Commit media for the system configura-
tion of all System Monitor domains, and SMON/SMDH configuration. For informa-
tion on SysDef, refer to System Definition: A Step-By-Step Procedure (B0193WQ).
4. Install the Commit media on all Windows stations with your preferred control
configurator:
♦ Foxboro Evo Control Editors - see Block Configurator User's Guide (B0750AH)
and Hardware Configuration User's Guide (B0750BB)
♦ IACC - I/A Series System Configuration Component (IACC) User's Guide
(B0400BP) for Windows XP or Windows Server 2003, or earlier Windows oper-
ating systems, or I/A Series Configuration Component (IACC) User's Guide
(B0700FE) for Windows 7 or Windows Server 2008 R2 Standard or later.
♦ ICC - Integrated Control Configurator (B0193AV).
5. If necessary, modify the smonlist.cfg file.
6. If necessary, install AMS QF 1009910.
7. Install SMON QF 1009879 V1.2 on Windows stations on the control network.
8. Install the appropriate SMDH Quick fixes (see Table A-1) on Windows stations in
the system being modified.


Known Detected Issues/Workarounds


The known detected issues with this upgrade and their workarounds are:
1. Nodebus Domain Display – This display may appear “grey”. Operators can double
click through the grey domain. The underlying station status display is available, and
change actions may be initiated.
Operators may also initiate Equipment Change Actions from the control network to
the Nodebus, assuming the workstations are properly configured to do so.
2. Nodebus NETWORK Display – This display may present one or more LAN stations
as grey. The underlying station status display is not available, and Equipment Change
Actions cannot be initiated.
Figure A-1 to Figure A-4 provide examples of these two detected issues. They were
obtained from a workstation running Windows XP with I/A Series software v7.1; however,
the results are equivalent on a Solaris station with I/A Series software v6.1.x.

Figure A-1. New Nodebus Test - System Monitors


The HOME display shows the first screen in SMDH with some grey SMONs.

Figure A-2. HOME Display

The SMON display shows the result of selecting a “grey” SMON.

NOTE
Verify that all picks are active.


Figure A-3. “Greyed” SMON Selected

The NETWORK display shows the result of selecting the NETWORK button.

NOTE
Some LANs are grey.


Figure A-4. Selecting NETWORK Button

The LAN display shows the results of selecting the grey LAN.

NOTE
No picks are available.

Appendix B. Site Planning
Worksheets
This appendix provides a series of blank worksheets for you to fill in when performing your site
planning of the Foxboro Evo Control Network.
Table B-1 to Table B-4 provide worksheets for workstations on the control network.
An example of this Table B-1 worksheet is provided in Table 3-8 “Workstation with Single CPU
Core Summary Worksheet Example” on page 44.

Table B-1. Workstation Summary Worksheet

Description | Value (%), Windows Workstations | Value (%), Solaris Workstations
1) Base Control Core Services/I/A Series CPU Load
(Windows=1.0, Solaris=3.0)
2) Reserved CPU Load:
Windows (single CPU core)=40.0
Windows (multiple CPU core enabled)=25.0
Solaris=40.0
3) Alarm Manager

4) FoxView

5) AIM*Historian

6) Other Applications1 (for example, TDR, Application


Object Services, and so forth)

7) Total CPU Load % =


Items 1 + 2 + ((Sum of Items 3-6)*CPU Factor)
1. Refer to the reference document specific to the application.
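The item 7 total formula in Table B-1 can be sketched as below. A hedged illustration: the application loads in the example call are placeholder values, not the Table 3-8 example figures, and the function name is illustrative.

```python
# Hedged sketch of the Table B-1 total: Items 1 + 2 + ((sum of Items 3-6) * CPU Factor).
def workstation_total(base, reserved, app_loads, cpu_factor=1.0):
    """Total workstation CPU load percentage."""
    return base + reserved + sum(app_loads) * cpu_factor

# Single-CPU-core Windows sketch: base 1.0, reserved 40.0, three app loads
print(round(workstation_total(1.0, 40.0, [3.5, 8.0, 3.1]), 1))   # 55.6
```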


An example of this Table B-2 worksheet is provided in Table 3-9 “Alarm Manager for Worksta-
tion with Single CPU Core Worksheet Example” on page 45.

Table B-2. Alarm Manager Worksheet

Description | Value (%), Windows Workstations | Value (%), Solaris Workstations
1) Number of alarm messages per second from Aprint
Services for all CPs
Example: 100 alarms/second
2) Workstation CPU load % for every 100 alarms/second:
Windows Formula = 2.5% * (number of alarms / 100)
Solaris Formula = 5.0% * ((number of alarms / 100) + 1)
Windows Example: 2.5% * (100/100) = 2.5%
Solaris Example: 5.0% * ((100/100)+1) = 10.0%
3) Base load for Number of Alarm Managers
Windows Formula = Default Windows AST Table 3-10 Lookup *
(default refresh rate / actual refresh rate)
Solaris Formula = Default Solaris AST Table 3-11 Lookup *
(default refresh rate / actual refresh rate)
Windows Example, 5 AMs at default (3.0) AST refresh rate
32K database: 1 * (3.0/3.0) = 1.0%
Solaris Example, 5 AMs at default (3.0) AST refresh rate
32K database: 4 * (3.0/3.0) = 4.0%
4) CPU Load for Matching and Filtering (per Alarm Manager)
CPU Load % = (Matching and Filtering Load Coefficient
[Windows=0.2%, Solaris=0.5%]) * (Number of Matches/Filters) *
(Number of Alarm Managers)
Windows Example: 5 matches for 1 Alarm Manager
CPU Load = 0.2% * 5 * 1 = 1.0%
Solaris Example: 5 matches for 1 Alarm Manager
CPU Load = 0.5% * 5 * 1 = 2.5%

5) Total CPU Load % = sum of items 2, 3, and 4 above


An example of this Table B-3 worksheet is provided in Table 3-12 “FoxView Worksheet for
Workstation with Single Core Enabled Example” on page 47.

Table B-3. FoxView Worksheet

Description | Value (%), Windows Workstations | Value (%), Solaris Workstations
1) Number of FoxViews only using displays with default
configuration values at (Windows=2.0%, Solaris=5.2%)
per FoxView
Windows Example: 2 FoxViews = 2 * 2.0 = 4.0%
Solaris Example: 2 FoxViews = 2 * 5.2 = 10.4%
2) Number of FoxViews using any displays with Fast Scan
Option at (Windows=4.0%, Solaris=10.0%) per FoxView
Windows Example: 1 FoxView with Fast Scan =
1 * 4.0 = 4.0%
Solaris Example: 0 FoxViews with Fast Scan =
0 * 10.0 = 0%

3) Total CPU Load % = sum of items 1 and 2 above


An example of this Table B-4 worksheet is provided in Table 3-13 “AIM*Historian for Worksta-
tion with Single Core Enabled Worksheet Example” on page 47.

Table B-4. AIM*Historian Worksheet

Description | Value (%), Windows Workstations | Value (%), Solaris Workstations
1) CPU Load for data collection change rate
(Windows=2.0%, Solaris=3.0%) per 1000 points/second
Refer to “Data Collection Rate Example” below.
2) CPU Load for data reduction
(Windows=1.0%, Solaris=1.5%) per 1000 points, rate >= 15 min-
utes
Windows Example: 3100 pts at a rate >= 15 minutes =
3100/1000 * 1.0% = 3.1%
Solaris Example: 3100 pts at a rate >= 15 minutes =
3100/1000 * 1.5% = 4.7%
3) CPU Load for data archiving
(Windows=5%, Solaris=4%) per 5000 points at a default rate
Windows Example: 3100 pts at a rate >= 15 minutes =
3100/5000 * 5.0% = 3.1%
Solaris Example: 3100 pts at a rate >= 15 minutes =
3100/5000 * 4.0% = 2.5%

4) Total CPU Load percentage = item number 1.

NOTE
Items 2 and 3 are encapsulated in the
workstation reserve CPU load because they
are not sustained loads.

Index
A Control stations xvi
Address Translation Station, see ATS alarming 19
AIM*Historian software xvi, 15 estimating number of control stations
CPU load 15, 16 required 24
disk load time 15 execution time 19
OM scan loading 17 load analysis 24
OM scanner connections 16, 17 maximum loading table 48
OM server connections 15 memory 19
OM_NUM_CONNECTIONS 16 OM scan load 19
OM_NUM_LOCAL_OPEN_LISTS OM scanner connections 19
17 OM server connections 19
RTP file size (ARCHSIZE) 17 planning 18
RTTIME 17 CP. See also Control stations
worksheet 41, 47, 68 CPU load
workstation summary worksheet 40, 44, AIM*Historian 15, 16
65 AOS 17
Alarm Manager software 14, 40, 44, 65 applications 18
worksheet 40, 45, 66 displays 13
Alarming in control stations 19 FoxView 11
Alarming software 14 printers 17
AO API xvi reserved 5
AO objects xvi worksheet calculations 39, 43
AOS software 17 workstation summary worksheet 40, 41,
CPU load 17 44, 45, 65, 66
number of objects 17 workstations 39, 43
workstation summary worksheet 40, 44,
65 D
Application Object Services. See also AOS DeviceNet 32
Applications DIN rail mounted FBMs 32
CPU load 18 Disk load time
customer 40, 44, 65 AIM*Historian software 15
performance meter 18 Displays
planning 17 CPU load 13
third-party 33 Distribution of control 20
third-party and customer 18 Documents for reference xiv
ARCHSIZE 17
AST, alarm server task xvi F
ATS xvi Fast scan option 12
FBMs
B DeviceNet 32
Block processing cycle (BPC) 24 DIN rail mounted (200 Series) 32
FDSI 32
C FOUNDATION Fieldbus 32
CMX_NUM_CONNECTIONS 8, 9 HART 32
Control distribution 20 legacy 32


Modbus 32 M
PROFIBUS 32 Maximum loading table 48
FCP270 xvi McAfee virus scanning software 6
FCP280 xvi Memory 50
FDSI xvi, 32 Modbus FBMs 32
Field device system integrator. See also
FDSI N
FOUNDATION Fieldbus FBMs 32 Network
FoxView software 11 bandwidth utilization 33
CPU load 11 planning 32
display guidelines and resource usage 12 Nodebus
OM scan load 12 sizing when communicating to the
scan rates 12 Foxboro Evo Control Network
worksheet 41, 47, 67 52
workstation summary worksheet 40, 44, Number of Alarm Managers worksheet 46
65
O
G Object Manager Multicast Optimization
GET_SET_TIMEOUT 8, 10 36
Object Manager. See also OM
H OM xvii
HART FBMs 32 API xvii
High speed draft mode 17 List xviii
Historian software 15 lists 7
number of connections 9, 12
I number of entries 9
I/O points 32 number of objects 9, 17
IMP_SAVE_PERIOD 8, 11 number of open lists 9, 13, 17
Inter-network traffic 52 number of processes that register for IPC
Inter-process communications. See also 10
IPC number of processes that register for IPC
IPC xvii
connectionless 10
connected services 10
number of remote lists 10
connectionless services 10
objects xviii
IPC connections xvii, 20, 24
IPC_NUM_CONN_PROCS 8, 10 OS configurable parameters 7
IPC_NUM_CONNLESS_PROCS 8, 10 scanner xviii
server xviii
L server connections 9
Legacy FBMs 32 services xviii
LI OM scan load 2, 20
traffic rates 57 AIM*Historian software 17
LI (LAN interface) xvii control stations 19
Loading FoxView software 12, 13
control stations 18, 24, 48 OM scanner connections
CPU 5 AIM*Historian software 16, 17
workstation 5 control stations 19
Loading summary (% of BPC) worksheet FoxView software 12
49 OM server connections
AIM*Historian software 15, 16
control stations 19


FoxView software 11, 12 R


OM_MULTICAST_OPTIMIZATION RAM
8, 11 increasing 18
OM_NUM_CONNECTIONS 8, 9 Reference documents xiv
AIM*Historian software 15, 16 Reserved CPU load 5
FoxView software 12 Resource table worksheet 51
OM_NUM_IMPORT_VARS 8, 9 RTTIME 17
OM_NUM_LOCAL_OPEN_LISTS 8, 9
AIM*Historian software 15, 17 S
FoxView software 12, 13 Scan rates
OM_NUM_OBJECTS 8, 9 default 12
AOS 17 fast scan option 12
FoxView software 12 show_params 8, 9, 10
OM_NUM_REMOTE_OPEN_LISTS 8, Sink peer-to-peer status worksheet 51
10 Sizing
OMMO 4 additional sizing for networks connected
OMMO_MULTICAST_DELAY 8, 11 via ATS
OMMO_UNICAST_DELAY 8, 11 control stations 48
OS configurable parameters 7 workstations 39, 42
default and maximum values 8, 9 SMDH xviii
Other applications Solaris operating system 43
workstation summary worksheet 40, 44, som 9, 10
65 Specifications 39, 42, 43
Spreadsheets
P accessing
Peer-to-peer connections xviii, 19, 24 from electronic documentation
Performance CD-ROM 3
increasing 18 Station block display 49
Phasing 24 Station free memory worksheet 50
Planning 5 Summary worksheet 39, 43
AIM*Historian software 15 System Management Display Handler. See
alarming software 14 also SMDH
AOS 17 System monitor (SMON) 6
applications 17 System planning 5
BPC 24 requirements 5
control distribution 20 System sizing 39
control stations 18
T
FoxView software 11
The Foxboro Evo Control Network 1, 32
I/O points 32 configuration and references 33
OM scan load 20 sizing when communicating to Nodebus
OS configurable parameters 7 52
phasing 24 workstation specifications 39, 42
printers 17 Third-party applications 33
System Monitor (SMON) 6
The Foxboro Evo Control Network 32 V
workstations 5 Virus scanning 6
Printers
planning 17 W
reserved CPU load 17 Windows operating system 1, 42
PROFIBUS FBMs 32 performance meter 18


Worksheets
AIM*Historian 41, 47, 68
Alarm Manager 40, 45, 66
FoxView 41, 47, 67
Loading Summary (% of BPC) 49
Number of Alarm Managers 46
Resource Table 51
Sink Peer-to-Peer Status 51
Station Free Memory 50
Workstation Summary 39, 43
Workstations xix
CPU factor 39, 42, 43
planning 5
sizing 39, 42
specifications 39, 42, 43
summary worksheet 39, 43

Z
ZCP270 xix


Schneider Electric Systems USA, Inc.
38 Neponset Avenue
Foxborough, MA 02035-2037
United States of America
www.schneider-electric.com

Global Customer Support


https://pasupport.schneider-electric.com
