Disclaimer
Information of a technical nature, and particulars of the product and its use, is given by AVEVA Solutions Ltd and its subsidiaries without warranty. AVEVA Solutions Ltd and its subsidiaries disclaim any and all warranties and conditions, expressed or implied, to the fullest extent permitted by law. Neither the author nor AVEVA Solutions Ltd, or any of its subsidiaries, shall be liable to any person or entity for any actions, claims, loss or damage arising from the use or possession of any information, particulars, or errors in this publication, or any incorrect use of the product, whatsoever.
Copyright
Copyright and all other intellectual property rights in this manual and the associated software, and every part of it (including source code, object code, any data contained in it, the manual and any other documentation supplied with it) belongs to AVEVA Solutions Ltd or its subsidiaries. All other rights are reserved to AVEVA Solutions Ltd and its subsidiaries. The information contained in this document is commercially sensitive, and shall not be copied, reproduced, stored in a retrieval system, or transmitted without the prior written permission of AVEVA Solutions Ltd. Where such permission is granted, it expressly requires that this Disclaimer and Copyright notice is prominently displayed at the beginning of every copy that is made. The manual and associated documentation may not be adapted, reproduced, or copied, in any material or electronic form, without the prior written permission of AVEVA Solutions Ltd. The user may also not reverse engineer, decompile, copy, or adapt the associated software. Neither the whole, nor part of the product described in this publication may be incorporated into any third-party software, product, machine, or system without the prior written permission of AVEVA Solutions Ltd, save as permitted by law. Any such unauthorised action is strictly prohibited, and may give rise to civil liabilities and criminal prosecution. The AVEVA products described in this guide are to be installed and operated strictly in accordance with the terms and conditions of the respective license agreements, and in accordance with the relevant User Documentation. Unauthorised or unlicensed use of the product is strictly prohibited.

First published September 2007

AVEVA Solutions Ltd, and its subsidiaries
AVEVA Solutions Ltd, High Cross, Madingley Road, Cambridge, CB3 0HB, United Kingdom
Trademarks
AVEVA and Tribon are registered trademarks of AVEVA Solutions Ltd or its subsidiaries. Unauthorised use of the AVEVA or Tribon trademarks is strictly forbidden. AVEVA product names are trademarks or registered trademarks of AVEVA Solutions Ltd or its subsidiaries, registered in the UK, Europe and other countries (worldwide). The copyright, trade mark rights, or other intellectual property rights in any other product, its name or logo belongs to its respective owner.
Contents

Audit Trail Dates and Counts
Cancelled Commands
Processing of Results and Messages
Transaction Success and Failure Messages
    Scheduled Updates - Successes
    Scheduled Update - Failures
    Failed File Copies
Reasons Other ADMIN Commands Can Fail
Automatic Merging and Purging of a Transaction Database
Recommendations for Reconfiguring (User dBs)
Copying Global Projects
Backing Up Global Projects
Using Extracts with Global Projects
    Using Extracts
        Extract Families
        Querying Extract Families
    Setting up an Extract Hierarchy
    Using DACs with Extracts
    Using Extracts in DESIGN
        Managing Extracts
        User Claims
        Extract Claims
        Command Syntax
        Extract Flush Commands Failing
        Relationship between Extract and User Claims
        How to Find Out What You Can Claim
        Flushing Changes
        Releasing Claims
        Issuing Changes
        Dropping Changes
        Refreshing an Extract
    Deleting a Database that owns Extracts
    Variant Extracts
    Reasons Claims and Flushes can Fail
Update Frequency
Timing of Updates
Checking Locations are Aligned
Change Primary - Repair Process
Risks of Aligning Databases Across Locations by File Copying
Flushing/Issuing
Transaction Database
Daemon Log File
admnew Files
Project Setup Guidelines (Appendix A)
Recovery from Reverse Propagation Errors (Appendix B)
    Background - Propagation Process
    Identifying the Problem
    Querying Database Properties
        Automating Checks For Failure
Using Global to Distribute Catalogue Data (Appendix C)
Example Macro for Collecting and Deleting Old Commands (Appendix D)
1 Introduction
This document proposes a set of guidelines for the effective use of the AVEVA Global product. The guidelines result from current working experience and may be amended in the light of future experience. Global manages a project distributed over several different geographical locations connected by a Wide Area Network (for example, the Internet), and so presents special situations for the administrator and engineering user, which these guidelines address.

Note: References to 'Windows' in this document mean MS Windows 2000 or MS Windows XP.

AVEVA Global can be used to enhance projects created in either the AVEVA Plant or AVEVA Marine group of products, referred to in this document as the base product.
Global Daemon
The Global daemon (sometimes referred to as the ADMIN daemon) is supplied with the Global product, in the default install folder. It uses RPC, which is part of the standard Windows software, so no additional software has to be installed. There must be one Global daemon running for each Project at a Location. Installing the Global daemon is described in the Global Installation Guide; configuring and starting the daemon is described in the Global User Guide.
4 Daemon Diagnostics
The Global daemon has two types of diagnostic output: tracing and logging.
4.1 Tracing
Tracing can be switched on when you start the daemon. If you are running the Global daemon as a service, add a line to the startup batch file singleds.bat to set the environment variable DEBUG_ADMIND as follows:
DEBUG_ADMIND=1023
If you are not using the Global daemon as a service, you can set DEBUG_ADMIND from the command line. The value of the DEBUG_ADMIND variable determines the types of activity that are traced:

0    = Not used
1    = Not used
2    = Trace
4    = Remote Procedure Calls
8    = Thread Library
16   = Systems DB Access
32   = Dabacon Thread
64   = Event Loop Thread
128  = Operation Thread
256  = Trans DB I/O
512  = Not used
1024 = Not used
2048 = Dabacon Detail
These values are bit settings, so if you want to trace a combination of activities, you add the above values together. For example, to trace Systems DB access and the Event Loop thread only, you would set DEBUG_ADMIND as follows:
DEBUG_ADMIND=80
To enable tracing for all activities, you would set the DEBUG_ADMIND value to 3071. A useful level of tracing for tracking commands is 896. Full tracing can be verbose and fill disk space rapidly; the recommended value of 896 allows the administrator to gain an idea of the current number of commands running through the system. This may help when bringing down a daemon at a particular location. Further tracing may be required when investigating a particular problem.
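As a hedged illustration (assuming the daemon service is started through the singleds.bat file mentioned above), the recommended tracing level would be set with a line such as:

set DEBUG_ADMIND=896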
4.2 Logging
It is beneficial to have the Daemon log setting activated, both for troubleshooting purposes and to help the System Administrator to know how the Global daemons are functioning. The diagnostics are activated by configuring the Global ADMIN comms log, from Daemon>Daemon Settings. This displays the Local Daemon Settings form. In the appropriate text boxes, enter the Diagnostic Logfile name and the Diagnostic Level (see below), and finally enable the Diagnostic Logging using the drop-down list.

Note: If you use an environment variable in the log file path, it must be defined in the daemon script or in the window from which you started the daemon.
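A minimal sketch of defining such an environment variable in the daemon start-up script (the variable name and path here are hypothetical, not part of the product):

set GLBLOG=C:\AVEVA\logs

The Diagnostic Logfile name on the form could then be entered as %GLBLOG%\daemon_log.txt.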
4.2.1 Diagnostic Level
The number to be entered is the sum of the code numbers for the individual requirements shown below:

0   = None
1   = Received summary
2   = Received detail
4   = Send summary
8   = Send detail
16  = Dabacon thread summary
32  = Dabacon thread detail
64  = Propagation thread summary
128 = Propagation thread detail
255 = All of the above
The log files can be sent to the administering location at regular intervals. The log file will get bigger over time. If you want to keep the log record but start a new file, move the log file to another directory. The daemon checks for the log file location every 15 minutes: it will keep writing to the moved log file until it checks the log file location and finds it has moved, at which point a new log file will be generated.

Note: Logging does not capture the same data as tracing; for full debugging purposes the trace facility provides much more comprehensive internal diagnostics.
5 Database Allocation

5.1 Allocating Databases to Locations
Before allocating databases, ensure that both daemons are running by selecting Query>Global States>Communications or by issuing a Ping command. ALLOCATE commands can be given in sequence without waiting for the first allocate to finish. However, the same Allocate command should not be done twice, unless you are sure that the allocation has failed and that there is no entry in the transaction databases at either of the locations affected by the allocation. When databases have been allocated to a location, you cannot add databases to MDBs until all allocations have been completed, so it is advisable to check the progress of allocations first. It is advisable to use macro input for long lists of database allocations; for example:
ALLOCATE ...
ALLOCATE ...
and so on for each allocation. This will most likely be the case when the Global project is initially created. Note: Once all allocations have been committed, it is worth checking that all commands are complete, whether the command has been executed through the GUI, or as a manual command. This is described in the next section. If a de-allocation is in progress (see the DEALLOCATION command), then the allocation will stall until the de-allocation is complete before commencing.
5.2
A Get Work must be done prior to listing the DBALL (that is, you must carry on doing a Get Work to see when the databases have been allocated). Allocation is successful when the DBALL list contains all of the databases allocated.

getwork
/Satellite      Navigate to the location (LOC element)
1               Go to the first member, i.e. the DBALL
q mem           List the members, i.e. the allocated databases
5.3 De-allocating Databases
The same principles as described above for allocating databases apply to de-allocating databases. If users are reading a database at a satellite location and the hub de-allocates it at that location while it is being read, then the de-allocated database(s) will not immediately be deleted from the satellite location. The command will be stalled in the transaction database and, once all users at the location exit their session, the database(s) will be de-allocated and the database files deleted.

Note: Only secondary databases can be de-allocated. If a database is primary at a satellite, first make it secondary, then de-allocate it. If you change a database from primary to secondary while a user is reading/writing to it, the user will be able to write to the database until that user changes modules. A database does not need to be primary at the Hub, as long as it is not primary at the location where it is being de-allocated.
5.4
5.4.1 admnew Files
.admnew files are created when the whole database needs to be propagated. This may occur:

- Whenever a database is allocated to a location. The database is copied from the hub to the new location by the Global daemon. As the file is copied over the network connection, a file named prjnnnn.admnew is created. Once copying is complete, this file is renamed automatically to prjnnnn. For example, abc0001.admnew is created while copying, and it is renamed to abc0001 once copying is complete.
- Whenever a primary database is merged. The next update will force the entire database to be propagated.
- If the RECOVER command is used to recover a database from a specified location.

.admnew files are managed by the system, and there should be no reason to delete them except in extreme cases. If the daemon is running continuously and a Copy dB operation fails, the system should tidy up. Normally .admnew files are retained for later use. However, if the daemon dies during the process (such as in a power cut), this may result in an invalid .admnew file. In this case the file should be removed from the operating system to avoid possible problems on a repeat operation.

In the case of a copy after a database has been merged, it may not be possible for the .admnew file to be renamed immediately. This will happen:

- If there are READERS of the database - users are accessing an MDB which contains the database (even if they are only in MONITOR). In this case Global will not attempt to rename the .admnew file until all such users have exited or switched to an MDB which does not include the database. Once all such users have exited, the copy will normally succeed.
- If the database is locked by a dead user - a session for a user which has been expunged. In this case Global attempts to rename the .admnew file, but it fails.

To resolve the second situation, you must do one of the following:

- either ensure that the sessions for all dead users have been killed, and also ensure that no foreign projects are reading this database; or
- use the NET FILE command in a cmd window (or a suitable third-party tool) to identify network access to the file, and close it.

If the project is not used as a foreign project, there is an additional alternative. The Overwrite DB Users flag - the LCPOVW attribute of the LOC element - controls whether a locked file may be overwritten at a location. If this attribute is set TRUE and there are no database READERS in the project, then Global will overwrite the locked file with the .admnew file.

Note: This should not be done if other projects include this database as a foreign project, since these are valid READERS that are not recorded in the session data for the Global project.

Overwriting of locked databases may be enabled by using the MODIFY dialogue for the location on the Admin Elements form, or by setting the Overwrite DB Users flag (LCPOVW) to TRUE for the appropriate LOC element on the command line. See also Database File Locks.
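A minimal command-line sketch of enabling overwriting for a location (the location name /London is hypothetical; the attribute-setting pattern follows the other command-line examples in this document):

/London
LCPOVW TRUE
SAVEWORK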
6 Merging Databases
When setting up a project in a Global environment, you are likely to create many sessions in the Global database. This is because, when ADMIN issues a Daemon command, it first does a SAVEWORK to give the Daemon an up-to-date view of the Global database. The Daemon may also add sessions to the Global database. We therefore recommend that you merge changes for the Global database, and possibly the system database, after setting up a Global project. This should also be done after making significant changes to the project setup.
6.2
Use CHANGE PRIMARY to return the child extracts to their original primary locations. Optionally, the databases could be copied (by ftp or similar) to all secondary locations manually after the MERGE (and before the second set of CHANGE PRIMARY commands). This avoids the need for the next Update to copy the entire file. Normally merging would be carried out on the entire extract hierarchy at the project Hub. However, if an extract database owns working extracts, it must be merged at its original primary location, since the working extract files only exist at that location.
7.2
7.2.1 Program Initialisation
The program reads out of the database all input commands not in a final state (processed, timed out, cancelled) and all the owned operations and output commands and starts
progressing these commands. Only unfinished commands will be read. All others will be ignored and not validated for errors. If there are any errors found in reading the database, the daemon will not start. It will then be necessary to provide a (probably) empty database so that the daemon will start from fresh and not progress any previously running commands.
7.3
7.3.1
failure can terminate the TRINCO. Its TRPASS will be set to FALSE, its state will be Complete or a later state, and it may own a TRMLST/TRMESS and perhaps a TRFAIL, but no TROPERs or TROUCOs.

Input commands can be given a delayed start time (EXTIME) after which operations will be generated. The command will wait in the Waiting state until this time has passed. This stay of execution will persist until EXTIME has expired, even if this is a longer period than the Time out. The TRINCO stays in Ready state for as long as all its operation and output commands take to complete. Once the TRINCO has been set to Ready, the command cannot time out until all operations have also timed out.

When all member operations and output commands have completed, INCSTA is set to Complete. All failures and successes generated by them are collected together and handed on to the sending TROUCO (which stores them). The success state of the command (TRPASS) is set to true if all operations have succeeded. INCSTA is now at Replied. Once a reply acknowledgement has been received back from the previous location, INCSTA is set to Processed and no more actions will take place.

There are other terminating conditions of a TRINCO: Timed Out means that the command did not manage to start before either its end time was reached or the number of retries allowed was exceeded; it will not own any TROUCOs or TROPERs. The state is set to Cancelled if the command is cancelled before any significant action took place. Owned TROUCOs and TROPERs may be set to Cancelled if they have not yet started work; subsequent operations that depend on them will be set to Redundant.
7.3.2
has been created to store the command. This is stored in the TROUCO's CMREF attribute. For remote locations this will usually be an unknown reference, since the specific transaction database is not visible. It can be used to track the command down the chain of locations if the administrator can see all the databases. When a reply is received, OUTSTA becomes Replied. Any reply data is stored under TRFLST and TRSLST elements and the TRPASS attribute, and OUTSTA goes to Processed. TROUCOs can terminate by timing out if they fail to send in the lifetime prescribed (Timed Out). They may never be sent if dependencies are not met, in which case they terminate as Redundant.
7.4 Audit Trail Dates and Counts
TRINCO:
  RECEIVED       DATECR   Date command received from user or other location and created
  ACKNOWLEDGED   DATEAK   Date acknowledgement for command sent
                 NACKN    Number of times acknowledgement sent
  READY          DATERD   Date command made ready (after EXTIME has been reached)
  COMPLETE       DATECM   Date command completed
  REPLIED        DATERP   Date reply sent with results of command
                 NREPLY   Number of times reply sent
  TIMEDOUT       DATEND   Date command timed out. No ops created
  CANCELLED      DATEND   Date command cancelled by user
  PROCESSED      DATEND   Date all processing of command finished, including reply acknowledgement of command received
                 NREPAK   Number of times reply acknowledgement received

TROUCO:
  WAIT           DATECR   Date command created by owning TRINCO or previous TROPER or TROUCO
  READY          DATERD   Date command made ready to be sent when dependencies satisfied
  SENT           DATESN   Date command sent to target location
                 NRETRY   Number of times command sent and stalled
  ACKNOWLEDGED   DATEAK   Date command acknowledgement received
                 NACKN    Number of times acknowledgement received
  REPLIED        DATERP   Date reply received with results of command
                 NREPLY   Number of times reply received
  COMPLETE       DATERK   Date command completed and reply acknowledgement sent
                 NREPAK   Number of reply acknowledgments sent
  TIMEDOUT       DATEND   Date command timed out. Could not be sent
  CANCELLED      DATEND   Date command cancelled by owning TRINCO
  REDUNDANT      DATEND   Date command discovered to be redundant
  PROCESSED      DATEND   Date all processing of command finished, including post operations generated

TROPER:
  WAIT           DATECR   Date operation created by owning TRINCO or previous TROPER or TROUCO
  READY          DATERD   Date operation set ready when all dependencies satisfied
  RUNNING        DATERN   Date operation started running
                 NRETRY   Number of times operation was set running
  STALLED        DATESL   Date operation stalled
  COMPLETE       DATECM   Date operation completed
  TIMEDOUT       DATEND   Date operation timed out. Could not be run
  CANCELLED      DATEND   Date operation cancelled by owning TRINCO
  REDUNDANT      DATEND   Date operation discovered to be redundant
  PROCESSED      DATEND   Date all processing of operation finished, including post operations generated
7.5 Cancelled Commands
Commands can be cancelled at the location where they were first input. There are rules as to what a particular user may cancel, but this section describes what happens in the daemon once a cancel command has been passed to it. Cancellation only applies to TRINCOs and not to any particular operation it has. The cancellation is immediately effected if the TRINCO has INCSTA of state Waiting or Stalled. If the TRINCO is Ready then all of its operations and output commands are inspected. If these TROPERs and TROUCOs are all Waiting, Ready or Stalled then those in Ready or Stalled state are set to cancelled and the waiting ones become Redundant. And the TRINCO becomes Complete and then Cancelled. TRINCOs in other, later states are not cancellable and the cancellation is rejected. A Message is stored with the command as to whether the cancellation was effected, or rejected.
7.6 Processing of Results and Messages
Messages are not stored in the database except under the TRINCO that was originally received from a user (not another location's TROUCO) and under the TROPERs and TROUCOs that the TRINCO owns. This is because messages are collected together regularly by each TRINCO as its operations progress, and these are passed back to the TRINCO's originating TROUCO. If the sender was the user, then the messages are stored in the database for review under the relevant element. In particular, when these messages are passed between sites, the TROUCO receives a set of messages each of which may have been generated by different operations, and yet they will now all belong to the single TROUCO. The messages contain sufficient attribute information to indicate the location that the message originated from, the operation type, etc. When the messages are finally stored below the originating command, successes and failures are persisted as TRSUCC and TRFAIL elements under a TRMLST element. This distinguishes them from the result successes and fails that are persisted when the operation or output command finally completes. The diagram on page … describes the elements created for a simple claim command between two locations. It provides an idea of the elements created in both transaction DBs.
7.7 Transaction Success and Failure Messages

7.7.1 Scheduled Updates - Successes
In this case, all successful database updates report no data to send since the database was up to date. This is reflected in the summary, which reports the number of successful Copies and Updates. Note that the success for the Global db is also reported as database =0/0.

A scheduled update normally only sends the latest sessions for a database - this is an Update. However, if the database has been merged or had another non-additive change (reconfigure, backtrack), then the entire database file must be copied. Database copies are always executed at the destination (the location to which the file must be copied). The file is copied from the remote location to a temporary file with the suffix .admnew and then committed. The database copy cannot be committed in the following circumstances:

- There are users in the database (recorded in the Comms db)
- There are dead users (file is locked) and Overwriting is disabled (see below)

If the commit fails, the .admnew file will be retained. The next copy attempt will test this file against the remote original to see whether the remote copy stage must be repeated.
In the case of updates, the number of sessions and pages sent is also reported in the success for each database as well as cumulated in the update summary. In the case of copies, the number of pages sent will only be reported if the copy is executed locally. For DRAFT databases, the number of picture-files sent is also reported. The update summary also reports on the number of other data files transferred (see also success for Exchange of other data). Note that this will always report a success even if there is nothing to transfer or Other data transfer is not set up.
7.7.2 Scheduled Update - Failures
In this case, the databases could not be propagated, since the secondary database had a higher compaction number than the primary database. This may happen when a remote merge is executed without stopping scheduled updates. Normally it will be necessary to recover the database to resolve this error.

Prevention of Reverse propagation may also be reported in the following situation: a satellite has executed a direct update (UPDATE DIRECT from the command line) with a non-neighbour satellite. The next scheduled update with the intermediate location will report Prevented reverse propagation. In this case, scheduled updates will eventually resolve the situation.

The following list summarises the Failure messages that can be generated for Scheduled updates, giving each symptom followed by its reason. It does not include all possible failures that may be generated from failed file copies.

- Scheduled update was suppressed: attribute LNOUPD set TRUE on LCOMD to disable the scheduled update.
- Update will not report results to CAM: the daemon for CAM is not available; this failure cannot be reported at CAM - usually due to the location being unavailable.
- Prevented reverse copying: the secondary location has a higher compaction number than the primary location. The database may need recovering.
- Prevented reverse copying: the secondary location has a higher session number than the primary location. The database may need recovering.
- Unable to check update direction - update skipped: the Global database is in use. This is normally temporary, due to another command.
- Update skipped - cannot get local details for <database>: the specified database is in use at the current location. This is normally temporary, due to another command using the database.
- Update skipped - cannot get remote details for <database>: the specified database is in use at the remote location. This is normally temporary.
- Update skipped - cannot get local/remote details for CAM system DB: in the case of system databases, if one system db is in use, then the update will fail for any system db (they all have the same DB number).
- Failed database copy - file may be in use at HUB: unspecified COPY failure - compaction numbers are still out of step. If the copy destination was the update location, then additional failures will give further detail.
- Cannot verify success - may be failed COPY: unspecified COPY failure - compaction numbers are still out of step. No further detail is available.
- Missing remote/local file. Prevented reverse propagation: system databases only. A system database file is missing at the specified location. This may need to be recovered.
- Update failure - possibly database error: a database error was encountered during the update. Full detail will be in the daemon log.
- Update failure - database pages are not contiguous: the database file is corrupt at the destination. This database must be recovered from its primary location.
- Failed database copy. File in use. Cannot remove: the database file is locked and overwriting is disabled. The file copy has failed to commit the .admnew file. The .admnew file will be retained for later use.
- Failed database copy - update for previous extract failed: prevention of an inconsistent extract hierarchy. A file copy for an extract db has not been attempted because of an update failure on another extract of the same database. (Not fully working at Global 2.4)

The error numbers associated with these failures include 610, 611, 612, 613, 614, 615, 619, 628 and 630.
7.7.3 Failed File Copies
In this example, the database still had readers, so the copy could not be completed. An additional failure reports that 18 pages have been copied from the remote location. The next retry validates the .admnew file, but still cannot commit it due to readers. A further retry validates the .admnew file again and attempts to commit it. In this case there are no readers, but the file is locked.
In this case, the SYNCHRONISE command eventually succeeded, since Overwriting was enabled. Note that the Successful file copy success reports that nothing has been copied, since the remote copy stage was executed successfully on an earlier try, when the copy failed. Detailed failures for file copies can only be reported at the destination. During a scheduled update, the success of a copy is verified by checking that the compaction number has changed. If the copy was executed at the location which executes the scheduled update, then additional failures may show more detail. (Note this is the partner location for a scheduled update, not the originator!)
7.8 Reasons Other ADMIN Commands Can Fail
Action                       Cause of Failure
Commit Allocate Primary DB   Changes DBALL members, owned by LOC
Initialise                   Changes LOC element
Set Systemloc                Changes LOC element
Set Primary                  Changes DBLOC element, owned by DB
Remove all DBs from MDBs     Changes MDB elements at satellite
Remove from MDB              Changes MDB elements at satellite
Delete from MDB              Changes MDB elements at satellite
Change Hub                   Changes GLOCWL /*GL
Recover Hub                  Changes GLOCWL /*GL
Unlock DB allocation         Changes DBLOC element, owned by DB
Unlock All db allocation     Changes DBALL element, owned by LOC
Refer to Extract Flush Commands Failing and Reasons Claims and Flushes can Fail for non-ADMIN command failures.
7.9 Automatic Merging and Purging of a Transaction Database
8 Pending File
On a Global network, most remote commands that are stalled for any reason at a location are placed in the transaction database at that location for later processing (see next chapter). A small number of commands that cannot be carried out at once, known as kernel commands, are instead stored in a location's pending file for later processing. There are various situations where kernel commands may be added to a pending file. For example:

- Too many commands have been issued in quick succession.
- A communication link is down.

The kernel commands are:

ISOLATION TRUE/FALSE
LOCK/UNLOCK
PREVOWNER HUB
ALLOCATE (PRIMARY)
CHANGE PRIMARY

All other commands use the transaction database to achieve a similar effect (see next chapter).
Once a pending file has been created at a location, it will continue to exist. When the kernel commands stored in it have been executed, they will be deleted from the file. You can tell if there are any outstanding commands by the size of the file: if it is empty, it will be zero size. You can read the contents of the pending file using a utility available from AVEVA. The pending file is named pending, and it will be saved in the project directory (for example, abc000). It can be read using the glbpend.exe utility provided in the Global install folder. For example, if the pending file is C:\AVEVA\projects\abc000\pending, the command to read it is:
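A hedged sketch of the likely form (the exact arguments should be checked against the utility's own usage output):

glbpend.exe C:\AVEVA\projects\abc000\pending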
9
9.1
You should make sure that the change of Hub location is complete before working with either the new or old Hub. Check the following attribute to confirm that the hub change has been successful. For example, if you are changing the Hub from London to Tokyo, then navigate to the location world /*GL and query the Hubrf attribute:
/*GL q hubrf
The Hubrf should be set to the name of the new Hub location; in this example, /Tokyo. You will also see that the location parent attribute of each location (locrf) has changed. This is a secondary effect, because the Hub location can have no parent. In the above example, navigate to the location of the old Hub and query the Locrf attribute:
/London q locrf
The Locrf should be set to the name of the new Hub location; in this example, /Tokyo. (Previously, London, as the old Hub, had no parent location.)
Now, navigate to the location of the new Hub and query its Locrf; for example:
/Tokyo q locrf
If the Locrf of Tokyo is set to Nulref, then the hub change has been successful. The new hub, Tokyo, has no parent location.
9.2 PREVOWNER HUB
Re-enter the ADMIN module. This will restore the Hub location and the Hub GUI. (Note: if daemons are running, then the original Hub location command may still be in progress and will attempt to commit the hub change or recover the original hub as appropriate.) Make sure that the PREVOWNER command is complete before working with either the new or old Hub, as otherwise it is possible to end up with two Hubs. If this happens, the Global database must be propagated (or physically copied) from the new Hub to the old before further administration is carried out. If the new Hub were to merge changes while the old Hub was still active, the system would not be able to recover. It would be necessary to reinstate the Global database from the backup taken before the change of Hub location was undertaken.
10

10.1 Synchronisation
Synchronisation can be carried out at both Hub and Satellite locations. This process can be used to synchronise databases at one location with the corresponding databases at a different location. This is a one-way process: project data is only received.
10.2 Manual Updates
Manual updates can also be carried out at both Hub and Satellite locations. This is a two-way process that can take place between neighbouring Locations. Data will be both sent and received from the location initiating the update, according to which Location has the most up-to-date version of the database. If update is used between two locations which are not neighbours, then Global will attempt to synchronise the database at the two locations as follows:

- If the sending location is the primary location, it will update the database at each location along the network path to the destination;
- If the receiving location is the primary location, it will execute a SYNCHRONISE command to request an update from the primary location;
- If the primary location lies between the two locations, then it will synchronise the database at the sending location with the primary location and update the database from the primary location to the destination location.
It is also possible to do a direct update between two non-neighbour locations using the UPDATE DIRECT command. However this is not recommended, since it can result in Reverse propagation errors from scheduled updates. This happens because UPDATE DIRECT results in the database being more recent at the secondary destination of the update than at the intermediate satellite through which scheduled updates are routed.
To learn more about Reverse Propagation errors, see Recovery from Reverse Propagation Errors.
10.4
If this is done it is possible to regenerate all Picture and Neutral Format Files at the satellite, even though the Database is secondary. For Picture and Neutral Format Files to be successfully propagated the environment variables %ABCPIC% and %ABCDIA% must be set in the Daemon kick-off script. Final Designer, Schematics and Marine Drawings files are always propagated, even if Picture/Neutral Format File Propagation is disabled.
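A hedged sketch of setting these variables in the Daemon kick-off script, assuming a project code ABC and hypothetical folder paths:

set ABCPIC=C:\AVEVA\projects\abc000\abcpic
set ABCDIA=C:\AVEVA\projects\abc000\abcdia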
10.5
For these files to be successfully propagated the following environment variables must be set in the Daemon kick-off script:

File type                                    Environment variable
Final Designer Drawing                       {ABC}DWG
Marine Hull Drawing objects (SDB files)      {ABC}DRG
Schematic Diagrams                           {ABC}DIA
Stencil                                      {ABC}STE
Template                                     {ABC}TPL
The {ABC}DIA folder can contain Neutral Format (SVG) files as well as, or instead of, Visio Schematic Diagram files. For a detailed description of the file formats that are monitored within the above folders, refer to the Administrator Command Reference Manual. Only PDMSDWG files that are associated with a DRAFT Database are propagated. Associated PDMSDWG Files are Sheet and Overlay drawings. Other DWG files, such as Backing Sheets and Symbols, need to be propagated through Transfer of Other Data. See Transfer of Other Data. This is also the case for AVEVA Marine, where there are drawings located in the ASSI, ASSP, BACK, BTEM, CPAR, MARK, NPLD, NSKE, PDB, PICT, PINJ, PLIS, PLJI, PPAR, PRSK, RECE, SETT, STD and WCOG directories.
10.7 Update Timings
It is extremely difficult to predict the length of time that an update will take to complete. It will depend upon the bandwidth that is dedicated to the update process at the time it is run. Therefore, if the line is shared with other comms programs (mail, internet, etc), the update performance will be affected. The timings described below were undertaken on a line that had no other process competing, and a line that was extremely clean - that is, its rate of failure would be near zero. On a normal WAN line, its collision and failure rate would not achieve anywhere near such a low level.
These test timings were taken when propagating 11080 pages (22695936 bytes) of data between two machines.
10.8
the UPDATE ALL command is used). Files can only be transferred between neighbouring locations, and this method cannot be used to send files to/from off-line locations. For example, myfile has been produced at Satellite AAA and is needed at neighbouring location BBB. The user at AAA must ensure that myfile has been placed in directory %EXP_BBB%. During the next scheduled update with BBB, this file will be sent to BBB, and received in directory %IMPORT% at location BBB. A user at BBB can then use myfile. If myfile is to be sent on to other locations, it will need to be copied into the export directories at BBB for those locations. Offline locations: The TRANSFER command only copies databases and picture files to or from the transfer directory, ready for onward manual transfer to the specified location. Transfer of other data files must be done manually. It is possible to assign a batch script to run both before and after the Update Event occurs. This can be used to copy data into the EXPORT directories before the Update is executed, and then copy it out of the IMPORT directory once the Update Event has completed. This process will include the transfer of Other Data. The batch scripts are assigned to an Update Event through the Create/Modify Update form, see below.
Batch Scripts
The script itself can be of any type of batch script, for instance perl, and can be as complex as required.
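A minimal sketch of such scripts, reusing the export and import directories described above (the local staging paths are hypothetical):

rem Run before the Update Event: stage outgoing files for location BBB
copy C:\work\outgoing\*.* %EXP_BBB%

rem Run after the Update Event: collect files received from other locations
copy %IMPORT%\*.* C:\work\incoming\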
Note: Transferring other data uses the same communication line as Updates and all other Global functionality. Transferring too many other files may have an impact on the Window of Opportunity for updates.
11 Deleting Databases
The procedure for deleting a database is summarised below. If the database owns extracts, see Deleting a Database that owns Extracts.
Note: A database does not need to be primary at the Hub, as long as it is not primary at the location where it is being de-allocated.
12 Database Recovery
If for any reason a database at a location is corrupt, it can be recovered by transferring the database from a neighbouring location. It is important to remember that this could result in loss of work. The main objective when a recovery is carried out is obviously to restore the database(s) and minimise the work lost. Global does not verify that the file from which the database is being recovered is a valid database: it is the user's responsibility to ensure that this is the case. Remote DICE checking may be used to verify the state of the database at the remote location from which the database is to be recovered.
12.1
Corrupt DB    Recover Corrupt DB From
3, 4, 5       Sat 1, Sat 2, Sat 3 respectively
1+2, 4, 5     Hub, Sat 2, Sat 3 respectively
…             Hub, Sat 1, Sat 3 respectively
…             Hub, Sat 1, Sat 2 respectively
12.2
Note: When a DICE report indicates that Refblocks have been lost, normally this would require the master database to be patched. However in a Global project, this error is non-fatal if there are working extracts. These databases are non-propagating and only exist at the primary location of their extract owner. This results in the error report, since the Refblocks for working extracts are not accessible at the primary location of the master db (Refblocks are blocks of reference numbers available for use in the extract).
12.5
12.5.1
Note: The RENEW command may remove running commands because it deletes the transaction database.
12.5.2
database. The daemon will close the transaction database before the merge, and re-open it afterwards. However, the REMOTE MERGE command cannot be used when the transaction database is full, since this command cannot be recorded properly. In this case, it may be necessary to merge it by reconfiguring. To manage the transaction dB efficiently, TRINCOs (and their child elements) need to be deleted at regular intervals. Only completed transactions should be deleted. It only makes sense to merge the transaction dB after TRINCOs have been deleted, otherwise the dB will not be compacted.
13 Recommendations for Reconfiguring (User dBs)
14 Copying Global Projects
Note: It is very important to ensure that the replicated project has a different project UUID to the original project, otherwise the Daemon will not run correctly. The UUID for the project is stored in the ADUUID attribute of /*GL. If this is unset or has not been changed, use the NEWUID attribute:

/*GL
!NEW = NEWUID
ADUUID $!NEW
SAVEWORK
15 Backing Up Global Projects
When you use databases from backups, it is feasible for a secondary database to have newer sessions than a primary database. If so, at the next update, changes may be posted back from the secondary database to the primary database. If new sessions have been written at the primary location, this could cause corruption. You should therefore ensure that your secondary database backups do not have newer sessions than the primary database. To resolve this, it may be necessary to RECOVER some databases from the primary location after the restore.
16 Using Extracts with Global Projects
You can work on an extract at the same time as another user is working on the master or another extract. When a user works on the extract, elements are claimed to the extract in a similar way to simple multiwrite databases, so no other User can work on them. When an extract User does a SAVEWORK, the changed data will be saved to the Extract. The unchanged data will still be read via pointers back to the master DB. When appropriate, the changes made to the extract are written back to the master. Also, the extract can be updated when required with changes made to the master.
16.1 Using Extracts
You can use extract databases both with standard (non-Global) projects and with Global projects. This chapter gives information about the use of extracts with Global projects. Refer to the Administrator User Guide for information about the use of extracts with standard projects.
16.1.1 Extract Families
A Master DB may have many extract DBs. You can create an extract from another extract, forming a hierarchy of extracts. The hierarchy can be up to 10 levels deep. The extracts derived from the same master are defined as an Extract Family. The maximum number of extracts at all levels in an extract family is 8191. The original database is known as the Master database. The Master database is the parent of the first level of extracts. If a more complex hierarchy of extracts is created, the lower level extracts will have parent extracts which are not the master. The extracts immediately below an extract are known as extract children. The maximum number of extract children is 408. If a hierarchy of extracts is created, the parent of an extract, and its parents up to and including the Master DB, are known collectively as the Extract Ancestors. The following diagram illustrates an example of an extract family hierarchy:
In this example:

Label        Description
PIPES        is the Master and the parent of PIPES_X1.
PIPES_X1     is a child of PIPES and the parent of PIPES_X10.
PIPES_X10    is a child of PIPES_X1.

Note: The children of PIPES are PIPES_X1 and PIPES_X2. PIPES and PIPES_X1 are the ancestors of PIPES_X10.

Write access to extracts is controlled in the same way as for any other database:

- The user must be a member of the Team owning the Extract. Extracts in the same family can be owned by the same team or by different teams.
- The user must select an MDB containing the extract (or containing its parent, if the extract is a working extract).
- Data Access Control can be applied.
- An extract database cannot be opened in a constructor module (such as DESIGN) at a satellite unless all its parent extracts are also allocated to that satellite.
Note: At this release, you can only create an extract at the bottom of an extract tree: you cannot insert a new extract between existing generations. At the Hub, you can also create a new master database above the original master.
16.1.2 Querying Extract Families

The following extract properties can be queried for a database:

- Extract Number
- Extract Owner
- Extract Master
- Extract Ancestors
- Extract Children
- Extract Descendants
- Extract Family
- Is Owner Primary Here?
- Is Parent Primary Here?
- Is All Ancestry Primary Here?
- Variant Controlled
16.2
16.2.2 Creating Extracts
Extracts can be created at any authorised Location: the parent extract must be allocated to the Location first. Like other databases in a Global project, extracts have a primary Location, and this need not be the same as the Primary location of the parent database. By default, the primary location of the new extract will be the current location. If you are at the Hub and creating an extract for a satellite, use the AT option in the CREATE command. The extract will be created with its primary location at the Satellite specified. If you are at an administering location, you must also use the AT option if you want to specify that the extract will be created at the administered location, otherwise the extract will be created at the administering location (that is, the true current location, queried using Q CURLOC). The parent extract must be allocated to the administered location. When you are creating an Extract at a satellite, make sure you give the CREATE EXTRACT command only once and check that the command has completed by issuing a Q DB dbname command. You may issue further CREATE EXTRACT commands provided that you do not use the same db name or db number (if specified). The daemon will assign a db number (dbno) if none is specified.
The CREATE EXTRACT command will be executed by the Daemon (which will imply a delay in executing the command) if any of the following is true:

- the master database is primary somewhere else
- the current location is a satellite
- the parent extract is primary at another location
- the new child extract is specified to be primary at another location (AT loc option).
Note: An in-built recovery operation exists for CREATE EXTRACT and, therefore, the PREVOWNER command is not usually needed after a failure of the CREATE EXTRACT command. However, the automatic recovery operation does not cover the CREATE command Allocate operation, and PREVOWNER may be needed in the unlikely event of this failing.

Note that the ALLOCATE command allows child extracts to be allocated to a satellite without their parent being allocated, but you will not be able to open the extract until all its ancestors have been allocated to the location. Also note that the ancestor extracts may need to be synchronised, if timed updates of extracts have not been implemented.

Extract creation is controlled by the NOEXTC attribute of a location. If this is TRUE, then extract creation is disabled and extracts cannot be created by that location. However, the Hub or its administering location (if authorised) may create extracts. The purpose of the NOEXTC attribute is to prevent a satellite from creating databases on the fly without authorisation, and it applies to the administering location, not the administered location. However, if the Hub is doing it, it is by definition authorised; thus the Hub is always able to create extracts. Similarly, we could have a situation where one satellite, AAA, is administering another, BBB. Satellite AAA might have NOEXTC false, and BBB might have NOEXTC true. In this case, AAA would be allowed to create extracts for itself and for satellite BBB, but BBB would not be allowed to create any extracts itself. The NOEXTC attribute is set in the Modify Location form.
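A hedged command-line sketch of the same setting (the satellite name /BBB is hypothetical; the attribute-setting pattern follows the other examples in this document):

/BBB
NOEXTC TRUE
SAVEWORK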
16.2.3
A working extract inherits the write access of its parent. That is, if the parent is primary at the location of the working extract, then it can be written to; otherwise the user will only have read access.
16.2.4 Extract Numbers
Before you start creating extracts, you should work out an extract numbering system, and set the extract numbers explicitly when the extracts are created. Extract numbers must be between 1 and 8191 inclusive, for each database. You must set the range of extract numbers available for normal extracts, and for working extracts at each location (see the diagram below). You can do this by setting the EXTLO and EXTHI attributes for LOCLI and LOC elements as follows:

- The available numbers for extract databases at a location are defined by the EXTLO and EXTHI attributes of the LOCLI element under the /*GL element. You must define this range so that there are enough numbers left for working extracts: see the next point.
- The available numbers for working extracts at a location are defined by the EXTLO and EXTHI attributes of the LOC elements under the LOCLI element. For each Location you must select a range of numbers which lies within the range you have left for working extracts, and which does not overlap with the range for working extracts at any other Location.
Note: You can query extract number ranges by navigating to the appropriate element and giving the commands:
Q EXTLO
Q EXTHI
When you are using the ADMIN menu bar, you can use the Location version of the Admin Elements form to create or modify a Location. On the form, you specify the range of numbers available for working extracts at the location. See the Global Management User Guide for details.
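A hedged command-line sketch of setting a working-extract number range for a location (the location name and range are hypothetical; see the Global Management User Guide for the authoritative procedure):

/London
EXTLO 100
EXTHI 199
SAVEWORK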
16.2.5 Reference Blocks
The allocation of reference numbers is controlled by the master database. Each extract may be allocated reference blocks from the master. Elements created in the extract will be allocated reference numbers from the local reference block(s). If no reference block is allocated manually, the system will allocate reference blocks as required. For a Global project, this may require daemon activity. To avoid this, we recommend that you should assign a block of reference numbers to the extract when you create it, using the REFBLOCK n option. The block of reference numbers will then be available locally. n should reflect the number of users writing to the extract, for example, if you expect to have five users writing to the extract, set n to 5. Note: There are 8191 reference blocks available for each extract hierarchy, so there is no need to be conservative when allocating them.
16.3 Setting up an Extract Hierarchy

16.4 Using DACs with Extracts

16.5 Using Extracts in DESIGN
Q DBNAME
This command will return the name of the database that you are actually writing to. If the extract is a working extract, then the name of the parent extract is returned. Another useful querying command is:
Q WDBNAME
This command will return the name of the working extract that you are actually writing to, if there is a working extract. If there is no working extract, then the result is the same as for Q DBNAME.
16.5.1 Managing Extracts
If the extract hierarchy has different primary locations for different extracts, then both the parent and child databases must be propagating and allocated at each other's locations. If this isn't done, then Claims and Flushes will fail. Because of this, Claiming, Flushing and Issuing should be managed by a Supervisor to ensure Claims are handled in batches in a planned and controlled manner.
16.5.2 User Claims
Normal multiwrite databases require the user to claim an element before changing it. This is known as a user claim. Depending on how the database is set up when it is created, user claims can be implicit or explicit, and in either case, when a new element is created, it will be claimed to the user who created it. Note: In a Global project, we recommend that multiwrite databases should be created with EXPLICIT claim mode, unless all the children are primary at the same location. User claims can be explicitly released (unclaimed) by the user during a session, and elements are always unclaimed when the user changes or exits from a module. The commands for user claims are:
CLAIM . . . UNCLAIM . . .
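As a minimal illustration (the element name is hypothetical), an explicit user claim and its release might look like:

CLAIM /150-B-4        $* user claim on a named element
UNCLAIM /150-B-4      $* release the user claim again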
Extract Users can check daemon availability before claiming or flushing using the following command line syntax:
Q COMMS (TO) <loc>
Q COMMS (TO) <loc> PATH
PING <loc>
Q ISOLAT AT <loc>
Q PROJ LOCK AT <loc>
These commands are now available in DESIGN and other modules. This is particularly useful before Claiming/Flushing, since those commands fail if the connection is down.
16.5.3
Extract Claims
When you are using extracts, another type of claim, known as an extract claim, is made as well as user claims. If an element is claimed to an extract, only users with write access to the extract will be able to make a user claim and start work on the element. Once a user has made a user claim, no other users will be able to work on the elements claimed, as in a normal multiwrite database. If a user unclaims an element, it will remain claimed to the extract until the extract claim is released. Extract claims allow persistent claims across sessions.
16.5.4
Command Syntax
The command syntax for handling extract claims in DESIGN is as follows:
EXTRACT   CLAIM | FLUSH | FLUSHW | RELEASE | ISSUE | DROP   element ...  [ HIERARCHY ]
EXTRACT   CLAIM | FLUSH | FLUSHW | RELEASE | ISSUE | DROP | FULLREFRESH | REFRESH   DB dbname
FLUSH RESET   DB dbname
CLAIM: Claims the element or the whole database to the extract.
FLUSH: Writes the changes back to the parent extract. The extract claim is maintained. The extract is refreshed with changes that have been made to its owning database.
FLUSHW: Writes the changes back to the parent extract. The extract claim is maintained. The extract is not refreshed.
FLUSH RESET: Resets the database after a failed EXTRACT FLUSH command. (See the note under Flushing Changes.)
REFRESH: Refreshes an extract with changes that have been made to its parent extract.
FULLREFRESH: Refreshes an extract and all its ancestors. A full refresh takes place from the top of the database hierarchy downwards, ending with a refresh of the extract itself. Each extract is refreshed with changes that have been made to its parent extract.
ISSUE: Writes the changes back to the owning extract, and releases the extract claim.
RELEASE: Releases the extract claim: this command can only be used to release changes that have already been flushed.
DROP: Drops changes that have not been flushed or issued. The user claim must have been unclaimed before this command can be given.
The HIERARCHY keyword must be the last on the command line. It will attempt to claim to the extract all members of the elements listed in the command which are not already claimed to the extract. The elements required can be specified by selection criteria, using a PML expression. For example:
EXTRACT CLAIM ALL PIPE WHERE (:OWNER EQ USERA) HIERARCHY
16.5.5
16.5.6
16.5.7
USERA creates a Pipe and flushes the database back to the parent database, PIPE/PIPE. The results of various Q CLAIMLIST commands by the three Users, together with the extract control commands which they have to give to make the new data available, are shown in the following diagram.
Note that:
Q CLAIMLIST EXTRACT
tells you what you can flush; and:
Q CLAIMLIST OTHERS
tells you what you can't claim. You can query the extract claimlist for a named database. The database can be the current one or its parent:
Databases that are going to own extracts which are primary at other locations should be created with explicit claim mode. Before you make an extract claim, you should do an EXTRACT REFRESH (or an EXTRACT FULLREFRESH, if necessary) and GETWORK. If you need to claim many elements to an extract, it improves performance if the elements are claimed in a single command, for example, by using a collection:
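For example, a single selection-based claim of the form already shown earlier (the user name and criterion are illustrative) claims every matching element in one operation rather than issuing one claim per element:

EXTRACT CLAIM ALL PIPE WHERE (:OWNER EQ USERB) HIERARCHY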
16.5.8
Flushing Changes
When an extract user makes changes and saves them, they are stored in the extract. These changes can be made available to users in other extracts using the EXTRACT FLUSH command. The FLUSH command operates on a single element, a database, or a collection of elements. The changes to these elements will be made available in the parent extract. If changes need to be made available in the master database, it will be necessary to flush the changes up through each level of extracts. Users accessing extracts in other branches of the extract tree will need to use EXTRACT REFRESH to see the changes (or EXTRACT FULLREFRESH, if the user's extract is part of a multi-level extract hierarchy and is itself owned by another extract). The following diagram illustrates the sequence of commands that need to be given so that a user working on extract B2 will be able to see the changes made by a user working on extract A2.
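The sequence can be sketched with the DB forms of the commands above. The database names are hypothetical, assuming a family in which the master owns extracts A and B, A owns A2, and B owns B2:

EXTRACT FLUSH DB PIPE/A2      $* run in A2: push A2's changes up to its parent A
EXTRACT FLUSH DB PIPE/A       $* run in A: push the changes on up to the master
EXTRACT REFRESH DB PIPE/B     $* run in B: pull the changes down from the master
EXTRACT REFRESH DB PIPE/B2    $* run in B2: pull the changes down from B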
The Global daemon will only be involved in the flush process if the user is flushing changes to a secondary database / extract from their current primary extract. Note: If a flush fails, the database needs to be reset, because the failed flush causes subsequent flushes and refreshes to fail. The FLUSH RESET command is used to undo the failed flush.
This situation can arise when more than one user is issuing the same database extract. Flush and release commands might then be processed in the wrong order, causing a flush to fail and preventing subsequent refreshes of the extract.
16.5.9
Releasing Claims
Elements that have been claimed to an extract will remain claimed to that extract until they are released. Any changes must have been flushed to the parent extract before the extract claim is released. The EXTRACT RELEASE command operates on a single element or a database or a collection of elements. The elements claimed will be released from (that is, no longer claimed in) the current extract, at which point they will be claimed by the owning extract. If elements need to be made available in the master database, it will be necessary to release the elements up through each level of extracts. The Global daemon will only be involved in the release process if the user is releasing elements to a secondary database / extract from their current primary extract. When you are flushing / releasing data from a satellite to another location, you should check that the flush has been successful before releasing the changes.
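A typical sequence, using the same hypothetical extract name as in the previous sketch, is to flush first and then release, confirming that the flush succeeded in between:

EXTRACT FLUSH DB PIPE/A2       $* write the changes up to the parent extract
EXTRACT RELEASE DB PIPE/A2     $* then release the extract claim to the owning extract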
The REFRESH command will only refresh from databases local to the satellite. Therefore, if a secondary database has not yet been automatically updated with changes made to the database at the primary location, then these changes will not yet be visible at the local satellite. Extracts below the database will only see the latest version of the secondary database when they are refreshed. To see the changes made to the primary database, you must wait for the next scheduled automatic update before refreshing.
16.6
Partial Operations
When named elements are specified in an ISSUE, DROP or FLUSH command, it is known as a partial issue, drop or flush. There are some restrictions on what you can do, as follows:
• Where a non-primary element has changed owner, the old primary owner and the new primary owner must both be issued back at the same time. Otherwise there is potential for inconsistencies to occur.
• If an element has been unnamed, and the name reused, then both elements must be flushed back together.
• If an element and its owner have been created, then:
  - If the element is included in a partial flush, then its owner must also be included.
  - If the owner is included in a partial drop, then the element itself must be included.
  - If the element is included in a partial drop, then its owner must also be included.
  - If the owner is included in a partial flush, then the element itself must be included.
The HIERARCHY option will scan elements in both the extract and the owned extract, so deleted/moved elements will be included as part of the issue/drop/flush. You can use selection criteria to specify partial issues and flushes. Deleted elements will be issued/dropped/flushed when the owning element is issued/dropped/flushed. Alternatively the reference number of the deleted element may be given in the ISSUE/DROP/FLUSH command.
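As with claims, selection criteria can drive a partial flush. A sketch (the attribute name and value are hypothetical) is:

EXTRACT FLUSH ALL PIPE WHERE (:STATUS EQ CHECKED) HIERARCHY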
16.7
Extract Sessions
When an extract is created, it is created at a particular session number in the parent extract. This is called the linked session. As the owner extract is modified, and new sessions added, the linked session on the child extract will not change until a refresh or flush is made. Note that ISSUE, DROP and FLUSH cause an automatic refresh. The following example illustrates how extract session numbers and linked session numbers change as an extract is created and modified:

Extract session   Linked session no. in owner   Comment
1                 10                            Extract created
2                 10                            Modification made on extract
3                 10                            Modification made on extract
The table continues with the following comments for the subsequent extract sessions:
• Refreshed from owner (sessions 11 to 15 created by other users)
• Further modification
• Issued (sessions 16 and 17 created by other users)
• Further modification
• Further modification
• Issued (sessions 19 to 24 created by other users)
While a user is making changes only to the extract, the linked session number in the owner stays the same. On refreshing, the local extract is linked to the most recent version of the parent extract. The new session number linked to in the owner depends on the number of flushes done by other users. In the example the linked session number goes from 10 to 15, indicating that five flushes have been made by other users in the meantime (assuming that no work is being done directly on the owner).
16.7.1
Merging Changes
When a MERGE CHANGES command is given on a DB with extracts, all the lower extracts have to be changed to take account of this. Thus doing a MERGE CHANGES on a DB with extracts should not be undertaken lightly. The following restrictions apply:
• Any sessions linked to owned extracts must be preserved.
• There may be no users on any lower extracts.
We recommend that you MERGE CHANGES at the lowest level of extracts first, and then work up the tree.
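Following that recommendation, a merge of a simple two-level family (the database names are hypothetical) would be run leaf-first:

MERGE CHANGES PIPE/AREA1      $* leaf extract first
MERGE CHANGES PIPE/MASTER     $* then the owning database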
In a Global project, MERGE CHANGES can only be carried out at the location at which the database and all its descendant extracts are primary. The REMOTE MERGE command currently only handles leaf extracts and databases which do not own extracts. See Merging Extract Databases for more information on merging extract databases. Note: BACKTRACK is not allowed for extract databases. You must use REVERT instead.
16.8
To delete a database:
• The database must not be allocated to any locations other than the Hub.
• The database must not own any extracts, either working or standard ones.
Thus deleting a database that owns extracts (and possibly working extracts) may involve a number of CHANGE PRIMARY commands to get rid of any working extracts at satellites where the database is secondary. The procedure for deleting a database that owns extracts is summarised below.
In outline:
• Make sure no one is accessing the db at the location.
• If the database owns extracts, determine whether they are Working Extracts.
• DELETE the extract tree from the DB you wish to delete, and DELETE the working extracts at the location where the DB is primary. Wait for this command to complete. Check as follows: GETWORK, then navigate to the DBLOC of the DB (DBLOC 1 of dbname) and Q ATT. The DB is primary at the Hub when the LOCRF is set to the Hub and the PRVRF is unset.
• De-allocate the database from the satellite locations and check that this command has completed. The db is de-allocated when it is removed from the location DBALL list.
• Finally, give the command:
DELETE DB dbname
Note: A DB does not need to be primary at the HUB, as long as it is not primary at the location from which it is being de-allocated.
16.9
Variant Extracts
Variants are a special type of extract, with less rigorous control of claiming elements and writing data back to the owning extract. They are designed to allow users to try out different designs, which then may or may not be written back to the master. Variants are different from normal extracts (including Working extracts) in the following ways:
• Any element can be modified without being claimed, and so different users can modify the same element in different variants.
• When data is written back to the owning database, it will overwrite any conflicting data in the owner.
A variant can have normal extracts created from it. Note that in this case, the variant forms a new root for claiming elements: claims in extracts below the variant will not be visible from other parts of the extract family, and claims in other parts of the family will not be visible in extracts owned by the variant. It is possible to have working variants.
The following symptoms and causes may be encountered when working with extracts:

Symptom: Unable to savework. Perhaps you have been Expunged
Cause: Daemon has been expunged. Modifications to the database (other than updates) will fail.

Symptom: Previous flush could not be found
Cause: Flush may have overtaken another flush. In this case, the Flush will stall for a retry.

Symptom: Previous flush failed
Cause: Subsequent flushes will fail until the failed flush has been reset.

Symptom: Unable to claim <item> because element is already claimed by <extract or user> from Extract <no>
Cause: Valid failure - another extract or user has it claimed.

Symptom: Unable to claim <item> from parent extract <no> because element is modified in a later session
Cause: EXTRACT REFRESH is required, to bring the child extract's view of the parent up to date.

Symptom: Nothing to claim locally - all claims failed in owning extract
Cause: Cannot claim to the child extract, because it failed to claim anything from its parent.

Symptom: You cannot claim <item> without doing an extract claim from the parent extract
Cause: The item has not been claimed into the extract before the User has claimed it. This is only applicable to Explicit dBs.

Symptom: Unable to claim <item> from parent extract <no> as element has been deleted in a later session
Cause: The item has been deleted in the parent, and the child extract has not been brought up to date yet.

Symptom: Element reference <item> is invalid or has been deleted
Cause: The reference number of <item> cannot be found in the database; it is an invalid reference number.

Symptom: Element <item> has been modified, so cannot be released. Savework must be done first
Cause: The item must be saved to the database before an extract operation can be undertaken on it.

Symptom: Element <item> has been deleted by another User
Cause: The item you are trying to Claim has been deleted by another user.

Symptom: Name clash on <item>. Please rename
Cause: The name of the item that has just been created already exists.

Symptom: Cannot flush/abandon <item> as old and new owners must both be in the list, or neither in the list
Cause: The parent of the owner has been changed. Both the old and the new owners need to be flushed/issued/abandoned at the same time, and the list currently only contains one or the other.

Symptom: Cannot flush/abandon <item> without its owner
Cause: The item is either new or has been moved to another item. Both need to be flushed/issued/abandoned at the same time.

Symptom: Cannot flush/abandon <item> without its members
Cause: The member list of the item has changed in some way. The item needs to be flushed/issued/abandoned with its members.

Symptom: Cannot abandon/release <item>. Element is claimed out by a user (maybe yourself) or to an extract
Cause: The item is claimed by a User (possibly the user doing the EXTRACT ABANDON/RELEASE) or to a child extract.

Symptom: Element <item> kerror <no>
Cause: Internal error. Please contact your AVEVA support desk for more information.
17
Off-line Locations
Normally there is a communications link between pairs of locations, and these locations are referred to as on-line. (Their ICONN attribute is 1, and RHOST points to a valid computer name.) However, Global can operate if there is no direct communications link between the Hub and certain locations. These locations are referred to as off-line. (Their ICONN is 0, and RHOST may be unset.) A tape, CD or other medium is used to copy the databases from one location to the other. It should be noted that:
• The TRANSFER command copies databases to or from the project directory to a special transfer directory, ready for the physical transfer to another location. The physical transfer must be made as well as using the TRANSFER command from ADMIN.
• The existence of off-line locations limits the administration capabilities of a project.
• Off-line locations can only be children of the Hub. An on-line satellite cannot have off-line children.
• Database transfer to and from the media used for communication with an off-line location can only be made at the Hub and the off-line location.
• Commands such as ALLOCATE and CHANGE PRIMARY are not self-contained. Working practices are required to ensure the correct transfer of data.
The transfer cycle is as follows:
• TRANSFER TO <offline satellite> from the HUB copies the satellite's secondary dbs to the transfer folder for the satellite (at the Hub). The contents of this folder are transferred to the satellite's transfer folder.
• TRANSFER FROM HUB at the offline satellite copies the satellite's secondary dbs from the transfer folder at the satellite to the satellite project.
• TRANSFER TO HUB at the offline satellite copies the satellite's primary dbs to the transfer folder for the Hub. The contents of this folder are transferred to the Hub's transfer folder for the satellite.
• TRANSFER FROM <offline satellite> at the Hub copies the satellite's primary dbs from the transfer folder at the Hub.
The transfer folder is a holding area for data going to and from the satellite:
It is potentially unsafe to assume that samsys in a transfer folder is the satellite system database. If the TRANSFER FROM step is omitted, then the local system database could be corrupted, because the meaning of the file samsys is ambiguous in TRANSFER functionality. For this reason, the functionality of TRANSFER has been changed since previous versions of Global to enforce the use of a location suffix in the Transfer folder. All system databases in the transfer folder always have a location qualifier, even the system database for the off-line satellite.
It is not recommended that users omit the TRANSFER FROM step:
• Potentially, inter-db macro changes could be lost. TRANSFER FROM merges the macros from the transfer folder into the satellite's MISC database, which might already contain local inter-db macros.
• If the satellite system database is secondary, then the incoming system db transferred from the Hub will be named with a location suffix. This would need renaming to become the local system db.
17.1
17.2
17.3
17.4
18
Firewall Configuration
The primary objective of a firewall implementation is to provide security to an organization's network. In simple terms, a firewall solution enables only certain applications to communicate from the outside world (for example the Internet) to the organization's network, and vice versa. To enable these applications to function, specific communication ports need to be open. The fewer ports open within a firewall, the less chance there is of security breaches.

Where Global is implemented within an environment that has no firewall set up, Global will function without any specific network configuration (other than the requirements outlined under Global > IT Configuration on the AVEVA Support website and in the Global User Guide). However, when a Global project is to be deployed between two or more locations that have firewall implementations, certain ports need to be open in order for Global to function.

RPC communications are an integral part of Global. Global uses TCP port 135 and a dynamic range of ports above 1024 to communicate from one location to another (i.e. through the Global daemons running at each location). The dynamic range of ports required to be open (i.e. 1024 and above) poses a security risk. In order to reduce this, we can force the operating system's RPC communications to use only a specified range of ports. This drastically reduces the risk of intrusion from third parties. Firewall rules can also be specified to limit access to these ports to a specific program. Global has a unique identifier (UUID) which can be used when defining firewall rules. For further details, contact AVEVA Support.
18.1
The following solution can be applied to any modern firewall with packet-filtering functionality. The procedure for restricting the use of dynamic ports for RPC is through additions to the Microsoft Windows registry. Changing the registry should not be undertaken lightly: incorrect modification of the registry could lead to serious problems with your system. It is therefore recommended that you back up your registry before making changes. To change the registry, you must use REGEDT32 and not REGEDIT, as the latter does not allow you to modify the string data type. If you do not use REGEDT32, the following message will appear on daemon startup:
Can't establish protocol sequences: Not enough resources are available to complete this operation
You must add a subkey and three values to the registry. Under the following key, add a subkey called Internet:
HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc
Under this subkey create three values with the corresponding string data:
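The three values, as described in the Microsoft article referenced below, take the following form; the port range shown is only an example and should be sized to suit the number of Global projects and any other RPC services:

Ports                     REG_MULTI_SZ    5000-5100
PortsInternetAvailable    REG_SZ          Y
UseInternetPorts          REG_SZ          Y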
Note: The RPC configuration procedure described in this document can also be found in Microsoft TechNet Knowledge base: Article number: Q154596. Note that Microsoft recommend a minimum of 20 ports to be open for other services; for more information on this please refer to the article which is available on the Internet at http://www.microsoft.com/technet. The number of open ports suggested in the example above is just that: a suggestion. However it is generally true that the more Global projects you are using, the more ports you are going to require to be open.
19
19.1
19.2
Dice
This is the Data Integrity Checking tool supplied as part of the ADMIN module. Its purpose is to provide a report on the base product Dabacon databases that informs the administrator of any issues with the database that require extra attention. In addition, you can also run it in a patch mode that will actually facilitate a repair of the database. It is recommended that a full Dice report is run daily, as a matter of routine, on all databases in the project. This includes the full extract family and secondary databases if Global is in use. Foreign projects, such as a centralised Catalogue, should also be Dice checked, although the checks need not be as frequent if the projects are not being updated on a daily basis. Often this is done as a scheduled batch routine during non-working periods.
However, if the project is in a period of intense activity and the window for running bulk processes for reports, drawings and material take-off is small, it can be run with users and batch processes continuing to run on the model. Having produced the report, it is imperative that it is closely scanned for issues of concern and that action is then taken to address them. Ideally, the Administrator should take action to remove all errors and warnings; however some warnings can be deemed acceptable and of no risk to the healthy running of the project, e.g. Element =18585/38329 Warning: Attribute TREF contains invalid ref =18585/74770. This error will also be highlighted to the normal users as they check their designs, so it will be picked up there. However, if the identical reference numbers in these messages recur, the Administrator should follow up with the last user to access the element (the information is in the session data) to ensure it is cleared. The Fatal Errors listed in a Dice report are usually ones that need immediate attention, and action to repair the database will be needed. Nevertheless, on occasion the error can either be tolerated for a period as it is not truly critical, or may have been wrongly categorised as Fatal and constitute only a warning, e.g. Error in level 2 NAME table, session no. 10469, page no. 42385 - incorrect value of first key on lower level page no. 42386 (extract 1). While AVEVA provides analysis of each error message outlining how it should be addressed, the nature of an individual project set-up can make the appropriate method of addressing them vary. It is therefore recommended that, as the Administrator becomes familiar with the action needed to address each warning or error, it is documented and recorded in project work instructions. Certain database errors can be fixed by running Dice again against the problem database, this time in patch mode to effect the repair. Two typical examples are:
Child extract 12 not listed on header page Element SBFITTING / SBFIT99 needs clearing from mainlist in header extract
This should normally be done when there is no Write access to the database. Even though the Dice report will report the problem cleared, it may be a good idea to rerun a full Dice check on the repaired db with patch mode disabled, to be 100% sure the problem is cured. Other database errors can only be fixed by a Reconfiguration of the database. For example:
Element =35021/ 13323 has an inconsistent entry in the name table. Name exists on the element but is not in the name table itself. Thus the element can not be navigated to by name Please reconfigure this DB to resolve the problem
This work should be done when there is no Read or Write access to the database, but to avoid a complete project shutdown it is possible to remove the problem db from all MDBs, do the repair, and then replace it. Because of the additional complexity this may involve, looking for a window in the project workload is normally the preferred choice. Two or three days before a phase of major deliverable production, it is recommended to be especially diligent in Dice checking, to ensure that all databases are in good shape and to reduce the risk of an interruption in the bulk process. If a user reports an unusual problem with part of the project data, such as a Dabacon crash, the first step should always be to perform a Dice check on the database(s) involved. If the
report shows issues that cannot be repaired by patching or reconfiguration, then the Dice report should be sent immediately to AVEVA support. If, after repairing the database, the database is OK for a few days and then Dice reports errors again, this may indicate a deeper issue. In that case the Dice report, together with any background information on circumstances common to the error occurring (e.g. same users, same UI menu, etc.), should be reported to AVEVA support, who may then request that the databases be sent in for fuller investigation.
19.3
Global
This section provides information to advise Administrators on good practices. We recommend you read it fully.
19.3.1
Update Frequency
The idea of Global is that it provides the ability for a project split across several locations to behave just as if it were located in one location. Therefore it is assumed that most deployments of Global will have this objective in mind and will ensure that each location is updated with changes from the other locations on a frequent basis, especially when they are in similar time zones. This is particularly important when the locations are operating in the same physical space, e.g. in a compressor house where one location covers the steam lines and the other the utility lines. The aim here is, of course, to try to avoid routing pipe in the same space as the other location. This also ties in with the idea of keeping an Extract database local to only one location: if the project is process-split, this will incur a higher risk of clash issues when the data eventually migrates to a higher level database shared between the locations. If the project is split geographically, e.g. each location covering complete units, then this particular risk is reduced.

As a baseline, updates between locations around four times per working day are reasonable, with a possible escalation if significant change is occurring at critical times and data is needed by one location faster than normal, e.g. a fabrication yard when the project data is reaching design completion. Updates every 15 minutes have been seen in this particular scenario. Where the project is split across time zones, timing updates to ensure data is exchanged to suit the start and close of work, with attention to any time overlap, is recommended. However, when selecting update frequencies the quantity of data moving across the network should be considered too. The other idea of Global is to allow smaller chunks of data to be transferred rather than whole databases. Therefore, if only one transfer is done per day, the quantity of data will be large, and if there has been an intense period of modelling in one location then the update may take longer, possibly not completing in time for drawing or review file production as expected. Doing several updates will therefore reduce the risk of update overlap or incompletion before deliverable production. When different time zones are involved, it may be useful to use an intermediate satellite. This will make it easier to transfer large amounts of data outside working hours.
19.3.2
Timing of Updates
The batches of updates that are run in one update session to keep all locations synchronised do not have to be run sequentially. However, updates should not be started at exactly the same time, to avoid file contention on the Global database.
If it is felt desirable to run the updates sequentially, then a script will be required that uses the EXECAfter and EXECBefore script attributes on the Update event (LCOMD) to run pre- and post-execution scripts on a scheduled update. This could also:
• Record update start and finish times
• Report on Database sessions
• Lock out other updates by creating/deleting a lock file
This script is not a standard delivery as it needs tailoring for each project set-up. If required, the customer can request services from AVEVA to deliver this.
19.3.3
(Table: database session numbers recorded at SAT2, SAT3 and the HUB, showing Primary and Secondary session counts for each database.)

Legend:
• P: Primary location
• S: Secondary location
• Locations aligned
• Secondary locations not aligned: update manually to synchronise
• Secondary location ahead of Primary: investigate and repair

This macro is not a standard delivery as it needs tailoring for each project set-up. If required you can request services from AVEVA to deliver this.
19.3.4
19.3.5
each update process completes successfully and that the realignment has been successful before kicking off the next update.
19.3.6
Flushing/Issuing
It is common practice on projects that use Extract databases, whether Global or not, for all users to follow agreed practices for Flushing. Generally, each user is expected to Claim, Flush and/or Issue on an object-by-object basis (or for small groups of objects). However, some customers may decide to manage the Flush and Issue collectively, at managed intervals, say once a day. If this is done, then the Flush or Issue should be done at as high a level in the database as possible, e.g. SITE. This reduces both the number of sessions created and the database file size. Note that if the Model Object Manager software is in use, the program does background flushing and issuing to keep the Primary data as synchronised with the Oracle data as possible. If Model Object Manager is in use, regular Global Updates will also reduce the risk of the user viewing Oracle data that is not aligned with the Secondary view of the PDMS data.
19.3.7
Transaction Database
This database holds all the information about the success or failure of the updates and remote claiming, and is the first place to go to check that Global is operating successfully. Ideally it should be regularly monitored by the Administrator responsible for each location. Note that if an automated update fails for any reason, there is always the option to perform the update manually rather than waiting for the automated update to try to align things again. By doing the update manually, the duration of locations being out of synch is reduced, and the automated update process does not get loaded with two or more lots of update data to deal with. On a large and busy project the transaction database can become very large, so it should be compacted on a regular basis. The recommended method of doing this is to use the Merge-and-Purge function from the Daemon, or by selecting Utilities>Transactions in the Admin module and then selecting the Purge/Merge transactions DB tab. Daemon merge-and-purge can be done when DESIGN users are in the project (but not ADMIN users) provided that they do not have the Transaction db in their MDB. If a Module (e.g. ADMIN) is accessing the transaction db when the merge-and-purge is attempted, then nothing will be purged. If the merge-and-purge is interrupted, e.g. by a crash of the Daemon, then one of the two following methods could be used at each satellite after all users are out of the project and the Daemon has been stopped:
• Either carry out a normal merge (MERGE CHANGES TRANSACTION/SAT). It will be necessary first to run a macro to collect and delete old commands, otherwise the merge will achieve nothing. AVEVA can assist with writing such a macro. An example of a similar macro you can use as a basis is shown in Example Macro for Collecting and Deleting Old Commands.
• Or rename the database (e.g. ABC0001_0001 to ABC0001_0001-ORI) and restart the Global daemon (a new, clean database will be created automatically). The problem with this method is that incomplete transactions are lost and therefore updates are missed, which may contribute to misaligned Primary and satellite locations.
The ADMIN UI provides a view of the updates from the Transaction db, and it is important that the administrator checks the actual messages from these Updates, because the update may not have successfully updated ALL databases even though the overall command has been successful. If the MESSAGE reads 'Update All succeeded (NNNN DBs) with MMMM failures' then the administrator MUST investigate the failures. The FAILURES pane of the Transaction messages form indicates this. If this check is considered worth separating into a distinct procedure, a macro may be written to collect TRFAIL elements below the TRINCO for the TIMEDUPDATES user.
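A starting point for such a macro is sketched below. It reuses only constructs shown elsewhere in this guide (the COLLECTION and EXPRESSION objects and the example date path from the appendix); the element path and the choice of filter are assumptions to be adapted per project:

$* navigate to the timed-update element for the day and location of interest
/2005/OCT/5/TIMEDUPDATES/ABC
!failures = object COLLECTION()
!failures.scope(!!ce)
!failures.type('TRFAIL')
!filter = object EXPRESSION('TYPE OF OWNER NEQ |TRMLST|')
!failures.filter(!filter)
q var !failures.results().size()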
19.3.8
19.3.9
admnew Files
When the Daemon copies a Database, usually after a session Merge (as opposed to a session-based update), it copies it to a temporary file with the suffix .admnew. When the copy is complete, this file is renamed to replace the old database file. These .admnew files are normally tidied up automatically. However, if the daemon has crashed, it may leave unwanted .admnew files behind, which can prevent a subsequent Daemon attempt to copy the database from running. It should be ensured that the satellites and hub remove such files after a crash. See admnew Files for a full description of .admnew files.
19.4
update will take longer than a simple session update and it is therefore recommended to be done at a weekend. Merge has to be done at the Primary location unless a Leaf extract organisation has been used where the Remote Merge functionality can be used from the hub. Remote Merge can be done with the Daemon running, but for normal Merge operations it is recommended that the Daemon is stopped to prevent any updates occurring. The steps to be taken prior to a merge are covered more thoroughly in the Database File locks section of this document. Note: A Leaf extract is a database which does not own other database extracts.
19.5
19.5.1
19.5.2
19.5.3
Removing Users
After a session has been illegally exited, either deliberately or due to an unexpected system fault, the Users who were accessing the databases may be left as phantom users (also known as dead users) in the system. To clear these users from the databases and release their claims the Administrator can use the Expunge syntax for all users or specific dbs (see ADMIN Command Reference manual for details of all Expunge options, including how to set the Overwrite DB Users option to allow non-foreign projects to copy over locked files provided there are no users recorded in the COMMS db. Overwriting is disabled by default because it may cause sessions of dead users to crash). You can use the ADMIN Module for this also. To force live rogue users out of the system who have not followed the request to leave the system before Admin work is carried out, the Expunge User Process can be used. This will not stop the process on the Workstation but it will sever the link with the database file and the next time the user tries to access the process (Module Window) it will crash. After the Expunge User Process has been done it is common practice to then use Expunge All Users to remove any lingering phantom users and release all claims. However, it is necessary after the Expunge processes (or other illegal exits) to ensure that the database files have not been locked by Windows or left open and they should be closed so that further work in the databases can be done. As the files normally reside on a separate File Server, administration access to that server will be required.
project is to isolate the databases (inclusive of the whole extract family) from use by removing them from all MDBs and then performing steps 1-8 with the exception of 6. Deferring them is not recommended, as the user can overwrite the deferral. After the Admin task has been performed on the specific databases, they can then be re-added to the MDBs. As this adds an extra level of complexity to the Admin task, it is suggested that a window of time is sought where the whole project can be shut down.
19.6
In this scenario the SAT2 users working on the EX2_SAT1 db are claiming objects from EX1 Primary at SAT1. This can be done dynamically in Explicit Claim mode over the Daemon. However, the response can be variable, leaving the SAT2 users unsure of the status of their claim. Therefore it is recommended that the project is organised in such a way that the EX1 Primary objects to be worked on at SAT2 are identified and marked by the SAT1 users, and an Admin process is then run to Extract Claim the collection to the EX2_SAT1 Primary db at SAT2. When the work on the objects is complete, the SAT2 users mark the objects as ready to be Issued and an Admin process is run to Extract Issue the collection back to EX1 Primary.
19.7
19.7.1
ADMIN Lead
A single technical expert, with an in-depth knowledge of the application from a User and Administration background, is placed in charge of the whole project, has decision-making authority, and is the contact for communication with the engineering and IT management for the project. This person is to have a full-time Deputy who can stand in for them during planned and unexpected absence. In a Global project the Hub should be sited at this person's location. This role, including the Deputy, should have a high level of IT knowledge and be a trusted partner of the IT group, with permissions to access the application server(s) to perform specific tasks. It will be this role that has the main contact with AVEVA Support, unless the issue pertains to a specific discipline need, when the Discipline SME role comes into play. This is a full-time role on a major project.
19.7.2
19.7.3
This is a part-time role on the project following a similar pattern of workload to the Discipline SMEs.
unlock
savework
Note: The user will be prompted to close and re-open the Admin module.
Then quit the Admin module and reload as prompted. Note: When re-starting the Admin module a prompt will inform the user that the Location is uninitialised. In the Admin module select Locations from the Elements pulldown and highlight /projecthub. Click Modify and rename /projecthub to /hub then click Apply. A prompt will ask if the user wants to initialise the location. Click Yes. A prompt will be displayed indicating that a new transaction database has been created. Click OK then Dismiss on the Modify Location window. Start the Global daemon by typing the following from the Windows command line. Click Start > Run and then type CMD to open a Windows command line window.
Open the Admin module. Select Location from the Element pulldown.
Select Display > Command to open the command window and then type the following commands
q linit
Example PML to wait for command to complete:
!c = curloc
do
  pause 1
  session comment 'Interim savework at HUB after INITIALISE'
  savework
  getwork
  break if (!c.linit)
  !f = object FILE(!!itaSkipPath + '/skip')
  break if (!f.exists())
  skip
enddo
savework

***** Generate the locations at the HUB ******
/*GL
LOCLI 1
NEW LOC /PFB
LOCID PFB
DESC Piping Fabrication
RHOST sg132
CR DB TRANSACTION/PFB
GENERATE LOCATION PFB NOALLOCATE
Note: ALLOCATE will copy all the project files to the location defined by the variable {proj}_PFB. NOALLOCATE will only copy the system DB files. At the Satellite, use Windows Explorer to copy the files in {proj}_PFB to the location directory where the project will reside as {proj}000 (i.e. at the Satellite). Set up the base product environment at the satellite location (executables, Project directories, etc.).
PING PFB
Example PML to wait for command to complete
do
  pause 1
  ping PFB
  handle ANY
    !f = object FILE(!!itaSkipPath + '/skip')
    break if (!f.exists())
    skip
  elsehandle NONE
    break
  endhandle
enddo
Log in to the Admin Module at location PFB (admin)
INITIALISE
Having set up the environment at the location, log in to the Admin Module at the HUB (if not in already).
!loc = /PFB
do
  pause 1
  session comment 'Interim savework at HUB after initialisation of PFB'
  savework
  getwork
  break if (!loc.linit)
  !f = object FILE(!!itaSkipPath + '/skip')
  break if (!f.exists())
enddo
session comment 'Savework at HUB after confirming initialisation of PFB'
savework
getwork

Now allocate the required DBs to the location PFB:

ALLOCATE pipeapproved/master SECONDARY AT PFB
ALLOCATE pipereview/siteufa/A SECONDARY AT PFB
ALLOCATE pipeworkarea/fabwork/A PRIMARY AT PFB
etc.
session comment 'Savework at HUB after allocations to PFB'
savework

Wait until all dbs have been allocated at PFB:

/PFB 1
The number of members in the DBALL should match the number of DBs allocated. Example PML to wait till all databases have been allocated
do
  pause 2
  session comment 'Interim savework at HUB - waiting for allocations to PFB'
  savework
  getwork
  !location = /PFB
  q var !location.members[1].members
  break if (!location.members[1].members.size() ge 28) $* no. of allocates
  !f = object FILE(!!itaSkipPath + '/skip')
  break if (!f.exists())
enddo
session comment 'Savework at HUB after confirming allocations to PFB'
savework
Create Teams and Databases at the Hub, and Users and MDBs locally. REPEAT FROM ****GENERATE LOCATION**** for all locations required.
B
B.1
Prevented reverse propagation, should be From Remote not Update To
Prevented reverse propagation, should be To Remote not Copy From
The words From and To indicate the direction implied by the Primary location, and that inferred from the database header. These messages are output as Errors to the daemon window as well as being recorded as Failures in the Transaction database. The word Copy means that the compaction number at the secondary location is higher than that at the primary location; the word Update means that the latest session or counters are higher at the secondary location than at the primary location. (If neither of the locations is the primary location, then the database at the location nearest to the primary location is the one that is used.) A third message is also possible, for another location's system database where a file is missing:
Missing file. Prevented reverse propagation, should be To Remote not Copy From
B.2
Note: The RECOVER command is the only command which is allowed to copy the file without a check on the propagation direction.
In general, if the Prevented Reverse Propagation message contains Copy, it is the NACCNT attribute that is the problem. This counter is incremented by a database MERGE, BACKTRACK (but not REVERT - the Appware uses REVERT) or Reconfiguration. In this case, the propagation needs to copy the entire database file. However, the copy has failed because the NACCNT is higher at the secondary location than at the primary location. The other properties are used to control normal database propagation, where only the required sessions and the database header are sent. If the latest session number is higher at the secondary location than at the primary location, then database recovery is required. If the session numbers are equal, but the HCCNT and CLCCNT attributes are higher at the secondary location than at the primary location, then a database recovery is also required. Usually, recovery should be made from the Primary location, unless there are good reasons why a secondary location has the correct version of the database.
B.3
Q SESSIONS MYTEAM/DESI
The database properties NACCNT, HCCNT and CLCCNT may be queried in the normal way by navigating to the DB element for the database (for example, /*MYTEAMDESI) and querying its attributes. It should be emphasised that these attributes are properties of the database file, and may differ at each location. Alternatively, a PML object <DB> may be constructed for the database:
!DD.FileName
!DD.Prmloc
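In practice the simplest route is the navigate-and-query one mentioned above. The element name is the example used earlier and the attributes are those listed above:

/*MYTEAMDESI      $* DB element for MYTEAM/DESI
Q NACCNT
Q HCCNT
Q CLCCNT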
The same properties may be queried for a database at a remote location ABC by using:
(6) At Tue Oct 04 01:03:24 2005 Claim Changes counts: local 17 remote 1
(6) At Tue Oct 04 01:03:24 2005 Extract List counts: local 3 remote 10
In this case this indicates that the current location has a more recent session than the remote location. The Claim count only applies to a session, so its value will be ignored unless the session numbers are the same. In this example, the implied propagation direction is from the current location to the remote location. However, before making the update, the Daemon checks the update direction, to ensure that the propagation direction is consistent with the direction away from the primary location of the database. If this check fails, then the Prevented reverse propagation error causes the update to fail. Occasionally, it is not possible for the daemon to check the Update direction (Global db may be in use). In this case, the failure will read Update skipped. This is normally a temporary problem, and the database will be propagated as normal on the next scheduled update.
B.3.1
/2005/OCT/5/TIMEDUPDATES/ABC
where ABC is the LOCID of the location owning the Update event (LCOMD). PML Collection syntax can be used to extract the Failures:
COLLECT ALL TRFAIL WITH (TYPE OF OWNER NEQ |TRMLST|) FOR !DBREF
where !DBREF refers to the timed update element above. Generally, successes (TRSUCC) and failures (TRFAIL) can be ignored when they are owned by TRMLST, since these are progress messages. Only those in the Success list (TRSLST) and Failure list (TRFLST) need to be considered. Alternatively the Transactions Utility Appware could be used as a basis for a suitable macro, since the embedded methods are extracting this information. There are two main forms involved:
!!glbtransactions for transaction command summary
!!glbtransactionmessages for transaction messages, failures and successes
These forms are files in %PMLLIB%\global\forms with the suffix .pmlfrm. These forms use the Appware object GLBTRANSACTION. This contains suitable methods using the COLLECTION object and EXPRESSION filters to collect successes and failures. When a command is stalled, this is only reported as a Message (TRMESS) in the Message list (TRMLST). There is no corresponding success or failure, since the command may well complete on a re-try.
Note: Some commands (such as Claims) use Successes as a way of passing data between operations, so they contain fairly obscure data.
endif
endif
!date = object DATETIME(!year,!month,!day,!hour,!minute,!second)
!collection = object COLLECTION()
GOTO FRSTW TRAN
!collection.scope(!!ce)
!filter = object EXPRESSION('upc(TSTATE) eq |COMPLETE|')
!collection.filter(!filter)
!collection.type('TRINCO')
!trincos = !collection.results()
!promptstr = 'Found ' & !trincos.size().string() & ' complete transactions...'
$P $!promptstr
!promptstr = 'Deleting obsolete transactions more than ' & !days.string() & ' days old...'
$P $!promptstr
!numdel = 0
!numh = 0
do !trinco values !trincos
  !datecm = object DATETIME(!trinco.datecm)
  !datend = object DATETIME(!trinco.datend)
  if (!trinco.incsta.upcase() eq 'PROCESSED' and !datecm.lt(!date) or !trinco.incsta.upcase().inset('TIMED OUT','CANCELLED','REDUNDANT') and !datend.lt(!date)) then
    !numdel = !numdel + 1
    !!CE = !trinco
    DELETE TRINCO
    if (!!CE.members.size() eq 0) then
      DELETE TRLOC
      !numh = !numh + 1
      if (!!CE.members.size() eq 0) then
        DELETE TRUSER
        !numh = !numh + 1
        if (!!CE.members.size() eq 0) then
          DELETE TRDAY
          !numh = !numh + 1
          if (!!CE.members.size() eq 0) then
            DELETE TRMONT
            !numh = !numh + 1
            if (!!CE.members.size() eq 0) then
              DELETE TRYEAR
              !numh = !numh + 1
            endif
          endif
        endif
      endif
    endif
  endif
enddo
$P $!numdel obsolete transactions deleted
$P $!numh associated hierarchy elements deleted
if (!numdel eq 0) then
  $P No merge necessary
  !!Alert.Message('No obsolete transactions found')
else
  !cs = CURRENT SESSION
  !locrf = !cs.locationname.dbref()
  !transdbstr = 'TRANSACTION/' & !locrf.locid
  !promptstr = 'Merging all sessions of transaction DB ' & !transdbstr & '...'
  $P $!promptstr
  MERGE CHANGES $!transdbstr
  $P Merge complete
  !!Alert.Message(!numdel.string() & ' obsolete transactions deleted transaction database purge/merge complete')
endif
endfunction
Index
A
ADMIN Daemon . . . . . . . . . . . . . . . . . . . 3:1 Areas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5:4
E
Extracts . . . . . . . . . . . . . . . . . . . . . . . . 16:1 access . . . . . . . . . . . . . . . . . . . . . . 16:8 children . . . . . . . . . . . . . . . . . . . . . 16:1 claim restrictions . . . . . . . . . . . . . 16:11 creating . . . . . . . . . . . . . . . . . . . . . 16:3 creating working . . . . . . . . . . . . . . . 16:5 dropping changes . . . . . . . . . . . . 16:14 explicit claim . . . . . . . . . . . . . . . . 16:11 extract claim . . . . . . . . . . . . . . . . . . 16:9 flushing . . . . . . . . . . . . . . . . . . . . 16:13 flushing command failure . . . . . . . 16:11 hierarchy . . . . . . . . . . . . . . . . . . . . 16:7 implicit claim . . . . . . . . . . . . . . . . 16:11 issuing changes . . . . . . . . . . . . . . 16:14 master . . . . . . . . . . . . . . . . . . . . . . 16:1 merging changes . . . . . . . . . . . . . 16:16 numbers . . . . . . . . . . . . . . . . . . . . . 16:6 parent database . . . . . . . . . . . . . . . 16:1 partial operations . . . . . . . . . . . . . 16:15 querying family . . . . . . . . . . . . . . . . 16:2 reference blocks . . . . . . . . . . . . . . 16:7 refreshing . . . . . . . . . . . . . . . . . . . 16:14 releasing claims . . . . . . . . . . . . . . 16:14 sessions . . . . . . . . . . . . . . . . . . . . 16:15 user claim . . . . . . . . . . . . . . . . . . . 16:9 using in . . . . . . . . . . . . . . . . . . . . . 16:8 variant . . . . . . . . . . . . . . . . . . . . . 16:18
C
Command Processing . . . . . . . . . . . . . . . 2:1
D
Database allocation check . . . . . . . . . . . . . . . . 5:1 allocation to location . . . . . . . . . . . . . 5:1 creating extract . . . . . . . . . . . . . . . . 16:3 creating master . . . . . . . . . . . . . . . . 16:3 de-allocation . . . . . . . . . . . . . . . 5:2, 5:3 deleting . . . . . . . . . . . . . . . . . . . . . . 11:1 macros . . . . . . . . . . . . . . . . . . . . . . 10:4 manual update . . . . . . . . . . . . . . . . 10:1 master of extract . . . . . . . . . . . . . . . 16:1 merging . . . . . . . . . . . . . . . . . . . . . . 6:1 reconfiguring . . . . . . . . . . . . . . . . . . 13:1 recovery . . . . . . . . . . . . . . . . . . . . . 12:1 recovery of global . . . . . . . . . . . . . . 12:2 recovery of primary . . . . . . . . . . . . . 12:2 recovery of primary location . . . . . . 12:2 recovery of secondary . . . . . . . . . . 12:1 synchronisation . . . . . . . . . . . . . . . 10:1 update delay . . . . . . . . . . . . . . . . . . 10:2 update protection . . . . . . . . . . . . . . 10:7 update timing . . . . . . . . . . . . . . . . . 10:4 updating . . . . . . . . . . . . . . . . . . . . . 10:1 DESIGN Manager files . . . . . . . . . . . . . 10:5
F
Firewall . . . . . . . . . . . . . . . . . . . . . . . . . 18:1
G
Global Daemon access rights . . . . . . . . . . . . . . . . . . 3:1 diagnostics . . . . . . . . . . . . . . . . . . . . 4:1 location . . . . . . . . . . . . . . . . . . . . . . . 3:1
writing to . . . . . . . . . . . . . . . . . . . . . 7:1
H
Hub changing . . . . . . . . . . . . . . . . . . . . . . 9:1 recovering . . . . . . . . . . . . . . . . . . . . . 9:2
I
ISODRAFT files . . . . . . . . . . . . . 10:5, 17:2
K
Kernel Command . . . . . . . . . . . . . . 2:1, 7:1
L
Locations off-line . . . . . . . . . . . . . . . . . . . . . . . 17:1
M
Macros . . . . . . . . . . . . . . . . . . . . . . . . . 10:4
P
Pending file . . . . . . . . . . . . . . . . . . . 2:1, 8:1 PLOT files . . . . . . . . . . . . . . . . . . 10:5, 17:2 Projects backing up . . . . . . . . . . . . . . . . . . . 15:1
T
Transaction Audit . . . . . . . . . . . . . . . . . . 7:1 Transaction database audit trail cancelled commands . . . . 7:7 audit trail dates and counts . . . . . . . 7:5 audit trail from TRINCO . . . . . . . . . . 7:2 audit trail from TROPER . . . . . . . . . . 7:4 audit trail from TROUCO . . . . . . . . . 7:3 audit trail results and messages . . . . 7:7 commands . . . . . . . . . . . . . . . . 2:1, 7:1 management . . . . . . . . . . . . . . . . . 12:3 merging . . . . . . . . . . . . . . . . . . . . . 12:3 merging and purging . . . . . . . . . . . 7:13 reading from . . . . . . . . . . . . . . . . . . . 7:1 reconfiguring . . . . . . . . . . . . . . . . . . 12:4 renewing . . . . . . . . . . . . . . . . . . . . . 12:3