ibm.com/redbooks
International Technical Support Organization

z/OS V1R3 DFSMS Technical Guide

July 2002
SG24-6569-00
Take Note! Before using this information and the product it supports, be sure to read the general information in Notices on page ix.
First Edition (July 2002)

This edition applies to Version 1 Release 3 of z/OS, Program Number 5694-A01. This document was created or updated on July 17, 2002.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. QXXE Building 80-E2
650 Harry Road
San Jose, California 95120-6099

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.
Copyright International Business Machines Corporation 2002. All rights reserved.
Note to U.S. Government Users: Documentation related to restricted rights. Use, duplication, or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Notices . . . ix
Trademarks . . . x

Preface . . . xi
The team that wrote this redbook . . . xii
Notice . . . xiii
Comments welcome . . . xiii

Chapter 1. Release summary . . . 1
1.1 The DFSMS family . . . 2
1.2 z/OS V1R3 DFSMS release focus . . . 2
1.2.1 SMS enhancements . . . 2
1.2.2 DFSMSdfp enhancements . . . 3
1.2.3 DFSMShsm enhancements . . . 5
1.2.4 DFSMSdss enhancements . . . 5
1.2.5 DFSMSrmm enhancements . . . 5
1.2.6 Advanced copy services enhancements . . . 6

Chapter 2. SMS enhancements . . . 7
2.1 Data set allocation versus creation . . . 8
2.2 Dynamic volume count . . . 8
2.2.1 Out-of-space failures . . . 8
2.2.2 Addressing out-of-space conditions . . . 9
2.2.3 What is supported . . . 9
2.2.4 Enabling DVC . . . 10
2.2.5 Everything you ever wanted to know about volumes . . . 10
2.2.6 Allocation and candidate volumes . . . 11
2.2.7 DVC and candidate volumes . . . 12
2.2.8 LISTCAT, volumes, and DVC . . . 13
2.2.9 Deciding what value to specify for DVC . . . 14
2.2.10 The size of the TIOT . . . 14
2.2.11 When changing the DVC is not dynamic . . . 15
2.2.12 Other considerations . . . 15
2.2.13 Required maintenance . . . 16
2.2.14 Advantages of implementing DVC . . . 16
2.2.15 DVC, extend processing, and space constraint relief . . . 16
2.3 Extend storage groups . . . 17
2.3.1 Defining extended storage groups . . . 18
2.3.2 Rules for extend storage groups . . . 20
2.4 Overflow storage groups . . . 23
2.4.1 Defining the overflow storage group . . . 24
2.4.2 Using overflow storage groups . . . 25
2.5 Automation assistance . . . 27
2.5.1 Message routing . . . 27
2.5.2 SMF . . . 29
2.6 Data set separation . . . 30
2.6.1 Allocation terminology . . . 30
2.6.2 Requirement . . . 30
2.6.3 The answer . . . 30
2.6.4 Contents of the data set separation profile . . . 32
2.6.5 Allocation examples . . . 32
2.6.6 Restrictions for separation processing . . . 33
2.6.7 Usage considerations . . . 34
2.6.8 Required maintenance . . . 34
2.7 Summary of factors influencing volume selection . . . 35

Chapter 3. DFSMSdfp enhancements . . . 39
3.1 Large volume support . . . 40
3.1.1 Large volume design considerations . . . 40
3.1.2 3390-9 overview . . . 41
3.1.3 Limitations of the 3390-9 solution . . . 41
3.1.4 Coexistence support . . . 41
3.1.5 EXCP considerations . . . 42
3.1.6 Interfaces and vendor code . . . 42
3.1.7 Performance . . . 42
3.1.8 Implementation considerations . . . 42
3.1.9 Required support . . . 43
3.2 IDCAMS . . . 43
3.2.1 Changes to GDG base processing . . . 43
3.2.2 Extended alias support . . . 45
3.3 Catalog management . . . 47
3.3.1 Defining catalogs . . . 47
3.3.2 Data set name validity checking . . . 47
3.3.3 Performance, diagnostic, and nice-to-have . . . 48
3.4 CONFIGHFS . . . 51
3.4.1 How it works today . . . 51
3.4.2 How it works with this release . . . 51
3.4.3 Other enhancements . . . 52
3.5 VSAM . . . 53
3.5.1 VSAM parameter definition support removed . . . 53
3.5.2 System managed buffering . . . 54
3.6 Large real storage . . . 56
3.6.1 Media Manager . . . 57
3.7 REUSE for striped data sets . . . 57
3.8 Expiration date and retention period . . . 57
3.9 Record level sharing . . . 59
3.9.1 Coupling facility structures . . . 59
3.9.2 Caching CIs larger than 4K . . . 59
3.9.3 Lock structures . . . 60
3.10 OAM enhancements . . . 61
3.10.1 Multiple object backup support . . . 61
3.10.2 Improved reliability and usability . . . 67

Chapter 4. DFSMShsm enhancements . . . 69
4.1 The common recall queue . . . 70
4.1.1 Our test environment . . . 71
4.2 Which environments can use a CRQ . . . 72
4.3 How to enable this function . . . 73
4.3.1 Defining the members of the CRQ . . . 73
4.3.2 Sizing the CRQ . . . 75
4.3.3 Defining the CRQ structure . . . 76
4.3.4 Accessing the CRQ . . . 78
4.4 Commands to manipulate the common recall queue . . . 84
4.4.1 CANCEL command . . . 84
4.4.2 DELETE command . . . 84
4.4.3 HOLD and RELEASE commands . . . 84
4.4.4 RECALL . . . 86
4.4.5 QUERY command . . . 86
4.4.6 SETSYS command . . . 89
4.4.7 STOP command . . . 89
4.4.8 AUDIT command . . . 90
4.5 Using the CRQ . . . 90
4.5.1 Placement of data on the queue . . . 91
4.5.2 Processing when CRQ is full . . . 92
4.5.3 Selection of requests from the queue . . . 94
4.5.4 Disconnecting from the common recall queue . . . 96
4.5.5 Impact of HOLD and RELEASE commands . . . 98
4.6 Recovering from errors . . . 101
4.6.1 Loss of a DFSMShsm or LPAR . . . 102
4.6.2 Loss of a CF or connectivity to a CF . . . 102
4.6.3 Recall request processing . . . 104
4.6.4 Auditing the CRQ . . . 106
4.6.5 Rebuilding the CRQ . . . 107
4.6.6 Additional diagnostic data collection . . . 109
4.7 Other new enhancements . . . 109
4.7.1 Keyrange data sets . . . 109
4.7.2 DFSMShsm large volume support . . . 110

Chapter 5. DFSMSdss enhancements . . . 111
5.1 HFS logical copy support . . . 112
5.1.1 z/OS view of an HFS . . . 112
5.1.2 z/OS UNIX view of an HFS . . . 112
5.1.3 How HFS logical copy works . . . 112
5.1.4 Target HFS space allocation . . . 113
5.1.5 Restrictions . . . 113
5.1.6 Usage considerations . . . 114
5.1.7 Performance . . . 114
5.1.8 Coexistence . . . 114
5.2 Enhanced dump conditioning . . . 115
5.2.1 Overview . . . 115
5.2.2 DUMPCONDITIONING phase I with OW45674 . . . 115
5.2.3 DUMPCONDITIONING phase II with OW48234 . . . 116
5.2.4 Restrictions . . . 117
5.2.5 Performance . . . 117
5.3 Large volume support . . . 117

Chapter 6. DFSMSrmm enhancements . . . 119
6.1 Changes introduced with z/OS V1R3 DFSMS . . . 120
6.1.1 Special character support . . . 120
6.1.2 Changed messages for improved diagnostics . . . 121
6.1.3 HELP moved from SYS1.SEDGHLP1 to SYS1.HELP . . . 121
6.1.4 OAM multiple object backup . . . 121
6.2 New functions introduced since DFSMSrmm R10 . . . 122
6.2.1 Software MTL support . . . 122
6.2.2 Multi-volume alert in DFSMSrmm dialog . . . 122
6.2.3 Updated conversion tools . . . 123
6.2.4 VSAM extended function support for DFSMSrmm CDS . . . 126
6.2.5 DFSMSrmm application programming interface . . . 127
6.2.6 PARMLIB options SMSACS and PREACS . . . 128
6.2.7 Storage location as home location . . . 129
6.2.8 Enhanced bin management . . . 133
6.2.9 DSTORE by location . . . 138
6.2.10 Extended extract file . . . 140
6.2.11 Report generator . . . 140
6.2.12 Buffered tape mark support for A60 controller . . . 155

Chapter 7. Advanced copy services enhancements . . . 157
7.1 Extended remote copy . . . 158
7.1.1 XRC overview . . . 158
Multiple XRC . . . 159
Configuring XRC or MXRC . . . 160
Coupling XRC . . . 161
QUICKCOPY . . . 164

Appendix A. Record changes in z/OS V1R3 DFSMS . . . 165
A.1 VSAM RLS SMF record changes . . . 166
A.2 System managed buffering SMF record changes . . . 166
A.3 Open/Close/EOV SMF record changes . . . 166
A.4 OAM SMF record changes . . . 168
A.5 HSM FSR record changes . . . 169

Appendix B. Maintenance information . . . 171
B.1 APAR II12431 . . . 172
B.1.1 Error description . . . 172
B.2 APAR II12896 . . . 174
B.2.1 Problem conclusion . . . 174
B.3 APAR OW53834 . . . 176
B.3.1 Error description . . . 176

Glossary . . . 179

Related publications . . . 191
IBM Redbooks . . . 191
Other resources . . . 191
Referenced Web sites . . . 193
How to get IBM Redbooks . . . 193
IBM Redbooks collections . . . 193

Index . . . 195
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service. IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive Armonk, NY 10504-1785 U.S.A. The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you. This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice. 
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk. IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you. Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products. This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrates programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. 
You may copy, modify, and distribute these sample programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing application programs conforming to IBM's application programming interfaces.
ix
Trademarks
The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX, CICS, DB2, DFS, DFSMS/MVS, DFSMSdfp, DFSMSdss, DFSMShsm, DFSMSrmm, DFSORT, Enterprise Storage Server, Extended Services, IBM, IMS, Magstar, MORE, MVS, OS/390, Parallel Sysplex, Perform, RACF, Redbooks, Redbooks (logo), RMF, S/390, SP, TotalStorage, z/OS, z/VM
The following terms are trademarks of other companies: ActionMedia, LANDesk, MMX, Pentium and ProShare are trademarks of Intel Corporation in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Java and all Java-based trademarks and logos are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. C-bus is a trademark of Corollary, Inc. in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. SET, SET Secure Electronic Transaction, and the SET Logo are trademarks owned by SET Secure Electronic Transaction LLC. Other company, product, and service names may be trademarks or service marks of others.
Preface
Each release of DFSMS builds upon the previous version to provide enhanced storage management, data access, device support, program management, and distributed data access for the z/OS platform in a system-managed storage environment.

This IBM Redbook provides a technical overview of the functions and enhancements in z/OS V1R3 DFSMS. It provides you with the information you need to understand and evaluate the content of this DFSMS release, along with practical implementation hints and tips. Also included are enhancements that were made available prior to this release through an enabling PTF and have been integrated into this release.

z/OS V1R3 DFSMS includes catalog function enhancements that improve your ability to self-diagnose problems. New SMS function reduces out-of-space conditions and provides data set separation at the physical control unit level. The RLS coupling facility caching enhancements allow you to specify the amount of data that is cached in the coupling facility cache structure defined to DFSMS. VSAM enhancements include record level sharing (RLS) coupling facility (CF) caching of data records greater than 4K, and I/O processing with real addresses greater than 2 GB for most VSAM data sets. DFSMShsm provides a common recall queue that is shared by multiple DFSMShsm hosts, allowing the recall workload to be balanced across each of the hosts. Object Access Method (OAM) and DFSMSdss provide data backup and recovery enhancements. DFSMSrmm incorporates reporting, storage location, and usability functions made available prior to z/OS V1R3 DFSMS.

This book is written for storage professionals and system programmers who have experience with the components of DFSMS. It provides sufficient information so you can start prioritizing the implementation of new functions and evaluating their applicability in your DFSMS environment.
xi
xii
Savur Rao
Mark Thomen
Dan Win
IBM San Jose

Stevan Allen
Pamela Baird
Harold Koeppel
Gene McGaha
Lisa Taylor
John Thompson
Glenn Wilcock
IBM Tucson

Mike Wood
IBM United Kingdom
Notice
This publication is intended to help storage administrators and system programmers evaluate and implement the features and functions in DFSMS z/OS V1R3. The information in this publication is not intended as the specification of any programming interfaces that are provided by Version 1 Release 3 of z/OS. See the PUBLICATIONS section of the IBM Programming Announcement for IBM z/OS V1R3 for more information about what publications are considered to be product documentation.
Comments welcome
Your comments are important to us! We want our Redbooks to be as helpful as possible. Send us your comments about this or other Redbooks in one of the following ways: Use the online "Contact us" review redbook form found at:
ibm.com/redbooks
Chapter 1. Release summary
DFSMS is now an integral element of the z/OS operating system, and each release of z/OS builds on the function DFSMS offers. This book provides a technical update on the DFSMS function delivered in z/OS V1R3: what is new, along with practical implementation information. Each chapter is devoted to a functional area. Those with an interest in DFSMSdss or VSAM, for example, can go directly to the particular chapter to see what is new or enhanced for that specific functional area.
SMS data set allocation failure messages can now be written to the hardcopy console log, allowing automation products to take corrective action. New SMF type 42 subtype 10 records are also created when an allocation fails because of insufficient space.

Data set separation automates the separation of specified data sets to reduce the impact of single points of failure. To use data set separation, you create a data set separation profile and specify it in the SMS base configuration. During volume selection for data set allocation, SMS attempts to separate, at the physical control unit (PCU) level, the data sets that are listed in the profile. The high availability requirement for data sets such as the couple data set and backup couple data set (or software-duplexed DB2 logs) can be met by allocating them on separate storage controllers.
During SVC dump processing, if an address space being dumped has an active catalog request, the catalog address space (CAS) is also dumped. This provides better problem determination when the problem is related to the catalog address space. The MODIFY command is enhanced to allow the catalog performance statistics to be reset.

The CONFIGHFS command for a path name can now be issued from any system in the sysplex; it is no longer limited to the system owning the HFS. In addition, ISPF has been enhanced to correctly display full details of HFS data sets owned or mounted on different systems in the sysplex.

The IDCAMS DEFINE parameters KEYRANGE, REPLICATE, and IMBED are ignored with this release of DFSMS. They are no longer useful on newer control units, which have high bandwidth and deliver all data through cache.

System managed buffering is extended to support VSAM spheres containing alternate indexes. In addition, two attempts will be made to build DO buffer pools using less space before reverting to the DW access bias if insufficient space is available for the first build attempt. This release also allows large (64-bit) real storage to be exploited for buffers for all VSAM record organizations.

DFSMS supports greater than 4K coupling facility caching for VSAM spheres that are opened for RLS processing. The SMS data class keyword RLS CF Cache Value and its values (NONE, UPDATESONLY, and ALL) specify which data is placed in the coupling facility cache structures that are defined to DFSMS.

The REUSE attribute is now available for VSAM striped data sets. DB2 table spaces defined with the REUSE attribute can take advantage of VSAM striping. This enhancement can provide a significant reduction in the elapsed time of batch jobs accessing large DB2 table spaces.
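As a sketch of the IDCAMS change described above (the data set name and space values are invented for illustration), a DEFINE CLUSTER that still codes the obsolete keywords continues to run, but on z/OS V1R3 the IMBED, REPLICATE, and KEYRANGE attributes are simply not applied to the cluster:

```
//DEFKSDS  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(EXAMPLE.SAMPLE.KSDS) -
         INDEXED                            -
         KEYS(8 0)                          -
         RECORDSIZE(200 400)                -
         CYLINDERS(10 5)                    -
         IMBED                              -
         REPLICATE)
/*
```

The job completes normally, so existing DEFINE jobs need not be changed immediately; however, removing the obsolete keywords avoids confusion, since the resulting cluster no longer has imbedded or replicated index records.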
You can now change the expiration date of an existing SMS-managed non-VSAM data set using the JCL EXPDT or RETPD specification (similar to the existing capability for non-SMS-managed data sets) when the data set is opened for output.

VSAM RLS used to cache directory entries and data control intervals in the coupling facility only if they were less than or equal to 4096 bytes. Now control intervals larger than 4096 bytes can be cached in the coupling facility. This requires that all of the sharing systems be at z/OS Version 1 Release 3 and that a G4 or later processor be used.
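To illustrate the expiration date change (data set names are invented), a step that opens an existing SMS-managed data set for output can now reset its retention simply by coding RETPD or EXPDT on the DD statement:

```
//UPDATE   EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=EXAMPLE.INPUT.DATA,DISP=SHR
//SYSUT2   DD DSN=EXAMPLE.SMS.DATA,DISP=OLD,RETPD=365
```

When EXAMPLE.SMS.DATA is opened for output, its expiration date is changed to reflect the 365-day retention period, just as it would be for a non-SMS-managed data set.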
Object access method (OAM) supports multiple object backup, which allows a system within an SMS complex to have more than one object backup storage group. Up to two object backup storage groups can be associated with each object storage group. Separate backup copies of objects can then be maintained, physically based on the object storage group to which the object belongs.
DFSMSrmm reporting has been enhanced by a new report generator function added to the DFSMSrmm ISPF dialog.

DFSMSrmm provides support for VSAM extended addressability (EA) for the control data set (CDS). If you do not use an extended format (EF) data set for the DFSMSrmm CDS, the CDS size is limited to a maximum of 4 GB. Using an EF data set enables you to use VSAM functions such as multi-volume allocation, compression, or striping. EF also enables you to define a CDS that uses VSAM EA, allowing the CDS to grow beyond 4 GB.

This release provides DFSMSrmm home location enhancements. You can now define DFSMSrmm storage locations for use as home locations. This enables you to have a location name for each non-system-managed library and to better manage non-library-resident volumes.

DFSMSrmm extended bin management provides new options for efficient selection of bins during storage location management. Previously, DFSMSrmm users were required to confirm the completion of a tape volume move before the bin number could be reused by DFSMSrmm. The extended bin management options provide additional flexibility in when and how volume movement is performed by DFSMSrmm. The new choices include:

- A choice of how bins are allocated to volumes. A volume can be assigned a bin in a storage location as soon as a move has been started for the volume previously assigned to that bin, or move completion can be required before a new volume is assigned to the bin location.
- DFSMSrmm can assign volumes to bins in volume serial number sequence or in bin number sequence.
- Information can be obtained about where volumes reside and where they are moving to or from.
- Storage location management processing can optionally be performed by location.
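As an illustrative sketch of how a CDS larger than 4 GB becomes possible: extended format and extended addressability come from the SMS data class named on the DEFINE (EXTADDR is an assumed data class name with Data Set Name Type=EXT and Extended Addressability=Y). The cluster name, key, record size, and space values here are examples only; take the actual DFSMSrmm CDS attributes from the sample job supplied with the product:

```
//DEFCDS   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE CLUSTER (NAME(RMM.PROD.CDS)  -
         DATACLAS(EXTADDR)            -
         KEYS(56 0)                   -
         RECORDSIZE(512 9216)         -
         CYLINDERS(50 10)             -
         INDEXED)
/*
```

Because the EA attribute is carried in the data class rather than in IDCAMS keywords, an existing CDS is typically moved to EF/EA by defining a new cluster like this and copying the records into it.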
Chapter 2.
SMS enhancements
The SMS enhancements in z/OS V1R3 DFSMS are designed to eliminate failures due to lack of space within a storage group or, if a failure still occurs, to ensure that information is available to:
- Automate recovery actions.
- Prevent further occurrences of the problem.
- Accurately diagnose the cause of the problem.

In this chapter we discuss the following enhancements for SMS managed data sets:
- Dynamic volume count
- Extend storage groups
- Overflow storage groups
- Automation assistance and reporting
- Data set separation

We then consider how these new facilities affect volume selection for SMS data sets.
SCDS Name . . . : SYS1.SMS.SCDS
Data Class Name : JMETEST

To ALTER Data Class, Specify:
  Data Set Name Type  . . . . . .          (EXT, HFS, LIB, PDS or blank)
    If Ext  . . . . . . . . . . .          (P=Preferred, R=Required or blank)
    Extended Addressability . . . N        (Y or N)
    Record Access Bias  . . . . .          (S=System, U=User or blank)
  Space Constraint Relief . . . . Y        (Y or N)
    Reduce Space Up To (%)  . . . 0        (0 to 99 or blank)
    Dynamic Volume Count  . . . . 15       (1 to 59 or blank)
  Compaction  . . . . . . . . . .          (Y, N, T, G or blank)
  Spanned / Nonspanned  . . . . .          (S=Spanned, N=Nonspanned or blank)
By default, DVC is not enabled. This default also applies to existing data classes. Supported down-level systems will tolerate the specification of DVC in a data class but will take no action based on it. There is no support available to implement this function on down-level systems. Note: DVC for VSAM striped data sets is supported even though they are not eligible for Space Constraint Relief.
Primary volumes
Volumes with space allocated; that is, either data has been written to the volume, or an extent exists because the data set was defined with guaranteed space.
Candidate volumes
Do not have volume serial numbers associated with them; they indicate how many volumes the data set can extend to. These are most commonly defined by the volume count in the data class or by the VOLUMES parameter of an IDCAMS DEFINE command.
Specific volume count: 2 (volumes with data) Nonspecific volume count: 4 (candidate volumes from catalog record) Cluster dynamic volume count: 20 (from data class) Specific volumes returned to allocation: 2 Nonspecific volumes returned to allocation: 18 (DVC value - specific count) Total count of volumes returned to allocation: 20 (DVC value)
If the data class DVC value is 3, and the catalog volume count is 6 (2 specific and 4 non-specific), then the input to allocation would be as shown in Figure 2-3.
Specific volume count: 2 (volumes with data) Nonspecific volume count: 4 (candidate volumes from catalog record) Cluster dynamic volume count: 3 (from data class) Specific volumes returned to allocation: 2 Nonspecific volumes returned to allocation: 4 Total count of volumes returned to allocation: 6
In either case, allocation is not given any indication as to whether a DVC value was used. Other examples can be found in z/OS V1R3.0 DFSMS Using Data Sets, SC26-7410. Note that the DVC only applies to base clusters and not upgrade alternate indexes (AIXs). For AIX and DVC considerations refer to 2.2.12, Other considerations.
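The two examples above can be reduced to a small computation. The sketch below is illustrative only (the function name is ours, not a DFSMS interface); it assumes, as the examples show, that the DVC value takes effect only when it exceeds the total volume count already recorded in the catalog.

```python
def volumes_returned_to_allocation(specific, nonspecific, dvc):
    """Return (specific, nonspecific, total) counts passed to allocation.

    The catalog supplies `specific` volumes (with data) and `nonspecific`
    candidate volumes; the data class DVC value only takes effect when it
    exceeds the total volume count already in the catalog record.
    """
    catalog_total = specific + nonspecific
    total = max(catalog_total, dvc)
    return specific, total - specific, total

# First example: DVC 20 overrides the catalog count of 6
assert volumes_returned_to_allocation(2, 4, 20) == (2, 18, 20)
# Second example (Figure 2-3): DVC 3 is below the catalog count of 6, so it is ignored
assert volumes_returned_to_allocation(2, 4, 3) == (2, 4, 6)
```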
This data set currently has one primary volume and four candidates; there is no indication of the DVC value. This is to be expected, because the DVC value in the data class can be changed at any time and is therefore only relevant when the data set is allocated. If the data set extended to five volumes, the LISTCAT output would be as shown in Figure 2-5.
If the data set now tried to extend to a sixth volume, there would be a failure if the DVC value was less than 6. If the DVC value was 6 or greater, then the LISTCAT would indicate the new primary volume, as shown in Figure 2-6.
There are now more volumes in the catalog record than in the original definition; this is because primary volumes must be recorded in the catalog. This new volume in the catalog record is the only indication that DVC processing has been invoked successfully.
Specific volume count: 4 (volumes with data in base cluster) Specific volume count: 1 (AIX, not being used by the base cluster) Nonspecific volume count: 6 (candidate volumes for base cluster) Cluster dynamic volume count: 59 (from data class) Specific volumes returned to allocation: 5 (base + alternate index) Nonspecific volumes returned to allocation: 55 (DVC - base specific) Total count of volumes returned to allocation: 60
In this instance, the index data is on a volume that does not contain data from the base cluster, which causes the total volume count to be 60. This is greater than the maximum of 59 allowed by z/OS and will cause a failure. If the index data had been on a volume that also contained data from the base cluster, the total volume count would have been 59, and there would have been no failure.
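The arithmetic behind this failure can be sketched as follows. The helper below is illustrative only, assuming (per the example above) that DVC applies only to the base cluster, so AIX volumes not shared with the base are counted on top of the DVC value.

```python
def total_volumes_with_aix(base_specific, aix_specific_off_base, dvc):
    """Total volume count returned to allocation when an upgrade AIX has
    data on volumes not shared with the base cluster. DVC applies only to
    the base cluster, so AIX-only volumes are added on top of the DVC value.
    """
    nonspecific = dvc - base_specific          # candidates filled in from DVC
    return base_specific + aix_specific_off_base + nonspecific

MAX_VOLUMES = 59  # z/OS limit on volumes for a single allocation request

total = total_volumes_with_aix(4, 1, 59)
assert total == 60 and total > MAX_VOLUMES     # the failing case in the text
assert total_volumes_with_aix(4, 0, 59) == 59  # AIX shares a base volume: no failure
```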
Assuming that both DVC and space constraint relief are enabled, extend processing tries to get space in the following order when selecting from similar volumes:
1. On volumes in the current storage group (SG) that are below threshold
2. On volumes in the extend SG that are below threshold
3. On volumes in the current SG that are above threshold
4. On volumes in the extend SG that are above threshold
5. Space constraint relief processing is entered
For more information on SMS volume selection for data set creation and extends, refer to 2.7, Summary of factors influencing volume selection.
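The ordering above can be expressed as a sort key, with the threshold state dominating and the current storage group preferred within each band. This is a minimal illustration; the dictionary fields are made up for this sketch and are not part of any SMS interface.

```python
def extend_candidate_order(volumes):
    """Order candidate volumes the way extend processing considers them:
    below-threshold volumes first, and within each threshold band the
    current storage group ahead of the extend storage group.

    Each volume is a dict with 'sg' ('current' or 'extend') and
    'above_threshold' (bool); both field names are illustrative.
    """
    return sorted(volumes,
                  key=lambda v: (v['above_threshold'], v['sg'] == 'extend'))

vols = [
    {'name': 'D', 'sg': 'extend',  'above_threshold': True},
    {'name': 'B', 'sg': 'extend',  'above_threshold': False},
    {'name': 'C', 'sg': 'current', 'above_threshold': True},
    {'name': 'A', 'sg': 'current', 'above_threshold': False},
]
assert [v['name'] for v in extend_candidate_order(vols)] == ['A', 'B', 'C', 'D']
```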
IEF403I JMEALLOC - STARTED - TIME=16.13.51 - ASID=0019 - SC64 IGD17216I JOBNAME (JMEALLOC) PROGRAM NAME (IKJEFT01) 956 STEPNAME (UNLOAD ) DDNAME (OUT ) DATA SET (MHLRES4.TESTC.DSN5 ) WHICH WAS INITIALLY ALLOCATED TO STORAGE GROUP (SGMHL03) WAS EXTENDED SUCCESSFULLY TO EXTEND STORAGE GROUP (SGMHL04)
It has always been possible to increase the pool of volumes available to hold the first extent of an SMS managed data set by specifying more than one pool storage group as a target for allocation in your ACS routines. However, once the data set's first extent had been allocated, only the volumes in the storage group that contained the first extent were considered as targets by end of volume (EOV) processing. Extend storage groups allow volumes in a second storage group, the extend storage group, to be considered as well.
For a data set to extend successfully to an extend storage group:
- An extend storage group must be defined in the active SMS configuration for the storage group that will contain the first extent of the data set.
- The data set must be capable of extending to a second (or additional) volume. This can be achieved if a candidate volume exists in the catalog entry, if the allocation is explicitly specified as multi-volume, or if a value exists for Dynamic Volume Count in the data class assigned to the data set. We describe this in the section Dynamic volume count on page 8.
- Data sets that are restricted to a single volume (for example, data sets with DSORG=PO, or VVDSs) are not able to use extend storage groups.
- There is no relaxation of the current limits on the total number of extents for data sets. To use an extend storage group, the data set must not have reached its maximum number of extents.

Volumes in your extend storage groups tend to be selected by EOV processing when volumes in the primary storage group are over their allocation high threshold. Once a storage group has begun to use its extend storage group, it tends to continue to do so until the volumes in the primary storage group fall below their high allocation threshold. We discuss this further in Summary of factors influencing volume selection on page 35.
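The conditions above can be collected into a simple predicate. This is a sketch only; the dictionary keys are illustrative names for the state described in the text, not a DFSMS interface.

```python
def can_use_extend_sg(ds):
    """True when a data set can extend into its extend storage group.

    `ds` is a dict describing the data set and its environment; every
    key name here is illustrative.
    """
    # The data set can take another volume via a catalog candidate,
    # an explicit multi-volume allocation, or a data class DVC value.
    can_add_volume = ds['has_candidate_volume'] or ds['dvc_specified']
    return (ds['extend_sg_defined']            # defined in the active SMS configuration
            and can_add_volume
            and not ds['single_volume_only']   # e.g. DSORG=PO data sets, VVDSs
            and ds['extents_used'] < ds['max_extents'])  # extent limit is unchanged

base = dict(extend_sg_defined=True, has_candidate_volume=False,
            dvc_specified=True, single_volume_only=False,
            extents_used=100, max_extents=123)
assert can_use_extend_sg(base)
assert not can_use_extend_sg(dict(base, single_volume_only=True))
assert not can_use_extend_sg(dict(base, extents_used=123))
```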
SCDS Name . . . . . : SYS1.SMS.SCDS
Storage Group Name  : SGMHL03

To ALTER Storage Group, Specify:
  Description ==> TESTING FOR z/OS 1.3
              ==>
  Auto Migrate . . Y  (Y, N, I or P)   Migrate Sys/Sys Group Name . .
  Auto Backup  . . Y  (Y or N)         Backup Sys/Sys Group Name  . .
  Auto Dump  . . . N  (Y or N)         Dump Sys/Sys Group Name  . . .
  Overflow . . . . N  (Y or N)         Extend SG Name . . . . . . . . SGMHL04
                                       (1 to 8 characters)
  Dump Class . . .                     Dump Class . . .
  Dump Class . . .                     Dump Class . . .
  Dump Class . . .
  Allocation/migration Threshold:  High . . 85  (1-99)   Low . . 5  (0-99)
  Guaranteed Backup Frequency . . . . . . . NOLIMIT      (1 to 9999 or NOLIMIT)

ALTER SMS Storage Group Status . . . N  (Y or N)
Use ENTER to Perform Verification and Selection;
The second storage group, whose name matches the value you specified in the Extend SG Name field for the first storage group, must exist and be defined as a pool storage group. Your SCDS will not validate successfully if there is not a storage group defined that matches the extend storage group name. We illustrate this in Figure 2-10.
VALIDATION RESULTS

VALIDATION RESULT:   ERRORS DETECTED
SCDS NAME:           SYS1.SMS.SCDS
ACS ROUTINE TYPE:    *
DATE OF VALIDATION:  2002/03/20
TIME OF VALIDATION:  19:27

IGD06202I STORAGE GROUP SGMHL03 INCORRECTLY SPECIFIES EXTEND STORAGE GROUP NAME SGMHL05
You do not need to update your ACS routines to use an extend storage group.
This support does not allow you to define extend volumes within an existing storage group, or to define only a subset of volumes in an extend storage group as eligible for use by a specific data set. Storage groups defined as extend groups for another storage group remain eligible for use as normal pool storage groups; using them as conventional pool storage groups is transparent to all users. Volumes in extend storage groups are not considered during the initial creation of a data set unless the extend storage group is included in your storage group ACS routine as a target for allocation.
Guaranteed space
Data sets whose storage class specifies guaranteed space may be able to use extend storage groups if non-specific candidate volumes are added to their catalog entry after the initial data set creation, for example as the result of an IDCAMS ALTER ADDVOL. Data sets created with explicit volume serials only (no candidate volumes) cannot use extend storage groups unless the data class assigned to them specifies DVC and the extent being taken is driven by DVC processing. Volumes added by DVC processing are always added to the catalog as candidate volumes.
You can reference extend storage groups directly in your Automatic Class Selection (ACS) routines, so you can use extend storage groups for allocations other than data set extends. There can be only one extend storage group for any single pool storage group. If you specify multiple storage groups for a particular allocation, then the storage group selected by allocation must itself specify an extend storage group for extend processing to use one.

The storage group assigned as an extend storage group can also be defined as an overflow storage group. See the section Overflow storage groups on page 23 for a description of overflow storage groups.

When you wish to enable an extend storage group, the only changes required are to the storage group definition in ISMF. You do not need to update your ACS routines to add references to the extend storage group.

It is possible to use a single extend storage group for all other pool storage groups, effectively a one-to-many star configuration. You could also design a one-to-one ring configuration, where each storage group extends to the next storage group in the ring. We illustrate these different configurations in Figure 2-11.

A star configuration of your pool storage groups gives you the most flexibility, but it requires that you can dedicate volumes to an extend pool. In a ring configuration, if a single storage group fills, it becomes unavailable as an extend storage group and may also impact the storage group that follows it as data sets extend into it. The advantage of the ring configuration is that you make use of resources that are already allocated.
Figure 2-11 Extend storage group configurations: star (SG1 through SG5 all extending to a single SG Extend) and ring (SG1 through SG5 each extending to the next group in the ring)
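The two configurations can be modeled as a mapping from each pool storage group to its single extend storage group. The storage group names below are illustrative.

```python
# Star: every pool storage group names one dedicated extend pool.
star = {sg: 'SGEXT' for sg in ('SG1', 'SG2', 'SG3', 'SG4', 'SG5')}

# Ring: each storage group extends to the next, and the last wraps
# around to the first.
ring_members = ['SG1', 'SG2', 'SG3', 'SG4', 'SG5']
ring = {sg: ring_members[(i + 1) % len(ring_members)]
        for i, sg in enumerate(ring_members)}

assert set(star.values()) == {'SGEXT'}  # star: one dedicated extend pool
assert ring['SG5'] == 'SG1'             # ring: the last group wraps to the first
```

The mapping makes the trade-off visible: in the star, only SGEXT must be kept below threshold; in the ring, a full SG2 degrades both SG1 (which extends into it) and SG3 (which receives SG2's overflow of extents).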
You are not required to define an extend storage group for any storage group; you can define extend storage groups for just a subset of your pool storage groups. Even if you choose a ring configuration, data sets from one storage group are only permitted to extend to the extend group defined for the storage group that holds the first extent of the data set.

If you define extend storage groups, there may be advantages in defining the volumes in the extend storage group as large volumes. These volumes are more likely to be under their storage group's high allocation threshold and so more likely to be chosen for allocation. We discuss support for large volumes in Large volume support on page 40.

DFSMShsm space management is not impacted by a data set having extended to an extend storage group. The first extent of any data set still exists on a volume in the original storage group, and this determines how space management processes the data set. DFSMShsm availability management may be impacted if you use concurrent copy or any other copy process based on underlying hardware functions.

Before extend storage groups were introduced, all extents of a data set had to reside in the same storage group, so storage groups were often based on the underlying device types. With the use of an extend storage group, data sets can have extents in multiple storage groups. If you have previously configured separate storage groups for devices with different capabilities, for example RVAs and the IBM TotalStorage Enterprise Storage Server (ESS), you need to consider which devices are added to your extend storage group and whether you require more than one extend storage group (potentially one per device type).
For example, if a data set extends across multiple storage groups and you request a concurrent copy using hardware functions such as SnapShot and FlashCopy, or you use a volume copy mechanism such as PPRC or XRC, the copy may not be successful. You need to ensure that the copy method you rely on can support a data set potentially allocated across a number of different control units. If you rely on a hardware-based copy solution, you may need to define a number of extend storage groups, at least one per physical control unit, or ensure that your extend storage group contains volumes from each control unit. Allocation will favor volumes that meet requirements for specific hardware functions; we discuss this in Summary of factors influencing volume selection on page 35. But other factors, for example data set separation, can outweigh these criteria.
If you rely on DFSMSdss or DFSMShsm full volume dumps for data recovery, you will need to ensure that your extend storage group is processed at the same time as the storage group that contains the start of the data set. Failure to do this will result in either an incomplete copy of the data set or a non-synchronized copy.
Many customers currently achieve a similar result by using storage groups or volumes defined to SMS in QUINEW status; these are often called spill storage groups. The difference between spill and overflow storage groups is that volumes in a spill storage group need to be in quiesced status, while volumes in an overflow group can be defined as enabled. Figure 2-12 shows a section of the JOBLOG output of a data set allocation that was directed to an overflow storage group.
IGD17223I JOBNAME (JMEREP01) PROGRAM NAME (IKJEFT01) STEPNAME (S1 ) DDNAME (OUT ) DATA SET (MHLRES4.TESTC.OVRF1 ) WAS ALLOCATED TO AN OVERFLOW STORAGE GROUP SGMHL02
Overflow storage groups are considered only for the initial creation of a data set; they are not seen by EOV processing. There are two steps to consider when implementing an overflow storage group:
- Defining the overflow storage group in ISMF
- Adding the storage group to your storage group ACS routine
Panel  Utilities  Help
------------------------------------------------------------------------------
DGTDCSG2                     POOL STORAGE GROUP ALTER
Command ===>

SCDS Name . . . . . : SYS1.SMS.SCDS
Storage Group Name  : SGMHL02

To ALTER Storage Group, Specify:
  Description ==> TESTING FOR SMS 1.3 RESIDENCY
              ==>
  Auto Migrate . . Y  (Y, N, I or P)   Migrate Sys/Sys Group Name . .
  Auto Backup  . . Y  (Y or N)         Backup Sys/Sys Group Name  . .
  Auto Dump  . . . N  (Y or N)         Dump Sys/Sys Group Name  . . .
  Overflow . . . . Y  (Y or N)         Extend SG Name . . . . . . . .
                                       (1 to 8 characters)
  Dump Class . . .                     Dump Class . . .
  Dump Class . . .                     Dump Class . . .
  Dump Class . . .
  Allocation/migration Threshold:  High . . 85  (1-99)   Low . . 5  (0-99)
  Guaranteed Backup Frequency . . . . . . . NOLIMIT      (1 to 9999 or NOLIMIT)

ALTER SMS Storage Group Status . . . N  (Y or N)
Use ENTER to Perform Verification and Selection;
If you have a mixture of operating system levels in your SMSplex and wish to define overflow storage groups on the systems at z/OS V1R3 DFSMS, we recommend that the status of this group be set to QUINEW on all down-level systems, as they will not recognize the overflow status. There are no toleration PTFs available. We recommend a one-to-many relationship between your overflow and normal pool storage groups. An overflow storage group must be a pool storage group. You can use it for non-overflow allocations, but each data set directed to the overflow storage group will receive message IGD17223I in both JOBLOG and SYSLOG. Overflow storage groups are considered only during the initial creation of a data set; once the data set begins to extend, only the primary and extend storage groups, if present, are available.
Once a storage group has been defined as an overflow storage group, you must update your storage group ACS routines to add the overflow storage group. In Figure 2-14 we show a fragment of the ACS routines that we used to allow allocations to flow to an overflow storage group. Our overflow storage group was SGMHL02.
PROC 0 STORGRP
  SELECT(&DSN)
    WHEN(MHLRES4.TESTB.**)
      DO
        SET &STORGRP = 'SGMHL02'
        EXIT CODE(0)
      END
    WHEN(MHLRES4.TESTC.**)
      DO
        SET &STORGRP = 'SGMHL03','SGMHL02'
        EXIT CODE(0)
      END
    WHEN(MHLRES4.TESTD.**)
      DO
        SET &STORGRP = 'SGMHL04'
        EXIT CODE(0)
      END
    OTHERWISE
      DO
        SET &STORGRP = 'SGMHL01'
        EXIT CODE(0)
      END
  END /* END SELECT */
END /* END PROC */
Figure 2-14 Storage group ACS routine assigning an overflow storage group
Storage groups defined with Overflow ==> Y can also be used as extend storage groups for other pools. When an overflow pool storage group contains more volumes than a non-overflow pool storage group, specified volume counts might cause volumes in the overflow storage group to be preferred over volumes in the pool storage group during volume selection. This becomes increasingly probable as the primary pools near their high allocation thresholds. We discuss this in more detail in Summary of factors influencing volume selection on page 35.
Volumes from the overflow storage group will only be selected by allocation if there are insufficient volumes under their high allocation threshold in any of the eligible non-overflow storage groups.

Once an extent has been allocated on a volume in a non-overflow storage group, volumes in an overflow storage group are not considered as targets by extend processing unless the overflow storage group is also defined as an extend storage group. Similarly, if the initial allocation is made to a volume in an overflow storage group, then all subsequent data set extents will be taken either in the overflow storage group or in its extend storage group, if one has been defined. The volumes in the ACS-eligible storage groups will not be considered for extend processing, because the overflow pool storage group becomes the primary storage group in this case.

Unlike extend storage groups, overflow storage groups will contain the initial extent of a data set. You need to ensure that any space management or availability management functions enabled on the primary pool are also enabled on the overflow storage group. If you rely on full volume processing for data archiving or recovery, you must ensure that data sets allocated to an overflow storage group are included. If you are using DFSMShsm to manage full volume dumps, you could either ensure that the overflow pool storage group is assigned the relevant dump classes, or use interval migration or command migration specifying CONVERT to move data back to the storage groups that should have initially contained it.
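The rule for which storage groups extend (EOV) processing will consider, given where the first extent landed, can be sketched as follows. The names and the mapping interface are illustrative.

```python
def eov_candidate_sgs(first_extent_sg, extend_map):
    """Storage groups visible to extend processing for a data set whose
    first extent landed in `first_extent_sg`.

    Overflow storage groups are never added here: they are invisible to
    EOV unless they happen to be some group's extend storage group. If
    the data set was created directly in an overflow group, that group
    simply acts as the primary (first_extent_sg) for this purpose.
    """
    sgs = [first_extent_sg]
    ext = extend_map.get(first_extent_sg)  # at most one extend SG per pool
    if ext:
        sgs.append(ext)
    return sgs

# Created in SGMHL03, which extends to SGMHL04; the overflow group
# SGMHL02 is not considered:
assert eov_candidate_sgs('SGMHL03', {'SGMHL03': 'SGMHL04'}) == ['SGMHL03', 'SGMHL04']
# Created directly in the overflow group: it becomes the primary, and
# without its own extend SG it is the only candidate:
assert eov_candidate_sgs('SGMHL02', {'SGMHL03': 'SGMHL04'}) == ['SGMHL02']
```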
IEF344I JMEREP01 S1 OUT - ALLOCATION FAILED DUE TO DATA FACILITY SYSTEM ERROR IGD17287I DATA SET MHLRES4.TESTC.OVRF2 COULD NOT BE ALLOCATED NORMALLY, SPACE CONSTRAINT RELIEF (MULTIPLE VOLUMES) WILL BE ATTEMPTED IGD17284I ALLOCATION ON STORAGE GROUP SGMHL03 WAS ATTEMPTED BUT ENOUGH SPACE COULD NOT BE OBTAINED, PROCESSING CONTINUES FOR DATA SET MHLRES4.TESTC.OVRF2 IGD17289I DATA SET MHLRES4.TESTC.OVRF2 COULD NOT BE ALLOCATED WITH SPACE CONSTRAINT RELIEF(MULTIPLE VOLUMES). SPACE REDUCTION AND/OR 5 EXTENT LIMIT RELIEF WILL BE ATTEMPTED IGD17284I ALLOCATION ON STORAGE GROUP SGMHL03 WAS ATTEMPTED BUT ENOUGH SPACE COULD NOT BE OBTAINED, PROCESSING CONTINUES FOR DATA SET MHLRES4.TESTC.OVRF2
Figure 2-15 Joblog failure messages due to requested space not available
18.40.49 JOB05746  IGD17272I VOLUME SELECTION HAS FAILED FOR INSUFFICIENT SPACE
                   DATA SET MHLRES4.TESTC.OVRF2
                   JOBNAME (JMEREP01) STEPNAME (S1      )
                   PROGNAME (IKJEFT01) DDNAME (OUT     )
                   REQUESTED SPACE QUANTITY = 55 KB
                   STORCLAS (STANDARD) MGMTCLAS (MCDB22) DATACLAS (JMETEST)
                   STORGRPS (SGMHL03 )
18.40.49 JOB05746  -JMEREP01 S1 FLUSH 0 .00 .00 .00
The following types of failures will cause information to be written to SYSLOG as well as to the JOBLOG:
- Extend failures
- Volume selection failures
- Volume redirection messages (the selection of either an extend or overflow storage group)

There are several benefits of writing these messages to SYSLOG:
- Storage administrators do not need access to JOBLOGs to determine the cause of problems. Often JOBLOGs have been purged or archived by the time the storage administrator is informed of the problem, or storage administrators do not have security access to view JOBLOGs.
- Allocation failure messages are available to automation products for further action, for example, scheduling a DFSMShsm command migration of the pool that has reached its high allocation threshold.
- It is possible to report on these messages using SYSLOG scanning tools.
Note: If you are using automation to trigger space management of a pool, you should immediately target the source pool, the one that the allocation was directed to, and some time later also target the extend or overflow pool that may have been used for the allocation. If overflow or extend processing was successful, it is probable that, for the short term at least, the data set that was allocated to these pools will still be in use.
You may wish to consider a regular scheduled sweep of your overflow and extend storage groups to move data that has landed there back to volumes in their proper storage groups. You could do this by generating, for example, DFSMShsm MIGRATE CONVERT commands, after you ensure that the primary storage groups are below their high allocation thresholds.
2.5.2 SMF
A new SMF record, type 42 subtype 10, is now created when an allocation fails because of insufficient space. These records are produced only for failures, not when an allocation succeeds because an overflow pool was selected, and they are not produced in response to EOV or EOD ABENDs. The record contains the following information about the failing allocation:
- Job name
- Step name
- DD name
- Data set name
- Space quantity requested
- Data class
- Management class
- Storage group
The format of the record is described by macro IGWSMF. We have included the mapping of the relevant part of this macro in Open/Close/EOV SMF record changes on page 166.
2.6.2 Requirement
System-critical data, such as system configuration data sets, JES2 checkpoint data sets, and logging for subsystems such as DB2 or IMS, is often held in two or more copies to protect against failure. Having both copies on the same physical control unit (PCU) introduces a single point of failure. Avoiding this situation currently requires either manual data set placement or storage groups that are aligned to PCUs, and these strategies must be maintained throughout the life of the data sets, not just for their initial allocation. Application data may also benefit from being separated across PCUs to reduce recovery time in the event of a subsystem failure.
You can use the DS QD,xxxx,1,RCD command to test which of your PCUs supports the RCD command. The profile information can be in a sequential data set or a member of a PDS or PDSE. The name of the sequential data set or member is specified in the base configuration, as shown in Figure 2-17.
DGTDBSA1                                                           Page 1 of 2
Command ===>

SCDS Name . : SYS1.SMS.SCDS
SCDS Status : VALID

To ALTER SCDS Base, Specify:
  Description ===> BASE SMS CONFIG FOR OE
              ===>
  Default Management Class . .                (1 to 8 characters)
  Default Unit . . . . . . . .                (esoteric or generic device name)
  Default Device Geometry
    Bytes/Track . . . . . . .  56664          (1-999999)
    Tracks/Cylinder . . . . .  15             (1-999999)
  DS Separation Profile  ==> 'SYS1.SMS.SEP.PDSE(SEP1)'  (Data Set Name)
In this example, the profile data set is a member of a PDSE. The separation profile you specify must exist and contain valid syntax when you validate your SMS configuration; otherwise, validation will fail. In Figure 2-18, we show the output from an unsuccessful validation; in this case the specified separation data set was a PDS, but no member name was specified in the base configuration.
VALIDATION RESULTS

VALIDATION RESULT:   ERRORS DETECTED
SCDS NAME:           SYS1.SMS.SCDS
ACS ROUTINE TYPE:    *
DATE OF VALIDATION:  2002/03/29
TIME OF VALIDATION:  14:04
IGD06031I DATA SET SEPARATION PROFILE SYS1.SMS.SEP.PDS COULD NOT BE ACCESSED. SMS RETURN CODE 00000008 GET REASON CODE 000C6000
When a new SMS configuration is activated as the result of one of these commands, the other systems in the SMSplex are notified and read the profile data set. Refer to Required maintenance on page 34 for APARs that affect data set separation.
The specification of FAIL(PCU) indicates that separation is required. The order of the data set names has no significance; the first data set to be allocated will be successful if space is available, and the allocation of the second and subsequent data sets will fail if space is not available on another PCU. The specification of FAIL(NONE) indicates that separation is preferred: allocation will attempt to direct these data sets to separate PCUs if possible, but if it is not possible to separate them, they can be allocated on any PCU. The complete syntax is described in z/OS V1R3.0 DFSMSdfp Storage Administration Reference, SC26-7402.
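As a sketch only: based on the FAIL keyword described here and the SEP FAIL(NONE) specification quoted with Figure 2-21, a separation profile might contain statements of the following shape. The DSNLIST keyword and the data set names are our assumptions for illustration; the authoritative syntax is in SC26-7402.

```text
SEP FAIL(PCU)
DSNLIST(SYS1.JES2.CKPT1,SYS1.JES2.CKPT2)

SEP FAIL(NONE)
DSNLIST(PAYROLL.DB2.LOGCOPY1,PAYROLL.DB2.LOGCOPY2)
```

The first group would make separation mandatory for the JES2 checkpoints, while the second would merely prefer separation for the hypothetical log copies.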
IGD17206I VOLUME SELECTION HAS FAILED - THERE ARE NOT ENOUGH VOLUMES WITH SUFFICIENT SPACE FOR DATA SET MHLRES4.TEST1.SEP2 IGD17277I THERE ARE (10) CANDIDATE VOLUMES OF WHICH (1) ARE ENABLED OR QUIESCED IGD17290I THERE WERE 1 CANDIDATE STORAGE GROUPS OF WHICH THE FIRST 1 WERE ELIGIBLE THE CANDIDATE STORAGE GROUPS WERE:SGMHL01 IGD17279I 9 VOLUMES WERE REJECTED BECAUSE THE SMS VOLUME STATUS WAS DISABLED IGD17279I 5 VOLUMES WERE REJECTED BECAUSE THEY DID NOT MEET SEPARATION CRITERIA
Figure 2-20 JOBLOG from allocation failure due to data set separation
JOBLOG messages for an allocation allowed to proceed after a separation failure are shown in Figure 2-21. Separation was specified as SEP FAIL(NONE). These messages are the only notification that the requested separation was not achieved; the job completed with RC=0 for the step that allocated this data set.
IGD17271I ALLOCATION HAS BEEN ALLOWED TO PROCEED FOR DATA SET MHLRES4.TEST1.SEP2 ALTHOUGH VOLUME COUNT REQUIREMENTS COULD NOT BE MET IGD101I SMS ALLOCATED TO DDNAME (DD5 ) DSN (MHLRES4.TEST1.SEP5 ) STORCLAS (STANDARD) MGMTCLAS (MCDB22) DATACLAS (JMETEST) VOL SER NOS= MHLS2A
All the volumes in all the specified storage groups are candidates for the first, or primary, list. The primary list consists of online volumes that meet all the specified criteria in the storage class and data class, are below threshold, and whose volume status and storage group status are enabled. All volumes on this list are considered equally qualified to satisfy the data set creation request, and volume selection starts from this list.

Volumes that do not meet all the criteria for the primary volume list are placed on the secondary list. If there are no primary volumes, SMS selects from the secondary volumes.

Volumes are marked for the tertiary list if the number of volumes in the storage group is less than the number of volumes requested. If there are no secondary volumes available, SMS selects from the tertiary candidates.

Volumes that do not meet the required specifications (ACCESSIBILITY = CONTINUOUS, AVAILABILITY = STANDARD or CONTINUOUS, ENABLED or QUIESCED, ONLINE, and so on) are marked rejected and are not candidates for selection.

After the system selects the primary space allocation volume, that volume's associated storage group is used to select any remaining volumes requested for the data set. If you specify an extend storage group, the data set may be extended to the specified extend storage group.
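The four lists can be sketched as a classification function. The field names are illustrative; the rules follow the description above.

```python
def classify_volume(v, requested_count, sg_size):
    """Place one volume on the primary, secondary, tertiary, or rejected
    list. `v` is a dict of booleans with illustrative names;
    `requested_count` is the number of volumes the request needs and
    `sg_size` is the number of volumes in the volume's storage group.
    """
    if not v['meets_required_specs']:      # e.g. not ONLINE, ACCESSIBILITY unmet
        return 'rejected'
    if sg_size < requested_count:          # storage group too small for the request
        return 'tertiary'
    if (v['online'] and v['meets_sc_dc_criteria']
            and v['below_threshold'] and v['enabled']):
        return 'primary'
    return 'secondary'

good = dict(meets_required_specs=True, online=True,
            meets_sc_dc_criteria=True, below_threshold=True, enabled=True)
assert classify_volume(good, 1, 5) == 'primary'
assert classify_volume(dict(good, below_threshold=False), 1, 5) == 'secondary'
assert classify_volume(good, 10, 5) == 'tertiary'
assert classify_volume(dict(good, meets_required_specs=False), 1, 5) == 'rejected'
```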
Table 2-1 is taken from z/OS V1R3.0 DFSMSdfp Storage Administration Reference, SC26-7402, which contains the most complete summary of volume selection.
Table 2-1 Volume selection preference attributes (summary of recovered content): each attribute carries a power-of-two weight that is added to a volume's preference score. The weights recovered from the table range from 2048 (volume count) and 1024 (high threshold) through 512, 256, 128, 64, and 32, down to 16 (ACCESSIBILITY), with a weight of 2 when the volume provides the direct or sequential millisecond response (MSR) time specified or defaulted in the storage class.
If a criterion is not met or not specified, it is assigned a value of zero (0). SMS adds the values for each volume in the preference list and prefers the volume with the highest cumulative score for allocation. The z/OS V1R3.0 DFSMSdfp Storage Administration Reference, SC26-7402, contains additional information on volume selection, including the example that follows in Table 2-2. In this example SMS has returned two volumes, A and B. Volume B receives the higher preference score and will be the one selected by allocation for this data set.
Table 2-2 Volume preferencing example

Volume selection preferencing criteria                                    Score
Volume A
  Volume does not satisfy data set separation
  Volume and its associated Storage Group SMS status are ENABLED
  Volume resides in a non-overflow Storage Group
  Volume resides in a control unit that supports ACCESSIBILITY and
    the Storage Class ACCESSIBILITY value is PREFERRED
  Total preference value for Volume A
Volume B
  Volume satisfies data set separation                                     4096
  Volume's associated Storage Group SMS status is QUIESCED                    0
  Volume does not reside in a non-overflow Storage Group                      0
  Volume resides in a control unit that does not support ACCESSIBILITY,
    and the Storage Class ACCESSIBILITY value is PREFERRED                    0
  Total preference value for Volume B                                      4096
If multiple volumes are returned to allocation as being available, allocation will select one. If no volumes are returned, the data set cannot be allocated, and SMS will perform space constraint relief (if specified in the data class) and repeat the selection process.
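The cumulative scoring can be sketched in a few lines. Only the 4096 weight for data set separation is taken from the Table 2-2 example; the remaining weights below are placeholders, not the documented values.

```python
# Assumed weights for this sketch; only 'separation' (4096) comes from
# the Table 2-2 example, the others are illustrative placeholders.
WEIGHTS = {'separation': 4096, 'non_overflow': 64, 'enabled': 32,
           'accessibility': 16}

def preference_score(volume):
    """Sum the weights of the criteria this volume satisfies; criteria
    that are not met (or not specified) contribute zero."""
    return sum(w for crit, w in WEIGHTS.items() if volume.get(crit))

vol_a = {'separation': False, 'enabled': True,
         'non_overflow': True, 'accessibility': True}
vol_b = {'separation': True, 'enabled': False,
         'non_overflow': False, 'accessibility': False}

assert preference_score(vol_b) == 4096
# Separation carries the largest weight, so Volume B wins despite
# failing every other criterion:
assert preference_score(vol_b) > preference_score(vol_a)
```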
Chapter 3.
DFSMSdfp enhancements
In this chapter we describe the changes introduced in the DFSMSdfp component. The following topics are covered:
- Large volumes
- IDCAMS
- Catalog management
- CONFIGHFS
- VSAM
- CICS Record Level Sharing (RLS)
- Object Access Method (OAM)
The changes span a variety of areas; some have implications for the user community, while others are internal and require no action.
3.1.7 Performance
We did not have the opportunity to do any performance testing, but the use of PAVs will be essential if performance is a consideration for the data on large volumes. If you are running on an IBM D/T2064 processor, or equivalent, the Workload Manager (WLM) controlled Dynamic CHPID management should be evaluated as it could improve overall DASD subsystem throughput.
Do not run the standard volume initialization job. With potentially over 32000 cylinders available for allocation, some thought needs to be put into sizing the VTOC, indexed VTOC, and VVDS.
3.2 IDCAMS
Enhancements to GDG base processing and LISTCAT command output for both GDG bases and symbolic resolution provide you with information to better manage your storage environment.
LISTC ENT(MHLRES4.TEST.GDG) ALL
GDG BASE ------ MHLRES4.TEST.GDG
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       DATASET-OWNER----MHLRES3     CREATION--------2002.090
       RELEASE----------------2     LAST ALTER------2002.099
     ATTRIBUTES
       LIMIT------------------5     SCRATCH     NOEMPTY
     ASSOCIATIONS
       NONVSAM--MHLRES4.TEST.GDG.G0001V00
NONVSAM ---- MHLRES4.TEST.GDG.G0001V00
LISTC ENT(MHLRES4.TEST.GDG) ALL
GDG BASE ------ MHLRES4.TEST.GDG
     IN-CAT --- CATALOG.CS
     HISTORY
       DATASET-OWNER----MHLRES4
       RELEASE----------------2
     ATTRIBUTES
       LIMIT------------------8
     ASSOCIATIONS
Figure 3-2 Output from LISTC on pre z/OS V1R3 DFSMS system
When you migrate to z/OS V1R3 DFSMS, any expiration dates applied to GDG bases will no longer be effective; it will be possible to delete a GDG base without specifying the PURGE operand. There will be no way of determining whether an expiration date had been assigned to a GDG base, and if so, what its value was.
In Figure 3-3 we show the output from an attempt to alter the expiration date of a GDG on a z/OS V1R3 DFSMS system.
ALTER MHLRES4.TEST.GDG TO(99365)
IDC3019I INVALID ENTRY TYPE FOR REQUESTED ACTION
IDC3009I ** VSAM CATALOG RETURN CODE IS 60 - REASON CODE IS IGG0CLE8-30
IDC0532I **ENTRY MHLRES3.TEST.GDG NOT ALTERED
Figure 3-3 Output from attempt to alter expiration information for GDG base
If you have any processing based on the value found in the GDG expiration date field you will need to review this before upgrading your first system to z/OS V1R3 DFSMS.
Note: Please ensure that you install the fix for HIPER APAR OW53804 when you install z/OS V1R3 DFSMS. We have included the text for this APAR in Maintenance information on page 171. The PTFs for this APAR should be applied to all down level systems that will share catalogs with a system running z/OS V1R3 DFSMS.
The variable &SYSNAME resolves to SC64 on the system where this was tested; therefore, references to MHLRES4.TEST.ALIAS should access data set MHLRES4.SC64.ALIASTST. There are several potential causes for the association between alias and data set not being successfully resolved, such as these:
- The symbol is not in use yet, because the system has not been IPLed.
- The data set has been given the wrong name.
- The symbol is entered incorrectly in the IDCAMS DEFINE.
- The symbol is entered incorrectly in the PARMLIB member.
Prior to this release, listing the catalog would show the output in Figure 3-5, which tells you that the symbolic was entered correctly, but nothing more.
ALIAS --------- MHLRES4.TEST.ALIAS
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       RELEASE----------------2
     ASSOCIATIONS
       SYMBOLIC-MHLRES4.&SYSNAME..ALIASTST
Figure 3-5 Output from a LISTC of a symbolic alias prior to z/OS V1R3 DFSMS
In this release, there are two possible results, shown in Figure 3-6 and Figure 3-7.
ALIAS --------- MHLRES4.TEST.ALIAS
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       RELEASE----------------2
     ASSOCIATIONS
       SYMBOLIC-MHLRES4.&SUSNAME..ALIASTST
       RESOLVED-MHLRES4.&SUSNAME..ALIASTST
This tells you that the symbolic SUSNAME could not be resolved, which in this case would enable you to solve the problem, as it should have been SYSNAME.
ALIAS --------- MHLRES4.TEST.ALIAS
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       RELEASE----------------2
     ASSOCIATIONS
       SYMBOLIC-MHLRES4.&SYSNAME..ALIASTST
       RESOLVED-MHLRES4.SC64.ALIASTST
In Figure 3-7, you can see that the symbolic has been successfully resolved, so you will need to start looking to see if the data set is correctly defined.
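The substitution behavior behind these listings can be sketched as follows. The regex-based resolver and the in-memory symbol table are illustrative stand-ins for the system symbol processing done at IPL from the PARMLIB member, not the actual catalog code.

```python
# Sketch of system-symbol substitution in a data set name: "&SYM." is
# replaced with the symbol's value, and the doubled ".." leaves a single
# qualifier separator behind. Unknown symbols are left unchanged, which
# is exactly what the new RESOLVED line in the LISTC output lets you spot.
import re

def resolve_symbolic(name, symbols):
    def substitute(match):
        sym = match.group(1)
        return symbols.get(sym, "&" + sym + ".")  # leave unknowns as-is
    return re.sub(r"&(\w+)\.", substitute, name)

symbols = {"SYSNAME": "SC64"}
print(resolve_symbolic("MHLRES4.&SYSNAME..ALIASTST", symbols))
# MHLRES4.SC64.ALIASTST
print(resolve_symbolic("MHLRES4.&SUSNAME..ALIASTST", symbols))
# MHLRES4.&SUSNAME..ALIASTST
```

The second call mirrors Figure 3-6: a mistyped symbol comes back unresolved in the RESOLVED field, pointing you straight at the typo.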
MODIFY CATALOG,DISABLE(DSNCHECK)
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED
You can use the MODIFY CATALOG,REPORT command to check the status of data set name checking. MODIFY CATALOG,ENABLE can be used to reinstate data set name checking.
F CATALOG,REPORT
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC359I CATALOG REPORT OUTPUT 431
*CAS**************************************
* CATALOG COMPONENT LEVEL = HDZ11G0      *
* CATALOG ADDRESS SPACE ASN = 002F       *
* SERVICE TASK UPPER LIMIT = 180         *
* SERVICE TASK LOWER LIMIT = 60          *
* HIGHEST # SERVICE TASKS = 5            *
* CURRENT # SERVICE TASKS = 5            *
* MAXIMUM # OPEN CATALOGS = 1,024        *
* ALIAS TABLE AVAILABLE = YES            *
* ALIAS LEVELS SPECIFIED = 1             *
* SYS% TO SYS1 CONVERSION = OFF          *
* CAS MOTHER TASK = 007AF898             *
* CAS MODIFY TASK = 007AF608             *
* CAS ANALYSIS TASK = 007A0E88           *
* CAS ALLOCATION TASK = 007AF2E0         *
* VOLCAT HI-LEVEL QUALIFIER = SYS1       *
* DELETE UCAT/VVDS WARNING = ON          *
* DATA SET SYNTAX CHECKING = ENABLED     *
*CAS**************************************
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED
This could be done by automation or by the use of the COMMNDxx member in PARMLIB. Other components, for example JES2, are already making use of the SERVERS function. We highly recommend this as it can save having to reproduce a problem. Enabling this may add a small additional overhead to dump processing but this should only be seen where multiple address spaces are being dumped. You may need to increase the size of your dump data sets to accommodate the additional address spaces being included.
For example, this could be done at every shift change. This will give a rolling history which can be used for diagnostic purposes. To make this more useful, there is a minor change to the output of the MODIFY CATALOG,REPORT,PERFORMANCE command, which now has a heading line showing the start time of the period that the statistics apply to; an example is shown in Figure 3-10. A new operand, RESET, has been added to the MODIFY CATALOG,REPORT,PERFORMANCE command; this allows you to reset the statistics gathered. We show this command in Figure 3-11. We recommend automating the regular issuing of the command MODIFY CATALOG,REPORT,PERFORMANCE followed by the command MODIFY CATALOG,REPORT,PERFORMANCE(RESET), so that over time, you can get a feel for the normal catalog workload on your system. This will enable you to recognize changes in catalog behavior more easily.
F CATALOG,REPORT,PERFORMANCE
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC359I CATALOG PERFORMANCE REPORT 442
*CAS***************************************************
*  Statistics since 22:59:05.28 on 04/08/2002         *
*  -----CATALOG EVENT-----    --COUNT--  ---AVERAGE-- *
*  Entries to Catalog             7,602   23.171 MSEC *
*  BCS ENQ Shr Sys                8,285    0.279 MSEC *
*  BCS ENQ Excl Sys                  20    0.796 MSEC *
*  BCS DEQ                       15,024    0.253 MSEC *
*  VVDS RESERVE CI                2,360    1.463 MSEC *
*  VVDS DEQ CI                    2,360    0.234 MSEC *
*  VVDS RESERVE Shr              15,708    0.266 MSEC *
*  VVDS RESERVE Excl                 22    0.401 MSEC *
*  VVDS DEQ                      15,730    0.276 MSEC *
*  SPHERE ENQ Excl Sys                2    0.261 MSEC *
*  SPHERE DEQ                         2    0.205 MSEC *
*  CAXWA ENQ Shr                      2    0.072 MSEC *
*  CAXWA DEQ                          2    0.069 MSEC *
*  VDSPM ENQ                      8,305    0.152 MSEC *
*  VDSPM DEQ                      8,305    0.158 MSEC *
*  BCS Get                        7,190    0.351 MSEC *
*  VVDS I/O                      18,101    5.755 MSEC *
*  VLF Define Major                   9    0.005 MSEC *
*  VLF Identify                  10,989    0.000 MSEC *
*  BCS Allocate                       4    0.342 MSEC *
*  SMF Write                      1,098    0.053 MSEC *
*  IXLCONN                            2    1.351 SEC  *
*  IXLCACHE Read                      6    0.062 MSEC *
*  MVS Allocate                      10  110.765 MSEC *
*  Capture UCB                       11    0.012 MSEC *
*  Uncapture UCB                     64    0.010 MSEC *
*  SMS Active Config                  2    0.590 MSEC *
*  RACROUTE Auth                    162    0.216 MSEC *
*  ENQ SYSZPCCB                     584    0.024 MSEC *
*  DEQ SYSZPCCB                     584    0.016 MSEC *
*CAS***************************************************
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED
F CATALOG,REPORT,PERFORMANCE(RESET)
IEC351I CATALOG ADDRESS SPACE MODIFY COMMAND ACTIVE
IEC352I CATALOG ADDRESS SPACE MODIFY COMMAND COMPLETED
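The snapshot-and-reset cycle recommended above can be driven from automation as in this sketch, where issue_command is a hypothetical stand-in for however your automation product submits operator commands and collects their responses.

```python
# Sketch of periodic catalog statistics collection: report the current
# interval, then reset the counters so the next report covers only the
# new interval. issue_command is an assumed callback, not a real API.

def snapshot_catalog_stats(issue_command):
    report = issue_command("MODIFY CATALOG,REPORT,PERFORMANCE")
    issue_command("MODIFY CATALOG,REPORT,PERFORMANCE(RESET)")
    return report

# Usage with a dummy command interface that just logs what was issued.
issued = []
def fake_issue(cmd):
    issued.append(cmd)
    return "IEC359I CATALOG PERFORMANCE REPORT ..."

snapshot_catalog_stats(fake_issue)
print(issued)
```

Scheduling this pair of commands at a fixed interval, such as each shift change, gives each report a known-length window to compare against.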
3.4 CONFIGHFS
The CONFIGHFS command is used to display usage statistics for the HFS data set that contains the path name specified. Figure 3-12 shows the results of a CONFIGHFS command.
The output from the command is exactly the same whether it is issued for an HFS mounted on a different system or for an HFS mounted on the local system. See Figure 3-12 for an example.
/>confighfs /etc/httpd.conf
Statistics for file system HFS.SC64.ETC ( 03/27/02 12:39pm )
File system size:_____14220 (pages) ________55.547(MB)
Used pages:      _____12954 (pages) ________50.602(MB)
Attribute pages: ________42 (pages) _________0.164(MB)
Cached pages:    ________32 (pages) _________0.125(MB)
Seq I/O reqs:    ___________________0
Random I/O reqs: __________________26
Lookup hit:      _________________115
Lookup miss:     _________________181
1st page hit:    _________________334
1st page miss:   __________________30
Index new tops:  ___________________0
Index splits:    ___________________0
Index joins:     ___________________0
Index read hit:  _________________480
Index read miss: ___________________5
Index write hit: __________________44
Index write miss:___________________0
RFS flags:       __________________43(HEX)
RFS error flags: ___________________0(HEX)
High format RFN: ________________3310(HEX)
Member count:    _________________388
Sync interval:   __________________60(seconds)
This removes the requirement to add the install path for confighfs to your path statement (if /usr/sbin is already in your path) and provides a common location for commands.
3.5 VSAM
There are no programming interface changes in this release, but there are changes to keyword support and performance enhancements.
Figure 3-13 Open error for VSAM cluster defined with keyrange
For further details, refer to informational APARs II12431 and II12896; the text for these is included in Maintenance information on page 171. For additional information about the impact of the removal of this support, we recommend that you review Flash10072, which can be found at the Web site:
http://www-1.ibm.com/support/techdocs/atsmastr.nsf/PubAllNum/Flash10072
In addition to the foregoing, there are two internal access techniques that are used during load-mode processing and data set creation. These cannot be specified by the user, and will be invoked internally if the data set is in load mode (HURBA=0) and the keyword SYSTEM is specified for Record Access Bias in the data class or ACCBIAS in the JCL. These two techniques are:
- CO: System-managed buffering with Create Optimization. This is used if SPEED is specified at data set creation.
- CR: System-managed buffering with Create Recovery optimization. This is used if RECOVERY is specified at data set creation.
What's new
There are two enhancements to SMB in this release:
- Retry capability for DO access bias
- AIX support
Retry capability
Currently, SMB defaults to using Direct Weighted (DW) access bias when a failure occurs building a shared resource pool (LSR) for the buffers and I/O-related control blocks during Direct Optimize (DO). With this release, SMB will make additional attempts with fewer resources than are considered optimal before resorting to DW: two attempts with reduced buffers for the data pool, and one attempt for the index pool. An optimal data pool size, including buffer resources, is 20% of the data set allocation.
How it works
If an attempt to build an optimal data pool for DO processing results in a failure due to insufficient virtual storage, an attempt will then be made to build a pool with reduced resources equal to one-half of the optimal amount. Two additional checks are made before this second attempt:
- If the optimal amount was already below the minimum pool size for DO, then the DW access bias will be used immediately.
- If one-half of the optimal amount is less than or equal to the minimum, this attempt at reducing buffer requirements will be skipped, and an allocation of the minimum pool size will be attempted.
The retry process works as follows:
1. If the first attempt fails, retry with the number of buffers for the data component reduced to 50% of the optimum.
2. If this fails, retry with the number of buffers for the data component reduced to the minimum.
3. If this fails, retry with the number of buffers for the index component reduced to the minimum.
4. If this fails, revert to Direct Weighted (DW).
Note: The minimum data pool size is one megabyte (1 MB), and the minimum index pool size will include enough resources to contain the entire index set plus 20% of the sequence set records.
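The retry sequence and minimum-size rules above can be sketched as follows. try_build stands in for the actual pool construction, raising MemoryError when virtual storage is insufficient; all names and the simulation itself are illustrative, not DFSMS internals.

```python
# Sketch of the SMB Direct Optimize (DO) retry ladder: optimum pool,
# then half the optimum, then the minimum data pool, then the minimum
# index pool, and finally fall back to Direct Weighted (DW).

MIN_DATA_POOL_MB = 1.0  # minimum data pool size: 1 MB

def choose_access_bias(optimum_mb, try_build):
    """Return the access bias actually used: 'DO' or 'DW'."""
    if optimum_mb < MIN_DATA_POOL_MB:
        return "DW"  # optimum already below the DO minimum: no retries
    attempts = [(optimum_mb, "full")]
    half = optimum_mb / 2
    if half > MIN_DATA_POOL_MB:          # otherwise skip the half retry
        attempts.append((half, "full"))
    attempts.append((MIN_DATA_POOL_MB, "full"))     # minimum data pool
    attempts.append((MIN_DATA_POOL_MB, "minimum"))  # minimum index pool
    for data_mb, index_pool in attempts:
        try:
            try_build(data_mb, index_pool)
            return "DO"
        except MemoryError:
            continue
    return "DW"  # every retry failed: revert to Direct Weighted

def storage_limited(limit_mb):
    """Build a try_build that succeeds only under a storage limit."""
    def try_build(data_mb, index_pool):
        if data_mb > limit_mb:
            raise MemoryError
    return try_build

print(choose_access_bias(20, storage_limited(5)))    # DO
print(choose_access_bias(20, storage_limited(0.5)))  # DW
```

In the first call the 20 MB and 10 MB attempts fail, but the 1 MB minimum succeeds, so DO is still used; only when even the minimum pools cannot be built does the sketch fall back to DW.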
AIX support
SMB has been enhanced to support the Direct Optimize access bias for VSAM data sets with associated AIXs. VSAM open processing passes information to SMB for a single sphere and all related components; this information now includes the intent of the open of the cluster/AIX. The intent is either general purpose or upgrade only. The number of data buffers is based on the attribute relating to the open intent. The number of index buffers follows the same calculation used in the current implementation. All related components are defined as the base of the sphere and all associated AIXs. For more detail on SMB, there is a section in the redbook VSAM Demystified, SG24-6105. There is an SMF record change associated with this, which is described in Appendix A, Record changes in z/OS V1R3 DFSMS on page 165.
Figure 3-14 shows an example of this process; the JCL and output have been edited to remove some information. In this case the management class specified NOLIMIT for both EXPIRE DAYS/DATE and RETENTION PERIOD.
//S1       DD DISP=(,CATLG),DSN=MHLRES4.TEST1.EXP4,
//            SPACE=(TRK,(1,1)),LRECL=80,RECFM=FB,DSORG=PS,
//            STORCLAS=STANDARD,MGMTCLAS=MC365,RETPD=5
//*
//S1       EXEC PGM=IKJEFT01
//S1       DD DISP=OLD,DSN=MHLRES4.TEST1.EXP4,RETPD=800
//SYSTSPRT DD SYSOUT=*
//SYSTSIN  DD *
  LISTC ENT('MHLRES4.TEST1.EXP4') ALL
  REPRO IDS('MHLRES4.INPUT') OFILE(S1)
  LISTC ENT('MHLRES4.TEST1.EXP4') ALL

LISTC ENT('MHLRES4.TEST1.EXP4') ALL
NONVSAM ------- MHLRES4.TEST1.EXP4
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       DATASET-OWNER-----(NULL)     CREATION--------2002.085
       RELEASE----------------2     EXPIRATION------2002.090
       ACCOUNT-INFO-----------------------------------(NULL)
READY
REPRO IDS('MHLRES4.INPUT') OFILE(S1)
NUMBER OF RECORDS PROCESSED WAS 9116
READY
LISTC ENT('MHLRES4.TEST1.EXP4') ALL
NONVSAM ------- MHLRES4.TEST1.EXP4
     IN-CAT --- MCAT.SANDBOX.Z03.VSBOX11
     HISTORY
       DATASET-OWNER-----(NULL)     CREATION--------2002.085
       RELEASE----------------2     EXPIRATION------2004.155
       ACCOUNT-INFO-----------------------------------(NULL)
READY
Note that if the data set you are altering already has an expiration date or retention period set, you will receive message IEC507D at the console when you attempt to access the data set. We show this in Figure 3-15.
*IEC507D E 37E4,MHLS2A,JMEEXPD3,S1,MHLRES4.TEST1.EXP4
*193 IEC507D REPLY 'U'-USE OR 'M'-UNLOAD
Figure 3-15 Attempting to overwrite data set with an existing expiration date
Your operational procedures should ensure that this message is responded to appropriately. It is possible that you will see new occurrences of this message, because JCL that is running today may specify a retention period or expiration date that is currently being ignored.
There is also a new parameter in the IGDSMSxx PARMLIB member, RLS_MAXCFFEATURELEVEL, which can have a value of A or Z (the default):
- If Z is specified or defaulted, caching of CIs greater than 4K is not allowed, even if it is specified in the data class.
- If A is specified, caching of CIs greater than 4K is permitted, if specified in the data class.
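A minimal sketch of that decision, with an assumed function name and parameters:

```python
# Sketch of the RLS caching rule for CI sizes: CIs of 4K or less are
# cached as before, while larger CIs require both
# RLS_MAXCFFEATURELEVEL(A) in IGDSMSxx and a data class that requests
# caching of them. Names are illustrative, not the SMSVSAM code.

def ci_eligible_for_cf_cache(ci_size_bytes, max_cf_feature_level="Z",
                             data_class_allows_large=False):
    if ci_size_bytes <= 4096:
        return True  # support for CIs of 4K or less is unchanged
    return max_cf_feature_level == "A" and data_class_allows_large

print(ci_eligible_for_cf_cache(8192))             # False (Z default)
print(ci_eligible_for_cf_cache(8192, "A", True))  # True
```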
To determine the support on a z/OS V1R3 DFSMS system issue the command:
D SMS,SMSVSAM
The reply to this will include the output shown in Figure 3-17.
DISPLAY SMS,SMSVSAM - GLOBAL CACHE FEATURE PARMLIB VALUES
MAXIMUM CF CACHE FEATURE LEVEL = Z
DISPLAY SMS,SMSVSAM - CACHE FEATURE CODE LEVEL VALUES
SYSNAME: SC63   CACHE FEATURE CODE LEVEL = A
CACHE FEATURE LEVEL DESCRIPTION:
Z = No Caching Advanced functions are available
A = Greater than 4K Caching code is available
This shows that, on this particular system, the code is installed to support greater than 4K caching, but it has not been enabled in the PARMLIB member.
Note: The support for CIs which are 4K or less in size is unchanged.
Migration
There are some migration tasks which are required when moving to the z/OS V1R3 release of OAM if you use object backup support. The CBRSMR13 SAMPLIB job must be run if you are migrating from DFSMS/MVS 1.5.0 or OS/390 V2R10 to the current release of DFSMS. The CBRSMR13 SAMPLIB member contains two jobs that modify the object directories, SMR13A and SMR13B. They require customization before running:
- SMR13A performs the migration from any DFSMS/MVS 1.5.0 or OS/390 V2R10 system level of the optical configuration database to the z/OS V1R3 version that supports multiple OBSGs and the second object backup function in the OAM storage management component (OSMC). This job adds a new column, BKTYPE, to the existing VOLUME table, and a new column, BKTYPE, to the existing TAPEVOL table. For recovery purposes, we recommend that you create a DB2 image copy of the existing VOLUME and TAPEVOL tables prior to executing this migration job.
- SMR13B performs the migration from the base version of the OAM object directory tables to the z/OS V1R3 version, which supports second backup copies of objects. This job adds new columns, ODBK2LOC and ODBK2SEC, to the existing object directory tables. For recovery purposes, we recommend that you create a DB2 image copy of the existing object directory tables prior to running this migration job.
Note: After running the CBRSMR13 migration job you may need to run a DB2 reorganization after performing an ALTER to the table.
There are also several optional migration tasks. These tasks need only be performed if you intend to exploit the multiple object backup support:
- Update automatic class selection (ACS) routines to accommodate the new tape data set name of OAM.BACKUP2.DATA. OAM.BACKUP2.DATA is a new tape data set that will be created on OAM tapes that belong to the OBSGs that contain the second backup copies of objects. You will need to perform this step if you are implementing multiple object backup support and storing second backup copies on tape media, and if the tape is to be SMS managed.
- Update the SMS management class definition construct to indicate that Autobackup is allowed.
- Update the SMS management class definition construct to indicate the number of backup versions of objects that you want. The field that displays the number of backup versions now displays the number of backup versions to be maintained for objects. The default value for this field is two, and any value greater than one is treated as two.
- Define multiple OBSGs using ISMF.
- Add the SETOSMC statement to the CBROAMxx member of PARMLIB. Associate at least one OBSG with a SECONDBACKUPGROUP keyword in a SETOSMC statement in order to write a second backup copy of an object. The SETOSMC statement and its associated keywords determine which OBSGs contain the first and second backup copies of the objects that are associated with an object storage group. If SETOSMC statements are not provided, OAM will not process second backup copies of objects.
For detailed migration information, refer to z/OS V1R3.0 DFSMS OAM Planning, Installation, and Storage Administration Guide for Object Support, SC35-0426.
Co-existence
Toleration/coexistence APAR OW47941 is required on all systems which will be running with z/OS V1R3 OAM in the same OAMplex. This APAR introduces several changes:
- It enables previous releases of OAM to coexist in an OAMplex with the z/OS V1R3 level.
- It enables previous releases of OAM to fall back to the original version (down to DFSMS 1.4.0).
- It introduces modified control blocks in OAMplex XCF messages to accommodate different versions of OAM control blocks in the OAMplex.
- There are OAM messages issued for toleration support on lower-level DFSMS systems.
Lower-level systems will only use a single OBSG, but they can share the SCDS in an OAMplex with a z/OS V1R3 system that has multiple OBSGs defined. When OAM encounters multiple OBSGs defined to a lower-level system, the last one defined in the SCDS will be selected for use, and OAM will issue message CBR0230D. You can choose to use the last OBSG to contain all backup copies of objects, or specify that another is to be used. If another OBSG is to be used, message CBR0231A will be issued, which allows another OBSG name to be specified for writing backup copies of objects. If multiple OBSGs are not defined and no SETOSMC statements are specified in the CBROAMxx PARMLIB member, OAM will continue to function as it did prior to z/OS V1R3 DFSMS.
If multiple OBSGs are defined but no SETOSMC statements are specified in the CBROAMxx PARMLIB member, OAM will issue CBR0231A to verify which object storage group should be used for backup processing. Second backup copies of objects will not be written.
CBR1100I has been modified to display which backup copy, if any, is being used for Access Backup processing.
D SMS,OSMC,TASK(name)
CBR9370I has been modified to show statistics for the number of internal work items queued on the work and wait queues and the number of internal work items completed by the write first and write second backup service during OSMC processing.
D SMS,STORGRP(group_name),DETAIL
CBR1130I has been modified to include the names of the first and second backup storage groups associated with this object storage group.
D SMS,STORGRP(ALL),DETAIL
CBR1140I has been modified for readability and contains a new field to indicate if the volume is used to write first or second backup copies of objects: BACKUP TYPE: (BACKUP1|BACKUP2).
F OAM,START,AB,reason[,BACKUP1|BACKUP2]
CBR1075I GLOBAL VALUE FOR BACKUP1 IS backup1
CBR1075I GLOBAL VALUE FOR BACKUP2 IS backup2
F OAM,DISPLAY,SETOSMC,group-name
CBR1075I group_name VALUE FOR BACKUP1 IS backup1
CBR1075I group_name VALUE FOR BACKUP2 IS backup2
These keywords specify the default first and second backup storage groups at the global level. They will be used as the default OBSGs when both of the following are true:
- The object storage group to which the object is defined is not specified on the SETOSMC statement FIRSTBACKUPGROUP and SECONDBACKUPGROUP parameters.
- The management class that is assigned to the object specifies that a first or second backup copy can be written.
SETOSMC STORAGEGROUP(obj_storage_group FIRSTBACKUPGROUP(1st_bu_group))
SETOSMC STORAGEGROUP(obj_storage_group SECONDBACKUPGROUP(2nd_bu_group))
If no second backup group is defined, either globally or on a specific object storage group, then no second backup copy will be taken regardless of the settings in the management class.
Scenario one
Management class settings:
Results of running the OAM space management cycle (OSMC):
- GROUP22 object: two backup copies successfully written
- GROUP44 object: fails (x'0472') attempting to write the second backup
Note: In this scenario, because one of the storage groups has a second backup group defined and the number of backup versions is set to two, two backups will be attempted for all storage groups but will only succeed for storage groups with the SECONDBACKUPGROUP parameter specified.
Scenario two
Management class settings:
Auto Backup = Y
Number of Backup Versions = 2
Results of running the OSMC cycle: one backup copy written for GROUP22 and GROUP44 objects.
Note: In this scenario, there is only one backup copy taken, even though the number of backup versions is set to two in the management class. No storage group has a SECONDBACKUPGROUP parameter specified in the SETOSMC statement.
Scenario three
Management class settings:
Auto Backup = 'Y'
Number of Backup Versions = 1
Results of running the OSMC cycle: one backup copy written for GROUP22 and GROUP44 objects.
Note: In this scenario, even though one storage group has a SECONDBACKUPGROUP parameter specified in the SETOSMC statement, there is still only one backup copy taken since the number of backup versions is set to one in the management class.
As you can see from these scenarios, if you have SECONDBACKUPGROUP parameters specified on the SETOSMC statement in the CBROAMxx PARMLIB member, OSMC will attempt to create a second backup copy for any objects that have a management class with Autobackup and a Number of Backup Versions greater than one (the default is two). If you do not want a second backup copy attempted, then you need to change Number of Backup Versions in your existing management class constructs.
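The three scenarios can be summarized by the decision sketched below; the function names, dictionary layout, and outcome labels are assumptions for illustration, not OAM internals.

```python
# Sketch of whether OSMC writes, fails, or skips the second backup copy
# of an object, following the three scenarios above.

def second_backup_group(storage_group, setosmc):
    """Resolve SECONDBACKUPGROUP: specific setting, else global default."""
    specific = setosmc.get(storage_group, {}).get("SECONDBACKUPGROUP")
    global_default = setosmc.get("GLOBAL", {}).get("SECONDBACKUPGROUP")
    return specific or global_default

def second_backup_outcome(storage_group, setosmc,
                          auto_backup=True, backup_versions=2):
    if not auto_backup or backup_versions < 2:
        return "not attempted"   # scenario three: one version, one copy
    if not any("SECONDBACKUPGROUP" in v for v in setosmc.values()):
        return "not attempted"   # scenario two: no second groups at all
    if second_backup_group(storage_group, setosmc):
        return "written"         # scenario one: GROUP22
    return "failed"              # scenario one: GROUP44 (x'0472')

setosmc = {"GROUP22": {"SECONDBACKUPGROUP": "BACKUP22"}}
print(second_backup_outcome("GROUP22", setosmc))  # written
print(second_backup_outcome("GROUP44", setosmc))  # failed
```

The "failed" branch captures the trap in scenario one: once any group in the SCDS names a second backup group, the attempt is made for every eligible object, including those whose own group has nowhere to write the copy.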
A complete list of TAPE volumes required for recovery is listed in message CBR9827I. If the volumes are available, recovery will proceed when you reply GO to message CBR9810D. If the volumes are not available, recovery can be stopped by replying QUIT to message CBR9810D. If some of the volumes are available and others are not, and you reply GO to message CBR9810D, recovery will be performed for objects from the volumes that are available.

A complete list of OPTICAL volumes required for recovery is listed in message CBR9824I. If the volumes are available, recovery will proceed when you reply GO to message CBR9810D. If the volumes are not available, recovery can be stopped by replying QUIT to message CBR9810D. If some of the volumes are available and others are not, and you reply GO to message CBR9810D, recovery will be performed for objects from the volumes that are available.

Prior to z/OS V1R3 DFSMS, OAM returned the list of volumes in sets of 100, and a response to CBR9820D was required after each set. Once replied to, the next set of 100 volumes was displayed, and so on until all required volumes had been listed.

Informational message CBR9863I is issued at the completion of volume recovery, stating the number of objects attempted and the number successfully recovered.

The volume being recovered is automatically marked non-writable and then restored to its original status. This prevents OAM from selecting the volume for a write request during recovery. If the volume is to remain non-writable after recovery, you must manually change its status.

The ISMF line operator RECOVER has not been modified to support recovery from the second backup copy. To exploit recovery from the second backup copy, the MODIFY OAM command must be used.
Chapter 4. DFSMShsm enhancements
In this chapter we focus on the major new enhancement to DFSMShsm, the common recall queue (CRQ). The CRQ enables all DFSMShsm address spaces in an HSMplex to process recall requests from a single common queue. We discuss these topics:
- The function that has been provided
- The environments required to take advantage of this function
- A description of the steps required to enable this function
- Some samples showing the function in use
We also briefly discuss the other new functions introduced with this level of z/OS and their impact on DFSMShsm.
Figure 4-1 DFSMShsm processing today and using a common recall queue
Using a common queue, requests from DFSMShsm on one system can be processed by DFSMShsm on another system. This new function also allows AUX hosts in a MASH complex to process recall requests from the common queue, as well as placing recall requests explicitly directed to them on the common queue. The algorithm that determines which DFSMShsm will process a request is discussed in Accessing the CRQ on page 78. Systems that connect to the CRQ still retain their own local recall queue. There is a new SETSYS command, SETSYS COMMONQUEUE, and several other commands have been altered to support the CRQ. These changes are discussed in Commands to manipulate the common recall queue on page 84.
There were four DFSMShsm address spaces active. HOST 4 on SC64 was defined as the primary host.
IEE421I RO *ALL,F HSM,Q I
SYSNAME RESPONSES ----------------------------------------------
SC63    ARC0101I QUERY IMAGE COMMAND STARTING ON HOST=3
        ARC0250I HOST PROCNAME JOBID    ASID MODE
        ARC0250I 3    HSM      STC07035 0042 MAIN
        ARC0101I QUERY IMAGE COMMAND COMPLETED ON HOST=3
SC64    ARC0101I QUERY IMAGE COMMAND STARTING ON HOST=4
        ARC0250I HOST PROCNAME JOBID    ASID MODE
        ARC0250I 4    HSM      STC07036 0048 MAIN
        ARC0250I A    HSM2     STC07037 0047 AUX
        ARC0101I QUERY IMAGE COMMAND COMPLETED ON HOST=4
SC65    ARC0101I QUERY IMAGE COMMAND STARTING ON HOST=5
        ARC0250I HOST PROCNAME JOBID    ASID MODE
        ARC0250I 5    HSM      HSM      0020 MAIN
        ARC0101I QUERY IMAGE COMMAND COMPLETED ON HOST=5
Figure 4-2 Output from Q I command from all systems in test HSMplex
Your next decision is whether you want the CRQ to apply to all members of the HSMplex. Generally this is the preferred solution: a one-to-one relationship between CRQplex and HSMplex provides the most flexible allocation of resources, generally provides the best overall throughput and availability of the queue, and is the simplest to manage.

There is no requirement that all DFSMShsm hosts connect to the CRQ, or that those that do connect remain connected to the queue at all times. Connecting to and disconnecting from the CRQ is not disruptive to the DFSMShsm address space or to recall requests that are currently being processed.

If you have catalogs or data that are not shared with all members of the HSMplex, or you have a mixture of production and other systems in the HSMplex, you may decide to exclude some systems from a CRQ or to implement more than one CRQ. These are also supported configurations; they require more resources to implement and maintain, but may be the preferred solution for some environments.

Systems running in monoplex mode with a single DFSMShsm address space can implement a CRQ, although there may be no justification for doing so. If you have implemented MASH in your HSMplex, AUX hosts are able to process recall requests, and you could dedicate an AUX host as a recall server. Systems that are not members of a Parallel Sysplex or a monoplex are not able to participate in the CRQ.
As a starting point, we recommend that you allocate the recall queue structure with an initial size of 5120 KB and a maximum size of 10240 KB. If you have a high proportion of recall requests requiring unique tape mounts, you may not quite reach this capacity; this is more likely in environments using 3480 or 3490 capacity volumes. For environments with high numbers of recall requests satisfied from a single volume, you may exceed this capacity. If you wish to resize your CF structure at any time, you can; see our discussion of this in Processing when CRQ is full on page 92.
//STEP20   EXEC PGM=IXCMIAPU
//SYSPRINT DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//SYSIN    DD *
  DATA TYPE(CFRM) REPORT(YES)
  DEFINE POLICY NAME(HSM1)
    STRUCTURE NAME(SYSARC_PLEX0_RCL)
      SIZE(10240)
      INITSIZE(5120)
      PREFLIST(cfname1,cfname2)
Figure 4-3 JCL to define the CF structure for the common recall queue
Most of the options for the structure are defined by DFSMShsm when an address space connects to the structure for the first time. You must define:

Name: The name of the structure. This is fixed except for the base name; the base name must be exactly five characters in length.

Size: The maximum size of the structure in kilobytes. We recommend 10240.

Preference list: The names of the coupling facilities you want to place the structure into. This may be a list of CFs. If you have two or more suitable CFs, we recommend that you specify at least two in the preference list. The entries in the preference list are tested in turn; the first CF that meets the structure's requirements will be selected.

Initial size: An initial size value for the structure in kilobytes. This value is used rather than SIZE for the initial allocation of the structure. We recommend that you specify 5120.

Exclusion list: Specifies which coupling facilities the structure is not to be placed in. If you have any CFs that are not at levels that support the CRQ, they should be placed here.

Threshold: A percentage used by XCF for monitoring structure utilization. We recommend that you leave this at the default value of 80 percent.

Rebuildpercent: If not specified, this defaults to one (1). z/OS uses this value to help determine when to attempt to rebuild a structure after a connectivity failure, based on weighting values in your sysplex failure management (SFM) policy. If you do not have an active SFM policy, z/OS will attempt to rebuild the structure anyway.
We recommend that the CRQ structure be failure isolated from the members of the sysplex. Failure isolation means that the CF that contains the CRQ structure is not located on the same physical processor as any of the z/OS LPARs connecting to the structure. Once the structure is defined, you need to activate the CFRM policy that you have just updated by starting your new CFRM policy; Figure 4-4 shows the starting of a new CFRM policy. Sysplex structures and policies are managed by a component called Cross System Coupling Facility (XCF).
SETXCF START,POLICY,TYPE=CFRM,POLNAME=HSM1
IXC511I START ADMINISTRATIVE POLICY HSM1 FOR CFRM ACCEPTED
IXC513I COMPLETED POLICY CHANGE FOR CFRM. HSM1 POLICY IS ACTIVE.
Once the CFRM policy with the CRQ structure defined is active, you are ready to enable DFSMShsm's use of it.
CFSIZER
You can use the Web-based S/390 Coupling Facility Structure Sizer Tool (CFSIZER) to estimate the required size and generate the code to define your CRQ structure. The tool is available from:
http://www.ibm.com/servers/eserver/zseries/cfsizer/
Command scope
Commands that you issue to manipulate XCF structures have a sysplex scope; that is, the command needs to be issued only once for the sysplex. This is not true for most of the commands used by DFSMShsm. Unlike the XCF commands, the scope of DFSMShsm commands that interact with the CRQ is generally limited to the system that they were issued from. We discuss the scope of each of the DFSMShsm commands you will be using to manipulate the CRQ in Commands to manipulate the common recall queue on page 84. You can use the sysplex console routing functions to direct commands to the DFSMShsms active on other systems in the sysplex. If you are using the same STC name prefix in a MASH environment, you can use command masking to route a command to all DFSMShsm hosts in your sysplex. We illustrate this in Figure 4-5, where one command was propagated to four hosts on three systems.
IEE421I RO *ALL,F HS*,Q REQ
SYSNAME RESPONSES ---------------------------------------------
SC63    ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=3
        ARC0166I NO DFSMSHSM REQUEST FOUND FOR QUERY
        ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=3
SC64    ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=4
        ARC0166I NO DFSMSHSM REQUEST FOUND FOR QUERY
        ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=4
        ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=A
        ARC0166I NO DFSMSHSM REQUEST FOUND FOR QUERY
        ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=A
SC65    ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=5
        ARC0166I NO DFSMSHSM REQUEST FOUND FOR QUERY
        ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=5
When DFSMShsm connects to the CRQ structure, messages are received from both DFSMShsm and XCF about the status of the structure and DFSMShsm's connection to it. The XCF status of the structure can also be displayed using the DISPLAY XCF,STR command; we provide an example in Figure 4-7. This display shows structure SYSARC_PLEX0_RCL, which has been connected to by the four DFSMShsm address spaces running on the systems in our sysplex. A structure that has been defined but not yet connected to will not have the ACTIVE STRUCTURE or CONNECTION NAME sections in the display structure command output.
D XCF,STR,STRNM=SYSARC*
IXC360I  19.23.04  DISPLAY XCF
STRNAME: SYSARC_PLEX0_RCL
 STATUS: ALLOCATED
 POLICY INFORMATION:
  POLICY SIZE    : 10240 K
  POLICY INITSIZE: 5120 K
  POLICY MINSIZE : 0 K
  FULLTHRESHOLD  : 80
  ALLOWAUTOALT   : NO
  REBUILD PERCENT: N/A
  PREFERENCE LIST: CF02     CF01
  ENFORCEORDER   : NO
  EXCLUSION LIST IS EMPTY
 ACTIVE STRUCTURE
 ----------------
  ALLOCATION TIME: 03/07/2002 15:13:11
  CFNAME         : CF02
  COUPLING FACILITY: 002064.IBM.02.000000010ECB
                     PARTITION: D    CPCID: 00
  ACTUAL SIZE    : 5120 K
  STORAGE INCREMENT SIZE: 256 K
  PHYSICAL VERSION: B74AF405 F5E4EF45
  LOGICAL  VERSION: B74AF405 F5E4EF45
  SYSTEM-MANAGED PROCESS LEVEL: 8
  XCF GRPNAME    : IXCLO02F
  DISPOSITION    : KEEP
  ACCESS TIME    : 0
  MAX CONNECTIONS: 32
  # CONNECTIONS  : 4
  CONNECTION NAME  ID VERSION  SYSNAME  JOBNAME  ASID  STATE
  ---------------- -- -------- -------- -------- ----  ------
  HOSTCONNECTIONA  04 00040001 SC64     HSM2     004C  ACTIVE
  HOSTCONNECTION3  03 00030005 SC63     HSM      0045  ACTIVE
  HOSTCONNECTION4  01 00010019 SC64     HSM      0049  ACTIVE
  HOSTCONNECTION5  02 0002000B SC65     HSM      0060  ACTIVE
When the first DFSMShsm connects to your new structure, a new XCF group will be created. If you are explicitly assigning transport classes to groups, you will need to assign one to this group as well.
If SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) is specified in the DFSMShsm startup parameters and the CF structure does not exist when DFSMShsm is started, DFSMShsm's attempt to connect to the CRQ structure will fail. DFSMShsm will invoke an ENF listen so that it is automatically notified when the structure becomes available; DFSMShsm will then attempt to connect to it. Once DFSMShsm has successfully connected to the CRQ structure, existing recall requests are placed on the CRQ and new requests are passed to the CRQ. These recall requests will be eligible for processing by any DFSMShsm connected to the CRQ.
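For example, the connection request can be made in the ARCCMDxx startup member. This is a sketch only: it assumes the five-character basename PLEX0 (matching the structure name SYSARC_PLEX0_RCL used in our examples) and a DFSMShsm started task named HSM.

```
SETSYS COMMONQUEUE(RECALL(CONNECT(PLEX0)))
```

From the console, once the address space has initialized, the equivalent command would be F HSM,SETSYS COMMONQUEUE(RECALL(CONNECT(PLEX0))).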
There is no requirement that you explicitly disconnect DFSMShsm from the CRQ structure when you shut down DFSMShsm; as part of its normal shutdown process, DFSMShsm will disconnect from the queue. We discuss the implications of using the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command further in Disconnecting from the common recall queue on page 96. If the DFSMShsm address space is in the process of shutting down in response to a STOP command, any outstanding non-batch WAIT recall request originating from this host that is on the CRQ is failed, but a new NOWAIT request is created to replace it, so that the recall request is preserved. The new request has the same priority as the original request, but is a NOWAIT request instead of a WAIT request. NOWAIT recall requests remain available in the CRQ, and the new recall request is eligible for processing by any eligible host still connected to the CRQ. If the recall request is a batch WAIT request, it is not converted to a NOWAIT request; it will remain on the CRQ and in CSA so the request can time out if not processed in a timely manner. The batch WAIT request is eligible for processing by any eligible host still connected to the CRQ.
If you receive RC=12 from IXLCONN, DFSMShsm will not revert to not using a CRQ; rather, it waits for the structure to be allocated. IXLCONN return and reason codes are documented in MVS Programming: Authorized Assembler Services Reference, Volume 2, SA22-7610-02. If DFSMShsm's connection to the CRQ is not successful, this will not stop DFSMShsm from processing recall requests placed on its local recall queue, as long as there are sufficient resources available for the recalls to be processed. Figure 4-10 shows a QUERY ACTIVE command issued just after the command in Figure 4-9. The COMMONQUEUE CONNECTION STATUS is RETRY.
ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS:
ARC1540I (CONT.) CONNECTION STATUS=RETRY,CRQPLEX HOLD STATUS=***,HOST
ARC1540I (CONT.) COMMONQUEUE HOLD STATUS=NONE,STRUCTURE ENTRIES=***%
ARC1540I (CONT.) FULL,STRUCTURE ELEMENTS=***% FULL
ARC1541I COMMON RECALL QUEUE SELECTION FACTORS:
ARC1541I (CONT.) CONNECTION STATUS=RETRY,HOST RECALL HOLD
ARC1541I (CONT.) STATUS=RECALL(TAPE),HOST COMMONQUEUE HOLD STATUS=NONE
ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=A
If the structure name was specified in error, you must first disconnect by issuing the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command and then specify a new SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) command. If you receive the error because the structure is not yet defined in the active CFRM policy, DFSMShsm will connect to the structure when the CFRM policy that defines the structure is started. If you are running with multiple HSMplexes and multiple CRQs, you need to take care that you actually connect the right DFSMShsm to the right structure; DFSMShsm will use the value you pass to it. For the first connector to a structure, DFSMShsm does no checking other than verifying that the structure exists. For subsequent connectors, the PLEXNAME value of the connector is checked to ensure that it matches that of the systems currently sharing the CRQ. In Figure 4-11 we show what happened when we attempted to connect a host whose SETSYS PLEXNAME value was different from the SETSYS PLEXNAME of the currently or previously connected hosts.
ARC0008I DFSMSHSM INITIALIZATION SUCCESSFUL
ARC1501I CONNECTION TO STRUCTURE SYSARC_PLEX0_RCL WAS
ARC1501I (CONT.) SUCCESSFUL, RC=00, REASON=00000000
IXL014I IXLCONN REQUEST FOR STRUCTURE SYSARC_PLEX0_RCL
WAS SUCCESSFUL. JOBNAME: HSM ASID: 0059
CONNECTOR NAME: HOSTCONNECTION1 CFNAME: CF02
ARC1506E AN INVOCATION OF THE COUPLING FACILITY LIST
ARC1506E (CONT.) STRUCTURE IXLLSTE MACRO COMPLETED UNSUCCESSFULLY,
ARC1506E (CONT.) RC=08, REASON=0C1C0859
ARC1502I DISCONNECTION FROM STRUCTURE SYSARC_PLEX0_RCL
ARC1502I (CONT.) WAS SUCCESSFUL, RC=00, REASON=00000000
The connection to the CRQ was not successful. In this case, DFSMShsm fully disconnected from the CRQ structure; this DFSMShsm will continue processing local recall requests only. There is no requirement that the values specified for PLEXNAME and COMMONQUEUE be the same. However, maintaining a one-to-one relationship between recall queues and HSMplexes makes operations simpler. If you are implementing multiple CRQs in one HSMplex, you will not be able to maintain this one-to-one relationship.
COMMONQUEUE
There is a new operand, COMMONQUEUE, on both the HOLD and RELEASE commands. The COMMONQUEUE operand can be used in three ways:

COMMONQUEUE(RECALL)
Impacts all RECALL requests to the CRQ. Influences both the addition of requests to and the removal of requests from the CRQ.

COMMONQUEUE(RECALL(SELECTION))
Has no impact on how requests are placed on the CRQ. Influences whether requests can be removed from the CRQ for processing.

COMMONQUEUE(RECALL(PLACEMENT))
Has no impact on DFSMShsm's selection of work from the CRQ. Determines whether requests can be added to the CRQ for processing.

HOLD or RELEASE COMMONQUEUE(RECALL) commands override any setting that specified SELECTION or PLACEMENT. For example, a RELEASE COMMONQUEUE(RECALL) command overrides a HOLD COMMONQUEUE(RECALL(SELECTION)) command. The inverse is not true; a HOLD COMMONQUEUE(RECALL) command can only be reversed by a RELEASE COMMONQUEUE(RECALL) command.
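As a sketch of this override behavior, using the abbreviated forms (CQ for COMMONQUEUE, R for RECALL) that appear elsewhere in this chapter and a started task named HSM:

```
F HSM,HOLD CQ(R(SELECTION))
F HSM,RELEASE CQ(R)
F HSM,HOLD CQ(R)
F HSM,RELEASE CQ(R(SELECTION))
```

The RELEASE CQ(R) reverses the SELECTION hold. After the HOLD CQ(R), however, the final command attempts to release only the SELECTION subfunction and is rejected with message ARC0111I.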
Note: There is also a HOLD/RELEASE COMMONQUEUE command. For z/OS V1R3 DFSMS this command is equivalent to the CQ(R) command except when issuing the RELEASE command. If HOLD CQ has been issued, RELEASE CQ(R) is not sufficient to release the common queue. The result is:
ARC0111I SUBFUNCTION COMMONQUEUE(RECALL) CANNOT BE
ARC0111I (CONT.) RELEASED WHILE MAIN FUNCTION COMMONQUEUE IS HELD
You can determine whether placement to and selection from the CRQ are HELD or RELEASED using the QUERY ACTIVE command.
4.4.4 RECALL
Although there are no changes to the syntax of the RECALL command, there are some changes to its impact. The scope of the RECALL command is changed: if there is a CRQ in place, there is no way to direct a specific host to process a particular request without changing the HOLD status of systems, whereas previously, RECALLs were always processed on the DFSMShsm that received them.
QUERY COMMONQUEUE
This is a new command that returns the status of the CRQ. We show a sample output in Figure 4-12.
RO SC63,F HSM,Q CQ(R)
ARC1545I COMMON QUEUE STRUCTURE FULLNESS: COMMON
ARC1545I (CONT.) RECALL QUEUE:STRUCTURE ENTRIES=004% FULL, STRUCTURE
ARC1545I (CONT.) ELEMENTS=004% FULL
ARC0162I RECALLING DATA SET MHLRES4.TEST5.A1 FOR USER
ARC0162I (CONT.) MHLRES3, REQUEST 00000053 ON HOST 4
ARC0162I RECALLING DATA SET MHLRES4.TEST1.A3 FOR USER
ARC0162I (CONT.) MHLRES3, REQUEST 00000017 ON HOST 4
ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST1.A4, FOR
ARC1543I (CONT.) USER MHLRES3, REQUEST 00000018, WAITING TO BE
ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000000 MWES AHEAD OF
ARC1543I (CONT.) THIS ONE
ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST1.A5, FOR
ARC1543I (CONT.) USER MHLRES3, REQUEST 00000019, WAITING TO BE
ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000001 MWES AHEAD OF
ARC1543I (CONT.) THIS ONE
The QUERY COMMONQUEUE(RECALL) command returns information for the CRQ and the requests on it. The results should be the same no matter which host it is issued from. This command may return large amounts of data if there are many recall requests on the CRQ. You can also issue a QUERY COMMONQUEUE command with no RECALL operand; with this form of the command, DFSMShsm returns only message ARC1545I.
QUERY ACTIVE
The QUERY ACTIVE command has been enhanced to show the status of the CRQ. We show partial output from a QUERY ACTIVE command in Figure 4-13.
F HSM,Q AC
ARC0101I QUERY ACTIVE COMMAND STARTING ON HOST=3
...
ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS:
ARC1540I (CONT.) CONNECTION STATUS=CONNECTED,CRQPLEX HOLD STATUS=NONE,
ARC1540I (CONT.) HOST COMMONQUEUE HOLD STATUS=NONE,STRUCTURE
ARC1540I (CONT.) ENTRIES=004% FULL,STRUCTURE ELEMENTS=004% FULL
ARC1541I COMMON RECALL QUEUE SELECTION FACTORS:
ARC1541I (CONT.) CONNECTION STATUS=CONNECTED,HOST RECALL HOLD
ARC1541I (CONT.) STATUS=NONE,HOST COMMONQUEUE HOLD
ARC1541I (CONT.) STATUS=CQ(RECALL(SELECTION))
ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=3
QUERY ACTIVE only produces information about the current status of the host it was directed to. If you are trying to determine the cause of a problem, you may need to issue this command to all DFSMShsms that are connected to the common queue.
QUERY DATASETNAME
There are no changes for the QUERY DATASETNAME command. It only returns information about recall requests that originated on the host that executes the command.
QUERY REQUEST
The QUERY REQUEST command has been enhanced to return the location of outstanding recall requests. It now distinguishes whether a request is to be found on a common recall queue or the local recall queue. We show the output from a QUERY REQUEST command in Figure 4-14. A new message ARC1543I is issued for requests that are on the CRQ. The QUERY REQUEST command only returns information about recall requests that originated on the host that executes the command. If the recall is currently being processed by another host, this information is available from the response to the QUERY REQUEST command. For example, in Figure 4-14, the RECALL and QUERY commands were issued from HOST=3, but the recall request is being processed by HOST=4.
F HS*,Q REQ
ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=3
ARC0162I RECALLING DATA SET MHLRES4.TEST1.A3 FOR USER
ARC0162I (CONT.) MHLRES3, REQUEST 00000017 ON HOST 4
ARC0162I RECALLING DATA SET MHLRES4.TEST5.A1 FOR USER
ARC0162I (CONT.) MHLRES3, REQUEST 00000053 ON HOST 4
ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST1.A4, FOR
ARC1543I (CONT.) USER MHLRES3, REQUEST 00000018, WAITING TO BE
ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000000 MWES AHEAD OF
ARC1543I (CONT.) THIS ONE
ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST1.A5, FOR
ARC1543I (CONT.) USER MHLRES3, REQUEST 00000019, WAITING TO BE
ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000001 MWES AHEAD OF
ARC1543I (CONT.) THIS ONE
ARC0167I RECALL MWE FOR DATA SET MHLRES4.TEST5.A10 FOR
ARC0167I (CONT.) USER MHLRES3, REQUEST 00000020, WAITING TO BE
ARC0167I (CONT.) PROCESSED, 00003 MWE(S) AHEAD OF THIS ONE
QUERY WAITING
The QUERY WAITING command also returns information about requests that are currently on the CRQ. We illustrate this in Figure 4-15.
F HSM,Q W
ARC0101I QUERY WAITING COMMAND STARTING ON HOST=3
ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL
ARC1542I (CONT.) QUEUE=00000210,TOTAL=00000210
ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00000,
ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000,
ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000,
ARC0168I (CONT.) TOTAL=000000
ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=3
The QUERY WAITING command shows the number of requests currently on the CRQ in message ARC1542I. However, it shows only the recall requests on its own local queue in message ARC0168I.
The CONNECT command can be issued either during execution of your ARCCMDxx member or as an explicit command once your DFSMShsm address space has initialized. We showed an example of this command in Figure 4-6 on page 79. The SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command should not be issued during normal DFSMShsm operations; DFSMShsm does not require that you manually disconnect it from the CRQ during shutdown processing. Please read the discussion in Disconnecting from the common recall queue on page 96 before using the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command. There is also a change in processing for SETSYS EMERGENCY. Hosts running in EMERGENCY mode are able to place recall requests on the CRQ, but are not able to select requests from the CRQ. If you wish to prevent a host in EMERGENCY mode from placing recall requests on the CRQ, you will need to issue a HOLD COMMONQUEUE(RECALL(PLACEMENT)) command on that host.
AUDIT COMMONQUEUE

The AUDIT COMMONQUEUE(RECALL) command can be issued with either FIX or NOFIX. This command executes only on the host it is issued on, but may impact all DFSMShsm hosts attached to the CRQ. We discuss using AUDIT COMMONQUEUE(RECALL) further in Auditing the CRQ on page 106. Message ARC1544I is the only output that AUDIT COMMONQUEUE(RECALL) returns; it does not return a specific message for each error. We show the output from an AUDIT COMMONQUEUE NOFIX in Figure 4-16.
F HSM,AUDIT CQ NOFIX
ARC1544I AUDIT COMMONQUEUE HAS COMPLETED, 0000 ERRORS
ARC1544I (CONT.) WERE DETECTED FOR STRUCTURE SYSARC_PLEX0_RCL, RC=00
You can display the HOLD status of the CRQ by issuing the QUERY ACTIVE command; a portion of sample output from the QUERY ACTIVE command is included in Figure 4-17. Holding the CRQ will direct recall requests to the local recall queue. We discuss this in Impact of HOLD and RELEASE commands on page 98.
F HSM,Q AC
...
ARC6019I AGGREGATE BACKUP = NOT HELD, AGGREGATE
ARC6019I (CONT.) RECOVERY = NOT HELD
ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS:
ARC1540I (CONT.) CONNECTION STATUS=CONNECTED,CRQPLEX HOLD
ARC1540I (CONT.) STATUS=RECALL(TAPE),HOST COMMONQUEUE HOLD
ARC1540I (CONT.) STATUS=CQ(RECALL(PLACEMENT)),STRUCTURE ENTRIES=000%
ARC1540I (CONT.) FULL,STRUCTURE ELEMENTS=000% FULL
ARC1541I COMMON RECALL QUEUE SELECTION FACTORS:
ARC1541I (CONT.) CONNECTION STATUS=CONNECTED,HOST RECALL HOLD
ARC1541I (CONT.) STATUS=RECALL(TAPE),HOST COMMONQUEUE HOLD STATUS=NONE
ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=3
In Figure 4-17, for the host queried, RECALL was active, but a HOLD RECALL(TAPE) command had been issued. This host was allowed to select work from the CRQ; however, because a HOLD CQ(R(PLACEMENT)) command had been issued, this DFSMShsm was unable to place new work onto the queue. The impact of these commands is summarized in Impact of HOLD and RELEASE commands on page 98.
Each recall request placed on the CRQ is given a priority between zero and 100 by the DFSMShsm address space that issues the request. By default, NOWAIT recall requests receive a priority of 50 and WAIT requests a priority of 100. The DFSMShsm address space that places the request on the CRQ drives the recall priority exit, ARCRPEXT, for each request before it is placed on the queue. This allows the host originating the request to influence the relative placement on the CRQ for specific requests. If you currently use ARCRPEXT to change the priority of recall requests, you may still wish to do so. If you have some systems or data that you deem to be more important, you may wish to implement this function; for example, you may wish to prioritize recall requests for production data over those for test data. A sample ARCRPEXT is provided in SAMPLIB. Hosts attempting to place WAIT requests on the CRQ must first determine that a host capable of processing the request is currently connected to the queue. We discuss this further in WAIT versus NOWAIT recalls on page 99. NOWAIT recall requests placed on the CRQ are interleaved by userid, as are requests that are placed on the local recall queue. This should prevent recall requests from one user monopolizing the CRQ.
Note: The IXC585E message shown in Figure 4-18 is an XCF message. XCF has a structure full monitoring threshold independent of that used by DFSMShsm. The default value for XCF monitoring is 80 percent. You can change this by changing the value specified for THRESHOLD in the structure's definition in your CFRM policy.
You could use this difference in threshold monitoring to trigger automation to rebuild the CRQ structure with more space before the DFSMShsm limits are reached, or to take whatever other action you believe to be appropriate.
*IXC585E STRUCTURE SYSARC_PLEX0_RCL IN COUPLING FACILITY CF02,
PHYSICAL STRUCTURE VERSION B753986F 50682D63,
IS AT OR ABOVE STRUCTURE FULL MONITORING THRESHOLD OF 80%.
F HSM,Q W
ARC0101I QUERY WAITING COMMAND STARTING ON HOST=4
ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL
ARC1542I (CONT.) QUEUE=00004527,TOTAL=00004527
ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00000,
ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000,
ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000,
ARC0168I (CONT.) TOTAL=000000
ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=4
*ARC1505E THE ENTRIES FOR STRUCTURE SYSARC_PLEX0_RCL
ARC1505E (CONT.) ARE MORE THAN 95% IN-USE. ALL NEW REQUESTS WILL BE
ARC1505E (CONT.) DIRECTED TO THE LOCAL QUEUE.
ARC0058I CSA USAGE BY DFSMSHSM HAS REACHED THE ACTIVE
THRESHOLD OF 000090K BYTES, ALL BUT BATCH WAIT REQUESTS FAILED
F HSM,CANCEL USER(MHLRES3)
ARC0931I (H)CANCEL COMMAND COMPLETED, NUMBER OF
ARC0931I (CONT.) REQUESTS CANCELLED=6739
IXC586I STRUCTURE SYSARC_PLEX0_RCL IN COUPLING FACILITY CF02,
PHYSICAL STRUCTURE VERSION B753986F 50682D63,
IS NOW BELOW STRUCTURE FULL MONITORING THRESHOLD.
Instead of cancelling the outstanding requests to relieve this problem, we could have increased the size of the CRQ structure. The maximum size of the CRQ structure is determined by the SIZE value specified in the active CFRM policy. You can increase the size of the CRQ structure up to the value specified in the SIZE parameter using the SETXCF START,ALTER command. We show an example of this in Figure 4-19.
SETXCF START,ALTER,STRNM=SYSARC_PLEX0_RCL,SIZE=10240
IXC530I SETXCF START ALTER REQUEST FOR STRUCTURE
SYSARC_PLEX0_RCL ACCEPTED.
IXC533I SETXCF REQUEST TO ALTER STRUCTURE SYSARC_PLEX0_RCL
COMPLETED. TARGET ATTAINED.
CURRENT SIZE: 10240 K  TARGET: 10240 K
IXC534I SETXCF REQUEST TO ALTER STRUCTURE SYSARC_PLEX0_RCL
COMPLETED. TARGET ATTAINED.
CURRENT SIZE: 10240 K          TARGET: 10240 K
CURRENT ENTRY COUNT: 13674     TARGET: 13674
CURRENT ELEMENT COUNT: 13672   TARGET: 13672
CURRENT EMC COUNT: 3360        TARGET: 3360
If you wish to increase the size of the CRQ structure beyond the value specified for SIZE, you must update the structure size value in the active CFRM policy. Effectively, you write a CFRM policy with the new SIZE value, activate it, and then rebuild the common queue structure.
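As a sketch, assuming the CFRM policy source has been updated with the larger SIZE value under the policy name HSM1 used earlier in this chapter, the sequence is:

```
SETXCF START,POLICY,TYPE=CFRM,POLNAME=HSM1
SETXCF START,REBUILD,STRNM=SYSARC_PLEX0_RCL
```

The rebuild reallocates the structure using the attributes from the newly activated policy.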
A DFSMShsm host selects a recall request from the CRQ when all of the following are true:
- There is an unused recall task available to process the request.
- There are recall requests to be processed on the CRQ.
- DFSMShsm is able to allocate the resources required to satisfy the request: the input volumes, ML2 or ML1 SDSP, are not already in use by another host.
- Recall is not held for this resource on this host.

Once one DFSMShsm host has selected a recall request to process, the request is copied to the local host and flagged as in process in the CRQ. DFSMShsm then begins to process the recall request. If the recall request requires an ML2 volume to be mounted, DFSMShsm will check the CRQ to determine whether there are any other recall requests that can be satisfied from this volume. All recalls for other data sets will be processed by this host while the tape remains mounted. This can result in requests apparently jumping the queue, but it reduces contention for tape volumes and the total number of tape mounts that need to be processed. This processing replaces the function previously provided by recall tape takeaway from recall. If you have multiple CRQs, or hosts that do not connect to the CRQ within your HSMplex, you will lose this benefit and may still see recall tape takeaway from recall processing.

Recalls are processed by the host that selected them as if they had originated on that host, except that:
- Messages are still returned to the originating user or job.
- For WAIT requests, the originating host POSTs the waiting task.

The processing host:
- Produces the Functional Statistics Records (FSR) for the recall. There are new fields in the FSRs that allow you to determine that a recall request was selected from the CRQ. We summarize the changes to FSRs in Table 7-1 on page 169. The FSRs also record the host ID of the host that originated the recall request.
- Produces the operator messages for the recall; these are logged only on the host that processes the recall.
- Drives any ARCRDEXT that may exist.
Since ARCRDEXT can be used to determine device pooling for recall, we strongly recommend that the same ARCRDEXT be used on all hosts participating in the CRQ, as there is no way of predetermining which host will actually service a given recall request.
Take care in JES3 environments: if recall is enabled, recalls are being selected from tape, and no tape drives are online, then even NOWAIT recall requests will be cancelled, and you will receive the messages in Figure 4-20. If there are no online tape drives available to a JES3 system, we recommend that you issue a HOLD RECALL(TAPE) command on this host. DFSMShsm running on a JES3 system will select and fail recall requests from a common queue even if there are other systems connected to the CRQ that could process the request.
HRECALL MHLRES4.TEST5.A11 NOWAIT
ARC0790E TAPE(S) ARE NOT AVAILABLE FOR USERID
ARC0790E (CONT.) **OPER** RECALL REQUEST.
ARC0790E (CONT.) DSN=MHLRES4.TEST5.A11,
ARC0790E (CONT.) VOLSER(S)=TST010
ARC1001I MHLRES4.TEST5.A11 RECALL FAILED, RC=0081,
ARC1001I (CONT.) REAS=0000
ARC1198E TAPE(S) CONTAINING NEEDED DATA NOT AVAILABLE
ARC1181I RECALL FAILED - ERROR ALLOCATING TAPE VOLUME
DFSMShsm AUX hosts are able to process recall requests even though implicit recall requests are not directed to them. We illustrate this in Figure 4-21, where an AUX host, HOST A, is shown processing a recall request that was generated from HOST 4.
F HSM,Q REQ
ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=4
ARC0162I RECALLING DATA SET MHLRES4.TEST1.A1 FOR USER
ARC0162I (CONT.) MHLRES3, REQUEST 00000047 ON HOST A
ARC0101I QUERY REQUEST COMMAND COMPLETED ON HOST=4
In addition, when you disconnect DFSMShsm from the CRQ with the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command, all requests that this DFSMShsm had placed on the CRQ are moved back to this DFSMShsm's local recall queue. We show this in Figure 4-22. This does not happen if the disconnection is part of normal shutdown processing. During normal shutdowns, recall requests are left on the CRQ and are available for processing by other DFSMShsm hosts still connected to the CRQ.
IEF244I HSM HSM - UNABLE TO ALLOCATE 1 UNIT(S)
AT LEAST 1 OFFLINE UNIT(S) NEEDED.
IEF877E HSM NEEDS 1 UNIT(S) FOR HSM RESIN1
LIBRARY: LIB1
LIBRARY STATUS: ONLINE OFFLINE
0B90-0B93
IEF878I END OF IEF877E FOR HSM HSM RESIN1
*059 IEF238D HSM - REPLY DEVICE NAME OR 'CANCEL'.
F HSM,Q W
ARC0101I QUERY WAITING COMMAND STARTING ON HOST=4
ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL
ARC1542I (CONT.) QUEUE=00000009,TOTAL=00000009
ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00000,
ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000,
ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000,
ARC0168I (CONT.) TOTAL=000000
ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=4
F HSM,SETSYS CQ(R(D))
ARC0100I SETSYS COMMAND COMPLETED
ARC1504I DISCONNECTION FROM STRUCTURE SYSARC_PLEX0_RCL MAY BE DELAYED
F HSM,Q W
ARC0101I QUERY WAITING COMMAND STARTING ON HOST=4
ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00009,
ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000,
ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000,
ARC0168I (CONT.) TOTAL=000009
ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=4
R 59,CANCEL
IEE600I REPLY TO 059 IS;CANCEL
*060 ARC0381A ALLOCATION REQUEST FAILED FOR TST011 FOR
RECALL. REPLY WAIT OR CANCEL
R 60,CANCEL
IEE600I REPLY TO 060 IS;CANCEL
ARC1502I DISCONNECTION FROM STRUCTURE SYSARC_PLEX0_RCL
ARC1502I (CONT.) WAS SUCCESSFUL, RC=00, REASON=00000000
Similarly, there is no need to disconnect DFSMShsm from the CRQ structure if you need to move it to another CF. If another CF is specified in the preference list for the CRQ structure, you can move the structure non-disruptively with the SETXCF START,REBUILD,STRNM=SYSARC_basename_RCL,LOC=xxxx command. When DFSMShsm is disconnecting from a CRQ structure, you may see a delay in the process if there are outstanding allocation requests; DFSMShsm will issue an ARC1504I message. We show this in Figure 4-22. Note that this output has been slightly reformatted. In Figure 4-22 we forced DFSMShsm to go through allocation recovery by requesting an offline tape drive; then, while DFSMShsm was still requesting the tape mount, we issued the SETSYS COMMONQUEUE(RECALL(DISCONNECT)) command. Before this command was issued, there were nine recall requests on the CRQ; after the disconnect was issued, there were nine recall requests on the local recall queue. DFSMShsm did successfully disconnect from the recall queue once allocation recovery had completed.
The HOLD ALL and RELEASE ALL commands have no impact on the CRQ. The only HOLD or RELEASE commands that impact the CRQ are the HOLD and RELEASE COMMONQUEUE commands. If a HOLD COMMONQUEUE(RECALL) command has been issued, it is not possible to partially release the HOLD. We illustrate this in Figure 4-23, where a RELEASE COMMONQUEUE(RECALL(SELECTION)) command was issued unsuccessfully after a HOLD COMMONQUEUE(RECALL) command.
F HSM,RELEASE CQ(R(S))
ARC0111I SUBFUNCTION COMMONQUEUE(RECALL(SELECTION))
ARC0111I (CONT.) CANNOT BE RELEASED WHILE MAIN FUNCTION
ARC0111I (CONT.) COMMONQUEUE(RECALL) IS HELD
ARC0100I RELEASE COMMAND COMPLETED
To allow this host to select recall requests from the CRQ, you will need to issue the RELEASE COMMONQUEUE(RECALL) command. Then, if you wish this host to only select requests from the CRQ, issue the HOLD COMMONQUEUE(RECALL(PLACEMENT)) command.
Using the information from Table 4-3 and Table 4-4, it is possible to see how a DFSMShsm host could be configured to place RECALL requests on the CRQ but only process RECALL requests from ML1; a HOLD RECALL(TAPE) command would allow this. If you wanted to prevent a DFSMShsm host from performing all recalls, you could choose either a HOLD RECALL or a HOLD COMMONQUEUE(RECALL(SELECTION)) command. The HOLD RECALL command would have an impact on DELETE requests that the HOLD COMMONQUEUE command would not.
Figure 4-24 shows recall requests being processed on the local queue after a HOLD CQ(R(P)) command has been issued.
F HSM,Q AC
ARC0101I QUERY ACTIVE COMMAND STARTING ON HOST=4
ARC0160I RECALL=NOT HELD, TAPERECALL=NOT HELD, DATA SET RECALL=ACTIVE
ARC0162I RECALLING DATA SET MHLRES4.TEST3.A1 FOR USER
...
ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS:
ARC1540I (CONT.) CONNECTION STATUS=CONNECTED,CRQPLEX HOLD STATUS=NONE,
ARC1540I (CONT.) HOST COMMONQUEUE HOLD STATUS=CQ(RECALL(PLACEMENT)),
ARC1540I (CONT.) STRUCTURE ENTRIES=000% FULL,STRUCTURE ELEMENTS=000%
ARC1540I (CONT.) FULL
ARC1541I COMMON RECALL QUEUE SELECTION FACTORS:
ARC1541I (CONT.) CONNECTION STATUS=CONNECTED,HOST RECALL HOLD
ARC1541I (CONT.) STATUS=NONE,HOST COMMONQUEUE HOLD STATUS=NONE
ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=4
F HSM,Q W
ARC0101I QUERY WAITING COMMAND STARTING ON HOST=4
ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL
ARC1542I (CONT.) QUEUE=00000000,TOTAL=00000000
ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00003,
...
F HSM,Q REQ
ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=4
ARC0162I RECALLING DATA SET MHLRES4.TEST3.A1 FOR USER
ARC0162I (CONT.) MHLRES3, REQUEST 00004422 ON HOST 4
ARC0167I RECALL MWE FOR DATA SET MHLRES4.TEST3.A10 FOR
ARC0167I (CONT.) USER MHLRES3, REQUEST 00004423, WAITING TO BE
ARC0167I (CONT.) PROCESSED, 00000 MWE(S) AHEAD OF THIS ONE
We will also discuss errors that may not have such an obvious cause, but rather involve a loss of function, for example:
- Recall requests not processing
- Recall requests failing
- Recall requests being processed by the wrong DFSMShsm host
We recommend that you also review the manual DFSMShsm Data Recovery Scenarios, GC35-0419.
CF rebuild
If a second CF is defined in the CRQ structure's preference list in your CFRM policy, loss of a CF will cause Cross-system Extended Services (XES) to attempt to rebuild the structure in the other CF. Loss of a system's connection to the CF may also drive a rebuild, depending on:
- The value specified for REBUILDPERCENT in the structure definition
- Whether an active SFM policy exists
If both these conditions are met, loss of connectivity from a single z/OS system may cause the structure to be rebuilt. For more information about SFM and REBUILDPERCENT, refer to z/OS V1R3.0 MVS Setting Up a Sysplex, SA22-7625. If your structure is rebuilt in another CF, all systems should retain access to the CRQ and continue using it for recall processing. Some systems may lose access to the CRQ during this time; if that occurs, processing continues as discussed in "No CF available" below.
No CF available
If there is no alternate CF available for the CRQ structure, then each DFSMShsm:
- Continues to process any active recall requests.
- Falls back to using the local recall queue for processing. This may cause problems if a HOLD RECALL has been issued.
- Moves all requests that it has retrieved from the CRQ to the local recall queue.
- May experience some recall failures for data sets that were queued for processing on a remote host; this is not a major problem, since the recall requests will still be processed.
- Listens for connectivity to the CRQ structure being restored, and then reconnects to it. After DFSMShsm reconnects to the CRQ structure, it moves requests from its local recall queue back to the CRQ.
This process should be transparent to users of DFSMShsm. Each DFSMShsm retains information about all the RECALL requests it had placed on the CRQ. If you are using recall servers or limiting tape recalls to certain hosts, you may need to issue RELEASE commands on some hosts, or experience delays in recall processing until DFSMShsm can reconnect to the CRQ.
F HSM,Q W
ARC0101I QUERY WAITING COMMAND STARTING ON HOST=1
ARC1542I WAITING MWES ON COMMON QUEUES: COMMON RECALL
ARC1542I (CONT.) QUEUE=00000199,TOTAL=00000199
ARC0168I WAITING MWES: MIGRATE=00000, RECALL=00000,
ARC0168I (CONT.) DELETE=00000, BACKUP=00000, RECOVER=00000,
ARC0168I (CONT.) COMMAND=00000, ABACKUP=00000, ARECOVER=00000,
ARC0168I (CONT.) TOTAL=000000
ARC0101I QUERY WAITING COMMAND COMPLETED ON HOST=1
F HSM,Q REQ
ARC0101I QUERY REQUEST COMMAND STARTING ON HOST=1
ARC1543I RECALL MWE FOR DATASET MHLRES4.TEST3.A1, FOR
ARC1543I (CONT.) USER MHLRES3, REQUEST 00004446, WAITING TO BE
ARC1543I (CONT.) PROCESSED ON A COMMON QUEUE,00000000 MWES AHEAD OF
ARC1543I (CONT.) THIS ONE
F HSM,Q AC
ARC1540I COMMON RECALL QUEUE PLACEMENT FACTORS:
ARC1540I (CONT.) CONNECTION STATUS=CONNECTED,CRQPLEX HOLD STATUS=ALL,
ARC1540I (CONT.) HOST COMMONQUEUE HOLD STATUS=CQ(RECALL),STRUCTURE
ARC1540I (CONT.) ENTRIES=004% FULL,STRUCTURE ELEMENTS=003% FULL
ARC1541I COMMON RECALL QUEUE SELECTION FACTORS:
ARC1541I (CONT.) CONNECTION STATUS=CONNECTED,HOST RECALL HOLD
ARC1541I (CONT.) STATUS=NONE,HOST COMMONQUEUE HOLD STATUS=CQ(RECALL)
ARC0101I QUERY ACTIVE COMMAND COMPLETED ON HOST=1
Figure 4-25 Commands for determining the status of the common queue
The key items to check are these:
- Check whether any DFSMShsm is currently processing any recall request. This is shown in the Q AC output. This needs to be checked for each member of the HSMplex.
- Use the D XCF,STR,STRNM=SYSARC* command to check that the CRQ structure is allocated and that all address spaces that should be connected to the structure are connected.
- Issue the Q CQ command and check the ARC1545I message to verify that the CRQ is not full. The results of this command should be identical on all systems connected to the CRQ.
- Check that DFSMShsm sees all hosts successfully connected to the CRQ.
- Check whether there are any queued requests to be processed. The Q W command provides this information. You can also determine from this output whether the requests are on the common or the local queue. This command needs to be issued on all participating systems.
- Use the output from the Q REQ or the Q AC command to determine whether the data sets to be recalled are currently located on ML1 or ML2.
- Use the Q AC output to check whether RECALL has been held at any level.
- Use the Q AC output to check whether any HOLD CQ command has been issued and, if so, what form of the command was used. For example, in Figure 4-25 a HOLD CQ(R) command has been issued.
- Check whether any outstanding RECALL requests are waiting for a common resource, for example, an ML2 volume. If all requests are waiting for a single volume, then:
  - Check whether this volume is currently mounted and whether it is being used for input or output. If the volume is currently mounted, one of the TAPETAKEAWAY functions should release it to allow the recall requests to process.
  - If the volume is not mounted, check that the volume's information is correctly recorded by DFSMShsm. The quickest check is to issue the LIST TTOC(volser) NODSI command. If the LIST returns message ARC0378I, you will need to perform the standard recovery for this message. ARC0378I is only issued after the DFSMShsm address space that had been using the volume is recycled, so you also need to use the LIST VOLUME(volser) command and check whether the in-use flag is on. If a system has not reset this flag when it finished using the volume, this prevents any system from allocating the volume for recall. This limitation exists whether or not systems are taking part in a CRQ.
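As a quick reference, the checks above map to the following command sequence, issued on each participating system. This is a sketch; HSM is an assumed started task name, and volser is a placeholder for the ML2 volume being investigated:

   F HSM,Q AC
   F HSM,Q W
   F HSM,Q REQ
   F HSM,Q CQ
   D XCF,STR,STRNM=SYSARC*
   F HSM,LIST TTOC(volser) NODSI
   F HSM,LIST VOLUME(volser)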
The structure of the common queues is not externalized. Losses of connectivity, ABENDs, or internal errors may cause the CRQ to become corrupted. Just like the DFSMShsm control data sets, the CRQ has interrelated entries that can get out of sync. You can use the AUDIT COMMONQUEUE(RECALL) command to analyze the entire structure and report or fix the errors that it finds:
- It issues ARC1544I to report the number of errors.
- It cannot find or correct all errors.
- It creates PDA entries to record what it found and fixed.
- AUDIT COMMONQUEUE(RECALL) does not return FIXCDS commands; as there are no control data set records for the CRQ, these are not applicable.
- AUDIT COMMONQUEUE(RECALL) does not create an output data set.
You may see differences in the number of errors found between AUDITs specified with FIX and NOFIX. CRQ errors are usually interrelated, so fixing one error can clear many errors reported by NOFIX.
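For example, you might run a first pass with NOFIX to gauge the scale of the problem, and then a FIX run to repair it. This is a hedged sketch, assuming the started task is named HSM:

   F HSM,AUDIT COMMONQUEUE(RECALL) NOFIX
   F HSM,AUDIT COMMONQUEUE(RECALL) FIX

Because CRQ errors are interrelated, the FIX run may report fewer errors than the preceding NOFIX run did.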
SETXCF FORCE,STR,STRNM=SYSARC_PLEX0_RCL
IXC579I NORMAL DEALLOCATION FOR STRUCTURE SYSARC_PLEX0_RCL IN 704
COUPLING FACILITY 002064.IBM.02.000000010ECB
PARTITION: D CPCID: 00
HAS BEEN COMPLETED. PHYSICAL STRUCTURE VERSION: B753986F 50682D63
INFO116: 132C2000 01 2800 00000003
TRACE THREAD: 00002A1B.
IXC353I THE SETXCF FORCE REQUEST FOR STRUCTURE
SYSARC_PLEX0_RCL WAS COMPLETED: STRUCTURE WAS DELETED
D XCF,STR,STRNM=SYSARC_PLEX0_RCL
IXC360I 13.56.22 DISPLAY XCF 707
STRNAME: SYSARC_PLEX0_RCL
STATUS: NOT ALLOCATED
POLICY INFORMATION:
POLICY SIZE    : 10240 K
POLICY INITSIZE: 5120 K
POLICY MINSIZE : 0 K
FULLTHRESHOLD  : 80
ALLOWAUTOALT   : NO
REBUILD PERCENT: N/A
PREFERENCE LIST: CF02 CF01
ENFORCEORDER   : NO
EXCLUSION LIST IS EMPTY
The SETXCF FORCE command will not complete if any connections to the structure still exist. You can use the D XCF,STR,STRNM=SYSARC_basename_RCL command to list the connections, and the SETXCF FORCE,CON command to delete any failed-persistent connections; you cannot use this command to delete a connection that is in active status. Once the active structure has been deleted, you should be able to reconnect your DFSMShsm hosts to the CRQ using the SETSYS COMMONQUEUE(RECALL(CONNECT(basename))) command. This sequence effectively deletes the contents of the CRQ structure; the contents are rebuilt as each DFSMShsm reconnects to the structure.
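Put together, the recovery sequence looks like this. This is a sketch using the basename PLEX0 from the example above; your basename, connection names, and started task name will differ:

   D XCF,STR,STRNM=SYSARC_PLEX0_RCL
   SETXCF FORCE,CON,STRNM=SYSARC_PLEX0_RCL,CONNM=conname
   SETXCF FORCE,STR,STRNM=SYSARC_PLEX0_RCL
   F HSM,SETSYS COMMONQUEUE(RECALL(CONNECT(PLEX0)))

The first command verifies which connections remain; the SETXCF FORCE,CON command is needed only for failed-persistent connections; the final SETSYS command is issued on each DFSMShsm host to reconnect it.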
You should also ensure that you collect all the usual diagnostic information required by your IBM support center; this may include LOGREC, JOBLOGs, SYSLOGs, and DFSMShsm PDA trace information.
Chapter 5. DFSMSdss enhancements
In this chapter we describe the changes introduced with the DFSMSdss component:
- HFS logical copy support
- Enhanced dump conditioning
- Large volume support
If the HFS data set is mounted and the user specified DELETE, then the enqueue will fail and DFSMSdss will fail the request with ADR410E. If the enqueue fails and the data set is uncataloged, then DFSMSdss will fail the request with ADR412E. If the HFS data set is currently mounted for read-write, then a quiesce is performed against the HFS. This quiesce marks the HFS and the files in it as unusable, even for read processing, until the HFS is unquiesced. Users accessing the files in a quiesced HFS are suspended until the unquiesce is processed. If the quiesce fails, the copy fails and ADR960E is issued. If the quiesce succeeds, then the copy proceeds.
5.1.5 Restrictions
There are some restrictions and other points to note about the HFS logical copy support:
- The following DFSMSdss COPY keywords are ignored for the source HFS data set:
  - DYNALLOC
  - TOLERATE(ENQFAILURE)
- You cannot perform a logical copy of an uncataloged HFS data set if there is a cataloged HFS with the same name that is mounted read-write. If you attempt to do so, the copy will fail with ADR412E.
The DELETE keyword is honored only if the SYSZDSN enqueue is obtained EXCLUSIVE on the source HFS. DELETE is possible only if the HFS is unmounted. If you code the DELETE keyword and the HFS is mounted as either read-only or read-write, the copy will fail with message ADR410E.
5.1.7 Performance
Some existing DFSMSdss logical copy jobs may now run longer if HFS data sets that you never intended to be copied are now being copied. For DFSMSdss HFS logical copy between like devices, DFSMSdss attempts to use fast replication techniques (for example, SnapShot) to perform the copy. If fast replication is not possible, one of the following techniques is used:
- Concurrent copy
- EXCP
For logical copies between unlike devices, DFSMSdss performs I/O by the track packing method. Regardless of the copy method selected, this new single-step copy process should be faster than the previous requirement to use a two-step DUMP/RESTORE process.
5.1.8 Coexistence
There are no coexistence issues, as this function will not be made available in previous releases. However, in a mixed-level sysplex you will have to ensure that any HFS logical copies required are performed on a z/OS V1R3 DFSMS system. Any logical copy attempted on a system with a prior release will still exclude HFS data sets, the same as it does today.
5.2.1 Overview
Prior to the introduction of dump conditioning, when performing a full volume copy of an SMS-managed volume, DFSMSdss required that the COPYVOLID keyword be specified. This resulted in the target volume being varied offline automatically because of a duplicate volser. Dump conditioning was added as a means to perform a full volume COPY operation that allows both the source and target volumes to remain online, so that full volume DUMP operations can be performed against a copy of the source data in an intermediary target location. The DFSMSdss COPY command DUMPCONDITIONING keyword specifies that you want to create a copy of the source volume for backup purposes, not for the purpose of using the target volume for general applications. The two DFSMSdss COPY command keywords, DUMPCONDITIONING and COPYVOLID, are mutually exclusive.
Note: A volume with a volser that does not match the VVDS and VTOC index names may have data accessibility problems. The target volume from a COPY with DUMPCONDITIONING should be used only as a source for full volume DUMP operations.
Physical data set RESTORE from a dump of a conditioned volume is not supported.
5.2.4 Restrictions
There are a few restrictions when using DUMPCONDITIONING:
- If you attempt to perform a full volume copy of a dump-conditioned volume, you must specify the DUMPCONDITIONING keyword. If it is not specified, the copy will fail with new message ADR814E.
- The DUMPCONDITIONING keyword can be used with COPY TRACKS operations, but only those that include the VTOC tracks.
- For a conditioned volume, the VVDS name will not match the volume serial number. This makes the VVDS invisible to the system. One side effect is that a full volume copy or full volume restore operation to a conditioned volume may fail when DFSMSdss attempts to do expiration date checking, because the VVDS cannot be located. To avoid this problem, specify the PURGE keyword during full volume copy and restore operations if the target volume is a previously used conditioned volume, or if the target volume was copied with neither COPYVOLID nor DUMPCONDITIONING specified.
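A conditioned full volume copy might be coded as in the following hedged JCL sketch; the volsers SRC001 and TGT001 are placeholders, and PURGE is included per the restriction above for the case where the target is a previously used conditioned volume:

   //DSSCOPY  EXEC PGM=ADRDSSU
   //SYSPRINT DD SYSOUT=*
   //SYSIN    DD *
     COPY FULL INDYNAM(SRC001) OUTDYNAM(TGT001) -
          DUMPCONDITIONING PURGE
   /*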
5.2.5 Performance
You can combine physical volume copy using DUMPCONDITIONING with Snapshot or Flashcopy to reduce the amount of time that your data is unavailable when you back it up. A full volume copy in conjunction with Snapshot or Flashcopy can produce a copy of a volume in seconds. Then, the copy can be dumped to tape while your applications are accessing the data on the original volume.
Coexistence
Full LVS is available for OS/390 Release 10 and higher with APAR OW50405. Toleration support is also provided on releases prior to OS/390 Release 10 by APAR OW49148. This toleration support prevents a large volume from being varied online to an earlier level system. See 3.1, Large volume support on page 40 for further information.
Chapter 6. DFSMSrmm enhancements
In this chapter we describe the changes introduced with the DFSMSrmm component. We also describe the new functions introduced since DFSMSrmm Release 10.

These are the changes introduced with DFSMSrmm in z/OS V1R3:
- Special character support
- Changed messages for improved diagnostics
- HELP moved from SYS1.SEDGHLP1 to SYS1.HELP

These new functions have been introduced since DFSMSrmm R10:
- Software MTL support
- Multi-volume alert in the RMM dialog
- Updated conversion tools
- VSAM extended function support for the DFSMSrmm CDS
- RMM application programming interface
- PARMLIB options SMSACS and PREACS
- Storage location as home location
- Enhanced bin management
- DSTORE by location
- Extended extract file
- Report generator
- Buffered tape marks support for the A60 controller
Special considerations
These are some considerations regarding special character support:
- For volumes that reside in IBM automated tape libraries, only alphanumeric characters are supported (with barcode labels) unless the unlabeled tape facility is used. When the unlabeled tape facility is used, the characters + (plus), - (hyphen), # (number or pound sign), & (ampersand), $ (dollar sign), and @ (at sign) are supported in addition to alphanumerics.
- Leading blanks are not allowed. DFSMSrmm does not support a leading blank in the volser.
- Asterisks are allowed anywhere in a volser.
AA001*, *A0001 and A*0001 are all valid volsers
Using an asterisk in a volser could lead to unexpected results when using the RMM dialog. The RMM dialog treats the asterisk as a wildcard character and therefore returns the actual volser requested plus any others that match the mask specified. The ADDVOLUME volser COUNT(n) subcommand supports fewer than six characters in the VOLSER field. The following would add volumes A0001 through A0015:
ADDVOLUME A0001 COUNT(15)
RACK numbers can now be fewer than six characters. Use the TPRACF(N) option for special character volsers, and protect them outside RMM with a generic RACF TAPEVOL profile if required. This is because RACF does not support special characters in tape volume volsers for the TAPEVOL class.
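As a sketch of the RACF protection mentioned above, a generic TAPEVOL profile could be defined as follows; the volser prefix T is a placeholder for your own naming convention:

   SETROPTS GENERIC(TAPEVOL)
   RDEFINE TAPEVOL T* UACC(NONE)
   SETROPTS GENERIC(TAPEVOL) REFRESH

A generic profile such as T* covers volsers that RACF could not hold as discrete TAPEVOL profiles because of the special characters.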
This change to message EDG4021I is also available through APAR OW48865 for OS/390 R10.
No JCL changes are required to use an MTL. SMS storage groups and ACS routines can be updated to determine the placement of new tape data sets to an MTL. For details on defining tape libraries to SMS, both MTL and ATL, refer to DFSMS Object Access Method Planning, Installation, and Storage Administration Guide for Tape Libraries, SC35-0427.
The real conversion process involves a number of activities for reading the current tape management related data and translating it into a format that DFSMSrmm is able to use. This process is shown in Figure 6-2.
Figure 6-2 The conversion process: extraction produces L-, D-, O-, and K-records and a report; conversion builds ADDVRS commands and VRS, OWNER, BIN, VOLUME, and DSN records; these are loaded into the DFSMSrmm CDS, and UXTABLE is built and loaded.
There are three major phases in this process:
1. Extraction phase, in which the information that DFSMSrmm needs is derived from your current tape management system database.
2. Conversion phase, in which the data from the previous phase is converted to a format that, after being loaded into a VSAM KSDS data set, is usable by DFSMSrmm.
3. Post-processing phase, in which the VSAM KSDS data set is loaded, the DFSMSrmm CDS control record is created, and a comparison utility is run to verify completeness and accuracy.
For more detailed information about the conversion process, refer to the most current DFSMSrmm conversion redbook applicable to your installation.
The API flow: a REXX program in user space issues RMM commands through TSO and receives its output in REXX variables, while an assembler program calls RMM directly using the EDGXCI macro.
Using the API, you can issue any of the RMM TSO subcommands from an assembler program. You can use the API to obtain information about DFSMSrmm resources and then use the data to create reports or to implement automation. The sample installation exit EDGUX100 shows how to use the API call to list a volume record from the DFSMSrmm control data set. The use of the API is optional, and you must have High Level Assembler installed on your system in order to assemble the assembler language programs. For further details, refer to z/OS V1R3.0 DFSMSrmm Application Programming Interface, SC26-7403.
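The REXX path requires no assembler at all: RMM TSO subcommands can be driven directly from a REXX exec, with output returned in REXX variables. A minimal hedged sketch; the volser A00001 is a placeholder:

   /* REXX - list a volume and check the return code */
   ADDRESS TSO "RMM LISTVOLUME A00001"
   IF rc = 0 THEN
     SAY 'Volume listed successfully'
   ELSE
     SAY 'RMM LISTVOLUME failed, rc='rc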
PREACS(NO|YES)

Specify this operand to control whether DFSMSrmm-supplied and EDGUX100 installation exit-supplied values are input to SMS pre-ACS processing.
NO   Specify NO to avoid DFSMSrmm pre-ACS processing using the DFSMSrmm EDGUX100 installation exit.
YES  Specify YES to enable DFSMSrmm pre-ACS processing using the DFSMSrmm EDGUX100 installation exit.
Default: NO.
SMSACS(NO|YES)
Specify this operand to control whether DFSMSrmm calls SMS ACS processing to enable use of storage group and management class values with DFSMSrmm.
NO   Specify NO to prevent DFSMSrmm from calling SMS ACS processing to obtain management class and storage group names. DFSMSrmm system-based scratch pooling, and scratch pooling and VRS management values based on the EDGUX100 installation exit, are used.
YES  Specify YES to enable DFSMSrmm calls to SMS ACS processing to obtain management class and storage group names. If values are returned by the SMS ACS routines, they are used instead of the DFSMSrmm and EDGUX100 decisions.
Default: NO.
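These operands are coded on the OPTION command in the EDGRMMxx PARMLIB member. A hedged fragment, added to your existing OPTION command alongside its other operands:

   OPTION PREACS(YES) SMSACS(YES)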
Implementation tasks
Update the EDGRMMxx PARMLIB member with the PREACS and SMSACS options to enable the function. Before you enable SMS ACS support in your installation, at a minimum you must make a preventive change in your SMS ACS routines to keep DFSMSrmm from incorrectly processing tape volumes. Update your ACS management class (MC) and storage group (SG) routines to check whether the ACS environment variable (&ACSENVIR) is set to either RMMPOOL or RMMVRS, and if it is, avoid setting an MC or SG name. You can add statements such as the following to your MC and SG routines:
WHEN (&ACSENVIR = 'RMMPOOL' | &ACSENVIR = 'RMMVRS')
  DO
    EXIT
  END
If you do not make this change, it is possible for DFSMSrmm to use a management class name or a storage group name that was incorrectly returned by the ACS routines.
Storage locations
Storage locations are those places outside the removable media library where you send removable media. Storage locations are not part of the removable media library, because the volumes are not generally available for immediate use, and volumes cannot be returned to scratch status while they are in these locations. Storage locations are typically used to store removable media that are kept for disaster recovery or vital records. DFSMSrmm manages two types of storage locations: installation-defined storage locations and DFSMSrmm built-in storage locations. DFSMSrmm provides shelf-management of storage locations by assigning bin numbers to shelf locations within a storage location. DFSMSrmm automatically provides shelf-management for the built-in locations; this means that DFSMSrmm assigns bin numbers to each volume in a built-in storage location.
You can define an unlimited number of installation-defined storage locations, using any name of up to eight characters for each storage location. Within an installation-defined storage location, you can define the type or shape of the media in the location. You can also define the bin numbers that DFSMSrmm assigns to the shelf locations in the storage location. You can request DFSMSrmm shelf management when you want DFSMSrmm to assign a specific shelf location to a volume in the location. We recommend the use of installation-defined storage locations. All that you can do with built-in storage locations, you can do with installation-defined storage locations, and more. In Table 6-1 you can see an overview of the differences between built-in and installation-defined storage locations.
Table 6-1 Differences between built-in and installation-defined storage locations
Built-in storage locations:
- Number: 3 predefined
- Name: LOCAL, DISTANT, REMOTE
- Bin numbers: from 1 to 999999
- Priority: default priority between locations
- Segregate: you cannot
- Shelf-managed: automatically
Installation-defined storage locations can be subdivided based on the media that resides in the location. For example, you can identify part of a storage location for cartridges and another part for reels. To do this you should provide a media name when you add bin numbers so the volumes are sent to the correct part of the storage location.
You can decide how volumes are shelf-managed. Shelf management is required if you want volumes stored in a specific slot, such as a rack number or bin number. A shelf location is not required if the volume is stored in a robotic tape library. Depending on how you define the locations, you will use one way or the other. To not use shelf locations, define the storage location using:
LOCDEF LOCATION(loc_name) TYPE(STORAGE,HOME) MANAGEMENTTYPE(NOBINS)
(And do not define rack numbers that match the volume serial numbers.) To use the rack number as the shelf location, define the storage location in this way:
LOCDEF LOCATION(loc_name) TYPE(STORAGE,HOME) MANAGEMENTTYPE(NOBINS)
(And define rack numbers that match to the volume serial numbers, or use the POOL operand when adding or moving volumes.) To use the bin numbers as shelf locations, define the storage location using:
LOCDEF LOCATION(loc_name) TYPE(STORAGE,HOME) MANAGEMENTTYPE(BINS)
(And do not define rack numbers that match the volume serial numbers.) To assign or to change the assignment of shelf locations for volumes in the SHELF location or in a system-managed library, use the RMM ADDVOLUME and CHANGEVOLUME subcommands with the RACK or POOL operands. DFSMSrmm does not automatically initiate assignment of rack numbers as the shelf location for these volumes. When you specify the LOCATION operand on CHANGEVOLUME and specify a storage location, it does not set the HOME location. To set the HOME location, you must specify the HOME operand. When implementing this function, follow the steps that describe your current situation.
3. Use the DFSMSrmm CHANGEVOLUME subcommand to set the home location of volumes to be assigned to a specific storage home location:
RMM CV volser HOME(storname)
4. Use the DFSMSrmm CHANGEVOLUME subcommand to set the current location of volumes already at the storage home location:
RMM CV volser LOCATION(storname)
If the storage location is not shelf managed, you can include the CMOVE operand to confirm the move is completed. If the storage location is shelf managed, you must run EDGHSKP DSTORE processing to assign shelf locations to the volumes. If you want specific shelf locations assigned, you can include the BIN operand on the CV subcommand.
When you enter volumes into the library, issue the following command:
RMM CV volser LOCATION(ACS0) HOME(ACS0) CMOVE
When a volume is ejected from the library, ensure that DFSMSrmm knows where the volume is going. If it is not being moved by DSTORE processing, tell DFSMSrmm where it is being stored. To move it, issue the following command:
RMM CV volser LOCATION(SHELF) POOL(T*) CMOVE
In this way you can easily identify whether a volume is inside the robot library or outside. You can use the TSO RMM subcommands via the DFSMSrmm API, perhaps from an exit provided by the robot library vendor, such as SLSUX06.
2. In this situation, your data center is split between two separate centers, with active volumes in both. With the previous implementation, all volumes were located in SHELF. Now you can give your second data center a storage location name and manage it as a data center. You can easily identify where your volumes are. Give your second data center library a location name, such as LIB, and define it to DFSMSrmm using the LOCDEF command:
LOCDEF LOCATION(LIB) TYPE(STORAGE,HOME) MEDIANAME(REELS,CARTS,*) MANAGEMENTTYPE(NOBINS)
When you define volumes for use in this second data center, issue the following command:
RMM AV volser LOCATION(LIB) STATUS(SCRATCH) MEDIANAME(CARTS)
Bin reuse
A bin can become reusable as soon as a move of a volume out of that bin has been started. Use the REUSEBIN operand to control how DFSMSrmm reuses bins when a volume is moving from a bin. There are two options:
- CONFIRMMOVE: When a volume moves out of a bin, DFSMSrmm does not reuse the bin until the volume move has been confirmed.
- STARTMOVE: A bin can be reused as soon as a volume starts moving out of it.
CONFIRMMOVE is the default.
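REUSEBIN is specified in the EDGRMMxx PARMLIB member. A hedged fragment, added to your existing OPTION command:

   OPTION REUSEBIN(STARTMOVE)

With STARTMOVE, a bin may be reassigned before the volume has physically left it, so choose this option only if your operational procedures can tolerate that.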
Reassign processing
Inventory management (DSTORE) processing can now reassign bins for moving volumes. REASSIGN applies only to those volumes that are already moving from other than a bin-managed storage location, where the required location is either a bin-managed storage location or is different from the destination. When you specify REASSIGN, you cancel the move for these volumes and request that the move be restarted so that DFSMSrmm can assign these volumes to other locations or bins. A volume is reassigned when DSTORE(LOCATION(...)) is specified if at least one of the LOCATION subparameter pairs matches the volume's current location and destination.
Command enhancements
The following commands have been changed to allow for the enhanced bin management functions:
- LISTCONTROL: Output reflects extended bin management support.
- LISTVOLUME: Output reflects extended bin management support.
- LISTBIN: Output reflects extended bin management support.
- SEARCHBIN: Output reflects extended bin management support. Also, the default of INUSE has been removed. If neither INUSE nor EMPTY is specified in the search request, DFSMSrmm lists all bins regardless of their status.
- SEARCHRACK: Output reflects extended bin management support. Also, the default of INUSE has been removed. If neither INUSE nor EMPTY is specified in the search request, DFSMSrmm lists all racks regardless of their status.
Performance
Inventory management DSTORE run time could increase slightly if extended bin support is enabled, or if DSTORE(INSEQUENCE) is used. The amount of the increase depends on the relative number of volumes that move to and from bin-managed storage locations.
Usage considerations
These are some important considerations regarding usage:
- Since the default of INUSE has been removed from the SEARCHBIN and SEARCHRACK commands, you need to invoke the command only once to obtain a complete list of bins or racks. You can change any existing user-written programs and REXX programs that issue two or more searches to retrieve all rack or bin numbers to make only one search request.
- Extended bin management is now the default for all new conversions from other tape library management systems to DFSMSrmm.
- In your RMMplex, your system-managed libraries have to be connected to at least one system that has enhanced bin management installed and enabled.
Coexistence
To avoid corruption of the DFSMSrmm control data set, before you enable extended bin support, ensure that for all DFSMSrmm systems sharing a control data set in an RMMplex, you have installed coexistence APAR OW49863 or APAR OW47947 or are running all systems on z/OS V1R3 or higher.
If you do not enable extended bin support, then there is no requirement to install these APARs.
Migration tasks
To implement these enhancements, consider each task listed in the following sections. The task list is split into required and optional tasks: Required tasks apply to all DFSMSrmm installations enabling the function. Optional tasks apply to any DFSMSrmm installation that is going to implement some or all of the enhancements.
- Move volumes to selected storage locations by running storage location management (DSTORE) with a location specified. Refer to "DSTORE by location" on page 138 for details.
- Redirect and reassign a moving volume by running the EDGHSKP utility with the DSTORE(REASSIGN) parameter.
- Run the EDGHSKP utility with the DSTORE(INSEQUENCE) parameter to assign volumes to storage locations in volume serial number and bin number sequence.
Some or all of these optional migration tasks may be appropriate to your installation.
The journal data set MUST be large enough that it does not fill up during inventory management processing. A summary of journal record growth is listed here:
- Each confirm move into a bin requires an additional 0.3 KB.
- Each start move from a bin requires an additional 0.3 KB.
Also, during DSTORE(INSEQUENCE):
- Each start move into a bin requires an additional 1.6 KB.
In this statement, from_location:to_location is a pair of location names separated by a colon. The from_location is the current location a volume should move from. The to_location is the name of the required location a volume should move to. If you omit to_location, DFSMSrmm uses * as the default. You can specify 1 to 8 pairs of from_location:to_location names. DSTORE processing is then performed for a volume if at least one of the location pairs matches the volume's current location and destination. The from_location and to_location names can be specified in one of the following ways:
Specific:
A specific location name is 1 to 8 characters. The location names you specify are not validated against the DFSMSrmm LOCDEF entries or the names of SMS libraries. Example 1: SHELF. This means just this one location is included.
Generic:
The location names can be specified in one of the following ways:
- All locations can be specified using a single asterisk (*). Example 2: *. This means all locations are included.
- Use an asterisk to specify all locations that begin or end with specific characters. Example 3: ATL*. This means all location names starting with the characters ATL are included.
- Use the percent sign (%) in the location name to replace a single character. Up to eight percent signs can be used in a location name mask. Example 4: ABC%%%. This means all location names starting with ABC, with any other characters, up to a total six-character name.
- Use a combination of the asterisk and the percent sign. Example 5: *AB%.
Sample jobs for performing inventory management functions are provided in SAMPLIB members EDGJDHKP and EDGJWHKP.
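As a sketch, an EDGHSKP job that runs DSTORE for a single location pair might look like the following; the location names SHELF and LIB follow the earlier examples, and the MESSAGE data set name is a placeholder:

   //HSKP     EXEC PGM=EDGHSKP,PARM='DSTORE(LOCATION(SHELF:LIB))'
   //MESSAGE  DD DSN=RMM.MESSAGE,DISP=SHR
   //SYSPRINT DD SYSOUT=*

See also the SAMPLIB members EDGJDHKP and EDGJWHKP for complete inventory management jobs.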
Migration considerations
Once installed, you must either use the sample JCL shipped in SAMPLIB as EDGJRPT, or update any existing JCL that calls the REXX EXEC EDGRRPTE. When running in a sysplex with mixed levels, ensure that the housekeeping functions, including report generation, occur on the system with the highest level of DFSMSrmm.
The report generator allows you to create reports using sequential data sets as input. You must specify mappings of the records in the input data set so that the report generator can pick out information from it. You can use the EDGHSKP utility to create a DFSMSrmm extract data set as an input file for the report generator. If you use the extract data set as input, DFSMSrmm provides mapping macros for the records in the extract data set, as well as report types that associate the input file with the needed mapping macros. The default version of the report generator uses DFSORT ICETOOL as the reporting tool to create the reports; a sample reporting tool using SYNCSORT is also provided. If you want to use other types of input data sets or other reporting tools, you can modify the report generator to accomplish this; you must then also provide mapping of the input data set records. There are no special requirements for using the new report generator, other than having DFSORT installed if you want to use the default reporting tool and input data set.
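As a sketch, the extract data set can be created with an EDGHSKP step such as the following. The job layout, data set names, and space values are placeholders for your installation's own; the XREPTEXT DD name is the one the report generator expects (see the generated JCL discussion later in this chapter):

   //EXTRACT  EXEC PGM=EDGHSKP,PARM='RPTEXT'
   //MESSAGE  DD DSN=RMM.HSKP.MESSAGES,DISP=SHR
   //XREPTEXT DD DSN=RMM.EXTRACT,DISP=(NEW,CATLG),
   //            UNIT=SYSDA,SPACE=(CYL,(50,10),RLSE)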
In addition, a JCL library to save the generated report JCL can be specified. The default library name is userid.REPORT.JCL. The default libraries are defined in the EDGRMAIN exec; refer to Implementation steps on page 143 for more information. All four libraries must be partitioned data sets with fixed 80-byte records. If the two user libraries are not predefined, DFSMSrmm allocates them automatically with a primary and secondary space of 10 tracks and 50 directory blocks.

All data sets need to be specified in normal ISPF convention: fully qualified within single quotes, or without quotes. If a data set name is specified without quotes, the name is automatically expanded to a fully qualified name using the TSO PREFIX value and enclosed within single quotes. If the user has specified NOPREFIX in the ISPF profile, the RACF user ID is used as the high-level qualifier.

You can select from predefined report types and report definitions, or create your own. To specify the four libraries needed for the report generator, you can use the ISPF panels. The library names are initially set to the values defined in the EDGRMAIN exec; you can change the names if you want. If you select option 0 (OPTIONS) in the Removable Media Manager primary menu, you receive the Dialog Options Menu panel, Figure 6-4.
Panel  Help
------------------------------------------------------------------------------
                       DFSMSrmm Dialog Options Menu
Option ===>

   1  USER    - Specify processing options
   2  SORT    - Specify list sort options
   3  REPORT  - Specify report options
Enter selected option or END command. For more info., enter HELP or PF1.
From this panel, select option 3 (REPORT), and you get the Report Options panel, Figure 6-5. In this panel you enter the name of the four libraries required.
Panel  Help
------------------------------------------------------------------------------
                         DFSMSrmm Report Options
Command ===>

Report definition libraries:
  User . . . . . . . . . . .  'MHLRES2.REPORT.LIB'
  Installation . . . . . . .  'RMM.REPORT.LIB'
  Product  . . . . . . . . .  'SYS1.SAMPLIB'

User report JCL library . . . 'MHLRES2.REPORT.JCL'
Implementation steps
There are a number of steps to follow before being able to use the report generator, and these steps will vary depending on the type of user.
Storage administrator
These steps are for the storage administrator:
1. Define the installation library to be used in your installation as user library in the Report Options panel and allocate it manually.
2. Define the JCL library and specify the product library. The product library by default is SYS1.SAMPLIB.
3. Provide READ authority to the installation and product libraries for the necessary users.
4. Select the Report Types panel and add or change the report types shipped with the product, setting them up for your users.
5. Select the Report Definition panel and add or change the reports shipped with the product, setting them up for your users.
6. Define the installation library name, as well as the product library name if not equal to SYS1.SAMPLIB, in EXEC EDGRMAIN.
You may want to consider leaving the Product library name blank, because the RMM dialog searches the entire library when searching for reports. If you do this, you need to copy the IBM-provided samples from SAMPLIB to your installation library.
7. Update the default naming convention in EXEC EDGRMAIN for the user library name and the JCL library name, if necessary.
Report types are only used when a new report definition is created. Once the report definition has been created, all report type information is copied to the report definition; changes to a report type are not reflected in existing report definitions. The report type contains the following information:

The report type name: This is the unique identifier among all the report types.
The report type description: This is a required field.
The name of the associated macros: If a record is mapped by the concatenation of more than one macro (for example, DFSMSrmm SMF records have two macros: EDGSMFAR for the SMF header and EDGSVREC for the DFSMSrmm SMF volume record), you can specify up to five macros. The macro structures are concatenated in the sequence they are specified. Macros must be defined in ASSEMBLER. One macro name is required.
The name of the macro library: All macros need to be defined in the same library. This is a required field.
The report input data set: This is the data set containing the input records for the report, mapped by the macro definition. If you are using GDGs or always use the same data set name, the data set name can be specified here. If you plan to use different input data set names, leave this field empty, since you will be prompted for the name in any case before the report JCL is created. The input data set must be a sequential file. This is an optional field.
The report creation information: This is the creation timestamp and user ID. This information is created automatically.
The report last change information: This is the last change timestamp and user ID. This information is created automatically.
The record selection criteria: Record selection criteria specify how records are selected from the input data set in case different record types exist. As with selection of records in the report definition dialog, you can specify criteria for the different fields; refer to The report definition support on page 147 for more details about the selection criteria. These criteria are not displayed in the ISPF table: you have to select the entry before you can specify the criteria. The selection criteria are passed to the report definition and can be changed there as well.
The reporting tool definition contains the following information:

The reporting tool EXEC name: This is the name of the member that creates the reporting JCL out of the report definition. It is the unique identifier among all reporting tools.
The reporting tool description: This is the synonym of the reporting tool (for example, ICETOOL, SYNCTOOL, SAS). This is a required field.
The column space: This is the number of spaces between the report columns. The value depends on the reporting tool being used and is only used for the calculation of the report width. This is a required field.
The report creation information: This is the creation timestamp and user ID, generated automatically.
The report last change information: This is the last change timestamp and user ID, generated automatically.
Panel  Help
------------------------------------------------------------------------------
                    DFSMSrmm Command Menu - z/OS V1R3
Option ===>

   0  OPTIONS   - Specify dialog options and defaults
   1  VOLUME    - Volume commands
   2  RACK      - Rack and bin commands
   3  DATA SET  - Data set commands
   4  OWNER     - Owner commands
   5  PRODUCT   - Product commands
   6  VRS       - Vital record specifications
   7  CONTROL   - Display system control information
   R  REPORT    - Report generator
Enter selected option or END command. For more info., enter HELP or PF1.
Panel  Help
------------------------------------------------------------------------------
                      DFSMSrmm User Menu - z/OS V1R3
Option ===>

   0  OPTIONS   - Specify dialog options and defaults
   1  VOLUME    - Display list of volumes
   2  DATA SET  - Display list of data sets
   3  PRODUCTS  - Display list of products
   4  OWNER     - Display or change owner information
   5  REQUEST   - Request a new volume
   6  RELEASE   - Release an owned volume
   R  REPORT    - Work with reports
Enter selected option or END command. For more info., enter HELP or PF1.
Panel  Help
------------------------------------------------------------------------------
                    DFSMSrmm Librarian Menu - z/OS V1R3
Option ===>

   0  OPTIONS   - Specify dialog options and defaults
   1  VOLUME    - Display or change volume information
   2  ADDPP     - Add a product
   3  PRODUCTS  - Search for products
   4  OWNER     - Display or change owner information
   5  RACKS     - Manipulate library racks and storage location bins
   6  ADDVOL    - Add a volume to the library
   7  SCRATCH   - Add SCRATCH volumes to the library
   8  RELEASE   - Release volumes
   9  CONFIRM   - Confirm librarian/operator actions
   A  REQUEST   - Assign a user volume to an owner
   B  STACKED   - Add stacked volumes
   R  REPORT    - Work with reports

For more info., enter HELP or PF1.
From each of these menu panels, by selecting option R (REPORT) or by using the REPORT primary command within the RMM dialog, you can reach the Report Generator panel (Figure 6-9).
Panel  Help
------------------------------------------------------------------------------
                        DFSMSrmm Report Generator
Option ===>

   0  OPTIONS         - Specify dialog options and defaults
   1  REPORT          - Work with reports
   2  REPORT TYPE     - Work with report types
   3  REPORTING TOOL  - Work with reporting tools
Enter selected option or END command. For more info., enter HELP or PF1.
From the Dialog Options Menu panel (Figure 6-4 on page 142), by selecting option 3 (REPORT), or from the Report Generator panel (Figure 6-9), by selecting option 0 (OPTIONS), you can reach the Report Options panel (Figure 6-5 on page 143). In this panel you can change the names of the different libraries needed for the report generator.
Panel  Help
------------------------------------------------------------------------------
                     DFSMSrmm Report Definition Search
Command ===>

Report name . .                May be generic. Leave blank for all reports.
User id . . . .                Leave blank for all user ids.

Libraries (enter S):           Select one or more library.
    User                       Default are all defined libraries.
    Installation
    Product
The following line commands will be available when the list is displayed:
A - Add a new report definition          D - Delete a report definition
G - Generate and save the JCL            J - Edit and manually submit the JCL
N - Copy a report definition             S - Display or change the report definition
T - Select a reporting tool
From this panel you can enter a Report name to search for report definitions by name; the report definition name is the name of a member in one of the report definition libraries. You can also use the User id field to search for report definitions updated by a specific user. Finally, you can select one or more of the three specified libraries: user, installation, or product.
You can also leave these fields blank to get a list of all available report definitions. In Figure 6-11 you can see an example Report Definitions panel with a list of all the IBM supplied reports.
Panel  Help
------------------------------------------------------------------------------
DFSMSrmm Report Definitions                                Row 1 to 18 of 18
Command ===>                                               Scroll ===> PAGE

The following line commands are valid: A,D,G,J,N,S, and T

S  Name      Report title                    Report type                 User id
-  --------  ------------------------------  --------------------------  -------
S  EDGGAUD1  SMF Audit of Volumes by Volser  SMF Records for Volumes     RMM
   EDGGAUD2  SMF Audit of Volume by Rack     SMF Records for Volumes     RMM
   EDGGR01   Scratch tapes by volume serial  Extended Extract Records    RMM
   EDGGR02   List of SCRATCH Volumes by Dat  Extended Extract Records    RMM
   EDGGR03   Inventory List by Volume Seria  Extended Extract Records    RMM
   EDGGR04   Inventory List by Dataset Name  Extended Extract Records    RMM
   EDGGR06   Inventory of Volumes by Locati  Extended Extract Records    RMM
   EDGGR07   Inventory of Dataset by Locati  Extended Extract Records    RMM
   EDGGR08   Inventory of Bin by Location    Extended Extract Records    RMM
   EDGGR09   Datasets in Loan Location       Extended Extract Records    RMM
   EDGGR10   Volumes in Loan Location        Extended Extract Records    RMM
   EDGGR11   List MultiVolume and MultiFile  Extended Extract Records    RMM
   EDGGR12   Movement Report by Dataset      Extended Extract Records    RMM
   EDGGR13   Movement Report by Bin          Extended Extract Records    RMM
   EDGGR14   Movement Report by Volume Seri  Extended Extract Records    RMM
   EDGGR15   Volume Inventory Including Vol  Extended Extract Records    RMM
   EDGGSEC1  Report of Accesses to Secure V  SMF Security Records        RMM
You can create a new report definition by selecting option A in the Report Definitions panel. You can also create a new report definition by copying an existing one, by selecting option N in the Report Definitions panel. In both cases, you receive a screen prompting for a new report name.
Enter the report name . . . .
When you select option A, you must then select the report type (Figure 6-12) and the reporting tool to use in the new report definition (Figure 6-13).
Panel  Help
-----------------------------------------------------------
Select Report Type                         Row 1 to 12 of 17
Command ===>                               Scroll ===> PAGE

S  Report type                     Name
-  ------------------------------  --------
   Extended Extract Records        EDGRXEXT
   Extract Records for Bins        EDGRSEXT
   Extract Records for Data Sets   EDGRDEXT
   Extract Records for Owners      EDGROEXT
   Extract Records for Products    EDGRPEXT
   Extract Records for Racks       EDGRREXT
   Extract Records for Volumes     EDGRVEXT
   Extract Records for VRSs        EDGRKEXT
   HSKP ACTIVITY file records      EDGACTRC
   SMF Records for Bins            EDGSSREC
   SMF Records for Data Sets       EDGSDREC
   SMF Records for Owners          EDGSOREC
Panel  Help
-----------------------------------------------------------
Select Reporting Tool                       Row 1 to 2 of 2
Command ===>                               Scroll ===> PAGE

S  Reporting tool
-  ------------------------------
   ICETOOL
   SYNCTOOL
********************* Bottom of data **********************
The next step is to select the criteria fields for creating the report. In the Report Definition panel, Figure 6-14, you define the header, the footer, the reporting tool, and the fields to be reported.
Panel  Help
------------------------------------------------------------------------------
DFSMSrmm Report Definition - SCRATCH                      Row 1 to 18 of 171
Command ===>                                              Scroll ===> PAGE

Report title . . . Scratch volumes
Report footer . .  ITSO
Reporting tool . : ICETOOL

Use END to save changes, NOSAVE to ignore
Select a field name with S to specify a field selection criterion

S  CO  SO  Field name  Column header text                 CW  Len  Typ
-  --  --  ----------  ---------------------------------  --  ---  ---
   1   1A  XVVOLSER    Volume serial number               20    6  C
   2       XVMDMVID    Multi-dataset multi-volume id      29    8  C
   3       XVUSE       Volume use count                   16    4  C
   4       XVSTORID    Current location name              21    8  C
   5       XVOWNID     Volume owner userid                19    8  C
   6       XVLRDDAT    Date volume last read              21   10  C
   7       XVLWTDAT    Date volume last written           24   10  C
           RXTYPE      Record type - C'X'                 18    1  C
           XVPVOL      Previous volume in sequence        27    6  C
           XVNVOL      Next volume in sequence            23    6  C
           XVCRDATE    Create date of volume record       28   10  C
           XVCRTIME    Create time volume record (hhmms   32    6  C
           XVCRSID     Create system id of volume recor   32    8  C
           XVLCDATE    Last change date of volume recor   32   10  C
           XVLCTIME    Last change time of volume recor   32    6  C
           XVLCUID     Last change user id of volume      29    8  C
           XVLCSID     Last change system id of volume    31    8  C
           XVEXPDTO    Expiration date - original         27   10  C
The Report Definition panel displays a list of fields in the record and any report criteria specified previously or with the report type. Use the S line command to select the fields for which you would like to view or change the selection criteria. Update the column header text as required for your reporting columns. You can also enter data in the panel to specify which fields are to be included in the report, sorted on, and used to group records onto a new page. The grouped fields are used as the primary sort key and to separate the records into pages. Once you have selected the criteria fields for your report, you get the Report Criteria panel, Figure 6-15, in which you can define the criteria for selecting records in your report.
Panel  Help
------------------------------------------------------------------------------
DFSMSrmm Report Criteria - SCRATCH                           Row 1 to 2 of 2
Command ===>                                                 Scroll ===> PAGE

Report title : Scratch volumes

Use END to save changes, NOSAVE to ignore
The following line commands are valid: B,D,I,N,P,R, and T
Comparison operators: EQ =, NE <>, GT >, GE >=, LT <, LE <=, IN, BW
Conjunction: AND, OR, AND(, )AND

S  Field name            Op  Compare value(s)                  Conj  Len  Typ
-  --------------------  --  --------------------------------- ----  ---  ---
   RXTYPE                EQ  X                                          1  C
   XVVOLSER              EQ  8*                                         6  C
******************************* Bottom of data ********************************
When you select option I in the previous panel, you get the Report Criteria Details panel, showing the details of the field you selected; see Figure 6-16.
Field name . . . :  XVVOLSER
Operation  . . . .  EQ
Compare value(s) .  8*
Compare value(s) .
Conjunction  . . .
Length . . . . . :  6
Type . . . . . . :  C
When creating a new report by copying an existing one, all the values from the old definition remain the same for the new one.
Input data set . . . 'RMM.EXTRACT' New data set name to be stored in the report definition . . . . . N (Y/N)
The input data set name should normally be the same as the data set specified on the XREPTEXT DD statement in the EDGHSKP JCL, or the data set containing the SMF extract records from EDGJSMFP. Once the JCL has been generated, it is saved into the user report JCL library specified on the Report Options panel (Figure 6-5 on page 143); you can then edit and submit it by selecting J from the Report Definitions panel. For further information, refer to z/OS V1R3.0 DFSMSrmm Reporting, SC26-7406.
An application requests buffered tape marks through the assembler DCBE macro option SYNC=NONE. This results in no synchronization of the controller buffer to the tape media when tape marks are written. Currently this option only has an effect on IBM 3590 MAGSTAR tape subsystems. Buffered tape mark support allows multiple files to be written at streaming speed when the volume disposition leaves the tape positioned at the end of the file just created. If the device supports buffered tape marks, the OPEN, EOV, and CLOSE functions take advantage of it when writing; this can save several seconds of real time. If the device does not support buffered tape marks, this option has no effect. Similarly, this option has no effect on older-level systems. The system does not ensure that user data, tape marks, and data set labels are actually written to the tape media when transitioning to a new volume or when closing the data set. However, if the application ensures that the CLOSE for the last file written is issued with the REWIND or REREAD option, a synchronize failure results in the job abending and a message externalizing the number of lost blocks, that is, the number of blocks written to the buffer but not to the tape media.
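As a sketch of how an application might request this, the DCB can point to a DCBE that specifies SYNC=NONE. The labels, DDNAME, and the MACRF/DSORG options here are hypothetical and depend on the application:

   OUTDCB   DCB   DDNAME=TAPEOUT,DSORG=PS,MACRF=PM,DCBE=OUTDCBE
   OUTDCBE  DCBE  SYNC=NONE          Allow buffered (unsynchronized) tape marks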
Chapter 7.
A single XRC session can manage somewhere between 1000 and 3000 3390-3 volumes, depending on the workload characteristics: workloads with a high write rate are at the low end of the scale, and those with a high read-to-write ratio and low write rate are at the higher end. For most workloads, a single XRC session can manage approximately 1500 3390-3 volumes. The limiting factors for a single XRC session are the single central processor (CP) speed and the amount of available storage; as processor speeds and storage capabilities increase, the number of volumes capable of being managed per XRC session also increases. While an XRC session could potentially support up to 3000 primary volumes, this limit may not be practical in your environment. Two enhancements have been made to XRC to cater for larger environments, where the number of volumes is greater than 2000-3000, or a large number of storage controllers (or both) are required to be copied using XRC for disaster recovery purposes. The CPU and the amount of storage available are still limiting factors, as are the workload characteristics. The two enhancements are: Multiple XRC and Coupled XRC. Both are standard features of z/OS V1R3 DFSMS, and both are available for earlier releases back to DFSMS 1.4 via APAR OW43316.
How it works
Multiple XRC is implemented by running up to 6 address spaces for the SDM, ANTAS000 through ANTAS005. ANTAS000 is the primary address space and handles TSO commands and API requests that control XRC. This address space is started at IPL time.
ANTAS001 through ANTAS005 may or may not be present depending on the number of XRC sessions you have started using the XSTART command.
A minimum of two journal data sets is required, and you can allocate up to 16. Refer to z/OS DFSMS Advanced Copy Services, SC35-0428-01, for sizing information.

Control data set hlq.XCOPY.session_id.CONTROL: The control data set contains consistent group information on the secondary volumes and the journal data set. It contains information necessary for recovery operations. The control data set acts as the table of contents for the session.
Example: The control data set keeps track of data written to secondary volumes, the location of unwritten data in the journal set, and which group to start recovery with.
The control data set must be a sequential data set. We recommend the following allocation:
DCB=(RECFM=FB,LRECL=15360,BLKSIZE=15360,DSORG=PS)
Refer to z/OS DFSMS Advanced Copy Services, SC35-0428-01, for sizing information.

State data set hlq.XCOPY.session_id.STATE: The state data set contains the status of the XRC session and of the associated volumes that XRC is managing. The state data set is updated if an XADDPAIR, XDELPAIR, XSET, XSUSPEND, XRECOVER, or XEND command is issued, or whenever a volume state changes. Allocate the state data set on disk as an SMS-managed partitioned data set extended (PDSE) data set with the following attributes:
DCB=(RECFM=FB,LRECL=4096,BLKSIZE=4096,DSORG=PO),DSNTYPE=LIBRARY
Allocate ten tracks per storage control session, plus one track for each volume pair in the storage control session. For example, four storage control sessions with 50 volume pairs each need at least 4 x 10 + 200 = 240 tracks. Try to plan for expected future growth when you initially allocate the state data set: there is no harm in over-allocating this data set, but it is inconvenient if you under-allocate and then have to re-size it once XRC has been implemented. The default for the HLQ is SYS1. If you choose to use an HLQ other than SYS1, you must specify the HLQ keyword on the XSTART command. The session_id specified in the data set names must match the session_id keyword specified on the XSTART command.
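Putting the attributes and sizing together, a state data set for that four-session example could be pre-allocated as sketched below. The data set name and storage class are placeholders; the data set must be SMS-managed so that a PDSE is created:

   //ALLOCST  EXEC PGM=IEFBR14
   //STATE    DD DSN=SYS1.XCOPY.SC64XRC.STATE,DISP=(NEW,CATLG),
   //            DCB=(RECFM=FB,LRECL=4096,BLKSIZE=4096,DSORG=PO),
   //            DSNTYPE=LIBRARY,STORCLAS=SCXRC,
   //            SPACE=(TRK,(240,24))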
Configuring CXRC
To configure CXRC, a master data set is required in addition to the three data sets required for each XRC session: Master data set hlq.XCOPY.msession.MASTER. The master data set ensures recoverable consistency among all XRC sessions contained within the Coupled XRC system. The HLQ of the data set can be changed on the XCOUPLE ADD command using the MHLQ keyword; this HLQ does not have to be the same as that for the individual XRC sessions, and the default is SYS1. The msession specified in the data set name must match the msession keyword specified on the XCOUPLE ADD command. When XRC sessions are added to a CXRC configuration, the state data set for each XRC session contains additional information identifying the master session it is coupled to; this information is updated when an XCOUPLE command is issued.
Allocate the master data set as physical sequential, not striped, and without defining secondary extents.
Note: The required size for the master data set is fixed at one cylinder. This size allows for 14 XRC sessions to be coupled.
Allocate the master data set with one cylinder primary space and zero cylinders secondary space as follows:
DCB=(RECFM=FB,LRECL=15360,BLKSIZE=15360,DSORG=PS),SPACE=(CYL,(1,0))
Allocate the master data set on a single disk device, and pre-allocate it before using the XCOUPLE command: only the space that is allocated at the time the XCOUPLE ADD command is issued will be available for XRC use. It is recommended that you place the master data set in a user catalog that contains only entries for the master data set.
Issue the XCOUPLE command for the SC64XRC session from the SC64 system.
XCOUPLE SC64 ADD MSESSION(COUPLED) MHLQ(MHLRES2)
Message ANTC8400I is displayed to indicate that an XRC session was added to a Coupled XRC session.
ANTL8800I XQUERY COUPLED MASTER MHLQ(MHLRES2)
ANTQ8300I XQUERY STARTED FOR MSESSION(COUPLED) MHLQ(MHLRES2) 361
ANTQ8202I XQUERY MASTER REPORT - 002
ANTQ8302I SESSION  STA VOL INT CMD  JOURNAL DELTA     RCV/ADV DELTA
ANTQ8303I ------------------------------------------------------------
ANTQ8304I SC63     ACT     Y        =00:00:00.000000  =00:00:00.000000
ANTQ8304I SC64     ACT NOV N
ANTQ8305I TOTAL=2 ACT=2 SUS=0 END=0 ARV=0 RCV=0 UNK=0
ANTQ8308I MSESSION RECOVERABLE TIME(2002.094 19:59:58.857134)
ANTQ8309I INTERLOCKED=1 NON-INTERLOCKED=1
ANTQ8301I XQUERY MASTER REPORT COMPLETE FOR MSESSION(COUPLED)
7.1.5 QUICKCOPY
XRC has also introduced a new keyword on the XADDPAIR function, which is used to define primary and secondary volume pairs. When QUICKCOPY is specified, XRC copies only the actual allocated space on the volume. The default on the XADDPAIR command is FULLCOPY, which ensures that function is maintained as it is today. You can change the default by using the XSET COPY command.
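As an illustration only, a pair could be added with QUICKCOPY as shown below. The session name SC64XRC and the volume serial numbers are placeholders, and the exact keyword syntax should be checked against z/OS DFSMS Advanced Copy Services, SC35-0428:

   XADDPAIR SC64XRC VOLUME(PRM001 SEC001) QUICKCOPY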
Appendix A.
Name      Type       Length  Description
--------  ---------  ------  -------------
SMF42JBN  Character  8       Jobname
SMF42PGN  Character  8       Program name
SMF42STN  Character  8       Step name
SMF42DDN  Character  8       DD name
SMF42DSN  Character  44      Data set name

Additional fields in this record are a mix of Fixed and Character types: SMF42SPQ, SMF42RSP, SMF42UNT, SMF42VDC, SMF42DCL, SMF42DCN, SMF42VMC, SMF42MCL, SMF42MCN, SMF42VSC, SMF42SCL, SMF42SCN, SMF42SGS, SMF42SGL, SMF42SGN.
Number of second backup objects written to optical
Number of bytes of second backup object data written to optical
Number of second backup objects read from optical
Number of kbytes of second backup object data read from optical
Number of second backup objects deleted from optical
Number of kbytes of second backup object data deleted from optical
Number of second backup objects written to tape
Number of kbytes of second backup object data written to tape
Number of second backup objects read from tape
Number of kbytes of second backup object data read from tape
Number of second backup objects deleted from tape
Number of kbytes of second backup object data deleted from tape
Subtype 34: Volume Recovery Utility:
Number of second backup objects written to optical
Number of kbytes of second backup object data written to optical
Number of second backup objects read from optical
Number of kbytes of second backup object data read from optical
Number of second backup objects deleted from optical
Number of kbytes of second backup object data deleted from optical
Number of second backup objects written to tape
Number of kbytes of second backup object data written to tape
Number of second backup objects read from tape
Number of kbytes of second backup object data read from tape
Number of second backup objects deleted from tape
Number of kbytes of second backup object data deleted from tape
Subtype 36: Single Object Recovery:
Offsets   Type       Length  Name      Description
131 (83)  Bitstring  1       FSRMFLGS  Flags from MWE
          1... ....          FSRFRTRY  When set to 1, the backup copy was made during a retry, after the first try failed because the data set was in use
Other flag bits indicate: when set to 1, this request completed successfully on a remote system; when set to 1, the request was completed by a tape already mounted; when set to 1, a remote host processed the request.
The remaining fields are a reserved field and the host ID that generated the request (only valid for recall requests).
Appendix B.
Maintenance information
Throughout this book there is recommended maintenance for the various components of DFSMS. This appendix contains the responder text for several of the APARs in order to provide you with the information you need to successfully implement the function in z/OS V1R3 DFSMS.
One recommendation is to make the primary space allocation of the index component (in tracks) equal to the number of cylinders the data set uses. This means that, for a data set using 200 cylinders, the index component should have a primary allocation of 200 tracks. Secondary space should always be defined. Large data sets could be defined with an index component space allocation of 2 cylinders with a smaller secondary space allocation.

PLEASE NOTE: REPLICATE is not ignored in the base code of DFSMS R150. OW44442 corrects this omission and should be applied as soon as possible so that IMBED and REPLICATE are treated together. Without OW44442, the IMBED/REPLICATE option is treated as NOIMBED/REPLICATE, which requires more index space than either IMBED/REPLICATE or NOIMBED/NOREPLICATE.

For R1F0, after applying the PTF for APAR OW41955, the new return code IEC161I RC254 is generated when an OPEN for OUTPUT is done against a VSAM IMBED/REPLICATE/KEYRANGE data set. The associated SFI code means IMBED (001), KEYRANGE (002), and REPLICATE (003). The new RC254 with SFI 1, 2, and 3 is informational only and does not require any immediate action; it is intended as an aid to help customers identify data sets currently defined with IMBED, REPLICATE, or KEYRANGE. If the message IEC161I RC254 is created, 00000000 is returned in REG15, and no ACBERFLG value is returned.

PLEASE NOTE: After converting (migrating) to R1F0, when a DEFINE CLUSTER contains the IMBED and/or REPLICATE keywords, they are ignored and the CLUSTER is defined as NOIMBED and/or NOREPLICATE. Thus, message IEC161I RC254 is no longer generated after the CLUSTER is re-DEFINEd under R1F0.

PLEASE NOTE: For further information regarding KEYRANGE data sets, please see II12896 and WSC Flash10072, which may be found at:
http://www-1.ibm.com/servlet/support/manager?rs=0&rt=0&org=ats&doc=F9AD8FC0B58E4A6A852569F5004ADC21
You may also find this Web site by:
1. Go to http://www.ibm.com
2. Select Support & downloads.
3. Type Flash10072 (without the quotes) in the Search for technical support by keyword(s) field.
4. Double-click on the title KEYRANGE Specification to be Ignored in Future Release of DFSMS.
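The index sizing recommendation above can be illustrated with an IDCAMS DEFINE; the cluster and component names, volume, key, and record-size parameters here are hypothetical, chosen only to show a 200-cylinder data component paired with a 200-track primary index allocation and a secondary allocation on both components:

     DEFINE CLUSTER (NAME(MY.TEST.KSDS) -
                     INDEXED -
                     KEYS(8 0) -
                     RECORDSIZE(200 200) -
                     VOLUMES(VSM001)) -
            DATA    (NAME(MY.TEST.KSDS.DATA) -
                     CYLINDERS(200 20)) -
            INDEX   (NAME(MY.TEST.KSDS.INDEX) -
                     TRACKS(200 20))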
Datamover selection
DFSMShsm can use either DFSMSdss or DFSMShsm itself as the datamover when performing either space or availability management. DFSMSdss has been the default datamover since the introduction of DFSMS/MVS V1R1 and is used in most DFSMShsm installations.
When DFSMShsm is the datamover, IDCAMS is invoked to manage VSAM data sets. Because of this, beginning with the next release of DFSMS, VSAM data sets with keyranges will have their keyranges removed during Recall and Recovery operations. To verify that your installation is not using DFSMShsm as the datamover, examine your DFSMShsm startup parmlib member for a patch to the Datamover Selection Table (DMVST). If this patch is being used, then DFSMShsm is being selected as the datamover. IBM recommends that only DFSMSdss be used as the datamover.
Migration
Attempts to migrate VSAM keyrange data sets with the DFSMShsm datamover will fail. VSAM keyrange data sets must be migrated using the DFSMSdss datamover.
Recall
VSAM keyrange data sets that were previously migrated with the DFSMShsm datamover will be recalled, but the keyranges will be removed. A warning message will be issued to indicate this. If possible, the data set should be recalled on a lower level system. VSAM keyrange data sets that were previously migrated with the DFSMSdss datamover will be recalled with the keyranges intact.
Backup
Attempts to back up VSAM keyrange data sets with the DFSMShsm datamover will be allowed, but a warning message will be issued. The warning message will indicate that the keyranges will be removed if the data set is recovered.
Recovery
VSAM keyrange data sets that were previously backed up with the DFSMShsm datamover will be recovered, but the keyranges will be removed. A warning message will be issued to indicate this. If possible, the data set should be recovered on a lower level system.
VSAM keyrange data sets that were previously backed up with the DFSMSdss datamover will be recovered with the keyranges intact. Other DFSMShsm functions remain unchanged.
The errors seen during processing of this GDG base are unpredictable, but may include MSGIGD07001I with RC14 RSN0 in module IGG0CLED, or MSGIDC3009I RC24 RSN12. There is a method to help detect whether or not you have any GDGs susceptible to this problem, and a step you can take to correct those old-format GDGs to avoid the problem entirely:
1. If you know the date when JDP2230 or OZ97150 was installed on your system, you may run a LISTCAT CREATE(xxxx) GDG ALL, where xxxx is the number of days that have passed since that date. This LISTCAT shows only those GDG bases that were defined prior to that date. If there are none, this problem should not occur in your environment.
2. If you issue an IDCAMS ALTER of the expiration date of:
   a. All of your GDG bases, or
   b. Only those that are listed in step 1 above,
then the old record format is automatically upgraded to the new format, and you will not be susceptible to this problem. The IDCAMS ALTER must be successful for this to occur. You may alter the expiration date to its current value; it is not necessary to alter it to a new date to correct the down-level record.
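A hedged sketch of these two steps as IDCAMS input follows. The GDG base name, day count, and date are illustrative placeholders only, not values from any real installation:

```
/* Step 1: list only GDG bases defined at least 'xxxx' days ago,  */
/* where xxxx is the number of days elapsed since JDP2230 or      */
/* OZ97150 was installed (6000 here is a placeholder).            */
LISTCAT CREATE(6000) GDG ALL

/* Step 2: alter the expiration date of a susceptible GDG base    */
/* back to its current value; a successful ALTER rewrites the     */
/* record in the new format. Run this from a pre-HDZ11G0 system.  */
ALTER MY.GDG.BASE TO(2002365)
```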
Note: You cannot alter GDG expiration dates in HDZ11G0, as that support has been removed. You must do the ALTER from a prior level system.
Problem summary
USERS AFFECTED: All releases using GDGs
Problem description:
Use of a GDG under HDZ11G0 may break the GDG catalog record if the GDG base was created prior to installation of Y2K support (Y2K support was added to Catalog by product FMID JDP2230 or APAR OZ97150; the approximate ship date was 5/86).
Recommendation
GDG catalog records can be corrupted if they were created prior to installation of JDP2230 or OZ97150 on the system and those GDGs are accessed by an HDZ11G0 system. Changes are also necessary on lower-level releases to ensure that expiration dates are no longer accepted or modified for GDG bases (HDZ11G0 removed support for expiration dates for GDGs, and this APAR includes toleration support for that change on other releases). Note that failure to install the PTFs on lower-level releases will not result in any catalog corruption; however, the date shown under HDZ11G0 as a GDG's last alter date will be incorrect. This date is not currently used by any catalog function, but is made available for customer use. An incorrect date in this field will not affect catalog operation, but any new user program that extracts and acts on the last alter date of a GDG may make incorrect decisions if lower-level releases without this fix are used to alter the expiration dates of GDGs.
Problem conclusion
For the HDZ11G0 APAR, if a down-level (e.g. pre-Y2K) format GDG base record is encountered when adding or deleting a GDS, the old format cell will be upgraded to the new format as part of the update. This will prevent any breakage of the catalog record. For the other releases, this APAR fix prevents users from altering or setting the expiration of GDG bases, which is incompatible with the change in HDZ11G0. Beginning with HDZ11G0, expiration dates are no longer supported for GDG bases. Without this fix for the releases before HDZ11G0, users can alter the expiration date of a GDG, and it will incorrectly show up as the last alteration date if the GDG base is listed under HDZ11G0. If you attempt to alter the expiration date of a GDG base under HDZ11G0, it will fail with MSGIDC3009I RC60 RSN30. After installing this fix on releases prior to HDZ11G0, an attempt to alter the expiration date of a GDG base will fail for the same reason.
Glossary
A ABARS. Aggregate backup and recovery support. ABR. Aggregate backup and recovery record. access method services. A multifunction service program that manages VSAM and non-VSAM data sets, as well as integrated catalog facility (ICF) catalogs. Access method services provides the following functions:
- Defines and allocates space for data sets and catalogs
- Converts indexed-sequential data sets to key-sequenced data sets
- Modifies data set attributes in the catalog
- Reorganizes data sets
- Facilitates data portability among operating systems
- Creates backup copies of data sets
- Assists in making inaccessible data sets accessible
- Lists the records of data sets and catalogs
- Defines and builds alternate indexes
- Converts CVOLs to ICF catalogs
accompany data set. In aggregate backup and recovery processing, a data set that is physically transported from the backup site to the recovery site instead of being copied to the aggregate data tape. It is cataloged during recovery. accompany list. An optional list in the selection data set that identifies the accompany data sets. ACDS. Active control data set. ACS. Automatic class selection. active control data set (ACDS). A VSAM linear data set that contains an SCDS that has been activated to control the storage management policy for the installation. When activating an SCDS, you determine which ACDS will hold the active configuration (if you have defined more than one ACDS). The ACDS is shared by each system that is using the same SMS configuration to manage storage. active data. Data that is frequently accessed by users and that resides on level 0 volumes. activity log. In DFSMShsm, a SYSOUT or data set on disk used to record activity and errors that occurred during DFSMShsm processing. AG. Aggregate group. aggregate backup. The process of copying the data sets and control information of a user-defined group of data sets so that they may be recovered later as an entity by an aggregate recovery process. aggregate data sets. In aggregate backup and recovery processing, data sets that have been defined in an aggregate group as being related.
aggregate group. A Storage Management Subsystem construct that defines control information and identifies the data sets to be backed up by a specific aggregate backup. aggregate recovery. The process of recovering a user-defined group of data sets that were backed up by aggregate backup. ATL. Automated tape library. audit. A DFSMShsm process that detects discrepancies between data set information in the VTOCs, the computing system catalog, the MCDS, BCDS, and OCDS. authorized user. In DFSMShsm, the person or persons who are authorized through the DFSMShsm AUTH command to issue DFSMShsm system programmer, storage administrator, and operator commands. automated tape library. A device consisting of robotic components, cartridge storage frames, tape subsystems, and controlling hardware and software, together with the set of volumes which reside in the library and may be mounted on the library tape drives. automatic backup. In DFSMShsm, the process of automatically copying eligible data sets from DFSMShsm-managed volumes or migration volumes to backup volumes during a specified backup cycle. automatic class selection (ACS) routine. A procedural set of ACS language statements. Based on a set of input variables, the ACS language statements generate the name of a predefined SMS class, or a list of names of predefined storage groups, for a data set. automatic class selection (ACS). A mechanism for assigning SMS classes and storage groups.
automatic dump. In DFSMShsm, the process of using DFSMSdss to automatically do a full volume dump of all allocated space on DFSMShsm-managed volumes to designated tape dump volumes. automatic interval migration. In DFSMShsm, automatic migration that occurs periodically when a threshold level of occupancy is reached or exceeded on a DFSMShsm-managed volume during a specified time interval. Data sets are moved from the volume, largest eligible data set first, until the low threshold of occupancy is reached. automatic primary space management. In DFSMShsm, the process of automatically deleting expired data sets, deleting temporary data sets, releasing unused overallocated space, and migrating data sets from DFSMShsm-managed volumes. automatic secondary space management. In DFSMShsm, the process of automatically deleting expired migrated data sets from the migration volumes, deleting expired records from the migration control data set, and migrating eligible data sets from level 1 volumes to level 2 volumes. automatic space management. In DFSMShsm, includes automatic volume space management, automatic secondary space management, and automatic recall. automatic volume space management. In DFSMShsm, includes automatic primary space management and automatic interval migration. availability management. In DFSMShsm, the process of ensuring that a current version (backup copy) of the installation's data sets resides on tape or disk.
B backup control data set (BCDS). A VSAM, key-sequenced data set that contains information about backup versions of data sets, backup volumes, dump volumes, and volumes under control of the backup and dump functions of DFSMShsm. backup copy. In DFSMShsm, a copy of a data set that is kept for reference in case the original data set is destroyed. backup cycle. In DFSMShsm, a period of days for which a pattern is used to specify the days in the cycle on which automatic backup is scheduled to take place. backup frequency. In DFSMShsm, the number of days that must elapse since the last backup version of a data set was made until a changed data set is again eligible for backup. backup version. Synonym for backup copy. backup volume. A volume managed by DFSMShsm to which backup versions of data sets are written. backup. In DFSMShsm, the process of copying a data set residing on a level 0 volume, a level 1 volume, or a volume not managed by DFSMShsm to a backup volume. base configuration. The part of an SMS configuration that contains general storage management attributes, such as the default management class, default unit, and default device geometry. It also identifies the systems or system groups that an SMS configuration manages.
base sysplex. A base (or basic) sysplex is the set of one or more MVS systems that is given a cross-system coupling facility (XCF) name and in which the authorized programs can then use XCF coupling services. A base sysplex does not include a coupling facility. basic catalog structure (BCS). The name of the catalog structure in the integrated catalog facility environment. BCDS. Backup control data set. BCS. Basic catalog structure. C CDS. Control data set. CF. Coupling facility. COMMDS. Communications data set. communications data set (COMMDS). The primary means of communications among systems governed by a single SMS configuration. The COMMDS is a VSAM linear data set that contains the name of the ACDS and current utilization statistics for each system-managed volume, which helps balance space among systems running SMS. compaction. In DFSMShsm, a method of compressing and encoding data that is migrated or backed up. compress. To reduce the amount of storage required for a given data set by having the system replace identical words or phrases with a shorter token associated with the word or phrase.
compressed format. A particular type of extended-format data set specified with the COMPACTION parameter of data class. VSAM can compress individual records in a compressed-format data set. SAM can compress individual blocks in a compressed-format data set. concurrent copy. A function to increase the accessibility of data by enabling you to make a consistent backup or copy of data concurrent with the usual application program processing. construct. One of the following: data class, storage class, management class, storage group, aggregate group, base configuration. control data set. (1) In DFSMShsm, one of three data sets (BCDS, MCDS, and OCDS) that contain records used in DFSMShsm processing. coupling facility (CF). The hardware that provides high-speed caching, list processing, and locking functions in a Parallel Sysplex. D data class. A collection of allocation and space attributes, defined by the storage administrator, that are used to create a data set. Data Facility Sort (DFSORT). An IBM licensed program that is a high-speed data processing utility. DFSORT provides an efficient and flexible way to handle sorting, merging, and copy operations, as well as providing versatile data manipulation at the record, field, and bit level.
Data Facility Storage Management Subsystem (DFSMS). An operating environment that helps automate and centralize the management of storage. To manage storage, SMS provides the storage administrator with control over data class, storage class, management class, storage group, and automatic class selection routine definitions. device category. A storage device classification used by SMS. The device categories are as follows: SMS-managed disk, SMS-managed tape, non-SMS-managed disk, non-SMS-managed tape. DFSMS. Data Facility Storage Management Subsystem. DFSMSdfp. A DFSMS functional component or base element of z/OS, that provides functions for storage management, data management, program management, device management, and distributed data management. DFSMSdss. A DFSMS functional component or base element of z/OS, used to copy, move, dump, and restore data sets or volumes. DFSMShsm. A DFSMS functional component or base element of z/OS, used for backing up and recovering data, and managing space on volumes in the storage hierarchy. DFSMShsm-managed volume. (1) A primary storage volume, which is defined to DFSMShsm but which does not belong to a storage group. (2) A volume in a storage group, which is using DFSMShsm automatic dump, migration, or backup services.
DFSMShsm-owned volume. A storage volume on which DFSMShsm stores backup versions, dump copies, or migrated data sets. DFSMSrmm. A DFSMS functional component or base element of z/OS, that manages removable media. disaster backup. A means of protecting a computing system complex against data loss in the event of a disaster. disaster recovery. A procedure for copying and storing an installation's essential business data in a secure location, and for recovering that data in the event of a catastrophic problem. dummy storage group. A type of storage group that contains the serial numbers of volumes no longer connected to a system. Dummy storage groups allow existing JCL to function without having to be changed. dump class. A set of characteristics that describes how volume dumps are managed by DFSMShsm. duplexing. The process of writing two sets of identical records in order to create a second copy of data. E EA. Extended addressability. esoteric unit name. A name used to define a group of devices having similar hardware characteristics, such as TAPE or SYSDA.
expiration. The process by which data sets or objects are identified for deletion because their expiration date or retention period has passed. On disk, data sets and objects are deleted. On tape, when all data sets have reached their expiration date, the tape volume is available for reuse. extended addressability. The ability to create and access a VSAM data set that is greater than 4 GB in size. Extended addressability data sets must be allocated with DSNTYPE=EXT and EXTENDED ADDRESSABILITY=Y. extended format. The format of a data set that has a data set name type of EXTENDED. The data set is structured logically the same as a data set that is not in extended format but the physical format is different. extent reduction. In DFSMShsm, the releasing of unused space, reducing the number of extents, and compressing partitioned data sets. F filtering. The process of selecting data sets based on specified criteria. These criteria consist of fully or partially-qualified data set names or of certain data set characteristics. FSR. Functional statistics record. functional statistics record (FSR). A record that is created each time a DFSMShsm function is processed. It contains a log of system activity and is written to the system management facilities (SMF) data set. G GB. Gigabyte.
GDG. Generation data group. GDS. Generation data set. generation data group (GDG). A collection of data sets with the same base name, such as PAYROLL, that are kept in chronological order. Each data set is called a generation data set (GDS). generic unit name. A name assigned to a class of devices with the same geometry (such as 3390). global resource serialization (GRS). A component of z/OS used for serializing use of system resources and for converting hardware reserves on disk volumes to data set enqueues. global scratch pool. A group of empty tapes that do not have unique serial numbers and are not known individually to DFSMShsm. The tapes are not associated with a specific device. GRS. Global resource serialization. H hierarchical file system (HFS) data set. A data set that contains a POSIX-compliant file system, which is a collection of files and directories organized in a hierarchical structure, that can be accessed using z/OS UNIX System Services. HMT. HSM Monitor/Tuner. HSM complex (HSMplex). One or more z/OS images running DFSMShsm that share a common set of control data sets (MCDS, BCDS, OCDS, and journal).
I inactive data. Copies of active or low-activity data that reside on DFSMShsm-owned dump and incremental backup volumes. incremental backup. In DFSMShsm, the process of copying a data set that has been opened for other than read-only access since the last backup version was created, and that has met the backup frequency criteria. inline backup. The process of copying a specific data set to a migration level 1 volume from a batch environment. This process allows you to back up data sets in the middle of a job. in-place conversion. The process of bringing a volume and the data sets it contains under the control of SMS without data movement, using DFSMSdss. Interactive Storage Management Facility (ISMF). The interactive interface of DFSMS that allows users and storage administrators access to the storage management functions. interval migration. In DFSMShsm, automatic migration that occurs when a threshold level of occupancy is reached or exceeded on a DFSMShsm-managed volume, during a specified time interval. Data sets are moved from the volume, largest eligible data set first, until the low threshold of occupancy is reached. journal data set. In DFSMShsm, a sequential data set used by DFSMShsm for recovery of the MCDS, BCDS, and OCDS. The journal contains a duplicate of each record in the control data sets that has changed since the MCDS, BCDS, and OCDS were last backed up.
K KB. Kilobyte; 1024 bytes. level 0 volume. A volume that contains data sets directly accessible by the user. The volume may be either DFSMShsm-managed or non-DFSMShsm-managed. level 1 volume. A volume owned by DFSMShsm containing data sets migrated from a level 0 volume. level 2 volume. A volume under control of DFSMShsm containing data sets that migrated from a level 0 volume, from a level 1 volume, or from a volume not managed by DFSMShsm. M management class. A named collection of management attributes describing the retention, backup, and class transition characteristics for a group of objects in an object storage hierarchy. manual tape library. Installation-defined set of tape drives defined as a logical unit together with the set of system-managed volumes which can be mounted on the drives. The IBM implementation includes one or more 3490 subsystems, each connected by a Library Attachment Facility to a processor running the Library Manager application, and a set of volumes, defined by the installation as part of the library, which resides in shelf storage located near the 3490 subsystems. MB. Megabyte; 1,048,576 bytes. MCB. BCDS data set record. MCC. Backup version record. MCD. MCDS data set record.
MCDS. Migration control data set. MCT. Backup volume record. MCV. Primary and migration volume record. MEDIA2. Enhanced Capacity Cartridge System Tape. MEDIA3. High Performance Cartridge Tape. MEDIA4. Extended High Performance Cartridge Tape. migration control data set (MCDS). In DFSMShsm, a VSAM key-sequenced data set that contains records, control records, user records, records for data sets that have migrated, and records for volumes under migration control of DFSMShsm. migration level 1. DFSMShsm-owned disk volumes that contain data sets migrated from primary storage volumes. The data can be compressed. migration level 2. DFSMShsm-owned tape or disk volumes that contain data sets migrated from primary storage volumes or from migration level 1 volumes. The data can be compressed. migration. The process of moving unused data to lower cost storage in order to make space for high-availability data. If you wish to use the data set, it must be recalled. ML1. Migration level 1. ML2. Migration level 2. MTL. Manual tape library.
N NaviQuest. A component of DFSMSdfp for implementing, verifying, and maintaining your SMS environment in batch mode. It provides batch testing and reporting capabilities that can be used to automatically create test cases in bulk, run many other storage management tasks in batch mode, and use supplied ACS code fragments as models when creating your own ACS routines. non-DFSMShsm-managed volume. A volume not defined to DFSMShsm containing data sets that are directly accessible to users. O OAM. Object access method. object access method (OAM). An access method that provides storage, retrieval, and storage hierarchy management for objects and provides storage and retrieval management for tape volumes contained in system-managed libraries. object. A named byte stream having no specific format or record orientation. OCDS. Offline control data set.
offline control data set (OCDS). In DFSMShsm, a VSAM, key-sequenced data set that contains information about tape backup volumes and tape migration level 2 volumes. P parallel sysplex. A sysplex with one or more coupling facilities, and defined by the COUPLExx members of SYS1.PARMLIB as being a parallel sysplex. partitioned data set (PDS). A data set on direct access storage that is divided into partitions, called members, each of which can contain a program, part of a program, or data. partitioned data set extended (PDSE). A system-managed data set that contains an indexed directory and members that are similar to the directory and members of partitioned data sets. A PDSE can be used instead of a partitioned data set. PDS. Partitioned data set. PDSE. Partitioned data set extended. pool storage group. A type of storage group that contains system-managed disk volumes. Pool storage groups allow groups of volumes to be managed as a single entity. primary space allocation. Amount of space requested by a user for a data set when it is created. primary storage. A disk volume available to users for data allocation. The volumes in primary storage are called primary volumes. R RACF. Resource Access Control Facility.
recall. The process of moving a migrated data set from a level 1 or level 2 volume to a DFSMShsm-managed volume or to a volume not managed by DFSMShsm. record-level sharing (RLS). An extension to VSAM that provides direct shared access to a VSAM data set from multiple systems using cross-system locking.
recovery. The process of rebuilding data after it has been damaged or destroyed, often by using a backup copy of the data or by reapplying transactions recorded in a log. relative track address (TTR). Relative track and record address on a direct-access device, where TT represents two bytes specifying the track relative to the beginning of the data set, and R is one byte specifying the record on that track. Resource Access Control Facility (RACF). An IBM licensed program that provides access control by identifying users to the system; verifying users of the system; authorizing access to protected resources; logging detected, unauthorized attempts to enter the system; and logging detected accesses to protected resources. RACF is included in z/OS Security Server and is also available as a separate program for the MVS and VM environments. Resource Measurement Facility (RMF). An IBM licensed program or optional element of z/OS, that measures selected areas of system activity and presents the data collected in the format of printed reports, system management facilities (SMF) records, or display reports. Use RMF to evaluate system performance and identify reasons for performance problems. restore. In DFSMShsm, the process of invoking DFSMSdss to perform the program's recover function. In general, it is to return to an original value or image, for example, to restore data in main storage from auxiliary storage. RLS. Record-level sharing. RMF. Resource Measurement Facility.
S SCDS. Source control data set. SDSP. Small data set packing. secondary space allocation. Amount of additional space requested by the user for a data set when primary space is full. service-level agreement. (1) An agreement between the storage administration group and a user group defining what service-levels the former will provide to ensure that users receive the space, availability, performance, and security they need. (2) An agreement between the storage administration group and operations defining what service-level operations will provide to ensure that storage management jobs required by the storage administration group are completed. shelf location. A single space on a shelf for storage of removable media. shelf. A place for storing removable media, such as tape and optical volumes, when they are not being written to or read. small data set packing (SDSP). In DFSMShsm, the process used to migrate data sets that contain equal to or less than a specified amount of actual data. The data sets are written as one or more records into a VSAM data set on a migration level 1 volume. small-data-set-packing data set. In DFSMShsm, a VSAM key-sequenced data set allocated on a migration level 1 volume and containing small data sets that have migrated. SMF. System management facilities.
SMS complex. A collection of systems or system groups that share a common configuration. All systems in an SMS complex share a common active control data set (ACDS) and a communications data set (COMMDS). The systems or system groups that share the configuration are defined to SMS in the SMS base configuration. SMS configuration. A configuration base, Storage Management Subsystem class, group, library, and drive definitions, and ACS routines that the Storage Management Subsystem uses to manage storage. SMS control data set. A VSAM linear data set containing configurational, operational, or communications information that guides the execution of the Storage Management Subsystem. SMS. Storage Management Subsystem. source control data set (SCDS). A VSAM linear data set containing an SMS configuration. The SMS configuration in an SCDS can be changed and validated using ISMF. space management. In DFSMShsm, the process of managing aged data sets on DFSMShsm-managed and migration volumes. The three types of space management are: migration, deletion, and retirement. specific scratch pool. A group of empty tapes with unique serial numbers that are known to DFSMShsm as a result of being defined to DFSMShsm with the ADDVOL command. spill storage group. An SMS storage group used to satisfy allocations which do not fit into the primary storage group.
storage administrator. A person in the data processing center who is responsible for defining, implementing, and maintaining storage management policies. storage class. A collection of storage attributes that identify performance goals and availability requirements, defined by the storage administrator, used to select a device that can meet those goals and requirements. storage control. The component in a storage subsystem that handles interaction between processor channel and storage devices, runs channel commands, and controls storage devices. storage group. A collection of storage volumes and attributes, defined by the storage administrator. The collections can be a group of disk volumes, or a group of disk, optical, or tape volumes treated as a single object storage hierarchy. storage hierarchy. An arrangement of storage devices with different speeds and capacities. The levels of the storage hierarchy include main storage (memory, disk cache), primary storage (disk containing uncompressed data), migration level 1 (disk containing data in a space-saving format), and migration level 2 (tape cartridges containing data in a space-saving format). storage location. A location physically separate from the removable media library where volumes are stored for disaster recovery, backup, and vital records management.
Storage Management Subsystem (SMS). A DFSMS facility used to automate and centralize the management of storage. Using SMS, a storage administrator describes data allocation characteristics, performance and availability goals, backup and retention requirements, and storage requirements to the system through data class, storage class, management class, storage group, and ACS routine definitions. storage management. The activities of data set allocation, placement, monitoring, migration, backup, recall, recovery, and deletion. These can be done either manually or by using automated processes. The Storage Management Subsystem automates these processes for you, while optimizing storage resources. striping. A software implementation of a disk array that distributes a data set across multiple volumes to improve performance. sysplex. A set of MVS or z/OS systems communicating and cooperating with each other through certain multi-system hardware components and software services to process customer workloads. system management facilities (SMF). A component of z/OS that collects input/output (I/O) statistics, provided at the data set and storage class levels, which helps you monitor the performance of the direct access storage subsystem. system-managed data set. A data set that has been assigned a storage class. system-managed storage. An approach to storage management in which the system determines data placement and an automatic data manager handles data backup, movement, space, and security.
system-managed tape library. A collection of tape volumes and tape devices, defined in the tape configuration database. A system-managed tape library can be automated or manual. system-managed volume. A disk, optical, or tape volume that belongs to a storage group. T tape configuration database (TCDB). One or more volume catalogs used to maintain records of system-managed tape libraries and tape volumes. Tape Library Dataserver. A hardware device that maintains the tape inventory associated with a set of tape drives. An automated tape library dataserver also manages the mounting, removal, and storage of tapes. tape library. A set of equipment and facilities that support an installation's tape environment. This can include tape storage racks, a set of tape drives, and a set of related tape volumes mounted on those drives. tape mount management. The methodology used to optimize tape subsystem operation and use, consisting of hardware and software facilities used to manage tape data efficiently. tape storage group. A type of storage group that contains system-managed private tape volumes. The tape storage group definition specifies the system-managed tape libraries that can contain tape volumes.
tape subsystem. A magnetic tape subsystem consisting of a controller and devices, which allows for the storage of user data on tape cartridges. Examples of tape subsystems include the IBM 3490 and 3490E Magnetic Tape Subsystems.

TB. Terabyte.

TTOC. Tape table of contents record.

TTR. Relative track address.

U

unit affinity. Requests that the system allocate different data sets residing on different removable volumes to the same device during execution of the step, to reduce the total number of tape drives required to execute the step. Explicit unit affinity is specified by coding the UNIT=AFF JCL keyword on a DD statement. Implicit unit affinity exists when a DD statement requests more volumes than devices.

use attribute. (1) The attribute assigned to a disk volume that controls when the volume can be used to allocate new data sets; use attributes are public, private, and storage. (2) For system-managed tape volumes, use attributes are scratch and private.

V

virtual storage access method (VSAM). An access method for direct or sequential processing of fixed-length and variable-length records on direct access devices. The records in a VSAM data set or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the data set or file (entry sequence), or by relative record number.
vital records. A data set or volume maintained for meeting an externally imposed retention requirement, such as a legal requirement.

volume status. In the Storage Management Subsystem, indicates whether the volume is fully available for system management:

Initial indicates that the volume is not ready for system management because it contains data sets that are ineligible for system management.

Converted indicates that all of the data sets on a volume have an associated storage class and are cataloged in an integrated catalog facility catalog.

Non-system-managed indicates that the volume does not contain any system-managed data sets and has not been initialized as system-managed.

volume. The storage space on disk, tape, or optical devices, which is identified by a volume label.

volume pool. In DFSMShsm, a set of related primary volumes. When a data set is recalled, if the original volume that it was on is in a defined volume pool, the data set can be recalled to one of the volumes in the pool.

VTOC. Volume table of contents.
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see How to get IBM Redbooks on page 193.
DFSMS Release 10 Technical Update, SG24-6120
Hierarchical File System Usage Guide, SG24-5482
DFSMSrmm Primer, SG24-5983
VSAM Demystified, SG24-6105
Other resources
These publications are also relevant as further information sources:
z/OS V1R1.0-V1R3.0 DFSMS DFM Guide and Reference, SC26-7395
z/OS V1R1.0-V1R3.0 DFSMS Using the Volume Mount Analyzer, SC26-7413
z/OS V1R1.0-V1R3.0 DFSMSdfp Checkpoint/Restart, SC26-7401
z/OS V1R3.0 DFSMS Access Method Services for Catalogs, SC26-7394
z/OS V1R3.0 DFSMS Installation Exits, SC26-7396
z/OS V1R3.0 DFSMS Introduction, SC26-7397
z/OS V1R3.0 DFSMS Macro Instructions for Data Sets, SC26-7408
z/OS V1R3.0 DFSMS Migration, GC26-7398
z/OS V1R3.0 DFSMS Implementing System Managed Storage, SC26-7407
z/OS V1R3.0 DFSMS Managing Catalogs, SC26-7409
z/OS V1R3.0 DFSMS Using Data Sets, SC26-7410
z/OS V1R3.0 DFSMS Using Magnetic Tapes, SC26-7412
z/OS V1R3.0 DFSMS Using the Interactive Storage Management Facility, SC26-7411
z/OS V1R3.0 DFSMSdfp Advanced Services, SC26-7400
z/OS V1R3.0 DFSMSdfp Diagnosis Guide, GY27-7617
z/OS V1R3.0 DFSMSdfp Diagnosis Reference, GY27-7618
z/OS V1R3.0 DFSMSdfp Storage Administration Reference, SC26-7402
z/OS V1R3.0 DFSMSdfp Utilities, SC26-7414
z/OS V1R3.0 DFSMSrmm Application Programming Interface, SC26-7403
z/OS V1R3.0 DFSMSrmm Diagnosis Guide, GY27-7619
z/OS V1R3.0 DFSMSrmm Guide and Reference, SC26-7404
z/OS V1R3.0 DFSMSrmm Implementation and Customization Guide, SC26-7405
z/OS V1R3.0 DFSMSrmm Reporting, SC26-7406
z/OS V1R3.0 DFSMShsm Storage Administration Reference, SC35-0422
z/OS V1R3.0 DFSMS Advanced Copy Services, SC35-0428
z/OS V1R3.0 DFSMS OAM Planning, Installation, and Storage Administration Guide for Object Support, SC35-0426
z/OS V1R3.0 DFSMS OAM Planning, Installation, and Storage Administration Guide for Tape Libraries, SC35-0427
z/OS V1R3.0 DFSMS Object Access Method Application Programmer's Reference, SC35-0425
z/OS V1R3.0 DFSMSdss Storage Administration Guide, SC35-0423
z/OS V1R3.0 DFSMSdss Storage Administration Reference, SC35-0424
z/OS V1R3.0 DFSMShsm Data Recovery Scenarios, GC35-0419
z/OS V1R3.0 DFSMShsm Implementation and Customization Guide, SC35-0418
z/OS V1R3.0 DFSMShsm Managing Your Own Data, SC35-0420
z/OS V1R3.0 DFSMShsm Storage Administration Guide, SC35-0421
z/OS V1R3.0 MVS System Management Facilities, SA22-7630
You can also download additional materials (code samples or diskette/CD-ROM images) from the IBM Redbooks Web site.
Index
Numerics
3390-9 overview 41

A
ACS See automatic class selection advanced copy services 157 APARs II12431 172 II12896 174 OW43316 159 OW44442 173 OW44917 123 OW45271 122 OW45557 123 OW45674 115 OW46143 122 OW46387 123 OW47639 126 OW47651 140 OW47947 135 OW47967 140 OW47993 138 OW48234 116 OW48865 121, 128 OW48921 130 OW49148 118 OW49379 42 OW49491 63 OW49833 155 OW49863 133 OW50405 118 OW50528 57 OW53521 16 OW53804 45 OW53834 176 OW54128 57 API See Application Programming Interface Application Programming Interface DFSMSrmm 127 OAM 67 XRC 158 automatic class selection 62, 128

C
candidate volumes 11 catalog management 47 catalog address space 4 coupling facility caching 4 data set name validity checking 47 data set naming rules 3 defining catalogs 47 dumping CAS 48 expiration date 4 KEYRANGE 4 large real storage 4 MODIFY command 4 performance statistics 49 record size 3 REUSE attribute 4 system managed buffering 4 catalog search interface 43 common recall queue 5, 70 ARCRDEXT exit 95 ARCRPEXT exit 92 auditing the CRQ 106 CF loss 103 CF structure sizing 75 CRQ structure definition 76 diagnostic data collection 109 enabling 73 error recovery 101 full queue 92 HSMplex 70 JES3 environments 96 processing recall requests 104 processing requests 91 rebuilding the CRQ 107 selecting requests 94 usage considerations 72, 90 concurrent copy 114 CONFIGHFS command 51 configure MXRC 160 XRC 160
coupling XRC 161 CRQ See common recall queue 5 CSI See catalog search interface CXRC See coupled XRC
D
data set separation 3, 30 profile 30 usage considerations 33 DATACLASS 126 DFSMSdfp 39 caching CIs greater than 4K 59 catalog management 47 catalog performance statistics 49 CONFIGHFS command 51 data set name validity checking 47 dumping CAS 48 EXCP considerations 42 expiration date 57 extended alias support 45 GDG alter date 43 GDG base processing 43 GDG expiration date 44 HFS data sets 52 KEYRANGE parameter 53 large real storage 56 large volume support 40 OAM multiple object backup 61 OAM operator commands 64 OAM volume recovery 67 object access method record level sharing RECORDSIZE parameter 47 retention period 57 RLS lock structures 60 striped data sets 57 system managed buffering z/OS V1R3 enhancements 3 DFSMSdfp enhancements 3 DFSMSdss dump conditioning 115 full volume restore 116 HFS logical copy 114 large volume support 117 DFSMSdss keyword
ALLDATA 113 COPY 113, 115 COPY TRACK 117 COPYVOLID 115 DELETE 114 DUMP 115 DUMPCONDITIONING 117 DYNALLOC 113 PURGE 117 RESTORE 116 TOLERATE(ENQFAILURE) 113 DFSMShsm ARCRDEXT exit 95 ARCRPEXT exit 92 AUDIT FIX 107 auditing the CRQ 106 common recall queue 5 See common recall queue keyrange data sets 109 QUERY IMAGE command 73 DFSMSrmm API 127 bin management 133 control data set 126 extended addressability 126 extended format 126 control data set growth 137 conversion process 123 conversion tools 123 dialog multi-volume alert 122 DSTORE by location 138 EDGGTOOL 146 EDGRMMxx 128 EDGUX100 exit 127 error diagnostic messages 121 extended extract file 140 generating report JCL 155 Home location 130 journal data set growth 137 management class 128 object access method 121 RACF FACILITY class profile 140 reassign processing 134 report definition 147 report generator 140 report type support 144 reporting tool 146 special character support 120 storage group 128
storage location management 138 storage locations 129 TSO/E help 121 DO See system managed buffering dump conditioning 5, 115 DVC See dynamic volume count dynamic volume count 2, 8 advantages 16 candidate volumes 12 changing DVC value 15 enabling 10 extend storage group 20 LISTCAT 13 maintenance 16 space constraint relief 17 supported data set types 9 TIOT size 14 value considerations 14 volume selection 10
XDELPAIR 160 XEND 160 XQUERY 163 XRECOVER 160, 162 XSET 160, 164 XSTART 158, 161 XSUSPEND 160 configure CXRC 161 control data set 160 coupled XRC 161 displaying CXRC session 163 journal data set 160 master data set 161 multiple XRC 159 overview 158 state data set 160
F
FlashCopy 117
E
EA See extended addressability EDGHSKP utility 141 EDGRMAIN exec 142 EDGRRPTE exec 140 ESS See IBM TotalStorage ESS EXCP 42 exits EDGUX100 127-128 SLSUX06 133 expiration date 57 extend storage group 2, 17 defining 18 DFSMShsm 22 DVC processing 20 star configuration 21 usage considerations 20 extended addressability 126 extended alias support 45 extended format 42, 126 extended remote copy 158 command XADDPAIR 160, 164 XCOUPLE 161
G
GDG base processing 3, 43 alter date 43 expiration date 44 guaranteed space 20, 26
H
HFS See hierarchical file system hierarchical file system 51, 112 logical copy 112 HSMplex 70
I
IBM TotalStorage ESS 40 ICETOOL 141 IDCAMS 126
K
KEYRANGE parameter 53
L
large real storage 56 large volume support 40, 117 coexistence support 41
design considerations 40 EXCP considerations 42 implementation 42 large volumes 3 LOCATION operand 131 LOCDEF command 131 LOCATION 132 MANAGEMENTTYPE 131 MEDIANAME 132 TYPE 131
O
OAM API changes 67 CBROAMxx keyword FIRSTBACKUPGROUP 65 SECONDBACKUPGROUP 63, 65 STORAGEGROUP 65 CBROAMxx statement SETOSMC 63, 65 DB2 tables ODBK2LOC 62 ODBK2SEC 62 display commands 64 ISMF 63 ISMF line operator RECOVER 68 management class 62 autobackup 62 modify commands 65 multiple object backup migration 62 multiple object backup scenarios 65 multiple object backup support 61 object backup storage group 63 object storage group 61 operator commands 64 SETOSMC STORAGEGROUP command FIRSTBACKUPGROUP 65 SECONDBACKUPGROUP 65 tape data set name 62 volume recovery process 67 OAM space management cycle 66 object access method See OAM OBSG See OAM OSMC See OAM space management cycle overflow storage group 2, 23 defining 24 guaranteed space 26 usage considerations 25
M
Magstar A60 155 maintenance information 171 management class 128 manual tape library 122 media manager 57 message routing 27 messages ADR410E 113-114 ADR412E 113 ADR439E 113 ADR808I 116 ADR814E 117 ADR960E 113 CBR0230D 63 CBR0231A 63-64 CBR1075I 65 CBR1100I 64 CBR1130I 64 CBR1140I 64 CBR9370I 64 CBR9820D 68 CBR9824I 68 CBR9863I 68 EDG2236I 121 EDG4021I 121 EDGT062 123 EDGT063 123 IEC507D 58 IGW322I 61 IGWFAMS 52 MODIFY CATALOG command 49 MTL See manual tape library multiple object backup support 61 multiple XRC 159 MXRC
P
parallel access volumes 40 PARMLIB members BPXPRMxx 112 CBROAMxx 61, 63, 65-67
COMMNDxx 49 EDGRMMxx 128-129, 131-132, 136 IGDSMSxx 59 partitioned data set extended 160 PAV See parallel access volumes PDSE See partitioned data set extended POOL operand 131 primary volumes 11
Q
QUICKCOPY 164
R
RACF STGADMIN.EDG.HOUSEKEEP.RPTEXT 140 RACK operand 131 record level sharing 59 lock structures 60 rebuild or alter structures 61 records greater than 4K 59 Redbooks Web site 193 Contact us xiii report generator installation library 141 product library 141 report definition 147 user library 141 retention period 57 REUSE parameter 4, 57 RLS See record level sharing RMM ADDVOLUME subcommand 131 RMM CHANGEVOLUME subcommand 131 RMM OPTION command PREACS 128 REUSEBIN 133, 136 REUSEBIN(STARTMOVE) 134 SMSACS 128 TPRACF(N) 120
EDGDOCS 123 EDGGRTD 144 EDGGTOOL 146 EDGJDHKP 139 EDGJRPT 140 EDGJWHKP 139 EDGUX100 127 SCR See space constraint relief SDM See system data mover SMB See system managed buffering SMF 29 SMS 2 SMS enhancements 2 Snapshot 117 space constraint relief 8, 10, 17 spill storage group 23 storage group 128 storage locations built-in 129 DSTORE by location 138 home locations use 132 installation-defined 129-130 SVC99 8 symbolic alias support 3 system data mover 158 ANTAS000 158 ANTAS001 158 API 158 system managed buffering 54 AIX support 56 create optimization 55 create recovery 55 direct optimize 55 direct weighted 55 retry capability 55 system managed tape 122 system symbolics 45
T
task input/output table 11 content 11 DVC value 12 size 14 TIOT See task input/output table
S
SAMPLIB members CBRSMR13 62 EDGCMM01 123 EDGDOC 123
V
vital record specification 128 volume selection 10 VRS See vital record specification VSAM 9 KEYRANGE parameter 53 large real storage 56 media manager 57
W
WLM See workload manager workload manager 42
X
XRC See extended remote copy
Z
z/OS UNIX 112
Back cover
BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, customers, and partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.