Data Analyzer

Administrator Guide

Informatica PowerCenter
(Version 9.6.1)
Informatica PowerCenter Data Analyzer Administrator Guide

Version 9.6.1

June 2014

Copyright 1998-2014 Informatica Corporation. All rights reserved.

This software and documentation contain proprietary information of Informatica Corporation and are provided under a license agreement containing restrictions on use and disclosure and are also
protected by copyright law. Reverse engineering of the software is prohibited. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying,
recording or otherwise) without prior consent of Informatica Corporation. This Software may be protected by U.S. and/or international Patents and other Patents Pending.

Use, duplication, or disclosure of the Software by the U.S. Government is subject to the restrictions set forth in the applicable software license agreement and as provided in DFARS 227.7202-1(a) and
227.7202-3(a) (1995), DFARS 252.227-7013(c)(1)(ii) (OCT 1988), FAR 12.212(a) (1995), FAR 52.227-19, or FAR 52.227-14 (ALT III), as applicable.

The information in this product or documentation is subject to change without notice. If you find any problems in this product or documentation, please report them to us in writing.

Informatica, Informatica Platform, Informatica Data Services, PowerCenter, PowerCenterRT, PowerCenter Connect, PowerCenter Data Analyzer, PowerExchange, PowerMart, Metadata Manager,
Informatica Data Quality, Informatica Data Explorer, Informatica B2B Data Transformation, Informatica B2B Data Exchange, Informatica On Demand, Informatica Identity Resolution, Informatica
Application Information Lifecycle Management, Informatica Complex Event Processing, Ultra Messaging and Informatica Master Data Management are trademarks or registered trademarks of
Informatica Corporation in the United States and in jurisdictions throughout the world. All other company and product names may be trade names or trademarks of their respective owners.

Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights reserved. Copyright Sun
Microsystems. All rights reserved. Copyright RSA Security Inc. All Rights Reserved. Copyright Ordinal Technology Corp. All rights reserved. Copyright Aandacht c.v. All rights reserved.
Copyright Genivia, Inc. All rights reserved. Copyright Isomorphic Software. All rights reserved. Copyright Meta Integration Technology, Inc. All rights reserved. Copyright Intalio. All rights
reserved. Copyright Oracle. All rights reserved. Copyright Adobe Systems Incorporated. All rights reserved. Copyright DataArt, Inc. All rights reserved. Copyright ComponentSource. All
rights reserved. Copyright Microsoft Corporation. All rights reserved. Copyright Rogue Wave Software, Inc. All rights reserved. Copyright Teradata Corporation. All rights reserved. Copyright
Yahoo! Inc. All rights reserved. Copyright Glyph & Cog, LLC. All rights reserved. Copyright Thinkmap, Inc. All rights reserved. Copyright Clearpace Software Limited. All rights reserved.
Copyright Information Builders, Inc. All rights reserved. Copyright OSS Nokalva, Inc. All rights reserved. Copyright Edifecs, Inc. All rights reserved. Copyright Cleo Communications, Inc. All
rights reserved. Copyright International Organization for Standardization 1986. All rights reserved. Copyright ej-technologies GmbH. All rights reserved. Copyright Jaspersoft Corporation. All
rights reserved. Copyright International Business Machines Corporation. All rights reserved. Copyright yWorks GmbH. All rights reserved. Copyright Lucent Technologies. All rights reserved.
Copyright (c) University of Toronto. All rights reserved. Copyright Daniel Veillard. All rights reserved. Copyright Unicode, Inc. Copyright IBM Corp. All rights reserved. Copyright MicroQuill
Software Publishing, Inc. All rights reserved. Copyright PassMark Software Pty Ltd. All rights reserved. Copyright LogiXML, Inc. All rights reserved. Copyright 2003-2010 Lorenzi Davide, All
rights reserved. Copyright Red Hat, Inc. All rights reserved. Copyright The Board of Trustees of the Leland Stanford Junior University. All rights reserved. Copyright EMC Corporation. All
rights reserved. Copyright Flexera Software. All rights reserved. Copyright Jinfonet Software. All rights reserved. Copyright Apple Inc. All rights reserved. Copyright Telerik Inc. All rights
reserved. Copyright BEA Systems. All rights reserved. Copyright PDFlib GmbH. All rights reserved. Copyright Orientation in Objects GmbH. All rights reserved. Copyright Tanuki
Software, Ltd. All rights reserved. Copyright Ricebridge. All rights reserved. Copyright Sencha, Inc. All rights reserved.

This product includes software developed by the Apache Software Foundation (http://www.apache.org/), and/or other software which is licensed under various versions of the Apache License (the
"License"). You may obtain a copy of these Licenses at http://www.apache.org/licenses/. Unless required by applicable law or agreed to in writing, software distributed under these Licenses is distributed
on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the Licenses for the specific language governing permissions and limitations
under the Licenses.

This product includes software which was developed by Mozilla (http://www.mozilla.org/), software copyright The JBoss Group, LLC, all rights reserved; software copyright 1999-2006 by Bruno
Lowagie and Paulo Soares and other software which is licensed under various versions of the GNU Lesser General Public License Agreement, which may be found at http://www.gnu.org/licenses/lgpl.html.
The materials are provided free of charge by Informatica, "as-is", without warranty of any kind, either express or implied, including but not limited to the implied warranties of
merchantability and fitness for a particular purpose.

The product includes ACE(TM) and TAO(TM) software copyrighted by Douglas C. Schmidt and his research group at Washington University, University of California, Irvine, and Vanderbilt
University, Copyright (c) 1993-2006, all rights reserved.

This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (copyright The OpenSSL Project. All Rights Reserved) and redistribution of this software is subject to
terms available at http://www.openssl.org and http://www.openssl.org/source/license.html.

This product includes Curl software which is Copyright 1996-2013, Daniel Stenberg, <daniel@haxx.se>. All Rights Reserved. Permissions and limitations regarding this software are subject to terms
available at http://curl.haxx.se/docs/copyright.html. Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright
notice and this permission notice appear in all copies.

The product includes software copyright 2001-2005 (c) MetaStuff, Ltd. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at
http://www.dom4j.org/license.html.

The product includes software copyright 2004-2007, The Dojo Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at
http://dojotoolkit.org/license.

This product includes ICU software which is copyright International Business Machines Corporation and others. All rights reserved. Permissions and limitations regarding this software are subject to
terms available at http://source.icu-project.org/repos/icu/icu/trunk/license.html.

This product includes software copyright 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at
http://www.gnu.org/software/kawa/Software-License.html.

This product includes OSSP UUID software which is Copyright 2002 Ralf S. Engelschall, Copyright 2002 The OSSP Project, Copyright 2002 Cable & Wireless Deutschland. Permissions and
limitations regarding this software are subject to terms available at http://www.opensource.org/licenses/mit-license.php.

This product includes software developed by Boost (http://www.boost.org/) or under the Boost software license. Permissions and limitations regarding this software are subject to terms available at
http://www.boost.org/LICENSE_1_0.txt.

This product includes software copyright 1997-2007 University of Cambridge. Permissions and limitations regarding this software are subject to terms available at http://www.pcre.org/license.txt.

This product includes software copyright 2007 The Eclipse Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at
http://www.eclipse.org/org/documents/epl-v10.php and at http://www.eclipse.org/org/documents/edl-v10.php.

This product includes software licensed under the terms at http://www.tcl.tk/software/tcltk/license.html; http://www.bosrup.com/web/overlib/?License; http://www.stlport.org/doc/license.html;
http://asm.ow2.org/license.html; http://www.cryptix.org/LICENSE.TXT; http://hsqldb.org/web/hsqlLicense.html; http://httpunit.sourceforge.net/doc/license.html; http://jung.sourceforge.net/license.txt;
http://www.gzip.org/zlib/zlib_license.html; http://www.openldap.org/software/release/license.html; http://www.libssh2.org; http://slf4j.org/license.html;
http://www.sente.ch/software/OpenSourceLicense.html; http://fusesource.com/downloads/license-agreements/fuse-message-broker-v-5-3-license-agreement; http://antlr.org/license.html;
http://aopalliance.sourceforge.net/; http://www.bouncycastle.org/licence.html; http://www.jgraph.com/jgraphdownload.html; http://www.jcraft.com/jsch/LICENSE.txt;
http://jotm.objectweb.org/bsd_license.html; http://www.w3.org/Consortium/Legal/2002/copyright-software-20021231; http://www.slf4j.org/license.html;
http://nanoxml.sourceforge.net/orig/copyright.html; http://www.json.org/license.html; http://forge.ow2.org/projects/javaservice/; http://www.postgresql.org/about/licence.html;
http://www.sqlite.org/copyright.html; http://www.tcl.tk/software/tcltk/license.html; http://www.jaxen.org/faq.html; http://www.jdom.org/docs/faq.html; http://www.slf4j.org/license.html;
http://www.iodbc.org/dataspace/iodbc/wiki/iODBC/License; http://www.keplerproject.org/md5/license.html; http://www.toedter.com/en/jcalendar/license.html;
http://www.edankert.com/bounce/index.html; http://www.net-snmp.org/about/license.html; http://www.openmdx.org/#FAQ; http://www.php.net/license/3_01.txt; http://srp.stanford.edu/license.txt;
http://www.schneier.com/blowfish.html; http://www.jmock.org/license.html; http://xsom.java.net; http://benalman.com/about/license/;
https://github.com/CreateJS/EaselJS/blob/master/src/easeljs/display/Bitmap.js; http://www.h2database.com/html/license.html#summary; http://jsoncpp.sourceforge.net/LICENSE;
http://jdbc.postgresql.org/license.html; http://protobuf.googlecode.com/svn/trunk/src/google/protobuf/descriptor.proto; https://github.com/rantav/hector/blob/master/LICENSE;
http://web.mit.edu/Kerberos/krb5-current/doc/mitK5license.html; and http://jibx.sourceforge.net/jibx-license.html.

This product includes software licensed under the Academic Free License (http://www.opensource.org/licenses/afl-3.0.php), the Common Development and Distribution License
(http://www.opensource.org/licenses/cddl1.php), the Common Public License (http://www.opensource.org/licenses/cpl1.0.php), the Sun Binary Code License Agreement Supplemental License Terms,
the BSD License (http://www.opensource.org/licenses/bsd-license.php), the new BSD License (http://opensource.org/licenses/BSD-3-Clause), the MIT License
(http://www.opensource.org/licenses/mit-license.php), the Artistic License (http://www.opensource.org/licenses/artistic-license-1.0) and the Initial Developer's Public License Version 1.0
(http://www.firebirdsql.org/en/initial-developer-s-public-license-version-1-0/).

This product includes software copyright 2003-2006 Joe Walnes, 2006-2007 XStream Committers. All rights reserved. Permissions and limitations regarding this software are subject to terms
available at http://xstream.codehaus.org/license.html. This product includes software developed by the Indiana University Extreme! Lab. For further information please visit
http://www.extreme.indiana.edu/.

This product includes software Copyright (c) 2013 Frank Balluffi and Markus Moeller. All rights reserved. Permissions and limitations regarding this software are subject to terms of the MIT license.

This Software is protected by U.S. Patent Numbers 5,794,246; 6,014,670; 6,016,501; 6,029,178; 6,032,158; 6,035,307; 6,044,374; 6,092,086; 6,208,990; 6,339,775; 6,640,226; 6,789,096;
6,823,373; 6,850,947; 6,895,471; 7,117,215; 7,162,643; 7,243,110; 7,254,590; 7,281,001; 7,421,458; 7,496,588; 7,523,121; 7,584,422; 7,676,516; 7,720,842; 7,721,270; 7,774,791; 8,065,266;
8,150,803; 8,166,048; 8,166,071; 8,200,622; 8,224,873; 8,271,477; 8,327,419; 8,386,435; 8,392,460; 8,453,159; 8,458,230; and RE44,478, International Patents and other Patents Pending.

DISCLAIMER: Informatica Corporation provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of
noninfringement, merchantability, or use for a particular purpose. Informatica Corporation does not warrant that this software or documentation is error free. The information provided in this software
or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation is subject to change at any time without notice.

NOTICES

This Informatica product (the "Software") includes certain drivers (the "DataDirect Drivers") from DataDirect Technologies, an operating company of Progress Software Corporation ("DataDirect")
which are subject to the following terms and conditions:

1. THE DATADIRECT DRIVERS ARE PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.

2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES OF DAMAGES IN
ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH OF WARRANTY,
NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.

Part Number: DA-ADG-96100-0001


Table of Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Informatica Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Informatica My Support Portal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Informatica Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Informatica Web Site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Informatica How-To Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Informatica Knowledge Base . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Informatica Support YouTube Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Informatica Marketplace . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Informatica Velocity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Informatica Global Customer Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii

Chapter 1: Data Analyzer Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Data Analyzer Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Main Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Supporting Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Data Analyzer Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Using Data Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Configuring Session Timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Localization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Data Analyzer Display Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Language Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Setting the Default Expression for Metrics and Attributes . . . . . . . . . . . . . . . . . . . . . . . . . 5
Date and Number Formats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Exporting Reports with Japanese Fonts to PDF Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Chapter 2: Managing Users and Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Restricting User Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Setting Permissions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Authentication Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
User Synchronization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Managing Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Editing a Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Managing Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Editing a User Account . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Adding Data Restrictions to a User Account . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Chapter 3: Setting Permissions and Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . 11


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Setting Access Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Restricting Data Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Using Global Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Understanding Data Restrictions for Multiple Groups . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Restricting Data Access by Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Restricting Data Access by User or Group . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

Chapter 4: Managing Time-Based Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Creating a Time-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Managing Time-Based Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Editing a Time-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Editing Access Permissions for a Time-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Viewing or Clearing a Time-Based Schedule History . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Starting a Time-Based Schedule Immediately . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Stopping a Time-Based Schedule Immediately . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Disabling a Time-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Removing a Time-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Managing Reports in a Time-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Attaching Reports to a Time-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Viewing Attached Reports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Viewing Task Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Viewing or Clearing a Task History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Removing a Report from a Time-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Using the Calendar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Navigating the Calendar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Defining a Business Day . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Defining a Holiday . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Monitoring a Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Stopping a Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Chapter 5: Managing Event-Based Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Updating Reports When a PowerCenter Session Completes . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Step 1. Create an Event-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Step 2. Use the PowerCenter Integration Utility in PowerCenter . . . . . . . . . . . . . . . . . . . 31
Managing Event-Based Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Editing an Event-Based Schedule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Editing Access Permissions for an Event-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . 32
Viewing or Clearing an Event-Based Schedule History . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Starting an Event-Based Schedule Immediately . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Stopping an Event-Based Schedule Immediately . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Disabling an Event-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Removing an Event-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Managing Reports in an Event-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Viewing Attached Reports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Viewing Task Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Viewing or Clearing a Report History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Removing a Report from an Event-Based Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Attaching Imported Cached Reports to an Event-Based Schedule . . . . . . . . . . . . . . . . . . 35

Chapter 6: Exporting Objects from the Repository . . . . . . . . . . . . . . . . . . . . . . . . . . 37


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Exporting a Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Exporting Metric Definitions Only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Exporting Metrics and Associated Schema Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Exporting a Time Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Exporting a Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Exporting a Global Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Exporting a Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Exporting a Security Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Exporting a User Security Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Exporting a Group Security Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Exporting a Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45

Chapter 7: Importing Objects to the Repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
XML Validation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Object Permissions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Importing Objects from a Previous Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Importing a Schema . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Importing a Time Dimension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Importing a Report . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Importing Reports from Public or Personal Folders . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Steps for Importing a Report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Importing a Global Variable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Importing a Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Importing a Security Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Importing a User Security Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Importing a Group Security Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Importing a Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

Chapter 8: Using the Import Export Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Running the Import Export Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Error Messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Importing a Large Number of Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Using SSL with the Import Export Utility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Chapter 9: Managing System Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Managing Color Schemes and Logos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Using a Predefined Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Editing a Predefined Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Creating a Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Selecting a Default Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Assigning a Color Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Managing Logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Viewing the User Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Configuring and Viewing the Activity Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Configuring the System Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
Configuring the JDBC Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Managing LDAP Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Managing Delivery Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Configuring the Mail Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Configuring the External URL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Configuring SMS/Text Messaging and Mobile Carriers . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Specifying Contact Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Viewing System Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Setting Rules for Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Setting Query Rules at the System Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Setting Query Rules at the Group Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Setting Query Rules at the User Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Setting Query Rules at the Report Level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Configuring Report Table Scroll Bars. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Configuring Report Headers and Footers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Configuring Departments and Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Configuring Display Settings for Groups and Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

Chapter 10: Working with Data Analyzer Administrative Reports . . . . . . . . . . . . . . . 89


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Administrators Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Data Analyzer Administrative Reports Folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Setting Up the Data Analyzer Administrative Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Step 1. Set Up a Data Source for the Data Analyzer Repository . . . . . . . . . . . . . . . . . . . . 90
Step 2. Import the Data Analyzer Administrative Reports . . . . . . . . . . . . . . . . . . . . . . . . 91
Step 3. Add the Data Source to a Data Connector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Step 4. Add the Administrative Reports to Schedules . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Using the Data Analyzer Administrative Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Chapter 11: Performance Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Oracle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

IBM DB2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Microsoft SQL Server 2000 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Operating System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
HP-UX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Solaris . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
AIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Application Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Servlet/JSP Container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
JSP Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
EJB Container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Data Analyzer Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Aggregation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Ranked Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Datatype of Table Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Date Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
JavaScript on the Analyze Tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Interactive Charts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Number of Charts in a Report. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Scheduler and User-Based Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Frequency of Schedule Runs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Row Limit for SQL Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
Indicators in Dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
Purging of Activity Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Recommendations for Dashboard Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Chart Legends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Connection Pool Size for the Data Source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
Server Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

Chapter 12: Customizing the Data Analyzer Interface . . . . . . . . . . . . . . . . . . . . . . . 113


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Using the Data Analyzer URL API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Using the Data Analyzer API Single Sign-On . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Setting Up Color Schemes and Logos. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Setting the UI Configuration Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
Default UI Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
UI Configuration Parameter in Data Analyzer URL . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Configuration Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

Appendix A: Hexadecimal Color Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117


HTML Hexadecimal Color Codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Appendix B: Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125


Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

Modifying the Configuration Files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Properties in DataAnalyzer.properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Properties in infa-cache-service.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Configuring the Lock Acquisition Timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
Configuring the Eviction Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Properties in web.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

Preface
The Data Analyzer Administrator Guide provides information on administering Data Analyzer, including
managing user access and report schedules and exporting and importing objects in a Data Analyzer repository.
It also discusses performance tuning and server clusters.

The Data Analyzer Administrator Guide is written for system administrators. It assumes that you have
knowledge of relational databases, SQL, and web technology.

Informatica Resources
Informatica My Support Portal
As an Informatica customer, you can access the Informatica My Support Portal at
http://mysupport.informatica.com.
The site contains product information, user group information, newsletters, access to the Informatica customer
support case management system (ATLAS), the Informatica How-To Library, the Informatica Knowledge Base,
Informatica Product Documentation, and access to the Informatica user community.

Informatica Documentation
The Informatica Documentation team takes every effort to create accurate, usable documentation. If you have
questions, comments, or ideas about this documentation, contact the Informatica Documentation team
through email at infa_documentation@informatica.com. We will use your feedback to improve our
documentation. Let us know if we can contact you regarding your comments.
The Documentation team updates documentation as needed. To get the latest documentation for your product,
navigate to Product Documentation from http://mysupport.informatica.com.

Informatica Web Site


You can access the Informatica corporate web site at http://www.informatica.com. The site contains
information about Informatica, its background, upcoming events, and sales offices. You will also find product
and partner information. The services area of the site includes important information about technical support,
training and education, and implementation services.

Informatica How-To Library
As an Informatica customer, you can access the Informatica How-To Library at
http://mysupport.informatica.com. The How-To Library is a collection of resources to help you learn more
about Informatica products and features. It includes articles and interactive demonstrations that provide
solutions to common problems, compare features and behaviors, and guide you through performing specific
real-world tasks.

Informatica Knowledge Base


As an Informatica customer, you can access the Informatica Knowledge Base at
http://mysupport.informatica.com. Use the Knowledge Base to search for documented solutions to known
technical issues about Informatica products. You can also find answers to frequently asked questions, technical
white papers, and technical tips. If you have questions, comments, or ideas about the Knowledge Base, contact
the Informatica Knowledge Base team through email at KB_Feedback@informatica.com.

Informatica Support YouTube Channel


You can access the Informatica Support YouTube channel at http://www.youtube.com/user/INFASupport. The
Informatica Support YouTube channel includes videos about solutions that guide you through performing
specific tasks. If you have questions, comments, or ideas about the Informatica Support YouTube channel,
contact the Support YouTube team through email at supportvideos@informatica.com or send a tweet to
@INFASupport.

Informatica Marketplace
The Informatica Marketplace is a forum where developers and partners can share solutions that augment,
extend, or enhance data integration implementations. By leveraging any of the hundreds of solutions available
on the Marketplace, you can improve your productivity and speed up time to implementation on your projects.
You can access Informatica Marketplace at http://www.informaticamarketplace.com.

Informatica Velocity
You can access Informatica Velocity at http://mysupport.informatica.com. Developed from the real-world
experience of hundreds of data management projects, Informatica Velocity represents the collective knowledge
of our consultants who have worked with organizations from around the world to plan, develop, deploy, and
maintain successful data management solutions. If you have questions, comments, or ideas about Informatica
Velocity, contact Informatica Professional Services at ips@informatica.com.

Informatica Global Customer Support


You can contact a Customer Support Center by telephone or through the Online Support.
Online Support requires a user name and password. You can request a user name and password at
http://mysupport.informatica.com.
The telephone numbers for Informatica Global Customer Support are available from the Informatica web site at
http://www.informatica.com/us/services-and-training/support-services/global-support-centers/.

CHAPTER 1

Data Analyzer Overview


This chapter includes the following topics:
Introduction, 1
Data Analyzer Framework, 2
Data Analyzer Basics, 4
Security, 5
Localization, 5

Introduction
PowerCenter Data Analyzer provides a framework for performing business analytics on corporate data. With
Data Analyzer, you can extract, filter, format, and analyze corporate information from data stored in a data
warehouse, operational data store, or other data storage models. Data Analyzer uses the familiar web browser
interface to make it easy for a user to view and analyze business information at any level.
You can use Data Analyzer to run PowerCenter Repository Reports, Metadata Manager Reports, Data Profiling
Reports, or create and run custom reports. You can create a Reporting Service in the PowerCenter
Administration Console. The Reporting Service is the application service that runs the Data Analyzer
Reporting For application
A
Data Analyzer works with the following data models:
Analytic schema. Based on a dimensional data warehouse in a relational database. Data Analyzer uses the
characteristics of a dimensional data warehouse model to assist you to analyze data. When you set up an
analytic schema in Data Analyzer, you define the fact and dimension tables and the metrics and attributes in
the star schema.
Operational schema. Based on an operational data store in a relational database. When you set up an
operational schema in Data Analyzer, you define the tables in the schema. Identify which tables contain the
metrics and attributes for the schema, and define the relationship among the tables. Use the operational
schema to analyze data in relational database tables that do not conform to the dimensional data model.
Hierarchical schema. Based on data in an XML document. A hierarchical schema contains attributes and
metrics from an XML document on a web server or an XML document returned by a web service operation.
Each schema must contain all the metrics and attributes that you want to analyze together.
Data Analyzer supports the Java Message Service (JMS) protocol to access real-time messages as data sources.
To display real-time data in a Data Analyzer real-time report, you create a Data Analyzer real-time message
stream with the details of the metrics and attributes to include in the report. Data Analyzer updates the report
when it reads JMS messages.
Data Analyzer stores metadata for schemas, metrics and attributes, queries, reports, user profiles, and other
objects in the Data Analyzer repository. When you create a Reporting Service, you need to specify the Data
Analyzer repository details. The Reporting Service configures the Data Analyzer repository with the metadata
corresponding to the selected data source. When you run reports for any data source, Data Analyzer uses the
metadata in the Data Analyzer repository to determine the location from which to retrieve the data for the
report, and how to present the report.
The Data Analyzer repository must reside in a relational database. The data for an analytic or operational
schema must also reside in a relational database. The data for a hierarchical schema resides in a web service or
XML document.
Note: If you create a Reporting Service for another reporting source, you need to import the metadata for the
data source manually.

Data Analyzer Framework


Data Analyzer works within a web-based framework that requires the interaction of several components. It
includes components and services that may already exist in an enterprise infrastructure, such as an enterprise
data warehouse and authentication server.

Main Components
Data Analyzer is built on JBoss Application Server and uses related technology and application programming
interfaces (API) to accomplish its tasks. JBoss Application Server is a Java 2 Enterprise Edition (J2EE)-
compliant application server. Data Analyzer uses the application server to handle requests from the web
browser. It generates the requested contents and uses the application server to transmit the content back to the
web browser. Data Analyzer stores metadata in a repository database to keep track of the processes and objects it
needs to handle web browser requests.

Application Server
JBoss Application Server helps Data Analyzer manage its processes efficiently. The Java
application server provides services such as database access and server load balancing to Data Analyzer. The Java
application server also provides an environment that uses Java technology to manage application, network, and
system resources.

Web Server
Data Analyzer uses an HTTP server to fetch and transmit Data Analyzer pages to web browsers. If the
application server contains a web server, you do not need to install a separate web server. You need a separate
web server to set up a proxy server to enable external users to access Data Analyzer through a firewall.

Data Analyzer
Data Analyzer is a Java application that provides a web-based platform for the development and delivery of
business analytics. In Data Analyzer, you can read data from a data source, create reports, and view the results
on the web browser.
Data Analyzer uses the following Java technology:
Java Servlet API
JavaServer Pages (JSP)
Enterprise Java Beans (EJB)
Java Database Connectivity (JDBC)
Java Message Service (JMS)
Java Naming and Directory Interface (JNDI)

Data Analyzer Repository


The repository stores the metadata necessary for Data Analyzer to track the objects and processes that it requires
to effectively handle user requests. The metadata includes information on schemas, user profiles,
personalization, reports and report delivery, and other objects and processes. You can create reports based on
the schemas without accessing the data warehouse directly.
Data Analyzer connects to the repository with JDBC drivers.

Data Source
For analytic and operational schemas, Data Analyzer reads data from a relational database. It connects to the
database through JDBC drivers.
For hierarchical schemas, Data Analyzer reads data from an XML document. The XML document can reside on
a web server, or it can be generated by a web service operation. Data Analyzer connects to the XML document
or web service through an HTTP connection.
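Since a hierarchical schema's source is plain XML, the retrieval side can be pictured with the standard JDK DOM API. This is an illustrative sketch only: the element names are invented, and Data Analyzer's actual rowset and column mappings are configured in the product rather than hand-coded.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class XmlSourceDemo {

    // Parse an XML document and return the text of the first element
    // with the given tag name, the way a metric value might be read
    // out of a hierarchical data source.
    public static String firstValue(String xml, String tag) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName(tag).item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Hypothetical document; real sources come from a web server or web service.
        String xml = "<sales><region>West</region><revenue>1250.75</revenue></sales>";
        System.out.println(firstValue(xml, "revenue")); // prints 1250.75
    }
}
```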

Supporting Components
Data Analyzer has other components to support its processes, including an API that allows you to integrate
Data Analyzer features into other web applications and security adapters that allow you to use an LDAP server
for authentication. Although you can use Data Analyzer without these components, you can extend the power
of Data Analyzer when you set it up to work with these additional components.

Authentication Server
You use PowerCenter authentication methods to authenticate users logging in to Data Analyzer. You launch
Data Analyzer from the Administration Console, PowerCenter Client tools, or Metadata Manager, or by
accessing the Data Analyzer URL from a browser. For more information about authentication methods, see the
PowerCenter Administrator Guide.
When you use the Administration Console to create native users and groups, the Service Manager stores the
users and groups in the domain configuration database and notifies the Reporting Service. The Reporting
Service copies the users and groups to the Data Analyzer repository.
Note: You cannot create or delete users and groups, or change user passwords in Data Analyzer. You can only
modify the user settings such as the user name or the contact details in Data Analyzer.

PowerCenter
You create and enable a Reporting Service on the Domain page of the PowerCenter Administration Console.
When you enable the Reporting Service, the Administration Console starts Data Analyzer.
You log in to Data Analyzer to create and run reports on data in a relational database or to run PowerCenter
Repository Reports, Data Analyzer Data Profiling Reports, or Metadata Manager Reports.

Mail Server
Data Analyzer uses Simple Mail Transfer Protocol (SMTP) to provide access to the enterprise mail server and
facilitate the following services:
Send report alert notification and SMS/Text Messages to alert devices.
Forward reports through email.



Web Portal
The Data Analyzer API enables you to integrate Data Analyzer into other web applications and portals. The
API specifies the functions available to developers to access Data Analyzer dashboards, reports, and other
objects and display them in any web application or portal.

Data Analyzer Basics


This section lists the steps you need to complete to access analytic data in Data Analyzer.
To preserve application resources, Data Analyzer terminates a user session if it does not have any activity for a
length of time. You can set the session timeout period according to the Data Analyzer usage in your
organization.

Using Data Analyzer


When you use Data Analyzer to analyze business information, complete the following steps:
1. Define the data source. Set up the connectivity information so that Data Analyzer can access the data
warehouse, web service, or XML documents. You can configure Data Analyzer to access more than one
data source. You need system administrator privileges to define data sources.
2. Import the table definitions from JDBC data sources or set up rowsets and columns from XML sources.
Import table definitions from the data warehouse or operational data store into the Data Analyzer
repository. Define the rowsets and columns for web services or XML data sources. You need system
administrator privileges to import table definitions or define rowsets.
3. Define an analytic, operational, or hierarchical schema. Define the fact and dimension tables for an
analytic schema, set up the tables for an operational schema, or define a hierarchical schema. Define the
metrics and attributes in the schemas. If you set up an analytic schema, set up a time dimension. You need
system administrator privileges to define the schemas in Data Analyzer.
4. Set up the data connector. Create a data connector to identify which data source to use when you run
reports. You need system administrator privileges to set up data connectors.
5. Create and run reports. Create reports based on the metrics and attributes you define. Create analytic
workflows to analyze the data. Set up schedules to run reports regularly.
6. Create indicators and alerts for the report. Set up alerts on reports based on events and threshold values
that you define.
7. Create dashboards. Create a dashboard and customize it to display the indicators and links to reports and
shared documents to which you want immediate access.
Data Analyzer has many more features you can use to analyze and get the most useful information from your
corporate data. This book presents the tasks that a system administrator typically performs in Data Analyzer.

Configuring Session Timeout


By default, if you log in to Data Analyzer, but you do not use it for 30 minutes, the session terminates or times
out. The system administrator can change the session timeout period by editing the value of the session-timeout
property in the web.xml file. For more information, see Configuration Files on page 125.
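As a sketch, the session-timeout entry follows the standard Java servlet deployment descriptor form, with the value in minutes; the surrounding contents of Data Analyzer's actual web.xml file are not shown here:

```xml
<session-config>
    <!-- Idle time in minutes before Data Analyzer ends the session -->
    <session-timeout>30</session-timeout>
</session-config>
```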



Security
Data Analyzer provides a secure environment in which to perform business analytics. It supports standard
security protocols like Secure Sockets Layer (SSL). It also provides system administrators a way to control access
to Data Analyzer tasks and data based on privileges and roles granted to users and groups.
You manage users and groups in the PowerCenter Administration Console. Data Analyzer uses the
PowerCenter authentication methods to authenticate users set up in the PowerCenter domain configuration
database. For more information about the PowerCenter authentication methods, see the PowerCenter
Administrator Guide.
Data Analyzer depends on database servers to provide their own security and data integrity facilities. Data
Analyzer reads data from the data warehouse and stores data in a repository to support its different components.
Security and data integrity in the database servers that contain the data warehouse and the repository are
essential for a reliable system environment.

Localization
Data Analyzer uses UTF-8 character encoding to display data in different languages. UTF-8 character encoding
is an ASCII-compatible multi-byte Unicode and Universal Character Set (UCS) encoding method.
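The practical consequence of this design can be seen with the standard Java charset API: ASCII text keeps its single-byte representation, while other characters expand to multiple bytes. The sample strings below are arbitrary examples.

```java
import java.nio.charset.StandardCharsets;

public class Utf8Demo {

    // Number of bytes the string occupies when encoded as UTF-8.
    public static int utf8Length(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        System.out.println(utf8Length("Data"));    // 4 bytes: ASCII characters are unchanged
        System.out.println(utf8Length("データ"));   // 9 bytes: 3 bytes per Japanese character
    }
}
```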

Data Analyzer Display Language


You can change the display language for the Data Analyzer client regardless of the locale of the Data Analyzer
server. You change the display language for Data Analyzer in the Manage Accounts tab in Data Analyzer. You
must change the display language for the Data Analyzer login page separately in the browser. For more
information, see the Data Analyzer User Guide.

Language Settings
When you store data in multiple languages in a database, enable UTF-8 character encoding in the Data
Analyzer repository and data warehouse. For more information about how to enable UTF-8 character encoding,
see the database documentation.
A language setting is a superset of another language setting when it contains all characters encoded in the other
language. To avoid data errors, you must ensure that the language settings are correct when you complete the
following tasks in Data Analyzer:
Back up and restore Data Analyzer repositories. The repositories you back up and restore must have the
same language type and locale setting or the repository you restore must be a superset of the repository you
back up. For example, if the repository you back up contains Japanese data, the repository you restore to
must also support Japanese.
Import and export repository objects. When you import an exported repository object, the repositories
must have the same language type and locale setting or the destination repository must be a superset of the
source repository.
Import table definitions from the data source. When you import data warehouse table definitions into the
Data Analyzer repository, the language type and locale settings of the data warehouse and the Data Analyzer
repository must be the same or the repository must be a superset of the data source.

Setting the Default Expression for Metrics and Attributes


When you set the default expression for metrics and attributes, Data Analyzer uses the same expression
regardless of the locale of the Data Analyzer server. If you want to use a different default expression for a

Security 5
different locale, you must change the default expression in the metric or attribute property. For more
information, see the Data Analyzer User Guide.

Date and Number Formats


The language setting for your Data Analyzer user account determines the numeric, date, and time formats Data
Analyzer displays. When Data Analyzer performs date calculations in calculated or custom metrics, Data
Analyzer uses the format for the repository database language setting.
When you enter a date in an SQL expression or define a date value for a global variable, enter the date in the
same format used in the data warehouse.
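The risk described above can be made concrete with java.time: the same date renders differently under different locale conventions, so a date literal in an SQL expression should be built with the warehouse's own fixed pattern. The yyyy-MM-dd pattern below is only an assumed example; substitute the format your data warehouse actually uses.

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class DateLiteralDemo {

    // Hypothetical warehouse date format -- replace with the pattern
    // your data warehouse expects.
    private static final DateTimeFormatter WAREHOUSE =
            DateTimeFormatter.ofPattern("yyyy-MM-dd");

    public static String forWarehouse(LocalDate d) {
        return d.format(WAREHOUSE);
    }

    public static void main(String[] args) {
        LocalDate d = LocalDate.of(2014, 6, 15);
        // Locale-style renderings of the same date disagree with each other...
        System.out.println(d.format(DateTimeFormatter.ofPattern("MM/dd/yyyy"))); // 06/15/2014
        System.out.println(d.format(DateTimeFormatter.ofPattern("dd.MM.yyyy"))); // 15.06.2014
        // ...so a date literal should use the warehouse's fixed format.
        System.out.println(forWarehouse(d)); // 2014-06-15
    }
}
```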

Exporting Reports with Japanese Fonts to PDF Files


If a report contains Japanese fonts and you export the report to a PDF file, you must download the Asian Font
Package from the Adobe Acrobat web site to view the PDF file. Save the Asian Font Package on the machine
where you want to view the PDF file. The Asian Font Package is available from the Adobe web site.



CHAPTER 2

Managing Users and Groups


This chapter includes the following topics:
Overview, 7
Managing Groups, 8
Managing Users, 9

Overview
You create users, groups, and roles in the PowerCenter domain configuration database. Use the Security page of
the PowerCenter Administration Console to create users, groups, and roles for Data Analyzer. For more
information about creating users, groups, and roles, see the PowerCenter Administrator Guide.
To secure information in the repository and data sources, Data Analyzer allows login access only to individuals
with user accounts in Data Analyzer. A user must have an active account to perform tasks and access data in
Data Analyzer. Users can perform different tasks based on their privileges.
You can edit some user and group properties in Data Analyzer.

Restricting User Access


You can limit user access to Data Analyzer to secure information in the repository and data sources. Users in
Data Analyzer need their own accounts to perform tasks and access data. Users can perform different tasks
based on their privileges. You assign privileges to users, groups, and roles on the Security page of the
PowerCenter Administration Console.

Setting Permissions
You can set permissions to determine the tasks that users can perform on a repository object. You set access
permissions in Data Analyzer.

Authentication Methods
The way you manage users and groups depends on the authentication method you are using:
Native. You create and manage users, groups, and roles in the PowerCenter Administration Console.
PowerCenter stores the users, groups, and roles in the domain configuration database. You can modify some
user and group properties in Data Analyzer.
LDAP authentication. You manage the users and groups in the LDAP server but you create and manage the
roles and privileges in the PowerCenter Administration Console.
For more information about authentication methods, see the PowerCenter Administrator Guide.

User Synchronization
You manage users, groups, privileges, and roles on the Security page of the Administration Console. The
Service Manager stores users and groups in the domain configuration database and copies the list of users and
groups to the Data Analyzer repository. The Service Manager periodically synchronizes the list of users and
groups in the repository with the users and groups in the domain configuration database.
Note: If you edit any property of a user other than roles or privileges, the Service Manager does not synchronize
the changes to the Data Analyzer repository. Similarly, if you edit any property of a user in Data Analyzer, the
Service Manager does not synchronize the domain configuration database with the modification.
When you assign privileges and roles to users and groups for the Reporting Service in the Administration
Console or when you assign permissions to users and groups in Data Analyzer, the Service Manager stores the
privilege, role, and permission assignments with the list of users and groups in the Data Analyzer repository.
The Service Manager periodically synchronizes users in the LDAP server with the users in the domain
configuration database. In addition, the Service Manager synchronizes the users in the Data Analyzer repository
with the updated LDAP users in the domain configuration database. For more information, see the
PowerCenter Administrator Guide.

Managing Groups
Groups allow you to organize users according to their roles in the organization. For example, you might
organize users into groups based on their departments or management level. You manage users and groups,
their organization, and which privileges and roles are assigned to them in the PowerCenter Administration
Console. You can restrict data access by group.

Editing a Group
You can see groups with privileges on a Reporting Service when you launch the Data Analyzer instance created
by that Reporting Service. In Data Analyzer, you can edit some group properties such as department, color
schemes, or query governing settings. You cannot add users or roles to the group, or assign a primary group to
users in Data Analyzer.

To edit a group in Data Analyzer:

1. Connect to Data Analyzer from the PowerCenter Administration Console, PowerCenter Client tools,
Metadata Manager, or by accessing the Data Analyzer URL from a browser.
2. Click Administration > Access Management > Groups.
The Groups page appears.
3. Select the group you want to edit and click Edit.
The properties of the group appear.



4. Edit any of the following properties:

Property Description

Department Choose the department for the group.


For more information, see Configuring Departments and Categories
on page 87.

Color Scheme Assignment Assign a color scheme for the group. For more information, see
Managing Color Schemes and Logos on page 72.

Query Governing Query governing settings on the Groups page apply to reports that
users in the group can run. If a user belongs to one or more groups in
the same hierarchy level, Data Analyzer uses the largest query
governing settings from each group. For more information, see Setting
Rules for Queries on page 83.

5. Click OK to return to the Groups page.

Managing Users
Each user must have a user account to access Data Analyzer. To perform Data Analyzer tasks, a user must have
the appropriate privileges for the Reporting Service. You assign privileges to a user, add the user to one or more
groups, and assign roles to the user in the PowerCenter Administration Console.

Editing a User Account


You can see users with privileges on a Reporting Service when you launch the Data Analyzer instance created by
that Reporting Service. You can edit a user account in Data Analyzer to change the color scheme, or modify
other properties of the account. You cannot assign a group to the user or define a primary group for a user in
Data Analyzer.

To edit a user account in Data Analyzer:

1. Connect to Data Analyzer from the PowerCenter Administration Console, PowerCenter Client tools,
Metadata Manager, or by accessing the Data Analyzer URL from a browser.
2. Click Administration > Access Management > Users.
The Users page appears.
3. Enter a search string for the user in the Search field and click Find.
Data Analyzer displays the list of users that match the search string you specify.
4. Click the user record you want to edit.
The properties of the user appear.
5. Edit any of the following properties:

Property Description

First Name First name of the user.

Middle Name Middle name of the user.

Last Name Last name of the user.


If you edit the first name, middle name, or last name of the user, Data Analyzer saves
the modification in the Data Analyzer repository. When the Service Manager
synchronizes with the Data Analyzer repository, it does not update the records in the
domain configuration database. For more information about these properties, see Full
Name for Data Analyzer Users on page 10.

Property Description

Title Describes the function of the user within the organization or within Data Analyzer.
Titles do not affect roles or Data Analyzer privileges.

Email Address Email address of the user. Data Analyzer uses this address as the sender address when the
user emails reports from Data Analyzer, and sends alert notifications to this address. You
cannot edit this information.

Department Department for the user. You can associate the user with a department to organize
users and simplify the process of searching for users. For more information, see
Configuring Departments and Categories on page 87.

Color Scheme Select the color scheme to use when the user logs in to Data Analyzer. If no color
Assignment scheme is selected, Data Analyzer uses the default color scheme when the user logs
in. Color schemes assigned at user level take precedence over color schemes
assigned at group level. Unless users have administrator privileges, they cannot
change the color scheme assigned to them.
For more information, see Managing Color Schemes and Logos on page 72.

Query Governing Specify query governing settings for the user. The query governing settings on the User
page apply to all reports that the user can run. For more information, see Setting Rules
for Queries on page 83.

Note: Users can edit some of the properties of their own accounts in the Manage Account tab.

Full Name for Data Analyzer Users


Data Analyzer displays the full name property in the PowerCenter domain as the following user account
properties:
First name
Middle name
Last name
Data Analyzer determines the full name as first, middle, and last name based on the following rules:
1. If the full name does not contain a comma, the full name has the following syntax:
<first name> [<middle name>] <last name>

2. If the full name contains a comma, the full name has the following syntax:
<last name>, <first name> [<middle name>]

Any full name that contains a comma is converted to use the syntax without a comma:
<first name> [<middle name>] <last name>

3. After the conversion, the full name is separated into first, middle, and last names based on the number of
text strings separated by a space:
If the full name has two text strings, there is no middle name.
If the full name has more than three text strings, any string after the third string is included in the last
name.
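The splitting rules above can be sketched in a few lines. This is an illustration only, not Data Analyzer's actual implementation:

```python
def split_full_name(full_name):
    """Split a full name into first, middle, and last names,
    following the rules described above. Illustrative sketch only."""
    # A name containing a comma, "<last>, <first> [<middle>]", is first
    # converted to the syntax without a comma.
    if "," in full_name:
        last, rest = full_name.split(",", 1)
        full_name = rest.strip() + " " + last.strip()

    parts = full_name.split()
    if len(parts) == 1:
        return parts[0], "", ""
    if len(parts) == 2:
        # Two text strings: there is no middle name.
        return parts[0], "", parts[1]
    # Three or more text strings: any string after the third
    # is included in the last name.
    return parts[0], parts[1], " ".join(parts[2:])

print(split_full_name("Smith, John Paul"))  # ('John', 'Paul', 'Smith')
```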

Adding Data Restrictions to a User Account


You can restrict access to data based on user accounts. To add data restrictions to a user account, click
Administration > Access Management > Users, and then click the Data Restrictions button for the data for
which you want to restrict user access. For more information, see Restricting Data Access on page 14.



CHAPTER 3

Setting Permissions and


Restrictions
This chapter includes the following topics:
Overview, 11
Setting Access Permissions, 11
Restricting Data Access, 14

Overview
You can customize Data Analyzer user access with the following security options:
Access permissions. Restrict user and group access to folders, reports, dashboards, attributes, metrics,
template dimensions, and schedules. Use access permissions to restrict access to a particular folder or object
in the repository.
Data restrictions. Restrict access to data in fact tables and operational schemas using associated attributes.
Use data restrictions to restrict users or groups from accessing specific data when they view reports.
When you create an object in the repository, every user has default Read and Write permission on that object.
By customizing access permissions on an object, you determine which users and groups can Read, Write,
Delete, or Change Access permission on that object.
When you create data restrictions, you determine which users and groups can access particular attribute values.
When a user with a data restriction runs a report, Data Analyzer does not display restricted data associated with
those values.

Setting Access Permissions


Access permissions determine the tasks you can perform for a specific repository object. When you set access
permissions, you determine which users and groups have access to folders and repository objects. You can assign
the following types of access permissions to repository objects:
Read. Allows you to view a folder or object.
Write. Allows you to edit an object. Also allows you to create and edit folders and objects within a folder.

Delete. Allows you to delete a folder or an object from the repository.
Change permission. Allows you to change the access permissions on a folder or object.
By default, Data Analyzer grants Read permission to every user in the repository. Use the General Permissions
area to modify default access permissions for an object.
When you modify the access permissions on a folder, you can override existing access permissions on all objects
in the folder, including subfolders.
Use the following methods to set access permissions:
Inclusive. Permit access to the users and groups that you select. You can also permit additional access
permissions to selected users and groups.
Exclusive. Restrict access from the users and groups that you select. You can completely restrict the selected
users and groups or restrict them to fewer access permissions.
To grant more extensive access to a user or group, use inclusive access permissions. For example, you can grant
the Analysts group inclusive access permissions to delete a report.
To restrict the access of specific users or groups, use exclusive access permissions. For example, you can use
exclusive access permissions to restrict the Vendors group from viewing sensitive reports.
You can use a combination of inclusive, exclusive, and default access permissions to create comprehensive access
permissions for an object. For example, you can select Read as the default access permission for a folder, grant
the Sales group inclusive write permission to edit objects in the folder, and use an exclusive Read permission to
deny an individual in the Sales group access to the folder.
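A minimal model of how these rules combine might look like the following sketch. The principals, permission names, and set semantics here are illustrative assumptions, not the product's implementation:

```python
# Hypothetical model of combining default, inclusive, and exclusive
# access permissions on one repository object.
DEFAULT_PERMISSIONS = {"Read"}  # default access granted to every user

INCLUSIVE = {"Sales": {"Write"}}   # additional permissions granted
EXCLUSIVE = {"hansen": {"Read"}}   # permissions explicitly restricted

def effective_permissions(principal):
    # Start from the default, add inclusive grants, then remove
    # exclusive restrictions, which always take permissions away.
    granted = set(DEFAULT_PERMISSIONS)
    granted |= INCLUSIVE.get(principal, set())
    granted -= EXCLUSIVE.get(principal, set())
    return granted

print(effective_permissions("Sales"))    # default Read plus inclusive Write
print(effective_permissions("hansen"))   # Read excluded: no access
```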
To grant access permissions to users, search for the user name, then set the access permissions for the user you
select.
Setting access permissions for a composite report determines whether the composite report itself is visible but
does not affect the existing security of subreports. Users or groups must also have permissions to view individual
subreports. Therefore, a composite report might contain some subreports that do not display for all users.
Note: Any user with the System Administrator role has access to all Public Folders and to their Personal Folder
in the repository and can override any access permissions you set. If you have reports and shared documents
that you do not want to share, save them to your Personal Folder or your personal dashboard.

To set access permissions:

1. Navigate to a repository object you want to modify.


The following table shows how to navigate to the repository object you want to modify:

To set access permissions on... Click...

Content folder in Public Folders Find > Public Folders > folder name

Content folder in Personal Folder Find > Personal Folder > folder name

Report in Public Folders Find > Public Folders > report name

Report in Personal Folder Find > Personal Folder > report name

Composite Report in Public Folders Find > Public Folders > composite report name

Composite Report in Personal Folder Find > Personal Folder > composite report name

Public Dashboard Find > Public Folders > dashboard name

Personal Dashboard Find > Personal Folder > dashboard name

Metric Folder Administration > Schema Design > Schema Directory > Metrics
folder > metric folder name

Attribute Folder Administration > Schema Design > Schema Directory >
Attributes folder > attribute folder name

Template Dimensions Folder Administration > Schema Design > Schema Directory >
Template Dimensions folder > template dimensions folder name




Metric Administration > Schema Design > Schema Directory > Metrics
Folder > metric folder name > metric name

Attribute Administration > Schema Design > Schema Directory >
Attributes folder > attribute folder name > attribute name

Template Dimension Administration > Schema Design > Schema Directory >
Template Dimensions folder > template dimension folder name >
template dimension name

Time-Based Schedule Administration > Scheduling > Time-Based Schedules > time-
based schedule name

Event-Based Schedule Administration > Scheduling > Event-Based Schedules > event-
based schedule name

Filterset Administration > Schema Directory > Filtersets > filterset name

2. Click the Permissions button for the repository object.


The Access Permissions page appears. The object name appears in quotes.
3. If you are editing access permissions on an item, such as a report or shared document, skip to step 4.
If you are editing access permissions on a folder, you can select Replace Permissions on Subfolders to apply
access permission changes to all subfolders. You can also select Replace Permissions on All Items in Folder
to apply access permission changes to the reports and shared documents in the folder.
4. From the General Permissions area, click No to prevent all repository users from receiving default access
permissions.
Click Yes to allow all users to receive the default access permissions you select.
If you click Yes, set the default access permissions.
5. Click Make a Selection to search for a group or user.
6. Refine the selection by choosing the search criteria for the group or user.
You can select groups or users by criteria such as name or department.
The Query Results field displays groups or users that match the search criteria.
Note: Permissions set on composite reports do not affect permissions on the subreports. Only those
subreports where a user or group has access permissions display in a composite report.
7. Select the group or user in the Query Results field.
8. Select the access permissions you want to include or exclude.
9. Click Include to include the user or group in the access permissions you select.
-or-
Click Exclude to exclude the user or group from the access permissions you select.



Data Analyzer displays a minus sign (-) next to users or groups you exclude. For example, on the Sales
folder, everyone has Read permission unless restricted: red text and a minus sign indicate that the user
Hansen is not permitted to read the Sales folder, while the Corporate Sales group is granted additional
write permission.

10. Click OK to save the access permissions settings.

Restricting Data Access


You can restrict access to data associated with specific attribute values. Create data restrictions to keep sensitive
data from appearing in reports. When you create a data restriction, you specify the users or groups to be
restricted. This allows you to make the data restriction as specific as required.
For example, you can create a data restriction that restricts the Northeast Sales group to sales data for stores in
their region. When users in the Northeast Sales group run reports that include the SALES fact table and Region
attribute, they view sales data for their region only. They cannot see sales data for western or southern regions.
When a report contains restricted data, a Data Restrictions button appears in the report.
You can create data restrictions using one of the following methods:
Create data restrictions by object. Access the fact table or operational schema that contains the metric data
you want to restrict and specify the associated attributes for which to restrict the metric data. You can apply
the data restriction to any user or group in the repository.
Use this method to apply the same data restriction to more than one user or group.
Create data restrictions by user or group. Access the user or group you want to restrict. Select the fact table
or operational schema that contains the metric data you want to restrict and specify the associated attributes
for which to restrict the metric data. You can apply the data restriction to a single fact table or operational
schema or to all related data in the repository.
Use this method to apply multiple data restrictions to the same user or group or to restrict all data associated
with specified attribute values.
If you have multiple data restrictions, you can create a complex expression with nested conditions. By default,
Data Analyzer displays the data restrictions in simple grouping mode. In this mode, Data Analyzer applies the
data restrictions in the order in which they appear in the Created Restrictions task area. If you have multiple
data restrictions, Data Analyzer uses the AND operator to apply all restrictions.
In the advanced grouping mode, use the OR or AND operator to group the data restrictions. For example, the
following condition allows users to view data from every March and from the entire year of 2007:
Month IN ('March') OR Year IN ('2007')



You can also use parentheses to create more complex groups of restrictions. For example, you can group three
data restrictions:
Region NOT IN ('North') AND (Brand IN ('BigShoes') OR Category IN ('Footware'))
In the above example, Data Analyzer allows users to view data which is not included in the North region and
which is in either the Footware category or has the BigShoes brand.

Using Global Variables


You can use global variables when you define data restrictions. When you use a global variable in a data
restriction, Data Analyzer updates the data restriction when you update the global variable value.

Understanding Data Restrictions for Multiple Groups


A restricted user assigned to a restricted group is subject to both individual and group restrictions. Data
Analyzer joins the restrictions with the AND operator. For example, if the user has the restriction
Region IN ('East') and the user's group has the restriction Region IN ('West'), Data Analyzer joins the two
restrictions and returns no data:
Region IN ('East') AND Region IN ('West')
When a user belongs to more than one group, Data Analyzer handles data restrictions differently depending on
the relationship between the two groups.
The following table describes how Data Analyzer handles data restrictions when a user belongs to multiple
groups:
Both a group and its subgroup. Data Analyzer joins the data restrictions with the AND operator. For
example, if Group A has the restriction Region IN ('North') and its Subgroup B has the restriction
Category IN ('Women'), Data Analyzer joins the restrictions with AND:
Region IN ('North') AND Category IN ('Women')
Two groups that belong to the same parent group. Data Analyzer joins the data restrictions with the OR
operator. For example, if Group A has the restriction Region IN ('North') and Group B has the restriction
Category IN ('Women'), Data Analyzer joins the restrictions with OR:
Region IN ('North') OR Category IN ('Women')
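The joining behavior described above can be sketched as follows. The restriction expressions are hypothetical examples; Data Analyzer builds the combined condition internally:

```python
def combine_restrictions(restrictions, operator):
    """Join restriction expressions with AND or OR, parenthesizing each."""
    return (" " + operator + " ").join("(" + r + ")" for r in restrictions)

group_a = "Region IN ('North')"      # restriction from a group
subgroup_b = "Category IN ('Women')"  # restriction from its subgroup

# A user in both a group and its subgroup: restrictions joined with AND.
print(combine_restrictions([group_a, subgroup_b], "AND"))
# A user in two sibling groups under one parent: restrictions joined with OR.
print(combine_restrictions([group_a, subgroup_b], "OR"))
```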

Restricting Data Access by Object


Create data restrictions by object when you want to apply the restriction to more than one user or group or to
create more than one data restriction for the object. You can restrict access to data in the following objects:
Fact tables
Operational schemas
You cannot create data restrictions for hierarchical schemas. Also, you cannot create data restrictions on fact
tables or operational schemas using CLOB attributes.



To create data restrictions by object:

1. Navigate to the object you want to restrict.

To create data restrictions for... Click...

Fact Table Administration > Schema Design > Analytic Schemas > Show
Fact Tables

Operational Schema Administration > Schema Design > Operational Schemas

2. Click the Data Restrictions button of the object you want to restrict.
The Data Restrictions page appears.
3. Click Select a Group/User.
The Select Group or User window appears.
4. To create a data restriction for a group, select Group. To create a data restriction for a user, select User.
If you select Group and the number of groups is less than 30, a list of available groups appears. If the
number of groups is 30 or more, the group search option appears. If you select User and you know the user
name you want to restrict, enter it in the User field.
Or, search for a user or group. Use the asterisk or percent symbols as wildcard characters.
5. Click Find.
6. Select the user or group you want to restrict and click OK.
7. In the Create Restriction task area, select an attribute from the attribute list.
Recently-used attributes appear in the list. To browse or find other attributes, click Select Other Attributes.
The Attribute Selection window appears. Data Analyzer displays the attributes for the object in the
Attribute Selection window. Navigate to the attribute you want and select an attribute. CLOB attributes
are not available for use in data restrictions.
8. From the condition list, select an operator.
9. Enter attribute values.
You can select attribute values from a list, or you can search for specific values and Ctrl-click to select more
than one. If a global variable contains the attribute values you want to use, you can select a global variable.
You can also manually enter attribute values.
10. To view the SQL query for the restriction, click Advanced.
Data Analyzer displays the SQL query for the restriction in advanced mode.
In advanced mode, you can edit the SQL query for a restriction. Data Analyzer displays buttons for adding
numbers and operators to the SQL query for the data restriction. Click within the SQL query, and then
click the buttons to add numbers or operators to the SQL query.
11. Click Add.
The data restriction appears in the Created Restrictions task area.
Use the Basic or Advanced mode, described in steps 7 to 11, to create more restrictions for the same user or
group.
If you create more than one data restriction, you can adjust the order of the restrictions and the
operators to use between restrictions.
12. To adjust the restrictions, click Advanced in the Created Restrictions task area.



In advanced mode, Data Analyzer displays lists for adding left and right parentheses, for adding operators,
and for changing the order of the restrictions. Click the appropriate list to group the restrictions.

13. To remove a data restriction, click the Remove button.


14. When you have completed adding data restrictions for the user or group, click Apply Restrictions.
Applied restrictions appear in the Current Data Restrictions area.
To remove all data restrictions, click Cancel.
15. Click OK to save the changes.

Restricting Data Access by User or Group


Edit a user or group to restrict data when you want to create more than one restriction for the user or group.
Data restrictions limit the data that appears in the reports. When you edit a user or group, you can create data
restrictions for metrics in any fact table or operational schema.
You can restrict data in a single fact table or operational schema for an associated attribute. When the attribute
is associated with other fact tables or operational schemas in the repository, you can restrict all data related to
the attribute values you select.
For example, if the Region attribute is associated with both the Sales fact table and Salary fact table, you can
create a single data restriction to restrict all sales and salary information from Europe.
You cannot create data restrictions for hierarchical schemas. Also, you cannot create data restrictions on fact
tables or operational schemas using CLOB attributes.

To create data restrictions for users or groups:

1. To create data restrictions for users, click Administration > Access Management > Users.
-or-
To create data restrictions for groups, click Administration > Access Management > Groups. Then click
Groups to display all groups.
2. Click the Data Restrictions button of the user or group profile you want to edit.
The Data Restrictions page appears.
3. Select a schema from a list of available schemas.
The page shows a list of fact tables and operational schemas tables. Hierarchical schemas are not available
for use in data restrictions.
To select all schemas, select All Schemas. This applies the data restriction to all data in the repository
associated with the attribute you choose.
4. In the Create Restriction task area, select an attribute from the attribute list.
Recently-used attributes appear in the list. To browse or find an attribute, click Select Other Attributes.
The Attribute Selection window appears. Data Analyzer displays all attribute folders for the object in the
Attribute Selection window. Navigate to the attribute you want and select an attribute. CLOB attributes
are not available for use in data restrictions.
5. From the condition list, select an operator.



6. Enter attribute values.
You can select attribute values from a list, or you can search for specific values and Ctrl-click to select more
than one. If a global variable contains the attribute values you want to use, you can select a global variable.
You can also manually enter attribute values.
7. To view the SQL query for the restriction, click Advanced.
Data Analyzer displays the SQL query for the restriction in advanced mode.
In advanced mode, you can edit the SQL query for a restriction. Data Analyzer displays buttons for adding
numbers and operators to the SQL query for the data restriction. Click within the SQL query, and then
click the buttons to add numbers or operators to the SQL query.
8. Click Add.
The data restriction appears in the Created Restrictions task area.
Use the Basic or Advanced mode, described in steps 3 to 8, to create more restrictions for the same user or
group. If you create more than one data restriction, you can adjust the order of the restrictions and the
operators to use between restrictions.
9. To adjust the restrictions, click Advanced in the Created Restrictions task area.
In advanced mode, the Created Restrictions task area displays lists for adding left and right parentheses,
for adding operators, and for changing the order of the restrictions. Click the appropriate list to group
the restrictions.

10. To remove a data restriction, click the Remove button.


11. When you have completed adding data restrictions for the user or group, click Apply Restrictions.
Applied restrictions appear in the Current Data Restrictions area.
To remove all data restrictions, click Cancel.
12. Click OK to save the changes.



CHAPTER 4

Managing Time-Based Schedules


This chapter includes the following topics:
Overview, 19
Creating a Time-Based Schedule, 20
Managing Time-Based Schedules, 21
Managing Reports in a Time-Based Schedule, 23
Using the Calendar, 26
Defining a Business Day, 27
Defining a Holiday, 27
Monitoring a Schedule, 27

Overview
A time-based schedule updates reports based on a configured schedule. When Data Analyzer runs a time-based
schedule, it runs each report attached to the schedule. You can attach any cached report to a time-based
schedule.
To use a time-based schedule, complete the following steps:
1. Create a time-based schedule.
Configure the start time, date, and repeating option of the schedule when you create or edit a time-based
schedule.
2. Attach reports to the time-based schedule as tasks.
Attach a report to the time-based schedule when you create or edit the report. Attach imported cached
reports to tasks from the time-based schedule.
You can configure the following types of time-based schedules:
Single-event schedule. Updates report data only on the configured date. Create a single-event schedule for a
one-time update of the report data. For example, if you know that the database administrator will update the
data warehouse on December 1, but do not know when other updates occur, create a single-event schedule
for December 2.
Recurring schedule. Updates report data on a regular cycle, such as once a week or on the first Monday of
each month. Create a recurring schedule to update report data regularly. You might use a recurring schedule
to run reports after a regularly scheduled update of the data warehouse. For example, if you know that the

data warehouse is updated the first Friday of every month, create a time-based schedule to update reports on
the second Monday of every month.
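A rule such as "the first Friday of every month" corresponds to a date computation like the following sketch. This is illustrative only; Data Analyzer computes schedule run dates internally:

```python
import datetime

def nth_weekday(year, month, weekday, n):
    """Date of the nth given weekday (0=Monday .. 6=Sunday) in a month."""
    first = datetime.date(year, month, 1)
    # Days from the 1st to the first occurrence of the target weekday.
    offset = (weekday - first.weekday()) % 7
    return first + datetime.timedelta(days=offset + 7 * (n - 1))

# First Friday (weekday 4) of June 2014.
print(nth_weekday(2014, 6, 4, 1))  # 2014-06-06
```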
After you attach reports to a time-based schedule, you can create indicators and alerts for the reports.
Monitor existing schedules with the Calendar or the Schedule Monitor. The Calendar provides daily, weekly,
or monthly views of all the time-based schedules in the repository. You can set up business days and holidays for
the Data Analyzer Calendar. The Schedule Monitor provides a list of the schedules currently running reports.
If you want to update reports when a PowerCenter session or batch completes, you can create an event-based
schedule.

Creating a Time-Based Schedule


You can create single-event or recurring schedules to run reports as tasks. Single-event schedules run tasks once.
Recurring schedules can repeat every minute, or hourly, daily, weekly, monthly, or quarterly.

To create a time-based schedule:

1. Click Administration > Scheduling > Time-Based Schedules > Add.


The Properties page appears.
2. Enter the following information:

Field Description

Name Name of the time-based schedule. The name can include any character except a
space, tab, newline character, and the following special characters:
\/:*?<>|&[]

Description Description of the time-based schedule.

Business Day Only When selected, the schedule runs reports on business days only. If a scheduled
run falls on a non-business day (a weekend or configured holiday), Data Analyzer
waits until the next scheduled run to run attached reports.

Start Date Date the schedule initiates. Default is the current date on Data Analyzer.

Start Time Time the schedule initiates. Default is 12:00 p.m. (noon).

3. Select a repeat option.


For a single-event schedule, select Do Not Repeat.
For a repeating schedule, select one of the following repeat options:

Field Description

Repeat Every (Number) (Minute/Hour/Day/Week/Month/Year). Repeats every specified number of units of
time. You can select Minute, Hour, Day, Week, Month, or Year as a unit of time. Select minutes in
increments of five. Use this setting to schedule recurring updates of report data.

Repeat Every (Monday/Tuesday/Wednesday/Thursday/Friday/Saturday/Sunday). Repeats each week on the
specified day(s). Use this setting to schedule weekly updates of report data.

Repeat the (First/Second/Third/Fourth) (Monday/Tuesday/Wednesday/Thursday/Friday/Saturday/Sunday) of
every (Month/Year). Repeats on the specified day of the week of every month or year. Use this setting to
schedule monthly or yearly updates of report data.

Repeat on (Number) of days from the (Beginning of/End) of the (First/Second/Third Month) of each Quarter.
Repeats every specified number of days from the beginning or end of the specified month. Use this setting
to schedule quarterly updates of report data.

4. Select the repeat condition:

Field Description

Always Schedule repeats until disabled or deleted from the repository. Default is
Always.

Until (Month) (Day) (Year) Schedule repeats until the date you specify. Default is the current date on
Data Analyzer.

5. Click OK.

Managing Time-Based Schedules


After you create a time-based schedule, you can attach reports to the schedule. You can attach any cached report
to a time-based schedule. When Data Analyzer runs a time-based schedule, it runs each attached report.
You can complete the following tasks for a time-based schedule:
Edit a schedule.
Edit schedule access permissions.
View or clear the schedule history.
Start the schedule immediately.
Stop the schedule immediately.
Disable the schedule.

Editing a Time-Based Schedule


After you create a time-based schedule, you can edit schedule properties. You can also remove reports or change
the order in which they run.
When you update a time-based schedule, the change affects all attached reports and alerts.

To edit a time-based schedule:

1. Click Administration > Scheduling > Time-Based Schedules.


The Time-Based Schedules page appears.
2. Click the name of the schedule you want to edit.
The Properties page appears.
3. Click Tasks to remove reports from the schedule.
4. Edit schedule properties if necessary, and then click OK.



Editing Access Permissions for a Time-Based Schedule
Access permissions determine which users and groups can attach reports to the schedule, modify the schedule,
or change access permissions to the schedule.
By default, every user with the appropriate privileges can edit a schedule. You can change the access permissions
for a schedule to protect the security of the schedule.
To set access permissions, click the Permissions button.


Viewing or Clearing a Time-Based Schedule History


You can view the history of a time-based schedule. Each time-based schedule has a history containing the
following information:
Start time. The date and time Data Analyzer started running the schedule.
End time. The date and time Data Analyzer stopped running the schedule.
Status. Lists whether the schedule or task completed successfully or the number of errors that occurred.
When you view schedule histories, you can determine how long all tasks attached to the schedule take to
update, the number of successfully completed schedule runs, or the number of recurring errors during the run.
You can also clear the history of a schedule. You might clear a schedule history at the end of a quarter or to save
space in the repository.

To view or clear the history of a time-based schedule:

1. Click Administration > Scheduling > Time-Based Schedules.


The Time-Based Schedules page appears.
2. Select the schedule you want to view.
The Properties page appears.
3. Click History.
The Schedule History page appears. The schedule name appears in parentheses.
4. To clear the history of the schedule, click Clear.
5. Click OK.

Starting a Time-Based Schedule Immediately


You can start a time-based schedule immediately instead of waiting for its next scheduled run. You might start
a time-based schedule immediately to test attached reports. You might also start the schedule if errors occurred
during the previously scheduled run.



To start a time-based schedule immediately:

1. Click Administration > Scheduling > Time-Based Schedules.


The Time-Based Schedules page appears.
2. For the time-based schedule you want to start, click Run Now.
Data Analyzer starts the schedule and runs the attached reports.

Stopping a Time-Based Schedule Immediately


You can stop a time-based schedule immediately, aborting all attached reports. You can stop a schedule
immediately when you need to restart the server. For more information, see Stopping a Schedule on page 28.

Disabling a Time-Based Schedule


You can disable a time-based schedule when you do not want it to run. You might disable a schedule when it
has no attached reports or when the update of source data is temporarily interrupted. When you want the
schedule to resume, you can enable the schedule.

To disable a time-based schedule:

1. Click Administration > Scheduling > Time-Based Schedules.


2. Click the Enabled button for the schedule you want to disable.
The status of the schedule changes to Disabled. When you want to enable the schedule again, click the
Disabled button.

Removing a Time-Based Schedule


You can remove time-based schedules from the repository. Before you remove a schedule from the repository,
reassign all tasks attached to the schedule to another schedule.

To remove a time-based schedule:

1. Click Administration > Scheduling > Time-Based Schedules.


2. Click the Remove button for the schedule you want to delete.
3. Click OK.

Managing Reports in a Time-Based Schedule


After you create a time-based schedule, you can attach reports to the schedule. You can attach any cached report
to a time-based schedule. When Data Analyzer runs a time-based schedule, it runs each attached report.
You can complete the following schedule-related tasks for a report:
Attach a report to a time-based schedule.
View a list of attached reports.
View task properties.
View or clear a task history.
Remove a report from a time-based schedule.



Attaching Reports to a Time-Based Schedule
You can attach a report to a time-based schedule using one of the following methods:
Save a new report as cached. Select the schedule option when you save a new report to the repository.
Save an existing report as a cached report. Select Save As on an existing report, and change the scheduling
options.
Add an imported report to a schedule. Select a schedule and use the add task option to attach multiple
imported cached reports to an existing schedule.
You can attach multiple reports to a single schedule. If you attach multiple reports to a schedule, Data Analyzer
runs the reports concurrently. To make troubleshooting easier, attach a small number of reports to a schedule.
Set up multiple schedules to run a large number of reports.
You can attach reports that have alerts on a predefined schedule to a time-based schedule, but not to an event-
based schedule. If you attach a report that has alerts on a predefined schedule to a time-based schedule, the
report schedule must update more often than the alert schedule updates.

Attaching Imported Cached Reports to a Time-Based Schedule


When you import cached reports to the repository, Data Analyzer displays a message reminding you that the
imported scheduled reports must be attached to a schedule.

You must attach any cached reports that you import to a schedule. You can attach each imported report
individually or attach multiple imported reports from a list to a single schedule. To attach multiple reports
from the list, you must attach the reports during the same Data Analyzer session. If the session expires or you
log out before attaching multiple reports from the import list, you cannot attach multiple reports. You must
attach the imported reports individually.
You can attach imported cached reports to time-based or event-based schedules.

To attach an imported cached report to a time-based schedule:

1. Click Administration > Scheduling > Time-Based Schedules.


2. Click the time-based schedule that you want to use.
3. Click Tasks.
The list of the tasks attached to the schedule appears.
4. Click Add.
The Add button appears only when you have unscheduled imported reports in the repository.
The Imported Scheduled Reports window appears.
5. Select the imported reports that you want to add to the schedule.
If you want to add all available imported reports as a task for the schedule, select the All check box next to
Select Reports.
6. Click Apply.
The report appears as an item on the task list.

Viewing Attached Reports


All reports that are attached to a time-based schedule display as a list of tasks for the schedule. You can view
these tasks on the Tasks page for the schedule. When a user selects broadcast or alert rules for a time-based
schedule, Data Analyzer attaches the rules to the schedule but does not display the rules on the list of tasks for
the schedule. Although the rules do not appear on the Tasks page for the schedule, Data Analyzer applies the
rules when it runs the report on the time-based schedule.



To view a report attached to a time-based schedule:

1. Click Administration > Scheduling > Time-Based Schedules.


The Time-Based Schedules page appears.
2. Click the schedule you want to view.
The Properties page appears.
3. Click Tasks.
All attached reports display.

Viewing Task Properties


You can view the task properties for any report attached to a time-based schedule.

To view task properties:

1. Click Administration > Scheduling > Time-Based Schedules.


2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report.
The Task Properties page appears. You cannot modify the task properties.
5. Click OK to close the Task Properties page.

Viewing or Clearing a Task History


You can view a task history for reports attached to time-based schedules. View report histories to determine
how long the report takes to update, the number of successfully completed runs, or recurring errors when
running the report. You can view a task history to compare the number of successful runs on different
schedules.
You can also clear the history of a report. You can clear a task history at the end of a quarter or to save space in
the repository.

To view or clear a task history:

1. Click Administration > Scheduling > Time-Based Schedules.


2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report.
The Task Properties page appears.
5. Click History.
6. To clear the task history, click Clear, and then click OK.
7. To return to Task Properties, click OK.

Removing a Report from a Time-Based Schedule


You can remove a report from a time-based schedule. Remove a report when you plan to disable the schedule or
when the report requires a new update strategy. When you remove a task, you must attach it to another
schedule to ensure it updates in a timely manner.



To remove a report from a time-based schedule:

1. Click Administration > Scheduling > Time-Based Schedules.


The Time-Based Schedules page appears.
2. Click the name of the schedule you want to edit.
The Properties page appears.
3. Click Tasks.
4. Select the check box for the report you want to remove.
If you want to remove all attached reports, select the check box in the title bar next to Name.
5. Click Remove, and then click OK.

Using the Calendar


Use the Calendar in the Scheduling section to view all enabled time-based schedules in the repository. The
Calendar lists schedules by day, week, or month. The default Calendar display is a view of the current day.
The Calendar recognizes leap years.

To view the Calendar:

1. Click Administration > Scheduling > Calendar.


The Calendar appears.
2. Click Weekly or Monthly to change the view of the Calendar.

Navigating the Calendar


The Calendar provides daily, weekly, and monthly views. You can navigate from one view to another.

Navigating the Daily View


The Calendar opens to the Daily view by default. The Daily view displays the current day and organizes the
time-based schedules for the current day by hour. Use the left and right arrows to navigate to the previous and
next day, respectively. To view a different date, select a different date or month in the calendar.

Navigating the Weekly View


The Weekly view opens to the current week by default. The Weekly view displays all time-based schedules for
the week. Use the left and right arrows to navigate to the previous and following weeks, respectively. To access
a Daily view, click a date.

Navigating the Monthly View


The Monthly view opens to the current month by default. The Monthly view displays all time-based schedules
for the month. Use the left and right arrows to navigate to the previous and following months, respectively. To
access a Weekly view, click a week. To access a Daily view, click the specific date.



Defining a Business Day
You can define business days for the Data Analyzer Calendar. Business days are the days Data Analyzer treats as
regular working days. After you define business days, you can create time-based schedules that run only on
those days.
The business day setting overrides all other recurring schedule settings you create. If the schedule falls on a non-
business day, like a weekend or holiday, Data Analyzer postpones the schedule to run attached reports on the
next scheduled day.
For example, the configured business days are Monday through Friday. You create a schedule to run reports on
the first of the month, and configure the schedule to run only on business days. If March 1 falls on a Sunday,
Data Analyzer waits until the next scheduled day, April 1, to run the schedule.
The default business days are Monday through Friday. You can change these business days to fit your work
schedule.
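The postponement rule above can be sketched as a quick check. This is only an illustration: the helper name, the Monday-through-Friday assumption, the use of GNU date, and the choice of a sample year in which March 1 falls on a Sunday are all assumptions, not part of the product.

```shell
# Sketch of the business-day postponement rule (Mon-Fri assumed).
# Uses GNU date; 2009-03-01 is an illustrative date on which March 1
# falls on a Sunday, as in the example above.
is_business_day() {
  dow=$(date -d "$1" +%u)    # 1 = Monday ... 7 = Sunday
  [ "$dow" -le 5 ]
}

if is_business_day 2009-03-01; then
  echo "run schedule"
else
  echo "postpone to next scheduled day"
fi
```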

To define business days:

1. Click Administration > Scheduling > Business Days.


The Business Days page appears.
2. Select the days you want to define as business days.
Clear the days you do not want defined as business days.
3. Click Apply.

Defining a Holiday
You can define holidays for the Data Analyzer Calendar. Data Analyzer treats holidays as non-business days.
Time-based schedules configured to run reports only on business days do not run on holidays. When a schedule
falls on a holiday, Data Analyzer runs the reports on the next scheduled day. Time-based schedules that are not
configured to run only on business days still run on configured holidays.
View all configured holidays on the Holidays page. By default, there are no configured holidays.

To define a holiday:

1. Click Administration > Scheduling > Holidays.


The Holidays page appears.
2. Click Add.
The Holiday Properties page appears.
3. Enter the name, date, and a brief description of the holiday.
4. Click OK.

Monitoring a Schedule
The Schedule Monitor provides a list of all schedules that are currently running in the repository. You might
check the Schedule Monitor before you restart Data Analyzer to make sure no schedules are running. You
might also use the Schedule Monitor to verify whether Data Analyzer runs reports at the scheduled time.



To monitor a schedule, click Administration > Scheduling > Schedule Monitoring. Data Analyzer displays
schedules that are currently running.

Stopping a Schedule
You can stop a running schedule and all attached reports through the Schedule Monitor. You might stop a
schedule when you need to restart the server or when a problem arises with source data.

To stop a running schedule:

1. Click Administration > Scheduling > Schedule Monitoring.


The Schedule Monitor lists all currently running schedules.
2. Click Remove to stop a running schedule.
3. Click OK.



CHAPTER 5

Managing Event-Based Schedules


This chapter includes the following topics:
Overview, 29
Updating Reports When a PowerCenter Session Completes, 29
Managing Event-Based Schedules, 31
Managing Reports in an Event-Based Schedule, 33

Overview
PowerCenter Data Analyzer provides event-based schedules and the PowerCenter Integration utility so you can
update reports in Data Analyzer based on the completion of PowerCenter sessions.
To update reports in Data Analyzer when a session completes in PowerCenter, complete the following steps:
1. Create an event-based schedule and attach cached reports to the schedule. For more information, see Step
1. Create an Event-Based Schedule on page 30.
2. Configure a PowerCenter session to call the PowerCenter Integration utility as a post-session command and
pass the event-based schedule name as a parameter. For more information, see Step 2. Use the
PowerCenter Integration Utility in PowerCenter on page 31.
If the PowerCenter Integration utility is set up correctly, Data Analyzer runs each report attached to the event-
based schedule when a PowerCenter session completes.
You can create indicators and alerts for the reports in an event-based schedule.
You can monitor event-based schedules with the Schedule Monitor. The Schedule Monitor provides a list of the
schedules currently running reports.
You cannot use the PowerCenter Integration utility with a time-based schedule.

Updating Reports When a PowerCenter Session


Completes
When you create a Reporting Service in the PowerCenter Administration Console, PowerCenter installs the
PowerCenter Integration utility.

PowerCenter installs a separate PowerCenter Integration utility for every Reporting Service that you create. You
can find the PowerCenter Integration utility in the notifyias-&lt;Reporting Service name&gt; folder in the directory
where PowerCenter is installed.

PowerCenter suffixes the Reporting Service name to the notifyias folder. For example, if you create a Reporting
Service and call it DA_Test, the notifyias folder is named notifyias-DA_Test.
Before you run the PowerCenter Integration utility, complete the following steps:
1. Open the notifyias.properties file in the notifyias-&lt;Reporting Service name&gt; folder and set the
logfile.location property to the location and the name of the PowerCenter Integration utility log file.
The PowerCenter Integration utility creates a log file when it runs after the PowerCenter session completes.
The logfile.location property determines the location and the name of the log file.
2. Open the notifyias file in a text editor:
UNIX: notifyias.sh
Windows: notifyias.bat
Back up the notifyias file before you modify it.
3. Set the JAVA_HOME environment variable to the location of the JVM.
Run the PowerCenter Integration utility to update reports in Data Analyzer when a session completes in
PowerCenter.
The PowerCenter Integration utility uses the settings in the notifyias.properties file to update reports in
Data Analyzer. The notifyias.properties file contains information about the Reporting Service URL and the
schedule queue name.
When you create a Reporting Service, PowerCenter sets the properties in the notifyias.properties file to point to
the correct instance of the Reporting Service.
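The setup steps above can be sketched on UNIX as follows. The script runs against a mock notifyias folder so that it is self-contained; the folder layout, the queue.name property, and all paths and names (including the service name DA_Test) are illustrative assumptions, not values taken from this guide.

```shell
set -e
# Mock notifyias-<Reporting Service name> folder (layout is illustrative).
DIR=$(mktemp -d)/notifyias-DA_Test
mkdir -p "$DIR"
printf 'queue.name=IASQueue\nlogfile.location=\n' > "$DIR/notifyias.properties"
printf '#!/bin/sh\n' > "$DIR/notifyias.sh"

# Step 1: point logfile.location at the utility log file (path assumed).
sed -i 's|^logfile.location=.*|logfile.location=/var/log/notifyias-DA_Test.log|' \
  "$DIR/notifyias.properties"

# Step 2: back up the notifyias file before modifying it.
cp "$DIR/notifyias.sh" "$DIR/notifyias.sh.bak"

# Step 3: set JAVA_HOME to the location of the JVM (path assumed).
export JAVA_HOME=/usr/java/latest

grep '^logfile.location' "$DIR/notifyias.properties"
```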

Step 1. Create an Event-Based Schedule


To run reports in Data Analyzer after a session completes in PowerCenter, create an event-based schedule in
Data Analyzer and attach the reports that you want to run after the PowerCenter session completes.

Creating an Event-Based Schedule


When you create an event-based schedule, you need to provide a name and description of the schedule. You do
not need to provide information about the PowerCenter session you want to use.

To create an event-based schedule:

1. Click Administration > Scheduling > Event-Based Schedules.


The Event-Based Schedules page appears.
2. Click Add.
The Add an Event-Based Schedule page appears.
3. Enter a name and description for the schedule.
4. Click OK.
After you create the event-based schedule, you can attach it to a cached report when you save the report.

Attaching Reports to an Event-Based Schedule


You can attach a report to an event-based schedule with one of the following methods:
Save a new report as a cached report. Select the cached report option and a specific schedule when you save
a new report to the repository.



Save an existing report as a cached report. Select Save As on a report, then change the scheduling options.
You can attach multiple reports to a single schedule. If you attach multiple reports to a schedule, Data Analyzer
runs the reports concurrently. To make troubleshooting easier, attach a small number of reports to a schedule.
Set up multiple schedules to run a large number of reports.

Step 2. Use the PowerCenter Integration Utility in PowerCenter


Before you can use the PowerCenter Integration utility in a PowerCenter post-session command, create an
event-based schedule as outlined in the previous step.
In the PowerCenter Workflow Manager, you must configure the PowerCenter session to call the PowerCenter
Integration utility as a post-session command. You can set up the post-session command to send Data Analyzer
notification when the session completes successfully. Data Analyzer then connects to the PowerCenter data
warehouse to retrieve new data to update reports.
When you use the PowerCenter Integration utility in the post-session command, you need to navigate to the
correct notifyias-&lt;Reporting Service name&gt; folder and provide the name of the event-based schedule that you
want to associate with the PowerCenter session.
Use the following post-session command syntax for PowerCenter installed on Windows:
notifyias.bat &lt;event-based schedule name&gt;

Use the following shell command syntax for PowerCenter installed on UNIX:
notifyias.sh &lt;event-based schedule name&gt;

&lt;event-based schedule name&gt; is the name of the event-based schedule that contains the reports you
want to run when the PowerCenter session completes. If the system path does not include the path of the
PowerCenter Integration utility, you need to prefix the utility file name with the file path.
You can also run the PowerCenter Integration utility as a command task in a PowerCenter workflow. If you
want to run the PowerCenter Integration utility after all other tasks in a workflow complete, you can run it as
the last task in the workflow.
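The post-session call can be mocked end to end with a stand-in script. The real utility notifies Data Analyzer rather than echoing, and the temporary folder and the schedule name NightlyLoad are hypothetical; the sketch only shows the calling convention, with the event-based schedule name as the single argument.

```shell
set -e
# Stand-in for notifyias.sh that records the schedule name it receives.
DIR=$(mktemp -d)
cat > "$DIR/notifyias.sh" <<'EOF'
#!/bin/sh
# The real script notifies Data Analyzer to run the named event-based schedule.
echo "notify event-based schedule: $1"
EOF
chmod +x "$DIR/notifyias.sh"

# Invoked the way a PowerCenter post-session success command would call it,
# with the event-based schedule name as the argument (name is hypothetical):
"$DIR/notifyias.sh" NightlyLoad
```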
For more information about configuring post-session commands, PowerCenter workflows, or the PowerCenter
Integration utility, see the PowerCenter documentation.

Managing Event-Based Schedules


You can perform the following tasks to manage an event-based schedule:
Edit a schedule.
Edit schedule access permissions.
View or clear the schedule history.
Start a schedule immediately.
Stop a schedule immediately.
Disable a schedule.
Remove a schedule.

Editing an Event-Based Schedule


After you create an event-based schedule, you can edit its name and description.



To edit an event-based schedule:

1. Click Administration > Scheduling > Event-Based Schedules.


The Event-Based Schedules page appears.
2. Click the name of the schedule you want to edit.
The Edit an Event-Based Schedule page appears.
3. Edit the name or description of the event-based schedule.
If you want to view the reports assigned as tasks to the schedule, click Tasks.
If you want to view the history of the schedule, click History.
4. Click OK.

Editing Access Permissions for an Event-Based Schedule


Access permissions determine which users and groups can attach reports to the schedule, modify the schedule,
or change access permission for the schedule.
By default, the system administrator and users with the Set Up Schedules and Tasks privilege and Write
permission on the schedule can edit an event-based schedule. To secure a schedule, you can change the access
permissions for the schedule. To edit access permissions, click the Permissions button.

Viewing or Clearing an Event-Based Schedule History


You can view the history of an event-based schedule to see the following information:
Start time. The date and time Data Analyzer started the schedule.
End time. The date and time the schedule completes.
Status. Lists the successful completion of the schedule or the number of errors that have occurred.
View schedule histories to determine how long attached reports take to complete, the number of successfully
completed runs of the schedule, or the number of recurring errors.
You can also clear the history of an event-based schedule. You might clear a schedule history at the end of a
quarter or to save space in the repository.

To view an event-based schedule history:

1. Click Administration > Scheduling > Event-Based Schedules.


The Event-Based Schedules page appears.
2. Click the schedule you want to view.
3. Click History.
The Schedule History page appears with the schedule name in parentheses.
4. To clear the schedule history, click Clear.
5. Click OK.

Starting an Event-Based Schedule Immediately


You can start an event-based schedule immediately instead of waiting for the related PowerCenter session to
complete. You might start an event-based schedule immediately to test attached reports and report alerts. You
might start the schedule if errors occurred during the last run of the schedule.



To start an event-based schedule immediately:

1. Click Administration > Scheduling > Event-Based Schedules.


The Event-Based Schedules page appears.
2. For the event-based schedule you want to start, click Run Now.
Data Analyzer starts the schedule and runs the attached reports.

Stopping an Event-Based Schedule Immediately


You can stop an event-based schedule immediately, which stops all attached reports. You can stop a schedule
immediately when you need to restart the server. For more information, see Stopping a Schedule on page 28.

Disabling an Event-Based Schedule


You can disable an event-based schedule when you do not want it to run. You might disable a schedule when it
has no attached reports or when the update of source data has been interrupted. When you want the schedule to
resume, you can enable the schedule.

To disable an event-based schedule:

1. Click Administration > Scheduling > Event-Based Schedules.


The Event-Based Schedules page appears.
2. Click the Enabled button for the schedule you want to disable.
The Status of the schedule changes to Disabled. To enable the schedule again, click Disabled.

Removing an Event-Based Schedule


You can remove event-based schedules from the repository. You might want to remove an event-based schedule
when the PowerCenter session is no longer valid.
Before removing a schedule from the repository, reassign all attached reports to another schedule.

To remove an event-based schedule:

1. Click Administration > Scheduling > Event-Based Schedules.


2. Click the Remove button for the schedule you want to delete.
3. Click OK.

Managing Reports in an Event-Based Schedule


After you create an event-based schedule, you can attach any cached reports to the schedule. When Data
Analyzer runs an event-based schedule, it runs each attached report.
You can perform the following tasks to manage reports in an event-based schedule:
View a list of attached reports.
View task properties.
View or clear a report history.
Remove a report from an event-based schedule.
Attach imported cached reports to a schedule.



Viewing Attached Reports
You can view all reports attached to an event-based schedule.

To view tasks attached to an event-based schedule:

1. Click Administration > Scheduling > Event-Based Schedules.


The Event-Based Schedules page appears.
2. Click the name of the schedule you want to edit.
The schedule properties display.
3. Click Tasks.
Data Analyzer displays all attached reports.

Viewing Task Properties


You can view the properties of any report attached to an event-based schedule.

To view task properties:

1. Click Administration > Scheduling > Event-Based Schedules.


2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report.
The Task Properties page appears.
5. Click OK.

Viewing or Clearing a Report History


You can view a report history for the reports attached to an event-based schedule. View report histories to
determine how long a report takes to update, the number of successfully completed runs, or recurring errors
when running the report. You might want to view a report history to compare the number of successful runs on
different schedules.
You can also clear the report history. You might clear the history at the end of a quarter or to save space in the
repository.

To view or clear a report history:

1. Click Administration > Scheduling > Event-Based Schedules.


2. Click the name of the schedule that runs the report.
3. Click Tasks.
4. Click the name of the report.
The Task Properties page appears.
5. Click History.
Data Analyzer displays the report history.
6. To clear the history, click Clear, and then click OK.
7. To return to Task Properties, click OK.



Removing a Report from an Event-Based Schedule
You can remove a report from an event-based schedule. You might want to remove a report when you plan to
disable the schedule or when the report requires a new update strategy. When you remove a cached report,
attach it to another schedule to ensure it updates in a timely manner.

To remove a report from an event-based schedule:

1. Click Administration > Scheduling > Event-Based Schedules.


2. Click the name of the schedule you want to edit and then click Tasks.
3. Select the check box for the report you want to remove.
If you want to remove all attached reports, select the check box in the title bar next to Name.
4. Click Remove, and then click OK.

Attaching Imported Cached Reports to an Event-Based Schedule


When you import cached reports to the repository, Data Analyzer displays a message reminding you that the
imported scheduled reports must be attached to a schedule.

You must attach each imported cached report to a schedule. You can attach imported reports individually or
attach multiple imported reports from a list to a single schedule. To attach multiple reports from the list, you
must attach them during the same Data Analyzer session. If the session expires or you log out before attaching
the reports from the import list, you cannot attach multiple reports. You must attach the imported reports
individually.
You can attach imported cached reports to time-based or event-based schedules.

To attach an imported cached report to an event-based schedule:

1. Click Administration > Scheduling > Event-Based Schedules.


2. Click the event-based schedule that you want to use.
3. Click Tasks.
The list of the tasks assigned to the schedule appears.

4. Click Add.
The Add button appears only when you have unscheduled imported reports in the repository.
The Imported Scheduled Reports window appears.
5. Select the reports that you want to add to the schedule.
If you want to add all available imported reports to the schedule, click the All check box.
6. Click Apply.
The report appears as an item on the task list.



CHAPTER 6

Exporting Objects from the


Repository
This chapter includes the following topics:
Overview, 37
Exporting a Schema, 38
Exporting a Time Dimension, 40
Exporting a Report, 40
Exporting a Global Variable, 42
Exporting a Dashboard, 42
Exporting a Security Profile, 43
Exporting a Schedule, 44
Troubleshooting, 45

Overview
You can export repository objects to XML files and import repository objects from XML files. You might want
to export objects to archive the repository. You might also want to export and import objects to move Data
Analyzer objects from development to production.
You can export the following repository objects:
Schemas
Time Dimensions
Reports
Global Variables
Dashboards
Security profiles
Schedules
When you export the repository objects, Data Analyzer creates an XML file that contains information about the
exported objects. Use this file to import the repository objects into a Data Analyzer repository. You can view
the XML files with any text editor. However, do not modify the XML file created when you export objects. Any

change might invalidate the XML file and prevent you from using it to import objects into a Data Analyzer
repository.
When you save the XML file on a Windows machine, verify that the Windows temp directory, usually on the
C: drive, has enough space for the temporary files typically created when a file is saved.
Schedule exporting and importing tasks so that you do not disrupt Data Analyzer users. Exporting and
importing repository objects uses considerable system resources. If you perform these tasks while users are
logged in to Data Analyzer, users might experience slow response or timeout errors.
You can also export repository objects using the ImportExport command line utility. For more information, see
Using the Import Export Utility on page 63.

Exporting a Schema
You can export analytic and operational schemas. When you export a schema from the Data Analyzer
repository, you can select individual metrics within a schema to export or you can select a folder that contains
metrics. You can also choose whether to export only metric definitions or to export all metrics, attributes,
tables, and other schema objects associated with the metric.

Exporting Metric Definitions Only


When you export only metric definitions, Data Analyzer exports the metrics you select. It does not export the
definition of the table or schema that contains the metrics or any other schema object associated with the metric
or its table or schema.

Exporting Metrics and Associated Schema Objects


When Data Analyzer exports a metric or schema and the associated objects, it exports different objects based on
the type of schema you select.
You can export the following metrics and schemas:
Operational schemas or metrics in operational schemas
Analytic schemas or metrics in analytic schemas
Hierarchical schemas or metrics in hierarchical schemas
Calculated metrics

Exporting Operational Schemas


When Data Analyzer exports a metric from an operational schema, it also exports all metrics, attributes, and
tables in the operational schema and the join expressions for the operational schema tables.

Exporting Analytic Schemas


When exporting a metric from an analytic schema, Data Analyzer exports the definitions of the following
schema objects associated with the metric:
Fact tables associated with the exported metric.
When exporting a calculated metric, Data Analyzer also exports all associated metrics that are used to
calculate the calculated metric. Data Analyzer also exports all fact tables associated with any of the
exported metrics, including the calculated metric and those used to calculate it.
When exporting a fact table associated with a time dimension, Data Analyzer does not export the time
dimension. You can export the time dimensions separately.
Dimension keys in the exported fact table.

38 Chapter 6: Exporting Objects from the Repository


Aggregate fact tables associated with the exported fact tables.
Dimension tables associated with the exported fact tables.
Attributes in the exported dimension tables.
Drill paths associated with any of the attributes in the dimension tables.
Aggregate, template, and snowflake dimension tables associated with the dimension tables.
If you export a template dimension table associated with the exported metric, Data Analyzer exports only
one definition of the template dimension. You can also export template dimensions separately. If you export
only a template dimension, Data Analyzer exports only the template dimension and its attributes. It does not
export any associated schema object.

Exporting Hierarchical Schemas


When Data Analyzer exports a metric from a hierarchical schema, it also exports all metrics and attributes in the
hierarchical schema.

Exporting Calculated Metrics


Calculated metrics are derived from two or more base metrics from analytic, operational, or hierarchical
schemas.
For example, you have the following metrics:
Base metric 1 (BaseMetric1) and base metric 2 (BaseMetric2) are metrics from fact tables in an analytic
schema.
Base metric 3 (BaseMetric3) is a metric from an operational schema (OpSch1).
Base metric 4 (BaseMetric4) is a metric from a different operational schema (OpSch2).
If you export a calculated metric, which is calculated from BaseMetric1 and BaseMetric2, Data Analyzer
exports the fact table associated with each metric. In addition, Data Analyzer exports all schema objects
associated with the metrics in these fact tables.
If you export a calculated metric, which is calculated from BaseMetric1 and BaseMetric3, Data Analyzer
exports BaseMetric1, its associated fact table, and the schema objects associated with the metric in that fact
table. In addition, Data Analyzer exports BaseMetric3 and its entire operational schema.
If you export a calculated metric, which is calculated from BaseMetric3 and BaseMetric4, Data Analyzer
exports BaseMetric3 and its entire associated operational schema, and BaseMetric4 and its entire operational
schema.

To export schema objects:

1. Click Administration > XML Export/Import > Export Schemas.


The Export Schemas page displays all the folders and metrics in the Metrics folder of the Schema
Directory.
If you define a new object in the repository or if you create a new folder or move objects in the Schema
Directory, the changes may not immediately display in the Schema Directory export list. Click Refresh
Schema to display the latest list of folders and metrics in the Schema Directory.
2. Select the type of information you want to export.
To export the metric definitions and associated tables and attributes, select Export the Metrics with the
Associated Schema Tables and Attributes.
To export only metric definitions, select Export Metric Definitions Only.
3. Select the folders, metrics, or template dimensions that you want to export.
At the top of the Metrics section, you can select Metrics to select all folders and metrics in the list.
You can select Template Dimensions to select all template dimensions in the list or select a metrics folder
to export all metrics within the folder. You can also select individual metrics in different folders.

4. Click Export as XML.
The File Download window appears.
5. Click Save.
The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save.
Data Analyzer exports the schema to an XML file.

Exporting a Time Dimension


You can export time dimension tables to an XML file. Time dimension tables contain date- and time-related
attributes that describe the occurrence of a metric.

To export a time dimension table:

1. Click Administration > XML Export/Import > Export Time Dimensions.


The Export Time Dimensions page displays the time dimension tables in the repository.
2. Select the time dimension you want to export.
3. Click Export as XML.
The File Download window appears.
4. Click Save.
The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save.
Data Analyzer exports the time dimension table to an XML file.
If an XML file with the same name already exists in the directory, Data Analyzer prompts you to overwrite
the file or rename the new file.

Exporting a Report
You can export reports from public and personal folders. You can export multiple reports at once. When you
export a folder, Data Analyzer exports all reports in the folder and its subfolders.
You can export cached and on-demand reports. When you export a cached report, Data Analyzer exports the
report data and the associated schedule.
When you export a report, Data Analyzer always exports the following report components:
Report table
Report charts
Filters
Calculations
Custom attributes



All reports in an analytic workflow
All subreports in a composite report
By default, Data Analyzer also exports the following components associated with reports. You can choose not to
export any of these components:
Indicators
Alerts
Highlighting
Permissions
Schedules
Filtersets
Data Analyzer exports all current data for each component, with the following exceptions:
Gauge indicators. Exported personal gauge indicators do not keep their original owner. The user who
imports the report becomes the owner of the gauge indicator and the gauge indicator becomes personal to
that user. Exported public gauge indicators keep their original owner.
Alerts. Exported personal and public alerts use the state set for all report subscribers as the default alert state.
Highlighting. Data Analyzer does not export any personal highlighting. Exported public highlighting uses
the state set for all users as the default highlighting state.
To export an analytic workflow, you need to export only the originating report. When you export the
originating report of an analytic workflow, Data Analyzer exports all the workflow reports.
When you export a report that uses global variables, Data Analyzer lists the global variables used in the report.
Although the global variables are not exported with the report, you can export them separately.

To export a report:

1. Click Administration > XML Export/Import > Export Reports.


The Export Report page displays all public and personal folders in the repository that you have permission
to access.
If you create, modify, or delete a folder or report, the changes may not immediately display in the report
export list. Click Refresh Reports to display the latest list of reports from Public Folders and Personal
Folder.
2. Select the folders or reports that you want to export.
Select a folder to export all subfolders and reports in the folder.
3. To modify the report components to export, click Export Options.
4. From the list of Export Options, clear each component that you do not want to export to the XML file.
5. Click Export as XML.
The File Download window appears.
6. Click Save.
The Save As window appears.
7. Navigate to the directory where you want to save the file.
8. Enter a name for the XML file, and then click Save.
Data Analyzer exports the definitions of all selected reports.

Exporting a Global Variable
You can export any global variables defined in the repository. When you export multiple global variables, Data
Analyzer creates one XML file for the global variables and their default values.

To export a global variable:

1. Click Administration > XML Export/Import > Export Global Variables.


The Export Global Variables page appears, listing all the global variables in the repository.
2. Select the global variables that you want to export.
Optionally, select Name at the top of the list to select all the global variables in the list.
3. Click Export as XML.
The File Download window appears.
4. Click Save.
The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save.
Data Analyzer exports the definitions of all selected global variables.

Exporting a Dashboard
When you export a dashboard, Data Analyzer exports the following objects associated with the dashboard:
Reports
Indicators
Shared documents
Dashboard filters
Discussion comments
Feedback
Data Analyzer does not export the following objects associated with the dashboard:
Access permissions
Attributes and metrics in the report
Real-time objects
When you export a dashboard, the Export Options button is unavailable. Therefore, you cannot select specific
components to export.
You can export any of the public dashboards defined in the repository. You can export more than one
dashboard at a time.

To export a dashboard:

1. Click Administration > XML Export/Import > Export Dashboards.


The Export Dashboards page appears, listing all the dashboards in the repository that you can export.
2. Select the dashboards that you want to export.
Optionally, select Name at the top of the list to select all the dashboards in the list.



3. Click Export as XML.
The File Download window appears.
4. Click Save.
The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save.
Data Analyzer exports the definitions of all selected dashboards and objects associated with the dashboard.

Exporting a Security Profile


Data Analyzer keeps a security profile for each user or group in the repository. A security profile consists of the
access permissions and data restrictions that the system administrator sets for a user or group.
When Data Analyzer exports a security profile, it exports access permissions for objects under the Schema
Directory, which include folders, metrics, and attributes. Data Analyzer does not export access permissions for
filtersets, reports, or shared documents.
Data Analyzer allows you to export one security profile at a time. If the security profile you export does not
have access permissions or data restrictions, Data Analyzer does not export any object definitions and displays
a message.

Exporting a User Security Profile


You can export a security profile for one user at a time.

To export a user security profile:

1. Click Administration > XML Export/Import > Export Security Profile.


2. Click Export from Users.
The Export Security Profile page displays a list of all the users in the repository.
3. Select a user whose security profile you want to export.
If there are a large number of users in the repository, Data Analyzer lists one page of users and displays the
page numbers at the top. To view a list of users on other pages, click the page number.
4. Click Export as XML.
The File Download window appears.
5. Click Save.
The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save.
Data Analyzer exports the security profile definition of the selected user.

Exporting a Group Security Profile


You can export a security profile for only one group at a time.



To export a group security profile:

1. Click Administration > XML Export/Import > Export Security Profile.


2. Click Export from Groups.
The Export Security Profile page displays a list of all the groups in the repository.
3. Select the group whose security profile you want to export.
If there are a large number of groups in the repository, Data Analyzer lists one page of groups and displays
the page numbers at the top. To view groups on other pages, click the page number.
4. Click Export as XML.
The File Download window appears.
5. Click Save.
The Save As window appears.
6. Navigate to the directory where you want to save the file.
7. Enter a name for the XML file and click Save.
Data Analyzer exports the security profile definition for the selected group.

Exporting a Schedule
You can export a time-based or event-based schedule to an XML file. Data Analyzer runs a report with a time-
based schedule on a configured schedule. Data Analyzer runs a report with an event-based schedule when a
PowerCenter session completes.
When you export a schedule, Data Analyzer does not export the history of the schedule.

To export a schedule:

1. Click Administration > XML Export/Import > Export Schedules.


The Export Schedules page displays a list of the schedules in the repository.
2. Select the schedule you want to export.
You can click Names at the top of the list to select all schedules in the list.
3. Click Export as XML.
The File Download window appears.
4. Click Save.
The Save As window appears.
5. Navigate to the directory where you want to save the file.
6. Enter a name for the XML file and click Save.
Data Analyzer exports the definitions of all selected schedules.



Troubleshooting
After I export an object, I double-click the XML file and receive an error.

If you double-click the XML file, the operating system tries to open the file with a web browser. The web
browser cannot locate the DTD file Data Analyzer uses for exported objects.
Use a text editor to open the XML file. However, do not edit the file. Changes might invalidate the file.
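Because the browser error stems from the DOCTYPE reference, you can confirm which DTD an exported file points to without risking an edit. A minimal read-only sketch (the file name is hypothetical):

```python
def head_of_file(path, limit=5):
    # Return the first few lines of the exported file; the XML
    # declaration and any DOCTYPE line normally appear here.
    # The file is opened read-only, so it cannot be modified.
    lines = []
    with open(path, "r") as f:
        for _ in range(limit):
            line = f.readline()
            if not line:
                break
            lines.append(line.rstrip("\n"))
    return lines

# Example (hypothetical file name):
# print("\n".join(head_of_file("export.xml")))
```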

CHAPTER 7

Importing Objects to the Repository


This chapter includes the following topics:
Overview, 47
Importing a Schema, 48
Importing a Time Dimension, 51
Importing a Report, 52
Importing a Global Variable, 54
Importing a Dashboard, 55
Importing a Security Profile, 57
Importing a Schedule, 59
Troubleshooting, 60

Overview
You can import objects into the Data Analyzer repository from a valid XML file of exported repository objects.
You can import the following repository objects from XML files:
Schemas
Time dimensions
Reports
Global variables
Dashboards
Security profiles
Schedules
Data Analyzer imports objects based on the following constraints:
You can import objects into the same repository or a different repository. When you import a repository
object that was exported from a different repository, both repositories must have the same language type and
locale settings, or the destination repository must be a superset of the source repository. For more
information, see Localization on page 5.
You can import objects from Data Analyzer 5.0 repositories or later. For more information, see Importing
Objects from a Previous Version on page 48.
Except for global variables, if you import objects that already exist in the repository, you can choose to
overwrite the existing objects. You cannot overwrite global variables that already exist in the repository.

You might want to back up the target repository before you import repository objects into it. You can back up
a Data Analyzer repository in the PowerCenter Administration Console. For more information, see the
PowerCenter Administrator Guide.
Exporting and importing repository objects use considerable system resources. If you perform these tasks while
users are logged in to Data Analyzer, users might experience slow response or timeout errors. Make sure that
you schedule exporting and importing tasks so that you do not disrupt Data Analyzer users.
You can also import repository objects using the ImportExport command line utility.

XML Validation
When you import objects, you can validate the XML file against the DTD provided by Data Analyzer.
Ordinarily, you do not need to validate an XML file that you create by exporting from Data Analyzer.
However, if you are not sure of the validity of an XML file, you can validate it against the Data Analyzer DTD
file when you start the import process.
Do not modify an XML file of exported objects. If you modify the XML file, you might not be able to use it
to import objects into a Data Analyzer repository. If you try to import an invalid XML file, Data Analyzer
stops the import process and displays an error message.
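Before an import, a quick well-formedness check can catch an accidentally modified file early. This sketch checks only that the XML parses; it does not validate against the Data Analyzer DTD (select Validate XML against DTD during the import for that):

```python
from xml.etree import ElementTree

def is_well_formed(path):
    # Attempt a full parse; a modified or truncated export file
    # usually fails here, before you waste an import attempt.
    try:
        ElementTree.parse(path)
        return True
    except ElementTree.ParseError:
        return False
```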

Object Permissions
When you import a repository object, Data Analyzer grants you the same permissions to the object as the owner
of the object. Data Analyzer system administrators can access all imported repository objects. When you import
a report, you can limit access to the report for users who are not system administrators by clearing the Publish
to Everyone option. If you publish an imported report to everyone, all users in Data Analyzer have read and
write access to the report. You can then change the access permissions to the report to restrict specific users or
groups from accessing it.

Importing Objects from a Previous Version


You can import objects from Data Analyzer 5.0 or later. When you import objects from a previous version,
Data Analyzer upgrades the objects to the current version. For example, when you import a Data Analyzer 5.0
report using a custom attribute with groups, Data Analyzer 8.x upgrades the attribute to one with an advanced
expression. For more information about upgrading objects in the repository, see the PowerCenter Upgrade
Guide.

Importing a Schema
You can import schemas from an XML file. A valid XML file can contain definitions of the following schema
objects:
Tables. The schema tables associated with the exported metrics in the XML file. The file might include the
following tables:
Fact table associated with the metric
Dimension tables associated with the fact table
Aggregate tables associated with the dimension and fact tables
Snowflake dimensions associated with the dimension tables
Template dimensions associated with the dimension tables or exported separately
Schema joins. The relationships between tables associated with the exported metrics in the XML file. The
file can include the following relationships:



Fact table joined to a dimension table
Dimension table joined to a snowflake dimension
Metrics. All metrics exported to the XML file. The file can include calculated metrics and base metrics.
Attributes. The attributes in the fact and dimension tables associated with the exported metrics in the XML
file.
Drill paths. The drill paths associated with exported attributes.
Time keys. The time keys associated with exported tables.
Operational schemas. When you import an operational schema, Data Analyzer imports the following
objects:
Tables in the operational schema
Metrics and attributes for the operational schema tables
Schema joins
Hierarchical schemas. When you import a hierarchical schema, Data Analyzer imports the metrics and
attributes in the hierarchical schema.
When you import a schema, Data Analyzer displays a list of all the definitions contained in the XML file. It
then displays a list of all the object definitions in the XML file that already exist in the repository. You can
choose to overwrite objects in the repository. If you import a schema that contains time keys, you must import
or create a time dimension. For more information, see Importing a Time Dimension on page 51.
When you export metrics with the associated schema tables and attributes, the XML file contains different types
of schema objects. If you export the metric definition only, the XML file contains only a list of metric
definitions.
If the XML file contains only the metric definition, you must make sure that the fact table for the metric exists
in the target repository. You can import a metric only if its associated fact table exists in the target repository or
the definition of its associated fact table is also in the XML file.

To import a schema:

1. Click Administration > XML Export/Import > Import Schemas.


The Import Schemas page appears.
2. To validate the XML file against the DTD, select Validate XML against DTD.
3. Click Browse to select an XML file from which to import schemas.
4. Click Open.
The name and location of the XML file display on the Import Schemas page.
5. Click Import XML.
The lists of schema tables, schema joins, metrics, attributes, drill paths, time keys, and operational schemas
display in separate sections.
Table 7-1 shows the information that Data Analyzer displays for schema tables:

Table 7-1. Imported Schema Table Description

Property Description

Name Name of the fact or dimension tables associated with the metric to be imported.

Last Modified Date Date when the table was last modified.

Last Modified By User name of the Data Analyzer user who last modified the table.

Table 7-2 shows the information that Data Analyzer displays for the schema joins:

Table 7-2. Imported Schema Join Expression

Property Description

Table1 Name Name of the fact table that contains foreign keys joined to the primary keys in the
dimension tables. Can also be the name of a dimension table that joins to a
snowflake dimension.

Table2 Name Name of the dimension table that contains the primary key joined to the foreign
keys in the fact table. Can also be the name of a snowflake dimension table
associated with a dimension table.

Join Expression Foreign key and primary key columns that join a fact and dimension table or a
dimension table and a snowflake dimension.

Table 7-3 shows the information that Data Analyzer displays for the metrics:

Table 7-3. Imported Metrics Information

Property Description

Name Name of the metric to be imported.

Last Modified Date Date when the metric was last modified.

Last Modified By User name of the person who last modified the metric.

Analyzer Table Locations Fact table that contains the metric. If the metric is a calculated metric,
square brackets ([]) display in place of a fact table.

Table 7-4 shows the information that Data Analyzer displays for the attributes:

Table 7-4. Imported Attributes Information

Property Description

Name Name of the attributes found in the fact or dimension tables associated with the
metric to be imported.

Last Modified Date Date when the attribute was last modified.

Last Modified By User name of the person who last modified the attribute.

Analyzer Table Locations Fact or dimension table that contains the attribute.

Table 7-5 shows the information that Data Analyzer displays for the drill paths:

Table 7-5. Imported Drill Paths Information

Property Description

Name Name of the drill path that includes attributes in the fact or dimension tables
associated with the metric to be imported.

Last Modified Date Date when the drill path was last modified.

Last Modified By User name of the person who last modified the drill path.

Paths List of attributes in the drill path that are found in the fact or dimension tables
associated with the metric to be imported.



Table 7-6 shows the information that Data Analyzer displays for the time keys:

Table 7-6. Imported Time Keys Information

Property Description

Name Name of the time key associated with the fact table.

Table 7-7 shows the information that Data Analyzer displays for the operational schemas:

Table 7-7. Imported Operational Schemas Information

Property Description

Name Name of the operational schema to be imported.

Last Modified Date Date when the operational schema was last modified.

Last Modified By User name of the person who last modified the operational schema.

Table 7-8 shows the information that Data Analyzer displays for the hierarchical schemas:

Table 7-8. Imported Hierarchical Schema Information

Property Description

Name Name of the hierarchical schema to be imported.

Last Modified Date Date when the hierarchical schema was last modified.

Last Modified By User name of the person who last modified the hierarchical schema.

6. Click Continue.
If objects in the XML file are already defined in the repository, a list of the duplicate objects appears.
To overwrite all the schema objects, select Overwrite All. To overwrite the schema objects of a certain type,
select Overwrite at the top of each section. To overwrite only specific schema objects, select the object.
7. Click Apply.
If you select to overwrite schema objects, confirm that you want to overwrite the objects.
Data Analyzer imports the definitions of all selected schema objects.

Importing a Time Dimension


Time dimension tables contain date- and time-related attributes that describe the occurrence of metrics and
establish the time granularity of the data in the fact table. You can import a time dimension table from an XML
file. When you import a time dimension table, Data Analyzer imports the primary attribute, secondary
attribute, and calendar attribute of the time dimension table.

To import a time dimension table:

1. Click Administration > XML Export/Import > Import Time Dimensions.


The Import Time Dimensions page appears.
2. To validate the XML file against the DTD, select Validate XML against DTD.
3. Click Browse to select an XML file from which to import time dimensions.
4. Click Open.
The name and location of the XML file display on the Import Time Dimensions page.



5. Click Import XML.
Data Analyzer displays the time dimensions found in the XML file.
Table 7-9 shows the information that Data Analyzer displays for the time dimensions:

Table 7-9. Imported Time Dimension Information

Property Description

Name Name of the time dimension table.

Last Modified Date Date when the time dimension table was last modified.

Last Modified By User name of the Data Analyzer user who last modified the time dimension table.

6. Click Continue.
If you successfully import the time dimensions, Data Analyzer displays a confirmation message.
If objects in the XML file are already defined in the repository, a list of the duplicate objects appears.
7. Select the objects you want to overwrite.
8. Click Continue.
Data Analyzer imports the definitions of all selected time dimensions.

Importing a Report
You can import reports from an XML file. Depending on the reports included in the file and the options
selected when exporting the reports, the XML file might not contain all supported metadata. When available,
Data Analyzer imports the following components of a report:
Report table
Report chart
Indicators
Alerts
Filters
Filtersets
Highlighting
Calculations
Custom attributes
All reports in an analytic workflow
Permissions
Report links
Schedules
Data Analyzer imports all data for each component, with the following exceptions:
Gauge indicators. Imported gauge indicators do not keep their original owner. The user who imports the
report becomes the owner of the gauge indicator. If the gauge indicator is personal, it becomes personal to
the user who imports the report.
Alerts. Imported personal and public alerts use the state set for all report subscribers as the default alert state.
Highlighting. Data Analyzer does not export any personal highlighting. Imported public highlighting uses
the state set for all users as the default highlighting state.



When you import a report, make sure all the metrics, attributes, and global variables used in the report are
defined in the target repository. If you import a report that uses objects not defined in the target repository, you
must import or recreate the objects before you run the report.
You can import cached and on-demand reports. If you chose to export the schedules associated with a report,
Data Analyzer also imports the schedule stored in the cached report. Data Analyzer does not import report
data for cached reports. If you try to view an imported cached report immediately after you import it, an
error appears.

To view the data for the report, you first must run the report. You can run imported cached reports in the
background immediately after you import them. Running reports in the background can be a long process, and
the data may not be available immediately. You can also edit the report and save it before you view it to make
sure that Data Analyzer runs the report before displaying the results.
If you import a report and its corresponding analytic workflow, the XML file contains all workflow reports. If
you choose to overwrite the report, Data Analyzer also overwrites the workflow reports. Data Analyzer does
not import multiple analytic workflows that contain workflow reports with the same names, so ensure that the
report names are unique across the analytic workflows before you export them.
If you import a composite report, the XML file contains all the subreports. You can choose to overwrite the
subreports or composite report if they are already in the repository.

Importing Reports from Public or Personal Folders


You can import reports exported from any folder in the repository. When possible, Data Analyzer imports
reports to the same folder in the target repository. For example, it imports reports from the public folder to the
public folder. If a report of the same name already exists in the same folder, you can overwrite the existing
report. When Data Analyzer imports a report to a repository that does not have the same folder as the
originating repository, Data Analyzer creates a new folder of that name for the report.
When you import a report exported from a personal folder, Data Analyzer creates a new folder within the
public folder called Personal Reports with the date of import and creates a subfolder named for the owner of the
personal folder. For example, if you import a report exported from a personal folder called Mozart, Data
Analyzer creates a public folder called Personal Reports with the import date and a Mozart subfolder within it.
Because Data Analyzer does not import the security settings of personal folders, you are the owner of the new
public folder.

Steps for Importing a Report


To import a report:

1. Click Administration > XML Export/Import > Import Reports.


The Import Reports page appears.
2. To validate the XML file against the DTD, select Validate XML against DTD.
3. Click Browse to select an XML file from which to import reports.
4. Click Open.
The name and location of the XML file display on the Import Reports page.
5. Click Import XML.
Data Analyzer displays the reports found in the XML file.

Importing a Report 53
Table 7-10 shows the properties that Data Analyzer displays for the reports:

Table 7-10. Imported Report Properties

Property Description

Name Name of the reports found in the XML file.

Last Modified Date Date when the report was last modified.

Last Modified By User name of the Data Analyzer user who last modified the report.

Path Location of the report in the Public Folders or Personal Folder.

6. To allow all users to have access to the reports, select Publish to Everyone.
To immediately update the data for all the cached reports in the list, select Run Cached Reports after
Import. After you import the reports, Data Analyzer runs the cached reports in the background.
For more information about attaching the imported cached reports to a schedule immediately, see
Attaching Imported Cached Reports to a Time-Based Schedule on page 24 and Attaching Imported
Cached Reports to an Event-Based Schedule on page 35.
7. Click Continue.
If you successfully import the reports, Data Analyzer displays a message that you have successfully
imported them. When necessary, Data Analyzer lists any folders created for the reports. If you import
cached reports, it displays a message that you need to assign the cached reports to a schedule in the target
repository.
If attributes or metrics associated with the report are not defined in the repository, Data Analyzer displays
a list of the undefined objects. If you import the report, you might not be able to run it successfully. To
cancel the import process, click Cancel. Create the required objects in the target repository before
attempting to import the report again.
If reports in the XML file are already defined in the repository, a list of the duplicate reports appears. To
overwrite any of the reports, select Overwrite next to the report name. To overwrite all reports, select
Overwrite at the top of the list.
8. Click Continue.
Data Analyzer imports the definitions of all selected reports.
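Before starting the import, it can help to confirm that the exported file is at least well-formed XML; the Validate XML against DTD option in step 2 performs the full DTD validation inside Data Analyzer. A minimal sketch (the file name Reports.xml and its contents are stand-ins for illustration):

```shell
# Create a stand-in export file; in practice this is the XML file
# exported from the source repository.
cat > Reports.xml <<'XML'
<?xml version="1.0"?>
<reports><report name="Sales Summary"/></reports>
XML
# A truncated or corrupted export fails this well-formedness check.
python3 - Reports.xml <<'PY'
import sys, xml.dom.minidom
xml.dom.minidom.parse(sys.argv[1])   # raises an exception if not well-formed
PY
ok=$?
```

A non-zero exit status suggests regenerating the export before attempting the import.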

Importing a Global Variable


You can import global variables that are not defined in the target repository. If the XML file contains global
variables already in the repository, you can cancel the process. If you continue the import process, Data
Analyzer imports only the global variables that are not in the target repository.

To import a global variable:

1. Click Administration > XML Export/Import > Import Global Variables.


The Import Global Variables page appears.
2. To validate the XML file against the DTD, select Validate XML against DTD.
3. Click Browse to select an XML file from which to import global variables.
4. Click Open.
The name and location of the XML file display on the Import Global Variables page.
5. Click Import XML.

54 Chapter 7: Importing Objects to the Repository


Data Analyzer displays the global variables found in the XML file.
Table 7-11 shows the information that Data Analyzer displays for the global variables:

Table 7-11. Imported Global Variable Description

Property Description

Name Name of the global variable found in the XML file.

Value Value of the global variable.

6. Click Continue.
Data Analyzer does not import global variables whose names exist in the repository, even if the values are
different.
If the XML file includes global variables already in the repository, Data Analyzer displays a warning. If you
continue the import process, Data Analyzer imports only the variables that are not in the repository. To
continue the import process, click Continue.

Importing a Dashboard
Dashboards display links to reports, shared documents, and indicators. When you import a dashboard from an
XML file, Data Analyzer imports the following objects associated with the dashboard:
- Reports
- Indicators
- Shared documents
- Dashboard filters
- Discussion comments
- Feedback
Data Analyzer does not import the following objects associated with the dashboard:
- Access permissions
- Attributes and metrics in the report
- Real-time objects
Dashboards are associated with the folder hierarchy. When you import a dashboard, Data Analyzer stores the
imported dashboard in the following manner:
- Dashboards exported from a public folder. Data Analyzer imports the dashboards to the corresponding
public folder in the target repository. When Data Analyzer imports a dashboard to a repository that does not
have the same folder as the originating repository, Data Analyzer creates a new folder of that name for the
dashboard.
- Dashboards exported from a personal folder. Data Analyzer imports the dashboards to a new folder under
Public Folders.
- Personal dashboard. Data Analyzer imports a personal dashboard to the Public Folders folder.
- Dashboards exported from an earlier version of Data Analyzer. Data Analyzer imports the dashboards to
the Public Folders > Dashboards folder. If the Dashboards folder already exists at the time of import, Data
Analyzer creates a new folder with a numbered suffix (for example, Dashboards_1 or Dashboards_2).
When you import a dashboard, Data Analyzer imports all indicators for the originating report and workflow
reports in a workflow. However, indicators for workflow reports do not display on the dashboard after you
import it. You must add those indicators to the dashboard manually.

If an object exists in the repository, Data Analyzer provides an option to overwrite the object.
When you import a dashboard, make sure all the metrics and attributes used in reports associated with the
dashboard are defined in the target repository. If the attributes or metrics in a report associated with the
dashboard do not exist, the report does not display on the imported dashboard.
Data Analyzer does not automatically display imported dashboards in your subscription list on the View tab.
You must manually subscribe to imported dashboards to display them in the Subscription menu.

To import a dashboard:

1. Click Administration > XML Export/Import > Import Dashboards.


The Import Dashboards page appears.
2. To validate the XML file against the DTD, select Validate XML against DTD.
3. Click Browse to select an XML file from which to import dashboards.
4. Click Open.
The name and location of the XML file display on the Import Dashboards page.
5. Click Import XML.
Data Analyzer displays the list of dashboards found in the XML file.
Table 7-12 shows the information that Data Analyzer displays for the dashboards:

Table 7-12. Imported Dashboard Information

Property Description

Name Name of the dashboard found in the XML file.

Last Modified Date Date when the dashboard was last modified.

Last Modified By User name of the Data Analyzer user who last modified the dashboard.

6. Click Continue.
Data Analyzer displays a list of the metrics and attributes in the reports associated with the dashboard that
are not in the repository.
Data Analyzer does not import the attributes and metrics in the reports associated with the dashboard. If
the attributes or metrics in a report associated with the dashboard do not exist, the report does not display
on the imported dashboard.
To cancel the import process, click Cancel.
7. To continue the import process, click Apply.
Data Analyzer displays a list of the dashboards, reports, and shared documents already defined in the
repository.
To overwrite a dashboard, report, or shared document, select Overwrite next to the item name. To
overwrite all dashboards, reports, or shared documents, select Overwrite at the top of the list.
8. Click Apply.
Data Analyzer imports the definitions of all selected dashboards and the objects associated with the
dashboard.



Importing a Security Profile
A security profile consists of data restrictions and access permissions for objects in the Schema Directory,
including folders, attributes, and metrics. Data Analyzer keeps a security profile for each user or group in the
repository.
When you import a security profile from an XML file, you must first select the user or group to which you want
to assign the security profile. You can assign the same security profile to more than one user or group.
When you import a security profile and associate it with a user or group, you can either overwrite the current
security profile or add to it. When you overwrite a security profile, Data Analyzer assigns the user or group only
the data restrictions and access permissions found in the new security profile. Data Analyzer removes the old
restrictions associated with the user or group.
When you append a security profile, Data Analyzer appends new data restrictions to the old restrictions but
overwrites old access permissions with the new access permissions. When a user or group has a data restriction
and the imported security profile has a data restriction for the same fact table or schema and associated
attribute, Data Analyzer joins the restrictions using the OR operator.
For example, you import a security profile with a data restriction for the Sales fact table that limits data to the
United States. The Sales group has an existing data restriction for the Sales fact table. If you overwrite existing
security profiles, the Sales group restriction changes to show only data related to the United States. If you
append the profile, the Sales group data restriction becomes the imported restriction joined to the existing
restriction with the OR operator.
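The difference between overwriting and appending can be sketched with hypothetical region restrictions (the condition strings below are illustrative placeholders, not Data Analyzer syntax):

```shell
existing="REGION_NAME IN ('Europe')"   # hypothetical restriction already on the Sales group
imported="REGION_NAME IN ('USA')"      # hypothetical restriction in the imported profile
overwrite="$imported"                  # overwrite keeps only the imported restriction
append="($existing) OR ($imported)"    # append joins old and new with OR
echo "overwrite: $overwrite"
echo "append:    $append"
```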

Importing a User Security Profile


You can import a user security profile and associate it with one or more users.

To import a user security profile:

1. Click Administration > XML Export/Import > Import Security Profiles.


2. Click Import to Users.
The Import Security Profile page displays all users in the repository.
3. Click Overwrite to replace existing security profiles with the imported security profile.
Or, click Append to add the imported security profile to existing security profiles.
4. Select the users you want to associate with the security profile.
To associate the security profiles with all displayed users, select the check box under Users at the top of the
list. To associate the security profile with all users in the repository, select Import To All.
5. Click Continue.
The Import Security Profiles page appears.
6. To validate the XML file against the DTD, select Validate XML against DTD.
7. Click Browse to select an XML file from which to import a security profile.
8. Click Open.
The name and location of the XML file display on the Import Security Profiles page.
9. Click Import XML.
The Import Security Profiles window displays the access permissions and data restrictions for the security
profile.



Table 7-13 shows the information that Data Analyzer displays for the restricted objects:

Table 7-13. Imported Security Profile: Restricted Objects

Property Description

Object Name Indicates the Schema Directory path of the restricted schema object if the
restricted object is a folder. Indicates the fact or dimension table and attribute
name if the object is an attribute. Indicates the fact table and metric name if the
object is a metric.

Type Indicates whether the schema object is a folder, attribute, or metric.

Table 7-14 shows the information that Data Analyzer displays for the data restrictions:

Table 7-14. Imported Security Profile: Data Restrictions

Property Description

Schema Table Name Name of the restricted table found in the security profile.

Security Condition Description of the data access restrictions for the table.

10. Click Continue.


Data Analyzer displays a list of the objects in the security profile that are not in the repository. To cancel
the import process, click Cancel.
11. To continue the import process, click Continue.
Data Analyzer imports the security profile and associates it with all selected users. It imports access
permissions and data restrictions only for objects defined in the repository.

Importing a Group Security Profile


You can import a group security profile and associate it with one or more groups.

To import a group security profile:

1. Click Administration > XML Export/Import > Import Security Profile.


2. Click Import to Groups.
The Import Security Profile page displays all groups in the repository.
3. Click Overwrite to replace existing security profiles with the imported security profile. Click Append to
add the imported security profile to existing security profiles.
4. Select the groups you want to associate with the security profile.
To associate the security profiles with all displayed groups, select the check box under Groups at the top of
the list. To associate the security profile with all groups in the repository, select Import To All.
5. Click Continue.
The Import Security Profile page appears.
6. To validate the XML file against the DTD, select Validate XML against DTD.
7. Click Browse to select an XML file from which to import a security profile.
8. Click Open.
The name and location of the XML file display on the Import Security Profile page.
9. Click Import XML.
The list of access permissions and data restrictions that make up the security profile appears.
10. Click Continue.



Data Analyzer displays a list of the objects in the security profile that are not in the repository. To cancel
the import process, click Cancel.
11. To continue the import process, click Continue.
Data Analyzer imports the security profile and associates it with all selected groups. It imports access
permissions and data restrictions only for objects defined in the repository.

Importing a Schedule
You can import a time-based or event-based schedule from an XML file. When you import a schedule, Data
Analyzer does not attach the schedule to any reports.
When you import a schedule from an XML file, you do not import the task history or schedule history.

To import a schedule:

1. Click Administration > XML Export/Import > Import Schedules.


The Import Schedules page appears.
2. To validate the XML file against the DTD, select Validate XML against DTD.
3. Click Browse to select an XML file from which to import a schedule.
4. Click Open.
The name and location of the XML file display on the Import Schedules page.
5. Click Import XML.
The list of objects found in the XML file appears.
Table 7-15 shows the information that Data Analyzer displays for the schedules found in the XML file:

Table 7-15. Imported Schedule Information

Property Description

Name Name of the schedule found in the XML file.

Last Modified Date Date when the schedule was last modified.

Last Modified By User name of the person who last modified the schedule.

6. Click Continue.
If the schedules in the XML file are already defined in the repository, a list of the duplicate schedules
appears.
To overwrite a schedule, click the Overwrite check box next to the schedule. To overwrite all schedules,
click the Overwrite check box at the top of the list.
7. Click Continue.
Data Analyzer imports the schedules. You can then attach reports to the imported schedule.

Troubleshooting
When I import my schemas into Data Analyzer, the import transaction times out. Is there a way to raise the
transaction time out period?
The default transaction time out for Data Analyzer is 3600 seconds (1 hour). If you are importing large
amounts of data from XML and the transaction time is not enough, you can change the default transaction time
out value. To change the default transaction time out for Data Analyzer, edit the value of the
import.transaction.timeout.seconds property in the DataAnalyzer.properties file. For more information about
editing the DataAnalyzer.properties file, see Configuration Files on page 125.
After you change this value, you must restart the application server. You can now run large import processes
without timing out.
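For example, the property edit might look like the following sketch. A sample file is created here for illustration; edit your actual DataAnalyzer.properties and restart the application server afterward. The new value of 7200 seconds (2 hours) is an arbitrary example.

```shell
# Stand-in DataAnalyzer.properties with the default one-hour timeout.
cat > DataAnalyzer.properties <<'EOF'
import.transaction.timeout.seconds=3600
EOF
# Raise the import transaction timeout to two hours; .bak keeps a backup.
sed -i.bak \
  's/^import\.transaction\.timeout\.seconds=.*/import.transaction.timeout.seconds=7200/' \
  DataAnalyzer.properties
grep '^import.transaction.timeout.seconds=' DataAnalyzer.properties
```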

I have an IBM DB2 8.x repository. When I import large XML files, Data Analyzer generates different errors. How can I
import large XML files?
The Data Analyzer installer installs a JDBC driver for IBM DB2 8.x. If you use this driver to connect to a DB2
8.x repository database, Data Analyzer might display error messages when you import large XML files. You can
modify the settings of the application server, the database, or the JDBC driver to solve the problem. You might
need to contact your database system administrator to change some of these settings.
Depending on the error that Data Analyzer generates, you might want to modify the following parameters:
- DynamicSections value of the JDBC driver
- Page size of the temporary table space
- Heap size for the application

Increasing the DynamicSections Value


Data Analyzer might display an error message when you import large XML files indicating that no more
statements are available and that the package must be recreated with a larger dynamicSections value.

The error occurs when the default value of the DynamicSections property of the JDBC driver is too small to
handle large XML imports. The default value of the DynamicSections connection property is 200. You must
increase it to at least 500.
Use the DataDirect Connect for JDBC utility to increase the default value of the DynamicSections connection
property and recreate the JDBC driver package. Download the utility from the Product Downloads page of
DataDirect Technologies web site.

To increase the value of the DynamicSections property:

1. On the Product Downloads page, click the DataDirect Connect for JDBC Any Java Platform link and
complete the registration information to download the file.
The name of the download file is connectjdbc.jar.
2. Extract the contents of the connectjdbc.jar file in a temporary directory and install the DataDirect Connect
for JDBC utility.
Follow the instructions in the DataDirect Connect for JDBC Installation Guide.
3. On the command line, run the following file extracted from the connectjdbc.jar file:
Windows: Installer.bat
UNIX: Installer.sh

60 Chapter 7: Importing Objects to the Repository


4. Enter the license key for the DataDirect Connect for JDBC utility and click Add.

5. Click Next twice and then click Install.


6. Click Finish to complete the installation.
The installation program for the DataDirect Connect for JDBC utility creates the testforjdbc folder in the
directory where you extracted the connectjdbc.jar file.
7. In the testforjdbc folder, run the Test for JDBC Tool:
Windows: testforjdbc.bat
UNIX: testforjdbc.sh
8. On the Test for JDBC Tool window, click Press Here to Continue.
9. Click Connection > Connect to DB.
10. In the Database field, enter the connection information for the repository database in the following format:
jdbc:datadirect:db2://&lt;ServerName&gt;:&lt;PortNumber&gt;;databaseName=&lt;DatabaseName&gt;
ServerName is the name of the machine hosting the repository database. DatabaseName is the name of the
repository database.
11. In the User Name and Password fields, enter the user name and password you use to connect to the
repository database from Data Analyzer.
12. Click Connect, and then close the window.
13. Restart the application server.
If you continue getting the same error message when you import large XML files, you can run the Test for
JDBC Tool again and increase the value of DynamicSections to 750 or 1000.
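As an alternative to recreating the package interactively, some DataDirect driver versions accept the package settings as connection properties appended to the connection URL. A hedged sketch of such a URL is shown below; verify the property names and availability against the documentation for your driver version before relying on them:

```
jdbc:datadirect:db2://<ServerName>:<PortNumber>;databaseName=<DatabaseName>;createDefaultPackage=TRUE;replacePackage=TRUE;dynamicSections=500
```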

Modifying the Page Size of the Temporary Table Space


Data Analyzer might display an error message when you import large XML files indicating that a system
temporary table space with a sufficient page size does not exist.

This problem occurs when the row length or number of columns of the system temporary table exceeds the
limit of the largest temporary table space in the database.
To resolve the error, create a new system temporary table space with the page size of 32KB. For more
information, see the IBM DB2 documentation.

Increasing Heap Size for the Application


Data Analyzer might display an error message when you import large XML files indicating that there is not
enough storage available in the database application heap.

This problem occurs when there is not enough storage available in the database application heap to process the
import request.
To resolve the problem, log out of Data Analyzer and stop the application server. On the repository database,
increase the value of the application heap size configuration parameter (APPLHEAPSZ) to 512. Restart the
application server.
For more information, see the IBM DB2 documentation.
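The DB2 commands a database administrator might run for this change are sketched below. The database name MYREPO is a placeholder, and the commands are echoed as a dry run so nothing executes against a live database; remove the echo prefixes to run them for real.

```shell
db="MYREPO"   # placeholder name of the repository database
# Raise the application heap size configuration parameter to 512 pages.
cmd="db2 update db cfg for $db using APPLHEAPSZ 512"
echo "db2 connect to $db"
echo "$cmd"
echo "db2 terminate"
```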

CHAPTER 8

Using the Import Export Utility


This chapter includes the following topics:
- Overview, 63
- Running the Import Export Utility, 64
- Error Messages, 67
- Troubleshooting, 68

Overview
The Import Export utility lets you import and export Data Analyzer repository objects from the command line.
Use the Import Export utility to migrate repository objects from one repository to another. For example, you
can use the utility to quickly migrate Data Analyzer repository objects from a development repository into a
production repository.
You can use the Import Export utility to import objects from Data Analyzer 5.0 repositories or later. You can
also use the utility to archive your repository without using a browser.
When you run the Import Export utility, Data Analyzer imports or exports all objects of a specified type. For
example, you can run the utility to import all reports from an XML file or export all dashboards to an XML file.
You must run the utility multiple times to import or export different types of objects.
Use the utility to import or export the security profile of an individual user or group. You cannot use the utility
to import or export other individual objects. For example, you cannot use the utility to export a specific user or
report to an XML file.
To import or export individual objects, use the Data Analyzer Administration tab. You can also use the Data
Analyzer Administration tab to import or export all objects of a specified type. When you use the Import
Export utility, the same rules apply as when you import or export from the Data Analyzer Administration tab.
For example, with either the Import Export utility or the Administration tab, you can import only those global
variables that do not already exist in the repository.
If Data Analyzer is installed with the LDAP authentication method, you cannot use the Import Export utility to
import users, groups, or roles. With the LDAP authentication method, Data Analyzer does not store user
passwords in the Data Analyzer repository. Data Analyzer authenticates the passwords directly in the LDAP
directory.

Running the Import Export Utility
Before you run the Import Export utility to import or export repository objects, you must meet the following
requirements:
- To run the utility, you must have the System Administrator role or the Export/Import XML Files privilege.
- To import or export users, groups, or roles, you must also have the Manage User Access privilege.
- Data Analyzer must be running.
You can import Data Analyzer objects from XML files that were created when you exported repository objects
from Data Analyzer. You can use files exported from Data Analyzer 5.0 or later.
The default transaction time out for Data Analyzer is 3,600 seconds (1 hour). If you are importing large
amounts of data from XML files and the transaction time is not enough, you can change the default transaction
time out value. To change the default transaction time out for Data Analyzer, edit the value of the
import.transaction.timeout.seconds property in DataAnalyzer.properties. After you change this value, you must
restart the application server.
When you run the Import Export utility, you specify options and arguments to import or export different types
of objects. Specify an option by entering a hyphen (-) followed by a letter. The first word after the option letter
is the argument.
To specify the options and arguments, use the following rules:
- Specify the options in any order.
- Utility name, options, and argument names are case sensitive.
- If the option requires an argument, the argument must follow the option letter.
- If any argument contains more than one word, enclose the argument in double quotes.
To run the utility on Windows, open a command line window. On UNIX, run the utility as a shell command.
Note: Back up the target repository before you import repository objects into it. You can back up a Data
Analyzer repository with the Repository Backup utility.

To run the Import Export utility:

1. Go to the Data Analyzer utilities directory.


The default directory is <PowerCenter_InstallationDirectory>/DataAnalyzer/import-exportutil/
2. Run the utility with the following format:
Windows:
ImportExport [-option_1] argument_1 [-option_2] argument_2 ...
UNIX:
ImportExport.sh [-option_1] argument_1 [-option_2] argument_2 ...

Table 8-1 lists the options and arguments you can specify:

Table 8-1. Options and Arguments for the Import Export Utility

Option Argument Description

-i repository object type Import a repository object type. For more information about
repository object types, see Table 8-2 on page 66.
Use the -i or -e option, but not both.

-e repository object type Export a repository object type. For more information about
repository object types, see Table 8-2 on page 66.
Use the -i or -e option, but not both.



Table 8-1. Options and Arguments for the Import Export Utility

Option Argument Description

-w No argument Import only. Instructs the Import Export utility to overwrite


existing repository objects of the same name. If you do not
specify this option and if a repository object with the same name
already exists, the utility exits without completing the operation.
If you do not use the -w option when importing a security
profile, the imported security profile is appended to the
existing security profile of the user or group.
If you use this option when exporting repository objects, the
utility displays an error message.

-f XML file name Name of the XML file to import from or export to. The XML file
must follow the naming conventions for the operating system
where you run the utility.

You can specify a path for the XML file.


If you specify a path for the XML file:
- When you import a repository object type, the Import Export
utility looks for the XML file in the path you specify.
- When you export an object type, the utility saves the XML file
in the path you specify.
For example, to have the utility save the Users.xml file in the c:/PA
directory, enter the following command:
ImportExport -e user -f c:/PA/Users.xml -u <user name> -p <password> -l <Data Analyzer URL>

If you do not specify a path for the XML file:


- When you import a repository object type, the Import Export
utility looks for the XML file in the directory where you run the
utility.
- When you export an object type, the utility saves the XML file
in the directory where you run the utility.
For example, when you enter the following command, the utility
places Users.xml in the directory where you run the utility:
ImportExport -e user -f Users.xml -u <user name> -p <password> -l <Data Analyzer URL>

-u user name Data Analyzer user name.

-p password Password for the Data Analyzer user name.

-l url URL for accessing Data Analyzer. Contact the system


administrator for the URL. The Data Analyzer URL has the
following format:
http://<host_name>:<port_number>/<ReportingServiceName>

ReportingServiceName is the name of the Reporting Service


that runs the Data Analyzer instance. For example,
PowerCenter runs on a machine with hostname fish.ocean.com
and has a Reporting Service named IASReports with port
number 18080. Use the following URL for Data Analyzer:
http://fish.ocean.com:18080/IASReports

-h No argument Displays a list of all options and their descriptions, and a list of
valid repository objects.

-n user name or group Use to import or export the security profile of a user or group.
name For more information, see Table 8-2 on page 66.



Table 8-2 lists the repository object types you can import or export using the Import Export utility and an
example for each. Enter the repository object type as listed below:

Table 8-2. Valid Repository Object Types

schema - Schemas. To import schemas from the PASchemas.xml file into the repository, use the following
command:
ImportExport -i schema -f PASchemas.xml -u <user name> -p <password> -l <Data Analyzer URL>

timedim - Time dimension tables. To import time dimension tables from the TD.xml file into the repository,
use the following command:
ImportExport -i timedim -f TD.xml -u <user name> -p <password> -l <Data Analyzer URL>

report - Reports. To import reports from the Reports.xml file into the repository, use the following command:
ImportExport -i report -f Reports.xml -u <user name> -p <password> -l <Data Analyzer URL>

variable - Global variables. You can import global variables that do not already exist in the repository. To
export global variables to the GV.xml file, use the following command:
ImportExport -e variable -f GV.xml -u <user name> -p <password> -l <Data Analyzer URL>

dashboard - Dashboards. To export dashboards to the Dash.xml file, use the following command:
ImportExport -e dashboard -f Dash.xml -u <user name> -p <password> -l <Data Analyzer URL>

usersecurity - Security profile of a user. You must specify the -n <user name> security profile option. To
export the security profile of user jdoe to the JDsecurity.xml file, use the following command:
ImportExport -e usersecurity -n jdoe -f JDsecurity.xml -u <user name> -p <password> -l <Data Analyzer URL>

groupsecurity - Security profile of a group. You must specify the -n <group name> security profile option. To
export the security profile of group Managers to the Profiles.xml file, use the following command:
ImportExport -e groupsecurity -n Managers -f Profiles.xml -u <user name> -p <password> -l <Data Analyzer URL>

schedule - Schedules. To export all schedules to the Schedules.xml file, use the following command:
ImportExport -e schedule -f Schedules.xml -u <user name> -p <password> -l <Data Analyzer URL>

The Import Export utility runs according to the specified options. If the utility successfully completes the
requested operation, a message indicates that the process is successful. If the utility fails to complete the
requested operation, an error message displays.
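Because the utility handles one object type per run, a migration typically means several invocations. The sketch below prints one export command per object type as a dry run; the user name, password, and output file names are placeholders, and the URL is the example from this chapter. Remove the echo to execute.

```shell
url="http://fish.ocean.com:18080/IASReports"   # example URL from this chapter
# Build one export command per exportable object type (dry run).
cmds=$(for type in schema timedim report variable dashboard schedule; do
  echo "ImportExport -e $type -f ${type}s.xml -u admin -p secret -l $url"
done)
echo "$cmds"
```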



Error Messages
If the Import Export utility fails to complete the requested operation, it displays an error message. The error
message indicates why the requested operation failed. If the requested operation fails because a required option
or argument is missing or not specified correctly, the Import Export utility also displays a list of all options and
their descriptions, and a list of valid repository objects.
The Import Export utility can display the following error messages:

Unknown error.
Cause: Utility failed to run for unknown reasons.
Action: Contact the system administrator or Informatica Global Customer Support.

Incorrect number of command-line options.


Cause: You omitted an option or included more options than needed.
Action: Check the syntax and spelling.

Unknown option.
Cause: You entered an incorrect option letter. For example, you entered -x or -E to export a file.
Action: Check the validity and case sensitivity of the option letters. Check the XML file name.

Illegal option value.


Cause: You entered an incorrect argument for an option letter.
Action: Check the spelling of the option values you entered.

The import file does not exist or cannot be read.


Cause: The XML file to be imported does not exist or does not contain valid XML data or the utility
cannot access the file.
Action: Check that a valid XML file, with the specified name, exists in the specified directory.

Invalid username or password.


Cause: The user does not exist in Data Analyzer, or the password is incorrect.
Action: Check that the user exists in Data Analyzer and that the password is correct.

The user does not have privileges to import/export.


Cause: The user does not have the Export/Import XML Files privilege or the Manage User Access
privilege to import or export users, groups, or roles.
Action: Assign the appropriate privileges to the user.

The export file cannot be written.


Cause: The directory where you want to place the XML file is read only or has run out of hard disk
space.
Action: Assign write permission to the user for the directory where you want to place the XML file. Or,
make sure there is enough hard disk space.

The import file contains a different repository object type than the repository object type given for the
option -i.
Cause: The XML file specified for the import (-i) option does not contain the correct object type.
Action: Use the correct object type or a different XML file.

A communication error has occurred with Data Analyzer. The root cause is: <error message>.
Cause: See the root cause message.
Action: The action depends on the root cause. Check that the URL is correct and try to run the utility
again. Check that Data Analyzer is running and try to run the utility again. If the error still occurs,
contact Informatica Global Customer Support.

The user or group does not exist.


Cause: The user name or group name that you typed for importing or exporting a security profile does
not exist.
Action: Check the spelling of the user name or group name.

An export file with the provided filename already exists.


Cause: An XML file of the same name already exists in the specified path.
Action: Delete the XML file before you enter the command.

The Data Analyzer session is invalid.


Cause: The Data Analyzer session has timed out.
Action: Run the utility again.

Global variables cannot be overwritten.


Cause: You cannot import global variables if they already exist in the repository. If the XML file
includes global variables already in the repository, the Import Export utility displays this error
message.
Action: If you want to import global variables already in the repository, first delete them from Data
Analyzer, and then run the utility.

Import file is empty.


Cause: There is no data in the XML file.
Action: Use a valid XML file.

The configured security realm does not support the import of users, groups and roles.
Cause: Data Analyzer is installed with the LDAP authentication method. You cannot use the Import
Export utility to import users, groups, or roles.
Action: Contact the Data Analyzer system administrator.

Troubleshooting
Importing a Large Number of Reports
If you use the Import Export utility to import a large number of reports (import file size of 16MB or more), the
Java process for the Import Export utility might run out of memory and the utility might display an exception
message. If the Java process for the Import Export utility runs out of memory, increase the memory allocation
for the process. To increase the memory allocation for the Java process, increase the value for the -mx option in
the script file that starts the utility.
Note: Back up the script file before you modify it.



To increase the memory allocation:

1. Locate the Import Export utility script file in the Data Analyzer utilities directory.
The default directory is <PowerCenter_InstallationDirectory>/DataAnalyzer/import-export-util/.
2. Open the script file with a text editor:
Windows: ImportExport.bat
UNIX: ImportExport.sh
3. Locate the -mx option in the Java command that starts the utility.

4. Increase the value for the -mx option from 256 to a higher number depending on the size of the import file.
Tip: Increase the value to 512. If the utility still displays an exception, increase the value to 1024.

5. Save and close the Import Export utility script file.
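As an illustration, the edited line in ImportExport.sh might look like the following sketch. The class name and arguments are placeholders for whatever your script already contains; only the -mx value changes:

```shell
# Illustrative only -- keep the existing command in your script and
# change just the -mx value (256 -> 512, or 1024 for very large files).
# Before: java -mx256m <ImportExport class and arguments>
java -mx512m <ImportExport class and arguments>
```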

Using SSL with the Import Export Utility


To use SSL, Data Analyzer needs a certificate that must be signed by a trusted certificate authority (CA). By
default, the trusted CAs are defined in the cacerts keystore file in the JAVA_HOME/jre/lib/security/ directory.
If Data Analyzer uses a certificate signed by a CA not defined in the default cacerts file or if you have created
your own trusted CA keystore, you must provide the location of the trusted keystore when you run the Import
Export utility. To specify the location of the trusted CAs, add the following parameter to the Import Export
utility script:
-Djavax.net.ssl.trustStore=<TrustedCAKeystore>

If Data Analyzer uses a certificate signed by a CA defined in the default cacerts file, such as Verisign, you do not
need to specify the location of the trusted CA keystore when you run the Import Export utility.
Note: Back up the Import Export script file before you modify it.

To specify the location of the trusted CAs:

1. Locate the Import Export utility script in the Data Analyzer utilities directory.

2. Open the script file with a text editor:


Windows: ImportExport.bat
UNIX: ImportExport.sh
3. Add the trusted CA parameter to the Java command that starts the ImportExport utility:
-Djavax.net.ssl.trustStore=<TrustedCAKeystore>

TrustedCAKeystore is the keystore for the trusted CAs.


4. Save and close the Import Export utility file.
When you run the Import Export utility, make sure that the URL you provide with the -l option starts with
https:// and uses the correct port for the SSL connection.
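If you created your own trusted CA keystore, the standard Java keytool utility can import the CA certificate into it. The file names, alias, and password below are assumptions for illustration:

```shell
# Import the CA certificate into a custom trusted keystore.
# ca_cert.cer, mytrust.jks, my_ca, and mypassword are example values.
keytool -import -trustcacerts -alias my_ca -file ca_cert.cer \
        -keystore mytrust.jks -storepass mypassword

# Then reference the keystore in the Import Export utility script:
#   -Djavax.net.ssl.trustStore=mytrust.jks
```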

CHAPTER 9

Managing System Settings


This chapter includes the following topics:
Overview
Managing Color Schemes and Logos
Managing Logs
Managing LDAP Settings
Managing Delivery Settings
Specifying Contact Information
Viewing System Information
Setting Rules for Queries
Configuring Report Table Scroll Bars
Configuring Report Headers and Footers
Configuring Departments and Categories
Configuring Display Settings for Groups and Users

Overview
You can configure the following administrative settings:
Color schemes, images, and logos. Modify the color schemes, images, and logos of Data Analyzer to match
those of your organization.
Log files. View Data Analyzer log files for information on user and system activity.
LDAP settings. Register LDAP servers to enable users to access LDAP directory lists from Data Analyzer.
Delivery settings. Register an outbound mail server to allow users to email reports and shared documents,
and receive alerts. You can also configure alert delivery devices.
Contact information. Provide the name, email address, and phone number of the Data Analyzer system
administrator. Users might find the administrator contact information useful in the event of a system
problem.
System information. View the configuration information of the machine hosting Data Analyzer.
Query governing. Define upper limits on query time, report processing time, and number of table rows
displayed.
Report settings. Determine whether scroll bars appear in report tables.

Report header and footer. Create the headers and footers printed in Data Analyzer reports.
Metadata configuration. Create department and category names for your organization. You can associate
repository objects with a department or category to help you organize the objects. When you associate
repository objects with a department or category, you can search for these objects by department or category
on the Find tab.
Display Settings. Control display settings for users and groups.

Managing Color Schemes and Logos


A color scheme defines the look and feel of Data Analyzer. You can edit existing color schemes or create new
color schemes, using your own images and colors.
Data Analyzer references the image and logo files in the Data Analyzer images directory on the web server
associated with the application server. Use any HTML hexadecimal color code to define colors.
You can set a default color scheme for all users and groups. You can also assign users and groups to specific color
schemes. By default, the Informatica color scheme is the default color scheme for all users and groups in Data
Analyzer.
The color schemes and image files used in Data Analyzer are stored in the EAR directory. You can modify or
add color schemes and images in the EAR directory to customize the Data Analyzer color schemes and images
for the organization.

Using a Predefined Color Scheme


Data Analyzer provides the following predefined color schemes that you can use or modify:
Informatica color scheme. This is the default Data Analyzer color scheme. The EAR directory that
contains the images for this color scheme is the default image directory for Data Analyzer.
Betton Books color scheme. An alternative predefined color scheme. The images for the Betton Books
color scheme are in a separate folder in the EAR directory.

Adding a Logo to a Predefined Color Scheme


To use a predefined color scheme with your own logo or login page image, complete the following steps:
1. Copy the logo or login image file to the predefined images folder.
2. Edit the predefined color scheme and change the file name of the Logo Image URL field or the Login Page
Image URL to the name of your image file.
3. Enter the following information in the predefined color scheme settings:
Images Directory. Predefined color scheme folder name. For the Informatica color scheme, leave the
Images Directory field blank to use the default image directory.
Logo Image URL. Enter the name of the logo image file you want to use.
Login Page Image URL. Enter the name of the login page image file that you want to use.
All file names are case sensitive.



You can also enter a URL for the logo and login image files. For example, if the host name of the web server
where you have the logo file is http://monet.PaintersInc.com, port 16080, enter the following URL in the Logo
Image URL field:
http://monet.PaintersInc.com:16080/<logo file name>

The URL can point to a logo file in the Data Analyzer machine or in another web server. If you specify a URL,
use the forward slash (/) as a separator.
Data Analyzer uses all the colors and images of the selected predefined color scheme with your logo or login
page image. If you modify a predefined color scheme, you might lose your changes when you upgrade to future
versions of Data Analyzer.

Editing a Predefined Color Scheme


You can edit the colors and image directories for predefined color schemes and preview the changes.

To edit a predefined color scheme:

1. Click Administration > System Management > Color Schemes and Logos.
The Color Schemes and Logos page displays the list of available color schemes.
2. To edit the settings of a color scheme, click the name of the color scheme.
The Color Scheme page displays the settings of the color scheme. It also displays the directory for the
images and the URL for the background, login, and logo image files.
3. Optionally, enter file and directory information for color scheme images:
Images Directory. Name of the color scheme directory where you plan to store the color and image files.
If blank, Data Analyzer looks for the images in the default image directory.
Background Image URL. Name of a background image file in the color scheme directory or the URL to
a background image on a web server.
Logo Image URL. Name of a logo file image in the color scheme directory or the URL to a logo image
on a web server.
Login Page Image URL. Name of the login page image file in the color scheme directory or the URL to
a login image on a web server. To display the login page properly, the width of your login page image
must be approximately 1600 pixels, or the width of your monitor setting. The height of your login page
image must be approximately 240 pixels.
All file names are case sensitive. If you specify a URL, use the forward slash (/) as a separator.
4. Enter hexadecimal color codes to represent the colors you want to use.
The color scheme uses the hexadecimal color codes for each display item. For more information about
hexadecimal color codes, see HTML Hexadecimal Color Codes on page 117.
Table 9-1 shows the display items you can modify in the Color Scheme page:

Table 9-1. Display Items in the Color Scheme Page

Display Item Description

Background Background color of Data Analyzer.

Page Header Page header of Data Analyzer.

Primary Report heading on the Analyze tab.

Secondary Report sub-heading on the Analyze tab.

Heading Section heading such as the container heading on the View tab.

Sub-Heading Section sub-heading such as the container sub-heading on the View tab.


Section Background color for sections such as forms on the Administration tab, pop-up windows, and
tabs with drop-down lists.

Odd Table Row Odd rows in a list.

Even Table Row Even rows in a list.

Selected Rows Rows you select in the report table or on tabs such as the Find tab.

Primary Navigation Tab Colors Alerts, View, Find, Analyze, Administration, Create, and Manage
Account tabs.

Secondary Navigation Colors Menu items on the Administration tab, including Schema Design, XML
Export/Import, System Management, Real-time Configuration,
Scheduling, and Access Management.

Button Colors Buttons in Data Analyzer.

Tab Colors Tabs under the Primary Navigation tab. Tabs include items such as the
Define Report Properties tab in Step 5 of the Create Report wizard
and the toolbar on the Analyze tab.
Use the same color as the Section color for the Selected field in Tab Colors so
that the color flows evenly for each tab under the Primary Navigation tab.

5. To preview the choices, click Preview.


The Color Scheme Preview window displays an example of the way Data Analyzer will appear with the
color scheme.
6. Click Close to close the Color Scheme Preview window.
7. Click OK to save your changes.

Creating a Color Scheme


You can create a Data Analyzer color scheme. When you create a color scheme, you can use your own images
and logos. Make sure Data Analyzer can access the images to use with the color scheme.
To create a color scheme, complete the following steps:
1. Create a folder for the images and make sure it contains the new images.
2. Create a new color scheme in Data Analyzer and use the new folder as the Images Directory.

Step 1. Create a New Color Scheme Folder


Create a folder in the color schemes directory and copy the image files you want to use to this folder. The name
of the color scheme folder can be up to 10 characters.
To create a new color scheme folder, navigate to the EAR directory. Add the directory and files for the new
color scheme under the default image directory.

To create a new color scheme folder:

1. Create a folder for the new color scheme under the default image directory.

2. Create a folder for the images and logo.


3. Copy your logo and image files into the new folder.



You must have image files for all buttons and icons that display in Data Analyzer.
Because Data Analyzer references the image files by name, the image files for your color scheme must
have the same names and format as the image files for the predefined color schemes. The background
and logo image files can have file names that you specify, in GIF or JPG format.
After you set up the folder for the images to use in a new color scheme, you can create the color scheme in Data
Analyzer and use the new color scheme directory.

Step 2. Create a New Color Scheme in Data Analyzer


On the Color Schemes page, set the colors you want to use for the color scheme and provide the new folder
name for the images.
The new color scheme folder must exist in the EAR directory for Data Analyzer to access it.

To create a new color scheme in Data Analyzer:

1. Click Administration > System Management > Color Schemes and Logos.
The Color Schemes and Logos page displays the list of available color schemes.
2. Click Add.
The Color Scheme page appears.
3. Enter the name and description of the new color scheme.
4. In the Images Directory field, enter the name of the color scheme folder you created.
5. In the Background Image URL field, enter the file name of the background image you want to use.
All file names are case sensitive. Make sure the image file is saved in the color scheme folder you created
earlier.
6. In the Logo Image URL field, enter the file name of the logo image to use.
7. In the Login Page Image URL field, enter the file name of the login page image to use.
8. Enter the hexadecimal codes for the colors you want to use in the new color scheme.
If you do not set up new colors for the color scheme, Data Analyzer uses a default set of colors that may not
match the colors of your image files. For more information about display items on the Color Scheme page,
see Table 9-1 on page 73. For more information about hexadecimal color codes, see HTML Hexadecimal
Color Codes on page 117.
9. Click Preview to preview the new color scheme colors.
10. Click OK to save the new color scheme.

Selecting a Default Color Scheme


You can select a default color scheme for Data Analyzer. If you do not specify a color scheme for a user or
group, Data Analyzer uses the Informatica color scheme.

To select a default color scheme:

1. Click Administration > System Management > Color Schemes and Logos.
The Color Schemes and Logos page appears.
2. To set the default color scheme for Data Analyzer, select Default next to the color scheme name.
3. Click Apply.
Data Analyzer uses the selected color scheme as the default for the repository.



Assigning a Color Scheme
You can assign color schemes to users and groups. Assign specific color schemes when you want a user or group
to use a color scheme other than the default color scheme.
When you assign a user and its group to different color schemes, the user color scheme takes precedence over
the group color scheme. When a user belongs to more than one group, the color scheme for the primary group
takes precedence over the other group color schemes. If the user does not have a primary group, Data Analyzer
uses the default color scheme.
You can assign color schemes to users and groups when you edit the color scheme. You can also assign color
schemes when you edit the user or group on the Access Management page.

To assign a color scheme:

1. Click Administration > System Management > Color Schemes and Logos.
2. Click the name of the color scheme you want to assign.
3. To assign the color scheme to a user or group, click Edit.
The Assign Color Scheme window appears.
4. Use the search options to produce a list of users or groups.
5. In the Query Results area, select the users or groups you want to assign to the color scheme, and click Add.
To assign additional users or groups, repeat steps 3 to 5.
6. Click OK to close the dialog box.
7. Click OK to save the color scheme.

Managing Logs
Data Analyzer provides the following logs to track events and information:
User log. Lists the location and login and logout times for each user.
Activity log. Lists Data Analyzer activity, including the success or failure of the activity, activity type, the
user requesting the activity, the objects used for the activity, and the duration of the request and activity. You
can also configure it to log report queries.
System log. Lists error, warning, informational, and debugging messages.
Global cache log. Lists error, warning, informational, and debugging messages about the size of the Data
Analyzer global cache.
JDBC log. Lists all repository connection activities.

Viewing the User Log


With the user log, you can track user activity in Data Analyzer. Data Analyzer stores the user log entries in the
repository. You can view, clear, and save the user log.
The user log lists the following information:
Login name. The name of the user accessing Data Analyzer.
Remote host. The host name accessing Data Analyzer when available.
Remote address. The IP address accessing Data Analyzer when available.
Login time. The date and time the user logged in based on the machine running the Data Analyzer server.
Logoff time. The date and time the user logged out based on the machine running the Data Analyzer server.



Duration. The difference between login and logout times for each user. If the user has not logged out,
duration displays the length of time the user has been logged into Data Analyzer.
User role. The role of the user. To view the role of the user, hold the pointer over the user name.
To view the user log, click Administration > System Management > User Log.
By default, Data Analyzer displays up to 1,000 rows in the user log. You can change the number of rows by
editing the value of the logging.user.maxRowsToDisplay property in DataAnalyzer.properties. For more
information about editing DataAnalyzer.properties, see Configuration Files on page 125.
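For example, to show 500 rows instead of 1,000, the entry in DataAnalyzer.properties might look like the following sketch (the value is an assumption; use the number of rows you want to display):

```properties
# DataAnalyzer.properties -- maximum rows displayed in the user log
logging.user.maxRowsToDisplay=500
```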
If you sort the user log by a column, Data Analyzer sorts on all user log data, not just the currently displayed
rows.

Saving and Clearing the User Log


You can save the user log to an XML file. You might save a user log before clearing it to keep a record of user
access.
You can clear the Data Analyzer user log. When you clear the user log, Data Analyzer clears all entries except for
users who have logged in during the past 24 hours and have not yet logged off.

To save a user log:

1. Click Administration > System Management > User Log.


2. Click Save, and then follow the prompts to save the log to disk.

To clear the user log:

1. Click Administration > System Management > User Log.


2. Click Clear.
Data Analyzer deletes the log entries from the repository.

Configuring and Viewing the Activity Log


With the activity log, you can track the activity requests for your Data Analyzer server, such as the number of
requests to view or run reports. Data Analyzer stores the activity log entries in the repository. Clear the activity
log on a regular basis to optimize repository performance.
By default, the activity log tracks the following information:
Activity ID. The identification number of the activity.
Request ID. The identification number of the request that the activity belongs to.
User name. The Data Analyzer user requesting the activity.
Source. The source type of the activity request, such as web, API, or scheduler.
Status. The status of the activity, such as Success or Failure.
Activity. The requested activity, such as Execute or Update.
Object name. The name of the object requested.
Object type. The type of object requested, such as report.
DB access. The time in milliseconds Data Analyzer takes to send the activity request to the data warehouse.
Duration. The overall time in milliseconds Data Analyzer takes to perform the request. Use this statistic
to optimize database performance and schedule reports.
Start time. The time the user issued the activity request.
User role. To view the role of the user, hold the pointer over the user name.
SQL. (XML file only.) The SQL statement used to run a report.
Tables. (XML file only.) The tables used in the SQL statement for a report.

To view the activity log, click Administration > System Management > Activity Log.
By default, Data Analyzer displays up to 1,000 rows in the activity log. You can change the number of rows by
editing the value of the logging.activity.maxRowsToDisplay property in the DataAnalyzer.properties file.
If you sort the activity log by a column, Data Analyzer sorts on all activity log data, not just the currently
displayed rows.
You can configure the activity log to provide the query used to perform the activity and the database tables
accessed to complete the activity. This additional information appears in the XML file generated when you save
the activity log.

To configure the activity log:

1. Click Administration > System Management > Log Configuration.


2. Click SQL in the Activity Log area to log queries. To log the tables accessed in the query, select both SQL
and Tables.
Data Analyzer logs the additional details. To view the information, save the activity log to file.

Saving and Clearing the Activity Log


You can save the activity log to an XML file. You might save the activity log to file before you clear it to keep a
record of Data Analyzer activity. You might also save the activity log to view information about the SQL
statements and tables used for reports.
You can clear the activity log of all entries to free space and optimize repository performance. When you clear
the activity log, Data Analyzer clears all entries from the log.

To save an activity log:

1. Click Administration > System Management > Activity Log.


2. Click Save, and then follow the prompts to save the log to disk.

To clear the activity log:

1. Click Administration > System Management > Activity Log.


2. Click Clear.

Configuring the System Log


Data Analyzer generates a system log file named ias.log which logs messages produced by Data Analyzer. You
can view the system log file with any text editor.
You can locate the system log file in the directory specified in the log4j.xml file.

By default, the System log displays error and warning messages. You can choose to display the following
messages in the system log:
Errors
Warnings
Information
Debug

To specify the messages displayed in the system log file:

Click Administration > System Management > Log Configuration.


You can change the name of the log file and the directory where it is saved by editing the log4j.xml file.



To configure the name and location of the system log file:

1. Locate the log4j.xml file for the Data Analyzer instance.
The folder that contains the log4j.xml file is available after you enable the Reporting Service and the
Data Analyzer instance is started.
2. Open the file with a text editor and locate the following line:
<param name="File" value="ias.log"/>

3. Modify the value of the File parameter to specify the name and location for the log file.
If you specify a path, use the forward slash (/) or two backslashes (\\) in the path as the file separator. Data
Analyzer does not support a single backslash as a file separator.
For example, if you want to save the Data Analyzer system logs to a file named mysystem.log in a folder
called Log_Files in the D: drive, modify the File parameter to include the path and file name:
<param name="File" value="D:/Log_Files/mysystem.log"/>

4. Save the file.


Your changes take effect in Data Analyzer within several minutes.
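Putting the example together, the surrounding appender definition in log4j.xml might look like the following sketch. The appender name, class, and layout shown here are assumptions; the File parameter is the value you modify:

```xml
<!-- Sketch of a log4j file appender; only the File parameter matters here. -->
<appender name="IAS_LOG" class="org.apache.log4j.FileAppender">
    <param name="File" value="D:/Log_Files/mysystem.log"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
    </layout>
</appender>
```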

Configuring the JDBC Log


Data Analyzer generates a JDBC log file. You can view the log file with any text editor.
If you installed JBoss Application Server using the PowerCenter installer, you can locate the JDBC log
file in the directory set by the jdbc.log.file property in the DataAnalyzer.properties file.

You can change the name of the file and the directory where it is saved by editing the jdbc.log.file property in
the DataAnalyzer.properties file. You can also determine whether Data Analyzer appends data to the file or
overwrites the existing JDBC log file by editing the jdbc.log.append property in DataAnalyzer.properties.
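For example, the JDBC log entries in DataAnalyzer.properties might look like the following sketch (the path shown is an assumption):

```properties
# DataAnalyzer.properties -- JDBC log settings
# Name and location of the JDBC log file
jdbc.log.file=D:/Log_Files/jdbc.log
# true appends to the existing file, false overwrites it
jdbc.log.append=true
```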

Managing LDAP Settings


Lightweight Directory Access Protocol (LDAP) is a set of protocols for accessing information directories. You
can use LDAP in the following ways:
Authentication. You use PowerCenter LDAP authentication to authenticate the Data Analyzer users and
groups. For more information about LDAP authentication, see the PowerCenter Administrator Guide.
Access LDAP directory contacts. You use the LDAP settings in Data Analyzer to access contacts within the
LDAP directory service when you send email from Data Analyzer.
To access contacts in the LDAP directory service, you can add the LDAP server on the LDAP Settings page.
After you set up the connection to the LDAP directory service, users can email reports and shared documents to
LDAP directory contacts.
When you add an LDAP server, you must provide a value for the BaseDN property. In the BaseDN property,
enter the Base distinguished name entries for your LDAP directory. The Base distinguished name entries define
the type of information that is stored in the LDAP directory. If you do not know the value for BaseDN, contact
your LDAP system administrator.



If you use Microsoft Active Directory as the LDAP directory, you must choose System authentication as the
type of authentication on the LDAP Settings page. You must enter a valid system name and system password for
the LDAP server. Contact your LDAP system administrator for the system name and system password.
The following example lists the values you need to enter on the LDAP Settings page for an LDAP server
running Microsoft Active Directory:
Name: <name of the LDAP server>
URL: ldap://<host name>:<port>
BaseDN: <Base distinguished name entries>
Authentication: System
System Name: <system name for the LDAP server>
System Password: <system password for the LDAP server>

The following example lists the values you need to enter on the LDAP Settings page for an LDAP server
running a directory service other than Microsoft Active Directory:
Name: <name of the LDAP server>
URL: ldap://<host name>:<port>
BaseDN: <Base distinguished name entries>
Authentication: Anonymous

To add an LDAP server:

1. Click Administration > System Management > LDAP Settings.


The LDAP Settings page appears.
2. Click Add.
3. Enter the following information.
Table 9-2 lists the LDAP server settings you can enter:

Table 9-2. LDAP Server Settings

Setting Description

Name Name of the LDAP server you want to configure.

URL URL for the server. Use the following format:
ldap://<host name>:<port>

BaseDN Base distinguished name entry identifies the type of information stored in the
LDAP directory. If you do not know the BaseDN, contact your LDAP system
administrator.

Authentication Authentication method your LDAP server uses. Select Anonymous if the LDAP
server allows anonymous authentication. If your LDAP server requires system
authentication, select System.
Select System if you use Microsoft Active Directory as an LDAP directory.

System Name System name of the LDAP server. Required when using System
authentication.

System Password System password for the LDAP server. Required when using System
authentication.

4. Click OK to save the changes.


To modify the settings of an LDAP server, click the name of the LDAP server on the LDAP Settings page.

Managing Delivery Settings


Delivery settings determine how users can send reports and receive alerts from Data Analyzer. You can
configure the following delivery settings:



Mail server. Allows Data Analyzer users to email reports and shared documents, and receive email alerts.
External URL. Allows users to connect to Data Analyzer from the internet.
SMS/text messaging and mobile carriers. Allows users to register an SMS/Text pager or phone as an alert
delivery device.

Configuring the Mail Server


The mail server provides outbound email access for Data Analyzer and users. You can configure one outbound
mail server at a time. With outbound mail server configured, users can email reports and shared documents.
The mail server you configure must support Simple Mail Transfer Protocol (SMTP). Depending on the mail
server, you might need to create a mail server connector before configuring the mail server.

To configure the mail server:

1. Click Administration > System Management > Delivery Settings.


The Delivery Settings page appears.
2. In the Mail Server field, enter the URL to the outbound mail server.
3. Click Apply.

Configuring the External URL


The external URL links Data Analyzer with your proxy server. Configure an external URL so that users can
access Data Analyzer from the internet. Enter the URL for the proxy server you configured during installation.

To configure the external URL:

1. Click Administration > System Management > Delivery Settings.


The Delivery Settings page appears.
2. In the External URL field, enter the URL for the proxy server.
The URL must begin with http:// or https://.
3. Click Apply.

Configuring SMS/Text Messaging and Mobile Carriers


To allow users to receive one-way SMS/Text message alerts on a phone or pager, you must configure SMS/Text
messaging. To receive SMS/Text message alerts, the users also need to select a mobile carrier. Data Analyzer
configures the following mobile carriers:
ATT
Cingular
Nextel
Sprint
Verizon
You can configure additional mobile carriers by entering connection information for the carriers. For
more information about using an SMS/Text pager or phone as an alert device, see the Data Analyzer
User Guide.

To configure SMS/Text Messaging and mobile carriers:

1. Click Administration > System Management > Delivery Settings.


The Delivery Settings page displays.
2. In the Delivery Settings area, select SMS/Text Messaging.



3. To add a mobile carrier, in the Mobile Carriers task area, enter the name and address for the mobile carrier.
In the address field, enter the domain and extension of the email address associated with your device. If you
do not know the domain and extension, see your wireless carrier documentation.
4. Click Add.
Data Analyzer adds the mobile carrier to the list of mobile carriers.

Specifying Contact Information


When a system problem occurs, users may need to contact the system administrator. You can specify contact
information for the system administrator in the System Management Area.

To specify contact information:

1. Click Administration > System Management > Contact Information.


2. Enter the name, phone number, and email address of the system administrator.
3. Click Apply.

Viewing System Information


On the System Information page, you can view information about Data Analyzer and the machine that hosts it.
The System Information page contains the following sections:
System Information. The System Information section lists the Data Analyzer version and build, repository
version, database server type, database version, driver name, driver version, JDBC connection string, and
user name.
Operating System. The Operating System section displays the operating system, version, and architecture of
the machine hosting Data Analyzer.
Java. The Java section displays the following information about the Java environment on the machine
hosting Data Analyzer:
Application Server. The version of the application server that runs Data Analyzer.
Servlet API. The version of the Java Servlet API.
Java Version. The version of the Java Virtual Machine (JVM).
Vendor. The Java vendor.
Vendor URL. The Java vendor web site.
Home. The home directory of the JVM.
Classpath. A list of the paths and files contained in the Java classpath system variable.

To view system information:

Click Administration > System Management > System Information.



Setting Rules for Queries
You can configure the time limit on each SQL query for a report, the time limit on processing a report, and the
maximum number of rows that each query returns. You can set up these rules for querying at the following
levels:
System
Group
User
Report
When you change the system query governing setting or the query governing setting for a group or user, you
must log out of Data Analyzer and log in again for the new query governing settings to take effect.

Setting Query Rules at the System Level


You can specify the query governing settings for all reports in the repository. These settings apply to all reports,
unless you override them at the group, user, or report level.

To set up system query governing rules:

1. Click Administration > System Management > Query Governing.


The Query Governing page appears.
2. Enter the query governing rules.
Table 9-3 describes the system query governing rules you can enter:

Table 9-3. System Query Governing Settings

Setting Description

Query Time Limit Maximum amount of time for each SQL query. Default is 240 seconds.

Report Processing Time Limit Maximum amount of time allowed for the application server to run the
report. A report may have more than one SQL query. Report processing
time includes the time to run all queries for the report.
Default is 600 seconds.

Row Limit Maximum number of rows SQL returns for each query. If a query returns
more rows than the row limit, Data Analyzer displays a warning message
and drops the excess rows. Default is 20,000 rows.

3. Click Apply.
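The Row Limit behavior described in Table 9-3 can be sketched as follows. This is an illustrative sketch only; the function name and warning text are hypothetical, not part of Data Analyzer:

```python
# Illustrative sketch of the row-limit rule from Table 9-3: rows beyond
# the limit are dropped and a warning is issued. Names are hypothetical.
def apply_row_limit(rows, limit=20000):
    if len(rows) > limit:
        warning = f"Query returned {len(rows)} rows; keeping the first {limit}."
        return rows[:limit], warning
    return rows, None
```

For example, with a limit of 20, a 25-row result is truncated to 20 rows and a warning is returned.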

Setting Query Rules at the Group Level


You can specify query governing settings for all reports belonging to a specific group. Query governing settings
for the group override system query governing settings. If a user belongs to more than one group at the same
level in the group hierarchy, Data Analyzer uses the largest query governing setting from the groups.

To set up group query governing rules:

1. Click Administration > Access Management > Groups.


2. Click Edit next to the group whose properties you want to modify.
3. In the Query Governing section, clear the Use Default Settings option.
When you clear this option, Data Analyzer uses the query governing settings entered on this page. When
this option is selected, Data Analyzer uses the system query governing settings.
4. Enter the query governing settings you want to use.



For more information about each setting, see Table 9-3 on page 83.
5. Click OK.
Data Analyzer saves the group query governing settings.

Setting Query Rules at the User Level


You can specify query governing settings for all reports belonging to a specific user. Query governing settings for
the user override group and system query governing settings.

To set up user query governing rules:

1. Click Administration > Access Management > Users.


2. Click the user whose properties you want to modify.
3. In the Query Governing section, clear the Use Default Settings option.
When you clear this option, Data Analyzer uses the query governing settings entered on this page. When
this option is selected, Data Analyzer uses the query governing settings for the group assigned to the user.
4. Enter the query governing settings you want to use.
For more information about each setting, see Table 9-3 on page 83.
5. Click OK.
Data Analyzer saves the user query governing settings.

Query Governing Rules for Users in Multiple Groups


If you specify query governing settings for a user, Data Analyzer uses the query governing setting when it runs
reports for the user. If you do not specify query governing settings for a user, Data Analyzer uses the query
governing settings for the group that the user belongs to.
If a user belongs to multiple groups, Data Analyzer assigns the user the least restrictive query governing settings
available. Data Analyzer ignores groups with the system default query governing settings.
For example, you have not specifically configured query governing settings for a user. The user belongs to three
groups with the following query governing settings:

Group Row Limit Query Time Limit

Group 1 25 rows 30 seconds

Group 2 Default query governing settings

Group 3 18 rows 120 seconds

Data Analyzer does not consider Group 2 in determining the group query governing settings to use for the user
reports. For the row limit, Data Analyzer uses the setting for Group 1 since it is the least restrictive setting. For
query time limit, Data Analyzer uses the setting for Group 3 since it is the least restrictive setting.
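The resolution rule in this example can be sketched as follows. The function and data layout are illustrative, not a Data Analyzer API; groups that use the system default settings are represented here as None:

```python
# Illustrative sketch: pick the least restrictive (largest) row limit and
# query time limit across a user's groups, ignoring groups that use the
# system default query governing settings (represented as None).
def effective_settings(group_settings):
    configured = [s for s in group_settings if s is not None]
    if not configured:
        return None  # fall back to the system defaults
    row_limit = max(rows for rows, _ in configured)
    time_limit = max(secs for _, secs in configured)
    return row_limit, time_limit
```

For the groups in the example above, effective_settings([(25, 30), None, (18, 120)]) yields a 25-row limit and a 120-second query time limit, matching the result described.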

Setting Query Rules at the Report Level


You can specify query governing settings for a specific report. Query governing settings for a specific report
override group, user, and system query governing settings.

To set up report query governing rules:

1. Click the Find tab.


2. Click the report whose properties you want to modify.
3. Click Edit.



4. Click Publish.
5. On the Report Properties tab, click More Options.
6. In the Query Governing section, clear the Use Default Settings option.
When you clear this option, Data Analyzer uses the query governing settings entered on this page. When
this option is selected, Data Analyzer uses the query governing settings for the user.
7. Enter the query governing settings you want to use.
For more information about each setting, see Table 9-3 on page 83.
8. Click Save.

Configuring Report Table Scroll Bars


You can configure report tables to appear with a scroll bar. When you enable the Show Scroll Bar on Report
Table option, Data Analyzer displays a scroll bar when data in a report table extends beyond the size of the
browser window. When the option is disabled, you use the browser scroll bar to navigate large report tables. By
default, Data Analyzer displays scroll bars in report tables.

To change report table scroll bar display:

1. Click Administration > System Management > Report Settings.


The Report Settings page appears.
2. To allow scroll bars, select Show Scroll Bar on Report Table. To disable scroll bars, clear the option.
3. Click Apply.

Configuring Report Headers and Footers


In the Header and Footer page, you can configure headers and footers for reports. You can configure Data
Analyzer to display text, images, or report information such as report name. Headers and footers display on the
report when you complete the following report tasks:
Print. Headers and footers display in the printed version of the report.
Export. Headers and footers display when you export to an HTML or PDF file.
Broadcast. Headers and footers display when you broadcast a report as an HTML, PDF, or Excel file.
Archive. Headers and footers display when you archive a report as an HTML, PDF, or Excel file.
Email. Headers and footers display when you email a report as an HTML or PDF file.
You can display text or images in the header and footer of a report. When you select the headers and footers to
display, preview the report to verify that the headers and footers display properly with enough spaces between
text or images.
Table 9-4 lists the options you can select to display in the report headers and footers:

Table 9-4. Display Options for Report Headers and Footers

Header/Footer Display Options

Left Header Text or image file.

Center Header Text.




Right Header Text.

Left Footer One or more of the following report properties:


- Name. Name of the report.
- User Name. Name of the user. Users can specify their names on the Manage
Account tab. If a user specifies a first name, middle name, or last name, Data
Analyzer displays the specified name in the footer.
- Last Update. Date when the report was last updated.
- Printed On. Date and time when you print, export, broadcast, archive, or email the
report.

Center Footer Text and Page Number.

Right Footer Text or image file.

The image files you display in the left header or the right footer of a report can be any image type supported by
your browser. By default, Data Analyzer looks for the header and footer image files in the image file directory
for the current Data Analyzer color scheme.
The report header and footer image files are stored with the color scheme files in the EAR directory. If you want
to modify or use a new image for the left header or right footer, you must update the images in the EAR
directory.
If you want to use an image file in a different location, enter the complete URL for the image when you
configure the header or footer. For example, if the host name of the web server where you saved the
Header_Logo.gif image file is http://monet.PaintersInc.com, port 16080, enter the following URL:
http://monet.PaintersInc.com:16080/Header_Logo.gif

If Data Analyzer cannot find the header or footer image in the color scheme directory or the URL, Data
Analyzer does not display any image for the report header or footer.
You can use the PDF.HeaderFooter.ShrinktoWidth property in the DataAnalyzer.properties file to determine
how Data Analyzer handles long headers and footers. When you enter a large amount of text in a header or
footer, Data Analyzer shrinks the font to fit the text in the allotted space by default. You can also configure Data
Analyzer to keep header and footer text the configured font size, allowing Data Analyzer to display only the text
that fits in the header or footer.
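For example, to keep header and footer text at the configured font size, you might set the property in DataAnalyzer.properties as follows. The value shown is an assumption; check the documentation for your installation for the exact values the property accepts:

```
# DataAnalyzer.properties (fragment) - hypothetical values
# false: keep the configured font size and display only the text that fits.
# true: shrink the font to fit the text in the allotted space (the default
# behavior described above).
PDF.HeaderFooter.ShrinktoWidth=false
```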

To configure report headers and footers:

1. Click Administration > System Management > Header and Footer.


The Report Header and Footer page appears.




2. To configure report headers, select the headers you want to display and enter the header text.
To use text for left headers, select the top field and enter the text to display. To use an image for the left
header, select the lower field and enter the name of an image file in the Data Analyzer EAR file or specify a
URL for the image.
3. To configure report footers, select the footer you want to display.
For left footers, you can choose properties specific to the report.
To use text for the right footer, select the top field and enter the text to use. To use an image for the right
footer, select the lower field and enter the name of the file to use.
For more information about the header and footer display options, see Table 9-4 on page 85.
Data Analyzer looks for the header and footer images in the image directory for the color scheme. If the
image is not in the default image directory, specify the complete URL.
4. Click Preview to see how the report will look with the headers and footers you selected.
Adobe Acrobat launches in a new browser window to display a preview of the report.
5. Close the preview window.
6. On the Report Header and Footer page, click Apply to set the report header and footer.
Or click Cancel to discard the changes to the headers and footers.
Note: If you make more changes in the report header and footer configuration, close the preview window
and click Preview again to see the new report header and footer.

Configuring Departments and Categories


You can associate repository objects with a department or category to organize repository objects. Associating
repository objects with a department or category can also help you search for these objects on the Find tab.
You might use department names to organize repository objects according to the departments in your
organization, such as Human Resource and Development. You might use category names to organize repository
objects according to object characteristics, such as Quarterly or Monthly.

To configure department and category:

1. Click Administration > System Management > Metadata Configuration.


The Categories and Departments page appears.
2. In the Categories area, enter the name of the category.
3. Click Add.
The category name appears in the list in the Categories area.
4. In the Departments area, enter the name of the department.
5. Click Add.
The department name appears in the list in the Departments area.
6. Click OK.
Data Analyzer saves the department or category names you added. You can associate the category or
department you created with repository objects.



Configuring Display Settings for Groups and Users
By default, if you have more than 100 groups or users, Data Analyzer displays a Search box so you can find the
group or user you want to edit. If Data Analyzer returns more than 1,000 groups or users in the search results,
refine the search criteria.
You can customize the way Data Analyzer displays users or groups. Data Analyzer provides the following
properties in a file named web.xml so you can configure the user or group display according to your
requirements:
showSearchThreshold. Determines the number of groups or users Data Analyzer displays before displaying
the Search box. Default is 100.
searchLimit. Determines the maximum number of groups or users in the search results before you must
refine the search criteria. Default is 1,000.
Note: The web.xml file is stored in the EAR directory. Back up the web.xml file before you modify it.

To change group or user display options in web.xml:

1. Open the /custom/properties/web.xml file with a text editor and locate the line containing the following
property:
showSearchThreshold

The value of the showSearchThreshold property is the number of groups or users Data Analyzer displays
without providing the Search box.
2. Change the value of the showSearchThreshold property according to your requirements.
<init-param>
<param-name>showSearchThreshold</param-name>
<param-value>100</param-value>
</init-param>

3. Locate the line containing the following property:


searchLimit

The value of the searchLimit property is the maximum number of groups or users in the search result
before you must refine the search criteria.
4. Change the value of the searchLimit property according to your requirements.
<init-param>
<param-name>searchLimit</param-name>
<param-value>1000</param-value>
</init-param>

5. Save and close web.xml.


6. Restart Data Analyzer.



CHAPTER 10

Working with Data Analyzer


Administrative Reports
This chapter includes the following topics:
Overview, 89
Setting Up the Data Analyzer Administrative Reports, 90
Using the Data Analyzer Administrative Reports, 93

Overview
Data Analyzer provides a set of administrative reports that enable system administrators to track user activities
and monitor processes. The reports provide a view into the information stored in the Data Analyzer repository.
They include details on Data Analyzer usage and report schedules and errors.
The Data Analyzer administrative reports use an operational schema based on tables in the Data Analyzer
repository. They require a data source that points to the Data Analyzer repository. They also require a data
connector that includes the Data Analyzer administrative reports data source and operational schema.
After you set up the Data Analyzer administrative reports, you can view and use the reports just like any other
set of reports in Data Analyzer. If you need additional information in a report, you can modify it to add metrics
or attributes. You can add charts or indicators, or change the format of any report. You can enhance the reports
to suit your needs and help you manage the users and processes in Data Analyzer more efficiently.
You can view the administrative reports in two areas:
Administrator's Dashboard. On the Administrator's Dashboard, you can quickly see how well Data
Analyzer is working and how often users log in.
Data Analyzer Administrative Reports folder. You can access all administrative reports in the Data Analyzer
Administrative Reports public folder under the Find tab.

Administrator's Dashboard
The Administrator's Dashboard displays the indicators associated with the administrative reports. The
Administrator's Dashboard has the following containers:
Today's Usage. Provides information on the number of users who logged in for the day, the number of
reports accessed in each hour for the day, and any errors encountered when Data Analyzer runs cached
reports.

Historical Usage. Displays the users who logged in most frequently during the month, the longest
running on-demand reports, and the longest running cached reports for the current month.
Future Usage. Lists the cached reports in Data Analyzer and when they are scheduled to run next.
Admin Reports. Provides a report on the Data Analyzer users who have never logged in. Also provides
reports on the most and least accessed reports for the year.

Data Analyzer Administrative Reports Folder


The Data Analyzer Administrative Reports folder stores all the administrative reports. You can view, open, and
run reports from this folder.

Setting Up the Data Analyzer Administrative Reports


Informatica ships a set of prepackaged administrative reports for Data Analyzer. After you create a Reporting
Service in the PowerCenter Administration Console and the corresponding Data Analyzer instance is running
properly, you can set up the administrative reports on Data Analyzer.
You must enable the Reporting Service and access the Data Analyzer URL to set up the administrative reports.
To set up the administrative reports, complete the following steps:
1. Create a data source for the Data Analyzer repository. The administrative reports display information
from the Data Analyzer repository. You need a data source to connect to the repository. For more
information, see Step 1. Set Up a Data Source for the Data Analyzer Repository on page 90.
2. Import the administrative reports to the Data Analyzer repository. Import the XML files in the
<Informatica installation directory>\services\ReportingService\DA-tools\AdministrativeReports folder to
the Data Analyzer repository. For more information, see Step 2. Import the Data Analyzer Administrative
Reports on page 91.
3. Add the repository data source to a data connector. To run the administrative reports, you need a data
connector that contains the data source to the repository. For more information, see Step 3. Add the
Data Source to a Data Connector on page 91.
4. Add the administrative reports to a schedule. To have the reports and indicators regularly updated, you
can run the administrative reports on specific schedules. For more information, see Step 4. Add the
Administrative Reports to Schedules on page 92.

Step 1. Set Up a Data Source for the Data Analyzer Repository


The administrative reports provide information on the Data Analyzer processes and usage. The information
comes from the Data Analyzer repository. You must create a data source that points to the Data Analyzer
repository, and then add the data source to a data connector.
Note: If you have a data source that points to the Data Analyzer repository, you can skip this step and use the
existing data source for the administrative reports.

To create the repository data source:

1. Click Administration > Schema Design > Data Sources.


2. On the Data Sources page, click Add.
The Data Source page appears.
3. Select JDBC Data Source.
4. Enter a name and description for the data source.



5. Select the server type of your Data Analyzer repository.
Data Analyzer provides JDBC drivers to connect to the Data Analyzer repository and data warehouse.
When you select the server type, Data Analyzer supplies the driver name and connection string format for
the JDBC drivers that Data Analyzer provides.
The server type list includes the following databases:
Oracle. Select to connect to an Oracle repository.
SQL Server. Select to connect to a Microsoft SQL Server repository.
DB2. Select to connect to an IBM DB2 repository.
Sybase ASE. Select to connect to a Sybase repository.
Teradata. Data Analyzer does not support a Teradata repository.
Other. Select if you want to use a different driver or you have a repository that requires a different driver
than those provided by Data Analyzer. When you select Other, you must provide the driver name and
connection string.
6. Customize the JDBC connection string with the information for your Data Analyzer repository database.
7. Enter the user name and password to connect to the repository database.
8. Test the connection.
If the connection fails, verify that the repository database information is correct. Consult your database
administrator if necessary.
9. Click OK.
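As a sketch, connection strings for the bundled drivers typically follow the patterns below. The host names, ports, and database names are hypothetical, and the exact format Data Analyzer supplies when you select a server type may differ by version, so use the format shown on the Data Source page:

```
# Hypothetical examples only; substitute your repository host, port, and
# database or SID.
Oracle:      jdbc:informatica:oracle://repo-host:1521;SID=dareps
SQL Server:  jdbc:informatica:sqlserver://repo-host:1433;DatabaseName=dareps
DB2:         jdbc:informatica:db2://repo-host:50000;DatabaseName=dareps
Sybase ASE:  jdbc:informatica:sybase://repo-host:5000;DatabaseName=dareps
```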

Step 2. Import the Data Analyzer Administrative Reports


Before you import the Data Analyzer administrative reports, ensure that the Reporting Service is enabled and
the Data Analyzer instance is running properly.
Import the XML files under the <Informatica installation directory>\services\ReportingService\DA-
tools\AdministrativeReports folder. The XML files contain the schemas, schedules, dashboards, and database-
specific global variables that you need to run the administrative reports. For more information about importing
XML files, see Importing Objects to the Repository on page 47.

Step 3. Add the Data Source to a Data Connector


Data Analyzer uses a data connector to connect to a data source and read the data for a report. Typically, Data
Analyzer uses the system data connector to connect to all the data sources required for Data Analyzer reports.
To enable Data Analyzer to run the administrative reports, add the administrative reports data source to the
system data connector. If you have several data connectors and you want to use a specific data connector for the
administrative reports, add the administrative reports data source to the specific data connector.
If Data Analyzer does not have a data connector, you must create one before running the Data Analyzer
administrative reports. For more information about data connectors, see the Data Analyzer Schema Designer
Guide.

To add the administrative reports data source to the system data connector:

1. Click Administration > Schema Design > Data Connectors.


The Data Connectors page appears.
2. Click the name of the system data connector.
Data Analyzer displays the properties of the system data connector.
3. In the Additional Schema Mappings section, click Add.
Data Analyzer expands the section and displays the available schemas in the repository.



4. In the Data Source list, select the administrative reports data source you created earlier.
5. In the Available Schemas section, select PA_Reposit and click Add >>.
The PA_Reposit operational schema is one of the schemas installed by the PowerCenter Reports installer.
6. Click Add.
Data Analyzer displays the additional schema mapping for the system data connector.
7. Click OK.
You can now run the administrative reports using the system data connector.

Step 4. Add the Administrative Reports to Schedules


Data Analyzer provides a set of schedules that you can use to run the administrative reports on a regular basis.
After you import all the necessary objects for the administrative reports, verify that the cached reports are
assigned to the appropriate schedules.
The public folder named Data Analyzer Administrative Reports contains the administrative reports.

To add the administrative reports to schedules:

1. Click the Find tab.


2. In the folders section of the Find tab, click Public Folders.
3. Locate and click the folder named Data Analyzer Administrative Reports.
4. Select a report to add to a schedule.
5. Click Edit.
The report appears in the Create Report wizard.
6. Click Publish.
7. On the Properties tab, select Cached, and then select Hourly Refresh from the list of schedules.
8. Save the report.
9. Repeat steps 1 to 8 to verify that the following administrative reports are assigned to the appropriate
schedules:

Report Schedule

Today's Logins Hourly Refresh

Today's Report Usage by Hour Hourly Refresh

Top 5 Logins (Month To Date) Midnight Daily

Top 5 Longest Running On-Demand Reports (Month To Date) Midnight Daily

Top 5 Longest Running Scheduled Reports (Month To Date) Midnight Daily

Total Schedule Errors for Today Hourly Refresh

The Hourly Refresh schedule is one of the schedules installed by the PowerCenter Reports installer. The
Midnight Daily schedule is one of the schedules created when you install Data Analyzer.
After you complete the steps to add the reports to the schedules, you might want to review the list of
reports in the Data Analyzer Administrative Reports folder to make sure that the cached reports have been
added to the correct schedule.
10. To review the schedule for a report in the Data Analyzer Administrative Reports folder, select a report and
look at the Report Properties section.
After you schedule the administrative reports, the setup is complete and Data Analyzer refreshes the cached reports on their assigned schedules.



Using the Data Analyzer Administrative Reports
The Data Analyzer administrative reports are located in the Data Analyzer Administrative Reports public folder
on the Find tab. You can also access these reports from the Administrator's Dashboard.
Data Analyzer provides the following administrative reports, listed in alphabetical order:
Activity Log Details. Use this on-demand report to view the activity logs. You can access this report from
the Find tab.
Bottom 10 Least Accessed Reports this Year. Use this on-demand report to determine the 10 least used
reports in the current calendar year. You can access this report from the Admin Reports container on the
Administrator's Dashboard and from the Find tab.
Report Activity Details. View this report as part of the analytic workflows for several primary reports or as a
standalone report. When you run the Report Activity Details report from the Find tab, it displays access
information for all reports in the repository.
Report Activity Details for Current Month. This on-demand report provides information about the reports
accessed within the current month. You can access this report from the Find tab.
Report Refresh Schedule. This report provides information about the next scheduled update for cached
reports. Use this report to monitor the update time for various reports. You can access this report from the
Future Usage container on the Administrator's Dashboard and from the Find tab. Data Analyzer updates
this cached report based on the Hourly Refresh schedule.
Reports Accessed by Users Today. Use this report to get information on the reports accessed by users in the
current day. You can view this report as part of the analytic workflow for the Today's Logins primary report
or as a standalone report. When you run this report from the Find tab, the report provides detailed
information about all reports accessed by any user in the current day.
Today's Logins. This report provides the login count and average login duration for users who logged in on
the current day. It is the primary report for an analytic workflow. Use this report to determine the system
usage for the current day. You can access this report from the Today's Usage container on the
Administrator's Dashboard and from the Find tab. Data Analyzer updates this cached report based on the
Hourly Refresh schedule.
Today's Report Usage by Hour. This report provides information about the number of reports accessed for
each hour of the current day. It is the primary report for an analytic workflow. Use this report to monitor
report usage throughout the day. You can access this report from the Today's Usage container on the
Administrator's Dashboard and from the Find tab. Data Analyzer updates this cached report based on the
Hourly Refresh schedule.
Top 10 Most Accessed Reports this Year. Use this report to determine the reports most accessed by users in
the current calendar year. The report shows the 10 reports that users find most useful. It is the primary
report for an analytic workflow. You can access this report from the Admin Reports container on the
Administrator's Dashboard and from the Find tab.
Top 5 Logins (Month To Date). Use this report to determine the users who logged in to Data Analyzer the
most times in the current month. The report displays the user names and the number of times each user
logged in. You can access this report from the Historical Usage container on the Administrator's Dashboard
and from the Find tab. Data Analyzer updates this cached report based on the Midnight Daily schedule.
Top 5 Longest Running On-Demand Reports (Month To Date). This report displays the average response
time for the five longest-running on-demand reports in the current month to date. Use this report to help
you tune the database or web server. You can also use it to determine whether an on-demand report needs to
run on a schedule. You can access this report from the Historical Usage container on the Administrator's
Dashboard and from the Find tab. Data Analyzer updates this cached report based on the Midnight Daily
schedule.
Top 5 Longest Running Scheduled Reports (Month To Date). This report displays the time that Data
Analyzer takes to display the five longest running cached reports in the current month to date. Use this
report for performance tuning and for determining whether a cached report needs to run on demand. You
can access this report from the Historical Usage container on the Administrator's Dashboard and from the
Find tab. Data Analyzer updates this cached report based on the Midnight Daily schedule.
Total Schedule Errors for Today. This report provides the number of errors Data Analyzer encountered
when running cached reports. Use this report to monitor cached reports and modify them if necessary. You
can access this report from the Today's Usage container on the Administrator's Dashboard and from the
Find tab. Data Analyzer updates this cached report based on the Hourly Refresh schedule.
User Log Details. Use this on-demand report to view the user logs. You can access this report from the Find
tab.
User Logins (Month To Date). This report displays the number of times each user logged in during the
month. Use this report to determine how often users log in to Data Analyzer. You can access this report
from the Historical Usage container on the Administrator's Dashboard and from the Find tab.
Users Who Have Never Logged On. This report provides information about users who have never logged in
to Data Analyzer. Use this report to make administrative decisions about disabling accounts. You can access
this report from the Admin Reports container on the Administrator's Dashboard and from the Find tab.

94 Chapter 10: Working with Data Analyzer Administrative Reports


CHAPTER 11

Performance Tuning
This chapter includes the following topics:
Overview
Database
Operating System
Application Server
Data Analyzer Processes

Overview
Data Analyzer requires the interaction of several components and services, including those that may already
exist in the enterprise infrastructure, such as the enterprise data warehouse and authentication server.
Data Analyzer is built on JBoss Application Server and uses related technology and application programming
interfaces (APIs) to accomplish its tasks. JBoss Application Server is a Java 2 Enterprise Edition (J2EE)-
compliant application server. Data Analyzer uses the application server to handle requests from the web
browser. It generates the requested contents and uses the application server to transmit the content back to the
web browser. Data Analyzer stores metadata in a repository database to keep track of the processes and objects it
needs to handle web browser requests.
You can tune the following components to optimize the performance of Data Analyzer:
Database
Operating system
Application server
Data Analyzer

Database
Data Analyzer has the following database components:
Data Analyzer repository
Data warehouse

The repository database contains the metadata that Data Analyzer uses to construct reports. The data
warehouse contains the data for the Data Analyzer reports.
The data warehouse is where the report SQL queries are executed. Typically, it has a very high volume of data.
The execution time of the reports depends on how well tuned the database and the report queries are. Consult
the database documentation on how to tune a high volume database for optimal SQL execution.
The Data Analyzer repository database contains a smaller amount of data than the data warehouse. However,
since Data Analyzer executes many SQL transactions against the repository, the repository database must also be
properly tuned to optimize the database performance. This section provides recommendations for tuning the
Data Analyzer repository database for best performance.
Note: Host the Data Analyzer repository and the data warehouse in separate database servers. The following
repository database tuning recommendations are valid only for a repository that resides on a database server
separate from the data warehouse. If you have the Data Analyzer repository database and the data warehouse in
the same database server, you may need to use different values for the parameters than those recommended here.

Oracle
This section provides recommendations for tuning the Oracle database for best performance.

Statistics
To ensure that the repository database tables have up-to-date statistics, periodically run the following command
for the repository schema:
EXEC dbms_stats.gather_schema_stats(ownname => '<RepositorySchemaName>', cascade => TRUE);

Shared Pool and Database Cache Size


For optimal performance, set the following parameter values for the Data Analyzer repository database:
shared_pool_size = 100000000 (100 MB)
db_cache_size = 100000000 (100 MB)

For more information about tuning an Oracle database, see the Oracle documentation.

User Connection
For an Oracle repository database running on HP-UX, you may need to increase the number of user
connections allowed for the repository database so that Data Analyzer can maintain continuous connection to
the repository.
To enable more connections to the Oracle repository, complete the following steps:
1. At the HP-UX operating system level, raise the maximum user process (maxuprc) limit from the default of
75 to at least 300.
Use the System Administration Manager tool (SAM) to raise the maxuprc limit. Raising the maxuprc limit
requires root privileges. You need to restart the machine hosting the Oracle repository for the changes to
take effect.
2. In Oracle, raise the values for the following database parameters in the init.ora file:
Raise the value of the processes parameter from 150 to 300.
Raise the value of the pga_aggregate_target parameter from 32 MB to 64 MB (67108864).
Updating the database parameters requires database administrator privileges. You need to restart Oracle for the
changes to take effect.
If the Data Analyzer instance has a high volume of usage, you may need to set higher limits to ensure that Data
Analyzer has enough resources to connect to the repository database and complete all database processes.
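Putting the database side of the steps together, the resulting init.ora entries would look like the following sketch (values taken from the steps above):

```text
processes = 300
pga_aggregate_target = 67108864
```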



IBM DB2
To ensure that the repository database tables have up-to-date statistics, periodically run the following command
for the repository schema:
REORGCHK UPDATE STATISTICS ON SCHEMA <RepositorySchemaName>

Analysis of table statistics is important in DB2. If you do not update table statistics periodically, you may
encounter transaction deadlocks during times of high concurrency usage.
For optimal performance, tune the following parameters for the Data Analyzer repository database:
LOCKLIST
MAXAPPLS
DBHEAP
LOGFILSIZ
LOGPRIMARY

For more information about DB2 performance tuning, see the IBM Redbooks web site:
http://www.redbooks.ibm.com

Microsoft SQL Server 2000


To ensure that repository database tables and indexes have up-to-date statistics, periodically run the
sp_updatestats stored procedure on the repository database.

Operating System
For all UNIX operating systems, make sure the file descriptor limit for the shell running the application server
is set to at least the per-process limit.
The following recommendations for tuning the operating system are based on information compiled from
various application server vendor web sites.

Linux
To optimize Data Analyzer on Linux, you need to make several changes to your Linux environment. You must
modify basic system and kernel settings to allow the Java component better access to the resources of your
system:
Enlarge the shared memory and shared memory segments.
Enlarge the maximum open file descriptors.
Enlarge the maximum per-process open file descriptors.

Enlarging Shared Memory and Shared Memory Segments


By default, Linux limits the amount of memory and the number of memory segments that can be shared among
applications to a reasonably small value. You need to increase these values because the Java threads need to have
access to the same area of shared memory and its resultant segments.
To change these parameters, enter the following commands as root on the machine where you install Data
Analyzer:
# echo 2147483648 > /proc/sys/kernel/shmmax
# echo 4096 > /proc/sys/kernel/shmmni
These changes only affect the system as it is running now. Enter the following commands to make them
permanent:
# echo "kernel.shmmax = 2147483648" >> /etc/sysctl.conf
# echo "kernel.shmmni = 4096" >> /etc/sysctl.conf

Enlarging the Maximum Open File Descriptors


Linux has a programmed limit for the number of files it allows to be open at any one time. By default, this is set
to 4096 files. Increasing this limit removes any bottlenecks from all the Java threads requesting files. Enter the
following command as root to increase the maximum number of open file descriptors:
# echo 65536 > /proc/sys/fs/file-max
These changes affect the system as it is currently running. Enter the following command to make them
permanent:
# echo "fs.file-max = 65536" >> /etc/sysctl.conf

Enlarging the Maximum Per-Process Open File Descriptors


Increase the maximum number of open files allowed for any given process. Enter the following commands as
root to increase the maximum open file descriptors per process:
# ulimit -n 8192
# echo "* soft nofile 8192" >> /etc/security/limits.conf
# echo "* hard nofile 8192" >> /etc/security/limits.conf

Additional Recommended Settings


Table 11-1 shows additional recommended settings for Linux operating system parameters:

Table 11-1. Recommended Settings for Linux Parameters

Linux Parameters Suggested Values

/sbin/ifconfig lo mtu 1500

kernel.msgmni 1024

net.ipv4.tcp_max_syn_backlog 8192
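The kernel values in Table 11-1 can be made permanent with an /etc/sysctl.conf fragment like the following sketch (the loopback MTU is set with the ifconfig command shown in the table, not through sysctl):

```text
kernel.msgmni = 1024
net.ipv4.tcp_max_syn_backlog = 8192
```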

HP-UX
You can tune the following areas in the HP-UX operating system to improve overall Data Analyzer
performance:
Kernel
Java Process
Network

Kernel Tuning
HP-UX has a Java-based configuration utility called HPjconfig which shows the basic kernel parameters that
need to be tuned and the different patches required for the operating system to function properly. You can
download the configuration utility from the following HP web site:
http://www.hp.com/go/java

The HPjconfig recommendations for a Java-based application server running on HP-UX 11 include tuning the
following kernel parameters:
max_thread_proc
maxdsiz
maxfiles
maxfiles_lim
maxusers
ncallout
nfile
nkthread
nproc

Note: For Java processes to function properly, it is important that the HP-UX operating system is on the proper
patch level as recommended by the HPjconfig tool.
For more information about kernel parameters affecting Java performance, see the HP documentation. For
more information about tuning the HP-UX kernel, see the document titled Tunable Kernel Parameters on the
following HP web site:
http://docs.hp.com

Java Process
You can set the JVM virtual page size to improve the performance of a Java process running on an HP-UX
machine. The default value for the Java virtual machine instruction and data page sizes is 4 MB. Increase the
value to 64 MB to optimize the performance of the application server that Data Analyzer runs on.
To set the JVM virtual page size, use the following command:
chatr +pi64M +pd64M <JavaHomeDir>/bin/java

Network Tuning
For network performance tuning, use the ndd command to view and set the network parameters.
Table 11-2 provides guidelines for ndd settings:

Table 11-2. Recommended ndd Settings for HP-UX

ndd Setting Recommended Value

tcp_conn_request_max 16384

tcp_xmit_hiwater_def 1048576

tcp_time_wait_interval 60000

tcp_recv_hiwater_def 1048576

tcp_fin_wait_2_timeout 90000

For example, to set the tcp_conn_request_max parameter, use the following command:
ndd -set /dev/tcp tcp_conn_request_max 16384

After modifying the settings, restart the machine.

Solaris
You can tune the Solaris operating system to optimize network and TCP/IP operations in the following ways:
Use the ndd command.
Set parameters in the /etc/system file.
Set parameters on the network card.

Setting Parameters Using ndd


Use the ndd command to set the TCP-related parameters, as shown in the following example:
ndd -set /dev/tcp tcp_conn_req_max_q 16384
Tip: Use the netstat -s -P tcp command to view all available TCP-related parameters.

Table 11-3 lists the TCP-related parameters that you can tune and their recommended values:

Table 11-3. Recommended ndd Settings for Solaris

ndd Setting Recommended Value

/dev/tcp tcp_time_wait_interval 60000

/dev/tcp tcp_conn_req_max_q 16384

/dev/tcp tcp_conn_req_max_q0 16384

/dev/tcp tcp_ip_abort_interval 60000

/dev/tcp tcp_keepalive_interval 30000

/dev/tcp tcp_rexmit_interval_initial 4000

/dev/tcp tcp_rexmit_interval_max 10000

/dev/tcp tcp_rexmit_interval_min 3000

/dev/tcp tcp_smallest_anon_port 32768

/dev/tcp tcp_xmit_hiwat 131072

/dev/tcp tcp_recv_hiwat 131072

/dev/tcp tcp_naglim_def 1

/dev/ce instance 0

/dev/ce rx_intr_time 32

/dev/tcp tcp_fin_wait_2_flush_interval 67500

Note: Prior to Solaris 2.7, the tcp_time_wait_interval parameter was called tcp_close_wait_interval. This
parameter determines the time interval that a TCP socket is kept alive after issuing a close
call. The default value of this parameter on Solaris is four minutes. When many clients connect for a short
period of time, holding these socket resources can have a significant negative impact on performance. Setting
this parameter to a value of 60000 (60 seconds) has shown a significant throughput enhancement when running
benchmark JSP tests on Solaris. You might want to decrease this setting if the server is backed up with a queue
of half-opened connections.
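For example, a boot script that applies several of the Table 11-3 settings could contain commands like the following (any of the table's parameters can be set the same way):

```text
ndd -set /dev/tcp tcp_time_wait_interval 60000
ndd -set /dev/tcp tcp_conn_req_max_q 16384
ndd -set /dev/tcp tcp_conn_req_max_q0 16384
ndd -set /dev/tcp tcp_naglim_def 1
```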

Setting Parameters in the /etc/system File


Each socket connection to the server consumes a file descriptor. To optimize socket performance, configure
your operating system to have the appropriate number of file descriptors.
Change the default file descriptor limits, the hash table size, and other tuning parameters in the /etc/system file.
Note: Restart the machine if you modify /etc/system parameters.

Table 11-4 lists the /etc/system parameters that you can tune and the recommended values:

Table 11-4. Recommended /etc/system Settings for Solaris

Parameter Recommended Value

rlim_fd_cur 8192

rlim_fd_max 8192

tcp:tcp_conn_hash_size 32768

semsys:seminfo_semume 1024

semsys:seminfo_semopm 200

*shmsys:shminfo_shmmax 4294967295

autoup 900



Table 11-4. Recommended /etc/system Settings for Solaris

Parameter Recommended Value

tune_t_fsflushr 1

*Note: Set only on machines that have at least 4 GB of RAM.
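Applied in the /etc/system file, the Table 11-4 values take the following form (a sketch; include the shmmax line only on machines with at least 4 GB of RAM, as noted above):

```text
set rlim_fd_cur=8192
set rlim_fd_max=8192
set tcp:tcp_conn_hash_size=32768
set semsys:seminfo_semume=1024
set semsys:seminfo_semopm=200
set shmsys:shminfo_shmmax=4294967295
set autoup=900
set tune_t_fsflushr=1
```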

Setting Parameters on the Network Card


Table 11-5 lists the CE Gigabit card parameters that you can tune and the recommended values:

Table 11-5. Recommended CE Gigabit Card Settings for Solaris

Parameter Recommended Value

ce:ce_bcopy_thresh 256

ce:ce_dvma_thresh 256

ce:ce_taskq_disable 1

ce:ce_ring_size 256

ce:ce_comp_ring_size 1024

ce:ce_tx_ring_size 4096

For more information about Solaris tuning options, see the Solaris Tunable Parameters Reference Manual.

AIX
If an application on an AIX machine transfers large amounts of data, you can increase the TCP/IP or UDP
buffer sizes with the no and nfso commands.
For example, to set the tcp_sendspace parameter, use the following command:
/usr/sbin/no -o tcp_sendspace=262144

Table 11-6 lists the no parameters that you can tune and the recommended values:

Table 11-6. Recommended Buffer Size Settings for no Command for AIX

Parameter Recommended Value

tcp_sendspace 262144

tcp_recvspace 262144

rfc1323 1

tcp_keepidle 600

Table 11-7 lists the nfso parameters that you can tune and the recommended values:

Table 11-7. Recommended Buffer Size Settings for nfso Command for AIX

Parameter Recommended Value

nfs_socketsize 200000

nfs_tcp_socketsize 200000

To permanently set the values when the system restarts, add the commands to the /etc/rc.net file.
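Added to /etc/rc.net, the commands would look like the following sketch (values from Tables 11-6 and 11-7; paths assume the standard AIX locations of the no and nfso commands):

```text
/usr/sbin/no -o tcp_sendspace=262144
/usr/sbin/no -o tcp_recvspace=262144
/usr/sbin/no -o rfc1323=1
/usr/sbin/no -o tcp_keepidle=600
/usr/sbin/nfso -o nfs_socketsize=200000
/usr/sbin/nfso -o nfs_tcp_socketsize=200000
```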
For more information about AIX tuning options, see the Performance Management Guide on the IBM web site.



Windows
Disable hyper-threading on a four-CPU Windows 2000 machine to provide better throughput for a clustered
application server in a high concurrency usage environment. Usually, the Windows 2000 default settings for the
TCP/IP parameters are adequate to ensure optimal network performance.

Application Server
JBoss Application Server consists of several components, each of which has a different set of configuration files
and parameters that can be tuned. The following are some of the JBoss Application Server components and
recommendations for tuning parameters to improve the performance of Data Analyzer running on JBoss
Application Server.

Servlet/JSP Container
JBoss Application Server uses the Apache Tomcat 5.5 Servlet/JSP container. You can tune the Servlet/JSP
container to make an optimal number of threads available to accept and process HTTP requests.
To tune the Servlet/JSP container, modify the server.xml configuration file for the Servlet/JSP container in the
Data Analyzer installation.
The following is a typical configuration:
<Connector port="8080" maxThreads="250" minSpareThreads="4"
    maxSpareThreads="50" enableLookups="false" acceptCount="10"/>

The following parameters may need tuning:


maxThreads. Maximum number of request processing threads that can be created in the pool, which
determines the maximum number of simultaneous requests that the Servlet/JSP container can handle. If not
specified, the parameter is set to 200.
maxSpareThreads. Maximum number of unused request processing threads that can exist before the pool
begins stopping the unused threads. If not specified, the parameter is set to 50.
minSpareThreads. Number of request processing threads initially created in the pool. Set the attribute to a
value smaller than the value set for maxThreads. If not specified, the parameter is set to 4.
By default, Data Analyzer is configured to have a maximum of 250 request processing threads, which is
acceptable for most environments. You may need to modify this value to achieve better performance. Increasing
the number of threads means that more users can use Data Analyzer concurrently. However, more concurrent
users may cause the application server to sustain a higher processing load, leading to a general slow down of
Data Analyzer. Decreasing the number of threads means that fewer users can use Data Analyzer concurrently.
Fewer concurrent users may alleviate the load on the application server, leading to faster response times.
However, some users may need to wait for their HTTP request to be served.
If the number of threads is too low, the following message may appear in the log files:
All threads (250) are currently busy, waiting. Increase maxThreads (250) or check the servlet status

Although the Servlet/JSP container configuration file contains additional properties, Data Analyzer may
generate unexpected results if you modify properties that are not documented in this section. For additional
information about configuring the Servlet/JSP container, see the Apache Tomcat Configuration Reference on
the Apache Tomcat website:
http://tomcat.apache.org/tomcat-5.5-doc/config/



The Servlet/JSP container configuration file does not determine how JBoss Application Server handles threads.
You can also define and configure thread handling in the JBoss Application Server configuration files. For more
information about configuring thread management on JBoss Application Server, see the JBoss Application
Server documentation.

JSP Optimization
Data Analyzer uses JavaServer Pages (JSP) scripts to generate content for the web pages used in Data Analyzer.
Typically, the JSP scripts must be compiled when they are executed for the first time. To avoid having the
application server compile JSP scripts when they are executed for the first time, Informatica ships Data Analyzer
with pre-compiled JSPs.
If you find that you need to compile the JSP files either because of customizations or while patching, you can
modify the configuration of the JSP servlet in the web.xml file for the Servlet/JSP container to optimize the JSP
compilation.
The following is a typical configuration:
<servlet>
    <servlet-name>jsp</servlet-name>
    <servlet-class>org.apache.jasper.servlet.JspServlet</servlet-class>
    <init-param>
        <param-name>development</param-name>
        <param-value>false</param-value>
    </init-param>
    <load-on-startup>3</load-on-startup>
</servlet>

The following parameter may need tuning:


development. When set to true, checks for modified JSPs at every access. Set the development parameter to
false in a production installation.
If you set the development parameter to true, you can set the checkInterval parameter to specify when the JSPs
are checked.
checkInterval. Checks for changes in the JSP files on an interval of n seconds. This works only when the
development parameter is set to true. For example:
<init-param>
    <param-name>checkInterval</param-name>
    <param-value>600</param-value>
</init-param>

Note: Make sure that the checkInterval is not too low. In a production environment, set it to 600 seconds.

EJB Container
Data Analyzer uses Enterprise Java Beans extensively. It has over 50 stateless session beans (SLSB) and over 60
entity beans (EB). There are also six message-driven beans (MDBs) used for scheduling and real-time processes.

Stateless Session Beans


For SLSBs, the most important tuning parameter is the EJB pool. You can tune the EJB pool parameters in the
standardjboss.xml configuration file in the Data Analyzer installation.

The following is a typical configuration:
<container-configuration>
    <container-name>Standard Stateless SessionBean</container-name>
    <instance-pool>org.jboss.ejb.plugins.StatelessSessionInstancePool</instance-pool>
    <container-pool-conf>
        <MaximumSize>100</MaximumSize>
    </container-pool-conf>
</container-configuration>

The following parameter may need tuning:


MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to
true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. If
<strictMaximumSize> is set to false, the number of active objects can exceed the <MaximumSize> if there are
requests for more objects. However, only the <MaximumSize> number of objects will be returned to the
pool.
You can set two other parameters to fine tune the EJB pool. These parameters are not set by default in Data
Analyzer. They may be tuned after you have completed proper iterative testing in Data Analyzer to increase the
throughput for high concurrency installations:
strictMaximumSize. When the value is set to true, the <strictMaximumSize> enforces a rule that only
<MaximumSize> number of objects will be active. Any subsequent requests will wait for an object to be
returned to the pool.
strictTimeout. If you set <strictMaximumSize> to true, then <strictTimeout> is the amount of time that
requests will wait for an object to be made available in the pool.
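Under those caveats, a container-pool-conf entry with both optional parameters set might look like the following sketch (the 30000 ms timeout is an illustrative value, not a recommendation from this guide):

```xml
<container-pool-conf>
    <MaximumSize>100</MaximumSize>
    <strictMaximumSize>true</strictMaximumSize>
    <strictTimeout>30000</strictTimeout>
</container-pool-conf>
```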

Message-Driven Beans (MDB)


MDB tuning parameters are very similar to stateless bean tuning parameters. The main difference is that MDBs
are not invoked by clients. Instead, the messaging system delivers messages to the MDB when they are available.
To tune the MDB parameters, modify the same standardjboss.xml configuration file.

The following is a typical configuration:
<container-configuration>
    <container-name>Standard Message Driven Bean</container-name>
    <instance-pool>org.jboss.ejb.plugins.MessageDrivenInstancePool</instance-pool>
    <container-pool-conf>
        <MaximumSize>100</MaximumSize>
    </container-pool-conf>
</container-configuration>

The following parameter may need tuning:


MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to
true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. Otherwise,
if <strictMaximumSize> is set to false, the number of active objects can exceed the <MaximumSize> if there
are requests for more objects. However, only the <MaximumSize> number of objects will be returned to the
pool.

Enterprise Java Beans


Data Analyzer EJBs use bean-managed persistence (BMP) as opposed to container-managed persistence (CMP).
The EJB tuning parameters are in the standardjboss.xml configuration file in the Data Analyzer installation.
The following is a typical configuration:
<container-configuration>
    <container-name>Standard BMP EntityBean</container-name>
    <instance-pool>org.jboss.ejb.plugins.EntityInstancePool</instance-pool>
    <container-pool-conf>
        <MaximumSize>100</MaximumSize>
    </container-pool-conf>
</container-configuration>

The following parameter may need tuning:


MaximumSize. Represents the maximum number of objects in the pool. If <strictMaximumSize> is set to
true, then <MaximumSize> is a strict upper limit for the number of objects that will be created. Otherwise,
if <strictMaximumSize> is set to false, the number of active objects can exceed the <MaximumSize> if there
are requests for more objects. However, only the <MaximumSize> number of objects will be returned to the
pool.
You can set two other parameters to fine tune the EJB pool. These parameters are not set by default in Data
Analyzer. They may be tuned after you have completed proper iterative testing in Data Analyzer to increase the
throughput for high concurrency installations:
strictMaximumSize. When the value is set to true, the <strictMaximumSize> parameter enforces a rule that
only <MaximumSize> number of objects will be active. Any subsequent requests will wait for an object to be
returned to the pool.
strictTimeout. If you set <strictMaximumSize> to true, then <strictTimeout> is the amount of time that
requests will wait for an object to be made available in the pool.



Data Analyzer Processes
To design schemas and reports and use Data Analyzer features more effectively, use the following guidelines.

Aggregation
Data Analyzer can run more efficiently if the data warehouse has a good schema design that takes advantage of
aggregate tables to optimize query execution. Data Analyzer performance improves if the data warehouse
contains good indexes and is properly tuned.

Ranked Reports
Data Analyzer supports two-level ranking. If the report has one level of ranking, Data Analyzer delegates the
ranking task to the database by doing a multi-pass query to first get the ranked items and then running the
actual query with ranking filters. If the ranking is defined on a calculation that is performed in the middle tier,
Data Analyzer has to pull all the data before it evaluates the calculation expression and ranks and filters the data.
If you have a data warehouse with a large volume of data, avoid creating reports with ranking defined on custom
attributes or custom metrics. These types of reports consume resources and may slow down other Data Analyzer
processes.
A report with second level ranking, such as the top 10 products and the top five customers for each product,
requires a multi-pass SQL query to first get the data to generate the top 10 products and then get the data for
each product and corresponding top five customers. If the report is defined to show totals,
Data Analyzer runs another SQL query to get the aggregated values for the rows not shown in the report. For
optimal performance, create reports with two levels of ranking based on smaller schemas or on schemas that
have good aggregate tables and indexes. Also, consider making the report cached so that it can run in the
background.

Datatype of Table Columns


Data Analyzer uses JDBC drivers to connect to the data warehouse. JDBC uses a different data structure when
it returns data, based on the column datatype defined in the database. If a column has a numeric datatype,
JDBC packages the returned data in a BigDecimal format, which has a high degree of precision. If a high degree
of precision is not required, then a BigDecimal format for columns in tables with a large volume of data adds
unnecessary overhead. Set column datatypes to reflect the actual precision required.

Date Columns
By default, Data Analyzer performs date manipulation on any column with a datatype of Date. If a report
includes a column that contains date and time information but the report requires a daily granularity, Data
Analyzer includes conversion functions in the WHERE clause and SELECT clause to get the proper
aggregation and filtering by date only, not including time. However, conversion functions in a query prevent
the use of database indexes and makes the SQL query inefficient.
An attribute property controls whether Data Analyzer includes conversion functions in the SQL query. If a
column contains date and time information, set the property so that Data Analyzer includes conversion
functions in the SQL query for any report that uses the column. If a column contains date information only,
clear the property so that Data Analyzer does not include conversion functions in the SQL query for any report
that uses the column.

JavaScript on the Analyze Tab


The Analyze tab in Data Analyzer uses JavaScript for user interaction. Each cell in a report on the Analyze tab
has embedded JavaScript objects to capture various user interactions. If there are over 5,000 cells in a report,
Data Analyzer may display messages warning that the JavaScripts on the page are running too slow. On a slow
workstation with a CPU speed less than 1 GHz, the report may take several minutes to display.



Make sure that a report displayed in the Analyze tab has a restriction on the number of cells displayed on a page.
You can control the number of rows displayed on a page in Layout and Setup, Step 4 of the Create Report
wizard. On the Formatting tab, set the number of rows to display per page for a report on the Analyze tab.

Interactive Charts
An interactive chart uses less application server resources than a regular chart. On the machine hosting the
application server, an interactive chart can use up to 25% less CPU resources than a regular chart. On a typical
workstation with a CPU speed greater than 2.5 GHz, interactive charts display at about the same speed as
regular charts. Use interactive charts whenever possible to improve performance.
To enable interactive charts, edit your general preferences. For more information about editing your general
preferences, see the Data Analyzer User Guide.

Number of Charts in a Report


Data Analyzer generates the report charts after it generates the report table. Since a chart may use only a subset
of the report columns and rows as a datapoint, Data Analyzer generates a subset of the report dataset for each
chart. This means that each chart that Data Analyzer generates for a report has computing overhead associated
with it.
To keep Data Analyzer scalable, consider the overhead cost associated with report charts and create the
minimum set of charts required by the end user. Report designers who create a large number of charts to cover
all possible user requirements can weaken the performance and scalability of Data Analyzer.

Scheduler and User-Based Security


Data Analyzer supports parallel execution of both time-based and event-based schedulers. However, Data
Analyzer runs only the tasks in an event in parallel mode. Within a task, Data Analyzer runs subtasks
sequentially.
For example, you have five reports with user-based security and there are 500 security profiles for subscribers to
the report. Data Analyzer must execute each of the five reports for each of the 500 security profiles. Since
generating a report for each security profile is a subtask for each report, Data Analyzer cannot take advantage of
parallel scheduler execution and sequentially generates the report for each security profile.
For optimal performance, minimize the number of security profiles in Data Analyzer.

Frequency of Schedule Runs


Setting the report schedules to run very frequently, such as every five minutes, can create problems. For
example, you add ReportA to a schedule that runs every five minutes. If ReportA takes six minutes to run, Data
Analyzer starts running ReportA again before the previous run is completed. This situation can drastically affect
the performance of Data Analyzer.
If you require reports to deliver real-time data, use the real-time message stream features available in Data
Analyzer. Do not use the report schedules to frequently update reports to simulate real-time reports.

Row Limit for SQL Queries


Data Analyzer fetches all the rows returned by an SQL query into the JVM before it displays them on the
report. Although Data Analyzer displays only 20 rows in a page, it may already have fetched hundreds of rows
and stored them in the JVM heap. Data Analyzer must pre-fetch all the rows so that the full dataset is available
for operations such as ranking or ordering data, performing time comparisons, or formatting reports into
sections.
To keep Data Analyzer from consuming more resources than necessary, it is important to restrict the number of
rows returned by the SQL query of a report. You can set parameters in Data Analyzer to restrict the number of
rows returned by an SQL query for a report and to manage the amount of memory it uses.



Query Governing
You can restrict the number of rows returned by an SQL query for a report with the query governing settings in
Data Analyzer. You can set this parameter at the system level, user level, and report level. To improve
performance, limit the number of returned rows to a small value, such as 1000, at the server level. You can
increase the value for specific reports that require more data. For more information about query governing, see
Setting Rules for Queries on page 83.

ProviderContext.maxInMemory
When a user runs a report, Data Analyzer saves the dataset returned by the report query in the user session until
the user terminates the session. If there are a large number of concurrent users on Data Analyzer and each runs
multiple reports, the memory requirements can be considerable. By default, Data Analyzer keeps two reports in
the user session at a time. It uses a first in first out (FIFO) algorithm to overwrite reports in memory with more
recent reports.
You can edit the providerContext.maxInMemory property in DataAnalyzer.properties to set the number of
reports that Data Analyzer keeps in memory. Set the value as low as possible to conserve memory. The value
must be greater than or equal to 2. Typically, the default value of 2 is sufficient.
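The property is set in DataAnalyzer.properties; with the default value, the entry looks like this:

```text
providerContext.maxInMemory=2
```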
Data Analyzer retains report results that are part of a workflow or drill path in memory irrespective of the value
set in this property. Data Analyzer keeps the datasets for all reports in a workflow in the user session. Include
only reports that have small datasets in a workflow.
Note: A user must log out of Data Analyzer to release the user session memory. Closing a browser window does
not release the memory immediately. When a user closes a browser window without logging out, Data Analyzer
releases the memory after the expiration of session-timeout, which, by default, is 30 minutes.

ProviderContext.abortThreshold
When a user runs a report that involves calculation or building large result sets, Data Analyzer might run out of
memory, which results in users getting a blank page. Before Data Analyzer starts calculating the report or
building the tabular result set, it checks the amount of available memory. If the amount of free memory does
not meet a pre-defined percentage, Data Analyzer displays an error and stops processing the report request.
You can edit the providerContext.abortThreshold property in the DataAnalyzer.properties file to set the
maximum percentage of memory that is in use before Data Analyzer stops building report result sets and
executing report queries.
To calculate the percentage, divide the used memory by the total memory configured for the JVM. For example,
if the used memory is 1,000 KB, and the total memory configured for the JVM is 2,000 KB, the percentage of
memory that is in use is 50%. If the percentage is below the threshold, Data Analyzer continues with the
requested operation. If the percentage is above the threshold, then Data Analyzer displays an error.
Typically, you can set a threshold value between 50% and 99%. The default value is 95%.
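For example, to stop report processing when more than 90% of the JVM memory is in use, set the property in DataAnalyzer.properties as follows. The value 90 is illustrative; the default is 95:

```properties
# Maximum percentage of JVM memory in use before report requests abort
providerContext.abortThreshold=90
```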

Indicators in Dashboard
Data Analyzer uses two parallel threads to load indicators in the dashboards. These parallel threads are default
threads spawned by the browser.
Data Analyzer has been optimized to handle the way multiple indicators are queued up for loading:
In a dashboard with indicators based on cached and on-demand reports, Data Analyzer loads all indicators
based on cached reports before it loads indicators based on on-demand reports.
Gauges based on cached reports load the fastest because gauges have only one data value and they are cached
in the database along with the report model. Data Analyzer obtains the report model and the datapoint for
the gauge at the same time and can immediately create the gauge.
When there are multiple indicators based on a single report, Data Analyzer runs the underlying report once.
All indicators on a dashboard based on the same report use the same result set, for both cached and
on-demand reports.

Data Analyzer Processes 109


Table indicators use plain HTML instead of DHTML, which results in very little overhead for rendering
them in the browser.

Purging of Activity Log


Data Analyzer logs every activity or event that happens in Data Analyzer in the activity log. Similarly, Data
Analyzer records every user login in the user log. These logs can grow quickly. To improve Data Analyzer
performance, you must clear these two logs frequently. For more information about managing the activity and
user logs, see Managing System Settings on page 71.
For on-demand reports, Data Analyzer provides an estimate of the length of time a report takes to display. Data
Analyzer uses the data in the activity log to calculate the estimated time for an on-demand report. If the
activity log contains a lot of data, the SQL query to calculate the estimated time can take considerable CPU
resources because it averages all the entries for a specified number of days. You can specify the number of days
that Data Analyzer uses for the estimate by editing the queryengine.estimation.window property in
DataAnalyzer.properties. In most cases, the default value of 30 days is sufficient.
For more information about the estimation window property, see Properties in DataAnalyzer.properties on
page 126.
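For example, to base the estimate on the last 15 days of activity log entries, set the property in DataAnalyzer.properties as follows. The value 15 is illustrative; the default is 30:

```properties
# Number of days of activity log data used to estimate report display time
queryengine.estimation.window=15
```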

Recommendations for Dashboard Design


When you design a dashboard, use the following recommendations:
For dashboard indicators, use indicators based on cached reports instead of on-demand reports.
Typically, dashboards provide summarized information. Use aggregate tables for indicators based on
on-demand reports on the dashboards.
Use position-based indicators instead of value-based indicators for reports with a volume of more than 2,000
rows. Position-based indicators can use indexes in the Java collection for faster access to the data, whereas
value-based indicators must perform a linear scan of the rowset to match the values. The scan can get
progressively slower for large datasets.
In a high-usage environment, use interactive charts on the dashboard. Regular charts are rendered on the
server side and use the server CPU resources. Interactive charts are rendered in the browser and require far
fewer server resources.

Chart Legends
When Data Analyzer displays charts with legends, the Data Analyzer charting engine must perform many
complex calculations to fit the legends in the limited space available on the chart. Depending on the number of
legends in a chart, it might take Data Analyzer from 10% to 50% longer to render a chart with legends. If
legends are not essential in a chart, consider displaying the chart without legends to improve Data Analyzer
performance.

Connection Pool Size for the Data Source


Data Analyzer internally maintains a pool of JDBC connections to the data warehouse. This pool of JDBC
connections is different from the pool of connections to the repository defined at the application server level.
To optimize the database connection pool for a data source, modify the connection pool settings in
DataAnalyzer.properties. The following is a typical configuration:
# Data source connection pool settings (typical values)
dynapool.minCapacity=2
dynapool.maxCapacity=20
dynapool.evictionPeriodMins=5
dynapool.waitForConnectionSeconds=1
dynapool.connectionIdleTimeMins=10

The following parameters may need tuning:


dynapool.minCapacity. Minimum number of connections maintained in the data source pool. Set to 0 to
ensure that no connections are maintained in the data source pool. If the value is 0, Data Analyzer creates a
new connection to the data source to calculate a report. Default is 2.
dynapool.maxCapacity. Maximum number of connections that the data source pool can grow to. Set the
value to the total number of concurrent users. If you set a value less than the number of concurrent users,
Data Analyzer returns an error message to some users.
dynapool.evictionPeriodMins. Number of minutes between eviction runs or clean up operations during
which Data Analyzer cleans up failed and idle connections from the connection pool. Default is 5 minutes.
You can set the value to half of the value set for the parameter dynapool.connectionIdleTimeMins so that
Data Analyzer performs the eviction run, frees the connections for report calculations, and does not allow a
connection to remain idle for too long.
dynapool.waitForConnectionSeconds. Number of seconds Data Analyzer waits for a connection from the
pool before it aborts the operation. Default is 1. If you set the parameter to 0, Data Analyzer does not wait
and aborts the operation.
dynapool.connectionIdleTimeMins. Number of minutes that a connection may remain idle. Data Analyzer
ignores this property if the parameter dynapool.evictionPeriodMins is not set. Default is 10. Enter a positive
value for this parameter. If you set the parameter to 0 or a negative value, Data Analyzer sets the parameter
to the default value.
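As a sketch, the guidelines above might translate into the following settings for a deployment with approximately 50 concurrent users. The values are illustrative; note that the eviction period is set to half of the connection idle time, as recommended:

```properties
# Illustrative pool settings for about 50 concurrent users
dynapool.minCapacity=5
dynapool.maxCapacity=50
dynapool.waitForConnectionSeconds=2
dynapool.connectionIdleTimeMins=10
# Half of dynapool.connectionIdleTimeMins
dynapool.evictionPeriodMins=5
```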

Server Location
Data Analyzer runs on an application server and reads data from a database server. For optimal performance,
these servers must have enough CPU power and RAM. There should also be minimal network latency between
these servers.

Server Location and CPU Power and RAM


If you locate the application server and database server on a single machine, the machine must have enough
CPU power and RAM to handle the demands of each of the server processes. Although a single-machine
architecture means that there is no network latency, the requirement for a very powerful machine makes it an
expensive solution. It also becomes a single point of failure.
An alternative to the single-machine architecture is a distributed system where the servers are located on
different machines across a network. This type of distributed architecture can be more economical because it
can leverage existing infrastructure. However, network latency is an issue in a distributed architecture.

Server Location and Network Latency


There are two database components in Data Analyzer: the repository and data warehouse.
Data Analyzer runs a large number of SQL queries against the repository to get the metadata before running
any report. Data Analyzer runs only a few SQL queries against the data warehouse.
The SQL queries that Data Analyzer runs against the repository are not CPU or IO intensive. However, since
Data Analyzer runs a large number of them, network latency between the application server and the repository
database must be minimal. For optimal performance, have the repository database as close as possible to the
application server Data Analyzer runs on.
The SQL queries that Data Analyzer runs against the data warehouse return many rows and are CPU and IO
intensive. Typically, the data warehouse requires more CPU power than the repository database. Since the
queries return many rows, network latency between the application server and the data warehouse must also be
minimal.
Note: Data Analyzer connects to only one repository database. However, it can connect to more than one data
warehouse.



You can keep the repository and data warehouse on the same database but in separate schemas as long as the
machine has enough CPU and memory resources to handle the repository SQL queries and the data warehouse
SQL queries.
As with any major software implementation project, carefully perform capacity planning and testing before a
Data Analyzer deployment. The choice of architecture depends on the requirements of the organization. Make
sure that all processes have enough resources to function optimally.

112 Chapter 11: Performance Tuning


CHAPTER 12

Customizing the Data Analyzer Interface
This chapter includes the following topics:
Overview, 113
Using the Data Analyzer URL API, 113
Using the Data Analyzer API Single Sign-On, 114
Setting Up Color Schemes and Logos, 114
Setting the UI Configuration Properties, 114

Overview
You can customize the Data Analyzer user interface so that it meets the requirements for web applications in
your organization. Data Analyzer provides several ways to allow you to modify the look and feel of Data
Analyzer.
You can use the following techniques to customize Data Analyzer:
Use the URL API to display Data Analyzer web pages on a portal.
Use the Data Analyzer API single sign-on (SSO) scheme to access Data Analyzer web pages without a user
login.
Set up custom color schemes and logos on the Data Analyzer Administration tab.
Set the user interface (UI) configuration properties in the DataAnalyzer.properties file to display or hide the
Data Analyzer header or navigation bar.

Using the Data Analyzer URL API


You can use the URL interface provided with the Data Analyzer API to provide links in a web application or
portal to specific pages in Data Analyzer, such as dashboard, report, or tab pages. The URL consists of the Data
Analyzer location and parameters that determine the content and interface for the Data Analyzer page.
For more information about the Data Analyzer URL API, see the Data Analyzer SDK Guide.

Using the Data Analyzer API Single Sign-On
When you access Data Analyzer, the login page appears. You must enter a user name and password.
Ordinarily, if you display Data Analyzer web pages in another web application or portal, the Data Analyzer
login appears even if you have already logged in to the portal where the Data Analyzer pages are displayed. To
avoid multiple logins, you can set up an SSO mechanism that allows you to log in once and be authenticated in
all subsequent web applications that you access.
The Data Analyzer API provides an SSO mechanism that you can use when you display Data Analyzer pages in
another web application or portal. You can configure Data Analyzer to accept the portal authentication and
bypass the Data Analyzer login page. For more information about the Data Analyzer API SSO, see the Data
Analyzer SDK Guide.

Setting Up Color Schemes and Logos


Data Analyzer provides two color schemes for the Data Analyzer interface. You can use the default Informatica
color scheme and the sample color scheme named Betton Woods. You can use the sample color scheme as a
starting point for a custom color scheme.
start
You can also create color schemes and use custom graphics, buttons, and logos to match the standard color
scheme for the web applications in your organization. For more information, see Managing Color Schemes
and Logos on page 72.

Setting the UI Configuration Properties


In DataAnalyzer.properties, you can define a user interface configuration that determines how Data Analyzer
handles specific sections of the user interface.
The UI configuration includes the following properties:
uiconfig.ConfigurationName.ShowHeader
uiconfig.ConfigurationName.ShowNav

The properties determine what displays in the header section of the Data Analyzer user interface, which
includes the logo, the logout and help links, and the navigation bar.

Default UI Configuration
By default, when a user logs in to Data Analyzer through the Login page, the logo, logout and help links, and
navigation bar display on all the Data Analyzer pages. To hide the navigation bar or the header section on the
Data Analyzer pages, set the default UI configuration properties to false.
To hide the whole header section, add the following property:
uiconfig.default.ShowHeader=false

To hide only the navigation bar, add the following property:
uiconfig.default.ShowNav=false

114 Chapter 12: Customizing the Data Analyzer Interface


Tip: DataAnalyzer.properties includes examples of the properties for the default UI configuration. If you want
to change the default configuration settings, uncomment the default properties and update the values of the
properties.

UI Configuration Parameter in Data Analyzer URL


If you use the URL API to display Data Analyzer pages on another web application or a portal, you can add a
configuration to DataAnalyzer.properties and include the configuration name in the URL. The header section
of the Data Analyzer page appears on the portal according to the setting in the configuration.
For example, to display the Data Analyzer administration page on a portal without the navigation bar, complete
the following steps:
1. Add the following properties to DataAnalyzer.properties, specifying a configuration name, for example, portal:
uiconfig.portal.ShowHeader=true
uiconfig.portal.ShowNav=false
2. Include the parameter UICONFIG and the configuration name in the URL when you call the Data
Analyzer Administration page from the portal:
http://HostName:PortNumber/InstanceName/<AdministrationPageURL>?UICONFIG=portal

For more information about the Data Analyzer URL API, see the Data Analyzer SDK Guide.
The default settings determine what Data Analyzer displays after the Login page. If you access a Data Analyzer
page with a specific configuration through the URL API and the session expires, the Login page appears. After
you log in, Data Analyzer displays the Data Analyzer pages based on the default configuration, not the
configuration passed through the URL. To avoid this, complete one of the following tasks:
Change the values of the default configuration instead of adding a new configuration.
Set the default configuration to the same values as your customized configuration.
Customize the Data Analyzer login page to use your customized configuration after user login.
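For example, if a portal passes a configuration that hides the header section, you can set the default configuration to the same values so that the display remains consistent after a session expires. The values are illustrative:

```properties
# Make the default configuration match the portal configuration
uiconfig.default.ShowHeader=false
uiconfig.default.ShowNav=false
```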

Configuration Settings
Use the following guidelines when you set up a configuration in DataAnalyzer.properties:
The default configuration properties are not required in DataAnalyzer.properties. Add them only if you
want to modify the default configuration settings or create new UI configurations.
The configuration name can be any length and is case sensitive. It can include only alphanumeric characters.
It cannot include special characters.
Setting the ShowHeader property to false implicitly sets the ShowNav property to false.
For more information about modifying the settings in DataAnalyzer.properties, see Configuration Files on
page 125.
The following examples show what appears in the Data Analyzer header when the UI configuration properties
are set to different values: the default setting (ShowHeader=true and ShowNav=true) displays the full header
and navigation bar; ShowNav=false displays the header without the navigation bar; and ShowHeader=false
hides the entire header section.

Note: Data Analyzer stores DataAnalyzer.properties in the Data Analyzer EAR file.



APPENDIX A

Hexadecimal Color Codes


This appendix includes the following topic:
HTML Hexadecimal Color Codes, 117

HTML Hexadecimal Color Codes


You can create new color schemes for Data Analyzer by entering valid HTML hexadecimal color codes into the
appropriate fields on the Color Scheme page. For example, you can alter the colors in Data Analyzer to match
your corporate color scheme. For more information about creating a color scheme, see Managing Color
Schemes and Logos on page 72.
Table A-1 lists the colors and hexadecimal color codes you can use when creating color schemes for Data
Analyzer:

Table A-1. HTML Color Codes for Color Schemes

Color Name Color Code Color Name Color Code

alice blue F0F8FF blue 0000FF

antique white FAEBD7 blue violet 8A2BE2

antique white1 FFEFDB blue1 0000FF

antique white2 EEDFCC blue2 0000EE

antique white3 CDC0B0 blue3 0000CD

antique white4 8B8378 blue4 00008B

aquamarine 7FFFD4 brown A52A2A

aquamarine1 7FFFD4 brown1 FF4040

aquamarine2 76EEC6 brown2 EE3B3B

aquamarine3 66CDAA brown3 CD3333

aquamarine4 458B74 brown4 8B2323

azure F0FFFF burlywood DEB887

azure1 F0FFFF burlywood1 FFD39B

azure2 E0EEEE burlywood2 EEC591

azure3 C1CDCD burlywood3 CDAA7D


azure4 838B8B burlywood4 8B7355

beige F5F5DC cadet blue 5F9EA0

bisque FFE4C4 cadet blue1 98F5FF

bisque1 FFE4C4 cadet blue2 8EE5EE

bisque2 EED5B7 cadet blue3 7AC5CD

bisque3 CDB79E cadet blue4 53868B

bisque4 8B7D6B chartreuse 7FFF00

black 000000 chartreuse1 7FFF00

blanched almond FFEBCD chartreuse2 76EE00

chartreuse3 66CD00 dark khaki BDB76B

chartreuse4 458B00 dark magenta 8B008B

chocolate D2691E dark olive green 556B2F

chocolate1 FF7F24 dark orange FF8C00

chocolate2 EE7621 dark orange1 FF7F00

chocolate3 CD661D dark orange2 EE7600

chocolate4 8B4513 dark orange3 CD6600

coral FF7F50 dark orange4 8B4500

coral1 FF7256 dark orchid 9932CC

coral2 EE6A50 dark orchid1 BF3EFF

coral3 CD5B45 dark orchid2 B23AEE

coral4 8B3E2F dark orchid3 9A32CD

cornflower blue 6495ED dark orchid4 68228B

cornsilk FFF8DC dark red 8B0000

cornsilk1 FFF8DC dark salmon E9967A

cornsilk2 EEE8CD dark sea green 8FBC8F

cornsilk3 CDC8B1 dark slate blue 483D8B

cornsilk4 8B8878 dark slate gray 2F4F4F

cyan 00FFFF dark turquoise 00CED1

cyan1 00FFFF dark violet 9400D3

cyan2 00EEEE dark goldenrod3 CD950C

cyan3 00CDCD dark olive green1 CAFF70

cyan4 008B8B dark olive green2 BCEE68

dark blue 00008B dark olive green3 A2CD5A

dark cyan 008B8B dark olive green4 6E8B3D

dark goldenrod B8860B dark sea green1 C1FFC1

dark goldenrod1 FFB90F dark sea green2 B4EEB4

dark goldenrod2 EEAD0E dark sea green3 9BCD9B

dark goldenrod4 8B6508 dark sea green4 698B69


dark gray A9A9A9 dark slate gray1 97FFFF

dark green 006400 dark slate gray2 8DEEEE

dark slate gray3 79CDCD gold3 CDAD00

dark slate gray4 528B8B deep sky blue 00BFFF

deep pink FF1493 deep sky blue1 00BFFF

deep pink1 FF1493 deep sky blue2 00B2EE

deep pink2 EE1289 deep sky blue3 009ACD

deep pink3 CD1076 deep sky blue4 00688B

deep pink4 8B0A50 dim gray 696969

dark slate gray3 79CDCD dodger blue 1E90FF

deep sky blue 00BFFF gold4 8B7500

deep sky blue1 00BFFF goldenrod DAA520

deep sky blue2 00B2EE goldenrod1 FFC125

deep sky blue3 009ACD goldenrod2 EEB422

deep sky blue4 00688B goldenrod3 CD9B1D

dim gray 696969 goldenrod4 8B6914

dodger blue 1E90FF gray BEBEBE

dodger blue1 1E90FF gray0 000000

dodger blue2 1C86EE gray1 030303

dodger blue3 1874CD gray10 1A1A1A

dodger blue4 104E8B gray100 FFFFFF

firebrick B22222 gray11 1C1C1C

firebrick1 FF3030 gray12 1F1F1F

firebrick2 EE2C2C gray13 212121

firebrick3 CD2626 gray14 242424

firebrick4 8B1A1A gray15 262626

floral white FFFAF0 gray16 292929

forest green 228B22 gray17 2B2B2B

gainsboro DCDCDC gray18 2E2E2E

ghostwhite F8F8FF gray19 303030

gold FFD700 gray2 050505

gold1 FFD700 gray20 333333

gold2 EEC900 gray21 363636

gray22 383838 gray50 7F7F7F

gray23 3B3B3B gray51 828282

gray24 3D3D3D gray52 858585

gray25 404040 gray53 878787

gray26 424242 gray54 8A8A8A


gray27 454545 gray55 8C8C8C

gray28 474747 gray56 8F8F8F

gray29 4A4A4A gray57 919191

gray3 080808 gray58 949494

gray30 4D4D4D gray59 969696

gray31 4F4F4F gray6 0F0F0F

gray32 525252 gray60 999999

gray33 545454 gray61 9C9C9C

gray34 575757 gray62 9E9E9E

gray35 595959 gray63 A1A1A1

gray36 5C5C5C gray64 A3A3A3

gray37 5E5E5E gray65 A6A6A6

gray38 616161 gray66 A8A8A8

gray39 636363 gray67 ABABAB

gray4 0A0A0A gray68 ADADAD

gray40 666666 gray69 B0B0B0

gray41 696969 gray7 121212

gray42 6B6B6B gray70 B3B3B3

gray43 6E6E6E gray71 B5B5B5

gray44 707070 gray72 B8B8B8

gray45 737373 gray73 BABABA

gray46 757575 gray74 BDBDBD

gray47 787878 gray75 BFBFBF

gray48 7A7A7A gray76 C2C2C2

gray49 7D7D7D gray77 C4C4C4

gray5 0D0D0D gray78 C7C7C7

gray79 C9C9C9 honeydew1 F0FFF0

gray8 141414 honeydew2 E0EEE0

gray80 CCCCCC honeydew3 C1CDC1

gray81 CFCFCF honeydew4 838B83

gray82 D1D1D1 hot pink FF69B4

gray83 D4D4D4 hot pink3 CD6090

gray84 D6D6D6 hot pink4 8B3A62

gray85 D9D9D9 hot pink1 FF6EB4

gray86 DBDBDB indian red CD5C5C

gray87 DEDEDE indian red1 FF6A6A

gray88 E0E0E0 indian red2 EE6363

gray89 E3E3E3 indian red3 CD5555


gray9 171717 indian red4 8B3A3A

gray90 E5E5E5 ivory FFFFF0

gray91 E8E8E8 ivory1 FFFFF0

gray92 EBEBEB ivory2 EEEEE0

gray93 EDEDED ivory3 CDCDC1

gray94 F0F0F0 ivory4 8B8B83

gray95 F2F2F2 khaki F0E68C

gray96 F5F5F5 khaki1 FFF68F

gray97 F7F7F7 khaki2 EEE685

gray98 FAFAFA khaki3 CDC673

gray99 FCFCFC khaki4 8B864E

green 00FF00 lavender E6E6FA

green yellow ADFF2F lavender blush FFF0F5

green1 00FF00 lavender blush1 FFF0F5

green2 00EE00 lavender blush2 EEE0E5

green3 00CD00 lavender blush3 CDC1C5

green4 008B00 lavender blush4 8B8386

hot pink 2 EE6AA7 lawn green 7CFC00

honeydew F0FFF0 lemon chiffon FFFACD

lemon chiffon 2 EEE9BF light yellow2 EEEED1

lemon chiffon 3 CDC9A5 light yellow3 CDCDB4

lemon chiffon1 FFFACD light yellow4 8B8B7A

lemon chiffon4 8B8970 light blue1 BFEFFF

light blue ADD8E6 light blue4 68838B

light blue2 B2DFEE light cyan1 E0FFFF

light blue3 9AC0CD light cyan2 D1EEEE

light coral F08080 light cyan3 B4CDCD

light cyan E0FFFF light cyan4 7A8B8B

light goldenrod EEDD82 light pink1 FFAEB9

light goldenrod yellow FAFAD2 light pink2 EEA2AD

light goldenrod1 FFEC8B light pink3 CD8C95

light goldenrod2 EEDC82 light pink4 8B5F65

light goldenrod3 CDBE70 light sky blue1 B0E2FF

light goldenrod4 8B814C light sky blue2 A4D3EE

light gray D3D3D3 light sky blue3 8DB6CD

light green 90EE90 light skyblue4 607B8B

light pink FFB6C1 light steel blue1 CAE1FF

light salmon FFA07A light steel blue2 BCD2EE


light salmon1 FFA07A light steel blue3 A2B5CD

light salmon2 EE9572 lime green 32CD32

light salmon3 CD8162 linen FAF0E6

light salmon4 8B5742 magenta FF00FF

light sea green 20B2AA magenta1 FF00FF

light sky blue 87CEFA magenta2 EE00EE

light slate blue 8470FF magenta3 CD00CD

light slate gray 708090 magenta4 8B008B

light steel blue B0C4DE maroon B03060

light steel blue4 6E7B8B maroon1 FF34B3

light yellow FFFFE0 maroon2 EE30A7

light yellow1 FFFFE0 maroon3 CD2990

maroon4 8B1C62 navy 000080

medium slate blue 7B68EE old lace FDF5E6

medium aquamarine 66CDAA olive drab 6B8E23

medium blue 0000CD olive drab1 C0FF3E

medium orchid BA55D3 olive drab2 B3EE3A

medium orchid1 E066FF olive drab3 9ACD32

medium orchid2 D15FEE olive drab4 698B22

medium orchid3 B452CD orange FFA500

medium orchid4 7A378B orange red FF4500

medium purple 9370DB orange red1 FF4500

medium purple1 AB82FF orange red2 EE4000

medium purple2 9F79EE orange red3 CD3700

medium purple3 8968CD orange red4 8B2500

medium purple4 5D478B orange1 FFA500

medium sea green 3CB371 orange2 EE9A00

medium spring green 00FA9A orange3 CD8500

medium turquoise 48D1CC orange4 8B5A00

medium violet red C71585 orchid DA70D6

midnight blue 191970 orchid1 FF83FA

mint cream F5FFFA orchid2 EE7AE9

misty rose FFE4E1 orchid3 CD69C9

misty rose1 FFE4E1 orchid4 8B4789

misty rose2 EED5D2 pale goldenrod EEE8AA

misty rose3 CDB7B5 pale green 98FB98

misty rose4 8B7D7B pale green1 9AFF9A

moccasin FFE4B5 pale green2 90EE90


navajo white FFDEAD pale green3 7CCD7C

navajo white1 FFDEAD pale green4 548B54

navajo white2 EECFA1 pale turquoise AFEEEE

navajo white3 CDB38B pale turquoise1 BBFFFF

navajo white4 8B795E pale turquoise2 AEEEEE

pale turquoise3 96CDCD red1 FF0000

pale turquoise4 668B8B red2 EE0000

pale violet red DB7093 red3 CD0000

pale violet red 2 EE799F red4 8B0000

pale violet red 3 CD6889 rosy brown BC8F8F

pale violet red1 FF82AB rosybrown1 FFC1C1

pale violet red4 8B475D rosybrown2 EEB4B4

papaya whip FFEFD5 rosybrown3 CD9B9B

peach puff FFDAB9 rosybrown4 8B6969

peach puff1 FFDAB9 royal blue 4169E1

peach puff2 EECBAD royal blue1 4876FF

peach puff3 CDAF95 royal blue2 436EEE

peach puff4 8B7765 royal blue3 3A5FCD

peru CD853F royal blue4 27408B

pink FFC0CB saddle brown 8B4513

pink1 FFB5C5 salmon FA8072

pink2 EEA9B8 salmon1 FF8C69

pink3 CD919E salmon2 EE8262

pink4 8B636C salmon3 CD7054

plum DDA0DD salmon4 8B4C39

plum1 FFBBFF sandy brown F4A460

plum2 EEAEEE sea green 2E8B57

plum3 CD96CD seagreen1 54FF9F

plum4 8B668B seagreen2 4EEE94

powder blue B0E0E6 seagreen3 43CD80

purple A020F0 seagreen4 2E8B57

purple1 9B30FF seashell FFF5EE

purple2 912CEE seashell1 FFF5EE

purple3 7D26CD seashell2 EEE5DE

purple4 551A8B seashell3 CDC5BF

red FF0000 seashell4 8B8682

sienna A0522D steel blue2 5CACEE

sienna1 FF8247 steel blue3 4F94CD


sienna2 EE7942 steel blue4 36648B

sienna3 CD6839 tan D2B48C

sienna4 B47268 tan1 FFA54F

sky blue 87CEEB tan2 EE9A49

sky blue1 87CEFF tan3 CD853F

sky blue2 7EC0EE tan4 8B5A2B

sky blue3 6CA6CD thistle D8BFD8

sky blue4 4A708B thistle1 FFE1FF

slate blue 6A5ACD thistle2 EED2EE

slate blue1 836FFF thistle3 CDB5CD

slate blue2 7A67EE thistle4 8B7B8B

slate blue3 6959CD tomato FF6347

slate blue4 473C8B tomato1 FF6347

slate gray 778899 tomato2 EE5C42

slate gray1 C6E2FF tomato3 CD4F39

slate gray2 B9D3EE tomato4 8B3626

slate gray3 9FB6CD turquoise 40E0D0

slategray4 6C7B8B turquoise1 00F5FF

snow1 FFFAFA turquoise2 00E5EE

snow2 EEE9E9 turquoise3 00C5CD

snow3 CDC9C9 turquoise4 00868B

snow4 8B8989 violet EE82EE

spring green 00FF7F violet red D02090

spring green1 00FF7F violet red 1 FF3E96

spring green2 00EE76 violet red 2 EE3A8C

spring green3 00CD66 violet red3 CD3278

spring green4 008B45 violet red4 8B2252

steel blue 4682B4 wheat F5DEB3

steel blue1 63B8FF wheat1 FFE7BA

wheat2 EED8AE yellow green 9ACD32

wheat3 CDBA96 yellow1 FFFF00

wheat4 8B7E66 yellow2 EEEE00

white FFFFFF yellow3 CDCD00

white smoke F5F5F5 yellow4 8B8B00

yellow FFFF00



APPENDIX B

Configuration Files
This appendix includes the following topics:
Overview, 125
Modifying the Configuration Files, 125
Properties in DataAnalyzer.properties, 126
Properties in infa-cache-service.xml, 133
Properties in web.xml, 137

Overview
To customize Data Analyzer for your organization, you can modify the Data Analyzer configuration files. The
configuration files define the appearance and operational parameters of Data Analyzer.
You can modify the following configuration files:
DataAnalyzer.properties. Contains the configuration settings for an instance of Data Analyzer. They are
stored in the Data Analyzer EAR directory.
infa-cache-service.xml. Contains the global cache configuration settings for Data Analyzer. Although infa-
cache-service.xml contains many settings, you only need to modify specific settings. They are stored in the
Data Analyzer EAR directory.
web.xml. Contains additional configuration settings for an instance of Data Analyzer. Although web.xml
contains many settings, you only need to modify specific settings. They are stored in the Data Analyzer EAR
directory.

Modifying the Configuration Files


Each instance of Data Analyzer has an associated enterprise archive (EAR) directory. The following
configuration files that contain the settings for an instance of Data Analyzer are stored in its EAR directory:
DataAnalyzer.properties
infa-cache-service.xml
web.xml

To change the settings in the configuration files stored in the Data Analyzer EAR directory, complete the
following steps:
1. With a text editor, open the configuration file you want to modify and search for the setting you want to
customize.
2. Change the settings and save the configuration file.
3. Restart Data Analyzer.

Properties in DataAnalyzer.properties
The DataAnalyzer.properties file contains the configuration settings for an instance of Data Analyzer. You can
modify DataAnalyzer.properties to customize the operation of an instance of Data Analyzer.
You must customize some properties in DataAnalyzer.properties together to achieve a specific result. In the
following groups of properties, you may need to modify more than one property to effectively customize Data
Analyzer operations:
Dynamic Data Source Pool Properties. Data Analyzer internally maintains a pool of JDBC connections to
the data source. Several properties in DataAnalyzer.properties control the processes within the connection
pool. To optimize the database connection pool for a data source, modify the following properties:
dynapool.minCapacity
dynapool.maxCapacity
dynapool.evictionPeriodMins
dynapool.waitForConnectionSeconds
dynapool.connectionIdleTimeMins
datamart.defaultRowPrefetch
For more information, see Connection Pool Size for the Data Source on page 110.
Security Adapter Properties. If you use LDAP authentication, Data Analyzer periodically updates the list of
users and groups in the repository with the list of users and groups in the LDAP directory service. Data
Analyzer provides a synchronization scheduler that you can customize to set the schedule for these updates
based on the requirements of your organization. To customize the synchronization scheduler, you can
modify the following properties:
securityadapter.frequency
securityadapter.syncOnSystemStart
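For example, a sketch of the synchronization scheduler settings in DataAnalyzer.properties. The values are illustrative; check the comments in your DataAnalyzer.properties file for the unit that securityadapter.frequency expects:

```properties
# Synchronize repository users and groups with the LDAP directory service
securityadapter.syncOnSystemStart=true
securityadapter.frequency=60
```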



UI Configuration Properties. This set of properties determine the look and feel of the Data Analyzer user
interface. Together they define a single user interface configuration. To customize the navigation and header
display of Data Analyzer, you can modify the following properties:
uiconfig.ConfigurationName.ShowHeader
uiconfig.ConfigurationName.ShowNav
Note: Do not modify the properties in the section of DataAnalyzer.properties that is labeled for system use only.
Table B-1 describes the properties in the DataAnalyzer.properties file:

Table B-1. Properties in DataAnalyzer.properties

Property Description

alert.fromaddress From address used for alerts sent by Data Analyzer. If you
use an SMTP mail server, you must enter an email address
that includes a domain.
Default is alert@informatica.com. Leaving the default value
does not affect alert functionality. However, you need to
enter a valid email address for your organization.

api.compatibility.level Compatibility level of the API. Supported values are 40 or


blank. Set it to 40 to force the current API to behave in the
same way as the Data Analyzer 4.0 and 4.1 API. Set it to
blank to use the current API.

Cache.GlobalCaching Determines whether global caching is enabled for the repository. If set to true, Data Analyzer creates a cache in memory for repository objects accessed by Data Analyzer users. When a user accesses an object that exists in the cache, Data Analyzer retrieves it from the cache instead of accessing the repository. Set to true to increase Data Analyzer performance. If set to false, Data Analyzer retrieves objects from the repository each time a user accesses them. You might want to disable global caching for the following reasons:
- The machine running Data Analyzer has insufficient memory for the global cache.
- The machine where the repository database resides performs fast enough that enabling global caching does not provide a performance gain.
When global caching is enabled, infa-cache-service.xml determines how the global cache is configured. You can modify several properties in this file to customize how the global cache works. For more information about configuring global caching, see Properties in infa-cache-service.xml on page 133.
Default is true.

Cache.Report.Subscription.NoOfDaysToExpire Number of days before a subscription for cached reports expires.
Default is 7.

Chart.Fontname Font to use in all charts generated by this instance of Data Analyzer. The font must exist on the machine hosting Data Analyzer. If you are using the Internet Explorer browser, have installed Adobe SVG Viewer, and enabled interactive charts, the font must also exist on the workstation that accesses Data Analyzer. If you are using the Mozilla Firefox browser, the font does not have to exist on the workstation. For more information about editing your general preferences to enable interactive charts, see the Data Analyzer User Guide.
Default is Helvetica.


Chart.Fontsize Maximum font size to use on the chart axis labels and
legend. Data Analyzer determines the actual font size, but
will not use a font size larger than the value of this property.
Default is 10.

Chart.MaxDataPoints Maximum number of data points to plot in all charts. If Data Analyzer users select more data points than the value of this property, an error message appears.
Default is 1000.

Chart.Minfontsize Minimum font size to use on the chart axis labels and
legend. The value must be smaller than the value of
Chart.Fontsize. Data Analyzer determines the actual font
size, but will not use a font size smaller than the value of this
property.
Default is 7.
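The relationship between Chart.Fontsize and Chart.Minfontsize amounts to clamping whatever size Data Analyzer would otherwise pick into a fixed range. A minimal sketch of that behavior (illustrative only, not Data Analyzer code; the defaults come from the two property descriptions above):

```python
def clamp_chart_font(desired_size, min_size=7, max_size=10):
    """Keep a chart label font size within the bounds set by
    Chart.Minfontsize (default 7) and Chart.Fontsize (default 10)."""
    if min_size >= max_size:
        # Chart.Minfontsize must be smaller than Chart.Fontsize
        raise ValueError("min_size must be smaller than max_size")
    return max(min_size, min(desired_size, max_size))
```

A size that fits within the range is used as-is; anything outside is pulled to the nearest bound.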

compression.alwaysCompressMimeTypes MIME types for dynamic content that Data Analyzer always
compresses, without verifying that the browser can support
compressed files of this MIME type. Some MIME types are
handled by plug-ins that decompress natively. These MIME
types may work with compression regardless of whether the
browser supports compression or whether an intervening proxy
would otherwise break compression. Enter a comma-
separated list of MIME types. Using this property may result
in marginally better performance than using
compressionFilter.compressableMimeTypes. However, if
Data Analyzer compresses a MIME type not supported by
the browser, the browser might display an error.
By default, no MIME types are listed. Data Analyzer
compresses only the MIME types listed in
compressionFilter.compressableMimeTypes after verifying
browser support.

compressionFilter.compressableMimeTypes MIME types for dynamic content that Data Analyzer compresses. If the browser does not support compressed files of a MIME type, Data Analyzer does not compress dynamic content of the unsupported MIME type. Enter a comma-separated list of MIME types.
By default, Data Analyzer compresses dynamic content of the following MIME types: text/html, text/javascript, application/x-javascript.

compressionFilter.compressThreshold Minimum size (in bytes) for a response to trigger compression. Data Analyzer compresses responses if the response size is larger than this number and if it has a compressible MIME type. Typically, the default is sufficient for most organizations.
Default is 512.
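The three compression properties above combine into a single decision per response. The following sketch models that decision (an interpretation of the descriptions above, not actual Data Analyzer code; whether the size threshold also applies to the always-compress list is an assumption):

```python
def should_compress(mime_type, response_size, accepts_gzip,
                    always_compress=(),
                    compressable=("text/html", "text/javascript",
                                  "application/x-javascript"),
                    threshold=512):
    """Model the compression rules: responses at or below the threshold
    are never compressed; MIME types in always_compress are compressed
    without checking browser support; other compressable types require
    the browser to advertise compression support."""
    if response_size <= threshold:
        return False
    if mime_type in always_compress:
        return True
    return accepts_gzip and mime_type in compressable
```

For example, a 1 KB text/html response to a browser without compression support is sent uncompressed unless text/html has been added to compression.alwaysCompressMimeTypes.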

CustomLayout.MaximumNumberofContainers Maximum number of containers allowed in custom layouts for dashboards.
Default is 30.

datamart.defaultRowPrefetch Maximum number of rows that Data Analyzer fetches in a report query.
Default is 20.


datamart.transactionIsolationLevel. Transaction isolation level for each data source used in your
DataSourceName Data Analyzer instance. Add a property for each data source
and then enter the appropriate value for that data source.
Supported values are:
- NONE. Transactions are not supported.
- READ_COMMITTED. Dirty reads cannot occur. Non-
repeatable reads and phantom reads can occur.
- READ_UNCOMMITTED. Dirty reads, non-repeatable
reads, and phantom reads can occur.
- REPEATABLE_READ. Dirty reads and non-repeatable
reads cannot occur. Phantom reads can occur.
- SERIALIZABLE. Dirty reads, non-repeatable reads, and
phantom reads cannot occur.
If no property is set for a data source, Data Analyzer uses
the default transaction level of the database.

For example, you have a data source named ias_demo that you want to set to READ_UNCOMMITTED and another data source named ias_test that you want to set to REPEATABLE_READ (assuming that the databases these data sources point to support the respective transaction levels). Add the following entries:
- datamart.transactionIsolationLevel.ias_demo=READ_UNCOMMITTED
- datamart.transactionIsolationLevel.ias_test=REPEATABLE_READ

DataRestriction.OldBehavior Provided for backward compatibility. If set to true, Data Analyzer uses the data restriction merging behaviors in Data Analyzer 4.x and previous releases and does not support AND/OR conditions in data restriction filters. If set to false, Data Analyzer uses the data restriction merging behavior provided in Data Analyzer 5.0.1 and supports AND/OR conditions in data restriction filters.
Default is false.

datatype.CLOB.datalength Determines the maximum number of characters in a CLOB attribute that Data Analyzer displays in a report cell. Increasing this setting can slow Data Analyzer performance. For more information about CLOB support, see the Data Analyzer Schema Designer Guide.
Default is 1000.

dynapool.allowShrinking Determines whether the pool can shrink when connections are not in use.
Default is true.

dynapool.capacityIncrement Number of connections that can be added at one time.
Default is 2.

dynapool.initialCapacity Minimum number of initial connections in the data source pool. Set the value to 25% of the maximum concurrent users. The value cannot exceed dynapool.maxCapacity.
Default is 2.

dynapool.maxCapacity Maximum number of connections that the data source pool may grow to. Set the value to the total number of concurrent users. The value must be greater than zero.
Default is 20.
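The sizing guidelines above (initialCapacity at 25% of the maximum concurrent users, maxCapacity equal to the total concurrent users) can be expressed as a small helper. This is a sketch of the arithmetic only; the property names are from the table, the rounding choice is an assumption:

```python
import math

def suggest_pool_sizes(max_concurrent_users):
    """Suggest dynapool settings from the sizing guidelines:
    dynapool.initialCapacity ~ 25% of the maximum concurrent users,
    dynapool.maxCapacity = total number of concurrent users."""
    if max_concurrent_users <= 0:
        raise ValueError("dynapool.maxCapacity must be greater than zero")
    initial = max(1, math.ceil(max_concurrent_users * 0.25))
    return {
        "dynapool.initialCapacity": min(initial, max_concurrent_users),
        "dynapool.maxCapacity": max_concurrent_users,
    }
```

For 80 concurrent users, this suggests an initial capacity of 20 connections and a maximum of 80.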

dynapool.poolNamePrefix String to use as a prefix for the dynamic JDBC pool name.
Default is IAS_.


dynapool.refreshTestMinutes Frequency in minutes at which Data Analyzer performs a health check on the idle connections in the pool. Data Analyzer should not perform the check too frequently because it locks up the connection pool and may prevent other clients from grabbing connections from the pool.
Default is 60.

dynapool.shrinkPeriodMins Number of minutes Data Analyzer allows an idle connection to be in the pool. After this period, the number of connections in the pool reverts to the value of its initialCapacity parameter if the allowShrinking parameter is true.
Default is 5.

dynapool.waitForConnection Determines whether Data Analyzer waits for a database connection if none are available in the connection pool.
Default is true.

dynapool.waitSec Maximum number of seconds a client waits to grab a connection from the pool if none is readily available before giving a timeout error.
Default is 1.

GroupBySuppression.GroupOnAttributePair Determines whether Data Analyzer groups values by row attributes in cross tabular report tables for reports with a suppressed GROUP BY clause when the data source stores a dataset in more than one row in a table. Set to true to group values by the row attributes. Set to false if you do not want the Data Analyzer report to group the data based on the row attributes. If the data source stores a dataset in a single row in a table, the value of this property does not affect how the report displays. For more information, see the Data Analyzer User Guide.
Default is true.

help.files.url URL for the location of Data Analyzer online help files. By
default, the installation process installs online help files on
the same machine as Data Analyzer and sets the value of
this property.

host.url URL for the Data Analyzer instance. By default, the Data
Analyzer installation sets the value of this property in the
following format:
http://Hostname:PortNumber/InstanceName/

import.transaction.timeout.seconds Number of seconds after which the import transaction times out. To import a large XML file, you might need to increase this value.
Default is 3600 seconds (1 hour).

Indicator.pollingIntervalSeconds Frequency in seconds that Data Analyzer refreshes indicators with animation.
Default is 300 seconds (5 minutes).

jdbc.log.append Determines whether to append or overwrite new log information to the JDBC log file. Set to true to append new messages. Set to false to overwrite existing information.
Default is true.


jdbc.log.file Name of the JDBC log file. To specify a path, use the forward slash (/) or two backslashes (\\) as the file separator. Data Analyzer does not support a single backslash as a file separator. For example, to set the log file to myjdbc.log in a directory called Log_Files in the D: drive, set the value of the property to include the path and file name:
jdbc.log.file=d:/Log_Files/myjdbc.log
If you do not specify a path, Data Analyzer creates the JDBC log file in a default directory under the Data Analyzer installation directory.
Default is iasJDBC.log.
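Because a single backslash is not a supported file separator in the properties file, a Windows path has to be rewritten before it is pasted into jdbc.log.file. A trivial sketch of that conversion (a convenience helper, not part of Data Analyzer):

```python
def to_properties_path(path):
    """Convert a Windows-style path to the forward-slash form that
    DataAnalyzer.properties accepts for values such as jdbc.log.file."""
    return path.replace("\\", "/")
```

For example, `to_properties_path("d:\\Log_Files\\myjdbc.log")` yields the `d:/Log_Files/myjdbc.log` form shown in the property description.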

logging.activity.maxRowsToDisplay Maximum number of rows to display in the activity log. If set to zero, Data Analyzer displays an unlimited number of rows. If not specified, defaults to 1000. Displaying a number larger than the default value may cause the browser to stop responding.
Default is 1000.

logging.user.maxRowsToDisplay Maximum number of rows to display in the user log. If set to zero, Data Analyzer displays an unlimited number of rows. If not specified, defaults to 1000. Displaying a number larger than the default value may cause the browser to stop responding.
Default is 1000.

Maps.Directory Directory where the XML files that represent maps for the Data Analyzer geographic charts are located. The directory must be located on the machine where Data Analyzer is installed. By default, the map files are located in a directory under the Data Analyzer installation directory.

PDF.HeaderFooter.ShrinkToWidth Determines how Data Analyzer handles header and footer text in reports saved to PDF. Set to true to allow Data Analyzer to shrink the font size of long headers and footers to fit the configured space. Set to false to use the configured font size and allow Data Analyzer to display only the text that fits in the header or footer. For more information, see Configuring Report Headers and Footers on page 85.
Default is true.

providerContext.maxInMemory Number of reports that Data Analyzer keeps in memory for a user session. Data Analyzer does not consider the value set for this property while retaining results of the reports that are part of a workflow or drill path. Data Analyzer does not retain report results when you set the property value below 2.
Default is 2.

providerContext.abortThresHold Defines the maximum percentage of memory that is in use before Data Analyzer stops building report result sets and running report queries. The percentage is calculated by dividing the used memory by the total memory configured for the JVM. If the percentage is below the threshold, Data Analyzer continues with the requested operation. If the percentage is above the threshold, Data Analyzer displays an error and notifies the user about the low memory condition.
Default is 95.

queryengine.estimation.window Number of days used to estimate the query execution time for a particular report. Data Analyzer estimates the execution time for a report by averaging all execution times for that report during this estimation window.
Default is 30.


ReportingService.batchsize Number of users that the PowerCenter Service Manager processes in a batch. During synchronization, the Service Manager copies the users from the domain configuration database to the Data Analyzer repository in batches. The Service Manager considers the value set for this property as the batch size to copy the users. You can add this property to DataAnalyzer.properties and set the value of the batch size.
Default is 100.

report.maxRowsPerTable Maximum number of rows to display for each page or section for a report on the Analyze tab.
Default is 65.

report.maxSectionSelectorValues Maximum number of attribute values users can select for a sectional report table. If a report has more sections than the value set for this property, Data Analyzer displays all sections on the Analyze tab.
Default is 300.

report.maxSectionsPerPage Maximum number of sectional tables to display per page on the Analyze tab. If a report contains more sectional tables than this number, Data Analyzer displays the sections on multiple pages.
Default is 15.

report.showSummary Determines whether Data Analyzer displays the Summary section in a sectional report table when you email a report from the Find tab or when you use the Data Analyzer API to generate a report. Set to true to display the Summary section and hide the Grand Totals section on the Analyze tab, in reports emailed from the Find tab, and in reports generated by the Data Analyzer API. Set to false to display both the Summary and Grand Totals sections on the Analyze tab but hide these sections in reports emailed from the Find tab and in reports generated by the Data Analyzer API.
Default is false.

report.userReportDisplayMode Determines the default tab on which Data Analyzer opens a report when users double-click a report on the Find tab. Possible values are view or analyze. Users can change this default report view by editing their Report Preferences on the Manage Account tab.
Default is view.

securityadapter.frequency Determines the number of minutes between synchronization of the Data Analyzer user list. This property specifies the interval between the end of the last synchronization and the start of the next synchronization. If the value is not an increment of 5, Data Analyzer rounds the value up to the next value divisible by 5. If you set the time interval to 0, Data Analyzer disables all user list synchronization, including synchronization at startup.
Default is 720 minutes (12 hours).
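The rounding rule above can be modeled in a few lines. This is an illustration of the described behavior, not Data Analyzer code:

```python
def effective_sync_interval(minutes):
    """Model securityadapter.frequency: 0 disables all user list
    synchronization; any other value is rounded up to the next
    multiple of 5 minutes."""
    if minutes == 0:
        return 0  # synchronization disabled, including at startup
    return -(-minutes // 5) * 5  # ceiling division by 5, scaled back up
```

So a configured value of 7 behaves as 10 minutes, while 720 is already a multiple of 5 and is used as-is.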

securityadapter.syncOnSystemStart Determines whether Data Analyzer synchronizes the user list at startup. If true, Data Analyzer synchronizes the user list when it starts. If the property is not set, or is set to false, Data Analyzer does not synchronize the user list at startup.
Default is true.

servlet.compress Determines whether the servlet compresses files. Set to true to enable servlet compression. Set to false to disable. Set to false only if you see problems with compressed content.
Default is true.


servlet.compress.jscriptContentEncoding Determines whether the servlet compresses JavaScript loaded by &lt;script&gt; tags through content-encoding for browsers that support this compression. Set to true to enable servlet compression of JavaScript. Set to false to disable. Set to false only if you see problems with compressed JavaScript.
Default is true.

servlet.useCompressionThroughProxies Determines whether the server verifies that the browser contains an Accept-Encoding header and thus supports compression before sending a compressed response. Set to false to force the server to check if the browser can handle compression before sending compressed files. Set to true to allow the server to send compressed files without checking for browser support. Set to true only if all browsers used by Data Analyzer users support compression.
Default is false.

TimeDimension.useDateConversionOnPrimaryDate Applicable to the following types of time dimension:
- Date only
- Date and time in separate tables
- Date and time as separate attributes in same table
Determines whether Data Analyzer converts a primary date column from date and time to date before using the primary date in SQL queries with date field comparisons.
For example, the data source is DB2, you define a Date Only time dimension, and this property is set to the default value of false. Data Analyzer uses the primary date in date comparisons without any date conversion. However, if the datatype of the primary date column in the table is TIMESTAMP, DB2 generates an error when Data Analyzer compares the primary date column with another column that has a DATE datatype. In this case, a date conversion is necessary to avoid SQL errors. To ensure that Data Analyzer always converts the primary date column to DATE before using it in date comparisons, set this property to true. The date conversion ensures that Data Analyzer accurately compares dates, but can affect performance. Set this property to false if the primary date is stored in a DATE column and date conversion is not necessary.
Default is false.

uiconfig.ConfigurationName.ShowHeader Determines whether to display the header section for the Data Analyzer pages, including the logo, navigation bar, help, and logout links, for the given user interface configuration. Set to false to hide the header section. Set to true to display the header section. Setting ShowHeader to false implicitly sets ShowNav to false.

uiconfig.ConfigurationName.ShowNav Determines whether to display the Data Analyzer navigation bar for the given configuration. Set to false to hide the navigation bar. Set to true to display the navigation bar.

Properties in infa-cache-service.xml
A cache is a memory area where frequently accessed data can be stored for rapid access. The
Cache.GlobalCaching property in DataAnalyzer.properties determines whether global caching is enabled for
Data Analyzer. For more information about enabling global caching, see Properties in
DataAnalyzer.properties on page 126.


When global caching is enabled, Data Analyzer creates a global cache in memory for repository objects accessed
by Data Analyzer users. When a user first accesses an object, for example, a report or dashboard, Data Analyzer
retrieves the object from the repository and then stores the object in memory. The next time a user accesses the
same object, Data Analyzer retrieves the object from the global cache instead of the repository.
If a user updates an object that exists in the global cache, Data Analyzer removes the object from the cache and
then saves the updated object to the repository. The next time a user accesses the updated object, Data Analyzer
retrieves the object from the repository.
Data Analyzer stores data in the global cache in a hierarchical tree structure consisting of nodes. A node
contains the data for a single cached object.
When global caching is enabled, the properties in infa-cache-service.xml determine how the global cache is
configured. Use infa-cache-service.xml to configure the following global cache features:
Lock acquisition timeout
Eviction policy
If you disable global caching in the Cache.GlobalCaching property in DataAnalyzer.properties, Data Analyzer
ignores the properties in infa-cache-service.xml.
Data Analyzer uses JBoss Cache to maintain the global cache for Data Analyzer. Although infa-cache-service.xml contains a number of properties to support the global cache, only the properties documented in this section are supported by Data Analyzer. Changes to the default values of the unsupported properties may generate unexpected results. For more information about JBoss Cache, see the JBoss Cache documentation library.

Configuring the Lock Acquisition Timeout


The global cache uses an optimistic node locking scheme to prevent Data Analyzer from encountering
deadlocks. When a user modifies an object that exists in the global cache, Data Analyzer acquires a lock on the
object node when it commits the update or delete transaction to the repository. When the transaction
completes, Data Analyzer releases the lock on the object node.
The LockAcquisitionTimeout attribute in infa-cache-service.xml determines how long Data Analyzer attempts
to acquire a lock on an object node. If Data Analyzer cannot acquire a lock during this time period, it rolls back
the transaction and displays an appropriate message to the user.
Data Analyzer may not be able to acquire a lock on an object node in the global cache under the following
conditions:
Another user or background process has locked the same object node.
Data Analyzer has lost the connection to the repository.
If Data Analyzer frequently rolls back transactions due to lock acquisition timeouts, you can increase the value
of the LockAcquisitionTimeout attribute. By default, the LockAcquisitionTimeout attribute is set to 10,000
milliseconds.
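Based on the attribute name and default value described above, the entry in infa-cache-service.xml might look like the following (the surrounding element structure is an assumption; only the attribute name and the 10,000-millisecond default come from this section):

```xml
<!-- Increase the lock acquisition timeout from the default 10,000 ms
     to 20,000 ms if transactions are frequently rolled back. -->
<attribute name="LockAcquisitionTimeout">20000</attribute>
```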

To configure the lock acquisition timeout:

1. Locate infa-cache-service.xml in the Data Analyzer installation directory.
2. Open the infa-cache-service.xml file with a text editor.
3. Locate the following text:
   <attribute name="LockAcquisitionTimeout">10000</attribute>
4. Change the attribute value according to your requirements.
5. Save and close infa-cache-service.xml.


Configuring the Eviction Policy
To manage the size of the global cache, Data Analyzer uses an eviction policy to remove the least recently used
objects from the cache when the cache approaches its memory limit. The eviction policy works on regions of
the global cache. Each global cache region contains the cached data for a particular object type. For example,
the /Reports region contains all cached reports.
Infa-cache-service.xml defines the following global cache regions:
/Dashboards. Dashboard definitions.
/Trees. Content folder definitions in the Find tab.
/Reports/User. User specific objects defined for reports. For example, indicators, gauges, and highlighting
rules added by each user.
/Reports/Variables. Global variables used in reports.
/Reports. Report definitions.
/Security. Access permissions on an object and data restrictions defined for users or groups.
/Users. User profiles, group definitions, and role definitions.
/Attributes. Attribute definitions.
/Metrics. Metric definitions.
/Time. Current time values for calendar and time dimension definitions.
/DataConnectors. Data connector definitions.
/DataSources. Data source definitions.
/Schemas. Operational, hierarchical, and analytic schemas. Calendar and time dimension definitions.
/System. Administrative system settings. For example, color schemes, logs, delivery settings, and contact
information.
/_default_. Default region if an object does not belong to any of the other defined regions.
Each global cache region defined in infa-cache-service.xml includes several eviction policy attributes. You can
modify these attributes to customize when Data Analyzer removes objects from the global cache. You can
configure a different eviction policy for each region so that Data Analyzer caches more or fewer objects of a
particular type. For example, if a large number of concurrent users frequently access dashboards but not reports,
you can increase the maximum number of dashboards and decrease the maximum number of reports that Data
Analyzer stores in the global cache.
Table B-2 lists the eviction policy attributes you can configure for the global cache:

Table B-2. Eviction Policy Attributes

Attribute Description

wakeUpIntervalSeconds Frequency in seconds that Data Analyzer checks for objects to remove from
the global cache. You can decrease this value to have Data Analyzer run
the eviction policy more frequently.
Default is 60 seconds.

maxNodes Maximum number of objects stored in the specified region of the global
cache. Set the value to 0 to have Data Analyzer cache an infinite number of
objects. Data Analyzer writes informational messages to a global cache log
file when a region approaches its maxNodes limit.
Default varies for each region.


timeToLiveSeconds Maximum number of seconds an object can remain idle in the global cache.
Defined for each region of the global cache. Set the value to 0 to define no
time limit. Default varies for each region.
By default, infa-cache-service.xml defines an idle time limit only for regions
that contain user specific data. For example, the /Users region has a
timeToLiveSeconds value of 1,800 seconds (30 minutes). Data Analyzer
removes cached user data if it has not been accessed for 30 minutes. If
Data Analyzer runs on a machine with limited memory, you can define idle
time limits for the other regions so that Data Analyzer removes objects from
the cache before the maxNodes limit is reached.

maxAgeSeconds Maximum number of seconds an object can remain in the global cache.
Defined for each region of the global cache. Set the value to 0 to define no
time limit. Default varies for each region.
By default, infa-cache-service.xml defines a maximum age limit for only the
/_default_ region. If Data Analyzer runs on a machine with limited memory,
you can define maximum age limits for the other regions so that Data
Analyzer removes objects from the cache before the maxNodes limit is
reached.

Data Analyzer checks for objects to remove from the global cache at the following times:
The wakeUpIntervalSeconds time period ends. Data Analyzer removes objects that have reached the
timeToLiveSeconds or maxAgeSeconds limits.
A global cache region reaches its maxNodes limit. Data Analyzer removes the least recently used object
from the region. Data Analyzer also removes objects from any region that have reached the
timeToLiveSeconds or maxAgeSeconds limits.
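The interaction of maxNodes, timeToLiveSeconds, and the wakeUpIntervalSeconds sweep can be illustrated with a toy model of a single cache region. This is a sketch of the behavior described above, not JBoss Cache itself:

```python
from collections import OrderedDict
import time

class RegionCache:
    """Toy model of one global cache region: evict the least recently
    used node when maxNodes is exceeded, and drop nodes whose idle time
    passes timeToLiveSeconds when the eviction timer fires."""

    def __init__(self, max_nodes, time_to_live=0):
        self.max_nodes = max_nodes
        self.ttl = time_to_live          # 0 means no idle time limit
        self.nodes = OrderedDict()       # name -> last-access timestamp

    def access(self, name, now=None):
        now = time.time() if now is None else now
        self.nodes.pop(name, None)
        self.nodes[name] = now           # most recently used goes to the end
        if self.max_nodes and len(self.nodes) > self.max_nodes:
            self.nodes.popitem(last=False)   # evict least recently used

    def sweep(self, now=None):
        """What the wakeUpIntervalSeconds timer would do: remove idle nodes."""
        now = time.time() if now is None else now
        if self.ttl:
            for name, ts in list(self.nodes.items()):
                if now - ts > self.ttl:
                    del self.nodes[name]
```

Accessing a third object in a two-node region evicts the oldest entry immediately; a later sweep removes whatever has sat idle past the time-to-live.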

To configure the eviction policy:

1. Locate infa-cache-service.xml in the Data Analyzer installation directory.
2. Open the infa-cache-service.xml file with a text editor.
3. Locate the following text:
   <attribute name="wakeUpIntervalSeconds">60</attribute>
4. Change the value of the wakeUpIntervalSeconds attribute according to your requirements.
5. Locate the region whose eviction policy you want to modify.
   For example, to locate the /Dashboards region, locate the following text:
   <region name="/Dashboards">
6. Change the attribute values for the region according to your requirements. For example, modify the values of the maxNodes, timeToLiveSeconds, and maxAgeSeconds attributes within the region.
7. Repeat steps 5 to 6 for each of the global cache regions whose eviction policy you want to modify.
8. Save and close infa-cache-service.xml.


Properties in web.xml
The web.xml file contains configuration settings for Data Analyzer. You can modify this file to customize the
operation of an instance of Data Analyzer. Although the web.xml file contains a number of settings, you
typically modify only specific settings in the file.
Table B-3 describes the properties in web.xml that you can modify:

Table B-3. Properties in web.xml

Property Description

enableGroupSynchronization If you use LDAP authentication, this property determines whether Data
Analyzer updates the groups in the repository when it synchronizes the list of
users and groups in the repository with the LDAP directory service. By default,
during synchronization, Data Analyzer deletes the users and groups in the
repository that are not found in the LDAP directory service. If you want to keep
user accounts in the LDAP directory service but keep the groups in the Data
Analyzer repository, set this property to false so that Data Analyzer does not
delete or add groups to the repository during synchronization.
When this property is set to false, Data Analyzer synchronizes only user
accounts, not groups. You must maintain the group information within Data
Analyzer.
Default is true.

login-session-timeout Session timeout, in minutes, for an inactive session on the Login page. If the
user does not successfully log in and the session remains inactive for the
specified time period, the session expires. After the user successfully logs in,
Data Analyzer resets the session timeout to the value of the session-timeout
property.
Default is 5.

searchLimit Maximum number of groups or users Data Analyzer displays in the search
results before requiring you to refine your search criteria.
Default is 1000.

session-timeout Session timeout, in minutes, for an inactive session. Data Analyzer terminates
sessions that are inactive for the specified time period.
Default is 30.

showSearchThreshold Maximum number of groups or users Data Analyzer displays before displaying
the Search box so you can find a group or user.
Default is 100.

TemporaryDir Directory where Data Analyzer stores temporary files. The directory must be a
shared file system that all servers in the cluster can access. If you specify a
new directory, Data Analyzer creates the directory in the following default
directory:
<PowerCenter_InstallationDirectory>/services/ReportingService/jboss/bin/
To specify a path, use the forward slash (/) or two backslashes (\\) as the file
separator. Data Analyzer does not support a single backslash as a file
separator. You can specify a full directory path such as D:/temp/DA.
Default is tmp_ias_dir.
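As a sketch, the standard servlet deployment descriptor elements for the session timeout in Table B-3 might look like the following (the element layout is the generic web.xml form; the exact structure of Data Analyzer's web.xml may differ):

```xml
<!-- Hypothetical excerpt: raising the inactive session timeout from
     the default 30 minutes to 60. -->
<session-config>
    <session-timeout>60</session-timeout>
</session-config>
```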


INDEX

A B
access permissions background image URL
change permission 12 background image location 73
creating 12 business days
defined 11 default 27
Delete permission 12 setting 27
exclusive 12
inclusive 12
read permission 11 C
schedules 22
cache
setting 7, 11
global S cache
using wildcards 12
Cache.GlobalCaching property
write permission 11
configuring 127
activity log
Cache.Report.Subscription.NoOfDaysToExpire property
configuring maximum rows 78, 131
configuring 127
saving 77
cached reports
viewing and clearing 77
adding administrative reports to schedules 92
administrative reports
attaching to schedule after importing 24
adding to schedules 92
importing 53
Administrators Dashboard 89
Calendar
description 89
business days 27
list and description 93
daily view 26
public folder 90
holidays 27
setting up 90
leap years 26
Administrators Dashboard
monthly view 26
dashboard for administrative reports 89
viewing 26
AIX
weekly view 26
performance tuning 101
change permission
alert.fromaddress property
access S permissions
configuring 127
Chart.Fontname property
alerts
configuring 127
modifying From email address 127
Chart.Fontsize property
analytic workflows
configuring 128
S
Chart.MaxDataPoints property
importing reports 52
configuring 128
AND operator
Chart.Minfontsize property
multiple data restrictions 14
configuring 128
api.compatibility.level property
clearing
configuring 127
activity log 77
application server
event-based schedule histories 32
description 2
time-based schedule histories 22
arguments
user log 76
Import Export utility 64
color schemes
attaching
assigning 76
imported reports to event-based schedule 35
background image URL 73
reports to event-based schedule 30
creating 74
customizing 72, 114
images directory 73

139
  list of color codes 117
  login page image URL 73
  logo image URL 73
  primary 73
  primary navigation 74
  secondary 73
  secondary navigation 74
  selecting 75
  using default 72
  viewing 75
compression.alwaysCompressMimeTypes property
  configuring 128
compressionFilter.compressableMimeTypes property
  configuring 128
compressionFilter.compressThreshold property
  configuring 128
configuration files
  DataAnalyzer.properties 126
  infa-cache-service.xml 133
  web.xml 137
contact information
  specifying for system administrator 82
creating
  event-based schedules 30
  holidays 27
  time-based schedules 20
CustomLayout.MaximumNumberofColumns property
  configuring 128

D
daily view
  Calendar 26
dashboards
  exporting 42
  importing 55
data access
  restricting 8, 14
Data Analyzer
  performance tuning 107
data restrictions
  AND operator 14
  by fact table 15
  by user or group 17
  deleting 17, 18
  exporting 43
  importing 57
  OR operator 14
data sources
  creating 90
  creating for Metadata Reporter 90
  description 3
data warehouses
  performance tuning 95
DataAnalyzer.properties
  configuring 126
datamart.defaultRowPrefetch property
  configuring 128
datamart.transactionIsolationLevel property
  configuring 129
DataRestriction.OldBehavior property
  configuring 129
datatype.CLOB.datalength property
  configuring 129
date/time formats
  in localization 6
DB2 database
  performance tuning 97
default color scheme
  using 72
delete permission
  See access permissions
deleting
  data restrictions 17, 18
  event-based schedule histories 32
  event-based schedules 33
  scheduled reports 25, 35
  time-based schedule histories 22
  time-based schedules 23
disabling
  event-based schedules 33
  time-based schedules 23

E
enableGroupSynchronization property
  configuring 137
error messages
  Import Export utility 67
event-based schedules
  access permissions 22
  attaching imported reports 35
  attaching reports 30
  creating 30
  defined 19
  disabling 33
  enabling 33
  histories 32
  managing reports 33
  removing 33
  schedule monitoring 27
  starting immediately 32
  stopping 33
  stopping immediately 28
  using PowerCenter Integration utility 31
exclusive permissions
  See access permissions
exporting Data Analyzer objects
  dashboards 42
  data restrictions 43
  global variables 42
  group security profile 43
  metrics 38
  overview 37
  reports 40
  security profile 43
  time dimension tables 40
  user security profile 43
  using Import Export utility 63
external URL
  defined 81
  registering 81
F
fact tables
  restricting data access 15
footers
  configuring report footers 85
  display options 85

G
global cache
  configuring 133
  eviction policy 135
  lock acquisition timeout 134
  sizing 135
global variables
  exporting 42
  importing 54
GroupBySuppression.GroupOnAttributePair property
  configuring 130
groups
  displaying 88
  removing from the repository 8, 9, 10
  restricting data access 17
  searchLimit parameter 88, 137
  showSearchThreshold parameter 88, 137

H
header section
  UI configuration 114
headers
  configuring report headers 85
  display options 85
heap size
  importing large XML files 61
help.files.url property
  configuring 130
histories
  clearing 32
  clearing schedule 22
holidays
  creating 27
host.url property
  configuring 130
HP-UX
  performance tuning 98

I
images directory
  color scheme location 73
Import Export utility
  error messages 67
  format 64
  options and arguments 64
  repository objects 66
  running 64
  using 63
import.transaction.timeout.seconds property
  configuring 130
imported reports
  attaching to schedules 24
importing
  dashboards 55
  data in multiple languages 5
  data restrictions 57
  global variables 54
  group security profile 58
  large XML files 60
  overview 47
  reports 52
  schema objects 48
  security profile 57
  transaction timeout 60, 130
  user security profile 57
  using Import Export utility 63
inclusive permissions
  See access permissions
Indicator.pollingIntervalSeconds property
  configuring 130
infa-cache-service.xml file
  configuring 133

J
Java environment
  viewing 82
JBoss Application Server
  description 2
JDBC
  log file 79, 130
jdbc.log.append property
  configuring 130
jdbc.log.file property
  configuring 131

L
language settings
  backing up and restoring Data Analyzer repositories 5
  Data Analyzer repository 5
  data warehouse 5
  import and export repository objects 5
  importing table definitions 5
language support
  display 5
LDAP authentication
  server settings 79
  synchronizing user list 132
leap years
  Calendar 26
Linux
  performance tuning 97
localization
  Data Analyzer display language 5
  date and number formats 6
  displaying reports in Chinese or Japanese when exporting to PDF 6
  language settings 5
  overview 5
  setting metric or attribute default values 5
log files
  JDBC 79
  managing 76
logging.activity.maxRowsToDisplay property
  configuring 78, 131
logging.user.maxRowsToDisplay property
  configuring 77, 131
login page image URL
  login page image location 73
login-session-timeout property
  configuring 137
logo image
  customizing 72
  logo image location 73

M
mail servers
  configuring 81
Maps.Directory property
  configuring 131
metrics
  exporting 38
  importing 48
Microsoft SQL Server 2000
  performance tuning 97
monitoring
  schedules 27
monthly view
  Calendar 26
multiple instances of Data Analyzer
  configuration files 125

N
navigation
  color schemes 74
navigation bar
  UI configuration 114
notifyias
  using in PowerCenter post-session command 31

O
operating system
  performance tuning 97
  viewing 82
operational schemas
  setting data restrictions 15
operators
  AND 14
  OR 14
options
  Import Export utility 64
OR operator
  multiple data restrictions 14
Oracle
  performance tuning 96

P
PDF.HeaderFooter.ShrinkToWidth property
  configuring 131
  using 86, 131
performance tuning
  AIX 101
  Data Analyzer processes 107
  database 95
  DB2 database 97
  HP-UX 98
  Linux 97
  Microsoft SQL Server 2000 97
  operating system 97
  Oracle database 96
  Solaris 99
  Windows 102
permissions
  See also access permissions
  setting 7
post-session command
  using the PowerCenter Integration utility 31
PowerCenter Integration utility
  using in a post-session command 31
PowerCenter Workflow Manager
  using the PowerCenter Integration utility 31
predefined color scheme
  using 72
previewing
  report headers and footers 87
primary display item
  color scheme 73
properties
  defining in DataAnalyzer.properties 126
  defining in infa-cache-service.xml 133
  defining in web.xml 137
providerContext.abortThresHold property
  configuring 131
providerContext.maxInMemory property
  configuring 131

Q
queries
  setting rules 83
query governing
  query time limit 83
  report processing time limit 83
  row limit 83
  setting rules 83
  specifying for users 10
query time limit
  defined 83
queryengine.estimation.window property
  configuring 131

R
read permissions
  See access permissions
recurring schedules
  See time-based schedules
removing
  See deleting
report processing time limit
  defined 83
report.maxRowsPerTable property
  configuring 132
report.maxSectionSelectorValues property
  configuring 132
report.maxSectionsPerPage property
  configuring 132
report.showSummary property
  configuring 132
report.userReportDisplayMode property
  configuring 132
ReportingService.batchsize
  configuring 132
reports
  adding administrative reports to schedules 92
  administrative reports overview 89
  attached to time-based schedules 23
  attaching imported reports to event-based schedule 35
  attaching to event-based schedule 30
  attaching to schedule after importing 24
  deleting from time-based schedules 25
  displaying scroll bars in tables 85
  exporting Data Analyzer objects 40
  header and footer display options 85
  importing 52
  in event-based schedules 33
  list of administrative reports 93
  previewing headers and footers 87
  removing from event-based schedules 35
  setting headers and footers 85
  viewing in event-based schedule 34
  viewing properties 25
repository database
  performance tuning 95
restore
  repository language settings 5
row limit
  query governing 83
row-level security
  restricting data access 14
running
  Import Export utility 64

S
saving
  activity log 77
  system log 78
  user log 76
schedule monitoring
  defined 27
scheduled reports
  deleting 25
  viewing 24, 34
schedules
  See event-based schedules
  See time-based schedules
  attaching imported reports to schedules 24
  removing for cached administrative reports 92
  stopping 28
scheduling
  business days 27
  Calendar 26
  holidays 27
schemas
  restricting data access 15
scroll bars
  report table option 85
searchLimit property
  configuring 137
secondary display item
  color schemes 73
security
  access permissions 11
security profiles
  exporting 43
  exporting user 43
  group 43
  importing 57
  importing group 58
  importing user 57
securityadapter.frequency property
  configuring 132
securityadapter.syncOnSystemStart property
  configuring 132
servlet.compress property
  configuring 132
servlet.compress.jscriptContentEncoding property
  configuring 133
servlet.useCompressionThroughProxies property
  configuring 133
session-timeout property
  configuring 137
showSearchThreshold property
  configuring 137
single sign-on
  with Data Analyzer API 114
single-event schedules
  See time-based schedules
Solaris
  performance tuning 99
SQL queries
  row limit 83
  setting rules 83
  time limit 83
starting
  event-based schedules 32
  time-based schedules 22
stopping
  event-based schedules 33
  running schedules 28
  time-based schedules 23
synchronization scheduler
  customizing 132
system administrator
  using Import Export utility 63
system information
  viewing 82
system log
  configuring 78
  saving 78
  viewing 78

T
tasks
  properties 25
temporary table space
  importing large XML files 61
TemporaryDir property
  configuring 137
time dimension tables
  exporting Data Analyzer objects 40
time-based schedules
  access permissions 22
  creating 20
  defined 19
  deleting 23
  disabling and enabling 23
  histories 22
  managing reports 23
  schedule monitoring 27
  starting immediately 22
  stopping immediately 23
  viewing the Calendar 26
TimeDimension.useDateConversionOnPrimaryDate property
  configuring 133
timeout
  changing default for transactions 60
  configuring for Data Analyzer session 4
transaction timeout
  changing the default 60

U
UI configuration
  default 114
  properties 127
  setting up 114, 127
  URL parameter 115
UICONFIG
  URL parameter 115
URL
  background image for color schemes 73
  login page image for color schemes 73
  logo image for color schemes 73
URL API
  using 113
user log
  configuring maximum rows 77, 131
  saving 76
  viewing and clearing 76
users
  displaying 88
  restricting data access 17
  searchLimit parameter 88, 137
  showSearchThreshold parameter 88, 137
UTF-8 character encoding
  Data Analyzer support 5

V
viewing
  activity log 77
  histories for event-based schedules 32
  reports attached to event-based schedules 34
  reports attached to time-based schedules 24
  system information 82
  system log 78
  time-based schedule histories 22
  user log 76

W
web.xml
  configuring 137
weekly view
  Calendar 26
wildcards
  searching user directory 12
Windows
  performance tuning 102
work days
  scheduling 27
write permissions
  See access permissions

X
XML files
  heap size for application 61
  importing large files 60
  temporary table space 61