Volume 1:
Tuning, Monitoring, and Troubleshooting your
MicroStrategy Business Intelligence System
Version: 9.0.1
Document Number: 09470901
Fifteenth Edition, January 2010, version 9.0.1
To ensure that you are using the documentation that corresponds to the software you are licensed to use, compare this version number
with the software version shown in “About MicroStrategy...” in the Help menu of your software.
If you have not executed a written or electronic agreement with MicroStrategy or any authorized MicroStrategy distributor, the following
terms apply:
This software and documentation are the proprietary and confidential information of MicroStrategy Incorporated and may not be
provided to any other person. Copyright © 2001-2010 by MicroStrategy Incorporated. All rights reserved.
THIS SOFTWARE AND DOCUMENTATION ARE PROVIDED “AS IS” AND WITHOUT EXPRESS OR LIMITED WARRANTY OF ANY
KIND BY EITHER MICROSTRATEGY INCORPORATED OR ANYONE WHO HAS BEEN INVOLVED IN THE CREATION,
PRODUCTION, OR DISTRIBUTION OF THE SOFTWARE OR DOCUMENTATION, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE AND
NONINFRINGEMENT, QUALITY OR ACCURACY. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
SOFTWARE AND DOCUMENTATION IS WITH YOU. SHOULD THE SOFTWARE OR DOCUMENTATION PROVE DEFECTIVE,
YOU (AND NOT MICROSTRATEGY, INC. OR ANYONE ELSE WHO HAS BEEN INVOLVED WITH THE CREATION, PRODUCTION,
OR DISTRIBUTION OF THE SOFTWARE OR DOCUMENTATION) ASSUME THE ENTIRE COST OF ALL NECESSARY
SERVICING, REPAIR, OR CORRECTION. SOME STATES DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO
THE ABOVE EXCLUSION MAY NOT APPLY TO YOU.
In no event will MicroStrategy, Inc. or any other person involved with the creation, production, or distribution of the Software be liable
to you on account of any claim for damage, including any lost profits, lost savings, or other special, incidental, consequential, or
exemplary damages, including but not limited to any damages assessed against or paid by you to any third party, arising from the use,
inability to use, quality, or performance of such Software and Documentation, even if MicroStrategy, Inc. or any such other person or
entity has been advised of the possibility of such damages, or for the claim by any other party. In addition, MicroStrategy, Inc. or any
other person involved in the creation, production, or distribution of the Software shall not be liable for any claim by you or any other
party for damages arising from the use, inability to use, quality, or performance of such Software and Documentation, based upon
principles of contract warranty, negligence, strict liability for the negligence of indemnity or contribution, the failure of any remedy to
achieve its essential purpose, or otherwise. The entire liability of MicroStrategy, Inc. and your exclusive remedy shall not exceed, at
the option of MicroStrategy, Inc., either a full refund of the price paid, or replacement of the Software. No oral or written information
given out expands the liability of MicroStrategy, Inc. beyond that specified in the above limitation of liability. Some states do not allow
the limitation or exclusion of liability for incidental or consequential damages, so the above limitation may not apply to you.
The information contained in this manual (the Documentation) and the Software are copyrighted and all rights are reserved by
MicroStrategy, Inc. MicroStrategy, Inc. reserves the right to make periodic modifications to the Software or the Documentation without
obligation to notify any person or entity of such revision. Copying, duplicating, selling, or otherwise distributing any part of the Software
or Documentation without prior written consent of an authorized representative of MicroStrategy, Inc. are prohibited. U.S. Government
Restricted Rights. It is acknowledged that the Software and Documentation were developed at private expense, that no part is public
domain, and that the Software and Documentation are Commercial Computer Software provided with RESTRICTED RIGHTS under
Federal Acquisition Regulations and agency supplements to them. Use, duplication, or disclosure by the U.S. Government is subject
to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause at DFAR
252.227-7013 et seq. or subparagraphs (c)(1) and (2) of the Commercial Computer Software—Restricted Rights at FAR 52.227-19,
as applicable. Contractor is MicroStrategy, Inc., 1861 International Drive, McLean, Virginia 22102. Rights are reserved under copyright
laws of the United States with respect to unpublished portions of the Software.
The following are either trademarks or registered trademarks of MicroStrategy Incorporated in the United States and certain other
countries:
MicroStrategy, MicroStrategy 6, MicroStrategy 7, MicroStrategy 7i, MicroStrategy 7i Evaluation Edition, MicroStrategy 7i Olap
Services, MicroStrategy 8, MicroStrategy 9, MicroStrategy Distribution Services, MicroStrategy MultiSource Option, MicroStrategy
Command Manager, MicroStrategy Enterprise Manager, MicroStrategy Object Manager, MicroStrategy Reporting Suite,
MicroStrategy Power User, MicroStrategy Analyst, MicroStrategy Consumer, MicroStrategy Email Delivery, MicroStrategy BI Author,
MicroStrategy BI Modeler, MicroStrategy Evaluation Edition, MicroStrategy Administrator, MicroStrategy Agent, MicroStrategy
Architect, MicroStrategy BI Developer Kit, MicroStrategy Broadcast Server, MicroStrategy Broadcaster, MicroStrategy Broadcaster
Server, MicroStrategy Business Intelligence Platform, MicroStrategy Consulting, MicroStrategy CRM Applications, MicroStrategy
Customer Analyzer, MicroStrategy Desktop, MicroStrategy Desktop Analyst, MicroStrategy Desktop Designer, MicroStrategy eCRM
7, MicroStrategy Education, MicroStrategy eTrainer, MicroStrategy Executive, MicroStrategy Infocenter, MicroStrategy Intelligence
Server, MicroStrategy Intelligence Server Universal Edition, MicroStrategy MDX Adapter, MicroStrategy Narrowcast Server,
MicroStrategy Objects, MicroStrategy OLAP Provider, MicroStrategy SDK, MicroStrategy Support, MicroStrategy Telecaster,
MicroStrategy Transactor, MicroStrategy Web, MicroStrategy Web Business Analyzer, MicroStrategy World, Alarm, Alarm.com,
Alert.com, Angel, Angel.com, Application Development and Sophisticated Analysis, Best In Business Intelligence, Centralized
Application Management, Changing The Way Government Looks At Information, DSSArchitect, DSS Broadcaster, DSS Broadcaster
Server, DSS Office, DSSServer, DSS Subscriber, DSS Telecaster, DSSWeb, eBroadcaster, eCaster, eStrategy, eTelecaster,
Information Like Water, Insight Is Everything, Intelligence Through Every Phone, Your Telephone Just Got Smarter, Intelligence To
Every Decision Maker, Intelligent E-Business, IWAPU, Personal Intelligence Network, Personalized Intelligence Portal, Query Tone,
Quickstrike, Rapid Application Development, Strategy.com, Telepath, Telepath Intelligence, Telepath Intelligence (and Design),
MicroStrategy Intelligent Cubes, The E-Business Intelligence Platform, The Foundation For Intelligent E-Business, The Integrated
Business Intelligence Platform Built For The Enterprise, The Intelligence Company, The Platform For Intelligent E-Business, The
Power Of Intelligent eBusiness, The Power Of Intelligent E-Business, The Scalable Business Intelligence Platform Built For The
Internet, Industrial-Strength Business Intelligence, Office Intelligence, MicroStrategy Office, MicroStrategy Report Services,
MicroStrategy Web MMT, MicroStrategy Web Services, Pixel Perfect, MicroStrategy Mobile, MicroStrategy Integrity Manager and
MicroStrategy Data Mining Services are all registered trademarks or trademarks of MicroStrategy Incorporated.
All other products are trademarks of their respective holders. Specifications subject to change without notice. MicroStrategy is not
responsible for errors or omissions. MicroStrategy makes no warranties or commitments concerning the availability of future products
or versions that may be planned or under development.
Patent Information
This product is patented. One or more of the following patents may apply to the product sold herein: U.S. Patent Nos. 6,154,766,
6,173,310, 6,260,050, 6,263,051, 6,269,393, 6,279,033, 6,501,832, 6,567,796, 6,587,547, 6,606,596, 6,658,093, 6,658,432,
6,662,195, 6,671,715, 6,691,100, 6,694,316, 6,697,808, 6,704,723, 6,707,889, 6,741,980, 6,765,997, 6,768,788, 6,772,137,
6,788,768, 6,792,086, 6,798,867, 6,801,910, 6,820,073, 6,829,334, 6,836,537, 6,850,603, 6,859,798, 6,873,693, 6,885,734,
6,888,929, 6,895,084, 6,940,953, 6,964,012, 6,977,992, 6,996,568, 6,996,569, 7,003,512, 7,010,518, 7,016,480, 7,020,251,
7,039,165, 7,082,422, 7,113,993, 7,181,417, 7,127,403, 7,174,349, 7,194,457, 7,197,461, 7,228,303, 7,260,577, 7,266,181,
7,272,212, 7,302,639, 7,324,942, 7,330,847, 7,340,040, 7,356,758, 7,356,840, 7,415,438, 7,428,302, 7,430,562, 7,440,898,
7,457,397, 7,486,780, 7,509,671, 7,516,181, 7,559,048 and 7,574,376. Other patent applications are pending.
Various MicroStrategy products contain the copyrighted technology of third parties. This product may contain one or more of the
following copyrighted technologies:
Graph Generation Engine Copyright © 1998-2010. Three D Graphics, Inc. All rights reserved.
Actuate® Formula One. Copyright © 1993-2010 Actuate Corporation. All rights reserved.
XML parser Copyright © 2003-2010 Microsoft Corporation. All rights reserved.
Xalan XSLT processor. Copyright © 1999-2010. The Apache Software Foundation. All rights reserved.
Xerces XML parser. Copyright © 1999-2010. The Apache Software Foundation. All rights reserved.
FOP XSL formatting objects. Copyright © 2004-2010. The Apache Software Foundation. All rights reserved.
Portions of Intelligence Server memory management Copyright 1991-2010 Compuware Corporation. All rights reserved.
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. (http://www.openssl.org/)
International Components for Unicode
Copyright © 1999-2010 Compaq Computer Corporation
Copyright © 1999-2010 Hewlett-Packard Company
Copyright © 1999-2010 IBM Corporation
Copyright © 1999-2010 Hummingbird Communications Ltd.
Copyright © 1999-2010 Silicon Graphics, Inc.
Copyright © 1999-2010 Sun Microsystems, Inc.
Copyright © 1999-2010 The Open Group
All rights reserved.
Real Player and RealJukebox are included under license from Real Networks, Inc. Copyright © 1999-2010. All rights reserved.
CONTENTS
1. Introduction to MicroStrategy System Administration
   Introduction .......................................................... 1
   Best practices for MicroStrategy system administration ................ 2
   Understanding the MicroStrategy architecture .......................... 3
      Storing information: the data warehouse ............................ 4
      Indexing your data: MicroStrategy metadata ......................... 4
      Processing your data: Intelligence Server .......................... 6
      Tying it all together: projects and project sources ................ 7
   Communicating with databases .......................................... 7
      Connecting to the MicroStrategy metadata ........................... 8
      Connecting to the data warehouse ................................... 9
      Caching database connections ...................................... 10
      Benefiting from centralized database access control ............... 11
E. Diagnostics and Performance Logging
   About the Diagnostics and Performance Logging tool .................. 961
   Features you can trace and log ...................................... 961
Description of Guide
This chapter covers what users and groups are, what the different
authentication modes are and how to implement them, how to control access
to data at both the application and database levels, and how to control
access to application functionality. The examples section shows how
combinations of security features in both the MicroStrategy system and
in the database management systems can be used together.
This chapter covers making the system available to users. This includes
some best practices for deploying the system; how to implement easy
installation using SMS systems and silent installs; what
License Manager is and how to use it; and setting up security in the
MicroStrategy environment.
This chapter explains how you can make the system efficient and remove
load from Intelligence Server by using the caching and History List
features. It describes how caches work in the system, where they are
stored, what the matching requirements are for using a cache, how to
create pre-calculated data using aggregate tables, how to administer
caches, including how to invalidate them. It also describes what the
History List is, how it is used in both Web and Desktop, and how to
administer it.
You can return data from your data warehouse and save it to Intelligence
Server memory, rather than directly displaying the results in a report.
This data can then be shared as a single in-memory copy, among many
different reports created by multiple users. The reports created from the
shared sets of data are executed against the in-memory copy, also known
as an Intelligent Cube. This chapter provides details to understand and to
create Intelligent Cubes your users can access when they execute reports
and documents.
This chapter describes how you can automate certain MicroStrategy jobs
and administrative tasks. Methods of automation include scheduling jobs
and administrative tasks, using MicroStrategy Distribution Services to
distribute reports and documents, and using automated installation
techniques.
This chapter explains how you can use the monitors available in the
system to see the state of the system at any time (past or present). It
describes how Enterprise Manager can help do this by monitoring
statistics that can be logged.
This chapter provides information for you to find the balance that
maximizes the use of your system’s capacity to provide the best
performance possible for the required number of users.
Other examples in this book use the Analytics Modules, which include a set
of precreated sample reports, each from a different business area. Sample
reports present data for analysis in such business areas as financial
reporting, human resources, and customer analysis.
MicroStrategy 9.0.1
• Create cache update subscriptions for documents (see Scheduling reports
and documents: Subscriptions, page 361).
• Create update packages from the command line (see Creating an update
package from the command line, page 330).
Data population for reports, page 812, includes new options for
normalizing report data.
SQL Global Optimization, page 824, uses a new default setting. Refer
to the information on this VLDB property for important upgrading
best practices.
MicroStrategy 9.0
• Enable integrated authentication (see Enabling integrated
authentication, page 152) or single sign-on authentication (see Enabling
SSO to MicroStrategy Web with Tivoli or Site Minder, page 168).
• Save objects for copying later (see Copying objects in a batch: Update
packages, page 323).
• Store History List data in your database (see Configuring History List
data storage, page 238).
• Verify and compare report and document notes and PDFs, use expanded
prompt answer functionality with Integrity Manager (see the
MicroStrategy System Administration Guide, volume 2).
• New VLDB properties (see Details for all VLDB properties, page 664):
Prerequisites
Before working with this document, you should be familiar with the
information in the MicroStrategy Installation and Configuration Guide.
Resources
Documentation
MicroStrategy provides both manuals and online help; these two information
sources provide different types of information, as described below.
• Examples
• Checklists and high-level procedures to get started
Manuals
The following manuals are available from your MicroStrategy disk or the
machine where MicroStrategy was installed. The steps to access them are
below.
The best place for all users to begin is with the MicroStrategy Basic
Reporting Guide.
MicroStrategy Overview
• Introduction to MicroStrategy: Evaluation Guide
Concepts and high-level steps for using various administrative tools such
as MicroStrategy Command Manager, MicroStrategy Enterprise
Manager, MicroStrategy Integrity Manager, and MicroStrategy Health
Center.
To access the installed manuals and other documentation sources, see the
following procedures:
1 From the Windows Start menu, choose Programs (or All Programs),
MicroStrategy, then Product Manuals. A page opens in your browser
showing a list of available manuals in PDF format and other
documentation sources.
2 Click the link for the desired manual or other documentation source.
If bookmarks are not visible on the left side of an Acrobat (PDF)
manual, from the View menu click Bookmarks and Page. This step
varies slightly depending on your version of Adobe Acrobat Reader.
1 Within your UNIX or Linux machine, navigate to the directory where you
installed MicroStrategy. The default location is /opt/MicroStrategy,
or $HOME/MicroStrategy/install if you do not have write access to
/opt/MicroStrategy.
4 Click the link for the desired manual or other documentation source.
If bookmarks are not visible on the left side of an Acrobat (PDF)
manual, from the View menu click Bookmarks and Page. This step
varies slightly depending on your version of Adobe Acrobat Reader.
Help
• Help button: Use the Help button or ? (question mark) icon on most
software windows to see help for that window.
• Help menu: From the Help menu or link at the top of any screen, select
MicroStrategy Help to see the table of contents, the Search field, and the
index for the help system.
Documentation standards
MicroStrategy online help and PDF manuals (available both online and in
printed format) use standards to help you identify certain types of content.
The following table lists these standards.
Type Indicates
bold • Button names, check boxes, dialog boxes, options, lists, and menus that are the
focus of actions or part of a list of such GUI elements and their definitions
• Text to be entered by the user
Example: Click Select Warehouse.
Example: Type cmdmgr -f scriptfile.scp and press Enter.
italic • New terms defined within the text and in the glossary
• Names of other product manuals
• When part of a command syntax, indicates variable information to be replaced by the
user
Example: The aggregation level is the level of calculation for the metric.
Example: Type copy c:\filename d:\foldername\filename
Courier font • Calculations
• Code samples
• Registry keys
• Path and file names
• URLs
• Messages displayed in the screen
Example: Sum(revenue)/number of months.
+ A keyboard command that calls for the use of more than one key (for example,
SHIFT+F1)
A warning icon alerts you to important information, such as potential security
risks; read these warnings before continuing.
Education
MicroStrategy Education Services provides a comprehensive curriculum and
highly skilled education consultants. Many customers and partners from
over 800 different organizations have benefited from MicroStrategy
instruction.
Courses that can help you prepare for using this manual or that address some
of the information in this manual include:
For the most up-to-date and detailed description of education offerings and
course curricula, visit www.microstrategy.com/Education.
Consulting
MicroStrategy Consulting Services provides proven methods for delivering
leading-edge technology solutions. Offerings include complex security
architecture designs, performance and tuning, project and testing strategies
and recommendations, strategic planning, and more. For a detailed
International support
MicroStrategy supports several locales. Support for a locale typically includes
native database and operating system support, support for date formats,
numeric formats, currency symbols, and so on, and the availability of translated
interfaces and documentation. The level of support is defined in terms of the
components of a MicroStrategy Business Intelligence environment. A
MicroStrategy Business Intelligence environment consists of the following
components, collectively known as a configuration:
• Web browser
Technical Support
If you have questions about a specific MicroStrategy product, you should:
1 Consult the product guides, Help, and readme files. Locations to access
each are described above.
1 Verify that the issue is with MicroStrategy software and not with
third-party software.
6 Discuss the issue with other users by posting a question about the issue
on the MicroStrategy Customer Forum at
https://resource.microstrategy.com/forum/.
The following table shows where, when, and how to contact MicroStrategy
Technical Support. If your Support Liaison is unable to reach MicroStrategy
Technical Support by phone during the hours of operation, they can leave a
voicemail message, send email or fax, or log a case using the Online Support
Interface. The individual Technical Support Centers are closed on certain
public holidays.
Support Liaisons should contact the Technical Support Center from which
they obtained their MicroStrategy software licenses or the Technical Support
Center to which they have been designated.
• Personal information:
Name (first and last)
• Case details:
• Business/system impact
If this is the Support Liaison’s first call, they should also be prepared to
provide the following:
• Street address
• Phone number
• Fax number
• Email address
• Case number: Please keep a record of the number assigned to each case
logged with MicroStrategy Technical Support, and be ready to provide it
when inquiring about an existing case
• Case description:
What steps have you taken to isolate and resolve the issue? What were
the results?
Feedback
Please send any comments or suggestions about user documentation for
MicroStrategy products to:
documentationfeedback@microstrategy.com
support@microstrategy.com
When you provide feedback to us, please include the name and version of the
products you are currently using. Your feedback is important to us as we
prepare for future releases.
Introduction
• Use the project life cycle of development, testing, production to fully test
your reports, metrics, and other objects before releasing them to users.
For an in-depth explanation of the project life cycle, see The project life
cycle, page 288.
• The first tier, at the bottom, consists of two databases: the data
warehouse, which contains the information that your users analyze; and
the MicroStrategy metadata, which contains information about your
MicroStrategy projects. For an introduction to these databases, see
Storing information: the data warehouse, page 4 and Indexing your
data: MicroStrategy metadata, page 4.
If users on MicroStrategy Desktop connect via a two-tier project
source (also called a direct connection), they can access the data
warehouse without Intelligence Server. For more information on
two-tier project sources, see Tying it all together: projects and
project sources, page 7.
• The third tier in this system is the MicroStrategy Web Server, which
delivers the reports to the MicroStrategy Web client. For an introduction
to MicroStrategy Web Server, see Appendix C, Administering
MicroStrategy Web.
• The last tier is the MicroStrategy Web client, which provides documents
and reports to the users.
For more information about running the MicroStrategy Configuration
Wizard, see the MicroStrategy Installation and Configuration Guide.
To help explain how the MicroStrategy system uses the metadata to do its
work, imagine that a user runs a report with a total of revenue for a certain
region in a particular quarter of the year. The metadata stores information
about how the revenue metric is to be calculated, information about which
rows and tables in the data warehouse to use for the region, and the most
efficient way to retrieve the information.
• Application objects are the objects that are necessary to run reports.
These objects are generally created by a report designer, and can include
reports, report templates, filters, metrics, prompts, and so on. These
objects are built in MicroStrategy Desktop or Command Manager.
Metadata DSN
There are two types of project sources, defined based on the type of
connection they represent:
In older systems you may encounter a 6.x Project connection (also
two-tier), which connects directly to a MicroStrategy version 6
project in read-only mode.
You can also create and edit a project source using the Project Source
Manager in MicroStrategy Desktop. When you use the Project Source
Manager, you must specify the MicroStrategy Intelligence Server machine to
which to connect. It is through this connection that MicroStrategy Desktop
users retrieve metadata information.
A cached connection is used for a job if the following criteria are satisfied:
• The connection string for the cached connection matches the connection
string that will be used for the job.
• The driver mode (multiprocess versus multithreaded) for the cached
connection matches the driver mode that will be used for the job.
Intelligence Server does not cache any connections that have pre- or
post-SQL statements associated with them because these options
might drastically alter the state of the connection.
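Because the match is exact on both attributes, the cache behaves like a
lookup keyed on the connection string and the driver mode. The following is a
minimal, hypothetical sketch of that matching logic (illustrative Python, not
MicroStrategy's actual implementation):

    # Hypothetical sketch of connection-cache matching; a cached connection
    # is reused only when both the connection string and the driver mode
    # match the incoming job.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ConnectionKey:
        connection_string: str
        multiprocess: bool      # driver mode: multiprocess vs. multithreaded

    class ConnectionCache:
        def __init__(self):
            self._idle = {}     # ConnectionKey -> list of idle connections

        def acquire(self, key, has_pre_post_sql):
            # Connections with pre- or post-SQL statements are never cached,
            # because those statements might drastically alter the state of
            # the connection.
            if has_pre_post_sql:
                return open_new_connection(key)
            pool = self._idle.get(key)
            if pool:
                return pool.pop()       # exact match: reuse the connection
            return open_new_connection(key)

        def release(self, key, connection):
            self._idle.setdefault(key, []).append(connection)

    def open_new_connection(key):
        return object()         # stand-in for a real ODBC connect call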
• Join options, such as the star join and full outer join
• Metric calculation options, such as when to check for NULLs and zeros
For more information about all of the VLDB properties, see SQL Generation
and Data Processing: VLDB Properties, page 651.
Default VLDB properties are set based on the database type specified in the
database instance. MicroStrategy periodically updates the default settings as
database vendors add new functionality.
• Loads new database types. For example, properties for the newest
database servers were recently added.
• Loads updated properties for existing database types that are still
supported.
• Keeps properties for existing database types that are no longer supported.
If there were no updates for an existing database type, but the properties
for it have been removed from the Database.pds file, the process does
not remove them from your metadata.
For more information about VLDB properties, see SQL Generation and Data
Processing: VLDB Properties, page 651.
You may need to manually upgrade the database types if you chose not to run
the update metadata process after installing a new release.
The readme file for each MicroStrategy release lists all DBMSs that
are supported or certified for use with MicroStrategy.
• Loads existing report cache files from automatic backup files into
memory for each loaded project (up to the specified maximum RAM
setting)
This occurs only if report caching is enabled and the Load caches
on startup feature is enabled.
• Loads schedules
You can set Intelligence Server to load MDX cube schemas when it
starts, rather than loading MDX cube schemas upon running an
MDX cube report. For more details on this subject and steps to
load MDX cube schemas when Intelligence Server starts, see the
Configuring and Connecting Intelligence Server chapter of the
MicroStrategy Installation and Configuration Guide.
mstrctl -s IntelligenceServer rs
mstrctl -s IntelligenceServer us
Once the service is started, it is designed to run constantly, even after the
user who started it logs off the system. However, there are several reasons
why you may need to stop and restart it:
• If you are already using MicroStrategy Desktop, you may need to start
and stop Intelligence Server from within Desktop. For instructions, see
MicroStrategy Desktop, page 20.
• You can start and stop Intelligence Server as part of a Command Manager
script. For details, see MicroStrategy Command Manager, page 20.
• Finally, you can start and stop Intelligence Server from the command line
using MicroStrategy Server Control Utility. For instructions, see
Command line, page 20.
Service Manager requires that port 8888 be open. If this port is not
open, contact your network administrator.
If the icon is not present in the system tray, then from the Windows Start
menu, point to Programs, then MicroStrategy, then Tools, then choose
Service Manager.
2 In the Server drop-down list, select the name of the machine on which
MicroStrategy Intelligence Server is installed.
6 Click OK.
You can also set this using the Services option in the Microsoft
Windows Control Panel.
5 On the Intelligence Server Options tab, select the Enabled check box for
the Re-starter Option.
3 Click Start.
MicroStrategy Desktop
You can start and stop MicroStrategy Intelligence Server from MicroStrategy
Desktop.
For the Command Manager syntax for starting and stopping Intelligence
Server, see the Command Manager online help (press F1 from within
Command Manager). For a more general introduction to MicroStrategy
Command Manager, see the MicroStrategy System Administration Guide,
volume 2.
Command line
You can start and stop Intelligence Server from a command prompt, using
the MicroStrategy Server Control Utility. This utility is invoked by the
command mstrctl. By default the utility is located in
For detailed instructions on how to use the Server Control Utility, see
Managing MicroStrategy services from the command line, page 23.
You can start and stop MicroStrategy Intelligence Server and choose a
startup option using the Windows Services window.
– Automatic means that the service starts when the computer starts.
– Disabled means that you cannot start the service until you change
the startup type to one of the other types.
5 When you are finished, click OK to close the Properties dialog box.
Some advanced tuning settings are only available when starting Intelligence
Server as a service. If you change these settings, they are applied the next
time Intelligence Server is started as a service.
Executing this file from the command line displays the following
administration menu in Windows, and a similar menu in UNIX.
To use these options, type the corresponding letter on the command line and
press Enter. For example, to monitor users, type U and press Enter. The
information is displayed.
Server Control Utility can also be used to start, stop, and restart other
MicroStrategy services, such as the Test Listener or the Enterprise Manager
Data Loader, and to view and set configuration information for those
services.
The following table lists the commands that you can perform with the Server
Control Utility. The syntax for using the Server Control Utility commands is:
mstrctl -m machinename [-l login] -s servicename
command [instancename]
[(> | <) filename.xml]
where:
• login is the login for the machine hosting the server instance or service,
and is required if you are not logged into that machine. You will be
prompted for a password.
To retrieve a list of services on a machine, use the command
mstrctl -m machinename ls.
List machines
• List machines that the Server Control Utility can see and affect.
  Syntax: lm or list-machines
  Note: This command does not require a machine name, login, or service name.
Configure a service
• Display the configuration information for a service, in XML format. For
  more information, see Using files to store output and provide input, page 26.
  Syntax: gsvc instancename [> filename.xml] or
  get-service-configuration instancename [> filename.xml]
  Note: You can optionally specify a file to save the configuration properties to.
• Specify the configuration information for a service, in XML format. For
  more information, see Using files to store output and provide input, page 26.
  Syntax: ssvc instancename [< filename.xml] or
  set-service-configuration instancename [< filename.xml]
  Note: You can optionally specify a file to read the configuration properties from.
Configure a server
• Display the configuration information for a server instance, in XML format.
  For more information, see Using files to store output and provide input, page 26.
  Syntax: gsic instancename [> filename.xml] or
  get-server-instance-configuration instancename [> filename.xml]
  Note: You can optionally specify a file to save the configuration properties to.
• Create a copy of a server instance. Specify the name for the new instance
  as newinstancename.
  Syntax: cpi instancename newinstancename or
  copy-instance instancename newinstancename
For example, the following command saves the default server instance
configuration to an XML file:
mstrctl -s IntelligenceServer gsic > filename.xml
The following command loads the default server instance configuration
from an XML file:
mstrctl -s IntelligenceServer ssic < filename.xml
Processing jobs
Any request submitted to Intelligence Server from any part of the
MicroStrategy system is known as a job. Jobs may originate from servers
such as Narrowcast Server or Intelligence Server’s internal scheduler, or
from client applications such as Desktop, Web, Mobile, Integrity Manager, or
another custom-coded application.
The Job Monitor shows you which jobs are currently executing, and lets you
cancel jobs as necessary. For information about the job monitor, see
Monitoring currently executing jobs, page 462.
Those components are the stops the job makes in what is called a
“pipeline,” a path that the job takes as Intelligence Server works on it.
4 The result is sent back to the client application, which presents the result
to the user.
Most of the actual processing that takes place is done in steps 2 and 3
internally within Intelligence Server. Although the user request must be
received and the final results must be delivered (steps 1 and 4), those are
relatively simple tasks. It is more useful to explain how Intelligence Server
works. Therefore, the rest of this section discusses Intelligence Server
activity as it processes jobs. This includes:
• Processing report execution, page 29
Being familiar with this material should help you to understand and
interpret statistics, Enterprise Manager reports, and other log files available
within the system. This may help you to know where to look for bottlenecks
in the system and how you can tune the system to minimize their effects.
Component Function
Analytical Performs complex calculations on a result set returned from the data warehouse,
Engine Server such as statistical and financial functions. Also, sorts raw results returned from the
Query Engine into a cross-tabbed grid suitable for display to the user. In addition, it
performs subtotal calculations on the result set. Depending on the metric definitions,
the Analytical Engine will also perform metric calculations that were not or could not
be performed using SQL, such as complex functions.
Metadata Server Controls all access to the metadata for the entire project.
Object Server Creates, modifies, saves, loads and deletes objects from metadata. Also maintains a
server cache of recently used objects. The Object Server does not manipulate
metadata directly. The Metadata Server does all reading/writing from/to the metadata;
the Object Server uses the Metadata Server to make any changes to the metadata.
Query Engine Sends the SQL generated by the SQL Engine to the data warehouse for execution.
Report Server Creates and manages all server reporting instance objects. Maintains a cache of
executed reports.
Resolution Resolves prompts for report requests. Works in conjunction with Object Server and
Server Element Server to retrieve necessary objects and elements for a given request.
[Diagram: report execution steps through the Intelligence Server pipeline,
showing the client, object cache, Object Server, Report Server and report
cache, Metadata Server, and the metadata and data warehouse connected via
SQL/ODBC.]
2 The Resolution Server checks for prompts. If the report has one or more
prompts, the user must answer them. For information about these extra
steps, see Processing reports with prompts, page 31.
3 The Report Server checks the internal cache, if the caching feature is
turned on, to see whether the report results already exist. If the report
exists in the cache, Intelligence Server skips directly to the last step and
delivers the report to the client. If no valid cache exists for the report,
Intelligence Server creates the task list necessary to execute the report.
For more information on caching, see Result caches, page 204.
Prompts are resolved before the Server checks for caches. Users
may be able to retrieve results from cache even if they have
personalized the report with their own prompt answers.
4 The Resolution Server obtains the report definition and any other
required application objects from the Object Server. The Object Server
retrieves these objects from the object cache, if possible, or reads them
from the metadata via the Metadata Server. Objects retrieved from
metadata are stored in the object cache.
5 The SQL Generation Engine creates the optimized SQL specific to the
RDBMS being used in the data warehouse. The SQL is generated based
on the definition of the report and associated application objects
retrieved in the previous step.
6 The Query Engine runs the SQL against the data warehouse. The report
results are returned to Intelligence Server.
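In outline, steps 2 through 6 reduce to a cache check followed by SQL
generation and execution. The sketch below summarizes those decision points
(hypothetical names in illustrative Python; this is not a MicroStrategy API):

    # Illustrative sketch of the report execution flow described above.
    _result_cache = {}      # (report id, prompt answers) -> result set

    def execute_report(report_id, definition, caching_enabled=True):
        # Prompts are resolved before the cache check, so the answers
        # become part of the cache key.
        prompts = tuple(sorted(definition.get("prompts", {}).items()))
        cache_key = (report_id, prompts)
        if caching_enabled and cache_key in _result_cache:
            return _result_cache[cache_key]     # skip straight to delivery
        # Stand-in for the SQL Generation Engine: build SQL from the report
        # definition, then run it against the warehouse.
        sql = "SELECT %s FROM %s" % (
            ", ".join(definition["columns"]), definition["table"])
        results = run_against_warehouse(sql)    # stand-in for the Query Engine
        _result_cache[cache_key] = results
        return results

    def run_against_warehouse(sql):
        return [("row", sql)]                   # placeholder result set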
If the report has prompts, these steps are inserted in the regular report
execution steps presented above (see Processing report execution, page 29):
2 Intelligence Server puts the job in a sleep mode and tells the Result
Sender component to send a message to the client application prompting
the user for the information.
3 The user completes the prompt and the client application sends the user’s
prompt selections back to Intelligence Server.
5 This cycle repeats until all prompts in the report are resolved.
A sleeping job times out after a certain period of time or if the
connection to the client is lost. If the prompt reply comes back
after the job has timed out, the user sees an error message.
All regular report processing resumes from the point at which Intelligence
Server checks for a report cache, if the caching feature is turned on.
Reports can also connect to Intelligent Cubes that can be shared by multiple
reports. These Intelligent Cubes also allow the Analytical Engine to perform
additional analysis without requiring any processing on the data warehouse.
For information on personal Intelligent Cubes and Intelligent Cubes, see the
OLAP Services Guide.
Intelligence Server must retrieve the objects from the metadata before it can
display them in the folder list and the object viewer.
This process is called object browsing and it creates what are called object
requests. It can cause a slight delay that you may notice the first time you
expand or select a folder. The retrieved object definitions are then placed in
Intelligence Server’s memory (cache) so that the information is displayed
immediately the next time you browse the same folder. This is called object
caching. For more information on this, see Object caches, page 262.
Component Function
Metadata Server Controls all access to the metadata for the entire project.
Object Server Creates, modifies, saves, loads and deletes objects from metadata. Also maintains
a server cache of recently used objects.
Source Net Receives, de-serializes, and passes metadata object requests to the object server.
Server
The diagram below shows the object request execution steps. An explanation
of each step follows the diagram.
[Diagram: object request execution steps, showing the client, object cache,
Object Server, and Metadata Server connected to the metadata via SQL/ODBC.]
2 The Object Server checks for an object cache that can service the request.
If an object cache exists, it is returned to the client and Intelligence Server
skips to the last step in this process. If no object cache exists, the request
is sent to the Metadata Server.
3 The Metadata Server reads the object definition from the metadata
repository.
4 The requested objects are received by the Object Server, where they are
deposited into the memory object cache.
When users request attribute elements from the system, they are said to be
element browsing and create what are called element requests. More
specifically, this happens when users:
When Intelligence Server receives an element request from the user, it sends
a SQL statement to the data warehouse requesting attribute elements. When
it receives the results from the data warehouse, it then passes the results
back to the user. Also, if the element caching feature is turned on, it stores
the results in memory so that additional requests are retrieved from memory
instead of querying the data warehouse again. For more information on this,
see Element caches, page 249.
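A minimal illustration of this element-caching behavior follows (hypothetical
Python, not MicroStrategy's implementation):

    # Hypothetical sketch of element-request handling with an element cache.
    _element_cache = {}     # attribute name -> list of elements

    def browse_elements(attribute, caching_enabled=True):
        if caching_enabled and attribute in _element_cache:
            return _element_cache[attribute]    # served from server memory
        elements = query_warehouse_for_elements(attribute)
        if caching_enabled:
            _element_cache[attribute] = elements    # reused by later requests
        return elements

    def query_warehouse_for_elements(attribute):
        # Stand-in for the SQL that Intelligence Server sends to the data
        # warehouse to request attribute elements.
        return ["%s element %d" % (attribute, i) for i in range(3)]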
Component Function
DB Element Transforms element requests into report requests and then sends report requests to
Server the warehouse.
Element Net Receives, de-serializes, and passes element request messages to the Element
Server Server.
Element Server Creates and stores server element caches in memory. Manages all element
requests in the project.
Query Engine Sends the SQL generated by the SQL Engine to the data warehouse for execution.
Report Server Creates and manages all server reporting instance objects. Maintains a cache of
executed reports.
Resolution Server Resolves prompts for report requests. Works in conjunction with Object Server and
Element Server to retrieve necessary objects and elements for a given request.
[Diagram: element request execution steps through the Intelligence Server
pipeline, showing the client, DB Element Server, Report Server, and the data
warehouse connected via SQL/ODBC.]
2 The Element Server checks for a server element cache that can service the
request. If a server element cache exists, the element cache is returned to
the client. Skip to the last step in this process.
4 The Report Server receives the request and creates a report instance.
6 The SQL Engine Server generates the necessary SQL to satisfy the request
and passes it to the Query Engine Server.
7 The Query Engine Server sends the SQL to the data warehouse.
8 The elements are returned from the data warehouse to Intelligence Server
and deposited in the server memory element cache by the Element
Server.
[Diagram: document execution steps through the Intelligence Server pipeline,
showing the client, Object Server, Export Engine, and Metadata Server
connected to the metadata via SQL/ODBC.]
2 The Document Server inspects all dataset reports and prepares for
execution. It consolidates all prompts from datasets into a single
prompt to be answered. All identical prompts are merged so that the
resulting prompt contains only one copy of each prompt question.
3 The Document Server, with the assistance of the Resolution Server, asks
the user to answer the consolidated prompt. The user’s answers are
stored in the Document Server.
4 The Document Server creates an individual report execution job for each
dataset report. Each job is processed by Intelligence Server, using the
report execution flow described in Processing report execution, page 29.
Prompt answers are provided by the Document Server to avoid further
prompt resolution.
5 After Intelligence Server has completed all the report execution jobs, the
Analytical Engine receives the corresponding report instances to begin
the data preparation step. Document elements are mapped to the
corresponding report instance to construct internal data views for each
element.
Document elements include grouping, data fields, Grid/Graphs,
and so on.
6 The Analytical Engine evaluates each data view and performs the
calculations that are required to prepare a consolidated dataset for the
entire document instance. These calculations include calculated
expressions, derived metrics, and conditional formatting. The
consolidated dataset determines the number of elements for each group
and the number of detail sections.
7 The Document Server receives the final document instance to finalize the
document format.
An HTML document consists of text, images, hyperlinks, tables, grid reports,
and graph reports. Any
reports included in an HTML document are called the child reports of the
HTML document.
The diagram below shows the HTML document processing execution steps.
An explanation of each step follows the diagram.
[Diagram: HTML document processing steps through the MicroStrategy
Intelligence Server pipeline, showing the client, HTML Document Server,
Resolution Server, Object Server, and Metadata Server connected to the
metadata via SQL/ODBC.]
2 The HTML Document Server consolidates all prompts from child reports
into a single prompt to be answered. Any identical prompts are merged so
that the resulting single prompt contains only one copy of each prompt
question.
3 Resolution Server asks the user to answer the consolidated prompt. (The
user only needs to answer a single set of questions.)
4 The HTML Document Server splits the HTML document request into
separate individual jobs for the constituent reports. Each report goes
through the report execution flow as described above.
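The prompt consolidation in step 2 amounts to deduplicating prompt questions
across the child reports. A minimal sketch, using hypothetical Python
structures rather than a MicroStrategy API:

    def consolidate_prompts(child_reports):
        # Identical prompts are merged so the resulting single prompt
        # contains only one copy of each prompt question.
        seen, consolidated = set(), []
        for report in child_reports:
            for question in report["prompts"]:
                if question not in seen:
                    seen.add(question)
                    consolidated.append(question)
        return consolidated

    print(consolidate_prompts([
        {"prompts": ["Choose a region", "Choose a year"]},
        {"prompts": ["Choose a year", "Choose a category"]},
    ]))     # each question appears exactly once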
1 The user makes a request from a Web browser. The request is sent to the
Web server via HTTP or HTTPS.
4 Intelligence Server sends the results back to the MicroStrategy Web API
via XML.
6 Web sends the HTML to the client’s browser, which displays the results.
Exporting a report from MicroStrategy Web products lets users save the
report in another format that may provide additional capabilities for sharing,
printing, or further manipulation. This section explains the additional
processing the system must do when exporting a report in one of several
formats. This may help you to understand when certain parts of the
MicroStrategy platform are stressed when exporting.
• Export to Comma Separated File (CSV) or Excel with Plain Text, page 42
For information about governing report size limits for exporting, see
Limiting the information displayed at one time, page 551 and the
following sections.
Export to Comma Separated File (CSV) and Export to Excel with Plain Text are
performed entirely on Intelligence Server. These formats contain only report
data and no formatting information. The only difference between these two
formats is the internal “container” that is used.
1 MicroStrategy Web product receives the request for the export and passes
the request to Intelligence Server. Intelligence Server takes the XML
containing the report data and parses it for separators, headers and
metric values.
2 Intelligence Server then outputs the titles of the units in the Row axis. All
of these units end up in the same row of the result text.
3 Intelligence Server then outputs the title and header of one unit in the
Column axis.
4 Step 3 is repeated until all units in the Column axis are completed.
5 Intelligence Server outputs all of the headers of the Row axis and all
metric values one row at a time.
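These five steps flatten the cross-tabbed grid into delimited text. A
runnable approximation is shown below (hypothetical structures; Intelligence
Server actually works from the report XML):

    def export_csv(row_axis_titles, column_headers, rows):
        lines = []
        # Steps 2-4: row-axis unit titles and column-axis headers all end
        # up in the same row of the result text.
        lines.append(",".join(row_axis_titles + column_headers))
        # Step 5: row-axis headers and metric values, one row at a time.
        for row_headers, metric_values in rows:
            lines.append(",".join(row_headers + [str(v) for v in metric_values]))
        return "\n".join(lines)

    print(export_csv(
        ["Region"], ["Revenue", "Profit"],
        [(["Northeast"], [1000, 200]), (["Southwest"], [1500, 300])],
    ))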
The MicroStrategy system performs these steps when exporting to Excel with
formatting:
1 MicroStrategy Web product receives the request for the export to Excel
and passes the request to Intelligence Server. Intelligence Server
produces an HTML document by combining the XML containing the
report data with the XSL containing formatting information.
3 Users can then choose to view the Excel file or save it depending on the
client machine operating system’s setting for viewing Excel files.
Export to PDF
To view the PDF files, the client machine must have Adobe Acrobat
Reader version 5.0 or greater.
4 Narrowcast Server submits one report per user or one multi-page report
for multiple users, depending on the service definition.
Ensure the Administrator password has been changed. When you install Intelligence
Server, the Administrator account comes with a blank password that must be changed.
Set up access controls for the database (see Controlling access to data, page 70).
Depending on your security requirements you may need to:
• Set up security views to restrict access to specific tables, rows, or columns in the
database
• Split tables in the database to control user access to data by separating a logical data
set into multiple physical tables, which require separate permissions for access
• Assign security filters to users or groups to control access to specific data (these
operate similarly to security views, but at the application level)
Understand the MicroStrategy user model (see The MicroStrategy user model, page 48).
Use this model to:
• Set up security roles for users and groups to assign basic privileges and permissions
• Understand ACLs (access control lists) which allow users access permissions to
individual objects
• If you leave anonymous access disabled, ensure the hyperlink for guest access does
not appear on the login page. Use adminoptions.asp to turn this hyperlink off.
Assign privileges and permissions to control user access to application functionality. (See
Appendix B, Permissions and Privileges for a list of all default and available user
permissions and privileges.) You may need to:
• Assign the Denied All permission to a special user or group so that, even if permission
is granted at another level, permission is still denied
• Make sure guest users (anonymous authentication) have access to the Log folder
located in C:\Program Files\Common Files\MicroStrategy. This ensures that any
application errors that occur while a guest user is logged in can be written to the log
files.
Make use of standard Internet security technologies such as firewalls, digital certificates,
and encryption. You may need to:
• Enable the Encrypt User Credentials setting in the Login section of the Project defaults
preferences.
• If you are working with particularly sensitive or confidential data, enable the setting to
encrypt all communication between Web server and Intelligence Server. Note: There
may be a noticeable performance degradation since the system must encrypt and
decrypt all network traffic.
Locate the physical machine hosting the Web application in a physically secure location.
Restrict access to files stored on the machine hosting the Web application by implementing
standard file-level security offered by your operating system. Specifically, apply this type of
security to protect access to the MicroStrategy administrator pages, to prevent someone
from typing specific URLs into a browser to access these pages. (The default location of
the Admin page file is /MicroStrategy7/admin/admin.asp.) Be sure to restrict access to:
• adminoptions.asp
• delete.asp
Introduction
MicroStrategy has a robust security model that enables you to create users
and groups, and control what data they can see and what objects they can
use. The security model is covered in the following sections:
• The MicroStrategy user model, page 48
Users are defined in the MicroStrategy metadata, and exist across projects.
You do not have to define users for every project you create in a single
metadata repository.
For a list of the privileges assigned to each group, see Privileges for
out-of-the-box user groups, page 913.
All users except for guest users are automatically members of the Everyone
group. The Everyone group is provided to make it easy for you to assign
privileges, security role memberships, and permissions to all users.
Authentication-related groups
These groups are provided to assist you in managing the different ways in
which users can log into the MicroStrategy system. For details on the
different authentication methods, see Chapter 3, Identifying Users:
Authentication.
• Public / Guest: Users who log in anonymously assume the security
profile defined by the Public group. When a user logs in as a guest, a new
user is created dynamically and becomes a member of the Public group.
For more information about anonymous authentication and the
Public/Guest group, see Implementing anonymous authentication,
page 102.
• LDAP Users: The group into which users that are imported from an
LDAP server are added.
These groups are built-in groups that correspond to the licenses you have
purchased. Using these groups gives you a convenient way to assign
product-specific privileges.
• Web Analyst: Web Analysts can create new reports with basic report
functionality, and use ad hoc analysis from Intelligent Cubes with
interactive, slice and dice OLAP.
Administrator groups
• System Monitors: The System Monitors groups provide an easy way to
give users basic administrative privileges for all projects in the system.
Users in the System Monitors groups have access to the various
monitoring and administration tools.
Privileges
Privileges allow users to access and work with various functionality within
the software. All users created in the MicroStrategy system are assigned a set
of privileges by default.
To see which users are using certain privileges, use the License Manager. See
Using License Manager, page 190.
1 In Desktop, log into a project source. You must log in as a user with the
Create And Edit Users And Groups privilege.
3 Right-click the user and select Project Access. The User Editor opens.
The privileges that the user has for each project are listed, as well as the
source of those privileges (inherent to user, inherited from a group, or
inherited from a security role).
Permissions
Permissions allow users to interact with various objects in the MicroStrategy
system. All users created in the MicroStrategy system have certain access
rights to certain objects by default.
2 Expand the Security category. The dialog box lists all users and groups
with access to the object, and what permissions those users and groups
have for the object.
1 In Desktop, log into a project source. You must log in as a user with the
Create And Edit Users And Groups privilege.
2 Expand Administration, then User Manager, and then a group you want
the new user to be a member of. If you do not want the user to be a
member of a group, select Everyone.
3 From the File menu, point to New and then select User. The User Editor
opens.
4 Specify the user information for each tab. For details about each field, see
the online help.
to view the report definition and execute the report, but not to modify the
report definition or delete the report.
3 For the User or Group (click Add to select a new user or group), from the
Object drop-down list, select the predefined set of permissions. If the
object is a folder, you can also assign permissions to objects contained in
that folder using the Children drop-down list.
4 Click OK.
For specific information about each setting in the dialog box, press F1
to see the online help.
The Access Control List (ACL) of an object is a list of users and groups, and
the access permissions that each has for the object.
For example, for the Northeast Region Sales report you can specify the
following permissions:
• The Managers and Executive user groups have View access to the report.
• The Developers user group (people who create and modify your
applications) has Modify access.
The default ACL of a newly created object has the following characteristics:
• The owner (the user who created the object) has Full Control permission.
• Permissions for all other users are set according to the Children ACL of
the parent folder.
For example, if the Children setting of the parent folder’s ACL includes Full
Control permission for the Administrator and View permission for the
Everyone group, then the newly created object inside that folder will have
Full Control permission for the owner, Full Control for the Administrator,
and View permission for Everyone.
Modifying the ACL of a shortcut object does not modify the ACL of
that shortcut’s parent object.
When you move an object to a different folder, the moved object retains its
original ACLs. When you copy an object, the copied object inherits its ACL
from the Children ACL of the folder into which it is copied.
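These inheritance rules can be summarized in a few lines. A hypothetical
sketch in illustrative Python (not MicroStrategy code):

    def default_acl(owner, parent_children_acl):
        # The owner gets Full Control; all other entries come from the
        # Children ACL of the parent folder.
        acl = dict(parent_children_acl)
        acl[owner] = "Full Control"
        return acl

    def move_object(obj_acl, target_children_acl):
        return obj_acl      # a moved object retains its original ACL

    def copy_object(owner, target_children_acl):
        # A copied object inherits from the target folder's Children ACL.
        return default_acl(owner, target_children_acl)

    parent = {"Administrator": "Full Control", "Everyone": "View"}
    print(default_acl("jsmith", parent))
    # {'Administrator': 'Full Control', 'Everyone': 'View',
    #  'jsmith': 'Full Control'}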
When you edit an object’s ACL using the object’s Properties dialog box, you
can assign a predefined grouping of permissions or you can create a custom
grouping. The table below lists the predefined groupings and the specific
permissions each one grants. For a complete list of the specific permissions,
see Access control list permissions for an object, page 892.
View: Grants permission to access the object for viewing only. More specifically, the user can:
• Browse the object in Desktop and Web
• Read and view the object's definition in the appropriate editor
• Create objects that refer to this object, for example, a report that contains a metric
• Execute the object, if the object is a report or document
Modify: Grants permission to view and modify the object. More specifically, the user can:
• Browse the object in Desktop and Web
• Read and view the object's definition in the appropriate editor
• Create objects that refer to this object, for example, a report that contains a metric
• Execute the object, if the object is a report or document
• Change the object definition, and save and overwrite the definition
• Delete the object
• Create new objects in the folder, if the object is a folder
Full Control: Grants all permissions for the object. More specifically, the user can:
• Browse the object in Desktop and Web
• Read and view the object's definition in the appropriate editor
• Create other objects that refer to this object, for example, a report that contains a metric
• Execute the object, if the object is a report or document
• Change the object definition
• Save and overwrite the definition
• Delete the object
• Create new objects in the folder, if the object is a folder
• Modify the ACL for the object and take ownership of the object (define what permissions other users have on the object)
Denied All: Explicitly denies the group or user all permissions for the object. None of the permissions are assigned.
Note: This permission overrides any permissions the user may inherit from any other sources. For more information, see Permission levels, page 57.
Custom: Allows the user or group to create a custom combination of permissions. For more information, see Access control list permissions for an object, page 892.
A user can have permissions for a given object from the following sources:
• Special privileges: A user may possess a special privilege that causes the normal access checks to be bypassed:
- Bypass Schema Object Security Checks allows the user to ignore the access checks for schema objects.
- Bypass All Object Security Checks allows the user to ignore the access checks for all objects.
Permission levels
Permissions, ranked from the highest level down to the lowest, are listed below. When both types of permissions are assigned to a user, the permissions at the top of the list override those lower down the list:
3 The most restrictive permissions (except Denied All): lowest-level permissions
For example, if a user has Full Control permissions for a report, and is a
member of the Managers group, which has View permissions for the report,
the user has Full Control permissions for the report. If the user later becomes
a member of a group which has the Denied All permission for the report, the
user has no permissions at all for the report.
Everyone: Browse
Public/Guest: Browse
• Inherited ACL
Administrator: Default
Everyone: View
Public/Guest: View
This means that new users, as part of the Everyone group, are able to browse the objects in the Public Objects folder, view their definitions and use them in definitions of other objects (for example, create a report with a public metric), and execute them (execute reports). However, new users cannot delete these objects, or create or save new objects to these folders.
• Personal folders
This means that new users can create objects in these folders and
have full control over those objects.
Two permissions relate to report and document execution: the Use and Execute permissions. These have the following effects:
• The Use permission allows the user to reference or use the object when they are modifying another object. This permission is checked only at object design time.
• The Execute permission allows the user to execute reports or documents that contain the object. This permission is checked at report execution time.
A user may have three different levels of access to an object using these two permissions:
• Both Use and Execute permissions: The user can use the object to create
new reports, and can execute reports containing the object.
• Execute permission only: The user can execute previously created reports
containing the object, but cannot create new reports that use the object.
• Neither Use nor Execute permission: The user cannot create reports
containing the object, nor can the user execute such reports, even if the
user has Execute rights on the report.
It is not possible for a user to have the Use permission but not the Execute permission. That is, someone cannot create a report and then not have permission to execute it. The Use permission implies the Execute permission.
If the user does not have access to an attribute, custom group, consolidation,
prompt, fact, filter, template, or hierarchy used to define a report, the report
execution fails.
If the user does not have access to a metric used to define a report, the report
execution continues, but the metric is not displayed in the report for that
user.
5 Click OK. Your changes are saved and the Project Configuration Editor
closes.
You can control what attribute drill paths users see on reports. You can
determine whether users can see all drill paths for an attribute, or only those
to which they have access. You determine this access using the Enable Web
personalized drill paths check box in the Project Configuration Editor,
Project Definition: Drilling category.
With the Enable Web personalized drill paths check box cleared (and thus,
XML caching enabled), the attributes to which all users in Web can drill are
stored in a report’s XML cache. In this case, users see all attribute drill paths
whether they have access to them or not. When a user selects an attribute
drill path, Intelligence Server then checks whether the user has access to the
attribute. If the user does not have access (for example, because of Access
Control Lists), the drill is not performed and the user sees an error message.
Alternatively, if you select the Enable Web personalized drill paths check
box, at the time the report results are created (not at drill time), Intelligence
Server checks which attributes the user may access and creates the report
XML with only the allowed attributes. This way, the users only see their
available drill paths, and they cannot attempt a drill action that is not
allowed. With this option enabled, you may see performance degradation on
Intelligence Server. This is because it must create XML for each report/user
combination rather than using XML that was cached.
For more information about XML caching, see XML caches, page 209.
Based on their different privileges, the users and user groups can perform
different types of operations in the MicroStrategy system. If a user does not
have a certain privilege, that user does not have access to that privilege’s
functionality.
For a complete list of privileges and what they control in the system, see List
of all privileges, page 895.
To see which users are using certain privileges, use the License Manager. See Using License Manager, page 190.
1 From MicroStrategy Desktop User Manager, edit the user with the User
Editor or edit the group with the Group Editor.
Rather than assigning individual users and groups these privileges, it may be
easier for you to create Security Roles (collections of privileges) and assign
them to users and groups. Then you can assign additional privileges
individually when there are exceptions. For more information about security
roles, see Defining sets of privileges: Security roles, page 65.
You can grant, revoke, and replace the existing privileges of users, user
groups, or security roles with the Find and Replace Privileges dialog box.
This dialog box allows you to search for the user, user group, or security role
and change their privileges, depending on the tasks required for their work.
To access the Find and Replace Privileges dialog box, in Desktop, right-click
the User Manager and select Find and Replace Privileges. The Find and
Replace Privileges dialog box opens. For detailed instructions on how to find
and replace privileges, see the Desktop Help (press F1 from within the Find
and Replace Privileges dialog box).
MicroStrategy comes with several predefined user groups. For a complete list
and explanation of these groups, see About MicroStrategy user groups,
page 48. These groups possess the following privileges:
• System Monitors and its member groups have privileges based on their
expected roles in the company. To see the privileges assigned to each
group, right-click the group and select Grant Access to Projects.
Several of the predefined user groups form hierarchies, which allow groups
to inherit privileges from any groups at a higher level within the hierarchy.
These hierarchies are as follows:
• Web Reporter
  - Web Analyst
    - Web Professional
In the case of the Web user groups, the Web Analyst inherits the
privileges of the Web Reporter. The Web Professional inherits the
privileges of both the Web Analyst and Web Reporter. The Web
Professional user group has the complete set of Web privileges.
• Desktop Analyst
  - Desktop Designer
In the case of the Desktop user groups, the Desktop Designer inherits the privileges of the Desktop Analyst and therefore has more privileges than the Desktop Analyst.
• System Monitors
The various System Monitors member groups inherit the privileges of the System Monitors user group and therefore have more privileges than System Monitors itself. In addition, each group has its own specific set of privileges that are not shared by the other System Monitors groups.
• International Users
This group inherits the privileges of the Desktop Analyst, Mobile User,
Web Reporter, and Web Viewer groups.
Security roles exist at the project source level, and can be used in any project
registered with Intelligence Server. A user can have different security roles in
each project. For example, an administrator for the development project may
have a Project Administrator security role in that project, but the Normal
User security role in all other projects on that server.
For information about how privileges are inherited from security roles and
groups, see How are privileges inherited?, page 63.
The Security Role Manager lists all the security roles available in a project
source. From this manager you can assign or revoke security roles for users
in projects, or create or delete security roles. For additional methods of
managing security roles, see Other ways of managing security roles,
page 67.
1 In Desktop, log in to the project source containing the security role. You
must have the Grant/Revoke Privileges privilege.
3 Double-click the security role you want to assign to the user or group. The
Security Role Editor opens.
5 From the Select a Project drop-down list, select the project for which to
assign the security role.
6 From the drop-down list of groups, select the group containing a user or
group you want to assign the security role to. The users or groups that are
members of that group are shown in the list box below the drop-down
list.
8 Click the > icon. The user or group moves to the Selected users and
groups list. You can assign multiple users or groups to the security role
by selecting them and clicking the > icon.
9 When you are finished assigning the security role, click OK. The security
role is assigned to the selected users and groups and the Security Role
Editor closes.
1 In Desktop, log in to a project in the project source you want to create the
security role in.
3 From the File menu, point to New, and select Security Role. The
Security Role Editor opens at the General tab.
8 Click OK to close the Security Role Editor and create the security role.
You can also assign security roles to a user or group in the User Editor or
Group Editor. From the Project Access category of the editor, you can
specify what security roles that user or group has for each project.
You can assign roles to multiple users and groups in a project through the
Project Configuration dialog box. The Project Access - General category
displays which users and groups have which security roles in the project, and
allows you to re-assign the security roles.
For detailed instructions on using these editors to manage security roles, see
the MicroStrategy Desktop online help. (From within Desktop, press F1.)
You can also use Command Manager to manage security roles. Command
Manager is a script-based administrative tool that helps you perform
complex administrative actions quickly. For the specific syntax for security role statements, see the Command Manager Help.
If you are using UNIX, you must use Command Manager to manage your system's security roles.
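For illustration, Command Manager statements along these lines create a security role and assign it to a user for a particular project. This is a sketch with hypothetical role, user, and project names; confirm the exact statement syntax against the outlines provided with Command Manager:
CREATE SECURITY ROLE "Regional Administrator" DESCRIPTION "Delegated administration for the regional project";
GRANT SECURITY ROLE "Regional Administrator" TO USER "jsmith" FOR PROJECT "MicroStrategy Tutorial";
Because security roles exist at the project source level but are assigned per project, the GRANT statement names the project in which the role takes effect.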
1 In Desktop, right-click on the project you want to deny access to. Select
Project Configuration. The Project Configuration Editor opens.
2 Expand Project Access. The Project Access - General dialog box opens.
3 From the Select a security role drop-down list, select the security role that
contains the user or group who you want to deny project access. For
example, select the Normal Users security role.
4 On the right-hand side of the Project access - General dialog, select the
user or group who you want to deny project access. Then click the left
arrow to remove that user or group from the security role. For example,
remove the Everyone group.
5 Using the right arrow, add any users to the security role for whom you
want to grant project access. To see the users contained in each group,
highlight the group and check the Show users check box.
6 Make sure the user or group whose access you want to deny does not appear
in the Selected users and groups pane on the right-hand side of the
dialog. Then click OK.
7 In Desktop, under the project source that contains the project you are
restricting access to, expand Administration, then expand User
Manager.
8 Click on the group to which the user belongs who you want to deny
project access for. Then double-click on the user in the right-hand side of
Desktop. The User Editor opens.
9 On the Project Access tab, under the project you want to restrict access to,
review the Security Role Selection drop-down list. Make sure that no
security role is associated with this project for this user.
10 Click OK.
When the user attempts to log in to the project, he receives the message “No
projects were returned by this project source.”
Beginning with version 9.0, the MicroStrategy product suite comes with a number of predefined security roles for administrators. These roles make it easy to delegate administrative tasks.
For example, your company security policy may require you to keep the user
security administrator for your projects separate from the project resource
administrator. Rather than specifying the privileges for each administrator
individually, you can assign the Project Security Administrator role to one
administrator, and the Project Resource Administrator to another. Because
users can have different security roles for each project, you can use the same
security role for different users in different projects to further delegate
project administration duties.
For instructions on how to assign these security roles to users or groups, see
Managing security roles, page 65.
The ways in which data access can be controlled are discussed below.
[Diagram: by default, all users connect through the project and database instance to a single database connection (with its DSN) and a single database login, "MSTR users", to reach the data warehouse.]
1 In Desktop, log into your project. You must log in as a user with
administrative privileges.
6 Double-click the new connection mapping in the Users column. Click ...
(the browse button). The Add Members dialog box opens.
7 Select the desired user or group and click OK. That user or group is now
associated with the connection mapping.
One case in which you may wish to use connection mappings is if you have existing security views defined in the data warehouse and you wish to allow MicroStrategy users' jobs to execute on the data warehouse using those specific login IDs. For example:
• The CEO can access all data in the warehouse (warehouse login ID = "CEO")
• All other users have limited access (warehouse login ID = "MSTR users")
In this case, you would need to create a user connection mapping within MicroStrategy for the CEO. To do this, you would create a new database login called "CEO" and a connection mapping that assigns it to the CEO's MicroStrategy user.
This is shown in the diagram below in which the CEO connects as CEO (using
the new database login called “CEO”) and all other users use the default
database login “MSTR users.”
[Diagram: connection mappings route the CEO through the "CEO" database login and all other users through the default "MSTR users" login.]
Both the CEO and all the other users use the same project, database
instance, database connection (and DSN), but the database login is
different for the CEO.
Connection mappings can also be made for user groups and are not limited
to individual users. Continuing the example above, if you have a Managers
group within the MicroStrategy system that can access most data in the data
warehouse (warehouse login ID = “Managers”), you could create another
database login and then create another connection mapping to assign it to
the Managers user group.
Another case in which you may wish to use connection mappings is if you
need to have users connect to two data warehouses using the same project. In
this case, both data warehouses must have the same structure so that the
project works with both. This may be applicable if you have a data warehouse
with domestic data and another with foreign data and you want users to be
directed to one or the other based on the user group to which they belong
when they log in to the MicroStrategy system.
• “US users” connect to the U.S. data warehouse (data warehouse login ID
“MSTR users”)
In this case, you would need to create a user connection mapping within
MicroStrategy for both user groups. To do this, you would:
• Create two database connections in MicroStrategy—one to each data
warehouse (this assumes that DSNs already exist for each data
warehouse)
[Diagram: the "US users" group and a second user group share the same project, database instance, and database login, but their connection mappings specify two different database connections (and DSNs), one to the US data warehouse and one to the London data warehouse.]
The project, database instance, and database login can be the same,
but the connection mapping specifies different database connections
(and therefore, different DSNs) for the two groups.
You can configure each project to use either connection mappings or the
linked warehouse login ID when users execute reports, documents, or
browse attribute elements. If passthrough execution is enabled, the project
uses the linked warehouse login ID and password as defined in the User
Editor (Authentication tab). If no warehouse login ID is linked to a user,
Intelligence Server uses the default connection and login ID for the project’s
database instance.
• RDBMS auditing: Use distinct logins if you wish to track which users are accessing the RDBMS system, down to the individual database query. Mapping multiple users to the same RDBMS account blurs the ability to track which users have issued which RDBMS queries.
• Teradata spool space: If you use the Teradata RDBMS, note that it has a
limit for spool space set on a per-account basis. If multiple users share
the same RDBMS account, they are collectively limited by this setting.
• RDBMS security views: If you use security views, each user needs to log in
to the RDBMS with a unique database login ID so that a database security
view is enforced.
You can configure linked warehouse logins with the Project Configuration
Editor in MicroStrategy Desktop. To create a connection mapping, you
assign a user or group either a database connection or database login that is
different from the default. For information on this, see Connecting to the
data warehouse, page 9.
1 In Desktop, log into your project. You must log in as a user with
administrative privileges.
5 To use warehouse credentials for all database instances, select the For all
database instances option.
7 Click OK. The Project Configuration Editor closes and your changes are
saved.
For example, two regional managers may have two different security filters
for their regions—one in the Northeast and the other in the Southwest. This
means that if these two regional managers run the same report they may get
different results. For an extended example, with images of the reports that
result, see Security filters example, page 79.
Security filters enable you to control what warehouse data users can see,
when that data is accessed through MicroStrategy. They serve a similar
function as database-level techniques such as database views and row level
security.
A security filter comes into play when a user is executing reports and
browsing elements. The attribute qualification defined by the security filter is
used in the WHERE clause for any report that is related to the security filter’s
attribute. The same is true for element browsing—when the user browses
through a hierarchy to answer a prompt, he or she will only see the attribute
elements as defined by the security filter.
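For example, suppose a user has a security filter of Region = 'Northeast' (a hypothetical qualification). The SQL for a simple revenue-by-region report would then include that qualification in its WHERE clause, along these lines (a simplified sketch; actual table names, aliases, and joins depend on your schema):
SELECT a12.REGION_NAME, SUM(a11.REVENUE)
FROM FACT_SALES a11
JOIN LU_REGION a12 ON (a11.REGION_ID = a12.REGION_ID)
WHERE a12.REGION_NAME = 'Northeast'  -- added by the security filter
GROUP BY a12.REGION_NAME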
Security filters are used as part of the cache key for report caching and
element caching. This means that users with different security filters cannot
access the same cached results, preserving data security. For more
information, see Improving Report and Document Response Time:
Caching, page 203.
Each user can have only one security filter for a given project. Users may
have different security filters in different projects.
You should inform your users of the security filters assigned to them
or their group. If you do not inform them of their security filter, they
may not know that the data they see in their reports has been filtered.
A security filter can be created using the Security Filter Manager. To access
the Security Filter Manager, in Desktop from the Administration menu,
select Projects and then choose Security Filter Manager. For steps to create
a security filter or assign a security filter to a user or group, see the Desktop
online help.
For a user to create security filters, the user must have the following
privileges:
For details on using security filters with metric levels, see Security filters and
metric levels, page 80.
When this user executes a simple report with the following definition, he sees
the results below:
• Filter: Empty
Only the items in the TV subcategory are returned, even though the report filter did not qualify on any attribute.
If this user executes another report with the following definition, he gets
different results:
• Template: Category in the rows, Revenue in the columns, and Year on the
page by axis
• Filter: Empty
Some advanced security schemes may require additional rules for applying
security filters to metrics that have specific level properties defined, that is,
level metrics. Security filters support Top and Bottom properties to address
these requirements.
This behavior can be modified somewhat through the use of Top and Bottom
properties. Note that a security filter has these parts:
• Filter expression: specifies the subset of the data that a user can analyze.
• Top range attribute: specifies the highest level of detail that the security
filter allows the user to view. If a Top level is specified, the security filter
expression is NOT raised to any level above the Top level.
• Bottom range attribute: specifies the lowest level of detail that the
security filter allows the user to view. If this is not specified, the security
filter can view every level lower than the specified top range attribute, as
long as it is within the qualification defined by the filter expression.
The Top and Bottom range attributes can be set at the same level.
For example, consider users executing the report below. The template
consists of Category, Subcategory, and Item on the rows. In the columns are
three metrics: Revenue; Subcategory Revenue, which is defined with
Report 1
As shown in the example above, Item-level detail is displayed for only the items within the TV subcategory. The Subcategory Revenue is displayed for all
items within the TV subcategory. The Category Revenue is displayed for all
items in the Category, including items that are not part of the TV
subcategory. However, only the Electronics category is displayed. This
illustrates how the security filter Subcategory=TV is raised to the category
level such that Category=Electronics is the filter used with Category
Revenue.
Report 2
As shown in the above example, the Category Revenue is displayed for only
the items within the TV subcategory. The security filter Subcategory=TV is
not raised to the category level, because Category is a level above the Top
level of Subcategory.
Report 3
As shown in the example above, Item-level detail is not displayed, since Item
is a level below the bottom level of Subcategory. Instead, data for the entire
Subcategory is shown for each item. Data at the Subcategory level is
essentially the lowest level of granularity the user is allowed to see.
To further illustrate the effect of the Top and Bottom properties used in the
above reports, we provide the following:
• The diagram below illustrates the data access for a user with no Top or
Bottom Range Attribute defined, as in Report 1. At the level of the
security filter (=Subcategory) and below, the user cannot see data outside
his or her security filter. Above the level of the security filter, the user can
see data outside the security filter if it is in a metric with absolute filtering
for that level. Even in this case, the user sees only data for the Category in
which his or her security filter is defined.
• The diagram below illustrates the effect of a defined Top Range Attribute
on the security filter=TV, as in Report 2. In this case, the user cannot see
any data outside of his or her security filter. This is true even at levels
above the Top level, regardless of whether metrics with absolute filtering
are used.
1 No security filter is defined at the user level, but the user is a member of
multiple groups for which security filters are defined.
2 A security filter is defined at the user level, and the user is also a member
of a group or groups for which security filters are defined.
Regardless of whether security filters are defined at the user or the group
level, the SQL Engine generates an “OR” in SQL between the security filters if
they are related, and an “AND” if they are unrelated. Filters are considered
related if the attributes they derive from belong in the same hierarchy, for
example, Country and Region, or Year and Month. They are considered
unrelated if they do not belong in the same hierarchy, for example, Product
and Year. Below are two examples to illustrate this behavior in the security
filter model.
User1 (U1) has no security filter but belongs to the user groups described
above. When the security filter of User1 is merged with the ones of these
groups, the following behavior is expected of the SQL Engine:
U1 belongs to G2 and G4, so the resulting security filter is G2 OR G4.
User2 (U2) has a security filter of Month = April, and belongs to the user groups described above. When the security filter of User2 is merged with the ones of the groups, the following behavior is expected of the SQL Engine.
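To make these rules concrete, suppose a user inherits the hypothetical filters Country = 'USA' and Region = 'Northeast' from two groups (related, because Country and Region belong to the same hierarchy) and Year = 2009 from a third group (unrelated to the other two). The SQL Engine would combine them along these lines (a simplified sketch with illustrative column names):
WHERE (a11.COUNTRY_NAME = 'USA'
       OR a11.REGION_NAME = 'Northeast')  -- related filters: OR
  AND a12.YEAR_ID = 2009                  -- unrelated filter: AND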
A system prompt is a special type of prompt that does not require an answer
from the user; instead it is answered automatically. In the case of the User
Login system prompt, the prompt is answered automatically with the login
name of the user who runs the report.
The User Login prompt ?[User Login] can be used to insert the user’s
login name into any filter, security filter, metric expression, or anywhere a
prompt can be used. For example, consider how ?[User Login] is used in
the following sample filter expressions:
Use the User Login system prompt to apply a single security filter to all users
in a group. For example, to restrict managers so that they can only view data
on the employees they supervise, use the User Login prompt in the form
Manager = ?[User Login]. Then attach the prompt to a security filter
and assign the security filter to the Manager group. When a manager named
John Smith executes a report, the security filter generates SQL for the
condition Manager = ‘John Smith’ and only John Smith’s employees’
data is returned.
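In the generated report SQL, this qualification appears in the WHERE clause along these lines (a sketch with illustrative table and column names):
SELECT a11.EMPLOYEE_NAME, SUM(a11.SALARY)
FROM EMPLOYEE_FACT a11
WHERE a11.MANAGER = 'John Smith'  -- from Manager = ?[User Login]
GROUP BY a11.EMPLOYEE_NAME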
Use the User Login system prompt to implement security filter functionality
at the report level, by defining a report filter with a system prompt. For
example, use the User Login prompt in the form Manager = ?[User Login] to define a report filter. Include this filter in certain reports; when one of these reports is executed, the prompt causes the report to return only the data for the manager who is logged in.
Security views
Most databases provide a way to restrict access to data. For example, a user
may be able to access only certain tables or he may be restricted to certain
rows and columns within a table. The subset of data available to a user is
called the user’s security view.
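For example, on many databases a security view can be defined and granted with standard SQL, as in the following sketch, which restricts one account to Northeast rows of a hypothetical fact table (names are illustrative, and exact syntax depends on your RDBMS):
CREATE VIEW FACT_SALES_NE AS
  SELECT * FROM FACT_SALES
  WHERE REGION = 'Northeast';
GRANT SELECT ON FACT_SALES_NE TO JSMITH;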
Security views are often used when splitting fact tables by columns and
splitting fact tables by rows (discussed below) cannot be used. The rules that
determine which rows each user is allowed to see typically vary so much that
users cannot be separated into a manageable number of groups. In the
extreme, each user is allowed to see a different set of rows.
Note that restrictions on tables, or rows and columns within tables, may not
be directly evident to a user. However, they do affect the values displayed in a
report. You need to inform users as to which data they can access so that they
do not inadvertently run a report that yields misleading final results. For
example, if a user has access to only half of the sales information in the data
warehouse but runs a summary report on all sales, the summary reflects only
half of the sales. Reports do not indicate the database security view used to
generate the report.
You can split fact tables by rows to separate a logical data set into multiple
physical tables based on values in the rows (this is also known as table
partitioning). The resultant tables are physically distinct tables in the data
warehouse, and security administration is simple because permissions are
granted to entire tables rather than to rows and columns.
If the data to be secured can be separated by rows, then this may be a useful
technique. For example, suppose a fact table contains the key Customer ID,
Address, Member Bank and two fact columns, as shown below:
You can split the table into separate tables (based on the value in Member
Bank), one for each bank: 1st National, Eastern Credit, and so on. In this
example, the table for 1st National bank would look like this:
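As a sketch of the technique (table and account names are hypothetical, and CREATE TABLE ... AS syntax varies by RDBMS), the rows for each bank are moved into a physical table of their own, and access is granted per table:
CREATE TABLE FACT_FIRST_NATIONAL AS
  SELECT CUSTOMER_ID, CUSTOMER_ADDRESS, MEMBER_BANK,
         TRANSACTION_AMOUNT, CURRENT_BALANCE
  FROM FACT_CUSTOMER
  WHERE MEMBER_BANK = '1st National';
GRANT SELECT ON FACT_FIRST_NATIONAL TO FIRST_NATIONAL_USERS;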
In most RDBMSs, fact tables split by rows are invisible to system users. Although there are many physical tables, the system "sees" one logical fact table.
Splitting fact tables by rows for security reasons should not be confused with the support that Intelligence Server provides for split fact tables through partition mapping, which addresses performance rather than security.
You can split fact tables by columns to separate a logical data set into
multiple physical tables by columns. If the data to be secured can be
separated by columns, then this may be a useful technique.
Each new table has the same primary key, but contains only a subset of the
fact columns in the original fact table. Splitting fact tables by columns allows
fact columns to be grouped based on user community. This makes security
administration simple because permissions are granted to entire tables
rather than to columns. For example, suppose a fact table contains the key
labeled Customer ID and fact columns as follows:
You can split the table into two tables, one for the marketing department and
one for the finance department. The marketing fact table would contain
everything except the financial fact columns as follows:
Customer ID | Customer Address | Member Bank
The second table used by the financial department would contain only the
financial fact columns but not the marketing-related information as follows:
Customer ID | Transaction Amount ($) | Current Balance ($)
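As a sketch of this technique (hypothetical names again; CREATE TABLE ... AS syntax varies by RDBMS), each department receives a table containing only its own columns, and SELECT rights are granted per table:
CREATE TABLE FACT_MARKETING AS
  SELECT CUSTOMER_ID, CUSTOMER_ADDRESS, MEMBER_BANK
  FROM FACT_CUSTOMER;
CREATE TABLE FACT_FINANCE AS
  SELECT CUSTOMER_ID, TRANSACTION_AMOUNT, CURRENT_BALANCE
  FROM FACT_CUSTOMER;
GRANT SELECT ON FACT_MARKETING TO MARKETING_USERS;
GRANT SELECT ON FACT_FINANCE TO FINANCE_USERS;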
For example, suppose you want to merge UserB into UserA. In this case, UserA is referred to as the destination user.
When you open the User Merge Wizard and select a project source,
the wizard locks that project configuration. Other users cannot change
any configuration objects until you close the wizard. For more
information about locking and unlocking projects, see Locking
projects, page 303.
You can also merge users in batches if you have a large number of users to
merge. Merging in batches can significantly speed up the merge process.
Batch-merging is an option in the User Merge Wizard. Click Help for details
on setting this option.
For example, if UserA has the Web user privilege and UserB has the Web
user and Web Administration privileges, after the merge, UserA has both
Web user and Web Administration privileges.
The User Merge Wizard automatically merges all of a user’s or group’s group
memberships. Before the merge, each user has a distinct set of group
memberships. After the merge, all group memberships that were assigned to
UserB are combined with those of the destination user, UserA. This
combination is performed as a union. That is, group memberships are not
removed for either user.
The User Merge Wizard automatically merges all of a user’s or group’s profile
folders. Before the merge, UserA and UserB have separate and distinct user
profile folders. After UserB is merged into UserA, only UserA exists; her
profile contains the profile folder information from both UserA and UserB.
The User Merge Wizard automatically merges all of a user’s or group’s object
ownerships and access control lists (ACLs). Before the merge, the user to be
merged, UserB, owns the user objects in her profile folder and also has full
control over the objects in the access control list. After the merge, ownership
and access to the merged user’s objects are granted to the destination user,
UserA. The merged user is removed from the object’s ACL. Any other users
that existed in the ACL remain in the ACL. For example, before the merge,
UserB owns an object that a third user, UserC has access to. After the merge,
UserA owns the object, and UserC still has access to it.
The User Merge Wizard does not automatically merge a user’s or group’s
security roles. To merge them, you must select the Security Roles check box
on the Merge Options page in the wizard. Before the merge, both users have
unique security roles for a given project. After the merge, the destination
user profile is changed based on the following rules:
• If the destination user has no security role for a particular project, the
user inherits the role from the user to be merged.
• If you are merging multiple users into a single destination user and
each of the users to be merged has a security role, then the destination
user takes the security role of the first user to be merged. If the
destination user also has a security role, the existing security role of
the destination user is kept.
The User Merge Wizard does not automatically merge a user’s or group’s
security filters. To merge them, you must select the Security Filters check
box on the Merge Options page in the wizard. When merging security filters,
the wizard follows the same rules as for security roles, described above.
The User Merge Wizard does not automatically merge a user’s or group’s
database connection maps. To merge them, you must select the Connection
Mapping check box on the Merge Options page in the wizard. When merging
database connection mappings, the wizard follows the same rules as for
security roles and security filters, described above.
Merging schedules
The User Merge Wizard does not automatically merge a user’s or group’s
schedules. To merge user schedules, you must select the Schedules check
box on the Merge Options page in the wizard. The wizard follows the rules
listed below.
If the user to be merged has a schedule request for a particular report and:
• The destination user does not have the schedule request for the same
report, a new schedule request for that report is created for the
destination user.
• The destination user has a schedule request for the same report, then
nothing changes.
• The destination user has a schedule request for a different report, a new
schedule request is created like the source user’s (according to the first
bullet above) and the existing schedule for the different report is
preserved. The destination user now has two report schedules.
For a description of how the User Merge Wizard merges these optional
properties, see each individual property’s section in How users and
groups are merged, page 92.
4 Specify whether you wish to have the wizard select the users/groups to
merge automatically (you can verify and correct the merge candidates), or
if you wish to manually select them.
5 In the User Merge Candidates page, select the destination users or groups
and click > to move them to the right-hand side.
6 Select the users or groups to be merged and click > to move them to the
right-hand side. They display below the selected destination user or
group.
7 On the Summary page, review your selections, and click Finish. The users
or groups are merged.
Introduction
Authentication is the process by which the system identifies the user. In most
cases, a user provides a login ID and password which the system compares to
a list of authorized logins and passwords. If they match, the user is able to
access certain aspects of the system, according to the access rights and
application privileges associated with the user.
Modes of authentication
Several authentication modes are supported in the MicroStrategy
environment. The main difference between the modes is the authentication
authority used by each mode. The authentication authority is the system that
verifies and accepts the login/password credentials provided by the user.
1 In Desktop, from the Tools menu, select Project Source Manager. The
Project Source Manager opens.
2 Select the appropriate project source and click Modify. The Project
Source Manager for that project source opens.
3 On the Advanced tab, select the appropriate option for the default
authentication mode that you want to use.
4 Click OK twice. The Project Source Manager closes and the specified
authentication mode is now the default for that project source.
By default, all users connect to the data warehouse using one RDBMS login
ID, although you can change this using Connection Mapping. For more
information, see Connecting to the data warehouse, page 9. In addition,
standard authentication is the only authentication mode that allows a user or
system administrator to change or expire MicroStrategy passwords.
Password policy
You can configure the following password policy settings:
• The number of past passwords that the system remembers, so that users cannot use the same password again
• The minimum number of special characters, that is, symbols, that the password must contain
The expiration settings are made in the User Editor and can be set for each
individual user. The complexity and remembered password settings are
made in the Security Policy Settings dialog box, and affect all users. For
detailed information about configuring these settings, see the Desktop Help.
(From within Desktop, press F1.)
1 In Desktop, open the Project Source Manager, and, on the Advanced tab,
select Use login ID and password entered by the user (standard
authentication). (This is the default setting.)
3 In Desktop, create a database instance for the data warehouse and assign
it a default database login. This is the RDBMS account that will be used to
execute reports from all users.
This dynamically created guest user is not the same as the “Guest”
user which is visible in the User Manager.
By default, guest users have no privileges; you must assign this group any
privileges that you want the guest users to have. Privileges that are grayed
out in the User Editor are not available by default to a guest user. Other than
the unavailable privileges, you can determine what the Guest user can and
cannot do by modifying the privileges of the Public/Guest user group and by
granting or denying it access to objects. For more information, see
Controlling access to functionality: Privileges, page 61 and Controlling
access to objects: Permissions, page 53.
All objects created by guest users must be saved to public folders and are
available to all guest users. Guest users may use the History List, but their
messages in the History List are not saved and are purged when the guest
users log out.
1 In Desktop, log into the project source with a user that has administrative
privileges.
3 From the File menu, select Properties. The Properties - project source
name dialog box opens.
4 In the Security tab, click Add. The Select Desktop Users and Groups
dialog box opens.
7 Click OK. The Select Desktop Users and Groups dialog box closes.
If you use database authentication, for security reasons MicroStrategy recommends that you use the setting Create caches per database login. This ensures that users who execute their reports using different database login IDs cannot use the same cache. You can set this in the Project Configuration Editor, in the Caching: Result Caches: Creation category.
This is done anonymously because the user has not yet logged in to a
specific project. Because a warehouse database is not associated with the
project source itself, users are not authenticated until they select a project
to use. For more information about anonymous authentication, including
instructions on enabling it for a project source, see Implementing
anonymous authentication, page 102.
2 The user selects a project, and then logs in to that project using her data
warehouse login ID and password. She is authenticated against the data
warehouse database associated with that project.
3 Assign a security role to the Public/Guest group for each project to which
you want to provide access (see Managing security roles, page 65).
When using LDAP authentication, LDAP users and groups are either linked
directly to MicroStrategy users and groups, or they are imported into the
MicroStrategy metadata. For information on the difference between
importing and linking LDAP users and groups and steps to implement each
setup in MicroStrategy, see Importing or linking LDAP users and groups in
MicroStrategy, page 118.
You can also set up LDAP authentication for MicroStrategy Office. For
information, see the MicroStrategy Office Guide.
3 The authentication user searches the LDAP directory for the user who is
logging in via Desktop or MicroStrategy Web, based on the DN of the user
logging in.
4 If this search successfully locates the user who is logging in, the user’s
LDAP group information is retrieved.
Intelligence Server requires that the LDAP SDK supports the following:
• LDAP v. 3
• SSL connections
• 64-bit architecture on UNIX and Linux platforms
In order for LDAP to work properly with MicroStrategy Intelligence Server Universal, the 64-bit LDAP libraries must be used.
The following image shows how behavior of the various elements in an LDAP
configuration affects other elements in the configuration.
1: The behavior between Intelligence Server and the LDAP SDK varies
slightly depending on the LDAP SDK used. The MicroStrategy readme
lists recommendations.
2: The behavior between the LDAP SDK and the LDAP server is identical
no matter which LDAP SDK is used.
MicroStrategy recommends that you use the LDAP SDK vendor that
corresponds to the operating system vendor on which Intelligence Server is
running in your environment. Specific recommendations are listed in the
MicroStrategy readme, with the latest set of certified and supported LDAP
SDKs, references to MicroStrategy Tech Notes with version-specific details,
and SDK download location information.
To configure Intelligence Server to use a specific SDK and DLLs, see the
Intelligence Server Configuration Editor: General section in the
MicroStrategy Desktop online help.
1 Download the LDAP SDK DLLs onto the machine where Intelligence
Server is installed.
• Windows environment: Add the path of the LDAP SDK libraries to the
system environment variable so that Intelligence Server can locate
them.
2 Add Write privileges to the LDAP.sh file by typing the command chmod
u+w LDAP.sh and then pressing ENTER.
3 Open the LDAP.sh file in a text editor and add the library path to the
MSTR_LDAP_LIBRARY_PATH environment variable. For example:
MSTR_LDAP_LIBRARY_PATH='/path/LDAP/library'
It is recommended that you store all libraries in the same path. If you have several paths, you can add all paths to the MSTR_LDAP_LIBRARY_PATH environment variable and separate them by a colon (:). For example:
MSTR_LDAP_LIBRARY_PATH='/path/LDAP/library:/path/LDAP/library2'
4 Remove Write privileges from the LDAP.sh file by typing the command
chmod a-w LDAP.sh and then pressing ENTER.
Setting up LDAP SDK connectivity, page 108. The steps for defining the
required connection parameters are as follows:
4 Type the information for the Host field and choose the appropriate Port.
• Host
The LDAP host is either the host machine name or IP address of the
host machine of the LDAP server.
• Port
The LDAP port is the port number of the LDAP server. Port 389 is the
default when connecting with clear text, and port 636 is the default for
SSL. However, the LDAP port can be set to a different number than
the default. Confirm the accurate port number with your LDAP
administrator.
If you will be implementing SSL encryption, use the following steps to set
it up:
a Obtain a valid certificate from your LDAP server and save it on the
machine where Intelligence Server is installed.
• Novell: Provide the path to the certificate, including the file name.
• IBM: Use Java GSKit 7 to import the certificate, and provide the
key database name with full path, starting with the home
directory.
• Open LDAP: Provide the path to the directory that contains the
CA certificate file cacert.pem, the server certificate file
servercrt.pem, and the server certificate key file
serverkey.pem.
• HP-UX: Provide the path to the certificate. Do not include the file
name.
d Select the appropriate settings for the LDAP server vendor name,
connectivity driver, connectivity files, and Intelligence Server
platform. For details on setting these connection parameters, see
Setting up LDAP SDK connectivity, page 108.
• Authentication user
8 Click OK. The Configuration Editor closes and the parameters are saved.
3 Click OK to accept your changes and close the Project Source Manager.
2 From the Administration menu, select Server, and then select LDAP
Connectivity Wizard. The LDAP Connectivity Wizard opens.
3 Step through the pages of the wizard. On the Summary page, click Finish
to create and save your LDAP connection.
Upon completing the LDAP Connectivity Wizard you are prompted to test
the LDAP connection. MicroStrategy recommends that you test the
connection to catch any errors with the connection parameters you have
provided.
To set the search parameters, you enter these parameters in the Intelligence
Server Configuration Editor, in the User Search Filter and Group Search
Filter dialog boxes. To access the Intelligence Server Configuration Editor,
right-click a project source. Detailed steps for how to enter this information can be found in the Desktop online help.
If you do not provide the search root, the user search filter, and the group search filter, searches of the LDAP directory might perform poorly. Highest level to start an LDAP search: Search root, page 114, provides examples of these parameters as well as additional details of each parameter and some LDAP server-specific notes.
To search effectively, Intelligence Server must know where to start its search.
When setting up LDAP authentication, you indicate a search root
Distinguished Name to establish the directory location from which
Intelligence Server starts all user and group searches. If this search root is
not set, Intelligence Server searches the entire LDAP directory.
The following diagram and table present several examples of possible search
roots based on how users might be organized within a company and within
an LDAP directory. The diagram shows a typical company’s departmental
structure. The table describes several user import scenarios based on the
diagram.
The following table, based on the diagram above, provides common search
scenarios for users to be imported into MicroStrategy. The search root is the
root to be defined in MicroStrategy for the LDAP directory.
To include all users and groups from Technology and Operations but not Consultants, set the search root to Departments, with an exclusion clause in the User/Group search filter to exclude users who belong to Consultants.
For some LDAP vendors, the search root cannot be the LDAP tree’s root. For
example, both Microsoft Active Directory and Sun ONE require a search to
begin from the domain controller RDN (dc). The image below shows an
example of this type of RDN, where “dc=labs, dc=microstrategy, dc=com”:
The search root parameter searches for users in the “leaves” of the
“tree” who are all registered within a single domain. If your LDAP
directory has multiple domains for different departments, see
MicroStrategy TN5303-8X-2887.
Once Intelligence Server locates the user in the LDAP directory, the search
returns the user’s Distinguished Name, and the password entered at user
login is verified against the LDAP directory. Intelligence Server uses the
authentication user to access, search in, and retrieve the information from
the LDAP directory.
Using the user’s Distinguished Name, Intelligence Server searches for the
LDAP groups that the user is a member of. You must enter the group search
filter parameters separately from the user search filter parameters (see
About group search filters, page 117).
Depending on your LDAP server vendor and your LDAP tree structure, you
may need to try different attributes within the search filter syntax above. For
example, (&(objectclass=person) (uniqueID=#LDAP_LOGIN#)),
where uniqueID is the LDAP attribute name your company uses for
authentication.
The group search filter is generally in one of the following forms (or the
following forms may be combined, using a pipe | symbol to separate the
forms):
• (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_MEMBER_LOGIN_ATTR=#LDAP_LOGIN#))
• (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_MEMBER_DN_ATTR=#LDAP_DN#))
• (&(objectclass=LDAP_GROUP_OBJECT_CLASS)(gidNumber=#LDAP_GIDNUMBER#))
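For example, a combined filter that covers both a DN-based group object class and a login-based one might look like the following (the object classes and member attributes shown are illustrative; your LDAP schema may differ):
(|(&(objectclass=groupOfNames)(member=#LDAP_DN#))(&(objectclass=posixGroup)(memberUid=#LDAP_LOGIN#)))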
The group search filter forms listed above have the following placeholders:
You can implement specific search patterns by adding additional criteria. For
example, you may have 20 different groups of users, of which only five
groups will be accessing and working in MicroStrategy. You can add
additional criteria to the group search filter to import only those five groups
into MicroStrategy. To see the detailed process of importing LDAP users and
groups into MicroStrategy, refer to Importing or linking LDAP users and
groups in MicroStrategy, page 118.
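For example, to import only five named groups, the group search filter can qualify on each group name, as in the following sketch (the group names are hypothetical, and LDAP_GROUP_OBJECT_CLASS stands for your server's group object class):
(&(objectclass=LDAP_GROUP_OBJECT_CLASS)(|(cn=Finance)(cn=Marketing)(cn=Sales)(cn=Operations)(cn=Executives)))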
Importing users and groups at login:
• Benefits: Users and groups are created in the metadata. Users and groups can be assigned additional privileges and permissions in MicroStrategy. Users have their own inboxes and personal folders in MicroStrategy.
• Drawbacks: In environments that have many LDAP users, importing can quickly fill the metadata with these users and their related information. Users and groups may not have the correct permissions and privileges set up when they are initially imported into MicroStrategy.
Linking users and groups without importing:
• Benefits: For environments that have many LDAP users, linking avoids filling the metadata with users and their related information. You can use Command Manager to automate the linking process using scripts. See the online help for details.
• Drawbacks: Privileges in MicroStrategy are generally limited by default to running reports, although an administrator can allow additional privileges. Users do not have their own inboxes and personal folders in MicroStrategy.
When you import LDAP users and groups in MicroStrategy, all authenticated
LDAP users and groups are imported into the MicroStrategy metadata,
which means that a physical MicroStrategy user is created within the
MicroStrategy metadata. For further details about and steps for
implementing this option, see Importing LDAP users and groups into
MicroStrategy, page 119.
Another option is to link LDAP users and groups to users and groups in
MicroStrategy (see Linking users and groups without importing, page 130).
This helps keep the metadata from filling with data related to the users.
However, it limits the privileges available to the linked users and groups.
If you choose to not import or link LDAP users and groups, no LDAP users or
groups are imported into the metadata when LDAP authentication occurs;
instead, LDAP users can log into MicroStrategy but receive the privileges of
the MicroStrategy Public/Guest group. For details on this authentication
option, see Allowing anonymous/guest users with LDAP authentication,
page 134.
You can choose to import LDAP users and groups at login, in a batch process,
or a combination of the two. For information on setting up user and group
import options, see the following sections:
When an LDAP user is imported into MicroStrategy, you can also choose to
import that user’s LDAP groups. If a user belongs to more than one group, all
the user’s groups are imported and created in the metadata. Imported LDAP
groups are created within MicroStrategy’s LDAP Users folder and in
MicroStrategy’s User Manager.
LDAP users and LDAP groups are all created within the MicroStrategy
LDAP Users group at the same level. While the LDAP relationship
between a user and any associated groups exists in the MicroStrategy
metadata, the relationship is not visually represented in
MicroStrategy Desktop. For example, looking in the LDAP Users
folder in MicroStrategy immediately after an import or
synchronization, you might see the following list of imported LDAP
users and groups:
Removing a user from the LDAP directory does not affect the user's presence in the MicroStrategy metadata. Deleted LDAP users are not automatically deleted from the MicroStrategy metadata during synchronization. You can revoke the user's privileges in MicroStrategy, or remove the user manually.
You can choose to import users and their associated groups when a user logs
in to MicroStrategy for the first time. When an LDAP user logs in to
MicroStrategy for the first time, that user is imported into MicroStrategy and
a physical MicroStrategy user is created in the MicroStrategy metadata. Any
groups associated with that user that are not already in MicroStrategy are
also imported and created in the metadata.
The user names, user logins, and group names that are imported into
MicroStrategy depend on the LDAP attributes you select to import. To
designate the LDAP attributes that MicroStrategy will import, see the
following sections:
• Selecting how many nested groups to import with the user, page 128
You can choose to import a list of users and their associated groups in batch.
The list of users and groups are returned from user and group searches on
your LDAP directory. MicroStrategy users and groups are created in the
MicroStrategy metadata for all imported LDAP users and groups.
Users imported into MicroStrategy are also given their own inboxes and
personal folders.
The user names, user logins, and group names that are imported into
MicroStrategy depend on the LDAP attributes you select to import. For more
information see the following sections:
• Selecting how many nested groups to import with the user, page 128
You should contact your LDAP administrator for the proper user search filter syntax. A user search filter is generally of the following form:
(&(objectclass=LDAP_USER_OBJECT_CLASS)(LDAP_LOGIN_ATTR=SEARCH_STRING))
The user search filter form given above has the following placeholders:
• SEARCH_STRING indicates the search criteria for your user search filter.
You must match the correct LDAP attribute for your search filter. For
example, you can search for all users with an LDAP user login that begins
with the letter h by entering (&(objectclass=person)(cn=h*)).
Depending on your LDAP schema, you may need a different login attribute, for example: (&(objectclass=person)(uniqueID=SEARCH_STRING)), where uniqueID is the LDAP attribute name your company uses for authentication.
For information on how and where to enter your user search filters for
importing and synchronizing users in batch, see To allow users and groups
to be imported in batch, page 122.
Enter a group search filter to return a list of groups to import in batch. You
should contact your LDAP administrator for the proper group search filter
syntax. A group search filter is generally of the following form:
(&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_GROUP_ATTR=SEARCH_STRING))
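For instance, to return all groups whose common name begins with the letter M, you might enter (&(objectclass=groupOfNames)(cn=M*)), where groupOfNames is an illustrative group object class; use the object class appropriate to your LDAP server.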
The group search filter form given above has the following placeholders:
For information on how and where to enter your group search filters for
importing and synchronizing groups in batch, see To allow users and groups
to be imported in batch, page 122.
When importing LDAP users as MicroStrategy users, you must choose which
LDAP attribute is imported as the MicroStrategy user login. The user login is
what a user must enter in conjunction with the user’s password when logging
in to MicroStrategy. The user login is different from the user name; see
Selecting the LDAP attribute to import as user name, page 126
3 Expand the LDAP category, and then select User/Group Import. The
User/Group Import options are displayed.
4 In the Import user login as area, select one of the following LDAP
attributes to import as the user login:
• User login: Imports the user’s LDAP user login as the MicroStrategy
user login.
• Other: You can provide a different LDAP attribute than those listed
above to be imported and used as the MicroStrategy user login. Your
LDAP administrator can provide you with the appropriate LDAP
attribute to be used as the user login.
If you type a value in the Other field, ensure that the authentication user contains a valid value for the same attribute specified in the Other field. For more information on the authentication user, see Authentication user, page 112.
When importing LDAP users as MicroStrategy users, you must choose which
LDAP attribute is imported as the MicroStrategy user name. The user name
is the name displayed and associated with a user login.
3 Expand the LDAP category, and then select User/Group Import. The
User/Group Import options are displayed.
4 In the Import user name as area, select one of the following LDAP
attributes to import as the user name:
• User name: Imports the user’s LDAP user name as the MicroStrategy
user name.
• Other: You can provide a different LDAP attribute than those listed
above to be imported and used as the MicroStrategy user name. Your
LDAP administrator can provide you with the appropriate LDAP
attribute to be used as the user name.
If you type a value in the Other field, ensure that the
authentication user contains a valid value for the same attribute
specified in the Other field. For more information on the
authentication user, see Authentication user, page 112.
When importing LDAP groups into MicroStrategy groups, you must choose
which LDAP attribute is imported as the MicroStrategy group name. The
group name is the name displayed and associated with a group.
3 Expand the LDAP category, and then select User/Group Import. The
User/Group Import options are displayed.
4 In the Import group name as area, select one of the following LDAP
attributes to import as the group name:
• Other: You can provide a different LDAP attribute than those listed
above to be imported and used as the MicroStrategy group name.
Your LDAP administrator can provide you with the appropriate LDAP
attribute to be used as the group name.
If you type a value in the Other field, ensure that the
authentication user contains a valid value for the same attribute
specified in the Other field.
LDAP users often have email addresses associated with them. If you have a
license for MicroStrategy Distribution Services, then when you import LDAP
users, either in a batch or at login, you can import these email addresses as
contacts associated with those users. For information about Distribution
Services, see Scheduling deliveries to email, file, and printer: Distribution
Services, page 370.
MicroStrategy 9 imports only the primary email address for each
LDAP user.
3 Expand the LDAP category, and then select Import Options. The Import
options are displayed.
5 Select whether to use the default LDAP email address attribute of mail,
or to use a different attribute. If you want to use a different attribute,
specify it in the text field.
6 From the Device drop-down list, select the email device that the email
addresses are to be associated with.
If the number 2 is selected for this field, when MicroStrategy imports LDAP
groups, it will import the groups associated with each user, up to two levels
above the user. In this case, for User 1, the groups Domestic and Marketing
would be imported. For User 3, Developers and Employees would be
imported.
3 Expand the LDAP category, and then select Filter. The Filter options are
displayed.
A user’s LDAP privileges and security settings are not imported along with a
user. Imported users receive the privileges of the MicroStrategy LDAP Users
group. You can add additional privileges to specific users in the LDAP Users
group using the standard MicroStrategy process in the User Editor. You can
also adjust privileges for the LDAP Users group as a whole.
Similarly, a group’s LDAP privileges and security settings are not imported
along with the group. Group privileges can be modified using the
MicroStrategy Group Editor.
The process of synchronizing users and groups can modify which groups a
user belongs to, and thus modify the user’s privileges and security settings.
For more information, see Synchronizing imported LDAP users and groups,
page 137.
The link between an LDAP user or group and the MicroStrategy user or
group is maintained in the MicroStrategy metadata in the form of a shared
Distinguished Name.
The user’s or group’s LDAP privileges are not linked with the MicroStrategy
user. In MicroStrategy, a linked LDAP user or group receives the privileges of
the MicroStrategy user or group to which it is linked.
• Group: Right-click a group and select Edit. The Group Editor opens.
5 In the field within the LDAP Authentication area, enter the user
distinguished name or the group distinguished name.
6 Click OK to accept the changes and close the User Editor or Group Editor.
If the Import check boxes are not cleared, the linked MicroStrategy
user will be overwritten in the metadata by the imported LDAP user
and cannot be recovered.
See the online help for specific details to access the User Editor and
Intelligence Server Configuration Editor and to use the features available
there.
• Whether the LDAP user account has been disabled, or has been identified
as an intruder and is locked out
If MicroStrategy can verify that none of these restrictions are in effect for this
user account, MicroStrategy performs an LDAP bind, and successfully
authenticates the user logging in. This is the default behavior for users and
groups that have been imported into MicroStrategy.
You can choose to have MicroStrategy verify only the accuracy of the user’s
password with which the user logged in, and not check for additional
restrictions on the password or user account. To support password
comparison authentication, your LDAP server must also be configured to
allow password comparison only.
3 Expand the LDAP category, and then select Server. The Server options
are displayed.
Because guest users are not present in the metadata, there are certain
actions these users cannot perform in MicroStrategy, even if the
associated privileges and permissions are explicitly assigned.
Examples include most administrative actions.
• The user does not have a History List, because the user is not physically
present in the metadata.
• The user cannot create objects and cannot schedule reports.
2 Right-click the project source that you want the anonymous/guest users
to have access to.
5 Log in to MicroStrategy using any LDAP server user login that is not
imported or linked to a MicroStrategy user.
See the Desktop online help for details to access the User Editor and
Intelligence Server Configuration Editor, and to use the features available
there.
A user with user login UserA and password PassA logs in to MicroStrategy at
9:00 A.M. and creates a new report. The user schedules the report to run at
3:00 P.M. later that day. Since there is no report cache, the report will be
executed against the database. At noon, an administrator changes UserA’s
password to PassB. UserA does not log back into MicroStrategy, and at 3:00
P.M. the scheduled report is run with the credentials UserA and PassA,
which are passed to the database. Since these credentials are now invalid, the
scheduled report execution fails.
To prevent this problem, schedule password changes for a time when users
are unlikely to run scheduled reports. For users who use database
passthrough authentication and regularly run scheduled reports, instruct
them to reschedule all reports whenever their passwords change.
For example, a user logs in to their Windows machine with a linked LDAP
login and password and is authenticated. The user then opens MicroStrategy
Desktop and connects to a project source using Windows authentication.
Rather than requiring the user to enter a login and password again,
MicroStrategy uses the login and password that were authenticated when the
user logged in to the machine. During this process, the user account and any
relevant user groups are imported and synchronized for the user.
Prerequisites
3 Expand the LDAP category, and then select Import Options. The Import
Options are displayed.
• User synchronization:
User details such as user name in MicroStrategy are updated with the
latest definitions in the LDAP directory.
• Group synchronization:
• If an LDAP user or group has been given new membership to a group that
has not been imported or linked to a group in MicroStrategy and import
options are turned off, the group cannot be imported into MicroStrategy
and thus cannot apply its permissions in MicroStrategy. For example,
User1 is a member of Group1 in MicroStrategy and both have been
imported into MicroStrategy. Then User1 is removed from Group1 in
LDAP and given membership to Group2, but Group2 is not imported or
linked to a MicroStrategy group. Upon synchronization, User1 is removed
from Group1 and is recognized as a member of Group2, but any
permissions for Group2 are not applied for the user until Group2 is
imported or linked to a MicroStrategy group. In the meantime, User1 is
given the security of the LDAP Users group.
Consider a user named Joe Doe who belongs to a particular group, Sales,
when he is imported into MicroStrategy. Later, he is moved to a different
group, Marketing, in the LDAP directory. The LDAP user Joe Doe and LDAP
groups Sales and Marketing have been imported into MicroStrategy. The
images below show a sample LDAP directory with user Joe Doe being moved
within the LDAP directory from Sales to Marketing. Also, the user name for
Joe Doe is changed to Joseph Doe, and the group name for Marketing is
changed to MarketingLDAP.
The following table describes what happens with users and groups in
MicroStrategy if users, groups, or both users and groups are synchronized.
The table's columns are: Sync Users?, Sync Groups?, User Name After
Synchronization, Group Name After Synchronization, and User Membership
After Synchronization.
Synchronization at login
You can choose to synchronize users and their associated groups when a user
logs into MicroStrategy. By synchronizing users and groups at login, a user’s
user name, group name, and group access are updated every time a user logs
into MicroStrategy.
Batch synchronization
Rather than synchronizing users and their associated groups every time a
user logs in to MicroStrategy, you can choose to synchronize users and
groups in batch. By synchronizing users and groups in batch, users and
groups are synchronized on the schedules you select (see Synchronization
Schedules, page 141).
• Enter search filter for importing list of users: Enter a search filter
that is used to synchronize a list of users in batch. For more
information on defining a user search filter, see Defining a search
filter to return a list of users, page 123.
• Enter search filter for importing list of groups: Enter a search filter
that is used to synchronize a list of groups in batch. For more
information on defining a group search filter, see Defining a search
filter to return a list of groups, page 124.
Synchronization Schedules
If you choose to synchronize users and groups in batch, you can select a
schedule that dictates when LDAP users and groups are synchronized in
MicroStrategy. For more information on creating and using schedules, see
Scheduling jobs and administrative tasks, page 353.
3 Expand the LDAP category, and then select Schedules. The Schedules
options are displayed.
Connection pooling
With connection pooling, you can reuse an open connection to the LDAP
server for subsequent operations. The connection to the LDAP server
remains open even when the connection is not processing any operations
(also known as pooling). This setting can improve performance by removing
the processing time required to open and close a connection to the LDAP
server for each operation.
3 Expand the LDAP category, and then select Server. The Server options
are displayed.
4 Select the check box Use connection pooling to enable the reuse of open
connections to the LDAP server. Clearing this check box causes
MicroStrategy to open and close a connection to the LDAP server for
every operation.
To implement LDAP, you may have multiple LDAP servers that work
together as a cluster.
When a request to open an LDAP connection is made, the LDAP server with
the least amount of load at the time of the request is accessed. The operation
against the LDAP directory can then be completed, and in an environment
without connection pooling, the connection to the LDAP server is closed.
When the next request to open an LDAP connection is made, the LDAP
server with the least amount of load is determined again and chosen.
Troubleshooting
Use the procedures in the rest of this section to enable single sign-on with
Windows authentication in MicroStrategy Web. For high-level steps to
configure these settings, see Steps to enable single sign-on to MicroStrategy
Web using Windows authentication, page 147.
You can also create MicroStrategy users from existing Windows users by
importing either user definitions or group definitions. For more information
on importing users or groups, see the Desktop online help.
There are several configurations that you must make to enable Windows
authentication in MicroStrategy Web. The rest of this section describes how
to properly configure MicroStrategy Web, Microsoft Internet Information
Services (IIS), and the link between Windows users and MicroStrategy users.
Prerequisites
Before continuing with the procedures described in the rest of this section,
you must first set up a Windows domain that contains a domain user name
for each user that you want to allow single sign-on access to MicroStrategy
Web with Windows authentication.
5 Configure each MicroStrategy Web user’s browser for single sign-on. See
Configuring a browser for single sign-on to MicroStrategy Web,
page 151.
3 Select the Directory Security tab, and then under Anonymous access
and authentication control, click Edit. The Authentication Methods
dialog box opens.
If you are using MicroStrategy 8.1.1, use a project source with a
direct (two-tier) connection to map users.
3 Navigate to the MicroStrategy user you want to link a Windows user to.
Right-click the MicroStrategy user and select Edit. The User Editor
opens.
• Click Browse to select the user from the list of Windows users
displayed.
2 Right-click the project source and select Modify Project Source. The
Project Source Manager opens.
There are two ways to enable access to MicroStrategy Web using Windows
authentication. Access can be enabled for the MicroStrategy Web application
as a whole, or it can be enabled for individual projects at the project level.
For steps to enable Windows authentication for all of MicroStrategy Web, see
To enable Windows authentication login for MicroStrategy Web, page 150.
If you want Windows authentication to be the default login mode
for MicroStrategy Web, select the Default option for Windows
Authentication.
4 Click Save.
6 Click Apply.
• For Internet Explorer, you must enable integrated authentication for the
browser, as well as add the MicroStrategy Web server URL as a trusted
site.
• For Firefox, you must add the MicroStrategy Web server URL as a trusted
site. The URL must be listed in the about:config page, in the settings
network.negotiate-auth.trusted-uris and
network.negotiate-auth.delegation-uris.
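For example, if MicroStrategy Web were deployed at the hypothetical URL
https://mstrweb.example.com, the two preferences would be set as follows in
the about:config page:

network.negotiate-auth.trusted-uris      https://mstrweb.example.com
network.negotiate-auth.delegation-uris   https://mstrweb.example.com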
For single sign-on with integrated authentication to work, users must have
user names and passwords that are printable, US-ASCII characters. This
limitation is expected behavior in Kerberos. This limitation is important to
keep in mind when creating a multilingual environment in MicroStrategy.
The third-party products discussed in the table and sections below are
manufactured by vendors independent of MicroStrategy, and the
information provided is subject to change. Refer to the appropriate
third-party vendor documentation for the most current information.
Machine hosting the domain controller: Configure a domain controller with
Microsoft Active Directory:
• To allow users created in a domain to use integrated authentication in
MicroStrategy, you must clear the Account is sensitive and cannot be
delegated authentication option for each user. For information on this
configuration, see Configuring a domain controller and users, page 154.
• If Intelligence Server is run as an application with a particular user
account, you must create a user in the domain with the Account is trusted
for delegation authentication option selected. This user account can then
be used to run Intelligence Server as an application. For information on
this configuration, see Trusting Intelligence Server for delegation,
page 154.
• If Intelligence Server is run as a service, define the Intelligence Server
machine to be trusted for delegation. You can do this by selecting the
Trust computer for delegation authentication option for the host
machine. For information on this configuration, see Trusting Intelligence
Server for delegation, page 154.
• Define the web server host machine for MicroStrategy Web to be trusted
for delegation. You can do this by selecting the Trust computer for
delegation authentication option for the host machine. For information
on this configuration, see Trusting the MicroStrategy Web server host for
delegation, page 155.
UNIX/Linux machine hosting Intelligence Server Universal: If you use
Intelligence Server Universal hosted on a UNIX/Linux machine, you must
install and configure Kerberos 5 on your UNIX/Linux machine. For
information on this configuration, see Configuring Intelligence Server
Universal on UNIX/Linux for Kerberos authentication, page 155.
Machine hosting Internet Information Services (IIS) or other MicroStrategy
Web application server: Enable integrated authentication for IIS, as
described in Enabling integrated authentication for IIS, page 158. To enable
single sign-on authentication to MicroStrategy Web from a Microsoft
Windows machine, you must modify a Windows registry setting
(allowtgtsessionkey), as shown in the sketch after this list. For information
on this configuration, see Enabling session keys for Kerberos security,
page 160.
MicroStrategy Web user's machine: If a MicroStrategy Web user plans to use
single sign-on to log in to MicroStrategy Web, the user must configure their
browser to enable integrated authentication. For information on this
configuration, see Configuring a browser for single sign-on to MicroStrategy
Web, page 164.
Any machine with the required software for the task: In MicroStrategy
Desktop, link a MicroStrategy user to the domain user. For information on
this configuration, see Linking a domain user to a MicroStrategy user,
page 164.
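The allowtgtsessionkey change mentioned above can be made from an
elevated command prompt, as in the sketch below. The registry path shown
is the one commonly documented for this setting; verify it against Enabling
session keys for Kerberos security, page 160, and your Windows version
before applying it:

REM registry path and value as commonly documented; verify for your Windows version
reg add HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters /v allowtgtsessionkey /t REG_DWORD /d 1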
The web server host for MicroStrategy Web must be trusted for delegation so
that it can pass login credentials to enable integrated authentication in
MicroStrategy. You can configure this delegation for the MicroStrategy Web
server machine in your domain controller. You must select the Trust this
computer for delegation to any service (Kerberos only) option for the
MicroStrategy Web server machine.
By default, Kerberos tickets expire after 24 hours. You can use the
kinit command to renew Kerberos tickets. For information on
Kerberos ticket expiration, refer to your third-party Kerberos
documentation.
Install Kerberos 5
You must have Kerberos 5 installed on your UNIX or Linux machine that
hosts Intelligence Server Universal. Your UNIX or Linux operating system
may come with Kerberos 5 installed. If Kerberos 5 is not installed on your
UNIX or Linux machine, refer to Kerberos documentation for steps to install
it.
You must configure a file named krb5.conf. This file is created as part of
your Kerberos 5 installation, and it is stored in the /etc/ directory by
default.
If you move the krb5.conf file to a different directory, you must
update the KRB5_CONFIG environment variable with the new
location. Refer to your Kerberos documentation for steps to modify
the KRB5_CONFIG environment variable.
[libdefaults]
default_realm = DOMAIN_REALM
default_keytab_name = /etc/krb5.keytab
forwardable = true
no_addresses = true
[realms]
DOMAIN_REALM = {
kdc = DC_IPAddress:88
admin_server = DC_Admin_IPAddress:749
}
[domain_realm]
.domain_realm = DOMAIN_REALM
domain_realm = DOMAIN_REALM
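For illustration, the same file filled in for a hypothetical realm
EXAMPLE.COM, with a domain controller at 192.0.2.10, would look like this:

# hypothetical realm and KDC addresses
[libdefaults]
default_realm = EXAMPLE.COM
default_keytab_name = /etc/krb5.keytab
forwardable = true
no_addresses = true
[realms]
EXAMPLE.COM = {
kdc = 192.0.2.10:88
admin_server = 192.0.2.10:749
}
[domain_realm]
.example.com = EXAMPLE.COM
example.com = EXAMPLE.COM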
The steps to configure this file on your UNIX or Linux machine are provided
in the procedure below.
Prerequisites
2 Retrieve the key version number for your Intelligence Server service
principal name, using the command shown below:
kvno MSTRSVRSvc/ISMachineName:ISPort
a ktutil
c wkt krb5.keytab
d exit
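Step b is elided above. For reference, a complete session in MIT Kerberos
ktutil generally follows the pattern sketched below; the principal, key
version number (taken from the kvno command above), and encryption type
are hypothetical examples and must match your environment:

# hypothetical principal, key version number, and encryption type
ktutil
ktutil:  addent -password -p MSTRSVRSvc/ismachine.example.com:34952@EXAMPLE.COM -k 2 -e rc4-hmac
ktutil:  wkt krb5.keytab
ktutil:  exit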
• The Microsoft Analysis Services XMLA Provider must be configured for
HTTP access to support integrated authentication to Analysis Services.
This optional configuration is required to allow user credentials to be
passed to Microsoft Analysis Services. For additional information on
configuring integrated authentication with Microsoft Analysis Services,
see Enabling integrated authentication to Microsoft Analysis Services,
page 167.
Steps to perform this configuration are provided in the procedure below,
which may vary depending on your version of IIS. URLs are provided below
to help you find information on how to enable integrated authentication for
your version of IIS:
• IIS 7: http://www.iis.net/default.aspx?tabid=1#
• IIS 6: http://www.microsoft.com/WindowsServer2003/iis/default.mspx
• IIS 5: http://www.microsoft.com/windowsserver2003/iis/evaluation/overview/previous.mspx#E5B
If you use an application server other than IIS to deploy MicroStrategy Web
Universal, see Enabling integrated authentication for J2EE compliant
application servers, page 161.
3 Select the Directory Security tab, and then under Anonymous access
and authentication control, click Edit. The Authentication Methods
dialog box opens.
The path listed above assumes you have installed MicroStrategy in the
C:\Program Files directory.
Once you locate the krb5.ini file, open it in a text editor. The content
within the file is shown below:
[libdefaults]
default_realm = <DOMAIN REALM>
default_keytab_name = <path to keytab file>
forwardable = true
no_addresses = true
[realms]
<REALM_NAME> = {
kdc = <IP address of KDC>:88
}
[domain_realm]
The capitalization of DOMAIN_REALM and REALM_NAME must
match the capitalization used in the syntax for krb5.ini listed
above. For example, if DOMAIN_REALM is in uppercase, you must
include your domain realm in uppercase.
• <path to keytab file>: The directory path to the keytab file. Keytab
files are part of a Kerberos security system and should be stored in a
secure location.
By default, Kerberos tickets expire after 24 hours. You can use the
kinit command to renew Kerberos tickets. For information on
Kerberos ticket expiration, refer to your third-party Kerberos
documentation.
• For your J2EE compliant application server, add the following JVM
startup arguments:
in the JDK to support your application server) as the value for this
parameter. This file is included in the JDK to support your application
server.
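The specific arguments are elided above. Typical Kerberos-related JVM
system properties for a J2EE application server, offered as an assumption
rather than the exact list for your server, are sketched below; the file
paths are hypothetical. The SpnegoFilter definition that follows belongs in
the web application's web.xml deployment descriptor.

# hypothetical file paths; point these at your krb5 and JAAS login configuration files
-Djava.security.krb5.conf=/etc/krb5.ini
-Djava.security.auth.login.config=/opt/appserver/conf/jaas.conf
-Djavax.security.auth.useSubjectCredsOnly=false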
<filter>
<display-name>SpnegoFilter</display-name>
<filter-name>SpnegoFilter</filter-name>
<filter-class>com.microstrategy.web.filter.SpnegoFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>SpnegoFilter</filter-name>
<servlet-name>mstrWeb</servlet-name>
</filter-mapping>
You must configure the krb5.keytab file. This file is created as part of your
Kerberos 5 installation, and it is stored in the /etc/ directory by default.
The steps to configure this file on your UNIX or Linux machine are provided
in the procedure below.
Prerequisites
• In Microsoft Active Directory, you must define the service principal name
for your application server. The service principal name must be in the
following format:
HTTP/ASMachineName@DOMAIN.
• In Microsoft Active Directory, you must map a user account to the service
principal name for your application server.
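On the domain controller, the service principal name definition and
user-account mapping described in these prerequisites are commonly
performed with the Windows setspn and ktpass utilities. The host, realm,
service account, and keytab file below are hypothetical; confirm the exact
options against your Active Directory documentation:

REM hypothetical host, realm, service account, and keytab file
setspn -A HTTP/asmachine.example.com svcWebApp
ktpass /princ HTTP/asmachine.example.com@EXAMPLE.COM /mapuser EXAMPLE\svcWebApp /pass * /out appserver.keytab /ptype KRB5_NT_PRINCIPAL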
2 Retrieve the key version number for your application server service
principal name, using the command shown below:
kvno HTTP/ASMachineName@DOMAIN
a ktutil
c wkt krb5.keytab
d exit
• For Internet Explorer, you must enable integrated authentication for the
browser, as well as add the MicroStrategy Web server URL as a trusted
site.
To apply security and privileges to a user in MicroStrategy, you must link the
domain user to a MicroStrategy user. This also enables the domain user to be
logged into MicroStrategy projects they have access to without having to type
their login credentials again.
Prerequisites
3 Browse to the MicroStrategy user you want to link a Windows user to.
Right-click the MicroStrategy user and select Edit. The User Editor
opens.
DomainUserName@DOMAIN_REALM
Prerequisites
2 Right-click a project source, and then click Modify Project Source. The
Project Source Manager opens.
If you want integrated authentication to be the default login mode
for MicroStrategy Web, select the Default option for Integrated
Authentication.
Microsoft Analysis Services and Microsoft SQL Server can be used as data
sources for your MicroStrategy applications. Through the use of integrated
authentication, you can allow each user’s credentials to be passed to
Microsoft Analysis Services or Microsoft SQL Server. Information on
configuring this optional support is described below.
• Within the MicroStrategy common files folder (by default, this folder is
stored at C:\Program Files\Common Files\MicroStrategy), locate the file
ODBCConfiguration.ini. Prior to editing this file, create a backup
copy of the current file in case you must revert to previous functionality.
Open this file in a text editor. Locate the following tag:
<INTEGRATED_AUTHENTICATION>DEFAULT</INTEGRATED_AUTHENTICATION>
Change DEFAULT to YES, so that the tag reads as follows:
<INTEGRATED_AUTHENTICATION>YES</INTEGRATED_AUTHENTICATION>
• Create a data source name that uses integrated authentication for the
login ID. For steps to create a data source name using the MicroStrategy
Connectivity Wizard, see the Installation and Configuration Guide.
In this security model, there are several layers. For example, when a user logs
in to Tivoli, Tivoli determines whether or not the user’s credentials are valid.
If the user logs in with valid credentials to Tivoli, the user directory (such as
LDAP) determines whether that valid user can connect to MicroStrategy. The
user’s MicroStrategy privileges are stored within the MicroStrategy Access
Control List (ACL). What a user can and cannot do within the MicroStrategy
application is stored on Intelligence Server in the metadata within these
ACLs. For more information about privileges and ACLs in MicroStrategy, see
Chapter 2, Setting Up User Security.
The distinguished names of users passed from Tivoli and SiteMinder
are URL-decoded by default within MicroStrategy Web before being
passed to Intelligence Server.
You must complete all of the following steps to ensure proper configuration
of Tivoli or SiteMinder and MicroStrategy Web.
You link Tivoli to MicroStrategy Web using a junction, and you link
SiteMinder to MicroStrategy Web using a Web Agent. This link is required to
enable SSO authentication in MicroStrategy Web, as it redirects users from
Tivoli or SiteMinder to MicroStrategy Web.
For steps to create a junction (in Tivoli) or a Web Agent (in SiteMinder),
refer to your Tivoli or SiteMinder documentation.
Once the initial Tivoli/SiteMinder setup is complete, you must enable trusted
authentication in MicroStrategy Web, and establish trust between
MicroStrategy Web and Intelligence Server. This allows the authentication
token to be passed from one system to the other.
If you use Internet Information Services (IIS) as your web server, you must
enable anonymous authentication to the MicroStrategy virtual directory to
support SSO authentication to MicroStrategy Web. This is discussed in
Enabling anonymous authentication for Internet Information Services,
page 175.
2 On the left side of the page, click Default Properties. The Default
Properties page opens.
3 Scroll down to the Login area and, under Login mode, select the
Enabled check box next to Trusted Authentication Request. Also select
Once a trust relationship has been established, you can delete the trust
relationship. For steps, see To delete a trust relationship, page 174.
6 Type a User name and Password in the appropriate fields. The user
must have administrative privileges for MicroStrategy Web.
8 In the Web Server Application field, type a name for the trust
relationship. The name should easily identify the trust relationship. For
example, you can provide the URL to access MicroStrategy Web using
Tivoli as follows:
https://MachineName/JunctionName/MicroStrategy/asp
14 On the left, expand the Web SSO category, and verify that the trusted
relationship is listed in the Trusted Web Application Registration list.
6 Provide your login information in the appropriate fields, and click Delete
trust relationship.
7 Click Save.
If you use Internet Information Services (IIS) as your web server, you must
enable anonymous authentication to the MicroStrategy virtual directory to
support SSO authentication to MicroStrategy Web.
Steps to perform this configuration are provided below, which may vary
depending on your version of IIS. Links are provided below to help you find
information on how to enable anonymous authentication for your version of
IIS:
• IIS 7: http://www.iis.net/default.aspx?tabid=1#
• IIS 6: http://www.microsoft.com/WindowsServer2003/iis/default.mspx
• IIS 5: http://www.microsoft.com/windowsserver2003/iis/evaluation/overview/previous.mspx#E5B
For a Tivoli or SiteMinder user to access MicroStrategy Web, the user must
be granted MicroStrategy privileges. The following flow chart illustrates the
various ways that MicroStrategy users are handled when they log in to Tivoli
or SiteMinder.
• Allowed guest access to MicroStrategy Web. The Tivoli user inherits the
privileges of the Public/Guest group in MicroStrategy. Guest access to
MicroStrategy Web is not necessary for imported or linked Tivoli users.
For steps to perform this configuration, see Enabling guest access to
MicroStrategy Web for Tivoli users, page 180.
• Imported into MicroStrategy. The Tivoli user is imported into
MicroStrategy only if the Tivoli user has not already been imported as or
associated with a MicroStrategy user.
• Security privileges are not imported from Tivoli; these must be defined in
MicroStrategy by an administrator.
As an alternative to importing users, you can link (or associate) Tivoli users
to existing MicroStrategy users to retain the existing privileges and
configurations defined for the MicroStrategy users. Linking Tivoli users
rather than enabling Tivoli users to be imported when they log in to
MicroStrategy Web enables you to assign privileges and other security
settings for the user prior to their initial login.
3 In the folder list on the left, expand Administration, and then expand
User Manager.
5 Right-click the user and select Edit. The User Editor opens.
7 In the Trusted Authentication Request field, type the Tivoli user name
to link to the MicroStrategy user.
8 Click OK.
If you choose to not import or link Tivoli users to a MicroStrategy user, you
can enable guest access to MicroStrategy Web for the Tivoli users. Guest
users inherit their privileges from the MicroStrategy Public/Guest group.
Once all of the preliminary steps have been completed and tested, users may
begin to sign in to MicroStrategy using their Tivoli credentials. Sign-on steps
are provided in the procedure below.
You are logged in to the MicroStrategy project with your Tivoli user
credentials.
If you are prompted to display both secure and non-secure items on the web
page, you can configure your web browser to hide this warning message.
Refer to your web browser documentation regarding this configuration.
Authentication examples
Below are a few examples of how the different methods for user
authentication can be combined with different methods for database
authentication to achieve the security requirements of your MicroStrategy
system. These examples illustrate a few possibilities; other combinations are
possible.
For detailed information about security views, see Security views, page 89.
1 In Desktop, open the Project Source Manager, and on the Advanced tab,
select Use network login ID (Windows authentication) as the
Authentication mode.
3 In the User Editor, select the Authentication tab, and link users to their
respective database user IDs using the Warehouse Login and Password
boxes for each user. For details on each option, click Help.
4 Enable the setting for database execution to use linked warehouse logins
on each project for which you want to use linked warehouse logins for
database execution.
For example, perhaps you are partitioning fact tables by rows, as described in
Splitting fact tables by rows, page 89. You have a user ID for the First
National Bank that only has access to the table containing records for that
bank and another user ID for the Eastern Credit Bank that only has access to
its corresponding table. Depending on the user ID used to log in to the
RDBMS, a different table is used in SQL queries.
Although there are only a small number of user IDs in the RDBMS, there are
many more users who access the MicroStrategy application. When users
access the MicroStrategy system, they log in using their MicroStrategy user
names and passwords. Using connection maps, Intelligence Server uses
different database accounts to execute queries, depending on the user who
submitted the report.
1 In Desktop, open the Project Source Manager, and on the Advanced tab,
select Use login ID and password entered by the user (standard
authentication) as the Authentication mode. This is the default setting.
Introduction
• Named User licenses, page 186, in which the number of users with access
to specific functionality is restricted
• CPU licenses, page 188, in which the number and speed of the CPUs used
by MicroStrategy server products are restricted
For example, the Web Use Filter Editor privilege is a Web Professional
privilege. If you assign this privilege to User1, then Intelligence Server grants
a Web Professional license to User1. If you only have one Web Professional
license in your system and you assign any Web Professional privilege, for
example Web Edit Drilling And Links, to User2, Intelligence Server displays
an error message when any user attempts to log in to MicroStrategy Web.
To fix this problem, you can either change the user privileges to match the
number of licenses you have, or you can obtain additional licenses from
MicroStrategy. License Manager can determine which users are causing the
metadata to exceed your licenses and which privileges for those users are
causing each user to be classified as a particular license type (see Using
License Manager, page 190).
For more information about the privileges associated with each license type,
see List of all privileges, page 895. Each privilege group has an introduction
indicating any license that the privileges in that group are associated with.
Some privileges are included in multiple privilege groups. These
privileges are marked with asterisks, and are listed at the top of
each group's list of privileges.
• Only users who have the Use Desktop privilege in the Desktop
Analyst group are granted Desktop Analyst or Desktop Designer
licenses. Users who do not have the Use Desktop privilege are not
granted either of these licenses, even if they have all other
privileges from these privilege groups.
To verify your Named User licenses, Intelligence Server scans the metadata
repository daily for the number of users fitting each Named User license
type. If the number of licenses for a given type has been exceeded, an error
message is displayed when a user logs in to a MicroStrategy product. Contact
your MicroStrategy account executive to increase your number of Named
User licenses. For detailed information on the effects of being out of
compliance with your licenses, see Effects of being out of compliance with
your licenses, page 188.
For steps to manually verify your Named User licenses using License
Manager, see Auditing your system for the proper licenses, page 192.
You can configure the time of day that Intelligence Server verifies your
Named User licenses.
3 Specify the time in the Time to run license check (24 hr format) field.
CPU licenses
When you purchase licenses in the CPU format, the system monitors the
number of CPUs being used by Intelligence Server in your implementation
and compares it to the number of licenses that you have. You cannot assign
privileges related to certain licenses if the system detects that more CPUs are
being used than are licensed. For example, this could happen if you have
MicroStrategy Web installed on two dual-processor machines (four CPUs)
and you have a license for only two CPUs.
To fix this problem, you can either use License Manager to reduce the
number of CPUs being used on a given machine so it matches the number of
licenses you have, or you can obtain additional licenses from MicroStrategy.
To use License Manager to determine the number of CPUs licensed and, if
necessary, to change the number of CPUs being used, see Using License
Manager, page 190.
To verify your CPU licenses, Intelligence Server scans the network to count
the number of CPUs in use by Intelligence Servers. If the number of CPU
licenses has been exceeded, an error message is displayed when a user logs in
to a MicroStrategy product. Contact your MicroStrategy account executive to
increase your number of CPU licenses. For detailed information on the
effects of being out of compliance with your licenses, see Effects of being out
of compliance with your licenses, page 188.
For steps to manually verify your CPU licenses using License Manager, see
Auditing your system for the proper licenses, page 192.
After the system has been out of compliance for fifteen days, an additional
error message is displayed to all users when they log into a project source,
warning them that the system is out of compliance with the available
licenses. This error message is only a warning, and users can still log in to the
project source.
After the system has been out of compliance for thirty days, Intelligence
Server can no longer be restarted once it is shut down. In addition, if the
system is out of compliance with Named User licenses, the privileges
associated with the out-of-compliance products are disabled in the User
Editor, Group Editor, and Security Role Editor to prevent them from being
assigned to any additional users.
You can check for and manage the following licensing issues:
• More copies of a MicroStrategy product are installed and being used than
you have licenses for.
• More users are using the system than you have licenses for.
• More CPUs are being used with Intelligence Server than you have licenses
for.
In both GUI mode and command line mode, License Manager allows you to:
From this information, you can determine whether you have the
number of licenses that you need. You can also print a report, or
create and view a Web page with this information.
• Trigger a license verification check after you have made any license
management changes, so the system can immediately return to normal
behavior.
If the edition is other than Evaluation, the expiration date has a
value of “Never.”
For detailed steps to perform all of these procedures, view the License
Manager Help (from within License Manager, press F1).
• Windows command line: From the Start menu select Run. Type CMD
and press ENTER. Type malicmgr and press ENTER. License
Manager opens in command line mode, and instructions on how to
use the command line mode are displayed.
To audit your system, perform the procedure below on each server machine
in your system.
In command line mode, the steps to audit licenses vary from those
below. Refer to the License Manager command line prompts to
guide you through the steps to audit licenses.
3 Select the Everyone group and click Audit. A folder tree of the assigned
licenses is listed in the Number of licenses pane.
4 Count the number of licenses per product for enabled users. Disabled
users do not count against the licensed user total, and should not be
counted in your audit.
For detailed information, click Report to create and view XML, HTML,
and CSV reports. You can also have the report display all privileges for
each user based on the license type. To do this, select the Show User
Privileges in Report check box.
6 Total the number of users with each license across all machines.
You must update your license key on all machines where MicroStrategy
products are installed. License Manager updates the license information for
the products that are installed on that machine.
In command line mode, the steps to update your license vary from
those below. Refer to the License Manager command line prompts
to guide you through the steps to update your license.
2 On the License Administration tab, select the Update local license key
option and click Next.
3 Type or paste the new key in the New License Key field and click Next.
If you have one or more products that are licensed based on CPU
usage, the Upgrade window opens, showing the maximum
number of CPUs each product is licensed to use on that machine.
You can change these numbers to fit your license agreement. For
example, if you purchase a license that allows more CPUs to be
used, you can increase the number of CPUs being used by a
product.
4 The results of the upgrade are shown in the Upgrade Results dialog box.
License Manager can automatically request an Activation Code for your
license after you update.
After installation you can specify CPU affinity through the MicroStrategy
Service Manager. This requires administrator privileges on the target
machine.
1 From the Start menu, point to MicroStrategy, then Tools, and then
select Service Manager. Service Manager opens.
3 Click the Options button. The Service Options dialog box opens.
6 Click OK. The Service Options dialog box closes and CPU affinity has
been changed.
If the target machine contains more than one physical processor and the
MicroStrategy license key allows more than one CPU to run Intelligence
Server Universal Edition, you are prompted to provide the number of CPUs
to be deployed. The upper limit is either the number of licensed CPUs or the
physical CPU count, whichever is lower.
This automatic adjustment for CPU affinity attempts to apply the user’s
specified CPU affinity value when it adjusts the system, but it may not always
be able to do so depending on the availability of processors. For example, if
you own two CPU licenses and CPU affinity is manually set to use Processor 1
and Processor 2, the CPU affinity adjustment may reset CPU usage to
Processor 0 and Processor 1 when the system is automatically adjusted.
You can specify CPU affinity either through the MicroStrategy Service
Manager, or by modifying Intelligence Server options. If you want to view
and modify Intelligence Server’s options, it must be registered as a service.
You can register Intelligence Server Universal as a service using the
Configuration Wizard by selecting the Register Intelligence Server as a
Service option; alternatively, you can follow the procedure below.
mstrctl -s IntelligenceServerName rs
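In the command above, IntelligenceServerName is a placeholder for your
server definition name. For example, assuming a hypothetical server
definition named MyServer:

REM MyServer is a hypothetical server definition name
mstrctl -s MyServer rs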
Whenever you change the CPU affinity, you must restart the machine.
Resource sets
A resource set does not have exclusive use of a resource; the same CPU can
be part of several different resource sets.
To fix this problem, either run Intelligence Server from a root account, or do
not assign a resource set to Intelligence Server that contains more CPUs than
your CPU license allows.
Processor sets
A processor set has exclusive use of the CPUs assigned to it. Only the
processes assigned to the processor set can use the processor set's CPUs.
Other processes are not allowed to use any of these CPUs.
A processor set persists beyond the lifetime of the process that created it.
Therefore, a process must delete any processor set it created before shutting
down. For example, if a process creates a processor set with three CPUs and
the process unexpectedly terminates without deleting the processor set it
created, the three CPUs cannot be utilized by any other process until the
system is rebooted or the processor set is manually deleted.
Intelligence Server deletes the processor set before shutting down so that the
related processes do not remain locked. If Intelligence Server terminates
unexpectedly, when restarted it performs a cleanup of the processor set it
had created. However, if Intelligence Server is not restarted immediately
after termination, you may need to manually delete the processor set so the
CPUs are free to be used by other applications.
You can specify an existing processor set for Intelligence Server to use. To do
so, type the following command:
This section describes settings that may interact with CPU affinity, and
provides steps to update CPU affinity in your environment.
IIS versions
CPU affinity can be configured on machines running IIS 6.0 or 7.0. The
overall behavior depends on how IIS is configured. The following cases are
considered:
• Worker process isolation mode: In this mode, the CPU affinity setting
is applied at the application pool level. When MicroStrategy Web CPU
affinity is enabled, it is applied to all ASP.NET applications running in the
same application pool. By default, MicroStrategy Web runs in its own
application pool.
• IIS 5.0 compatibility mode: In this mode, all ASP.NET applications run
in the same process. This means that when MicroStrategy Web CPU
affinity is enabled, it is applied to all ASP.NET applications running on
the Web server machine. A warning is displayed before installation or
before the CPU affinity tool (described below) attempts to set the CPU
affinity on a machine with IIS running in IIS 5.0 compatibility mode.
This is the default mode of operation when the machine has been
upgraded from an older version of Windows.
Both IIS 6.0 and IIS 7.0 support a "Web Garden" mode, in which IIS creates
a configurable number of worker processes, each with affinity to a single
CPU, instead of creating a single process that uses all available CPUs. The
administrator specifies the total number of CPUs that are used. The Web
Garden settings can interact with and affect MicroStrategy CPU affinity.
The Web Garden setting should not be used with MicroStrategy Web.
At runtime, the MicroStrategy Web CPU affinity setting is applied
after IIS sets the CPU affinity for the Web Garden feature. Using these
settings together can produce unintended results.
In both IIS 6.0 and IIS 7.0, the Web Garden feature is disabled by default.
The MAWebAff.exe tool lists each physical CPU on a machine. You can add
or remove CPUs or disable CPU affinity using the associated check boxes.
Clearing all check boxes prevents the MicroStrategy Web CPU affinity setting
from overriding any IIS-related CPU affinity settings.
3 Click Apply to apply the settings without closing the tool, or click OK to
apply settings and close the tool.
Introduction
• The History List is a way of saving report results on a per-user basis. For
more information, see Saving report results: History List, page 233.
You specify settings for all cache types except History List under Caching in
the Project Configuration Editor. History List settings are specified in the
Intelligence Server Configuration Editor.
Result, element, and object caches are created and stored for individual
projects; they are not shared across projects. History Lists are created and
stored for individual users.
Changes to cache settings do not take effect until you stop and restart
MicroStrategy Intelligence Server.
Result caches
Result caches are caches of executed reports or documents. A report cache is
a result set from an executed report that is stored on MicroStrategy
Intelligence Server. A document cache is the saved output of an executed
document that is stored on Intelligence Server.
Report caches can only be created or used for a project if the Enable report
server caching check box is selected in the Project Configuration Editor
under the Caching: Result Caches: Creation category.
Document caches can only be created or used for a project if the Enable
document output caching in selected formats check box is selected in the
Project Configuration Editor under the Caching: Result Caches: Creation
category, and one or more formats are selected.
By default, result caching is enabled at the project level. It can also be set on a
per-report and per-document basis. For example, you can disable caching at
the project level, and enable caching only for specific, frequently-used
reports. For more information, see Result cache settings at the
report/document level, page 233.
Caching does not apply to a drill report request because the report is
constructed on the fly.
When a user runs a report (or, from Web, a document), a job is submitted to
Intelligence Server for processing. If a cache for that request is not found on
the server, a query is submitted to the data warehouse for processing, and
then the results of the report are cached. The next time someone runs the
report or document, the results are returned immediately without having to
wait for the database to process the query.
If you are running Intelligence Server on HP-UX v2, and you notice a
slow response time when using the Cache Monitor, see Cache Monitor
and Intelligent Cube Monitor performance, page 633 for steps you
can take to improve performance.
You can easily check whether an individual report hit a cache by viewing the
report in SQL View. The image below shows the SQL View of a MicroStrategy
Tutorial report, Sales by Region. The fifth line of the SQL View of this report
shows “Cache Used: Yes.”
• The drive that holds the result caches should always have at least 10% of
its capacity available.
• If results are cached by user ID (see Create caches per user, page 227), it
may be better to disable caching and instead use the History List. For
information about the History List, see Saving report results: History
List, page 233.
• Be aware of the various ways in which you can tune the caching
properties to improve your system’s performance. For a list of these
properties, and an explanation of each, see Configuring result cache
settings, page 222.
Matching caches
Matching caches are the results of reports and documents that are retained
for reuse by subsequent identical requests. In general, Matching caches are
the type of result caches used most often by Intelligence Server.
History caches
History caches are report results saved for future reference in the History
List by a specific user. When a report is executed, an option is available to the
user to send the report to the History List. Selecting this option creates a
History cache to hold the results of that report and a message in the user’s
History List pointing to that History cache. The user can later reuse that
report result set by accessing the corresponding message in the History List.
It is possible for multiple History List messages, created by different users, to
refer to the same History cache.
The main difference between Matching and History caches is that a Matching
cache holds the results of a report or document and is accessed during
execution, while a History cache holds the data for a History List message
and is only accessed when that History List message is retrieved.
For more information about History Lists, see Saving report results:
History List, page 233.
Matching-History caches
XML caches
An XML cache is a report cache in XML format that is used for personalized
drill paths. It is created when a report is executed from Web, and is available
for reuse in Web. It is possible for an XML cache to be created at the same
time as its corresponding Matching cache. Although just a different format of
the Matching cache, the XML cache is maintained as a distinct cache and
thus counts as an independent unit towards the maximum number of caches.
It is automatically removed when the associated report or History cache is
removed.
To disable XML caching, select the Enable Web personalized drill paths
option in the Project definition: Drilling category in the Project
Configuration Editor. Note that this may adversely affect Web performance.
For more information about XML caching, see ACLs and personalized drill
paths in Web, page 61.
By default, result cache files are stored under the directory where
Intelligence Server is installed, in \Caches\Server Definition\Machine Name\.
Report caches are stored in this folder; document caches are stored in the
\RWDCache\ subfolder of this folder.
Report caches are stored on the disk in a binary file format. Each report
cache has two parts:
Intelligence Server creates two types of index files to identify and locate
report caches:
• CachePool.idx is an index file that contains the list of all Matching and
History caches and pointers to the caches’ locations.
Document caches are stored on the disk in a binary file format. Each
document cache has two parts:
• RWDPool.idx is an index file that contains the list of all Matching caches
and pointers to the caches’ locations.
If you are not using MicroStrategy OLAP Services, any modification to
a report, even a simple formatting change or an Access Control List
change, prevents the report's existing cache from being used.
Intelligence Server makes sure that users with different security filters
cannot access the same cache. Intelligence Server compares the Security ID
and Security Version ID of all the security filters applied to the user in the
request, including those inherited from the groups to which he or she
belongs, with the security profile of the user who originated the cache.
Intelligence Server makes sure a cache is not used if the user running the
report is using a different language than the user who created the cache.
Each different language creates a different cache.
You may find it necessary to add optional criteria (listed below) to the cache
matching process. These criteria are useful if database security views and
connection mapping are used, to ensure that users with different security
profiles, who see different data from the data warehouse, cannot access the
same cache.
• User ID: Global unique identifier (GUID) of the user, if the Create
caches per user check box is selected in the Caching: Result Caches
(Creation) category in the Project Configuration Editor.
• Database Login: GUID of the database login assigned to the user via
connection mapping, if the Create caches per database login check box
is selected in the Caching: Result Caches (Creation) category in the
Project Configuration Editor. This criterion is especially useful if Database
authentication mode is used. For more information, see Implementing
database warehouse authentication, page 103.
Report services document caches have additional criteria that must match
before a cache can be used:
• The Export Option (All or Current Page) and Locale of the document
must match the cache.
• The selector and group-by options used in the document must match
those used in the cache.
• The format of the document (PDF, Excel, HTML, or XML/Flash) must
match the format of the cache.
• In Excel, the document and cache must both be either enabled or disabled
for use in MicroStrategy Office.
• Reports are heavily prompted, and the answer selections to the prompts
are different each time the reports are run.
• Few users share the same security filters when accessing the reports.
If you disable result caching for a project, you can set exceptions by
enabling caching for specific reports or documents. For more
information, see Result cache settings at the report/document level,
page 233.
3 To disable all result caching, clear the Enable report server caching
check box.
1 In Desktop, log into a project source. You must log in as a user with the
Monitor Caches privilege.
3 Select the project for which you want to monitor the result caches and
click OK. The Report Cache Monitor or Document Cache Monitor opens.
The Intelligence Server logs are often useful when troubleshooting issues
with report caching in a MicroStrategy system. You can view these logs and
configure what information is logged using the Diagnostics and Performance
Logging Tool. For more information, see Configuring what is logged:
Diagnostics and Performance Logging tool, page 598.
4 In the Report Server component, in the Cache Trace dispatcher, click the
File Log (currently set to <None>) and select <New>. The Log
Destination Editor opens.
6 Click Save, and then click Close. The Log Destination Editor closes.
7 In the Report Server component, in the Cache Trace dispatcher, click the
File Log (currently set to <None>) and select cacheTrace. The creation
and deletion of report caches is now logged to this file.
Command Manager
You can also use Command Manager scripts to monitor result caches.
Typically, reports and documents that are frequently used best qualify for
scheduling. Reports and documents that are not frequently used do not
necessarily need to be scheduled because the resource cost associated with
creating a cache on a schedule might not be worth it. For more information
on scheduling a result cache update, see Scheduling reports and documents:
Subscriptions, page 361.
You may need to unload caches from memory to disk to create free memory
for other operations on the Intelligence Server machine.
If a report cache is unloaded to disk and a user requests that report, the
report is then loaded back into memory automatically. You can also
manually load a report cache from the disk into memory.
Caches are saved to disk based on the Backup frequency setting (see Backup
Frequency (minutes), page 223). Caches are always saved to disk regardless
of whether they are loaded or unloaded; unloading or loading a cache only
affects the cache's status in Intelligence Server memory.
• When there are changes in the data warehouse, the existing caches are no
longer valid because the data may be out of date. In this case, future
report/document requests should no longer hit the caches.
Caches need to be invalidated when new data is loaded from the data
warehouse, so that the outdated cache is not used to fulfill a request. You can
invalidate all caches that rely on a specific table in the data warehouse. For
example, you could invalidate all report/document caches that use the
Sales_Trans table in your data warehouse.
You can update the data warehouse load routine to invoke a MicroStrategy
Command Manager script to invalidate the appropriate caches. This script is
located at C:\Program
Files\MicroStrategy\Administrator\Command
Manager\Outlines\Cache_Outlines\Invalidate_Report_Cache_
Outline. For more information about Command Manager, see the
MicroStrategy System Administration Guide, volume 2.
To invoke Command Manager from the database server, use one of the
following commands:
• DB2: ! cmdmgr
• Teradata: os cmdmgr
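For example, a nightly warehouse load routine might finish by calling
Command Manager in command-line mode to run an invalidation script. The
following sketch is illustrative only: the project source name, login, and
file names are placeholders, and the exact statement syntax is given in the
Invalidate_Report_Cache_Outline file.

    cmdmgr -n "MyProjectSource" -u Administrator -p MyPassword
        -f invalidate_caches.scp -o invalidate_caches.log

where invalidate_caches.scp contains a statement such as:

    INVALIDATE REPORT CACHE "Sales Summary" IN PROJECT "MicroStrategy Tutorial";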
From the Cache Monitor, you can manually invalidate one or more caches.
1 In Desktop, log into a project source. You must log in as a user with the
Monitor Caches privilege.
3 Select the project for which you want to invalidate a cache and click OK.
The Report Cache Monitor or Document Cache Monitor opens.
Typically, you do not need to manually delete result caches if you are
invalidating caches and managing History List messages. Result caches are
automatically deleted by Intelligence Server if cache invalidation and History
Lists are performed and maintained properly, as follows:
In all cases, cache deletion occurs based on the Cache lookup cleanup
frequency setting. For more information about this setting, see Cache lookup
cleanup frequency (sec), page 224.
You can delete caches manually, via the Cache Monitor or Command
Manager, or on a schedule, via Administration Tasks Scheduling, in the same
way that you manually invalidate caches. For details, see Invalidating result
caches, page 218.
You can delete all the result caches in a project at once by selecting the Purge
Caches option in the Project Configuration Editor. This forces reports
executed after the purge to retrieve and display the latest data from the data
warehouse.
Purging deletes all result caches in a project, including caches that are
still referenced by the History List. Therefore, purge caches only when
you are sure that you no longer need to maintain any of the caches in
the project; otherwise, delete individual caches.
Even after purging caches, reports and documents may continue to display
cached data. This can occur because results may be cached at the object and
element levels, in addition to at the report/document level. To ensure that a
re-executed report or document displays the most recent data, purge all
three caches. For instructions on purging element and object caches, see
Deleting all element caches, page 260 and Deleting object caches, page 264.
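If you script this maintenance with Command Manager, you can purge all
three cache layers in one script. This is a hedged sketch: the project name
is a placeholder, and you should confirm the statement syntax against the
outline files in the Cache_Outlines folder for your version.

    PURGE REPORT CACHING IN PROJECT "MicroStrategy Tutorial";
    PURGE ELEMENT CACHING IN PROJECT "MicroStrategy Tutorial";
    PURGE OBJECT CACHING IN PROJECT "MicroStrategy Tutorial";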
• At the server level (see Result cache settings at the server level,
page 223)
• At the project level (see Result cache settings at the project level,
page 224)
• At the individual report/document level (see Result cache settings at the
report/document level, page 233)
You can configure the following caching settings in the Intelligence Server
Configuration Editor, in the Server Definition (Advanced) category. Each is
described below.
You can also configure these settings using the Command Manager script,
Alter_Server_Config_Outline.otl,
located at C:\Program
Files\MicroStrategy\Administrator\Command
Manager\Outlines\Cache_Outlines.
You can specify the cache backup frequency in the Backup frequency
(minutes) box under the Server definition: Advanced subcategory in the
Intelligence Server Configuration Editor.
If you specify a backup frequency of 0 (zero), result caches are saved to disk
as soon as they are created. If you specify a backup frequency of 10
(minutes), the result caches are backed up from memory to disk ten minutes
after they are created.
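If you prefer to script this setting, the Alter_Server_Config_Outline.otl
file mentioned above covers the server definition properties. The sketch
below is illustrative only; the property keyword shown is an assumption, so
check the outline file for the exact keyword used in your version.

    ALTER SERVER CONFIGURATION BACKUPFREQUENCY 10;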
This setting also defines when Intelligent Cubes are saved to secondary
storage, as described in Defining when Intelligent Cubes are automatically
saved to secondary storage, page 283.
The default value for this setting is 0, which means that the cleanup takes
place only at server shutdown. You may change this value to another based
on your needs, but make sure that it does not negatively affect your system
performance. MicroStrategy recommends cleaning the cache lookup at least
daily but not more frequently than every half hour.
You can configure the following caching settings in the Project Configuration
Editor, in the Result Caches category. Each is described below.
• Maximum RAM usage, page 229 (separate settings for report and
document caches)
You can also configure these settings using the Command Manager scripts
located at C:\Program
Files\MicroStrategy\Administrator\Command
Manager\Outlines\Cache_Outlines.
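As a hedged example, a project-level caching change scripted with Command
Manager might look like the following. The property keyword shown is an
assumption for illustration; confirm the exact syntax in the outline files
before using it.

    ALTER PROJECT CONFIGURATION ENABLEREPORTCACHING TRUE
        IN PROJECT "MicroStrategy Tutorial";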
Result caches can only be created or used for a project if the Enable report
server caching check box is selected in the Project Configuration Editor in
the Caching: Result Caches (Creation) category. If it is disabled, all the other
options in the Result Caches (Creation) and Result Caches (Maintenance)
categories are grayed out, except for Purge Now. By default, report server
caching is enabled. For more information on when report caching is used, see
Result caches, page 204.
Document caches can only be created or used for a project if the Enable
document output caching in selected formats check box is selected in the
Project Configuration Editor in the Caching: Result Caches (Creation)
category. Document caches are created for documents that are executed in
the selected output formats. You can select any or all of PDF, Excel, HTML,
and XML/Flash.
To disable this setting, clear its check box in the Project Configuration Editor
under the Caching: Result Caches (Creation) category.
If you Enable caching for prompted reports and documents (see above),
you can also Record prompt answers for cache monitoring. This causes
all prompt answers to be listed in the Cache Monitor when browsing the
result caches. You can then invalidate specific caches based on prompt
answers, either from the Cache Monitor or with a custom Command
Manager script.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches (Creation)
category.
This option is enabled by default. To disable it, clear its check box in the
Project Configuration Editor under the Caching: Result Caches (Creation)
category.
If you Enable XML caching for reports, reports executed from Web create
XML caches in addition to any Matching or History caches they may create.
For information about XML caches, see XML caches, page 209.
This option is enabled by default. To disable it, clear its check box in the
Project Configuration Editor under the Caching: Result Caches (Creation)
category.
If the Create caches per user setting is enabled, different users cannot
share the same result cache. Enable this setting only in situations where
security issues (such as database-level Security Views) require users to have
their own cache files. For more information, see Cache matching algorithm,
page 211.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches (Creation)
category.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches (Creation)
category.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches (Creation)
category.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches (Creation)
category.
The Cache file directory, in the Project Configuration Editor under the
Caching: Result Caches (Storage) category, specifies where all the
cache-related files are stored. The default location is relative to the
installation path of Intelligence Server:
• Local caching: Each node hosts its own cache file directory, which must
be shared as “ClusterCaches” so that other nodes can access it.
ClusterCaches is the share name that Intelligence Server looks for on
other nodes to retrieve caches.
• Centralized caching: All nodes have the cache file directory set to the
same network location, \\<machine name>\<shared directory
name>
On UNIX systems, you can use forward slashes instead of back slashes in the
directory name.
The Cache encryption level on disk drop-down list controls the strength of
the encryption on result caches. You can configure result caches to use either
simple encryption, or AES encryption with a 128-bit key. Encrypting caches
increases security, but may slow down the system.
By default the caches that are saved to disk are not encrypted. You can
change the encryption level in the Project Configuration Editor under the
Caching: Result Caches (Storage) category.
If the machine experiences problems because of high memory use, you may
want to reduce the Maximum RAM usage for the result caches. You need to
find a good balance between allowing sufficient memory for report caches
and freeing up memory for other uses on the machine. The default value is
50 megabytes for reports and datasets, and 256 megabytes for formatted
documents. The maximum value for each of these is 65536 megabytes, or 64
gigabytes.
MicroStrategy recommends that you initially set this value to 10% of the
system RAM if it is a dedicated Intelligence Server machine (no other
processes running on it). This setting depends on the following factors:
This setting should be at least as large as the largest report in the project
that you wish to be cached. If the amount of RAM available is not large
enough for the largest report cache, that cache will not be used and the
report will always execute against the warehouse. For example, if the
largest report you want to be cached in memory is 20 MB, the maximum
RAM usage needs to be at least 20 MB.
You should monitor the system’s performance when you change the
Maximum RAM usage setting. In general, it should not be more than 30%
of the machine’s total memory.
For more information about when report caches are moved in and out of
memory, see Location of result caches, page 209.
• The number of users and History List messages they will keep
If the MicroStrategy Intelligence Server memory that has been allocated for
caches becomes full, it must swap caches from memory to disk. The RAM
swap multiplier setting, in the Project Configuration Editor under the
Caching: Result Caches (Storage) category, controls how much memory is
swapped to disk, relative to the size of the cache being swapped into memory.
For example, if the RAM swap multiplier setting is 2 and the requested
cache is 80 Kbytes, 160 Kbytes are swapped from memory to disk.
If the cache memory is full and several concurrent reports are trying to swap
from disk, the swap attempts may fail and re-execute those reports. This
counteracts any gain in efficiency due to caching. In this case, increasing the
RAM swap multiplier setting provides additional free memory into which
those caches can be swapped.
For large projects, loading caches on startup can take a long time so you have
the option to set the loading of caches on demand only. However, if caches
are not loaded in advance, there will be a small additional delay in response
time when they are hit. Therefore, you need to decide which is best for your
set of user and system requirements.
The Never expire caches setting, in the Project Configuration Editor under
the Caching: Result Caches (Maintenance) category, causes caches to never
automatically expire. MicroStrategy recommends selecting this check box,
instead of using time-based result cache expiration. For more information,
see Expiring result caches, page 221.
All caches that have existed for longer than the Cache Duration (in hours)
are automatically expired. This duration is set to 24 hours by default. You
can change the duration in the Project Configuration Editor under the
Caching: Result Caches (Maintenance) category.
By default, caches for reports based on filters that use dynamic dates always
expire at midnight of the last day in the dynamic date filter. This behavior
occurs even if the Cache Duration (see above) is set to zero. For example, if
a report has a filter based on the dynamic date “Today,” its cache expires at
midnight, because after that the data for “Today” is out of date.
When you create a subscription, you can force the report or document to
re-execute against the warehouse even if a cache is present. You can also
prevent the subscription from creating a new cache.
To change the default behavior for new subscriptions, use the following
check boxes in the Project Configuration Editor, in the Caching: Subscription
Execution category.
• To cause new History List and Mobile subscriptions to execute against
the warehouse by default, select the Re-run History List and Mobile
subscriptions against the warehouse check box.
• To cause new email, file, and print subscriptions to execute against the
warehouse by default, select the Re-run file, email, and print
subscriptions against the warehouse check box.
• To prevent new subscriptions of all types from creating or updating
caches by default, select the Do not create or update matching caches
check box.
This setting allows you to enable or disable caching for a specific report or
document.
• To set the caching options for a document, in the Document Editor, from
the Format menu, select Document Properties. The Document
Properties dialog box opens. Select the Caching category.
To use the project-level setting for caching, select the Use default
project-level behavior option. This indicates that the caching settings
configured at the project level in the Project Configuration Editor apply to
this specific report or document as well.
• Keep shortcuts to previously run reports, like the Favorites list when
browsing the Internet.
The History List is displayed at the user level, but is maintained at the project
source level. The History List folder contains messages for all the projects in
which the user is working. The number of messages in this folder is
controlled by the setting Maximum number of messages per user. For
example, if you set this number at 40, and you have 10 messages for Project
A and 15 for Project B, you can have no more than 15 for Project C. When the
maximum number is reached, the oldest message in the current project is
purged automatically to leave room for the new one.
If the current project has no messages but the message limit has been
reached in other projects in the project source, the user may be unable
to run any reports in the current project. In this case the user must log
in to one of the other projects and delete messages from the History
List in that project.
The data contained in these History List messages is stored in the History
List repository, which can be located on Intelligence Server, or in the
database. For more information about the differences between these two
storage options, see Configuring History List data storage, page 238.
Each report that is sent to the History List creates a single History List
message. Each document creates a History List message for that document,
plus a message for each dataset report in the document.
You can send report results to the History List either manually or
automatically.
Report results can be manually sent to the History List any time you plan to
execute a report, during report execution, or even after a report is executed:
This operation creates two jobs, one for executing the report
(against the data warehouse) and another for sending the report to
History List. If caching is enabled, the second job remains in the
waiting list for the first job to finish; if caching is not enabled, the
second job runs against the data warehouse again. Therefore, to
avoid wasting resources, MicroStrategy recommends that if
caching is not enabled, users not send the report to History List in
the middle of a report execution.
From Web: While the report is being executed, click Add to History
List on the wait page.
This operation creates only one job because the first one is
modified for the Send to History List request.
From Web: After the report is executed, select Add to History List
from the Home menu.
Two jobs are created for Desktop, and only one is created for Web.
Sending a message to the History List automatically
Report results can be automatically sent to the History List. There are two
different ways to automatically send messages to the History list. You can
either have every report or document that you execute sent to your History
List, or you can subscribe to specific reports or documents:
From Web: Select History List from the Project Preferences, and
then select Automatically for Add reports and documents to my
History List.
From Web: On the reports page, under the name of the report that
you want to send to History List, select Subscriptions, and then click
Add History List subscription on the My Subscriptions page.
Choose a schedule for the report execution. A History List message is
generated automatically whenever the report is executed based on the
schedule.
From Desktop: Right-click a report or document and select
Schedule Delivery to and select History List. The History List
Subscription Editor opens. Define the subscription details. For
specific information about using the Subscription Editor, click Help.
The History List Monitor filter can be used either to filter which messages
are displayed in the History List, or to define the History List messages
that you want to purge from the History List. The History List Monitor filter
allows you to define various parameters to filter or purge your History List
messages.
To use the History List Monitor Filter to filter your History List messages,
right-click the History List folder, and select Filter. After you have specified
the filter parameters, click OK. The History List Monitor Filter closes, and
your History List messages are filtered accordingly.
To use the History List Monitor Filter to purge items from your History List
folder, right-click the History List folder and select Purge. The History List
Monitor Filter opens. After you have specified the filter parameters, click
Purge. The History List Monitor Filter closes, and the History List messages
that match the criteria defined in the History List Monitor Filter are deleted.
For more details about the History List Monitor Filter, click Help.
Multiple messages can point to the same History cache. In this case,
only after ALL these messages are deleted is the History cache
eliminated as well.
For more information about History caches, see Types of result caches,
page 208. For more information about storing History List data, see
Configuring History List data storage, page 238.
You can use the History List messages to retrieve report results, even when
report caching is disabled.
There are two different ways that the History List repository can be
configured to store data for the History List. It can either be stored in file on
the Intelligence Server machine, or it can be stored in your database.
The History List cached data can be stored in a file on the machine that hosts
Intelligence Server. The default location of this file is relative to the
installation path of Intelligence Server:
• Local caching: Each node hosts its own cache file directory, which must
be shared as “ClusterCaches” so that other nodes can access it.
• Centralized caching: All nodes have the cache file directory set to the
same network location: \\<machine name>\<shared directory
name>
For example:
\\My_File_Server\My_Inbox_Directory.
On UNIX systems, you can use forward slashes instead of back slashes in
the directory name. For example,
//My_File_Server/My_Inbox_Directory.
For steps to configure Intelligence Server to store cached History List data in
a file-based repository, see the procedure below.
4 Select File based, and type the file location in the History Directory
field.
Prerequisites
• The storage location for the History List data (the History List repository)
has been created in the database.
• A database instance has been created that points to the History List
repository in the database.
Once Intelligence Server has been configured to store the History List
cached data in the database, this setting will apply to the entire server
definition. If you want to revert back to a file-based repository, you
must change the server definition.
6 From the Database Instance menu, select the database instance that
points to the History List repository in the database.
To confirm that the History List repository has been configured correctly
11 On the left, expand History Settings and select General. If you have
configured Intelligence Server properly, the following message is displayed
in the Repository Type area of the Intelligence Server Configuration Editor:
In MicroStrategy Web, log in to the desired project and click the History List
link in the top navigation bar. This displays all History List messages for the
user who is currently logged in. The following information is available:
If you are working in a clustered environment, only Ready and Error
statuses are synchronized across nodes. While a job on one node is
reported as Executing, it is reported as Processing On Another Node
on all the other nodes.
• Message Creation Time: The time the message was created, in the
currently selected time zone.
Each time a user submits a report that contains a prompt, the dialog
requires the user to answer the prompt. As a result, multiple listings of
the same report may occur. The differences among these reports can
be found by checking the timestamp and the data contents.
• Folder name: Name of the folder where the original report is saved
• Last update time: The time when the original report was last updated
• Message text: The status message for the History List message
You can see more details of any message by right-clicking it and selecting
Quick View. This opens a new window with the following information:
• Report definition: Expand this category to see information about the
report definition, including the description, owner, time and date it was
last modified, the project it resides in, the report ID, the path to the
report’s location, and report details.
• Job execution statistics: Expand this category to see information about
the report execution, including the start and end time, the total number
of rows and columns in the report, the total number of rows and columns
that contain raw data, whether or not a cache was used, the job ID, and
the SQL produced.
• Message details: Expand this category to see the message creation
time, read status, format, request type, application, message ID, and
message text.
3 Clear the check box for The new report will overwrite older versions of
itself.
• Choose the project that contains the object that you want to archive.
Click Next.
• Browse to the report or document that you want to archive. You can
select multiple reports or documents by holding the Ctrl key while
clicking them.
• Click Next when all of the reports or documents that you want to
archive have been added.
5 Select a user group to receive the message for the archived report or
document:
• Browse to the user group that you want to send the archived report
to. You can select multiple user groups by holding the Ctrl key while
clicking them.
• Click Next when all of the user groups that you want to receive the
archived report or document have been added.
All members in the user group receive the History List message.
6 Specify the subscription properties. You can choose to do the following:
• Run the schedule immediately
7 Clear the The new report will overwrite older versions of itself check
box, and click Next.
8 Review the summary screen and click Finish. The Subscription Creation
Wizard closes.
Although you can set the number of History List messages retained in the
History List folder to a relatively high number (the maximum is 10,000),
keep in mind that if the list gets too big, you run the risk of wasting
resources. Therefore, MicroStrategy recommends that you educate users to
make efficient use of the History List feature by keeping only needed
messages and deleting unneeded ones in a timely fashion.
While users can do their part to maintain the size of the History List, an
administrator can control the size of the History List folders and thus control
resource usage through two settings: Message Lifetime and Deletion of
History List messages.
If you are using a database-based History List repository and you have the
proper permissions, you have access to the History List Messages Monitor.
This powerful tool allows you to view and manage History List messages for
all users. For more information, see Monitoring History List messages,
page 248.
Message lifetime controls how long (in days) messages can exist in a user’s
History List. This setting allows administrators to ensure that no History List
messages reside in the system indefinitely. Messages are tested against this
setting at user logout and deleted if found to be older than the established
lifetime.
When a message is deleted for this reason, any associated History caches are
also deleted. For more information about History caches, see History caches,
page 208.
The default value is -1, which means that messages can stay in the system
indefinitely until the user manually deletes them.
You can delete History List messages using the Schedule Administration
Tasks feature, which is accessed by selecting Scheduling from the
Administration menu. This allows you to periodically and selectively purge
History List messages of certain users and groups. You can choose to target
only certain messages, including:
• Messages for a certain project or for all projects
The Delete History List messages feature can also be used for one-time
maintenance by using a non-recurring schedule.
• Read
• Unread
• All
7 Click ... (browse) to select a user/group for which the History List
messages will be deleted.
The History List Messages Monitor allows you to view all History List
messages for all users, allows you to view detailed information about each
message, and also allows you to purge the messages based on certain
conditions.
To use the History List Messages Monitor, your History List repository must
be stored in a database. For more information about configuring the History
List repository, see Configuring Intelligence Server to use a database-based
History List repository, page 239.
You must have the Administer History List Monitor and the Monitor History
List privileges to be able to access the History List Messages Monitor.
To access the History List Messages Monitor, log in to a project source, and
expand the System Monitors folder. Click History List Messages. All
History List messages are displayed, as shown below:
• Filter or purge the messages displayed based on criteria that you define
by right-clicking a message and selecting Filter or Purge.
• Specify the details that you want to display for each message by
right-clicking the History List Messages Monitor and selecting View
Options.
For more information about using the History List Messages Monitor, refer
to the Desktop Help.
Element caches
When a user runs a prompted report containing an attribute element prompt
or a hierarchy prompt, an element request is created. (Additional ways to
create an element request are listed below.) An element request is actually a
SQL statement that is submitted to the data warehouse. Once the element
request is completed, the prompt can be resolved and sent back to the user.
Element caching, set by default, allows for this element to be stored in
memory so it can be retrieved rapidly for subsequent element requests
without triggering new SQL statements against the data warehouse.
For example, if ten users run a report with a prompt to select a region from a
list, when the first user runs the report, a SQL statement executes and
retrieves the region elements from the data warehouse to store in an element
cache. The next nine users see the list of elements return much faster than
the first user because the results are retrieved from the element cache in
memory. If element caching is not enabled, when the next nine users run the
report, nine additional SQL statements will be submitted to the data
warehouse, which puts unnecessary load on the data warehouse.
Element caches are the most-recently used lookup table elements that are
stored in memory on the Intelligence Server or MicroStrategy Desktop
machines so they can be retrieved more quickly. They are created when
users:
• Limiting the amount of memory available for element caches, page 257
When a Desktop user triggers an element request, the cache within the
Desktop machine’s memory is checked first. If it is not there, the Intelligence
Server memory is checked. If it is not there, the results are retrieved from the
data warehouse. Each option is successively slower than the previous one, for
example, the response time could be 1 second for Desktop, 2 seconds for
Intelligence Server, and 20 seconds for the data warehouse.
• Attribute ID
• Attribute version ID
• Element ID
• Search criteria
• Database connection (if the project is configured to check for the cache
key)
• Database login (if the project is configured to check for the cache key)
• Security filter (if the project and attributes are configured to use the cache
key)
In the Project Source Manager, select the Memory tab, and set the
Maximum RAM usage (KBytes) setting to 0 (zero).
You might want to perform this operation if you always want to use the
caches on Intelligence Server. This is because when element caches are
purged, only the ones on Intelligence Server are eliminated automatically
while the ones in Desktop remain intact. Caches are generally purged
because there are frequent changes in the data warehouse that make the
caches invalid.
1 In Desktop, right-click the attribute and select Edit. The Attribute Editor
opens.
2 On the Display tab, clear the Enable element caching check box.
The incremental retrieval limit is four times the incremental fetch size. For
example, if your MicroStrategy Web product is configured to retrieve 50
elements at a time, 200 elements along with the distinct count value are
placed in the element cache. The user must hit the next option four times to
introduce another SELECT pass, which will retrieve another 200 records in
this example. Because the SELECT COUNT DISTINCT value was cached, it
is not issued again when the next SELECT pass runs.
To optimize the incremental element caching feature (if you have large
element fetch limits or small element cache pool sizes), Intelligence Server
uses only 10% of the element cache on any single cache request. For example,
if 200 elements use 20% of the cache pool, Intelligence Server only caches
100 elements, which is 10% of the available memory for element caches.
The number of elements retrieved per element cache can be set for Desktop
users at the project level, MicroStrategy Web product users, a hierarchy, or
an attribute. Each is discussed below.
To limit the number of elements displayed for a project (affects only Desktop
users)
1 Open the Project Configuration Editor and select the Project definition:
Advanced category.
4 Type the limit for the Maximum number of attribute elements per
block setting in the Incremental Fetch subcategory.
1 Open the Hierarchy editor, right-click the attribute and select Element
Display from the shortcut menu, and then select Limit. The Limit dialog
box opens.
3 In the Element Display category, select the Limit option and type a
number in the box.
The element display limit set for hierarchies and attributes may
further limit the number of elements set in the project properties or
Web preferences. For example, if you set 1,000 for the project, 500 for
the attribute, and 100 for the hierarchy, Intelligence Server will only
retrieve 100 elements.
You may find the incremental element fetching feature’s additional SELECT
COUNT DISTINCT query to be costly on your data warehouse. In some cases,
this additional query adds minutes to the element browse time, making
performance unacceptable for production environments.
To make this more efficient, you can set a VLDB option to control how the
total rows are calculated. The default is to use the SELECT COUNT
DISTINCT. The other option is to have Intelligence Server loop through the
table after the initial SELECT pass, eventually getting to the end of the table
and determining the total number of records. You must decide whether to
have the database or Intelligence Server determine the number of element
records. MicroStrategy recommends that you use Intelligence Server if your
data warehouse is heavily used, or if the SELECT COUNT DISTINCT query
itself adds minutes to the element browsing time.
Either option uses significantly less memory than what is used without
incremental element fetching enabled. Using the count distinct option,
Intelligence Server retrieves four times the incremental element size. Using
the Intelligence Server option retrieves four times the incremental element
size, plus additional resources needed to loop through the table. Compare
this to returning the complete result table (which may be as large as 100,000
elements) and you will see that the memory use is much less.
Caching algorithm
The cache behaves as though it contains a collection of blocks of elements.
Each cached element is counted as one object and each cached block of
elements is also counted as an object. As a result, a block of four elements is
counted as five objects, one object for each element and a fifth object for the
block. However, if the same element occurs on several blocks it is only
counted once. This is because the element cache shares elements between
blocks.
The cache uses the "least recently used" algorithm on blocks of elements.
That is, when the cache is full, it discards the blocks of elements that have
been in the cache for the longest time without any requests for the blocks.
Individual elements, which are shared between blocks, are discarded when
all of the blocks that contain the elements have been discarded. Finding the
blocks to discard is a relatively expensive operation. Hence, the cache
discards one quarter of its contents each time it reaches the maximum
number of allowed objects.
You can configure the memory setting for both the project and the client
machine in the Cache: Element subcategory in the Project Configuration
Editor. You should consider these factors before configuring it:
• The number of attributes that users browse elements on, for example, in
element prompts, hierarchy prompts, and so on
• Time and cost associated with running element requests on the data
warehouse
For example, if the element request for cities runs quickly (say in 2
seconds), it may not have to exist in the element cache.
1 Open the Project Configuration Editor and select the Caching: Auxiliary
Caches (Elements) category.
• If you set it to -1, Intelligence Server uses the default value of 1 MB.
3 Specify the amount of RAM (in megabytes) in the Client: Maximum RAM
usage (MBytes) box.
The new settings take effect only after Intelligence Server is restarted.
To set the RAM available for element caches on MicroStrategy Desktop
1 In the Project Source Manager, click the Caching tab and within the
Element Cache group of controls, select the Use custom value option.
2 Specify the RAM (in megabytes) in the Maximum RAM usage (MBytes)
field.
This functionality can be enabled for a project and limits the element cache
sharing to only those users with the same security filter. This can also be set
for attributes. That is, if you do not limit attribute elements with security
filters for a project, you can enable it for certain attributes. For example, if
you have Item information in the data warehouse available to external
suppliers, you could limit the attributes in the Product hierarchy with a
security filter. This is done by editing each attribute. This way, suppliers can
see their products, but not other suppliers’ products. Element caches not
related to the Product hierarchy, such as Time and Geography, are still
shared among users.
You must update the schema before changes to this setting take
effect (from the Schema menu, select Update Schema).
2 Select the Create element caches per connection map check box.
The new setting takes effect only after the project is reloaded or
after Intelligence Server is restarted.
If neither of these properties is set, users use their connection maps to
connect to the database.
2 Select the Create element caches per passthrough login check box.
The new setting takes effect only after the project is reloaded or
after Intelligence Server is restarted.
If you are using a clustered Intelligence Server setup, to purge the
element cache for a project, you must purge the cache from each node
of the cluster individually.
Even after purging element caches, reports and documents may continue to
display cached data. This can occur because results may be cached at the
report/document and object levels, in addition to at the element level. To
ensure that a re-executed report or document displays the most recent data,
purge all three caches. For instructions on purging result and object caches,
see Purging all result caches in a project, page 221 and Deleting object
caches, page 264.
• Maximum number of elements to display: see Limiting the number of
elements displayed and cached at a time, page 253
• Attribute element number count method: see Optimizing element
requests, page 255
• Element cache - Max RAM usage (MBytes), Project: see Limiting the
amount of memory available for element caches, page 257
• Element cache - Max RAM usage (MBytes), Desktop: see Limiting the
amount of memory available for element caches, page 257
• Apply security filter to element browsing: see Limiting which attribute
elements a user can see, page 258
• Create caches per connection map: see Limiting element caches by
database connection, page 259
• Create caches per passthrough login: see Limiting element caches by
database login, page 260
• Purge element caches: see Deleting all element caches, page 260
Object caches
When you or any users browse an object definition (attribute, metric, and so
on), you create what is called an object cache. An object cache is a recently
used object definition stored in memory on MicroStrategy Desktop and
MicroStrategy Intelligence Server. You browse an object definition when you
open the editor for that object. You can create object caches for applications.
For example, when a user opens the Report Editor for a report, the collection
of attributes, metrics, and other user objects displayed in the Report Editor
compose the report’s definition. If no object cache for the report exists in
memory on MicroStrategy Desktop or MicroStrategy Intelligence Server, the
object request is sent to the metadata for processing.
The report object definition retrieved from the metadata and displayed to the
user in the Report Editor is deposited into an object cache in memory on
MicroStrategy Intelligence Server and also on the MicroStrategy Desktop of
the user who submitted the request. As with element caching, any time the
object definition can be returned from memory in either the Desktop or
Intelligence Server machine, it is faster than retrieving it from the metadata
database.
So when a Desktop user triggers an object request, the cache within the
Desktop machine’s memory is checked first. If it is not there, the Intelligence
Server memory is checked. If the cache is not there either, the results are
retrieved from the metadata database. Each option is successively slower
than the previous one. If a MicroStrategy Web product user triggers an object
request, only the Intelligence Server cache is checked before getting the
results from the metadata database.
• Limiting the amount of memory available for object caches, page 263
• Object ID
• Object version ID
• Project ID
For a project that has a large schema object, the project loading speed
suffers if the maximum memory for object cache setting is not large
enough. This issue is recorded in the DSSErrors.log file. See
MicroStrategy Tech Note TN5300-007-0051 for more information.
You maintain object caching by using the Server: Maximum RAM usage
(MBytes) setting in the Caching: Auxiliary Caches (Objects) subcategory in
the Project Configuration Editor. On the client machine, you maintain object
caching by using the Client: Maximum RAM usage (MBytes) setting in the
Caching: Auxiliary Caches (Objects) subcategory in the Project Configuration
Editor.
The default value for both of these settings is 102,400 KB (100 MB).
Intelligence Server estimates that each object consumes 5 KB of the cache
pool, and therefore, it caches 20,480 object caches in memory.
1 Open the Project Configuration Editor and select the Caching: Auxiliary
Caches (Objects) category.
2 Specify the RAM (in megabytes) in the Server: Maximum RAM usage
(MBytes) box.
3 Specify the RAM (in megabytes) in the Client: Maximum RAM usage
(MBytes) box.
The new settings take effect only after Intelligence Server is restarted.
To set the RAM available for object caches for a MicroStrategy Desktop
machine
1 In the Project Source Manager, click the Caching tab and in the Object
Cache group of controls, select the Use custom value option.
If you select the Use project default option, the amount of RAM is
the same as specified in the Client section in the Project
Configuration Editor described above.
2 Specify the RAM (in megabytes) in the Maximum RAM usage (MBytes)
box.
Even after purging object caches, reports and documents may continue to
display cached data. This can occur because results may be cached at the
report/document and element levels, in addition to at the object level. To
ensure that a re-executed report or document displays the most recent data,
purge all three caches. For instructions on purging result and element
caches, see Purging all result caches in a project, page 221 and Deleting all
element caches, page 260.
1 Open the Project Configuration Editor and select the Caching: Auxiliary
Caches (Objects) category.
Configuration objects are cached at the server level. You can choose to delete
these object caches as well.
You cannot automatically schedule the purging of server object caches from
within MicroStrategy Desktop. However, you can compose a Command
Manager script to purge server object caches and schedule that script to
execute at certain times. For a description of this process, see tech note
TN6600-75X-0267 in the MicroStrategy Knowledge Base. For more
information about Command Manager, see the MicroStrategy System
Administration Guide, volume 2.
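For example, an operating system scheduler (such as Windows Task
Scheduler or cron) could run Command Manager nightly against such a
script. The names below are placeholders, and the exact server-level
statement is described in the tech note referenced above; a project-level
statement is shown for illustration.

    cmdmgr -n "MyProjectSource" -u Administrator -p MyPassword
        -f purge_object_caches.scp -o purge_object_caches.log

where purge_object_caches.scp contains:

    PURGE OBJECT CACHING IN PROJECT "MicroStrategy Tutorial";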
Introduction
You can return data from your data warehouse and save it to Intelligence
Server memory, rather than directly displaying the results in a report. This
data can then be shared as a single in-memory copy, among many different
reports created by multiple users. The reports created from the shared sets of
data are executed against the in-memory copy, also known as an Intelligent
Cube, rather than having to be executed against a data warehouse.
Once an Intelligent Cube has been published, you can manage it from the
Intelligent Cube Monitor. You can view details about your Intelligent Cubes
such as last update time, hit count, memory size, and so on.
If you are running Intelligence Server on HP-UX v2, you may notice a
slow response time when using the Intelligent Cube Monitor. For
information about this delay, including steps you can take to improve
performance, see Cache Monitor and Intelligent Cube Monitor
performance, page 633.
• Last Update Time: The time when the Intelligent Cube was last updated
against the data warehouse.
• Last Update Job: The job number that most recently updated the
Intelligent Cube against the data warehouse. You can use the Job Monitor
to view information on a given job.
• Creation Time: The time when the Intelligent Cube was first published to
Intelligence Server.
• Size (KB): The size of the Intelligent Cube, in kilobytes.
• Hit Count: The number of times the Intelligent Cube has been used by
reports since it was last updated.
• Historic Hit Count: The total number of times the Intelligent Cube has
been used by reports. Unpublishing and republishing an Intelligent Cube
creates a new Intelligent Cube that resets this value to zero.
• Open View Count: The number of reports currently accessing the
Intelligent Cube.
• File Name: The file location where the Intelligent Cube is saved to the
machine's secondary storage.
• Cube Instance ID: The ID for the current published version of the
Intelligent Cube.
• Data Language: The language used for the Intelligent Cube. This is
helpful if the Intelligent Cube is used in an internationalized environment
that supports multiple languages.
You can also view Intelligent Cube information for a specific Intelligent
Cube, by double-clicking that Intelligent Cube in the Intelligent Cube
Monitor. This opens a Quick View of the Intelligent Cube information and
usage statistics.
The following actions are available from the Intelligent Cube Monitor. Each
action requires the Intelligent Cube to be in a certain status:
• Activate (required status: Filed, but not Active): Loads a previously
deactivated Intelligent Cube as an accessible set of data for multiple
reports.
• Update (required status: Active): Re-executes and publishes an Intelligent
Cube. When the data for an Intelligent Cube is modified and saved, the
Update action updates the Intelligent Cube with the latest data.
• Save to disk (required status: Loaded): Saves an Intelligent Cube to
secondary storage, and keeps the Intelligent Cube in Intelligence Server
memory. If you have defined the backup frequency as zero minutes,
Intelligent Cubes are automatically saved to secondary storage, as
described in Defining when Intelligent Cubes are automatically saved to
secondary storage, page 283.
• Load in memory (required status: Active, but not Loaded): Moves an
Intelligent Cube from your machine’s secondary storage to Intelligence
Server memory. For information on when to load and unload Intelligent
Cubes, see Loading and unloading Intelligent Cubes, page 272. Note: If
the memory limit is reached, this action unloads a previously loaded
Intelligent Cube from Intelligence Server memory.
• Unload from memory (required status: Loaded): Moves an Intelligent
Cube from Intelligence Server memory to your machine’s secondary
storage, such as a hard disk. For information on when to load and unload
Intelligent Cubes, see Loading and unloading Intelligent Cubes, page 272.
Additional statuses such as Processing and Load Pending are also used by
the Intelligent Cube Monitor. These statuses denote that certain tasks are
currently being completed.
Additionally, if you have defined the backup frequency as greater than zero
minutes (as described in Defining when Intelligent Cubes are automatically
saved to secondary storage, page 283), the following additional statuses can
be encountered:
In both scenarios listed above, the data and monitoring information saved in
secondary storage for an Intelligent Cube is updated based on the backup
frequency. You can also manually save an Intelligent Cube to secondary
storage using the Save to disk action listed in the table above, or by using the
steps described in Storing Intelligent Cubes in secondary storage, page 282.
Using the Intelligent Cube Monitor you can load an Intelligent Cube into
Intelligence Server memory, or unload it to secondary storage, such as a disk
drive.
By default, Intelligent Cubes are loaded when Intelligent Cubes are published
and when Intelligence Server starts. To change these behaviors, see:
The steps below show you how to define whether publishing Intelligent
Cubes loads them into Intelligence Server memory.
4 You can select or clear the Load Intelligent Cubes into Intelligence
Server memory upon publication check box:
• Select this check box to load Intelligent Cubes into Intelligence Server
memory when the Intelligent Cube is published. Intelligent Cubes
must be loaded into Intelligence Server memory to allow reports to
access and analyze their data.
• To conserve Intelligence Server memory, you can clear this check box
to define Intelligent Cubes to only be stored in secondary storage
upon being published. The Intelligent Cube can then be loaded into
Intelligence Server memory manually, using schedules, or whenever a
report attempts to access the Intelligent Cube.
5 Click OK to save your changes and close the Project Configuration Editor.
6 For any changes to take effect, you must restart Intelligence Server.
Intelligent Cube data can also be stored in secondary storage, such as a hard
disk, on the machine hosting Intelligence Server. These Intelligent Cubes can
be loaded into memory when they are needed. For more information, see
Loading and unloading Intelligent Cubes, page 272.
• The Maximum RAM usage (Mbytes) memory limit can be defined per
project. If you have multiple projects that are hosted from the same
Intelligence Server, each project can potentially store Intelligent Cube
data up to its memory limit.
For example, you have three projects and you set their Maximum RAM
usage (Mbytes) limits to 1 GB, 1 GB, and 2 GB. This means that 4 GB of
Intelligent Cube data could be stored in RAM on the Intelligence Server
machine if all projects reach their memory limits.
• The size of the Intelligent Cubes that are being published and loaded into
memory. The act of publishing an Intelligent Cube can require memory
resources in the area of two to four times greater than the size of an
Intelligent Cube. This can affect performance of your Intelligence Server
as well as the ability to publish the Intelligent Cube. For information on
how to plan for these memory requirements, see the next section.
• To help reduce Intelligent Cube memory size, review the best practices
described in Best practices for reducing Intelligent Cube memory size
below.
The list below describes various best practices to reduce the memory size of
your Intelligent Cubes:
• Attributes commonly use numeric values for their ID forms. Using
attributes defined in this way can save space as compared to attributes
that use character strings for their ID forms.
You can help keep the process of publishing Intelligent Cubes within RAM
alone by defining memory limits for Intelligent Cubes that reflect your
Intelligence Server host’s available RAM, and by scheduling the publishing of
Intelligent Cubes at a time when RAM usage is low. For information on
scheduling Intelligent Cube publishing, see the MicroStrategy OLAP
Services Guide.
To determine memory limits for Intelligent Cubes, you should review the
considerations listed in Determining memory limits for Intelligent Cubes,
page 275. You must also account for the potential peak in memory usage
when publishing an Intelligent Cube, which can be two to four times the size
of an Intelligent Cube.
Once the Intelligent Cube is published, only the 1 GB for the Intelligent Cube
(plus some space for indexing information) is used in RAM, and the
remaining 0.6 GB of RAM and 0.9 GB of swap space used during the
publishing of the Intelligent Cube is returned to the system, as shown in the
image below.
If Intelligence Server is hosted on an AIX machine, the system
resources required for Intelligent Cube publication may not be
returned to the system. However, these resources can be used for
additional Intelligence Server operations.
While the Intelligent Cube can be published successfully, using the swap
space could have an effect on the performance of the Intelligence Server
machine.
Once the Intelligent Cube is published, only the 0.5 GB for the Intelligent
Cube (plus some space for indexing information) is used in RAM, and the
remaining RAM used during the publishing of the Intelligent Cube is
returned to the system, as shown in the image below.
Be aware that as more Intelligent Cube data is stored in RAM, less RAM is
available to process publishing an Intelligent Cube. This along with the peak
memory usage of publishing an Intelligent Cube and the hardware resources
of your Intelligence Server host machine should all be considered when
defining memory limits for Intelligent Cube storage per project.
You can define limits for the amount of Intelligent Cube memory stored in
Intelligence Server at a given time in the two ways described below:
• You can use the amount of data required for all Intelligent Cubes to limit
the amount of Intelligent Cube data stored in Intelligence Server memory
at one time for a project. The default is 256 megabytes.
• You can use the number of Intelligent Cubes to limit the number of
Intelligent Cubes stored in Intelligence Server memory at one time for a
project. The default is 1000 Intelligent Cubes.
The total number of Intelligent Cubes for a project that are stored in
Intelligence Server memory is compared to the limit you define. If an
attempt to load an Intelligent Cube is made that will exceed the numerical
limit, an Intelligent Cube is removed from Intelligence Server memory
before the new Intelligent Cube is loaded into memory.
1 In Desktop, log in to a project that uses Intelligent Cubes. You must log in
using an account with the Administer Cubes privilege.
3 From the Categories list, expand Cubes, and then select General.
5 Click OK to save your changes and close the Project Configuration Editor.
Loading Intelligent Cubes when Intelligence Server starts
• Benefits: Report runtime performance for reports accessing Intelligent
Cubes is optimized, since the Intelligent Cube for the report has already
been loaded. This practice is a good option if Intelligent Cubes are
commonly used in a project.
• Drawbacks: The overhead experienced during Intelligence Server startup is
increased due to the processing of loading Intelligent Cubes. All
Intelligent Cubes for a project are loaded into Intelligence Server
memory, regardless of whether they are used by reports or not.

Loading Intelligent Cubes when a report is executed that accesses a
published Intelligent Cube
• Benefits: The overhead experienced during Intelligence Server startup is
decreased as compared to including loading Intelligent Cubes as part of
the startup tasks. If Intelligent Cubes are not required by any reports,
then they do not need to be loaded into Intelligence Server and no
overhead is experienced. This practice is a good option if Intelligent
Cubes are supported for a project, but some of the Intelligent Cubes are
rarely used in the project.
• Drawbacks: Report runtime performance for reports accessing Intelligent
Cubes can be negatively affected, as the Intelligent Cube must first be
loaded into Intelligence Server.

You can also load Intelligent Cubes manually or with subscriptions after
Intelligence Server is started.
3 From the Categories list, expand Cubes, and then select General.
4 Select or clear the Load cubes on startup check box to enable or disable
loading Intelligent Cubes when Intelligence Server starts.
5 Click OK to save your changes and close the Project Configuration Editor.
Before you save Intelligent Cubes to secondary storage, use the following
steps to define where Intelligent Cubes are saved.
3 From the Categories list, expand Cubes, and then select General.
4 In the Cube file directory area, click the ... button. The Browse for
Folder dialog box opens.
5 Browse to the folder location to store Intelligent Cubes, and then click
OK. You are returned to the Project Configuration Editor.
6 Click OK to save your changes and close the Project Configuration Editor.
4 In the Backup frequency (minutes) field, type the interval (in minutes)
between when Intelligent Cubes are automatically saved to secondary
storage.
Be aware that this option also controls the frequency at which cache and
History List messages are backed up to disk, as described in Backup
Frequency (minutes), page 223.
3 From the Categories list, expand Cubes, and then select General.
If you do not use connection mapping, leave this check box cleared.
Introduction
• Determines the set of data warehouse tables to be used, and therefore the
set of data available to be analyzed.
• Contains all schema objects used to interpret the data in those tables.
Schema objects include objects such as facts, attributes, and hierarchies.
• Contains all application objects used to create reports and analyze the
data. Application objects include objects such as reports, metrics, and
filters.
• Defines the security scheme for the user community that accesses these
objects. Security objects include objects such as security roles, privileges,
and access control lists.
This scenario is shown in the diagram below in which objects iterate between
the development and test projects until they are ready for general users.
Once ready, they are promoted to the production project.
Once the objects’ definitions have stabilized, you move them to a test project
that a wider set of people can use for testing. You may have people run
through scripts or typical usage scenarios that users at your organization
commonly perform. The testers look for accuracy (are the numbers in the
reports correct?), stability (did the objects work? do their dependent objects
work?), and performance (did the objects work efficiently, not producing
overload on the data warehouse?).
After the objects have been tested and shown to be ready for use in a system
accessible to all users, you copy them into the production project. This is the
project used by most of the people in your company. It provides up-to-date
reports and tracks various business objectives.
To set up the development, test, and production projects so that they all have
related schemas, you need to first create the development project. For
instructions on how to create a project, see the MicroStrategy Project Design
Guide. Once the development project has been created, you can duplicate it
to create the test and production projects using the Project Duplication
Wizard. For detailed information about the Project Duplication Wizard, see
Duplicating a project, page 294.
Once the projects have been created, you can migrate specific objects
between them via Object Manager. For example, after a new metric has been
created in the development project, you can copy it to the test project. For
detailed information about Object Manager, see Copying objects between
projects, page 304.
You can also merge two related projects with the Project Merge Wizard. This
is useful when you have a large number of objects to copy. The Project Merge
Wizard copies all the objects in a given project to another project. For an
example of a situation in which you would want to use the Project Merge
Wizard, see Real-life scenario: New version from a project developer,
page 291. For detailed information about Project Merge, see Merging
projects to synchronize objects, page 336.
To help you decide whether you should use Object Manager or Project Merge,
see Comparing Project Merge to Object Manager, page 302.
The Project Comparison Wizard can help you determine what objects in a
project have changed since your last update. You can also save the results of
search objects and use those searches to track the changes in your projects.
For detailed information about the Project Comparison Wizard, see
Comparing and tracking projects, page 344. For instructions on how to use
search objects to track changes in a project, see Tracking your projects with
the Search Export feature, page 346.
Integrity Manager helps you ensure that your changes have not caused any
problems with your reports. Integrity Manager executes some or all of the
reports in a project, and can compare them against another project or a
previously established baseline. For detailed information about Integrity
Manager, see the MicroStrategy System Administration Guide, Volume 2.
This combination of the two projects creates Project version 2.1, as shown in
the diagram below.
[Diagram: the vendor's Project versions 1 and 2, and the customer's
customized versions 1.1 and 1.2, combine into the merged project,
version 2.1.]
The vendor’s new Version 2 project has new objects that are not in yours,
which you feel confident in moving over. But some of the objects in the
Version 2 project may conflict with objects that you had customized in the
Version 1.2 project. How do you determine which of the Version 2 objects
you want to move into your system, or which of your Version 1.2 objects to
modify?
You could perform this merge object-by-object and migrate them manually
using Object Manager, but this will be time-consuming if the project is large.
It may be more efficient to use the Project Merge tool. With this tool, you can
define rules for merging projects that help you identify conflicting objects
and handle them a certain way. Project Merge then applies those rules while
merging the projects. For more information about using the MicroStrategy
Project Merge tool, see Merging projects to synchronize objects, page 336.
workflow you are likely to see at your organization. However, you should be
able to apply the basic principles to your specific situation.
Duplicating a project
Duplicating a project is an important part of the application life cycle. If you
want to copy objects between two projects, MicroStrategy recommends that
the projects have related schemas. This means that one must have originally
been a duplicate of the other, or both must have been duplicates of a third
project.
Project duplication is done using the Project Duplication wizard. For detailed
information about the duplication process, including step-by-step
instructions, see The Project Duplication Wizard, page 297.
If you are copying a project to another project source, you have the option to
duplicate configuration objects as well. Specifically:
• You can choose whether to duplicate all configuration objects, or only the
objects used by the project.
• You can choose to duplicate all users and groups, only the users and
groups used by the project, no users and groups, or a custom selection of
users and groups.
inconsistencies and ensures that the language display is consistent across the
interface.
To duplicate a project, you must have the Bypass All Object Security
Access Checks privilege for that project. In addition, you must have
the Create Schema Objects privilege for the target project source.
1 From the MicroStrategy Object Manager select the Project menu (or
from MicroStrategy Desktop select the Schema menu), then select
Duplicate Project. The Project Duplication Wizard opens.
2 Specify the project source and project information that you are copying
from (the source).
3 Specify the project source and project information that you are copying to
(the destination).
6 Specify whether you wish to see the event messages as they happen and, if
so, what types. Also specify whether to create log files and, if so, what
types of events to log, and where to locate the log files. By default Project
Duplicator shows you error messages as they occur, and logs most events
to a text file. This log file is created by default in
C:\Program Files\Common Files\MicroStrategy\.
At the end of the Project Duplication Wizard, you are given the option of
saving your settings in an XML file. You can load the settings from this file
later to speed up the project duplication process. The settings can be loaded
at the beginning of the Project Duplication Wizard.
You can also use the settings file to run the wizard in command-line mode.
The Project Duplication Wizard command line interface enables you to
duplicate a project without having to load the graphical interface, or to
schedule a duplication to run at a specific time. For example, you may want
to run the project duplication in the evening, when the load on Intelligence
Server is lessened. You can create an XML settings file, and then use the
Windows AT command or the Unix scheduler to schedule the duplication to
take place at night.
After saving the settings from the Project Duplication Wizard, invoke the
Project Duplication Wizard executable ProjectDuplicate.exe. By
default this executable is located in C:\Program Files\Common Files\MicroStrategy.
where:
• -md indicates that the metadata of the destination project source will be
updated if it is older than the source project source’s metadata.
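For illustration, a sketch of a direct invocation and of a scheduled run
follows. Only the -md switch is documented here; the convention of passing
the wizard-generated settings file as the argument, the file paths, and the
AT syntax details are assumptions to verify against the online help for
your platform:

ProjectDuplicate.exe C:\settings\duplicate_settings.xml -md

at 23:00 /every:Sunday "C:\Program Files\Common Files\MicroStrategy\ProjectDuplicate.exe C:\settings\duplicate_settings.xml -md"

On Unix, an equivalent cron entry can run the same command overnight.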
For example, a business analyst has an idea for a new business intelligence
application using MicroStrategy. The analyst needs to create a
proof-of-concept project to show her manager. The project will eventually be
used in the development and production environment, but the system
administrator might decide that it is not ideal to create the demo project in
the production database. Instead the analyst puts the project together on her
laptop, using a local Microsoft Access database. Once she demonstrates the
project and receives approval for it, the administrator can use the Project
Mover Wizard to move the project from the laptop’s Access database into the
development environment’s database platform.
To migrate a project to a new database platform, you must have the
Bypass All Object Security Access Checks privilege for that project.
2 Select the warehouse and metadata databases that contain the source
project, and then select the source project.
3 Select any SQL scripts you want to run on the data warehouse, either
before or after project migration.
6 Review your choices and click Finish on the Summary page of the Project
Mover Wizard. The wizard migrates your project to the new database.
To create a response file, from the first page of the Project Mover Wizard
click Advanced. On the Advanced Options page, select Generate a
response file and enter the name and location of the new response file in the
text field.
To execute a response file from the Project Mover Wizard, from the first page
of the wizard click Advanced. Then select the Use Response File option
and load the response file. The Wizard opens the Summary page, which lists
all the options set by the response file. After reviewing these options, click
Finish. The Project Mover Wizard begins moving the project.
To execute a response file from the command line, you need to invoke the
Project Mover executable, demomover.exe. By default, this executable is
located in C:\Program Files\Common Files\MicroStrategy.
MicroStrategy has the following tools available for updating the objects in a
project:
• Object Manager can move just a few objects, or just the objects in a few
folders. Project Merge moves all the objects in a project.
• The Project Merge Wizard allows you to store merge settings and rules in
an XML file. These rules define what is copied and how conflicts are
resolved. Once they are in the XML file, you can load the rules and
“replay” them with Project Merge. This can be useful if you need to
perform the same merge on a recurring schedule. For example, if a
project developer sends you a new project version quarterly, Project
Merge can make this process easier.
Locking projects
When you open a project in Project Merge, you automatically place a
metadata lock on the project. You also place a metadata lock on the project if
you open it in read/write mode in Object Manager, or if you create or import
an update package from the command line. For more information about
read/write mode versus read-only mode in Object Manager, see Project
locking with Object Manager, page 305.
If you lock a project by opening it in Object Manager, you can unlock the
project by right-clicking the project in Object Manager, and choosing
Disconnect from Project Source.
Only the user who locked a project, or another user with the Bypass
All Security Access Checks and Create Configuration Objects
privileges, can unlock a project.
Object Manager and Project Merge both copy multiple objects between
projects. Use Object Manager when you have only a few objects that need to
be copied. For the differences between Object Manager and Project Merge,
see Comparing Project Merge to Object Manager, page 302.
If you need to allow other users to change objects in projects while the
projects are opened in Object Manager, you can configure Object Manager to
connect to projects in read-only mode. You can also allow changes to
configuration objects by connecting to project sources in read-only mode.
• You cannot copy objects into a read-only project or project source. If you
connect to a project in read-only mode, you can still move, copy, and
delete objects in a project, but you cannot copy objects from another
project into that project.
6 Click OK. The Object Manager Preferences dialog box closes and your
preferences are saved.
Copying objects
Object Manager can copy application, schema, and configuration objects.
If you use Object Manager to copy a user or user group between project
sources, the user or group reverts to default inherited access for all
projects in the project source. To copy a user or group's security
information for a project, you must copy the user or group in a
configuration update package. For information about update packages, see
About update packages, page 324.
For background information on these objects, including how they are created
and what roles they perform in a project, see the MicroStrategy Project
Design Guide.
• When copying MDX cubes between projects, make sure that the conflict
resolution action for the cubes, cube attributes, and reports that use the
cubes is set to Replace.
• If you need to copy objects from multiple folders at once, you can create a
new folder, and create shortcuts in the folder to all the objects you want to
copy. Then copy that folder. Object Manager copies the folder, its
contents (the shortcuts), and their dependencies (the target objects of
those shortcuts) to the new project.
• If you are using update packages to update the objects in your projects,
use the Export option to create a list of all the objects in each update
package.
2 In the list of project sources, select the check box for the project source
you want to access. You can select more than one project source.
3 Click Open. You are prompted to log in to each project source that you
have selected.
4 When you have logged into each project source, the MicroStrategy Object
Manager window opens.
5 In the Folder List, expand the project that contains the object you want to
copy, then navigate to the object.
7 Expand the destination project in which you want to paste the object, and
then select the folder in which you want to paste the object.
For information about additional objects that may be copied with a given
object, see Child dependencies, page 310.
If you are copying objects between two different project sources, two
windows are open within the main Object Manager window. In this case,
instead of right-clicking and selecting Copy and Paste, you can drag and
drop objects between the projects.
9 If you copied any schema objects, you must update the destination
project’s schema. Select the destination project, and from the Project
menu, select Update Schema.
10 In the Folder Lists for both the source and destination projects, expand
the Administration folder, then select the appropriate manager for the
type of configuration object you want to copy (Database Instance
Manager, Schedule Manager, or User Manager).
11 From the list of objects displayed on the right-hand side in the source
project source, drag the desired object into the destination project source
and drop it.
To display the list of users on the right-hand side, expand User Manager,
then on the left-hand side select a group.
If the object you are copying does exist in the destination project, a conflict
occurs and Object Manager opens the Conflict Resolution dialog box. For
information about how to resolve conflicts, see Resolving conflicts when
copying objects, page 316.
When an object uses another object in its definition, the objects are said to
depend on one another. Object Manager recognizes two types of object
dependencies: child dependencies and parent dependencies.
When you migrate an object to another project, any objects used by that
object in its definition (its child dependencies) are also migrated.
Child dependencies
A child dependency occurs when an object uses other objects in its definition.
For example, in the MicroStrategy Tutorial project, the metric named
Revenue uses the base formula named Revenue in its definition. The
Revenue metric is said to have a child dependency on the Revenue base
formula.
1 After you have opened a project source and a project using Object
Manager, in the Folder List select the object.
2 From the Tools menu, select Object child dependencies. The Child
dependencies dialog box opens and displays a list of objects that the
selected object uses in its definition. The image below shows the child
dependencies of the Revenue metric in the MicroStrategy Tutorial
project: in this case, the child dependency is the Revenue base formula.
3 In the Child dependencies dialog box, you can do any of the following:
• View child dependencies for any object in the list by selecting the
object and clicking the Object child dependencies toolbar icon.
• Open the Parent dependencies dialog box for any object in the list by
selecting the object and clicking the Object parent dependencies
icon on the toolbar. For information about parent dependencies, see
Parent dependencies, page 311.
• View the properties of any object, such as its ID, version number, and
access control lists, by selecting the object and from the File menu
choosing Properties.
Parent dependencies
A parent dependency occurs when other objects use an object in their
definitions. For example, the Revenue metric has parent dependencies of
many reports and even other metrics. The Revenue metric is said to be a
child of these other objects.
1 After you have opened a project source and a project using Object
Manager, from the Folder List select the object.
2 From the Tools menu, choose Object parent dependencies. The Parent
dependencies dialog box opens and displays a list of objects that depend
on the selected object for part of their definition. The image below shows
some of the parent dependencies for the Revenue metric in the
MicroStrategy Tutorial project.
3 In the Parent dependencies dialog box, you can do any of the following:
• View parent dependencies for any object in the list by selecting the
object and clicking the Object parent dependencies icon on the
toolbar.
• Open the Child dependencies dialog box for any object in the list by
selecting the object and clicking the Object child dependencies icon
on the toolbar. For information about child dependencies, see Child
dependencies, page 310.
• View the properties of any object, such as its ID, version number, and
access control lists, by selecting the object and from the File menu
choosing Properties.
When you copy an object using Object Manager, it checks for any child
dependents of that object and copies them as well. These dependent objects
are copied to the same path as in the source project. If this path does not
already exist in the destination project, Object Manager creates the path.
For example, a user copies a report from the source project to the destination
project. In the source project, all dependents of the report are stored in the
Public Objects\Report Dependents folder. Object Manager looks in
the destination project’s Public Objects folder for a subfolder named Report
Dependents (the same path as in the source project). If the folder exists, the
dependent objects are saved in that folder. If the destination project does
not have a folder named Report Dependents in Public Objects, Object Manager
creates it and saves all dependent objects there.
When you create an update package, click Add All Dependencies to make
sure all child dependencies are included in the package. If the
dependent objects for a specific object do not exist in either the
destination project source or in the update package, the update
package cannot be applied. If you choose not to add dependent objects
to the package, make sure that all dependent objects are included in
the destination project source.
Object dependencies
Some objects have dependencies that are not immediately obvious. These are
listed below:
• Folders have a child dependency on each object in the folder. If you copy
a folder using Object Manager, all the objects in that folder are also
copied.
A folder that is copied as part of an update package does not have a
child dependency on its contents.
• Security filters, users, and user groups have a child dependency on the
user groups they belong to. If you copy a security filter, user, or user
group, the groups that it belongs to are also copied.
Attributes used in fact entry levels are not dependents of the fact.
Excluding dependent attributes or tables from object migration
1 From the Tools menu, select Object Manager Preferences. The Object
Manager Preferences dialog box opens.
3 Select the check boxes for the objects you want to exclude from Object
Manager’s dependency checking.
4 Click OK. The Object Manager Preferences dialog box closes and your
preferences are saved.
1 From the Tools menu, select Object Manager Preferences. The Object
Manager Preferences dialog box opens.
4 Click OK. The Object Manager Preferences dialog box closes and your
preferences are saved.
The ability to retain the name, description, and long description is important
in internationalized environments. When replacing the objects to resolve
conflicts, retaining these properties of the objects in the destination project
facilitates support of internationalized environments. For example, if the
destination project contains objects with French names but the source
project has been developed in English (including English names), you can
retain the French names and descriptions for objects in the destination
project. Alternately, you can update the project with the English names and
not change the object itself.
1 From the Tools menu, select Object Manager Preferences. The Object
Manager Preferences dialog box opens.
4 From the Format drop-down list, select whether copied objects use the
locale settings from Desktop or from the machine’s regional settings.
6 To resolve translations with a different action than that specified for the
object associated with the translation, select the Enable advanced
conflict resolution check box.
• To always use the translations in the destination project, select Keep
Existing.
• To always use the translations in the source project, select Replace.
7 Click OK. The Object Manager Preferences dialog box closes and your
preferences are saved.
When copying objects across projects with Object Manager, if an object with
the same ID as the source object exists anywhere in the destination project, a
conflict occurs and the Conflict Resolution dialog box (shown below) opens.
It prompts you to resolve the conflict.
Exists identically: The object ID, object version, and path are the same
in the source and destination projects.

Exists differently: The object ID is the same in the source and destination
projects, but the object versions are different. The path may be the same
or different.

Exists identically except for path: The object ID and object version are
the same in the source and destination projects, but the paths are
different. This occurs when one of the objects exists in a different
folder.
Note: If your language preferences for the source and destination projects
are different, objects that are identical between the projects may be
reported as Exists identically except for path. This occurs because, when
different languages are used for the path names, Object Manager treats
them as different paths. To resolve this, set your language preferences
for the projects to the same language. For more information on language
preferences, including instructions, see Configuring metadata object and
report data language preferences, page 430.
If you resolve the conflict with the Replace action, the destination
object is updated to reflect the path of the source object.

Exists identically except for Distribution Services objects (User only):
The object ID and object version of the user are the same in the source
and destination projects, but at least one associated Distribution
Services contact or contact group is different. This may occur if you
modified a contact or contact group linked to this user in the source
project.
If you resolve the conflict with the Replace action, the destination user
is updated to reflect the contacts and contact groups of the source user.

Does not exist: The object exists in the source project but not in the
destination project.
Note: If you clear the Show new objects that exist only in the source
check box in the Migration category of the Object Manager Preferences
dialog box, objects that do not exist in the destination project are
copied automatically with no need for conflict resolution.
If a conflict occurs, you must determine what action Object Manager should
take. The different actions are explained below.

Use existing: No change is made to the destination object. The source
object is not copied.

Keep both: No change is made to the destination object. The source object
is duplicated in the destination location.

Use newer: If the source object's modification time is more recent than
the destination object's, the Replace action is used. Otherwise, the Use
existing action is used.

Use older: If the source object's modification time is more recent than
the destination object's, the Use existing action is used. Otherwise, the
Replace action is used.

Merge (user/group only): The privileges, security roles, groups, and
Distribution Services addresses and contacts of the source user or group
are added to those of the destination user or group.

Do not move (table only): The selected table is not created in the
destination project. This option is only available if the Allow to
override table creation for non-lookup tables that exist only at source
project check box in the Migration category of the Object Manager
Preferences dialog box is selected.
If you choose to replace a schema object, the following message may appear:
This message also appears if you choose to replace an application object that
depends on an attribute, and you have made changes to that attribute by
modifying its form properties at the report level or its column definition
through another attribute. For information about modifying the properties of
an attribute, see the MicroStrategy Project Design Guide.
To update the project schema, from the Object Manager Project menu, select
Update Schema. For details about updating the project schema, see the
Optimizing and Maintaining your Project chapter in the MicroStrategy
Project Design Guide.
To resolve a conflict
1 Select the object or objects that you want to resolve the conflict for. You
can select multiple objects by holding down SHIFT or CTRL when
selecting.
2 Choose an option from the Action drop-down list (see table above). This
option is set for all selected objects.
You can determine the default actions that display in the Conflict Resolution
dialog box when a conflict occurs. This includes setting the default actions
for the following object categories and types:
• Application objects
• Schema objects
• Configuration objects
• Folders
You can set a different default action for objects specifically selected by the
user, and for objects that are included because they are dependents of
selected objects. For example, you can set selected application objects to
default to Use newer to ensure that you always have the most recent version
of any metrics and reports. You can set dependent schema objects to default
to Replace to use the source project’s version of attributes, facts, and
hierarchies.
These selections are only the default actions. You can always change the
conflict resolution action for a given object when you copy that object.
1 From the Tools menu, select Object Manager Preferences. The Object
Manager Preferences dialog box opens.
3 Make any changes to the default actions for each category of objects.
• For an explanation of the differences between application,
configuration, and schema objects, see Copying objects, page 306
• For an explanation of each object action, see Choosing an action to
resolve a conflict, page 318.
4 Click OK. The Object Manager Preferences dialog box closes and your
preferences are saved.
When you update or add an object in the destination project, by default the
object keeps its access control list (ACL) from the source project. You can
change this behavior in two ways:
• If you resolve a conflict with the Replace action, and the access control
lists (ACL) of the objects are different between the two projects, you can
choose whether to keep the existing ACL in the destination project or
replace it with the ACL from the source project.
• If you add a new object to the destination project with the Create New or
Keep Both action, you can choose to have the object inherit its ACL from
the destination folder instead of keeping its own ACL. This is helpful
when copying an object into a user’s profile folder, so that the user can
have full control over the object.
The Use Older or Use Newer actions always keep the ACL of
whichever object (source or destination) is used.
1 From the Tools menu, select Object Manager Preferences. The Object
Manager Preferences dialog box opens.
3 Under ACL option on replacing objects, select how to handle the ACL
for conflicts resolved with the Replace action:
• To use the ACL of the source object, select Keep existing ACL when
replacing objects.
• To use the ACL of the replaced destination object, select Replace
existing ACL when replacing objects.
4 Under ACL option on new objects, select how to handle the ACL for
new objects added to the destination project:
• To use the ACL of the source object, select Keep ACL as in the
source objects.
• To inherit the ACL from the destination folder, select Inherit ACL
from the destination folder.
5 Click OK. The Object Manager Preferences dialog box closes and your
preferences are saved.
You can choose not to create a dependent table in the destination project by
changing the Action for the table from Create New to Ignore. You can also
choose not to migrate any dependent tables by specifying that they not be
included in Object Manager’s dependency search. For detailed information,
including instructions, see Migrating dependent objects, page 313.
The following lists explain how the attribute-table or fact-table
relationship is handled, based on the existing objects and tables and the
conflict resolution action you select.

Replace: The object has the same references to the table as it does in the
source project.
Keep Both: No change is made to the destination object. The source object
is duplicated in the destination project. The duplicated object will have
the same references to the table as it does in the source project.

Replace: The object has the same references to the table as it does in the
source project.
Keep Both: No change is made to the destination object. The source object
is duplicated in the destination project. The duplicated object will have
the same references to the table as it does in the source project.

Use Existing: The object has the same references to the table as it did
before the action.
Keep Both: No change is made to the destination object. The source object
is duplicated in the destination project. The duplicated object will not
reference the table.
For example, you have several developers who are each responsible for a
subset of the objects in the development project. The developers can submit
update packages, with a list of the objects in the packages, to the project
administrator. The administrator can then import those packages into the
test project.
If your update package includes any schema objects, you may need to
update the project schema after importing the package. For more
information about updating the schema after importing an update
package, see Update packages and updating the project schema,
page 335.
To update your users and groups with the project access information for each
project, you must create a project security update package for each project.
You create these packages at the same time that you create the configuration
update package, by selecting the Create project security packages check
box and specifying which projects you want to create a project security
update package for. For detailed instructions on creating a configuration
update package and project security update packages, see Creating a
configuration update package, page 328.
You can also create update packages from the command line, using
rules specified in an XML file. For more information and instructions,
see Creating an update package from the command line, page 330.
2 From the Tools menu, select Create Package. The Create Package
dialog box opens.
You can also open this dialog box from the Conflict Resolution
dialog box by clicking Create Package. In this case, all objects in
the Conflict Resolution dialog box, and all dependents of those
objects, are automatically included in the package.
4 To add the dependents of all objects to the package, click Add all used
dependencies. All dependent objects of all objects currently listed in the
package are added to the package.
If the dependent objects for a specific object do not exist in either
the destination project source or in the update package, the update
package cannot be applied. If you choose not to add dependent
objects to the package, make sure that all dependent objects are
included in the destination project source.
5 To add the dependents of specific objects, select those objects and click
Add used dependencies. All dependent objects of those objects are
added to the package.
7 Select the schema update options for this package. For more details on
these options, see Update packages and updating the project schema,
page 335.
8 Select the ACL options for objects in this package. For more details on
these options, see Conflict resolution and access control lists, page 321.
9 Enter the name and location of the package file in the Save As field. The
default file extension for update packages is .mmp.
You can set the default location in the Object Manager Preferences
dialog box, in the Object Manager: Browsing category.
11 When you have added all objects to the package, click Proceed. The
package is created in the specified location.
If you choose to include users or groups in a configuration update
package, project access information (such as privileges, security roles,
and security filters) is not included in the configuration package. To
migrate project access information about the users or groups, you
must create a project security update package for each project at the
same time you create the configuration update package. For more
information about project security packages, see Updating project
access information for users and groups, page 324.
3 From the Tools menu, select Create Configuration Package. The Create
Package dialog box opens.
5 Search for the objects you want to add to the package. For instructions on
performing a search, see the online help.
6 When the objects are loaded in the search area, click and drag them to the
Create Package dialog box.
7 When you have added all the desired objects to the package, close the
Configuration - Search for Objects dialog box.
8 To add the dependents of all objects to the package, click Add all used
dependencies. All dependent objects of all objects currently listed in the
package are added to the package.
If the dependent objects for a specific object do not exist in either
the destination project source or in the update package, the update
package cannot be applied.
9 To add the dependents of specific objects, select those objects and click
Add used dependencies. All dependent objects of those objects are
added to the package.
11 In the Projects area, select the check boxes next to the projects you want
to create project security packages for.
If you are creating project security update packages, you must select
Replace as the conflict resolution action for all users and groups.
Otherwise the project-level security information about those users and
groups is not copied into the destination project.
13 Select the ACL options for objects in this package. For more details on
these options, see Conflict resolution and access control lists, page 321.
14 Enter the name and location of the package file in the Save As field. The
default file extension for update packages is .mmp.
16 When you have added all objects to the package, click Proceed. The
configuration update package and any associated project security update
packages are created in the specified location.
You can also create update packages without needing to open Object
Manager. You must first create an XML file that contains a list of the objects
to be migrated and conflict resolution rules for those objects. Then you
execute the instructions in that XML file using the Project Merge executable.
Sample package creation XML files for project update packages and
configuration update packages can be found in the Object Manager folder.
By default this folder is C:\Program Files\MicroStrategy\Object Manager\.
The XML file has the same structure as an XML file created using the
Project Merge Wizard. For more information about creating an XML
file for use with Project Merge, see Merging projects to synchronize
objects, page 336.
2 Edit your copy of the XML file to include the following information, in the
appropriate XML tags:
• AddDependents:
– Yes for the package to include all dependents of all objects in the
package.
• ConnectionMode:
– 2-tier for a direct (2-tier) project source connection.
• Login: The user name to connect to the project source. You must
provide a password for the user name when you run the XML file from
the command line.
3 For a project update package, you can specify conflict resolution rules for
individual objects. In an Operation block, specify the ID (GUID) and
Type of the object, and the action to be taken. For information about the
actions that can be taken in conflict resolution, see Choosing an action to
resolve a conflict, page 318.
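As a rough sketch, a package-creation settings file might contain entries
like the following. Only the tag names AddDependents, ConnectionMode, and
Login, and the Operation block with its ID, Type, and action, are named in
this guide; the wrapper element, exact nesting, and all values shown are
assumptions, so compare against the sample XML files in the Object Manager
folder before use:

<!-- Hypothetical layout; verify against the sample package-creation
     XML files shipped in the Object Manager folder. -->
<PackageCreation>
  <AddDependents>Yes</AddDependents>
  <ConnectionMode>2-tier</ConnectionMode>
  <Login>Administrator</Login>
  <Operation>
    <ID>A1B2C3D4E5F6A7B8C9D0E1F2A3B4C5D6</ID> <!-- object GUID (hypothetical value) -->
    <Type>Metric</Type>
    <Action>Replace</Action> <!-- conflict resolution action -->
  </Operation>
</PackageCreation>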
Importing a package
An update package is saved in a file, and can be freely copied and moved
between machines.
You can import an update package into a project or project source in the
following ways:
• From within Object Manager: You can use the Object Manager
graphical interface to import an update package.
You can also create an XML file to import an update package from the
command line, similar to how you can use an XML file to create an
update package.
2 From the Tools menu, select Import Package. Browse to the saved
update package and select it. The default file extension for update
packages is .mmp.
3 Click OK. All objects in the update package are copied to the destination
project or project source, following the rules specified in the update
package. A log file containing information about the import process is
created in the Object Manager directory.
4 If the package made any changes to the project schema, you may need to
update the schema for the changes to take effect. To update the project
schema, from the Object Manager Project menu, select Update Schema.
The Import Package utility accepts the following parameters:

-u UserName -p Password: Log into the project source with this
MicroStrategy user name and password, using standard authentication
(required unless you are using Windows authentication).

-f PackageLocation: Import this package into the specified project source
(required).
Note: The location must be specified relative to the Intelligence Server
machine, not relative to the machine running the Import Package utility.

-j ProjectName: Import the package into this project (required for project
update packages).

-forcelocking: Force a configuration or project lock prior to importing
the package. This lock is released after the package is imported. For more
information about project and configuration locking, see Locking projects,
page 303.
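Putting these parameters together, an invocation might look like the
sketch below. The executable name ImportPackage is a placeholder (this
guide does not name the utility binary here), and the account, package
path, and project name are hypothetical; note that the package path must
be valid on the Intelligence Server machine:

ImportPackage -u Administrator -p MyPassword -f "D:\packages\finance_update.mmp" -j "Finance" -forcelocking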
These sample package import XML files can be found in the Object Manager
directory. By default this directory is
C:\Program Files\MicroStrategy\Object Manager\.
2 Edit your copy of the XML file to include the following information, in the
appropriate XML tags:
where “Filename” is the name and location of the update package, and
“ProjectName” is the name of the project the update is to be applied to.
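A minimal sketch of those tags might look like the following; the tag
names File and Project are assumptions (only the Filename and ProjectName
placeholders come from this guide), so compare with the sample import XML
files described above:

<File>D:\packages\finance_update.mmp</File>
<Project>Finance</Project>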
If the package made any changes to the project schema, you need to
update the schema for the changes to take effect. You can update the
schema manually or with a Command Manager script, as described below.
The update package cannot recalculate the object client cache size, and it
cannot update the schema logical information. These tasks must be
performed manually. So, for example, if you import an attribute that has a
new attribute form, you must manually update the project schema before any
objects in the project can use that attribute form.
• In Object Manager, select the project and, from the Project menu, select
Update Schema.
• In Desktop, log into the project and, from the Schema menu, select
Update Schema.
• Call a Command Manager script that updates the schema (see the example
after this list).
For more detailed information about updating the project schema, see the
Optimizing and Maintaining your Project chapter in the MicroStrategy
Project Design Guide.
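As a hedged sketch, the Command Manager statement takes roughly the
following form; the project name is hypothetical, and the exact keywords
and optional arguments should be confirmed in the Command Manager online
help:

UPDATE SCHEMA FOR PROJECT "MicroStrategy Tutorial";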
The rules that you use to resolve conflicts between the two projects in Project
Merge can be saved to an XML file and reused. You can then execute Project
Merge repeatedly using this rule file. This allows you to schedule a project
merge on a recurring basis. For more details about scheduling project
merges, see Scheduling a project merge, page 342.
Project Merge migrates an entire project. All objects are copied to the
destination project. Any objects that are present in the source project but not
the destination project are created in the destination project.
Projects may need to be merged at various points during their life cycle.
These points may include:
In either case, you must move objects from development to testing, and then
to the production projects that your users use every day.
In the MicroStrategy system, every object has an ID (or GUID) and a version.
(To see the ID and version of an object, right-click the object and select
Properties.) Project Merge checks the destination project for the existence of
every object in the source project, by ID. The resulting possibilities are
described below:
• If an object exists in the destination project and has the same object ID in
both projects but a different version, there is a conflict that must be
resolved. The conflict is resolved by following the set of rules specified in
the Project Merge Wizard and stored in an XML file. The possible conflict
resolutions are discussed in Project Merge conflict resolution rules,
page 343.
Merging projects with the Project Merge Wizard does not update the
modification date of the project, as shown in the Project Configuration
Editor. This is because, when copying objects between projects, only
the objects themselves change. The definition of the project itself is
not modified by Project Merge.
After going through the steps in the wizard, you can either execute the merge
right away or save the rules and settings in a Project Merge XML file. You can
use this file to run Project Merge from the Windows command prompt (see
Running Project Merge from the command prompt, page 340) or to
schedule a merge (see Scheduling a project merge, page 342).
The following scenario runs through the Project Merge Wizard several times,
each time fine-tuning the rules, and the final time actually performing the
merge.
Both the source and the destination project must be loaded for the
project merge to complete. For more information on loading projects,
see Setting the status of a project, page 455.
2 Follow the steps in the wizard to set your options and conflict resolution
rules.
For details about all settings available when running the wizard,
see the online help (press F1 from within the Project Merge
Wizard). For information about the rules for resolving conflicts,
see Resolving conflicts when merging projects, page 343.
3 Near the end of the wizard, when you are prompted to perform the merge
or generate a log file only, select Generate log file only. Also, choose to
Save Project Merge XML. At the end of the wizard, click Finish. Because
you selected to generate a log file only, this serves as a trial merge.
4 After the trial merge is finished, you can read through the log files to see
what would have been copied (or not copied) if the merge had actually
been performed.
5 Based on what you learn from the log files, you may wish to change some
of the conflict resolution rules you set when going through the wizard. To
do this, run the wizard again and, at the beginning of the wizard, choose
to Load the Project Merge XML that you created in the previous run. As
you proceed through the wizard, you can fine-tune the settings you
specified earlier. At the end of the wizard, choose to Generate the log file
only (thereby performing another trial) and choose Save the Project
Merge XML. Repeat this step as many times as necessary until the log file
indicates that objects are copied or skipped as you desire.
6 When you are satisfied that no more rule changes are needed, run the
wizard a final time. At the beginning of the wizard, load the Project Merge
XML as you did before. At the end of the wizard, when prompted to
perform the merge or generate a log file only, select Perform merge and
generate log file.
A Project Merge can be launched from the Windows command prompt. You
can also run several sessions of the Project Merge Wizard with the same
source project, using the command prompt. For information on running
multiple sessions, see Multiple project merges from the same project,
page 341.
The settings for this routine must be saved in an XML file which can easily be
created using the Project Merge Wizard. Once created, the XML file serves as
the input parameter to the command.
-f[ ] Specifies the path and file name (without spaces) of the XML file to use. (You must have
already created the file using the Project Merge Wizard.) Example:
-fc:\files\merge.xml
-sp[ ] Password for SOURCE Project Source. (The login ID to be used is stored in the XML file.)
Example: -sphello
-dp[ ] Password for DESTINATION Project Source. (The login ID to be used is stored in the XML
file.) Example: -dphello
-smp[ ] Password for SOURCE metadata. (The login ID to be used is stored in the XML file.)
Example: -smphello
-dmp[ ] Password for DESTINATION metadata. (The login ID to be used is stored in the XML file.)
Example: -dmphello
-sup Suppress the progress window. This is useful for running a project merge in the
background; the window displaying the status of the merge does not appear.
-MD Forces metadata update of DESTINATION metadata if it is older than the SOURCE
metadata. Project Merge will not execute unless DESTINATION metadata is the same
version as or more recent than SOURCE metadata.
-SU Updates the schema of the DESTINATION project after the Project Merge is completed.
This update is required when you make any changes to schema objects (facts, attributes,
or hierarchies).
Note: Do not use this switch if the Project Merge configuration XML contains an instruction
to update the schema.
If the XML file contains a space in the name or the path, you must
enclose the name in double quotes.
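Combining the switches above into one invocation, a sketch might look like
the following; the executable name projectmerge.exe is an assumption (this
guide does not name the binary here), while the switches and the quoting
of a path that contains spaces follow the conventions documented above:

projectmerge.exe -f"c:\merge files\merge.xml" -sphello -dphello -sup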
The Project Merge Wizard can perform multiple simultaneous merges from
the same project source. This can be useful when you wish to propagate a
change to several projects simultaneously.
To do this, you must modify the Project Merge XML file, and then make a
copy of it for each session you wish to run.
3 Make one copy of the XML file for each session of the Project Merge
Wizard you wish to run.
• Ensure that each file uses a different Project Merge log file name.
5 Manually lock the source project. For detailed steps on locking projects
manually, see the Desktop online help (press F1 from within Desktop).
7 For each XML file, run one instance of the Project Merge Wizard from the
command line.
For a list of the syntax options for this command, see Running Project Merge
from the command prompt, page 340.
2 Change the drive to the one on which the Project Merge utility is installed.
The default installation location is the C: drive (the prompt appears as:
C:\>)
When you define the rules for Project Merge to use, you first set the default
conflict resolution action for each category of objects (schema, application,
and configuration). (For a list of objects included in each category, see
page 306.) Then you can specify conflict resolution rules at the object type
level (attributes, facts, reports, consolidations, events, schedules, and so on).
Object type rules override object category rules. Next you can specify rules
for specific objects, which, in turn, override both object type rules and object
category rules.
For example, the Use Newer action replaces the destination object with the
source object if the source object has been modified more recently than the
destination object. If you specified the Use newer action for all metrics, but
the Sales metric has been changed recently and is not yet ready for the
production system, you can specify Use existing (use the object in the
destination project) for that metric only and it will not be replaced.
If the source object has a different version than the destination object, that is,
the objects exist differently, you must determine what action should occur.
The various actions that can be taken to resolve conflicts are explained in the
table below.
Use existing: No change is made to the destination object. The source
object is not copied.

Keep both: No change is made to the destination object. The source object
is duplicated in the destination location.

Use newer: If the source object's modification time is more recent than
the destination object's, the Replace action is used. Otherwise, the Use
existing action is used.

Use older: If the source object's modification time is more recent than
the destination object's, the Use existing action is used. Otherwise, the
Replace action is used.
You can track changes to your projects with the MicroStrategy Search
feature, or retrieve a list of all unused objects in a project with the Find
Unused Objects feature of Object Manager.
• Tracking your projects with the Search Export feature, page 346
For the source project, you specify whether to compare objects from the
entire project, or just from a single folder and all its subfolders. You also
specify what types of objects (such as reports, attributes, or metrics) to
include in the comparison.
You can print this result list, or save it as a text file or an Excel file.
Since the Project Comparison Wizard is a part of Object Manager, you can
also select objects from the result set to immediately migrate from the source
project to the destination project. For more information about migrating
objects using Object Manager, see Copying objects between projects,
page 304.
6 Review your choices at the summary screen and click Finish. The objects
in the two projects are compared and the Project Comparison Result Set
dialog opens. This dialog lists all the objects you selected and the results
of their comparison.
7 To save the results, from the File menu select Save as text file or Save
as Excel file.
definition and search results to a text file, and save the search object itself for
later reuse.
For example, you can create a search object in the development project that
returns all objects that have been changed after a certain date. This lets you
know what objects have been updated and need to be migrated to the test
project. For more information about development and test projects, see The
project life cycle, page 288.
• The user who was logged in when the search was performed.
• Any search criteria entered into the tabs of the Search for Objects dialog
box.
3 After your search is complete, from the Tools menu in the Search for
Objects dialog box, select Export to Text. The text file is saved by
default to C:\Program Files\MicroStrategy\Desktop\SearchResults_<date and
timestamp>.txt, where <date and timestamp> is the day and time when the
search was saved. For example, the text file named
SearchResults_022607152554.txt was saved on February 26, 2007, at
15:25:54, or 3:25 PM.
3 From the Tools menu, select Find unused objects. The Search for
Objects dialog box opens.
4 In the Look In field, enter the folder you want to start your search in.
6 Click Find Now. The unused objects are listed at the bottom of the dialog
box.
• Freeform SQL
• Query Builder
• MDX cube sources such as SAP BW, Hyperion Essbase, and Microsoft
Analysis Services
For information on Freeform SQL and Query Builder, see the MicroStrategy
Advanced Reporting Guide. For information on MDX cube sources, see the
MicroStrategy MDX Cube Reporting Guide.
Managed objects are stored in a special system folder, and can be difficult to
delete individually due to how these objects are created and stored. If you use
one of the features listed above, and then decide to remove some or all of that
feature’s related reports and MDX cubes from the project, there may be
unused managed objects included in your project that can be deleted.
For example, you decide to delete a single Freeform SQL report that
automatically created a new managed object named Store. When you delete
the report, the managed object Store is not automatically deleted. You do not
plan to use the object again; however, you do plan to create more Freeform
SQL reports and want to keep the database instance included in the project.
Instead of deleting the entire Freeform SQL schema, you can delete only the
managed object Store.
If you are removing MDX cube managed objects, you must also remove
any MDX cubes that these managed objects depend on.
2 Right-click the project and select Search for Objects. The Search for
Objects dialog box opens.
3 From the Tools menu, select Options. The Search Options dialog box
opens.
6 Enter your search criteria and select Find Now. A list of managed objects
appears.
For example, you can create a separate database instance for your Freeform
SQL reports in your project. Later on, you may decide to no longer use
Freeform SQL, or any of the reports created with the Freeform SQL feature.
After you delete all the Freeform SQL reports, you can remove the Freeform
SQL database instance from the project. Once you remove the database
instance from the project, any Freeform SQL managed objects that depended
solely on that database instance can be deleted.
You can implement the same process when removing database instances for
Query Builder, SAP BW, Essbase, and Analysis Services.
1 Remove all reports created with Freeform SQL, Query Builder, or MDX
cubes.
If you are removing MDX cube managed objects, you must also
remove all imported MDX cubes.
5 Clear the check box for the database instance you want to remove from
the project. You can only remove a database instance from a project if the
database instance has no dependent objects in the project.
8. AUTOMATING TASKS
Introduction
This chapter describes how you can automate certain MicroStrategy jobs and
administrative tasks. Methods of automation include:
• Best practices for scheduling jobs and administrative tasks, page 354
• If you need to create multiple similar subscriptions, you can create them
all at once with the Subscription Wizard. For example, you can subscribe
users to several different reports at the same time.
• When selecting reports to be subscribed to, make sure none of the reports
have prompts that require an answer and have no default answer. If a
report has a prompt that requires an answer but has no default answer,
the subscription will be unable to run the report successfully since the
prompt cannot be resolved, and the subscription will be automatically
invalidated and removed from the system.
About schedules
A schedule is a MicroStrategy object that contains information specifying
when a task is to be executed. A single schedule can control several different
tasks. Schedules are stored at the project source level, and are thus available
to all projects within the project source.
Time-triggered schedules
With a time-triggered schedule, you define a specific date and time at which
the scheduled task is to be run. For example, you can execute a particular
task every Sunday night at midnight. Time-triggered schedules are useful to
allow large, resource-intensive tasks to run at off-peak times, such as
overnight or over a weekend.
Event-triggered schedules
In a clustered environment, administrative tasks associated with
event-triggered schedules are executed only on the node of the cluster
that triggered the event.
Creating schedules
To create a schedule
3 From the File menu, point to New, and then select Schedule. The
Schedule Wizard opens.
5 When you reach the Summary page of the wizard, review your choices
and click Finish. The schedule is created.
You can also create a schedule with the Create Schedule script for
Command Manager. For detailed syntax, see the Create Schedule
script outline in Command Manager.
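For example, a minimal sketch of creating an event-triggered schedule through
Command Manager (the schedule and event names here are hypothetical, and the
exact keyword spelling of the event clause is defined in the Create Schedule
outline):
    CREATE SCHEDULE "OnDBLoadSchedule" DESCRIPTION "Runs when the warehouse load completes" TYPE EVENTTRIGGERED EVENTNAME "OnDBLoad";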
Managing schedules
You can add, remove, or modify schedules through the Schedule Manager.
You can modify the events that trigger event-triggered schedules through the
Event Manager. For instructions on using the Event Manager, see About
events and event-triggered schedules, page 358.
Once Intelligence Server has been notified that the event has taken place,
Intelligence Server performs the tasks associated with the corresponding
schedule.
In a clustered environment, administrative tasks associated with
event-triggered schedules are only executed by the node on which the
event is triggered. MicroStrategy recommends that you use
event-triggered schedules in situations where it is important to
control which node performs certain tasks.
Creating events
1 Log in to a project source. You must log in as a user with the Create And
Edit Schedules And Events privilege.
3 From the File menu, point to New, and then select Event. A new event is
created.
You can also create events with the Create Event script in Command
Manager. A minimal sketch follows (the event name here is hypothetical;
see the Create Event script outline in Command Manager for the full syntax):
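    CREATE EVENT "OnDBLoad" DESCRIPTION "Signals that the nightly warehouse load has completed";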
Triggering events
At the end of the database load routine, you include a statement to add a line
to a database table, DB_LOAD_COMPLETE, that indicates that the database
load is complete. You then create a database trigger that checks to see when
the DB_LOAD_COMPLETE table is updated, and then executes a Command
Manager script. That script contains a line to this effect (following the
Trigger Event script outline in Command Manager):
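    TRIGGER EVENT "OnDBLoad";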
When the script is executed, the OnDBLoad event is triggered, and the
schedule is executed.
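The database trigger typically invokes such a script through the Command
Manager command-line interface. A sketch, assuming the script is saved as
trigger_event.scp and the project source is named MyProjectSource (both
names hypothetical):
    cmdmgr -n "MyProjectSource" -u Administrator -p MyPassword -f trigger_event.scp -o trigger_event.log
Here -n names the project source, -u and -p supply the credentials, -f points
to the script file to execute, and -o writes a log of the results.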
You can also use the MicroStrategy SDK to develop an application that
triggers an event. You can then cause the database trigger to execute this
application. For information about obtaining the MicroStrategy SDK, contact
your MicroStrategy account representative.
You can manually trigger events using the Event Manager. This is primarily
useful in a testing environment. In a production system it is often not
practical for the administrator to be present to trigger event-based
schedules.
1 In Desktop, log in to a project source. You must log in as a user with the
Trigger Event privilege.
Scheduling the execution of reports and documents reduces the load on the
system in two ways:
• You can create caches for frequently-accessed reports and documents,
which provides fast response times to users while not generating
additional load on the database system.
A subscription for a document only creates or updates that
document's cache for the default mode of the document (HTML, PDF,
Excel, or XML/Flash). If the document is viewed in other modes it
does not use this cache. For more information about document
caches, see Cache matching algorithm, page 211.
You can also create multiple subscriptions at one time for a user or user
group using the Subscription Wizard, or subscribe to an individual report in
MicroStrategy Web or with a Command Manager script.
2 From the File menu, point to Schedule Delivery To, and select the type
of subscription to create. The Subscription Editor for that type of
subscription opens. For a list of the types of subscriptions, see Types of
subscriptions, page 363. For detailed instructions on using the
Subscription Editor, click Help.
2 Step through the wizard, specifying a schedule and type for the
subscriptions, and the reports and documents that are subscribed to.
3 When you have reached the Summary page of the wizard, review the
subscription information and click Finish. The subscription is created
and available for viewing in the Subscription Manager.
1 On the reports page, under the name of the report/document for which
you want to create a schedule, click the Subscriptions button.
2 Select Add Subscription for the type of subscription you want to create.
For a list of the types of subscriptions, see Types of subscriptions,
page 363. For detailed instructions on creating a subscription, click Help.
Types of subscriptions
• Cache update subscriptions refresh the cache for the specified report or
document. For example, your system contains a set of standard weekly
and monthly reports. These reports should be kept in cache because they
are frequently accessed. Certain tables in the database are refreshed
weekly, and other tables are refreshed monthly. Whenever these tables
are updated, the appropriate caches should be refreshed.
Cache update subscriptions often use event-triggered schedules because
caches do not need refreshing unless the underlying data changes from
an event like a data warehouse load. For more information on caches, see
Result caches, page 204. For additional suggestions for scheduling
strategies, see Scheduling updates of result caches, page 217.
• History List subscriptions create a History List message for the specified
report or document. For more information about the History List, see
Saving report results: History List, page 233.
If you have purchased a Distribution Services license, you can also
schedule reports and documents to be emailed to users, saved as Excel
or PDF files, or printed. For more information about Distribution
Services, see Scheduling deliveries to email, file, and printer:
Distribution Services, page 370.
When you set up several Intelligence Server machines in a cluster, you can
distribute projects across those clustered machines (or nodes) in any
configuration. Each node can host a different subset of projects. For more
information about clustering Intelligence Servers, see Chapter 11, Clustering
Multiple MicroStrategy Servers.
A subscribed report can contain prompts. How and whether the report is
executed depends on the prompt definition. For more information about
prompts, see the Prompts chapter in the Advanced Reporting Guide.
Whether the report executes depends on whether the prompt requires an
answer and whether a default answer is present:
• Not required, no default answer: The prompt is ignored since it is not
required; the report is executed but it is not filtered by the prompt.
• Not required, default answer present: The prompt and default answer are
ignored since the prompt is not required; the report is executed but it is
not filtered by the prompt.
• Required, no default answer: The report is not executed. No answer is
provided to the required prompt, so MicroStrategy cannot complete the report
without user interaction.
• Required, default answer present: The report is executed; the prompt is
answered with the default.
When you create a subscription, you can force the report or document to
re-execute against the warehouse even if a cache is present, by selecting the
Re-run against the warehouse check box in the Subscription Wizard. You
can also prevent the subscription from creating a new cache by selecting the
Do not create or update matching caches check box.
You can change the default values for these check boxes in the Project
Configuration Editor, in the Caching: Subscription Execution category.
You can set the maximum number of subscriptions of each type that each
user can have for each project. This can prevent excessive load on the system
when subscriptions are executed. By default, there is no limit to the number
of subscriptions. You set these limits in the Project Configuration Editor.
3 Choose a task from the action list. For descriptions of the tasks, see the
table below.
5 Set any additional options required for the task. For information about
the possible options for each task, click Help in the Schedule
Administration Tasks dialog box.
6 Click OK. The Schedule Administration Tasks dialog box closes and the
task is scheduled.
The list below describes the tasks that can be scheduled for a project. Some
of the tasks can also be scheduled at the project source level, affecting all
projects in that project source.
• Delete report caches: Delete all report caches for the project. For more
information, see Deleting result caches, page 220. Note: Typically the
"Invalidate Caches" task is sufficient to clear the report caches.
• Delete History List messages (project or project source): Delete all
History List messages for the project or project source. For more
information, see Scheduling History List message deletion, page 247. Note:
This maintenance request can sometimes be large. Schedule History List
deletions during times when Intelligence Server is not busy, such as during
hours when users are not sending requests to the system. Alternatively,
delete History Lists in increments; for example, delete the History Lists of
groups of users at different times, such as at 1 AM, 2 AM, and so on.
• Invalidate caches: Invalidate the report caches in a project. The invalid
caches are automatically deleted once all references to them have been
deleted. For more information, see Invalidating result caches, page 218.
• Purge element caches: Delete the element caches for a project. For more
information, see Deleting all element caches, page 260.
• Activate Intelligent Cubes: Publish an Intelligent Cube to Intelligence
Server, making it available for use in reports. For more information, see
Chapter 6, Managing Intelligent Cubes.
• Deactivate Intelligent Cubes: Unpublish an Intelligent Cube from
Intelligence Server. For more information, see Chapter 6, Managing
Intelligent Cubes.
• Delete Intelligent Cube: Delete an Intelligent Cube from the server. For
more information, see Chapter 6, Managing Intelligent Cubes.
• Update Intelligent Cubes: Update a currently published Intelligent Cube.
For more information, see Chapter 6, Managing Intelligent Cubes.
• Idle project: Cause the project to stop accepting certain types of
requests. For more information, see Setting the status of a project, page 455.
• Load project: Bring the project back into normal operation from an
unloaded state. For more information, see Setting the status of a project,
page 455.
• Resume project: Bring the project back into normal operation from an idle
state. For more information, see Setting the status of a project, page 455.
• Unload project: Take a project offline to users and remove the project
from Intelligence Server memory. For more information, see Setting the
status of a project, page 455.
• Batch LDAP import (project source only): Import LDAP users into the
MicroStrategy system. For more information, see Importing a list of users
and groups in batch, page 122.
• Delete unused managed objects (project or project source): Remove the
unused managed objects created for Freeform SQL, Query Builder, and MDX cube
reports. For more information, see Deleting unused schema objects: managed
objects, page 348.
• Purge statistics (project or project source): Purge the Statistics
database for this project. For more information, see the chapter Analyzing
System Usage with Enterprise Manager, in the System Administration Guide
Volume 2.
1 In Desktop, log in to a project source. You must log in as a user with the
Administer Subscriptions privilege.
5 To delete a scheduled task, right-click the task and select Expire. The task
is removed from the list of tasks.
Users are not notified when a task they have scheduled is deleted.
Scheduling administrative tasks in a clustered system
When you set up several Intelligence Server machines in a cluster, you can
distribute projects across those clustered machines (or nodes) in any
configuration. Each node can host a different subset of projects. For more
information about clustering Intelligence Servers, see Chapter 11, Clustering
Multiple MicroStrategy Servers.
You can see which nodes are running which projects using the Cluster view of
the System Administration monitor. For details on using the Cluster view of
the System Administration monitor, see Managing your clustered projects,
page 504.
• Send Now
About transmitters
About devices
Distribution Services comes with default email, file, and print devices that
are already set up, out of the box. You can use the default devices as is,
modify their settings according to your requirements, or create your own
devices from scratch if you require additional devices with different
combinations of properties. For example, you may require one email device
to send emails to Microsoft Outlook and a separate device to send emails to
web-based email accounts such as Yahoo, Gmail, Hotmail, and so on.
For details on how to create or manage devices, see Creating and managing
devices, page 383.
About contacts
Contacts provide a user with a set of associated email addresses, file
delivery locations, and network printer delivery locations. To make it easier to manage
all the addresses and delivery locations for a user, you can create a contact
for each address and delivery location. Contacts allow you to group multiple
addresses together by linking those contacts to a MicroStrategy user. The
user linked to the contacts can have reports and documents subscribed to the
contacts, and thus the reports and documents are delivered to selected
addresses and delivery locations defined for those contacts. Since a contact
can be linked to only one MicroStrategy user account, no other users can
access or see the address in a contact.
For details on how to create or manage contacts, see Creating and managing
contacts, page 389.
5 In MicroStrategy Web, the user selects his own address from the To
drop-down menu. If he chooses, he can also select additional addresses
for himself, other MicroStrategy users, or other contacts, to also receive
this report or document based on the subscription.
• Enable the zipping feature for the subscription so that files are reduced
in size.
• Use bulk export instead of the CSV file format. Details on bulk exporting
are in the Reports chapter of the Advanced Reporting Guide.
• Enable caching.
It is strongly recommended that you exercise caution when
changing settings from the default. See the appropriate section of
this manual for details on each setting.
• Limit the Number of scheduled jobs per project and per Intelligence
Server.
• Increase the User session idle time.
• Enable caching.
It is strongly recommended that you exercise caution when
changing settings from the default. See the appropriate section of
this manual for details on each setting.
• When creating contacts, make sure that each contact has at least one
address for each delivery type (email, file, and print). Otherwise the
contact will not appear in the list of contacts for subscriptions that are for
a delivery type that the contact has no address for. For example, if a
contact does not have an email address, then when an email subscription
is being created, that contact will not appear in the list of contacts.
• When selecting reports to be subscribed to, make sure none of the reports
have prompts that require an answer and have no default answer. If a
report has a prompt that requires an answer but has no default answer,
the subscription will be unable to run the report successfully and the
subscription will be automatically removed from the system.
Prerequisites
• Understand your users’ requirements for subscribing to reports and
where they want them delivered.
• Have administrator privileges.
Checklist
The following high-level checklist shows you what you need to do to set up a
report delivery system in MicroStrategy using Distribution Services.
• For best practices for working with transmitters, see Best practices for
working with transmitters, page 378.
• For best practices for working with devices, see Best practices for
working with devices, page 384.
• For steps to modify a device, see Viewing and modifying a device and
accessing the device editors, page 385.
• For steps to create a new device, see Creating and managing devices,
page 383.
Using the Transmitter Editor, you can view and modify the definition of a
transmitter, rename the transmitter, duplicate the transmitter, and so on.
2 In the Transmitter List area on the right, right-click the transmitter that
you want to view or change settings for.
4 Change the transmitter settings as desired. Click Help for details on each
option in the interface.
Creating a transmitter
2 Right-click in the Transmitter List area on the right, select New, and
select Transmitter. The Select Transmitter Type dialog box opens.
3 Select Email and click OK. The Email Transmitter Editor opens.
4 Change the transmitter settings as desired. Click Help for details on each
option in the interface.
5 Click OK to save the transmitter. The new transmitter with the specified
name is added to the list of existing transmitters in the Transmitter List
area.
Once an email transmitter is created, you can create email devices based on
this transmitter. When you create a device, the transmitter appears in the list
of existing transmitters in the Select Device Type dialog box. The settings you
specified above for the email transmitter apply to all email devices that will
be based on this transmitter.
A file transmitter sends a subscribed report or document, in the form of a
file, to a file storage location on a network computer.
2 Right-click in the Transmitter List area on the right, select New, then
select Transmitter. The Select Transmitter Type dialog box opens.
3 Select File and click OK. The File Transmitter Editor opens.
4 Change the transmitter settings as desired. Click Help for details on each
option in the interface.
5 Click OK to save the transmitter. The new transmitter with the specified
name is added to the list of existing transmitters in the Transmitter List
area.
Once a file transmitter is created, you can create file devices based on this
transmitter. When you create a device, the transmitter appears in the list of
existing transmitters in the Select Device Type dialog box. The settings you
specified above for the file transmitter apply to all file devices that will be
based on this transmitter.
2 Right-click in the Transmitter List area on the right, select New, and
select Transmitter. The Select Transmitter Type dialog box opens.
3 Select Print and click OK. The Print Transmitter Editor opens.
4 Change the transmitter settings as desired. Click Help for details on each
option in the interface.
5 Click OK to save the transmitter. The new transmitter with the specified
name is added to the list of existing transmitters in the Transmitter List
area.
Once a print transmitter is created, you can create print devices based on this
transmitter. When you create a device, the transmitter appears in the list of
existing transmitters in the Select Device Type dialog box. The settings you
specified above for the print transmitter apply to all print devices that will be
based on this transmitter.
Deleting a transmitter
Prerequisites
• You cannot delete a transmitter if there are devices depending on the
transmitter. You must first delete any devices that depend on the
transmitter.
To delete a transmitter
2 In the Transmitter List area on the right, right-click the transmitter that
you want to delete.
3 Select Delete. The Confirm Delete Object message is displayed. See the
prerequisite above to be sure you have properly prepared the system to
allow the transmitter to be deleted.
For example, if you want to send reports via email, and your recipients use an
email client such as Microsoft Outlook, then you can create a Microsoft
Outlook email device that has settings appropriate for working with Outlook.
If you need to send reports to a file location on a computer on your network,
you can create a file device specifying the network file location. If you want to
send reports to a printer on your network, you can create a printer device
specifying the network printer location and printer properties.
Print and file locations for devices created when in server
connection mode (three-tier) are automatically validated by
MicroStrategy.
For file delivery locations, use the Device Editor’s File: General tab
and File: Advanced Properties tab.
For printer locations, use the Device Editor’s Print: General tab and
Print: Advanced Properties tab.
• Test a delivery using each device (email, file, and print) to make sure that
the device settings are still effective and any system changes that may
have occurred do not require changes to any device settings.
• If you have a new email client that you want to use with Distribution
Services functionality, create a new email device and apply settings
specific to your new email application. To create a new device quickly, use
the Duplicate option and then change the device settings so they suit
your new email application.
• If you rename a device or change any settings of a device, test the device
to make sure that the changes allow the device to deliver reports or
documents successfully for users.
Use the Device Editor to view and modify the definition of a device, rename
the device, and so on.
2 In the Device List area on the right, right-click the device that you want to
view or change settings for, and select Edit. The Device Editor opens.
3 Change the device settings as desired. Click Help for details on each
option in the interface.
To rename a device, right-click the device and select Rename. Type a new
name, and then press ENTER. When you rename a device, the contacts and
subscriptions using the device are updated automatically.
A file device can automatically send a report or document in the form of a file
to a storage location such as a folder on a computer on your network. Users
subscribe to a report or document, which triggers the file device to send the
subscribed report or document to the specified location when the
subscription requires it to be sent.
You must specify the file properties and the network file location for the file
device to deliver files to. You can include properties for the delivered files
such as having the system set the file to Read-only, label it as Archive, and so
on.
A quick way to create a new file device is to duplicate an existing device and
then edit its settings to meet the specific needs for this new device. This
is a time-saving method if you have a similar device already created.
2 Right-click in the Device List area on the right, select New, and then
Device. The Select Device Type dialog box opens.
3 Select File and click OK. The File Device Editor opens.
4 Change the device settings as desired. Click Help for details on each
option in the interface.
Once the file device is created, it appears in the list of existing file devices
when you create an address (in this case, a path to a file storage location such
as a folder) for a MicroStrategy user or a contact. You select a file device and
assign it to the address you are creating. When a user subscribes to a report
to be delivered to this address, the report is delivered to the file delivery
location specified in that address, using the delivery settings specified in the
associated file device. Click Help for details to create an address for a user or
to create a contact and add addresses to the contact.
You can specify various MIME options for the emails sent by an email device,
such as the type of encoding for the emails, the type of attachments the
emails can support, and so on.
Prerequisites
• An understanding of your organization’s email server or other email
delivery systems.
2 Right-click in any open space in the Device List area on the right, select
New, and then Device. The Select Device Type dialog box opens.
3 Select Email and click OK. The Email Device Editor opens.
4 Change the device settings as desired. Click Help for details on each
option in the interface.
Once an email device is created, it appears in the list of existing email devices
when you create an address for a MicroStrategy user or a contact. You select
an email device and assign it to the address you are creating. When a user
subscribes to a report to be sent to this address, the report is sent to the email
recipient specified in that address, using the delivery settings specified in the
associated email device. Click Help for details to create an address for a user
or to create a contact and add addresses to the contact.
Prerequisites
• The selected printer must be added to the list of printers on the machine
on which MicroStrategy Intelligence Server is running.
2 Right-click in the Device List area on the right, select New, and then
Device. The Select Device Type dialog box opens.
3 Select Print and click OK. The Print Device Editor opens.
4 Change the device settings as desired. Click Help for details on each
option in the interface.
Once a print device is created, it appears in the list of existing print devices
when you create an address (in this case, a path to the printer) for a
MicroStrategy user or a contact. You select a print device and assign it to the
address you are creating. When a user subscribes to a report to be sent to this
address, the report is sent to the printer specified in that address, using the
delivery settings specified in the associated print device. Click Help for
details to create an address for a user or to create a contact and add
addresses to the contact.
Deleting a device
Prerequisites
Update those contacts and subscriptions that are using the device, by
replacing the device with a different one. To do this, check whether the
device you want to delete is used by any existing addresses:
• To find contacts, use the Delivery Manager for Contacts (in View
Options, select the device name).
To delete a device
2 In the Device List area on the right, right-click the device you want to
delete.
3 Select Delete. The Confirm Delete Objects message is displayed. See the
Prerequisites above to be sure you have properly prepared the system to
allow the device to be deleted.
Contacts can also be used when you want to deliver reports or documents to
people who are not MicroStrategy users. For an example and more details on
using contacts this way, see the Desktop Help.
Prerequisites
• Understand your users’ requirements for file and printer delivery
locations, and email addresses, as well as the reports and documents they
are likely to subscribe to or be subscribed to. For example, some
MicroStrategy documents are Flash dashboards, which require Flash to
be installed wherever the dashboard is delivered to. For specific
requirements for Flash dashboards, see the MicroStrategy Document
Creation Guide.
• If the user linked to one or more contacts does not need to receive
subscribed reports and documents, delete any associated contacts.
• If you have many contacts and contact groups, use the filter to restrict the
number of contacts you are viewing when performing contact
maintenance tasks. Click Help for steps to use the filter.
You can view and modify the definition of a contact, rename the contact,
duplicate the contact, delete or disable a contact, and so on, using the
Contact Editor.
2 In the Contact List area on the right, right-click the contact that you want
to view or change settings for.
4 Change the name, description, or other settings of the contact. For details
on each option, click Help.
• Delete: Deletes the selected contact. For important warnings and other
details, see Deleting a contact, page 396.
• Disable Contact/Enable Contact: Disables or enables the selected
contact. Disabling a contact means the contact will no longer be available
for report or document subscription. For example, this option is useful
when a printer or server is down for maintenance and the delivery
address (path to the printer or file storage location) associated with the
contact is not available for a period of time.
Creating a contact
You create a new contact for each address (email address, file storage
location on your network, or network printer path) that reports and
documents will be delivered to.
To create a contact
2 Right-click in the Contact List area on the right, select New, and then
Contact. The Contact Editor opens.
3 Change the contact settings as desired. Click Help for details on each
option in the interface.
4 Click OK.
A contact group is a set of contacts that are combined under one name.
Contact groups are useful to create when there are certain reports that need
to be sent to multiple contacts. For example, if there are four contacts that
need to receive the same subscribed reports, you can group the contacts into
a contact group and subscribe the contact group to the reports, rather than
subscribing each contact individually.
2 Right-click in the Contact List area on the right, select New, and then
Contact Group. The Contact Group Editor opens.
3 Change the contact group settings as desired. Click Help for details on each
option in the interface.
A contact group must be linked to a user for its contacts to be
available for report and document subscription.
4 Click OK.
You can also group multiple contact groups into one contact group. Grouping
multiple contact groups into a contact group makes it easy to send out
wide-distribution reports that have no security implications, such as an
employee birthday list that is sent out at the beginning of every month.
All members (contacts) of each contact group within the top-level contact
group receive the same subscribed reports, when the top-level contact group
is chosen as the recipient of a subscription.
The Contact List area displays a list of users linked to contacts, along with the
list of contacts and contact groups. Right-click a user and select from the
following options:
• Edit: Opens the User Editor for the selected user. For details on each
option in the interface, click Help.
Any changes made to the user account in the User Editor will affect
the user’s account across the MicroStrategy system.
• Rename: Allows you to rename the selected user. Right-click the user
and select Rename. Type a new name and press ENTER.
Maintaining addresses
A MicroStrategy user can have several email, file, and/or printer addresses
for subscribed reports to be delivered to when the user subscribes to or is
subscribed to a report. Contacts (each containing an address) are linked to
the user. You can create and add addresses to a user on the Addresses tab of
the User Editor.
In the Contacts List area, right-click an address for a contact or a user and
select from the following options:
• Edit: Opens the User Editor: Addresses tab if you right-click an address
within a user. Opens the Contact Editor: Addresses tab if you right-click
an address within a contact. Click Help for details on each option in the
interface.
Deleting a contact
Prerequisites
• Check to see whether you need to save any of the delivery locations
(addresses) that make up the contact that you plan to delete. To do this,
first search for subscriptions that are dependent on the contact by
right-clicking the contact and selecting Search for dependent
subscriptions. If you want those subscriptions to continue to be sent to
any of the contact’s delivery locations, create a new contact and then
copy/paste that delivery location into the new contact.
To delete a contact
2 In the Contact List area on the right, right-click the contact you want to
delete.
Introduction
Translating your data and metadata allows your users to view their reports in
a variety of languages. It also allows report designers and others to display
report and document editors and other objects editors in various languages.
And because all translation information can be stored in the same project,
project maintenance is easier and more efficient for administrators.
Some parts of a report, such as data that comes from your data warehouse,
are translated using data internationalization; other parts, such as the
names of metadata objects on the report, are translated using metadata
internationalization.
• Achieving the correct language display, page 443 provides a table of the
functionality that MicroStrategy users can access to take advantage of
internationalization.
About internationalization
For a fully internationalized environment, both metadata
internationalization and data internationalization are required. However,
you can internationalize only your metadata, or only your data, based on
your needs. Both are described below.
• For steps to select the interface language in Desktop, see Selecting the
Interface language preference, page 429.
Different caches are created for different languages. When a user whose MDI
language and DI language are French runs a report, a cache is created
containing French data and using the report’s French name. When a
different user whose MDI language and DI language are German runs the
same report, a new cache is created with German data and using the report’s
German name. If a third user whose MDI language is French and DI
language is German runs the same report, a new cache is created with
German data but using the report’s French name.
• If you have old projects with metadata objects that have been previously
translated, it is recommended that you merge your translated strings
from your old metadata into the newly upgraded metadata using
MicroStrategy Project Merge. For steps, see Translating already
translated pre-9.x projects, page 416.
Prerequisites
• This chapter includes steps to be taken when installing or upgrading to
MicroStrategy Desktop 9.x. You should be prepared to use the steps
below during the installation or upgrade process. For steps to install, see
the Installation and Configuration Guide. For steps to upgrade, see the
Upgrade Guide.
The first step to internationalizing your data and metadata is to add the
internationalization tables to your MicroStrategy metadata repository.
2 Log into the project. You are prompted to update your project. Click Yes.
If you prefer to provide your own translations (for example if you will be
customizing folder names), you do not need to perform this procedure.
You can modify these default privileges if you need to for a specific user role
or a specific language object.
3 All language objects are listed on the right. To change ACL permissions
for a language object, right-click the object and select Properties.
4 Select Security on the left. For details on each ACL and what access it
allows, click Help.
After the metadata has been updated and your project has been prepared for
internationalization (usually performed during the MicroStrategy
installation or upgrade), you enable languages so they will be supported by
the project for metadata internationalization.
Prerequisites
• Gather a list of languages used by filters and prompts in the project.
These languages should be enabled for the project; otherwise, a report
containing a filter or prompt in a language not enabled for the project will
not execute successfully.
The languages displayed in bold blue are those languages that the
metadata objects have been enabled to support. This list is displayed as a
starting point for the set of languages you can choose to enable for
supporting data internationalization.
To add a new language, click New. The Languages Editor opens. For
steps to create a custom language, see Adding or removing a
language in the system, page 446.
5 Select the check boxes for the languages that you want to enable for this
project.
7 Select one of the languages on the right side to be the default language for
this project. The default language is used by the system to maintain object
name uniqueness.
This may have been set when the project was first created. If so, it
will not be available to be selected here.
If you are enabling a language for a project that has been upgraded
from 8.x or earlier, the default metadata language must be the
language in which the project was originally created (the 8.x
Desktop language at the time of project creation). The only way to
change the default language is to duplicate the project.
8 Click OK.
You can use the steps below to disable a language for a project. When a
language has been disabled from a project, that language is no longer
available for users to select as a language preference, and the language
cannot be seen in any translation-related interfaces, such as an object’s
Properties dialog box.
If a user's preferred language is disabled, the next lower priority
language preference will take effect. To see the language preference
priority hierarchy, see Configuring metadata object and report data
language preferences, page 430.
Any translations for the disabled language are not removed from the
metadata with these steps. To do that, objects that contain translations in the
disabled language must be modified individually and saved.
4 On the right side, under Selected Languages, clear the check box for the
language that you want to disable for the project, and click OK. The
Project Configuration Editor closes.
• Translate a single object: You can simply right-click the object, select
Properties, select International on the left, and click Translate. Type the
translated word(s) for each language this object supports, and click OK.
For details to use the Object Translation dialog box, click Help.
The rest of this section describes the method to translate bulk object strings,
using a separate translation database, with the Repository Translation
Wizard.
All of the procedures in this section assume that your projects have
been prepared for internationalization. Preparation steps are in
Preparing a project to support internationalization, page 404.
4 Import the newly translated object strings back into the metadata
repository (see Importing translated strings from the translation
database to the metadata, page 415).
3 Click Help for details on each option in each page of the wizard.
• To extract strings from the metadata, select Extract the
MicroStrategy Repository objects from the appropriate page in the
wizard.
• 1: The object has been modified between extraction and import.
• 2: The object that is being imported is no longer present in the
metadata.
• LASTMODIFIED: The date and time when the strings were extracted.
If an object has an empty translation in a user's chosen project
language preference, the system defaults to displaying the object's
default language, so it is not necessary to add translations for objects
that are not intended to be translated. For details on language
preferences, see Selecting preferred languages for interfaces,
reports, and objects, page 428.
3 Click Help for details on each option in each page of the wizard.
• To import strings from the translation database back into the
metadata, select Translate the MicroStrategy Repository objects in
the appropriate page in the wizard.
After the strings are imported back into the project, any objects that were
modified while the translation process was being performed are
automatically marked with a 1. These translations should be checked for
correctness, since the modification may have included changing the object's
name or description.
When you are finished with the string translation process, you can proceed
with data internationalization if you plan to provide translated report data to
your users. You can also set user language preferences for translated
metadata objects and data in Enabling or disabling languages in the project
to support DI, page 422.
2 Back up your existing translated strings by extracting all objects from the
old translated projects using the MicroStrategy Repository Translation
Wizard (see Extracting metadata object strings for translation,
page 412).
3 Merge the translated projects into the master project using the Project
Merge Wizard. Do not merge any translations.
You now have a single master project that contains all objects that were
present in both the original master project and in the translated project.
4 Extract all objects from the master project using the MicroStrategy
Repository Translation Wizard (see Extracting metadata object strings
for translation, page 412).
6 Import all translations back into the master project (see Importing
translated strings from the translation database to the metadata,
page 415).
All of the procedures in this section assume that your projects have
been prepared for internationalization. Preparation steps are in
Preparing a project to support internationalization, page 404.
1 Store the translated data in a data warehouse. Translated data strings can
be stored either in their own columns and/or tables in the same
warehouse as the source (untranslated) data, or in different warehouses
separated by language. Some organizations keep the source language
stored in one warehouse, with all other languages stored together in a
different warehouse. You must configure MicroStrategy with a DI model
so it can connect to one of these storage scenarios: the SQL-based model
and the connection-based model. For details on each model and steps to
configure MicroStrategy, see Storing translated data: data
internationalization models, page 418.
You must connect MicroStrategy to your storage system for translated data.
To do this, you must identify which type of storage system you are using.
Translated data for a given project is stored in one of two ways:
• In columns and tables within the same data warehouse as your source
(untranslated) data (see SQL-based DI model, page 418)
SQL-based DI model
If all of your translations are stored in the same data warehouse as the source
(untranslated) data, this is a SQL-based DI model. This model assumes that
your translation storage is set up for column-level data translation (CLDT)
and/or table-level data translation (TLDT), with standardized naming
conventions.
This model is called SQL-based because SQL queries are used to directly
access data in a single warehouse for all languages. You can provide
translated DESC (description) forms for attributes with this DI model.
If you are using a SQL-based DI model, you must specify the column pattern
or table pattern for each language. The pattern depends upon the table and
column names that contain translated data in your warehouse.
MicroStrategy supports a wide range of string patterns. The string pattern is
not limited to suffixes only. However, using prefixes or other non-suffix
naming conventions requires you to use some functions so that the system
can recognize the location of translated data. These functions are included in
the steps to connect the system to your database.
Connection-based DI model
Choosing a DI model
You must evaluate your physical data storage for both your source
(untranslated) language and any translated languages, and decide which
data internationalization model is appropriate for your environment.
The translation storage location determines the translation access method
and the DI model:
• Different tables for each language, in one data warehouse: different SQL
is generated for each language (SQL-based model).
• Different columns for each language, in one data warehouse: different SQL
is generated for each language (SQL-based model).
• Different tables and columns for each language, in one data warehouse:
different SQL is generated for each language (SQL-based model).
• One data warehouse for each language: a different database connection is
used for each language (connection-based model).
The Project Design Guide describes table and column naming patterns, and
explains the use of only tables, only columns, or both tables and columns,
the use of logical views, and so on.
For detailed steps to connect the system to your translation database, see the
Project Design Guide, Enabling data internationalization through SQL
queries section. The Project Design Guide includes details to select your
table or column naming pattern, as well as functions to use if your naming
pattern does not use suffixes.
If you are changing from one DI model to another, you must reload the
project after completing the steps above. Settings from the old DI model are
preserved, in case you need to change back.
Connection mapping can also be performed using Command
Manager.
The database connection that you use for each data warehouse must be
configured in MicroStrategy before you can provide translated data to
MicroStrategy users.
Prerequisites
• The procedure in the Project Design Guide assumes that you will enable
the connection-based DI model. If you decide to enable the SQL-based
model, you can still perform the steps to enable the connection-based
model, but the language-specific connection maps you create in the
procedure will not be active.
• You must have the Configure Connection Map privilege, at either the user
level or the project level.
For detailed steps to connect the system to more than one data warehouse,
see the Project Design Guide, Enabling data internationalization through
connection mappings section.
If you are changing from one DI model to another, you must reload the
project after completing the steps in the Project Design Guide. Settings from
the old DI model are preserved, in case you need to change back.
If the project designer has not already done so, you must define attribute
forms in the project so that they can be displayed in multiple languages.
Detailed information and steps to define attribute forms to support multiple
languages are in the Project Design Guide, Supporting data
internationalization for attribute elements section.
You can also add a custom language to the list of languages available to be
enabled for data internationalization. For steps to add a custom language to
the project, see Adding or removing a language in the system, page 446.
After translated data has been stored, you must configure the project to
establish which languages will be supported for data internationalization
(DI). You must perform this procedure whether you store translated data
using a SQL-based DI model or a connection-based DI model.
5 Select the DI model that you are using. For details, see Storing translated
data: data internationalization models, page 418.
6 Click Add. The Available Languages dialog box opens.
7 Languages displayed in bold blue are those languages that have been
enabled for the project to support translated metadata objects, if any.
This list is displayed as a starting point for the set of languages you can
choose to enable for supporting data internationalization.
8 Select the check box next to any language or languages that you want to
enable for this project.
If no languages are selected to be enabled to support data
internationalization, then data internationalization is treated by
the system as disabled.
The languages you selected are displayed in the Language: Data dialog
box.
10 In the Default column, select one language to be the default language for
data internationalization in the project. This selection does not have any
impact on the project or how languages are supported for data
internationalization. Unlike the MDI default language, this DI default
language can be changed at any time.
If no default DI language is selected, data internationalization is
treated by the system as disabled.
11 For each language you have enabled, define the column/table naming
pattern or the connection-mapped warehouse, depending on which DI
model you are using (for information on DI models and on naming
patterns, see Storing translated data: data internationalization models,
page 418):
Some languages may have the same suffix; for example, English
US and English UK. You can also specify a NULL suffix.
13 Disconnect and reconnect to the project source so that your changes take
effect. To do this, right-click the project source, select Disconnect from
Project Source, then repeat this and select Connect to Project Source.
You can use the steps below to disable a language for a project. When a
language has been disabled in a project, that language is no longer available
for users to select as a language preference, and the language cannot be seen
in any translation-related interfaces such as an object’s Properties dialog
box. Any translations for the disabled language are not removed from the
data warehouse with these steps.
If a user has selected the language as a language preference, the
preference will no longer be in effect once the language is disabled.
The project's default language will take effect.
If you disable all languages for data internationalization (DI), the system
treats DI as disabled. Likewise, if you do not have a default language set for
DI, the system treats DI as disabled.
4 On the right side, under Selected Languages, clear the check box for the
language that you want to disable for the project.
• Language disabling will only affect MDX cubes and regular reports
and documents if an attribute form description in the disabled
language exists in the cube or report. If this is true, the cube, report, or
document cannot be published or used. The cube, report, or document
designer must remove attribute forms in the disabled language before
the cube/report/document can be used again.
7 Disconnect and reconnect to the project source so that your changes take
effect. To do this, right-click the project source, select Disconnect from
Project Source, then repeat this and select Connect to Project Source.
These language preferences are for metadata languages only. All data
internationalization languages fall back to the project’s default
language if a DI preference is not enabled or translation of a specific
report cell is not available.
The following sections show you how to select language preferences based on
various priority levels within the system, starting with a section that explains
the priority levels:
• Report data: Determine the language that will be displayed for report
results that come from your data warehouse, such as attribute element
names. For steps to set this preference, see Configuring metadata object
and report data language preferences, page 430.
This language can also be used to determine the language used for the
metadata objects and report data, if the Desktop level language
preference is set to Use the same language as MicroStrategy
Desktop. For more information on the Desktop level language
preference, see Selecting the Desktop level language preference,
page 438.
4 From the Interface Language drop-down list, select the language that
you want to use as the interface default language.
6 Disconnect and reconnect to the project source so that your changes take
effect. To do this, right-click the project source, select Disconnect from
Project Source, then repeat this and select Connect to Project Source.
There are several levels at which metadata and report data languages can be
specified in MicroStrategy. Lower level languages are used by the system
automatically if a higher level language is unavailable. This ensures that end
users see an appropriate language in all situations.
Language preferences can be set at six different levels, from highest priority
to lowest. The language that is set at the highest level is the language that is
always displayed, if it is available. If that language does not exist or is not
available in the metadata or the data warehouse, the next highest level
language preference is used.
The following list describes each level, from highest priority to lowest
priority, and points to information on how to set the language preference at
each level.

• User-Project level (highest priority): The language preference for a user
for a specific project.
End users: In Web, click the Preferences link at the top of any page. In
Desktop, from the Tools menu, select My Preferences.
Administrators: Set in the User Language Preferences Manager. See Selecting
the User-Project level language preference, page 433.

• User-All Projects level: The language preference for a user for all
projects.
End users: In Web, click the Preferences link at the top of any page. In
Desktop, from the Tools menu, select My Preferences.
Administrators: Set in the User Editor. See Selecting the User-All Projects
level language preference, page 435.

• Project-All Users level: The language preference for all users in a
specific project.
End users: Not applicable.
Administrators: In the Project Configuration Editor, expand Languages, then
select User Preferences. See Selecting the All Users In Project level
language preference, page 436.

• Desktop level: The interface language preference for all users of Desktop
on that machine, for all projects.
End users and administrators: Set in the Desktop Preferences dialog box. For
steps, see Selecting the Desktop level language preference, page 438.

• Machine level: The language preference for all users on a given machine.
End users and administrators: Set on the user’s machine and within the
user’s browser settings. For steps, see Selecting the Machine level language
preference, page 440.

• Project Default level (lowest priority): The project default language set
for MDI; the language preference for all users connected to the metadata.
End users: Not applicable.
Administrators: Set in the Project Configuration Editor. For steps, see
Configuring the Project Default level language preference, page 440.
For example, a user has her User-Project Level preference for Project A set to
English. Her User-All Projects Level preference is set to French. If the user
logs in to Project A and runs a report, the language displayed will be English.
If the user logs in to Project B, which does not have a User-Project Level
preference specified, and runs a report, the project will be displayed in
French. This is because there is no User-Project Level preference for Project
B, so the system automatically uses the next, lower language preference level
(User-All Projects) to determine the language to display.
If an object has an empty translation in a user’s chosen project
language preference, the system defaults to displaying the object’s
default language, so it is not necessary to add translations for objects
that are not intended to be translated.
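This fallback can be sketched as a simple first-match search over the six levels. The following shell function is an illustration only, not MicroStrategy code; the variable names are hypothetical stand-ins for the preference levels described above.

#!/bin/sh
# Minimal sketch of language fallback: return the highest-priority
# preference that is actually set (non-empty).
resolve_language() {
    for PREF in "$USER_PROJECT" "$USER_ALL_PROJECTS" "$PROJECT_ALL_USERS" \
                "$DESKTOP" "$MACHINE" "$PROJECT_DEFAULT"; do
        if [ -n "$PREF" ]; then
            echo "$PREF"
            return 0
        fi
    done
}

# The Project B case above: no User-Project preference is set,
# so the User-All Projects preference (French) is used.
USER_PROJECT="" USER_ALL_PROJECTS="French" PROJECT_DEFAULT="English"
resolve_language   # prints: French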
2 Right-click the project that you want to set the language preference for
and select Project Configuration. The Project Configuration Editor
opens.
6 Select the users from the list on the left side of the User Language
Preferences Manager that you want to change the User-Project level
language preference for, and click > to add them to the list on the right.
You can narrow the list of users displayed on the left by doing one of the
following:
• To search for users in a specific user group, select the group from the
drop-down menu that is under the Choose a project to define user
language preferences drop-down menu.
• To search for users containing a certain text string, type the text string
in the Find field, and click the following icon:
This returns a list of users matching the text string you typed.
Previous strings you have typed into the Find field can be accessed
again by expanding the Find drop-down menu.
7 On the right side, select the user(s) that you want to change the
User-Project level preferred language for, and do the following:
8 Click OK. The preferences are saved and the User Language Preferences
Manager closes.
Once the user language preferences have been saved, users can no
longer be removed from the Selected list.
10 Disconnect and reconnect to the project source so that your changes take
effect. To do this, right-click the project source, select Disconnect from
Project Source, then repeat this and select Connect to Project Source.
2 In the Folder List on the left, expand Administration and navigate to the
user that you want to set the language preference for.
4 On the left side of the User Editor, expand the International category and
select Language.
7 Disconnect and reconnect to the project source so that your changes take
effect. To do this, right-click the project source, select Disconnect from
Project Source, then repeat this and select Connect to Project Source.
The All Users In Project level language preference determines the language
that will be displayed for all users that connect to a project, unless a higher
priority language is specified for the user. Use the steps below to set this
preference.
If the User-Project or User-All Projects language preferences are
specified for the user, the user will see the All Users In Project
language only if the other two language preferences are not available.
To see the hierarchy of language preference priorities, see
Configuring metadata object and report data language preferences,
page 430.
2 In the Folder List on the left, select the project. From the Administration
menu, select Projects, then Project Configuration. The Project
Configuration Editor opens.
• From the Data language preference for all users in this project
drop-down menu, select the language that you want to be displayed
for report results in this project.
6 Disconnect and reconnect to the project source so that your changes take
effect. To do this, right-click the project source, select Disconnect from
Project Source, then repeat this and select Connect to Project Source.
The Desktop level language preference determines the default language for
all objects displayed within MicroStrategy Desktop, unless a higher priority
language preference has been specified. This is the same as the interface
preference.
If the User-Project, User-All Projects, or All Users In Project language
preferences are specified, the user will see the Desktop language only
if the other three language preferences are not available. To see the
hierarchy of language preference priorities, see Configuring metadata
object and report data language preferences, page 430.
4 Select one of the following from the Language for metadata and
warehouse data if user and project level preferences are set to
default drop-down menu.
• If you want the Desktop language preference to be the same as the
Interface language preference, select Use the same language as
MicroStrategy Desktop. For information about configuring the
Interface language preference, see Selecting the Interface language
preference, page 429.
5 Select the language that you want to use as the default Desktop interface
language from the Interface Language drop-down menu.
7 Disconnect and reconnect to the project source so that your changes take
effect. To do this, right-click the project source, select Disconnect from
Project Source, then repeat this and select Connect to Project Source.
This preference determines the language that is used on all objects on the
local machine. MicroStrategy Web uses the language that is specified in the
user’s web browser if a language is not specified at a level higher than this
one.
If the User-Project, User-All Projects, All Users In Project, or Desktop
language preferences are specified, the user will see the Machine
language only if the other four language preferences are not available.
To see the hierarchy of language preference priorities, see
Configuring metadata object and report data language preferences,
page 430.
This language preference has the lowest priority in determining the language
displayed within Desktop. It specifies the default language for the project.
Use the steps below to set this preference.
If the User-Project, User-All Projects, All Users In Project, Desktop,
and Machine language preferences are specified, the user will see the
Project Default language only if the other five language preferences
are not available. To see the hierarchy of language preference
priorities, see Configuring metadata object and report data language
preferences, page 430.
2 Select the project that you want to set the default preferred language for.
• To specify the default data language for the project, select Data from
the Language category. Then select Default for the desired language.
6 Disconnect and reconnect to the project source so that your changes take
effect. To do this, right-click the project source, select Disconnect from
Project Source, then repeat this and select Connect to Project Source.
Each MicroStrategy object can have its own default language. The translation
for the object default language is used when the system cannot find or access
a translation for the object in the language specified as the user or project
preference.
This preference is useful especially for personal objects, since most personal
objects are only used in one language, the owner’s language. The object
default language can be set to any language supported by the project in which
the object resides.
If the object default language preference is not set, the system uses the
project default language.
When duplicating a project, objects in the source that are set to take the
project default language will take whatever the destination project’s default
language is.
1 Log in to the project source that contains the object as a user with
administrative privileges.
2 Right-click the object and select Properties. The Properties dialog box
opens.
4 From the Select the default language for the object drop-down menu,
select the default language for the object.
The following list summarizes where internationalization-related settings
are configured:

• Number format (decimal, thousands separator, currency symbol, weight): In
Desktop, use the Regional settings on the Desktop user’s machine. In Web,
click the Preferences link at the top of any page; click Help for steps and
details on options.

• Currency conversion: Use a Value prompt on a metric. See the Advanced
Prompts chapter of the Advanced Reporting Guide.

• Date format and separators: In Desktop, use the Regional settings on the
Desktop user’s machine. In Web, click the Preferences link at the top of any
page, select Languages, then select Show Advanced Options. Note: In Web, if
the browser is set to a language unsupported in MicroStrategy and the user’s
preferences are set to Default, the date/time and number formatting display
in English.

• Autostyle fonts that support a given language: In Desktop, right-click and
format the attribute or metric (column header, value, or subtotal) using the
font you prefer (on the Font tab, specify the font). From the Grid menu,
select Save Autostyle As and either overwrite the existing autostyle or
create a new one.

• Fonts that support all languages: Few fonts support all languages. One
that does is Arial Unicode MS, which is licensed from Microsoft.

• Character sets in Teradata databases: The Character Column Option and
National Character Column Option VLDB properties let you support the
character sets used in Teradata.

• Double-byte language support: In Desktop, from the Tools menu, select
Desktop Preferences.

• User changing own language: In Web, click the Preferences link at the top
of any page. In Desktop, from the Tools menu, select Desktop Preferences.
The list of languages to choose from comes from the languages enabled for a
project; see Enabling metadata languages for a project, page 408.

• Default language for all users in a project: Right-click a project, select
Project Configuration, expand Language, and select User Preferences.

• Different default language for a single user in different projects:
Right-click a project, select Project Configuration, expand Language, and
select User Preferences.

• Function names: Function names are not translated. The MicroStrategy
system expects function names to be in English.

• An individual object: Use the Object Translation Editor. To access this,
right-click the object, select Properties, then select International.

• Repository Translation Wizard list of available languages: Enable the
languages the project supports for metadata objects (see Enabling metadata
languages for a project, page 408).

• Metadata object names and descriptions (such as report names, metric
names, system folder names, and embedded descriptors such as attribute
aliases, prompt instructions, and so on): For a project being created,
select these in Architect or in the New Project dialog box in Project
Builder. For an existing project, see Enabling metadata languages for a
project, page 408.

• Attribute elements (for example, the Product attribute has an element
called DVD player): First translate the element name in your data warehouse.
Then enable the language; see Enabling languages for data
internationalization, page 423.

• Project name and description: In the Project Configuration Editor, expand
Project Definition, select General, click Modify, select
Internationalization, then click Translate. You can type both a project name
and a description in the Object Description field.

• Seeing which columns in the Warehouse Tables area of Architect support
data internationalization: In Architect, from the Options menu, select
Settings. On the Display Settings tab, select Display columns used for data
internationalization.

• Enable a new language for a project: See Enabling metadata languages for a
project, page 408. The user adding the language must have Browse permission
for that language object’s ACL.

• Enable a custom language for a project: See Adding a new language to the
system, page 446, then see Enabling metadata languages for a project,
page 408. The user adding the language must have Browse permission for that
language object’s ACL.

• Searching the project: Searches are conducted in the user’s preferred
metadata language by default. A language-specific search can be conducted:
open a project, then from the Tools menu select Search for Objects. On the
International tab, click Help for details on each option in the interface.

• Project or object merge, or duplication: See Merging projects to
synchronize objects, page 336; Copying objects between projects, page 304;
and Duplicating a project, page 294.

• Derived elements: In the Derived Element Editor, from the File menu,
select Properties, then select International. For details on using the
options, click Help.

• MicroStrategy Office user interface and Excel format languages: In
MicroStrategy Office, select Options, select General, then select
International.
• List resolved languages, which are the languages that are displayed to
users from among the list of possible preferences.
For these and all the other scripts you can use in Command Manager, open
Command Manager and click Help.
Specifically, the database that stores the metadata must be set with a code
page that supports the languages that are intended to be used in your
MicroStrategy project.
You can add new languages to MicroStrategy. Once they are added, new
languages are then available to be enabled for a project to support
internationalization.
Custom languages can also be added. For example, you can create a new
language called Accounting, based on the English language, for all users in
your Accounting department. The language contains its own work-specific
terminology.
Prerequisites
• You must have the Browse permission for the language object’s ACL
(access control list). For details on ACLs, see the System Administration
Guide.
5 Click New. The Languages Editor opens. Click Help for details on each
option.
If a user has selected the language as a language preference, the
preference will no longer be in effect once the language is disabled.
The next lower priority language preference will take effect. To see the
language preference priority hierarchy, see Configuring metadata
object and report data language preferences, page 430.
2 For metadata languages, any translations for the disabled language are
not removed from the metadata with these steps. To remove translations:
• For individual objects: Objects that contain translations in the
disabled language must be modified individually and saved.
• For the entire metadata: Duplicate the project after the language has
been removed, and do not include the translated strings in the
duplicated project.
1 In Desktop, from the Folder List on the left, expand Administration, then
expand Configuration Managers.
2 Select Languages. The language objects are displayed on the right side of
Desktop.
Introduction
MicroStrategy provides several system monitors to help you keep track of the
state of the system. These monitors include:
• Project, which helps you keep track of the status of all the projects
contained in the selected project source. For detailed information, see
Managing project status, configuration, or security: Project view,
page 452.
• Cluster, which helps you manage how projects are distributed across the
servers in a cluster. For detailed information, see Managing clustered
Intelligence Servers: Cluster view, page 454.
• The Scheduled Maintenance monitor, which lists all the scheduled
maintenance tasks. For detailed information, see Scheduling jobs and
administrative tasks, page 353.
2 Expand the System Maintenance group, and then select Project. The
projects and their statuses display on the right-hand side.
The Project view lists all the projects in the project source. If your system is
set up as a cluster of servers, the Project Monitor displays all projects in the
cluster, including the projects that are not running on the node from which
you are accessing the Project Monitor. For details on projects in a clustered
environment, see Distributing projects across nodes in a cluster, page 502.
To view the status of a project, select the List or Details view, and click the +
sign next to the project’s name. A list of all the servers in the cluster expands
below the project’s name. The status of the project on each server is shown
next to the server’s name. If your system is not clustered, there is only one
server in this list.
From the Project view, you can access a number of administrative and
maintenance functions. You can:
To load a project on a specific server in a cluster, you use the
Cluster Monitor. For details on this procedure, see Managing
clustered Intelligence Servers: Cluster view, page 454.
These tasks are all available by right-clicking a project in the Project Monitor.
For more detailed information about any of these options, see the online help
or related sections in this guide.
You can also schedule any of these maintenance functions from the Schedule
Administration Tasks dialog box. To access this dialog box, right-click a
project in the Project view and select Schedule Administration Tasks. For
more information, including detailed instructions on scheduling a task, see
Scheduling jobs and administrative tasks, page 353.
2 Expand the System Maintenance group, and then select Cluster. The
nodes in the cluster and their statuses display on the right-hand side.
3 To see a list of all the projects on a node, click the + sign next to that node.
The status of the project on the selected server is shown next to the
project’s name.
From the Cluster view, you can access a number of administrative and
maintenance functions. You can:
These tasks are all available by right-clicking a server in the Cluster view.
You can also load or unload projects from a specific machine, or idle or
resume projects on a specific machine for maintenance (for details, see
Changing the status of a project, page 459) by right-clicking a specific
project on a server. For more detailed information about any of these
options, see the online help, or see Managing your clustered projects,
page 504.
For example scenarios where the different project idle modes can help to
support project and data warehouse maintenance tasks, see Project and data
warehouse maintenance example scenarios, page 461.
Loaded
Unloaded
A project unload request is fully processed only when all currently
executing jobs for the project are complete.
Request Idle
Request Idle mode helps to achieve a graceful shutdown of the project rather
than modifying a project from Loaded mode directly to Full Idle mode. In
this mode, Intelligence Server:
• Stops accepting new user requests from the clients for the project.
• Completes jobs that are already being processed. If a user requested that
results be sent to their History List, then the results are available in the
user’s History List after the project is resumed.
Setting a project to Request Idle can be helpful to manage server load for a
project across the nodes of a cluster. For example, in a cluster with two
nodes named Node1 and Node2, the administrator wants to redirect load
temporarily to the project on Node2. The administrator must first set the
project on Node1 to Request Idle. This allows existing requests to finish
execution for the project on Node1, and then all new load is handled by the
project on Node2.
Execution Idle
• Stops executing all new and currently executing jobs and, in most cases,
places them in the job queue. This includes jobs that require SQL to be
submitted to the data warehouse, as well as jobs that are executed within
Intelligence Server such as answering prompts.
If a project is idled while Intelligence Server is in the process of
fetching query results from the data warehouse for a job, that job is
cancelled instead of being placed in the job queue. When the
project is resumed, an error message is placed in the user’s History
List. The user can click the message to resubmit the job request.
• Allows users to continue to request jobs, but execution is not allowed and
the jobs are placed in the job queue. Jobs in the job queue are displayed
as “Waiting for project” in the Job Monitor. When the project is resumed,
Intelligence Server resumes executing the jobs in the job queue.
This mode allows you to perform maintenance tasks for the project. For
example, you can still view the different project administration monitors,
create reports, create attributes, and so on. However, tasks such as element
browsing, exporting, and running reports that are not cached are not
allowed.
Warehouse Execution Idle

In this mode, Intelligence Server:

• Accepts new user requests from clients for the project, but does not
submit any SQL to the data warehouse.
• Stops any new or currently executing jobs that require SQL to be executed
against the data warehouse and, in most cases, places them in the job
queue. These jobs display as “Waiting for project” in the Job Monitor.
When the project is resumed, Intelligence Server resumes executing the
jobs in the job queue.
If a project is idled while Intelligence Server is in the process of
fetching query results from the data warehouse for a job, that job is
cancelled instead of being placed in the job queue. When the project is
resumed, an error message is placed in the user’s History List. The user
can click the message to resubmit the job request.
• Completes any jobs that do not require SQL to be executed against the
data warehouse.
This mode allows you to perform maintenance tasks on the data warehouse
while users continue to access non-database dependent functionality. For
example, users can run cached reports, but they cannot drill if that drilling
requires additional SQL to be submitted to the data warehouse. Users can
also export reports and documents in the project.
Full Idle
Full Idle is a combination of Request Idle and Execution Idle. In this mode,
Intelligence Server does not accept any new user requests and currently
active requests are canceled. When the project is resumed, Intelligence
Server does not resubmit the canceled jobs and it places an error message in
the user’s History List. The user can click the message to resubmit the
request.
This mode allows you to stop all Intelligence Server and data warehouse
processing for a project. However, the project still remains in Intelligence
Server memory.
Partial Idle
This mode allows you to stop all Intelligence Server and data warehouse
processing for a project, while not cancelling jobs that do not require any
warehouse processing. The project still remains in Intelligence Server
memory.
If the project is running on multiple clustered Intelligence Servers, the
project is loaded or unloaded from all nodes. To load or unload the
project from specific nodes, use the Cluster view instead of the
Project view. For detailed instructions, see Using the Cluster view,
page 454.
If the project is running on multiple clustered Intelligence Servers, the
project status changes for all nodes. To idle or resume the project on
specific nodes, use the Cluster view instead of the Project view. For
detailed instructions, see Using the Cluster view, page 454.
4 Select the options for the idle mode that you want to set the project to:
• Request Idle (Request Idle): all currently executing and queued jobs
finish executing, and any newly submitted jobs are rejected.
• Execution Idle (Execution Idle for All Jobs): all currently executing,
queued, and newly submitted jobs are placed in the queue, to be
executed when the project resumes.
• Partial Idle (Request Idle and Execution Idle for Warehouse jobs):
all currently executing and queued jobs that submit SQL
against the data warehouse are cancelled, and any newly submitted
jobs are rejected. Any currently executing and queued jobs that do not
require SQL to be executed against the data warehouse are executed.
To resume the project from a previously idled state, clear the
Request Idle and Execution Idle check boxes.
5 Click OK. The Idle/Resume dialog box closes and the project goes into
the selected mode. If you are using clustered Intelligence Servers, the
project mode is changed for all nodes in the cluster.
• Two projects, named Project1 and Project2, use the same data
warehouse. Project1 needs dedicated access to the data warehouse for a
specific length of time. The administrator first sets Project2 to Request
Idle. After existing activity against the data warehouse is complete,
Project2 is restricted against executing on the data warehouse. Then, the
administrator sets Project2 to Warehouse Execution Idle mode to allow
data warehouse-independent activity to execute. Project1 now has
dedicated access to the data warehouse until Project2 is reset to Loaded.
The Job Monitor does not display detailed sub-steps that a job is currently
performing. However, it does display information to inform you of what is
happening with system tasks. You can see jobs that are:
• Executing
• Cancelling
Because the Job Monitor does not refresh itself, you must periodically
refresh it to see the latest status of jobs. To do this, press F5.
1 In Desktop, log in to a project source. You must log in as a user with the
Monitor Jobs privilege.
3 To view a job’s details including its SQL, double-click it. A Quick View
dialog box opens.
4 To view more details for all jobs displayed, right-click in the Job Monitor
and select Complete Information. Additional columns displayed are
Project name, Priority, Creation time, and Network address.
To cancel a job
2 Press DELETE, and then confirm whether you wish to cancel the job.
1 In Desktop, log in to a project source. You must log in as a user with the
Monitor User Connections privilege.
2 Open the User Connection Monitor. The user connections are displayed on
the right-hand side. For each user, there is one connection for each project
the user is logged in to, plus one connection for <Server> indicating that
the user is logged in to the project source.
– Temp client: At times, you may see “Temp client” in the Network
Address column. This may happen when Intelligence Server is
under a heavy load and a user accesses the Projects or Home page
in MicroStrategy Web (the pages that display the list of available
projects). Intelligence Server creates a temporary session that
submits a job request for the available projects and then sends the
list to the Web client for display. This temporary session, which
remains open until the request is fulfilled, is displayed as “Temp
client.”
To disconnect a user
If you disconnect users from the project source (the <Configuration>
entry in the User Connection Monitor), they are also disconnected
from any projects they were connected to.
For each connection, the monitor displays information such as the user who
is using the connection and the database login being used to connect to the
database.
1 In Desktop, log in to a project source. You must log in as a user with the
Monitor Database Connections privilege.
A cache’s hit count is the number of times the cache is used. When a
report is executed (which creates a job) and the results of that report
are retrieved from a cache instead of from the data warehouse,
Intelligence Server increments the cache’s hit count. This can happen
when a user runs a report or when the report is run on a schedule for
the user. This does not include the case of a user retrieving a report
from the History List (which does not create a job). Even if that report
is cached, retrieving it from the History List does not increase its hit
count.
1 In Desktop, log in to a project source. You must log in as a user with the
Monitor Caches privilege.
3 Select the project for which you want to view the caches and click OK. The
Report Cache Monitor or Document Cache Monitor opens.
5 To view additional details about all caches, from the View menu select
Details.
6 To change the columns shown in the Details view, right-click in the Cache
Monitor and select View Options. The Cache Monitor View Options
dialog box opens. Select the columns you want to see and click OK.
You can perform any of the following options after you select one or more
caches and right-click:
• Load from disk: Loads into memory a cache that was previously
unloaded to disk
• Unload to disk: Removes the cache from memory and stores it on disk
For detailed information about these actions, see Managing result caches,
page 217.
If you are running Intelligence Server on HP-UX v2, you may notice a
slow response time when using the Cache Monitor. For information
about this delay, including steps you can take to improve
performance, see Cache Monitor and Intelligent Cube Monitor
performance, page 633.
Cache statuses
A result cache’s status is displayed in the Report Cache Monitor using one or
more of the following letters:
• I (Invalid): The cache has been invalidated, either manually or by a
change to one of the objects used in the cache. It is no longer used, and
will be deleted by Intelligence Server. For information about invalid
caches, see Invalidating result caches, page 218.

• E (Expired): The cache has been invalidated because its lifetime has
elapsed. For information about expired caches, see Expiring result caches,
page 221.

• D (Dirty): The cache has been updated in Intelligence Server memory since
the last time it was saved to disk.

• F (Filed): The cache has been unloaded, and exists as a file on disk
instead of in Intelligence Server memory. For information about loading and
unloading caches, see Unloading and loading result caches to disk, page 218.
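A cache can carry several of these letters at once, for example “ID” for a cache that is both invalid and dirty. The following shell sketch simply decodes such a status string one letter at a time; it is illustrative only and not part of any MicroStrategy tool.

#!/bin/sh
# Decode a Cache Monitor status string such as "ID".
STATUS="ID"
while [ -n "$STATUS" ]; do
    case "$STATUS" in
        I*) echo "Invalid" ;;
        E*) echo "Expired" ;;
        D*) echo "Dirty"   ;;
        F*) echo "Filed"   ;;
    esac
    STATUS=${STATUS#?}   # drop the first letter
done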
Cache types
• Matching-History: The cache is valid and available for use, and is also
referenced in at least one History List message.

• XML (Web only): The cache exists as an XML file and is referenced by the
matching cache. When the corresponding Matching cache is deleted, the XML
cache is deleted.
For more information about each type of cache, see Types of result caches,
page 208.
Once a database-based History List Repository has been configured, you can
filter and purge any of the History List messages for any user in a project
source using the History List Messages System Monitor.
You need to have the Administer History List Monitor and the Monitor
History List privileges to be able to access the History List Messages System
Monitor.
When you have opened the History List Message System Monitor, you can do
the following:
• Sort the History List messages by column by clicking on the column
headers.
• Filter or purge the messages displayed based on criteria that you define
by right-clicking a message and selecting Filter or Purge.
• Specify what details you want to display for each message by
right-clicking the History List Messages System Monitor and selecting
View Options.
For more information about using the History List Message System Monitor,
refer to the MicroStrategy Desktop Online Help.
Once an Intelligent Cube has been published, you can manage it from the
Cube Monitor. Here you can view information about your Intelligent Cubes
and perform tasks such as updating, activating, and deactivating Intelligent
Cubes, as well as saving them to disk. For detailed instructions on how to use
the Cube Monitor, see Managing Intelligent Cubes: Intelligent Cube
Monitor, page 268.
1 In Desktop, log in to a project source. You must log in as a user with the
Monitor Cubes privilege.
The logged information includes items such as the user who made the
change, the date and time of the change, and the type of change (such as
saving, copying, or deleting an object). With change journaling, you can
keep track of all object changes, including simple user actions such as
saving or moving objects.
You can enable change journaling for some or all projects in a project source.
If change journaling is enabled for a project source, changes to the
configuration objects in that project source are logged. If change journaling
is enabled for a project, changes to all objects in that project are logged.
You can also enable change journaling at the project source level. In this case
information about changes to the project configuration objects, such as users
or schedules, is recorded in the change journal.
1 In Desktop, log in to a project source. You must log in as a user with the
Configure Change Journaling privilege.
5 In the Comments field, enter any comments you may have about the
reason for enabling or disabling change journaling.
3 To enable or disable change journaling for this project, select or clear the
Enable change journaling check box.
When change journaling is enabled, users are prompted for comments every
time they change an object. These comments can provide documentation as
to the nature of the changes made to objects.
You can disable the requests for object comments from the Desktop
Preferences dialog box.
4 Clear the Display change journal comments input dialog check box.
5 Click OK. The Desktop Preferences dialog box closes. You are no longer
prompted to enter a comment when you save objects.
You must have the Audit Change Journal privilege to view the change
journal.
Each entry in the change journal contains the following details:

• Object type: The type of object changed. For example, Metric, User, or
Server Definition.

• User name: The name of the MicroStrategy user that made the change.

• Transaction timestamp: The date and time of the change, based on the time
on the Intelligence Server machine.

• Transaction type: The type of change and the target of the change. For
example, Delete Objects, Save Objects, or Enable Logging.

• Transaction source: The application that made the change. For example,
Desktop, Command Manager, MicroStrategy Web, or Scheduler.

• Project name: The name of the project that contains the object that was
changed. Note: If the object is a configuration object, the project name is
listed as <Configuration>.

• Comments: Any comments entered in the Comments dialog box at the time of
the change.

• Machine name: The name of the machine that the object was changed on.

• Change type: The type of change that was made. For example, Create,
Change, or Delete.

• Session ID: A unique 32-digit hexadecimal number that identifies the user
session in which the change was made.
This information can also be viewed in the columns of the change journal. To
change the visible columns, right-click anywhere in the change journal and
select View Options. In the View Options dialog box, select the columns you
want to see.
For example:
• To find out when certain users were given certain permissions, you can
view only entries related to Users.
You can also quickly filter the entries so that you see only the entries for a
specific object, or only the changes made by a specific user. To do this,
right-click one of the entries for that object or that user and select either
Filter view by object or Filter view by user. To remove the filter, right-click
in the change journal and select Clear filter view.
4 To see only changes made in a specific time range, enter the start and end
time and date.
5 To view all transactions, not just those that change the version of an
object, clear the Show version changes only and Hide Empty
Transactions check boxes.
If the Show version changes only check box is cleared, two
transactions named “LinkItem” are listed for every time an
application object is saved. These transactions are monitored for
MicroStrategy technical support use only, and do not indicate that
the application object has been changed. Any time the object has
actually been changed, a SaveObjects transaction with the name of
the application object is listed.
6 Click OK to close the dialog box and filter the change journal.
• To see only the changes to this object, select Filter view by object.
• To see only the changes made by this user, select Filter view by user.
2 To remove a quick filter, right-click in the change journal and select Clear
filter view.
When you export the change journal, any filters that you have used to
view the results of the change journal are also applied to the export. If
you want to export the entire change journal, make sure that no filters
are currently in use. To do this, right-click in the change journal and
select Clear filter view.
2 Right-click Change Audit and select Export list. The change journal is
exported to a text file.
When you purge the change journal, you specify a date and time. All entries
in the change journal that were recorded prior to that date and time are
deleted. You can purge the change journal for an individual project, or for all
projects in a project source.
3 Set the date and time. All data recorded prior to this date and time will be
deleted from the change journal.
4 To purge data for all projects, select the Apply to all projects check box.
To purge only data relating to the project source configuration, leave this
check box cleared.
5 Click Purge Now. When the warning dialog box opens, click Yes to purge
the data, or No to cancel the purge. If you click Yes, change journal
information recorded prior to the specified date is deleted.
If you are logging transactions for this project source, a Purge Log
transaction is logged when you purge the change journal.
3 Set the date and time. All change journal data for this project from before
this date and time will be deleted from the change journal.
4 Click Purge Now. When the warning dialog box opens, click Yes to purge
the data, or No to cancel the purge. If you click Yes, change journal
information for this project from before the specified date and time is
deleted.
1 In the Windows Performance Monitor, on the toolbar, click the View Log
Data icon. The System Monitor Properties dialog box opens.
5 Select the desired counters from the list below and click Add.
6 Click Close, then click OK. The dialog boxes close and the desired
counters are now displayed in the Performance Monitor.
11. CLUSTERING MULTIPLE MICROSTRATEGY SERVERS
Introduction
What is clustering?
A cluster is a group of two or more servers connected to each other in such a
way that they behave like a single server. Each machine in the cluster is
called a node. Because each machine in the cluster runs the same services as
other machines in the cluster, any machine can stand in for any other
machine in the cluster. This becomes important when one machine goes
down or must be taken out of service for a period of time. The remaining
machines in the cluster can seamlessly take over the work of the downed
machine, providing users with uninterrupted access to services and data.
• You can cluster Intelligence Servers using the built-in Clustering feature.
A Clustering license allows you to cluster up to four Intelligence Server
machines. For instructions on how to cluster Intelligence Servers, see
Clustering Intelligence Servers, page 494.
Benefits of clustering
Clustering Intelligence Servers provides the following benefits:
Failover support
Load balancing
User requests are routed to the node to which the user is connected until
the user disconnects from the MicroStrategy Web product.
When you set up several server machines in a cluster, you can distribute
projects across those clustered machines or nodes in any configuration, in
both Windows and UNIX/Linux environments. Each node in the cluster can
host a different set of projects, which means only a subset of projects need to
be loaded on a specific Intelligence Server machine. This feature not only
provides you with flexibility in using your resources, but can also provide
better scalability and performance due to less overhead, since all servers in a
cluster do not need to be running all projects.
Distributing projects across nodes also provides project failover support. For
example, one server is hosting project A and another server is hosting
projects B and C. If the first server fails, the other server can host all three
projects to ensure project availability.
The node of the cluster that performs all job executions is the node that the
client application, such as Desktop, connects to. This is also the node that
can be monitored by an administrator using the monitoring tools.
3 Each user session is established on the node that the user logged in to.
All report requests are then processed by the nodes to which the users are
connected.
4 The Intelligence Server nodes receive the requests and process them. In
addition, the nodes communicate with each other to maintain metadata
synchronization and cache accessibility across nodes.
• History Lists: Each user’s History List, which is held in memory by each
node in the cluster, contains direct references to the relevant cache files.
Accessing a report through the History List bypasses many of the report
execution steps, for greater efficiency. For an introduction to History
Lists, see Saving report results: History List, page 233.
• Result caches and Intelligent Cubes (for details, see Sharing result caches
and Intelligent Cubes in a cluster, page 487)
To view clustered cache information, such as cache hit counts, use the Cache
Monitor.
Result cache settings are configured per project, and different projects may
use different methods of result cache storage. Different projects may also use
different locations for their cache repositories. However, History List
settings are configured per project source. Therefore, different projects
cannot use different locations for their History List backups.
For result caches and History Lists, you must configure either multiple local
caches or a centralized cache for your cluster. The following sections describe
the caches that are affected by clustering, and present the procedures to
configure caches across cluster nodes.
Synchronizing metadata
In addition to server object caches, client object caches are also
invalidated when a change occurs. When a user requests a changed
object, the invalid client cache is not used and the request is processed
against the server object cache. If the server object cache has not been
refreshed with the changed object, the request is executed against the
metadata.
In a clustered environment, each node within a cluster must share its result
caches and Intelligent Cubes with the other nodes, so all clustered machines
have the latest cache information. For example, for a given project, result
caches on each node that has loaded the project are shared among other
nodes in the cluster that have also loaded the project. Caches can be
configured to be shared in one of the following ways:
• Local caching: Each node hosts its own cache file directory and
Intelligent Cube directory. These directories need to be shared so that
other nodes can access them. For more information, see Local caching,
page 489.
If you are using local caching, the cache directory must be shared
as “ClusterCaches” and the Intelligent Cube directory must be
shared as “ClusterCube”. These are the share names Intelligence
Server looks for on other nodes to retrieve caches and Intelligent
Cubes.
• Centralized caching: All nodes have the cache file directory and
Intelligent Cube directory set to the same network locations,
\\<machine name>\<shared cache folder name> and
\\<machine name>\<shared Intelligent Cube folder
name>. For more information, see Centralized caching, page 489.
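Whichever configuration you choose, every node must be able to reach the cache and Intelligent Cube locations. On UNIX/Linux, a quick reachability check along the following lines can be run on each node; the machine name below is a placeholder, and the share names follow the conventions used in this chapter.

#!/bin/sh
# Verify that this node can reach the shared cache and cube directories.
# Replace "fileserver" with the machine hosting the shared folders.
for DIR in /fileserver/ClusterCaches /fileserver/ClusterCube; do
    if [ -d "$DIR" ]; then
        echo "reachable: $DIR"
    else
        echo "NOT reachable: $DIR"
    fi
done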
The following table summarizes the pros and cons of the result cache
configurations:
Local caching

Pros:
• Allows faster read and write operations for cache files created by the
local server.
• Faster backup of the cache lookup table.
• Allows most caches to remain accessible even if one node in a cluster
goes offline.

Cons:
• The local cache files may be temporarily unavailable if an Intelligence
Server is taken off the network or powered down.
• A document cache on one node may depend on a dataset that is cached on
another node, creating a multi-node cluster dependency.

Centralized caching

Pros:
• Allows for an easier backup process.
• Allows all cache files to be accessible even if one node in a cluster
goes offline.
• May better suit some security plans, because nodes using the network
account are accessing only one machine for files.

Cons:
• All cache operations are required to go over the network if the shared
location is not located on one of the Intelligence Server machines.
• Requires additional hardware if the shared location is not located on an
Intelligence Server.
• All caches become inaccessible if the machine hosting the centralized
caches goes offline.
For steps to configure cache files with either method, see Configuring caches
in a cluster, page 494.
Local caching
In this cache configuration, each node maintains its own local Intelligent
Cubes and local cache file, and thus maintains its own cache index file. Each
node’s caches are accessible by other nodes in the cluster through the cache
index file. This is illustrated in the diagram below.
For example, User A, who is connected to node 1, executes a report and thus
creates report cache A on node 1. User B, who is connected to node 2,
executes the report. Node 2 checks its own cache index file first. When it does
not locate report cache A in its own cache index file, it checks the index file of
other nodes in the cluster. Locating report cache A on node 1, it uses that
cache to service the request, rather than executing the report against the
warehouse.
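This lookup order can be sketched as follows. The node names and cache file name below are hypothetical; the /<machine_name>/ClusterCaches paths follow the share-name convention described earlier in this section.

#!/bin/sh
# Minimal sketch of the local-caching lookup: check each node's shared
# cache directory in turn, starting with this node's own directory.
CACHE_FILE="report_A.cache"
for NODE in node1 node2; do
    if [ -e "/${NODE}/ClusterCaches/${CACHE_FILE}" ]; then
        echo "cache hit on ${NODE}"
        exit 0
    fi
done
echo "cache miss: the report executes against the warehouse"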
Centralized caching
In this cache configuration, all nodes in the cluster use one shared
centralized location for Intelligent Cubes and one shared centralized cache
file location. These can be stored on one of the Intelligence Server machines
or on a separate machine dedicated to serving the caches. The Intelligent
Cubes, History List messages, and result caches for all the Intelligence Server
machines in the cluster are written to the same location. With this option,
only one cache index file is maintained.
For example, User A, who is connected to node 1, executes report A and thus
creates report cache A, which is stored in a centralized file folder. User B,
who is connected to node 2, executes report A. Node 2 checks the centralized
cache index file for report cache A. Locating report cache A in the centralized
file folder, it uses that cache to service the request, regardless of the fact that
node 1 originally created the cache.
A History List is a set of pointers to cache files. Each user has his or her own
History List, and each node in a cluster stores the pointers created for each
user who is connected to that node. Each node’s History List is synchronized
with the rest of the cluster. Even if report caching is disabled, History List
functionality is not affected.
The Intelligence Server inbox stores the collection of History List messages
for all users, which appear in the History folder in Desktop. Inbox
synchronization refers to the process of synchronizing inboxes across all
nodes in the cluster, so that all nodes contain the same History List
messages.
Inbox synchronization enables users to view the same set of personal History
List messages, regardless of the cluster node to which they are connected.
MicroStrategy prerequisites
• You must have purchased an Intelligence Server license that allows
clustering. To determine the license information, use the License
Manager tool and verify that the Clustering feature is available for
MicroStrategy Intelligence Server. For more information on using
License Manager, see Chapter 4, Managing Your Licenses.
• The user account under which the Intelligence Server service is running
must have full control of cache and History List folders on all nodes.
Otherwise, Intelligence Server will not be able to create and access cache
and History List files.
• Server definitions store Intelligence Server configuration information.
MicroStrategy strongly recommends that all servers in the cluster use the
same server definition. This ensures that all nodes have the same
governing settings.
Server definitions can be modified from Desktop through the Intelligence
Server Configuration Editor and the Project Configuration Editor. For
further instructions, see the Desktop Help. (From within Desktop, press
F1.)
• MicroStrategy Desktop must be installed on a Windows machine to
administer the cluster. This version of Desktop must be the same as the
version of the Intelligence Servers. For example, if the Intelligence
Servers are version 9.0.1, Desktop must also be version 9.0.1.
• You must have access to the Cluster view of the System Administration
monitor in Desktop. Therefore, you must have the Administration
privilege to create a cluster. For details about the Cluster view of the
System Administration monitor, see Managing your clustered projects,
page 504.
• The computers that will be clustered must have the same intra-cluster
communication settings. To configure these settings, on each Intelligence
Server machine, in Desktop, right-click on the project source and select
Configure MicroStrategy Intelligence Server. The Intelligence Server
Configuration Editor opens. Under the Server definition category, select
General. For further instructions, see the Desktop Help.
• The same caching method (localized or centralized caching) should be
used for both result caches and file-based History Lists. For information
about localized and centralized caching, see Sharing result caches and
Intelligent Cubes in a cluster, page 487.
Server prerequisites
• The machines to be clustered must be running the same version of the
same operating system. For example, you cannot cluster two machines
when one is running on Windows 2008 and one is running on Windows
2003.
• The required data source names (DSNs) must be created and configured
for Intelligence Server on each machine. MicroStrategy strongly
recommends that you configure both servers to use the same metadata
database, warehouse, port number, and server definition.
• The service user’s Regional Options settings must be the same as the
clustered system’s Regional Options settings.
ln -s OLDNAME NEWNAME
where:
OLDNAME is the target of the link, usually a path name, and NEWNAME is the
name of the symbolic link.
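For example, to make a local directory name point at a directory on the shared device (the /opt/mstr path below is a placeholder; /sandbox/Caches matches the example used later in this section):

ln -s /sandbox/Caches /opt/mstr/ClusterCaches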
• Confirm that each server machine works properly, and then shut each
down.
3 Join nodes.
You create a cluster by joining Intelligence Servers that have been
synchronized. For instructions, see Joining the nodes in a cluster,
page 500.
• Centralized caching: All nodes have the cache file directory and
Intelligent Cube directory set to the same network locations. For more
information, see Centralized caching, page 489.
For steps to configure caches in either way, follow the instructions below
depending on your operating system:
Use one of the procedures below to share cache files among the nodes in your
cluster. MicroStrategy strongly recommends that each node in your cluster
use the same server definition. In this case, you only need to configure the
cache location in Intelligence Server one time. However, you must create the
shared folders on each node separately. For a detailed explanation of the two
methods of cache sharing, see Sharing result caches and Intelligent Cubes in
a cluster, page 487.
4 Click OK.
6 Right-click the cache file folder, and select Sharing. The [Server
Definition] Properties dialog box opens.
7 On the Sharing tab, select the Shared as option. In the Shared Name
box, delete the existing text and type ClusterCaches
8 Click OK. After you have completed these steps, you can cluster the nodes
using the Cluster Monitor.
or
4 Click OK.
5 On the machine that is storing the centralized cache, create the file folder
that will be used as the shared folder. The file folder name must be
identical to the name you earlier specified in the Cache file directory box
(shown as Shared Folder Name above).
If you are using a file-based History List, you can set up History Lists to use
multiple local disk backups on each node in the cluster, using a procedure
similar to the procedure above, To configure cache sharing using multiple
local cache files, page 495. The History List messages are stored in the
History folder. (To locate this folder, in the Intelligence Server Configuration
Editor, select Governing, then select History settings.)
\\<machine_name>\ClusterCaches
\\<machine_name>\ClusterInbox
You can choose to use either procedure below, depending on whether you
want to use centralized or local caching. For a detailed description and
diagrams of cache synchronization setup, see Synchronizing cached
information across nodes in a cluster, page 486.
10 Disconnect from the project source and shut down Intelligence Server.
11 Create the folders for caches on the shared device (as described in
Prerequisites for UNIX/Linux clustering above):
mkdir /sandbox/Caches
mkdir /sandbox/Inbox
10 Disconnect from the project source and shut down Intelligence Server.
mkdir $MSTR_HOME_PATH/ClusterCaches
mkdir $MSTR_HOME_PATH/ClusterInbox
1 In Desktop, log in to a project source. You must log in as a user with the
Administer Cluster privilege.
4 Type the name of the machine running Intelligence Server to which you
wish to add this node, or click ... to browse for and select it.
5 Once you have specified or selected the server to join, click OK.
1 Connect to one Intelligence Server in the cluster and ensure that the
Cluster view in Desktop (under Administration, under System
Administration) is showing all the proper nodes as members of the
cluster.
3 Use the Cache Manager and view the report details to make sure the
cache is created.
4 Connect to a different node and run the same report. Verify that the
report used the cache created by the first node.
7 Without logging out that user, log on to a different node with the same
user name.
8 Verify that the History List contains the report added in the first node.
If MicroStrategy Web does not recognize all nodes in the cluster, it is
possible that the machine itself cannot resolve the name of that
node. The MicroStrategy cluster implementation uses the names of the
machines for internal communication. Therefore, the Web
machine must be able to resolve names to IP addresses. You can
edit the lmhosts file to relate IP addresses to machine names.
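For example, an entry in the lmhosts file mapping a node's IP address to
its machine name might look like the following (the address and machine
name are hypothetical):
10.0.0.42    MSTRNODE2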
You can also perform the same cache and History List tests described above
in To verify from Desktop.
To distribute projects across the cluster, you manually assign the projects to
specific nodes within the cluster. Once a project has been assigned to a node,
it is available for use.
If you do not assign a project to a node, the project remains unloaded
and users cannot use it. You must then manually load the project for it
to be available. To manually load a project, right-click the project in
the Project Monitor and select Load.
If you are using single instance session logging in Enterprise Manager
with clustered Intelligence Servers, the single instance session logging
project must be loaded onto all the clustered Intelligence Servers.
Failure to load this project on all servers at startup will result in a loss
of session statistics for any Intelligence Server onto which the project
is not loaded at startup. For more information, see Tech Note
TN6400-80X-0422 in the MicroStrategy Knowledge Base. For
detailed information about session logging in Enterprise Manager, see
the MicroStrategy System Administration Guide, Volume 2.
2 One column is displayed for each node in the cluster that is detected at
the time the Intelligence Server Configuration Editor opens. Select the
corresponding check box to configure the system to load a given project
on a given node. A selected box at the intersection of a project row and a
node column signifies that the project is to be loaded at startup on that
node.
• If no check boxes are selected for a project, the project is not loaded
on any node at startup. Likewise, if no check boxes are selected for a
node, no projects are loaded on that node at startup.
If you are using single instance session logging with Enterprise
Manager, the single instance session logging project must be
loaded onto all the clustered Intelligence Servers at startup.
Failure to load this project on all servers at startup will result in a
loss of session statistics for any Intelligence Server onto which the
project is not loaded at startup. For more information about single
instance session logging, see the chapter Analyzing System Usage
with Enterprise Manager, in the System Administration Guide
Volume 2. For more information about this issue, see Tech Note
TN6400-80X-0422 in the MicroStrategy Knowledge Base.
• All Servers: If this check box is selected for a project, all nodes in the
cluster load this project at startup. All individual node check boxes are
also selected automatically. When you add a new node to the cluster,
any projects set to load on All Servers automatically load on the new
node.
If you place an individual checkmark for a given project to be loaded
on every node but you do not select the All Servers check box, the
system loads the project on the selected nodes. When a new node is
added to the cluster, however, the project is not automatically loaded
on that node.
3 Select whether to display only the selected projects, and whether to apply
the startup configuration on save:
4 Click OK when you are finished configuring your projects across the
nodes in the cluster.
If you do not see the projects you want to load displayed in the Intelligence
Server Configuration Editor, you must configure Intelligence Server to use a
server definition that points to the metadata containing the project. Use the
MicroStrategy Configuration Wizard to accomplish this. See the
MicroStrategy Installation and Configuration Guide for details.
It is possible that not all projects in the metadata are registered and
listed in the server definition when the Intelligence Server
Configuration Editor opens. This can occur if a project is created or
duplicated in a two-tier (direct connection) project source that points
to the same metadata as that being used by Intelligence Server while it
is running. Creating, duplicating, or deleting a project in two-tier
while a server is started against the same metadata is not
recommended.
For detailed information about the effects of the various idle states on a
project, see Setting the status of a project, page 455.
1 In Desktop, log in to a project source. You must log in as a user with the
Administer Cluster privilege.
3 To see a list of all the projects on a node, click the + sign next to that node.
The status of the project on the selected server is shown next to the
project’s name.
1 In the Cluster view, right-click the project whose status you want to
change, point to Administer project on node, and select Idle/Resume.
The Idle/Resume dialog box opens.
2 Select the options for the idle mode that you want to set the project to:
• Request Idle (Request Idle): all currently executing and queued jobs
finish executing, and any newly submitted jobs are rejected.
• Execution Idle (Execution Idle for All Jobs): all currently executing,
queued, and newly submitted jobs are placed in the queue, to be
executed when the project resumes.
• Full Idle (Request Idle and Execution Idle for All jobs): all currently
executing and queued jobs are cancelled, and any newly submitted
jobs are rejected.
• Partial Idle (Request Idle and Execution Idle for Warehouse jobs):
all currently executing and queued jobs that do not submit SQL
against the data warehouse are cancelled, and any newly submitted
jobs are rejected. Any currently executing and queued jobs that do not
require SQL to be executed against the data warehouse are executed.
To resume the project from a previously idled state, clear the
Request Idle and Execution Idle check boxes.
3 Click OK. The Idle/Resume dialog box closes and the project goes into
the selected mode.
In the Cluster view, right-click the project whose status you want to
change, point to Administer project on node, and select Load or
Unload. The project is loaded or unloaded from that node.
Failover and latency only take effect when a server fails. If a server is
manually shut down, its projects are not automatically transferred to another
server, and are not automatically transferred back to that server when it
restarts.
You can determine several settings that control the time delay, or latency
period, in the following instances:
• After a machine fails, but before its projects are loaded onto a different
machine
• After the failed machine is recovered, but before its original projects are
reloaded
3 Enter the project failover latency and configuration recovery latency, and
click OK.
When deciding on these latency period settings, consider how long it takes
an average project in your environment to load on a machine. If your projects
are particularly large, they may take some time to load, which presents a
strain on your system resources. With this consideration in mind, use the
following information to decide on a latency period.
You can control the time delay (latency) before the project on a failed
machine is loaded on another node to maintain a minimum level of
availability.
• Setting a higher latency period prevents projects on the failed server from
being loaded onto other servers quickly. This can be a good idea if your
projects are large and you trust that your failed server will recover
quickly. A high latency period provides the failed server more time to
come back online before its projects need to be loaded on another server.
• Setting a lower latency period causes projects from the failed machine to
be loaded relatively quickly onto another server. This can be a good idea if
it is crucial that your projects are available to users at all times.
If you enter -1, the failover process is disabled and projects are not
transferred to another node if there is a machine failure.
When the conditions that caused the project failover disappear, the system
automatically reverts back to the original project distribution configuration
by removing the project from the surrogate server and loading the project
back onto the recovered server (the project’s original server).
• The Cache Monitor’s hit count number on a given machine only reflects
the number of cache hits the given machine initiated on any cache within
the cluster. If a different machine within the cluster hits a cache on the
local machine, that hit will not be counted on the local machine’s hit
count. For more information about the Cache Monitor, see Monitoring
report and document caches, page 465.
For example, ServerA and ServerB are clustered together, and the cluster
is configured to use local caching (see Local caching, page 489). A report
is executed on ServerA, creating a cache there. When the report is
executed on ServerB, it hits the report cache on ServerA. The cache
monitor on ServerA does not record this cache hit, because ServerA’s
cache monitor only displays activity initiated by ServerA.
Resource availability
Desktop
MicroStrategy Web
If a cluster node shuts down while MicroStrategy Web users are
connected, their jobs return an error message by default. The error message
offers the option to resubmit the job, in which case MicroStrategy Web
automatically reconnects the user to another node.
Customizations to MicroStrategy Web can alter this default behavior
in several ways.
If a node goes down for any reason, all jobs on that node are terminated.
When the node is restarted, its job queue is empty.
You can define the nodes that should automatically rejoin the cluster
on restart from the Intelligence Server Configuration Editor. For steps
to perform this configuration, see the Desktop Help.
Nodes that are still in the cluster but not available are listed in the Cluster
Monitor with a status of Stopped.
If the machine selected is part of a cluster, the entire cluster appears on the
Administration page and is labeled as a single cluster. Click Help on
MicroStrategy Web’s Administration page for steps to connect to an
Intelligence Server.
If nodes are manually removed from the cluster, projects are treated as
separate in MicroStrategy Web, and the node connected to will depend on
which project is selected. However, all projects are still accessing the same
metadata.
Node failure
MicroStrategy Web or Web Universal users can be automatically connected
to another node when a node fails. To implement automatic load
redistribution for these users, on the Web Administrator page, under Web
Server select Security, and in the Login area select Allow Automatic Login
if Session is Lost.
TUNING YOUR SYSTEM FOR BEST PERFORMANCE
Introduction
Tuning overview
To get the best performance out of your MicroStrategy system, you must be
familiar with the characteristics of your system and how it performs under
different conditions. You must also know about the settings that you can
change (which this guide explains). In addition to this, you must have a plan
for tuning the system. For example, you should have a base record of certain
performance measures (such as Enterprise Manager reports or Performance
Monitor logs) and key configuration settings before you begin experimenting
with those settings. Make one change at a time and test the system
performance. Compare the new performance to the base and see if it
improved. If it did not improve, change the setting back to its previous value.
Changing multiple settings at a time may cause undesired effects. Also, if
performance does improve, you will not know which of the changes is
responsible.
The size of the machines you use for MicroStrategy Intelligence Server, how
you tune them, and how you make practical use of them depend on the
number of users, the number of concurrent users, their usage patterns, and
so on. MicroStrategy provides up-to-date recommendations for these areas
in the Knowledge Base.
These topics are introduced below and lay the foundation for the remainder
of this Tuning section.
These scenarios share common requirements that can help you define your
own expectations for the system.
First, you may require a certain number of users be actively logged in and
require the system to handle a certain number of concurrent users. Second,
you may require a certain level of performance, such as report results
returning to the users within a certain time, or when they manipulate a
report the change happens quickly, or that a certain number of reports can be
run within an hour or within a day. Third, you probably require that certain
functionality be available in the system, such as allowing report flexibility so
users can run ad hoc, predefined, prompted, page-by, or OLAP Services
reports. Another functionality requirement may be that users can use certain
features, such as scheduling (or subscribing to) a report, or sending a report
from the Web interface to someone else via e-mail, or that your users will be
able to use the rich MicroStrategy Desktop interface.
The diagram below illustrates these factors that influence the system's
capacity.
(Figure: Intelligence Server capacity is influenced by requests, report
design, and the configuration of Intelligence Server and projects.)
UNIX and Linux systems allow processes and applications to run in a virtual
environment. MicroStrategy Intelligence Server Universal installs on UNIX
and Linux systems with the required environment variables set to ensure
that the server’s jobs are processed correctly. However, you can tune these
system settings to fit your system requirements and improve performance.
For more information, see the Planning Your Installation chapter of the
MicroStrategy Installation and Configuration Guide.
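For example, one common UNIX/Linux tunable is the per-process limit on
open file descriptors, which you might check and raise in the shell that
launches Intelligence Server (a sketch; the value 4096 is an arbitrary
assumption, not a MicroStrategy recommendation):
ulimit -n
ulimit -n 4096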
Many of the options in the following sections are specified in the Intelligence
Server Configuration Editor or the Project Configuration Editor. See below
for instructions on how to access these editors.
• How the number of active users and user sessions on your system use
system resources just by logging in to the system (see Governing active
users, page 521)
• How the user’s profile can determine what he or she is able to do when
they are logged in to the system and how you can govern it (see
Governing user profiles, page 523)
• How memory and CPU are used by active users when they execute jobs,
run reports, and make requests and how you can govern them (see
Governing user resources, page 525)
You can track the users who are currently connected to the system with the
User Connection Monitor. For details about this system monitor, see
Monitoring users’ connections to projects, page 463.
The more user sessions that are allowed on Intelligence Server, the more
load those users can potentially put on the system because each session can
run multiple jobs. You should make sure that users log out of the system (or
are logged out by the system) if they are not using it.
To help control the potential load that user sessions can put on the system,
you can limit the number of user sessions allowed for each project and for
Intelligence Server. Also, both Web and Desktop users have session timeouts
so that when users forget to log out, the system logs them out and their
sessions do not unnecessarily use up Intelligence Server resources.
Consider a case in which a user logs in, runs a report, then leaves for lunch
without logging out of the system. If Intelligence Server is serving the
maximum number of user sessions and another user attempts to log in to the
system, the user is not allowed in. You can set a time limit for the total
duration of a user session and you can limit how long a session remains open
if it is inactive or not being used. In this case, if you set the inactive time limit
to 15 minutes, the person who left for lunch has his session ended by
Intelligence Server. After that, other users can log in.
Intelligence Server does not end a user session until all of the jobs
submitted by that user have completed (or timed out). This includes
reports waiting for autoprompt answers.
These user session limits are discussed below as they relate to specific
software features and products.
This setting limits the number of user sessions that can be connected to an
Intelligence Server. This includes connections made from MicroStrategy
Web products, Desktop, Scheduler, Narrowcast Server, or other applications
you may have created with the SDK. A single user account may establish
multiple sessions on an Intelligence Server. Each session connects once to
Intelligence Server and once for each project the user accesses. In the User
Connection Monitor, the connections made to Intelligence Server display as
<Server> in the Project column. Project sessions are governed separately
with a project level setting (called User sessions per project, discussed
below). When the maximum number of user sessions on Intelligence Server
is reached, users cannot log in, except for the administrator who can
disconnect current users (using the User Connection Monitor) or increase
this governing setting.
You can limit the number of sessions that are allowed for a project. When the
maximum number of user sessions for a project is reached, users cannot log
in, except for the administrator who can disconnect current users (using the
User Connection Monitor) or increase this governing setting.
To specify this setting, in the Project Configuration Editor for the project,
select the Governing: User sessions category and type the number in the
User sessions per project field.
You can also limit the number of concurrent sessions per user. This can be
useful if a single user account, such as “Guest,” is used for multiple
connections. To specify this setting, in the Project Configuration Editor for
the project, select the Governing: User sessions category and type the
number in the Concurrent interactive project sessions per user field.
You can specify the maximum amount of time a session can remain idle
before Intelligence Server disconnects that session. This frees up the system
resources that the idle session was using and allows other users to log in to
the system if the maximum number of user sessions had been reached.
This setting is the same as the User session idle time limit described above,
except that it is for users of MicroStrategy Web products. If the idle time
limit is reached, Intelligence Server disconnects the user session.
Report Services documents in Web: Set the Web user session idle
time (sec) to 3600 to avoid a project source timeout, if designers will
be building documents and dashboards in Web. Then restart
Intelligence Server.
If you have purchased OLAP Services licenses for your users, they have the
potential for using much of the system’s resources. For more details about
this, see OLAP Services reports, page 579. Simply enabling these features
does not by itself increase the load on system resources; the effect depends
on whether users use the features at all, and how heavily they use them.
For example, if your users are creating large OLAP Services reports
(Intelligent Cube reports) and doing many manipulations on them, the
system will be loaded much more than if they are running occasional, small
reports and not performing many manipulations.
Schedule-related privileges
• If you allow users to schedule reports to be run (create subscriptions),
they can potentially use much of the system resources. Most reports
should be scheduled for when the system is not under a heavy workload.
This is controlled by the Web scheduled reports and Schedule request
privileges.
Allowing users to add reports to the History List and to use the History List
can consume extra resources. These are discussed more fully in the next
section (see Governing user resources, page 525).
The more manipulations you allow users to do, the greater the potential for
using more system resources. Manipulations that can use extra system
resources are:
Exporting privileges
History List
The History List is an in-memory message list that references reports a user
has executed or scheduled. The results are stored as History or
Matching-History caches on Intelligence Server.
If you allow users to use the History List (sometimes called the Inbox), they
can potentially use much of the system resources. Answering the following
questions can give you an idea about how its use is affecting your system:
• Do your users put a lot of their reports in the History List? For example,
do they send every executed report to the History List?
• Do the users clean out read messages from the History List when they log
out?
• How often do you delete the History List messages using the Schedule
Administration Tasks feature?
• Have you limited the Maximum number of messages per user and the
Message lifetime (days)? To set this, use the Intelligence Server
Configuration Editor, Governing: History settings category.
For details on History List settings, see Saving report results: History List,
page 233.
Working set
When a user runs a report from MicroStrategy Web or Web Universal, the
results from the report are added to what is called the working set for that
user’s session and stored in memory on Intelligence Server. The working set
is a collection of messages that reference in-memory report instances. A
message is added to the working set when a user executes a report or
retrieves a message from his or her Inbox. The purpose of the working set is
to:
Each message in the working set may store two versions of the report
instance in memory: the original version and the result version. The
original version of the report instance is created the first time the report is
executed and is held in memory the entire time a message is part of the
working set. The result version of the report instance is added to the working
set only after the user manipulates the report. Each report manipulation
adds what is called a delta XML to the report message. On each successive
manipulation, a new delta XML is applied to the result version. When the
user clicks the browser’s Back button, previous delta XMLs are applied to the
original report instance up to the point (or state) the user is requesting. For
example, if a user has made four manipulations, the report has four delta
XMLs; when the user clicks the Back button, the three previous XMLs are
applied to the original version.
For a MicroStrategy Web or Web Universal user, a specified number of
reports is kept available for manipulation. The default is 3 and the
minimum is 1.
You can control the amount of the memory used by the History List and
Working set in these ways:
• Limit the number of reports that a user can keep available for
manipulation within a MicroStrategy Web product. This number is
defined in the MicroStrategy Web products’ interface in Project
defaults: History List settings. You must select the Manually option for
adding messages to the History List, then specify the number in the field
labeled If manually, how many of the most recently run reports and
documents do you want to keep available for manipulation? The
default is 3 and the minimum is 1. The higher the number, the more
memory the reports may consume. For details, see the MicroStrategy
Web Help.
• Limit the Maximum amount of RAM that all users can use for the
working set. When the limit is reached and new report instances are
created, the least recently used report instance is swapped to disk. To set
this, in the Intelligence Server Configuration Editor, under the
Governing: Working Set category and type the limit in the Maximum
RAM for Working Set cache (MB) box.
If you set this limit to more memory than the operating system can
make available, Intelligence Server uses a value of 100 MB. The
maximum value for this setting is 65,536 megabytes (64 gigabytes)
on most operating systems. It is 2048 megabytes (2 gigabytes)
under Windows 2003.
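As a rough sizing sketch (the numbers are illustrative assumptions, not
MicroStrategy recommendations): if 200 concurrent Web users each keep 3
messages in the working set and an average report instance occupies 2 MB,
the working set could require on the order of 200 x 3 x 2 MB = 1,200 MB of
RAM before report instances begin swapping to disk.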
Additionally, if a session has an open job, the session is not closed (and its
report instance is not removed from the Working set) until the job has
finished or timed out. This functionality is designed to support the execution
of jobs even after the user has logged out. It may, however, add to the
problem of excessive memory usage on Intelligence Server because the
session’s working set is held in memory until the session is closed.
Governing requests
Each user session can execute multiple concurrent jobs or requests. This
happens when users run documents that submit multiple child reports at a
time or when they send a report to the History List, then execute another
while the first one is still executing. Users can also log in to the system
multiple times and run reports simultaneously. Again, this has the potential
for using up much of the system resources.
To control the number of jobs that can be running, you can set limits per user
and per project.
• The amount of time reports can execute (Limiting the maximum report
execution time, page 529)
• The number of executing reports or data marts per user account (not
counting element requests, metadata requests, and report
manipulations) (Limiting the number of executing jobs per user and
project, page 531)
• The number of jobs per user account and per user session (Limiting the
number of jobs per user session and per user account, page 531)
• The number of jobs per project (Limiting the number of jobs per project,
page 532)
• The total number of jobs (Limiting the total number of jobs, page 533)
• A report's SQL (per pass) including both its size and the time it executes
(Limiting a report's SQL per pass, page 533)
To set this limit, use the Project Configuration Editor, select the Governing:
Result Sets category, and specify the number of seconds in the Intelligence
Server Elapsed Time (sec) fields. You can set different limits for interactive
(ad-hoc) reports and scheduled reports.
This limit applies to most operations that are entailed in a job from the time
it is submitted to the time the results are returned to the user. If the job
exceeds the limit, the user sees an error message and cannot view the report.
The figure below illustrates how job tasks make up the entire report
execution time. In this instance, the time limit includes the time waiting for
the user to complete report prompts. Each step is explained in the table
below.
2* Waiting (in queue): the element request is waiting in the job queue for
execution.
*Steps 2 and 3 are for an element request. They are executed as separate
jobs. During steps 2 and 3, the original report job has the status “Waiting for
Autoprompt.”
The following less time-consuming tasks are not shown in the example
above, but they also count toward the report execution time:
For more information about the job processing steps, see Processing jobs,
page 27.
This limit is called Executing jobs per user. If the limit is reached for the
project, new report requests are placed in the Intelligence Server queue until
other jobs finish. They are then processed in the order in which they were
placed in the queue, which is controlled by the priority map (see Prioritizing
jobs, page 540).
To specify this limit setting, in the Project Configuration Editor for the
project, select the Governing: Jobs category, and type the number of
concurrent report jobs per user you want to allow in the Executing jobs per
user field.
Limiting the number of jobs per user session and per user
account
If your users’ job requests place a heavy burden on the system, you can limit
the number of open jobs within Intelligence Server, including element
requests, autoprompts, and reports for a user.
• To help control the number of jobs that can run in a project and thus
reduce their impact on system resources, you can limit the number of
concurrent jobs a user can execute in a user session. For example, if the
Jobs per user session limit is set to four and a user has one session
open for the project, that user can only execute four jobs at a time.
However, the user can bypass this limit by logging in to the project
multiple times. (To prevent this, see the next setting, Jobs per user
account limit.)
To specify this setting, use the Project Configuration Editor for the
project, select the Governing: Jobs category and type the number in the
Jobs per user session box.
• You can set a limit on the number of concurrent jobs that a user can
execute for each project regardless of the number of user sessions that
user has at the time. For example, if the user has two user sessions and
the Jobs per user session limit is set to four, the user can potentially run
eight jobs. But if this Jobs per user account limit is set to five, that user
can only execute five jobs, regardless of the number of times the user logs
in to the system. Therefore, this limit can prevent users from
circumventing the Jobs per user session limit by logging in multiple
times.
To specify this setting, in the Project Configuration Editor for the project,
select the Governing: Jobs category, and type the number of jobs per
user account you want to allow in the Jobs per user account box.
These two limits count the following types of job requests that are executing
or waiting to execute:
• Report
• Element
• Autoprompt
Jobs that have finished, cached jobs, or jobs that returned in error are not
counted toward these limits. If either limit is reached, any jobs the user
submits do not execute and the user sees an error message.
To specify this job limit setting, in the Project Configuration Editor for the
project, select the Governing: Jobs category, and specify the number of
concurrent jobs for the project that you want to allow in each Jobs per
project field. You can specify a different job limit for interactive (ad-hoc) and
scheduled jobs.
A document with a prompt and three reports as datasets creates four
jobs. Make sure that this setting is high enough to allow all the jobs in
any of your documents to execute.
You should set this limit relatively high. Realize that multiple jobs may be
submitted when you execute documents and reports. For example, four jobs
could run if you execute a document that has a prompt and three reports
embedded in it.
To set this limit, in the Intelligence Server Configuration Editor, select the
Governing: General category, and specify the value in the Maximum
number of jobs box.
You can also specify a maximum number of interactive jobs (jobs executed by
a direct user request) and scheduled jobs (jobs executed by a scheduled
request).
Set any of these values to -1 to impose no limit on the number of jobs.
You can limit both how long each pass of a report's SQL can execute and how
large the SQL statement can be. These limits are set in the VLDB properties,
as described below.
You can also limit the amount of memory that Intelligence Server uses
during report SQL generation. This limit is set for all reports generated on
the server. To set this limit, in the Intelligence Server Configuration Editor,
open the Governing: Result Sets category, and specify the Memory
consumption during SQL generation. A value of -1 indicates no limit.
• SQL Time Out (Per Pass) (database instance and report)
You can limit the amount of time each pass of SQL can take within the
data warehouse. If the time for a SQL pass reaches the maximum,
Intelligence Server cancels the job and the user sees an error message.
You can specify this setting at either the database instance level or at the
report level.
To specify this setting, edit the VLDB properties for the database instance
or a report, expand Governing settings, then select the SQL Time Out
(Per Pass) option. (See the online help for details.)
• Maximum SQL Size (database instance)
You can limit the size (in bytes) of the SQL statement per pass before it is
submitted to the data warehouse. If the size for a SQL pass reaches the
maximum, Intelligence Server cancels the job and the user sees an error
message. You can specify this setting at the database instance level.
To specify this, edit the VLDB properties for the database instance,
expand Governing settings, then select the Maximum SQL Size option.
(See the online help for details.)
The main factor that determines job execution performance is the number of
connection threads that are made to the data warehouse. The optimum
number allows the users’ requests to return without waiting too long and
does not overload the system resources. This section discusses how you can
manage job execution. This includes
• Managing database connection threads, page 535
This number falls in the range depicted as the optimal use of resources in the
illustration below.
(Figure: resource usage plotted against the number of database connection
threads, with the optimal range marked.)
With most tuning decisions, it is not possible for MicroStrategy to give you
an exact recommendation. Because each system configuration is unique and
has unique user demands, you must determine the best number of database
connection threads. The overall goal is to prioritize jobs and provide enough
threads so that jobs that must be processed immediately are processed
immediately, and the remaining jobs are processed as promptly as possible.
Once you have the number of threads calculated, you can then set priorities
and control how many threads are dedicated to serving jobs meeting certain
criteria.
You are not required to set medium and high connections; however, you
should have at least one low connection, because low priority is the default
job priority.
If you set all connections to zero, jobs are not submitted to the data
warehouse. This may be a useful way for you to test whether
scheduled reports are processed by Intelligence Server properly. Jobs
wait in the queue and are not submitted to the data warehouse until
you increase the connection number, at which point they are then
submitted to the data warehouse. Once the testing is over, you can
delete the jobs so they are never submitted to the data warehouse.
Database connection threads can be held by jobs that are running too long.
To optimize how those threads are used, you can limit the length of time
they can be used by certain jobs. These limits are described below.
When a user runs a report that executes for a long time on the data
warehouse, the user may cancel the job execution. This may be due to an
error in the report’s design, especially if it is in a project in a development
environment, or the user may simply not want to wait any longer. Once
the cancel request is made, if the cancel is not successful after a short
time (30 seconds) a timer starts counting. You can set a limit on how long
Intelligence Server should count (in addition to the 30 seconds) while the
cancel occurs. If the cancel is not successful before the limit, Intelligence
Server deletes that database connection thread.
To set this limit, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on
the Database Connections dialog box Advanced tab, specify the
Maximum cancel attempt time (sec).
• Maximum query execution time
This is the maximum amount of time a single pass of SQL can execute on
the data warehouse. When the SQL statement or fetch operation begins, a
timer starts counting. If the Maximum query execution time limit is
reached, Intelligence Server cancels the operation.
This setting is very similar to the SQL time out (per pass) VLDB setting
that was discussed earlier; however, that VLDB setting overrides this
setting. This setting is made on the database connection and may be used
to govern the maximum across all projects using that connection. The
VLDB setting may be used to override this setting for a specific report.
Values of 0 and -1 indicate no limit.
To set this limit, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on
the Database Connections dialog box Advanced tab, specify the
Maximum query execution time (sec).
• Maximum connection attempt time
To set this limit, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on
the Database Connections dialog box Advanced tab, specify the
Maximum connection attempt time (sec).
A value of -1 indicates no limit. A value of 0 indicates that the
connection is not cached and is deleted immediately when
execution is complete.
To set this limit, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on
the Database Connections dialog box Advanced tab, specify the
Connection lifetime (sec).
To set this limit, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on
the Database Connections dialog box Advanced tab, specify the
Connection idle timeout (sec).
A value of -1 indicates no limit. A value of 0 indicates that the
connection is not cached and is deleted immediately when
execution is complete.
Prioritizing jobs
Job priority defines the order in which jobs are processed. Jobs are usually
executed on a first-come, first-served basis; however, you probably have
certain jobs that you want to be processed before other jobs.
For example, if a job with a medium priority is submitted and all medium
priority connections are busy processing jobs, Intelligence Server processes
the job on a low priority connection.
When a job is submitted and no connections are available to process it, either
with the same priority or with a lower priority, Intelligence Server places the
job in queue and then processes it when a connection becomes available.
As stated earlier, there are three possible priorities: high, medium, and low.
You decide which variables are used to determine a job’s priority. The
possible variables are:
• User Group: jobs submitted by users in the groups you select are
processed by the priority you specify (Prioritizing jobs by user group,
page 543)
• Cost: refers to the cost of processing a job (Prioritizing jobs by report
cost, page 543)
Job cost is an arbitrary value you can assign to a report's properties that
symbolizes the cost of processing that job (the higher the number, the
heavier the cost).
• Project: jobs submitted from projects you select are processed by the
priority you specify (Prioritizing jobs by project, page 544)
These variables allow you to create sophisticated rules for which job requests
are processed first. For example, you could specify that element requests
submitted by any user are high priority, that any report requests from Project
M are low priority and from Project Y are medium priority.
3 To add rules, click New. This starts the Job Prioritization Wizard.
For specific information about using the wizard, press F1 to view the online
help.
The variables you can set using the wizard are discussed below.
If you choose to use the Request type as a variable, you can select whether
Element requests or Report requests are processed before one or the other.
For example, you may want element requests to be submitted to the data
warehouse before report requests, because element requests are generally
used in prompts and you do not want users to have to wait long while prompt
values load. In this case you would run through the job priority wizard and
select the Request type check box. Later in the wizard, you can select
whether to use element or report requests (or both).
If you choose to use the Application type as a priority variable, you can select
whether MicroStrategy Desktop, MicroStrategy Web (or Web Universal), and
Scheduler are used to determine on what type of connection to the data
warehouse jobs are submitted. All jobs submitted from each of the
applications take the corresponding priority (depending on the priority
map).
For example, you may want all jobs that are submitted from MicroStrategy
Desktop to be processed on a high priority connection. You then select the
Application type check box. Then select Desktop and specify the High
priority.
If you choose to use a user group as a priority variable, you must select the
User groups check box. This allows you to select the groups for which you
want to specify the priority of jobs submitted. All groups in the system may
not necessarily be appropriate for defining priority. You should select only
those groups that need to be considered for establishing priority.
We can revisit the executive example mentioned earlier. If you want all jobs
from users in the Executive user group to be processed on a high priority
database connection, select the User group check box. Later you can select
the Executive user group and specify the High priority.
If a job is submitted by a user who belongs to more than one group,
the highest possible priority is used.
For example, users Joe and Mary are members of both the Developers and
Managers groups. In the priority map, you specify that all jobs submitted by
Developers are done on a high priority connection and all Manager jobs are
done on a medium priority connection. Any jobs Mary or Joe submit are
done on a high priority connection because the Developer jobs are specified
as high priority.
Report cost is an arbitrary value you can assign to a report to help determine
its priority in relation to other requests. If you choose to use report cost as a
priority variable, you must define a set of priority groups based on report
cost. For example:
• Report cost between 0 and 334 = Light
• Report cost between 335 and 667 = Medium
• Report cost between 668 and 999 = Heavy
With a set of report costs like those above, you could specify all Heavy
reports to have a low job priority, Medium reports to have a medium job
priority and Light reports to have a high job priority.
The set of cost groupings must cover all values from 0 to 999. Once you
determine the cost groupings, you can set the report cost value on individual
reports. For example, you notice that a particular report requires
significantly more processing time than most other reports. You can assign it
a report cost of 900 (heavy). In this sample configuration, the report has a
low priority.
You can assign the report cost priority variable individually to a report. The
following are the steps to assign priority to a report based on the report cost.
To set the priority of a report by report cost, you need to belong to the
System Administrator's group. You must have system administrator
privileges to view the Priority tab in the Properties dialog box for a
report.
You can select whether projects are a criterion for prioritizing jobs. For
example, you may want all jobs submitted from a project to have a low
priority because they are not critical.
If you choose to use Project as a priority variable, you must select the Project
check box. Then you can select the projects in your system for which you
want to specify the job priority. You should select only those projects that
need to be considered for establishing priority. This variable can be selected
in combination with other variables to create a set of criteria that must all be
met to have a certain job priority.
Results processing
When Intelligence Server processes results that were returned from the data
warehouse, several factors determine how much of the machine's resources
are used. These factors are discussed below.
You can also control whether threads within Intelligence Server are allocated
to processes such as object serving, element serving, SQL generation, and so
forth that need them most, while less loaded ones can return threads to the
available pool. To do this, open the Intelligence Server Configuration Editor
to the Server Definition: Advanced category, and select the Balance
MicroStrategy Server Threads check box.
Report size
A report instance is the version of the report results Intelligence Server holds
in memory for cache and working set results. The size of the report instance
is proportional to the size of the report results (row size * number of rows).
The row size depends on the data types of the attributes and metrics on the
report.
Dates are the largest data type. Text strings, such as descriptions and names,
are next in size (unless a description is unusually long, in which case it can
be larger than a date). Numbers, such as IDs, totals, and metric values, are
the smallest.
The table below shows examples of the relationship between cache size and
report size (in number of cells).
The easiest way to estimate the amount of memory the reports use is to view
the size of the cache files using the Cache Monitor in MicroStrategy Desktop.
The Cache Monitor shows the size of the report results in binary format,
which from testing has proven to be 30 – 50% of the actual size of the report
instance in memory.
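For example, under these assumptions (the figures are illustrative): a report
returning 50,000 rows at roughly 100 bytes per row yields a report instance
of about 5 MB in memory, and the Cache Monitor would show roughly 1.5 to
2.5 MB for that cache, since the binary format is 30 to 50% of the in-memory
size.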
To govern a report's (or request's) size, you can limit the report's number of
result rows, the number of element rows, and the number of intermediate
rows. These are discussed below.
You can set this limit in two places—for all reports in a project and for a
specific report in the VLDB properties. The VLDB properties limit for the
report overrides the project limit. For example, if you set the project limit
at 10,000 rows, but set the limit to 20,000 rows for a specific report that
usually returns more than 10,000 rows, users are able to see that report
without any errors.
To specify this setting for all reports in a project, use the Project
Configuration Editor for the project, select the Governing: Result Sets
category and type the number in the corresponding Final result rows
box. You can set different limits for Intelligent Cubes, data marts, and
standard reports.
1 Edit the report for which you wish to set the limit. (Right-click the report
and select Edit.) The Report Editor opens.
2 From the Data menu, select VLDB properties. The VLDB Properties
dialog box opens.
3 Expand the Governing settings, then select Results Set Row Limit.
4 Clear the Use default inherited value check box (if it is not already
cleared).
6 Click Save and Close to save the VLDB properties. You see the Report
Editor again.
7 Click Save and Close to save the report along with the changed VLDB
properties.
For more information about element requests, such as how they are
created, how incremental fetch works, and the caches that store the
results, see Element caches, page 249.
This limit can be set for all reports in a project and within the VLDB
properties of a specific report. If the limit is set for a report, it overrides
the limit set for the project.
This limit does not apply to the rows in intermediate or temporary tables
created in the data warehouse. Rather, it controls the number of rows
held in memory within the Analytical Engine processing unit of
Intelligence Server for analytic calculations that cannot be done on the
database. Lowering this setting reduces the amount of memory
consumed for large reports. If the limit is reached, the user sees an error
message and cannot view the report. This could happen, for example,
when you add a complex subtotal to a large report or when you pivot a
large report.
1 Edit the report for which you wish to set the limit. (Right-click the report
and select Edit.) The Report Editor opens.
2 From the Data menu, select VLDB properties. The VLDB Properties
dialog box opens.
4 Clear the Use default inherited value check box (if it is not already
cleared).
6 Click Save and Close to save the VLDB properties. You see the Report
Editor again.
7 Click Save and Close to save the report along with the changed VLDB
properties.
Analytic complexity
Users can use features on a report that require processing by the Analytical
Engine component of Intelligence Server. These features have an impact on
Intelligence Server's system resources. Be aware of their potential impact
and inform your report designers of them.
• Analytic calculations
Calculations that cannot be done with SQL in the data warehouse are
performed by the Analytical Engine in Intelligence Server. These may
result in significant memory use during report execution. Some analytic
calculations (such as AvgDev) require the entire column of the fact table
as input to the calculation. The amount of memory used depends on the
type of calculation and the size of the input dataset.
• Subtotals
To deliver results, Intelligence Server generates XML and sends it to the Web
server (this happens when a report is first run or when it is manipulated).
The Web server then translates the XML into HTML for display in the user’s
Web browser.
You can set limits in two areas to control how much information is sent at a
time. The lower of these two settings determines the maximum size of results
that Intelligence Server delivers at a time:
• How many rows and columns can be displayed at a time in a
MicroStrategy Web product (see Limiting the information displayed at
one time, page 551)
• How many XML cells in a result set can be delivered at a time (see
Limiting the number of XML cells, page 551)
As a result, the user sees report results more quickly and the impact on
system resources is limited. Additionally, governing results delivery
includes:
• Controlling XML drill paths (see Limiting the total number of XML drill
paths, page 553)
• The Web administrator sets the default values for these limits. (To do
this, click Preferences, then Project defaults, then click Grid display
and specify the Maximum rows in grid and Maximum columns in
grid).
• Users must have the privilege Web change user preferences. Then in
the MicroStrategy Web products’ interface, they click Preferences, then
Grid display, and specify the Maximum rows in grid and Maximum
columns in grid.
For more information about this, see the topic for “Incremental fetch” in the
Web online help.
If a report’s result set is larger than these limits, the report is broken into
pages (or increments) that are delivered (or fetched from the server) one at a
time. Therefore, the user sees one increment at a time.
If users set the number of rows and columns too high, the number of
XML cells limit that is set in Intelligence Server (discussed next)
governs the size of the result set.
The number of cells that are counted toward this limit is the number
of rows multiplied by the number of metric columns. The attribute
cells are not counted.
For example, if the XML limit is set at 10,000 cells and a report has 100,000
cells, the report is split into 10 pages. The user clicks the page number to
view the corresponding page.
• If the limit is larger, it takes a shorter time to generate the XML in fewer,
but larger, batches, thus using more memory and system resources.
To set the XML limit, in the Intelligence Server Configuration Editor, select
the Governing: File Generation category, then specify the Maximum
number of XML cells. You must restart Intelligence Server for the new limit
to take effect.
These limits are set in the Intelligence Server Configuration Editor. In this
editor, select the Governing: File Generation category, then specify the
maximum memory consumption for the XML, PDF, Excel, and HTML files.
To understand the impact that exporting can have on system resources, you
should know that the more formatting an exported report has, the more
memory it consumes. When exporting large reports the best options are
Plain text or CSV file formats because formatting information is not included
with the report data. In contrast, exporting reports as Excel with formatting
uses a significant amount of memory because it contains both the report data
and all of the formatting data.
For more information about exporting reports, see What happens when I
export a report from Web?, page 41.
To set this limit, in the Intelligence Server Configuration Editor, select the
Governing: File Generation category, then specify the Maximum number
of XML drill paths.
You must restart Intelligence Server for the new limit to take effect.
For more information about customizing drill maps, see the MicroStrategy
Advanced Reporting Guide.
For XML drill paths and system performance: If you select the Enable
Web personalized drill paths check box in the Project Configuration
Editor, Drilling category, it turns off XML caching. This could have a
negative effect on performance, particularly for large reports. For
more information, see ACLs and personalized drill paths in Web,
page 61.
You must make certain choices about how to maximize the use of your
system’s resources. If you can increase resources, what should you add so
that you get the most improvement?
• The processors (Processor type and speed, page 555 and Number of
processors, page 555)
If you upgrade a machine’s CPU, make sure you have the appropriate license
to run Intelligence Server on the faster CPU. For example, if you upgrade the
processor on the Intelligence Server machine from a 2GHz to a 2.5 GHz
processor, you should obtain a new license key from MicroStrategy.
Use the License Manager tool to enter the new license key. For details, see
Updating your license, page 193.
Number of processors
Intelligence Server performs faster on a machine with multiple processors. If
you notice that the processor is running consistently at a high capacity, for
example, greater than 80%, consider increasing the number of processors.
You can use the Windows Performance Monitor or Task Manager to monitor
this.
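For example, from a command prompt you could sample total processor
utilization with the Windows typeperf utility (a sketch; the 10-second
interval is arbitrary):
typeperf "\Processor(_Total)\% Processor Time" -si 10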
Use the License Manager tool to enter the new license key (see Updating
your license, page 193).
Physical disk
If the physical disk is heavily utilized on a machine hosting Intelligence
Server, it can indicate a bottleneck in the system's performance.
To monitor this, use the Windows Performance Monitor for the object
PhysicalDisk and the counter % Disk Time. If you see that the counter is
greater than 80% on average, it may indicate that there is not enough
memory on the machine. This is because when the machine’s physical RAM
is full, the operating system starts swapping memory in and out of the page
file on disk. This is not as efficient as using RAM. Therefore, Intelligence
Server’s performance may suffer.
By monitoring the disk utilization, you can see if the machine is consistently
swapping at a high level. Defragmenting the physical disk may help lessen
the amount of swapping. If that does not sufficiently lessen the utilization,
consider increasing the amount of physical RAM in the machine. For a
discussion about how memory is used in Intelligence Server, see Memory,
page 557.
Another performance counter you can use to gauge the disk's utilization is
the Current disk queue length, which indicates how many requests are
waiting at a given time. MicroStrategy recommends using the % Disk Time
and Current disk queue length counters to monitor the disk utilization;
however, you may also wish to use other counters.
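For example, both recommended counters can be logged from a command
prompt with the Windows typeperf utility (a sketch; the sampling interval
and output file are arbitrary):
typeperf "\PhysicalDisk(_Total)\% Disk Time" "\PhysicalDisk(_Total)\Current Disk Queue Length" -si 30 -o disk_usage.csv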
Memory
The memory used by Intelligence Server is limited by two factors:
For instructions on how to track the memory used by Intelligence Server, see
Monitoring memory use with Performance Monitor, page 558. For detailed
information about how Intelligence Server manages memory and how you
can tune its memory usage, see Governing Intelligence Server memory use,
page 562.
Virtual Memory
Virtual memory is the amount of physical memory (RAM) plus the Disk
Page file (swap file). It is shared by all processes running on the machine,
including the operating system.
When a machine runs out of virtual memory, processes on the machine are
no longer able to process instructions and eventually the operating system
may shut down. More virtual memory can be obtained by making sure that as
few programs or services as possible are executing on the machine or by
increasing the amount of physical memory or the size of the page file.
Private bytes are the bytes of virtual memory that are allocated to a given
process. Private bytes are so named because they cannot be shared with
other processes: when a process such as Intelligence Server needs memory, it
allocates an amount of virtual memory for its own use. The private bytes
used by a process can be measured with the Private Bytes counter in the
Windows Performance Monitor.
The governing settings built into Intelligence Server control its demand for
private bytes by limiting the number and scale of the operations it may
perform at any one time.
The virtual address space used by a process does not represent the actual
virtual memory. Instead, the system maintains a page map for each process,
which is an internal data structure used to translate virtual addresses into
corresponding physical (RAM and page file) addresses. For this reason, the
total virtual address space of all processes is much larger than the total
virtual memory available.
The limit associated with Intelligence Server virtual address space allocation
is the committed address space (memory actually being used by a process)
plus the reserved address space (memory reserved for potential use by a
process). This value is called the process’s virtual bytes. Memory depletion is
usually caused by running out of virtual bytes.
The two counters you should log with Performance Monitor are Private Bytes
and Virtual Bytes for the Intelligence Server process (Mstrsvr.exe). A
sample log of these two counters (along with others) for Intelligence Server is
shown in the diagram below.
The diagram above illustrates the gap between private bytes and virtual bytes
in Intelligence Server. The Virtual Bytes counter represents memory that is
reserved, not committed, for the process. Private Bytes represents memory
actually being used by the process. Intelligence Server reserves regions of
memory (called heaps) for use within the process. The heaps that are used by
Intelligence Server cannot share reserved memory between themselves,
causing the gap between reserved memory (virtual bytes) and memory being
used by the process (private bytes) to increase further.
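One way to capture these two counters over time is sketched below with the
Windows typeperf utility. The process instance name MSTRSVR (the process
name without .exe), the sampling interval, and the output file are
assumptions; adjust them for your system:

    import subprocess

    counters = [
        r"\Process(MSTRSVR)\Private Bytes",
        r"\Process(MSTRSVR)\Virtual Bytes",
    ]
    # One sample every 15 seconds, 240 samples (one hour), to a CSV log.
    subprocess.run(
        ["typeperf", *counters,
         "-si", "15", "-sc", "240", "-o", "mstr_mem.csv", "-y"],
        check=True)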
When Intelligence Server starts up, it uses memory in the following ways:
• It initializes all internal components and loads the static DLLs necessary
for operation. This consumes 25 MB of private bytes and 110 MB of
virtual bytes. You cannot control this memory usage.
• It loads all server definition settings and all configuration objects. This
consumes an additional 10 MB of private bytes and an additional 40 MB
of virtual bytes. This brings the total memory consumption at this point
to 35 MB of private bytes and 150 MB of virtual bytes. You cannot control
this memory usage.
• It loads the project schema into memory (needed by the SQL engine
component). The number and size of projects greatly impacts the amount
of memory used. This consumes an amount of private bytes equal to 3x
the schema size and an amount of virtual bytes equal to 4x the schema
size. For example, with a schema size of 5 MB, the private bytes
consumption would increase by 15 MB (3 * 5 MB). The virtual bytes
consumption would increase by 20 MB (4 * 5 MB). You can control this
memory usage by limiting the number of projects that load at startup
time.
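The figures above can be combined into a quick estimate. The sketch below
is illustrative only, using the base consumption and schema multipliers
quoted above:

    def startup_memory_mb(schema_sizes_mb):
        """Estimate startup consumption from the figures above."""
        total_schema = sum(schema_sizes_mb)
        private_mb = 35 + 3 * total_schema   # base + 3x schema size
        virtual_mb = 150 + 4 * total_schema  # base + 4x schema size
        return private_mb, virtual_mb

    # One project with a 5 MB schema: (50, 170), matching the example.
    print(startup_memory_mb([5]))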
If you are not performing this procedure in a production environment,
make sure you set all the configuration options as they exist in your
production environment. Otherwise, the measurements will not reflect the
actual production memory consumption.
Server process. You can confirm this by logging the counter information
to the current activity window as well as the performance log.
• Object and Element caches: caches that have been created since
Intelligence Server has started. The maximum amount of memory used
for object and element caches is configured at the project level. For
details, see Element caches, page 249 and Object caches, page 262.
• Intelligent Cubes: any Intelligent Cubes that have been loaded since
Intelligence Server has started. The maximum amount of memory used
for Intelligent Cubes is configured at the project level. For details, see
Chapter 6, Managing Intelligent Cubes.
• User session related resources: History List and Working set memory,
which are greatly influenced by governing settings, report size, and report
design. For details, see Managing user sessions, page 520 and see
Saving report results: History List, page 233.
• XML generation
The Enable single memory allocation governing option lets you specify
how much memory can be reserved for a single Intelligence Server operation
at a time. When this option is enabled, each memory request is compared to
the Maximum single allocation size (MBytes) setting. If the request
exceeds this limit, the request is denied. For example, if the allocation limit is
set to 100 MB and a request is made for 120 MB, the request is denied, while
a later request for 95 MB is allowed.
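The check reduces to a simple comparison. A minimal sketch, using the
example values from this paragraph:

    MAX_SINGLE_ALLOCATION_MB = 100  # Maximum single allocation size setting

    def allocation_allowed(request_mb):
        # Each memory request is compared against the limit in isolation.
        return request_mb <= MAX_SINGLE_ALLOCATION_MB

    print(allocation_allowed(120))  # False: the 120 MB request is denied
    print(allocation_allowed(95))   # True: the later 95 MB request is allowed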
If the Intelligence Server machine has additional software running on it, you
may wish to set aside some memory for those processes to use. To reserve
this memory, you can specify the Minimum reserved memory in terms of
either the number of MB or the percent of total system memory. In this case,
the total available memory is calculated as the initial size of the page file plus
the RAM. It is possible that a machine has more virtual memory than MCM
knows about if the maximum page file size is greater than the initial size.
When MCM receives a request that would cause Intelligence Server’s current
memory usage to exceed the Maximum use of virtual address space, it
denies the request and goes into memory request idle mode. In this mode,
MCM denies any requests that would deplete memory. MCM remains in
memory request idle mode until the memory used by Intelligence Server falls
below a certain limit, known as the low watermark. For information on how
the low watermark is calculated, see Memory watermarks, page 567. For
information about how MCM handles memory request idle mode, see
Memory request idle mode, page 568.
The Memory request idle time is the longest amount of time MCM remains
in memory request idle mode. If the memory usage has not fallen below the
low watermark by the end of the Memory request idle time, MCM restarts
Intelligence Server.
MCM does not submit memory allocations to the memory subsystem (such
as a memory manager) on behalf of a task. Rather, it keeps a record of how
much memory is available and how much memory has already been
contracted out to the tasks.
[Flowchart: MCM's decision for each memory request. If Intelligence Server
is in memory request idle mode, the maximum contract request size is
calculated as LWM - [1.05 * (I-Server private bytes) + contracted memory];
otherwise it is calculated as HWM - [1.05 * (I-Server private bytes) +
contracted memory]. A request larger than the maximum is denied; a request
within the maximum is granted, and granting a request exits memory request
idle mode if MCM is in it.]
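The flowchart's decision can be summarized in a few lines. The following
sketch is illustrative only and is not MicroStrategy code; all quantities
are in bytes:

    def max_contract_size(private_bytes, contracted, hwm, lwm, idle_mode):
        # In idle mode the low watermark bounds requests; otherwise the high.
        watermark = lwm if idle_mode else hwm
        return watermark - (1.05 * private_bytes + contracted)

    def decide(request, private_bytes, contracted, hwm, lwm, idle_mode):
        limit = max_contract_size(private_bytes, contracted,
                                  hwm, lwm, idle_mode)
        if request > limit:
            return "deny"   # in idle mode, may also trigger a restart check
        return "grant"      # granting a request exits memory request idle mode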
Memory watermarks
The high watermark (HWM) represents the highest value that the sum of
private bytes and outstanding memory contracts can reach before triggering
memory request idle mode. The low watermark (LWM) represents the value
that Intelligence Server’s private byte usage must drop to before MCM exits
memory request idle mode. Before every memory request for more than 10
MB, or every tenth request if none of those requests are for more than 10 MB,
MCM recalculates the high and low watermarks.
Two possible values are calculated for the high watermark: one based on
virtual memory, and one based on virtual bytes. For an explanation of the
different types of memory, such as virtual bytes and private bytes, see
Memory, page 557.
• The high watermark for virtual memory (HWM1 in the diagram above) is
calculated as (Intelligence Server private bytes +
available system memory). It is recalculated for each potential
memory depletion.
The available system memory is calculated using the Minimum reserved
memory limit if the actual memory used by other processes is less than
this limit.
• The high watermark for virtual bytes (HWM2 in the diagram above) is
calculated as (Intelligence Server private bytes). It is
calculated the first time the virtual byte usage exceeds the amount
specified in the Maximum use of virtual address space setting. Since
MCM ensures that Intelligence Server private byte usage cannot increase
beyond the initial calculation, it is not recalculated until after Intelligence
Server returns from the memory request idle state.
The high watermark used by MCM is the lower of these two values. This
accounts for the scenario in which, after the virtual bytes HWM is calculated,
Intelligence Server releases memory but other processes consume more
available memory. This can cause a later calculation of the virtual memory
HWM to be lower than the virtual bytes HWM.
Once the high and low watermarks have been established, MCM checks to
see if single memory allocation governing is enabled. If it is, and the
request exceeds the Maximum single allocation size, the request is denied.
MCM then calculates the maximum request size it can grant:
• In normal operation, the maximum request size is based on the high
watermark. The formula is [HWM - (1.05 * (Intelligence Server
Private Bytes) + Outstanding Contracts)].
• In memory request idle mode, the maximum request size is based on the
low watermark. The formula is [LWM - (1.05 * (Intelligence Server
Private Bytes) + Outstanding Contracts)].
If MCM is already in memory request idle mode and the request is larger
than the maximum request size, MCM denies the request. It then checks
whether the memory request idle time has been exceeded, and if so, it
restarts Intelligence Server. For a detailed explanation of memory request
idle mode, see Memory request idle mode, page 568.
If the request is smaller than the maximum request size, MCM performs a
final check to account for potential fragmentation of virtual address space.
MCM checks whether its record of the largest free block of memory has been
updated in the last 100 requests, and if not, updates the record with the size
of the current largest free block. It then compares the request against the
largest free block. If the request is more than 80% of the largest free block,
the request is denied. Otherwise, the request is granted.
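A minimal sketch of that final check (illustrative only; sizes in bytes):

    def passes_fragmentation_check(request, largest_free_block):
        # A request larger than 80% of the largest free block is denied.
        return request <= 0.80 * largest_free_block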
After granting a request, if MCM has been in memory request idle mode, it
returns to normal operation.
When MCM first denies a request, it enters memory request idle mode. In
this mode, MCM denies all requests that would keep Intelligence Server’s
private byte usage above the low watermark. MCM remains in memory
request idle mode until one of the following situations occurs:
• MCM has been in memory request idle mode for longer than the Memory
request idle time. In this case, MCM shuts down and restarts
Intelligence Server. This frees up the memory that had been allocated to
Intelligence Server tasks, and avoids memory depletion.
• A request is granted because Intelligence Server's memory usage has
fallen below the low watermark. In this case, MCM returns to normal
operation.
The Memory request idle time limit is not enforced via an internal clock or
scheduler. Instead, after every denied request MCM checks how much time
has passed since the memory request idle mode was triggered. If this time is
more than the memory request idle time limit, then Intelligence Server
restarts.
Once request B has been denied, Intelligence Server enters the memory
request idle mode. In this mode of operation, all requests that would push
the total memory used above the low watermark are denied.
In the example above, request C falls above the LWM. Since Intelligence
Server is in memory request idle mode, this request will be denied unless
Intelligence Server releases memory from elsewhere, such as other
completed contracts.
Request D is below the LWM, so it will be granted. Once it has been granted,
Intelligence Server switches out of request idle mode and resumes normal
operation.
In this example, Intelligence Server has increased its private byte usage to
the point that existing contracts are pushed above the high watermark.
Request A is denied because the requested memory would further deplete
Intelligence Server’s virtual address space.
Once request A has been denied, Intelligence Server enters the memory
request idle mode. In this mode of operation, all requests that would push
the total memory used above the low watermark are denied.
The low watermark is 95% of the high watermark. In this scenario, the HWM
is the amount of Intelligence Server private bytes at the time when the
memory depletion was first detected. Once the virtual byte HWM has been
set, it is not recalculated. Thus, for Intelligence Server to exit memory
request idle mode it must release some of the private bytes.
Intelligence Server remains in memory request idle mode until the memory
usage looks like it does at the time of request B. The Intelligence Server
private byte usage has dropped to the point where a request can be made that
is below the LWM. This request is granted, and MCM exits memory request
idle mode.
This setting is useful if you wish to prevent the system from servicing a
Web request if memory is depleted. If the condition is met, Intelligence
Server denies all requests from a MicroStrategy Web product (or a client
built with the MicroStrategy Web API).
• Minimum machine free physical memory
Minimum machine free physical memory (%) sets the minimum amount
of RAM that must remain available for Web requests. This value is a
percentage of the total amount of physical memory on the machine (not
including the Page File memory).
• How the data warehouse is configured (see How the data warehouse can
affect performance, page 573)
Platform considerations
The size and speed of the machine(s) hosting your data warehouse and the
database platform (RDBMS) running your data warehouse both affect the
system’s performance. While MicroStrategy does not give recommendations
about data warehouse platforms, certain RDBMSs are better suited than
others to handle very large data warehouses and large numbers of users. You
should have an idea of the number of users your system needs to serve and
research which RDBMS can handle that type of load.
Your data warehouse's design (also called the physical warehouse schema)
and its tuning are important and unique to your organization, and both
affect the performance of your business intelligence system. A discussion
of the trade-offs you must make when designing and tuning the data
warehouse is beyond the scope of this guide. Examples of the types of
decisions you must make include:
For more information about data warehouse design and data modeling, see
the MicroStrategy Advanced Reporting Guide and Project Design Guide.
The steps that occur over each connection are described in the table below
the diagram.
[Diagram: a clustered MicroStrategy system showing numbered connections 1
through 7 between the Web client, the MicroStrategy Web server, the
Intelligence Server cluster, Desktop, the metadata, a shared cache file
server, and the data warehouse.]
1 HTTP HTML sent from Web server to client. Data size is small compared to other points
because results have been incrementally fetched from Intelligence Server and HTML
results do not contain any unnecessary information.
2 TCP/IP XML requests are sent to Intelligence Server. XML report results are incrementally
fetched from Intelligence Server.
3 TCP/IP Requests are sent to Intelligence Server. (No incremental fetch is used.)
4 TCP/IP Broadcasts between all nodes of the cluster (if implemented): metadata changes,
Inbox, report caches. Files containing cache and Inbox messages are exchanged
between Intelligence Server nodes.
5 TCP/IP Files containing cache and Inbox messages may also be exchanged between
Intelligence Server nodes and a shared cache file server if implemented (see Sharing
result caches and Intelligent Cubes in a cluster, page 487).
6 ODBC Object requests and transactions to metadata. Request results are stored locally in
Intelligence Server object cache.
7 ODBC Complete result set is retrieved from database and stored in Intelligence Server
memory and/or caches.
The maximum number of threads used in steps 2 and 3 can be governed in the
Intelligence Server Configuration Editor, in the Server Definition: General
category, in the Number of Network Threads field.
• Place Intelligence Server close to both the data warehouse and the
metadata repository
• If you have a clustered environment with a shared cache file server, place
the shared cache file server close to the Intelligence Server machines
These depend on the type of reports your users typically run. This, in turn,
determines the load they place on the system and how much network traffic
occurs between the system components. This is discussed next.
The ability of the network to quickly transport data between the components
of the system greatly affects its performance. Typically for large result sets,
the highest load or the most traffic occurs between the data warehouse and
the Intelligence Servers (indicated by C in the diagram below). The load
between Intelligence Server and Web Server is somewhat less (B), followed
by the least load between the Web Server and the Web browser (A).
[Diagram: the Web client connects to the MicroStrategy Web Server (link A),
which connects to the Intelligence Server(s) (link B), which connect to
the Data Warehouse (link C).]
• The load at C is determined by the number of rows pulled back from the
data warehouse. Sending SQL and retrieving objects from the metadata
result in minimal traffic.
• Report manipulations that cause SQL to be generated and sent to the data
warehouse are similar to running non-cached reports of the same size.
After noting where the highest load is on your network, you can tune it
or change the placement of system components to improve the network's
performance.
You can tell whether or not your network is negatively impacting your
system’s performance by monitoring how much of your network’s capacity is
being used. Use the Windows Performance Monitor for the object Network
Interface, and watch the counter Total bytes/sec as a percent of your
network’s bandwidth. If it is consistently greater than 60% (for example), it
may indicate that the network is negatively affecting the system’s
performance. You may wish to use a figure different than 60% for your
system.
To calculate the network capacity utilization percent, multiply the Total
bytes/sec counter by 8 to convert it to bits per second, then divide by
the network's total capacity in bits per second.
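A worked sketch of the calculation, with illustrative values:

    def network_utilization_pct(total_bytes_per_sec, capacity_bits_per_sec):
        # Convert the counter to bits, then take it as a share of capacity.
        return total_bytes_per_sec * 8 / capacity_bits_per_sec * 100

    # 9 MB/s of traffic on a 100 Mbit/s link is 72% utilization,
    # above the 60% guideline mentioned above.
    print(network_utilization_pct(9_000_000, 100_000_000))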
The clustering feature is built into Intelligence Server and is available out of
the box if you have the proper license. It is discussed in detail in Chapter 11,
Clustering Multiple MicroStrategy Servers.
Designing reports
In addition to the heavy toll that large reports can exert on system
performance, a report's design can also affect it: some features consume
more of the system's capacity than others. This factor that influences
Intelligence Server capacity is highlighted in the diagram below.
[Diagram: factors that influence Intelligence Server capacity: requests,
report design, and configuring Intelligence Server and projects, with
report design highlighted.]
For more information about these features, see the MicroStrategy Advanced
Reporting Guide.
Page-by feature
If designers or users create reports that use the page-by feature, they can
potentially use significant system resources. This is because the entire report
is held in memory even though the user is seeing only a portion of it at a time.
To lessen their potential impact, consider splitting large reports with the
page-by feature into multiple reports and eliminating the use of page-by.
• Filter the data that is displayed in the report (must have the “Use view
filter editor” or “Web use view filter editor” privilege)
• Less data warehouse execution if several users access the same data
definition but with customized view definitions
• Users to manipulate their reports in various ways, thus changing the view
definition, but not changing the underlying data definition
Because of these features and the way the reports are held in memory on
Intelligence Server, the reports have the potential to use more memory than
standard reports if:
However, this increased memory use may be worth it if you are looking for
ways to reduce the load on your data warehouse. It may also be worth it if
your users like the flexibility of the reports (being able to drag and drop items
on or off the report grid). This reiterates the trade-offs you must make to
meet your requirements of user sessions and job execution given the system
resources that are available.
The memory required for the report view instance depends on the amount of
data seen in the view. For example, if a report is created that returns
200,000 orders, but the view only displays 500 orders, the report view is
much smaller than if the view returns all 200,000 orders. Below is an
example of an Intelligent Cube report and how the report view sizes vary
depending on how the view filter changes (these results were achieved using
the MicroStrategy Tutorial project).
You should monitor Intelligence Server CPU utilization and memory use
closely if users are making extensive use of OLAP Services functionality. You
should consider these guidelines when implementing OLAP Services:
• The cache size of the data definition and report view is the easiest way to
determine the size of Intelligent Cube reports. The cache size reported in
the Cache Monitor is typically 30-50% smaller than the version held in
memory.
• In theory, the largest cache file that may exist is limited only by the
amount of virtual address space available for the Intelligence Server
process. This theoretical limit for an Intelligent Cube report is a cache
The primary way to manage the size of these reports is to limit the memory
use on Intelligence Server. For more information about this, see Governing
Intelligence Server memory use, page 562.
Prompt complexity
Each attribute element or hierarchy prompt requires an element request to
be executed by Intelligence Server. The number of prompts used and the
number of elements returned from the prompts determine how much load is
placed on Intelligence Server.
Make sure element caches are being used effectively. For details, see Element
caches, page 249.
Documents
If you allow users to execute documents, they may submit multiple reports to
the data warehouse simultaneously. Depending on how many documents
your users execute and the number of child reports those documents submit,
documents can use a lot of system resources.
To limit this impact, create report caches for the child reports used in
document objects.
This section provides an overview of the governing settings throughout the
system. This factor that influences Intelligence Server capacity is
highlighted in the diagram below.
[Diagram: factors that influence Intelligence Server capacity: requests,
report design, and configuring Intelligence Server and projects, with
configuring Intelligence Server and projects highlighted.]
These governors are arranged by where in the interface you can find them.
• Number of network threads: Controls the number of network connections
available for communication between Intelligence Server and the client,
such as Desktop or MicroStrategy Web. (page 574)
• Backup frequency (min): Controls the frequency (in minutes) at which
cache and History List messages are backed up to disk. A value of 0 means
that cache and history messages are backed up immediately after they are
created. (page 231)
• Balance MicroStrategy Server threads: Controls whether threads within
Intelligence Server are allocated to processes such as object serving,
element serving, SQL generation, and so forth that need them most, while
less loaded ones can return threads to the available pool. (page 545)
• Cache lookup cleanup frequency (sec): Cleans up the cache lookup table
at the specified frequency (in seconds). This reduces the amount of memory
it consumes and the time it takes to back up the lookup table to disk.
(page 231)
• Project failover latency (min): The amount of time (the delay) before
the project is loaded on another server to maintain a minimum level of
availability. (page 508)
• Configuration recovery latency (min): When the conditions that caused a
project failover disappear, the failover configuration reverts
automatically to the original configuration. This setting is the amount
of time (the delay) before the failover configuration reverts to the
original configuration. (page 508)
• Maximum number of jobs: The maximum number of concurrent jobs that may
exist on an Intelligence Server. (page 533)
• Maximum number of interactive jobs: Limits the number of concurrent
interactive (non-scheduled) jobs that may exist on this Intelligence
Server. A value of -1 indicates no limit. (page 533)
• Maximum number of scheduled jobs: Limits the number of concurrent
scheduled jobs that may exist on this Intelligence Server. A value of -1
indicates no limit. (page 533)
• Maximum number of user sessions: The maximum number of user sessions
(connections) for an Intelligence Server. A single user account may
establish multiple sessions to an Intelligence Server. (page 521)
• User session idle time (sec): The time allowed for a Desktop user to
remain idle before his or her session is ended. An idle session is one
from which no requests are submitted to Intelligence Server. (page 522)
• Web user session idle time (sec): The time allowed for a Web user to
remain idle before his or her session is ended. Note for Report Services
documents in Web: set the Web user session idle time (sec) to 3600 to
avoid a project source timeout if designers will be building documents
and dashboards in Web. Then restart Intelligence Server. (page 523)
• XML Generation: Maximum number of XML cells: The maximum number of XML
cells in a report result set that Intelligence Server can send to the
MicroStrategy Web products at a time. When this limit is reached, the
user sees an error message along with the partial result set. The user
can incrementally fetch the remaining cells. (page 551)
• XML Generation: Maximum number of XML drill paths: The maximum number
of attribute elements users can see in the drill across menu in
MicroStrategy Web products. If this setting is set too low, the user does
not see all of the available drill attributes. (page 553)
• XML Generation: Maximum memory consumption for XML (MB): The maximum
amount of memory (in megabytes) that Intelligence Server can use to
generate a report or document in XML. If this limit is reached, the XML
document is not generated and the user sees an error message. (page 553)
• PDF Generation: Maximum memory consumption for PDF (MB): The maximum
amount of memory (in megabytes) that Intelligence Server can use to
generate a report or document in PDF. If this limit is reached, the PDF
document is not generated and the user sees an error message. (page 553)
• Excel Generation: Maximum memory consumption for Excel (MB): The
maximum amount of memory (in megabytes) that Intelligence Server can use
to generate a report or document in Excel. If this limit is reached, the
Excel document is not generated and the user sees an error message.
(page 553)
• HTML Generation: Maximum memory consumption for HTML (MB): The maximum
amount of memory (in megabytes) that Intelligence Server can use to
generate a report or document in HTML. If this limit is reached, the HTML
document is not generated and the user sees an error message. (page 553)
• Enable Web request job throttling: A check box that enables the
governors Maximum Intelligence Server use of total memory and Minimum
machine free physical memory. (page 572)
• Maximum Intelligence Server use of total memory (%): The maximum amount
of total system memory (RAM + Page File) that may be used by the
Intelligence Server process (MSTRSVR.exe) compared to the total amount of
memory on the machine. If the limit is met, all requests from
MicroStrategy Web products of any nature (log in, report execution,
search, folder browsing) are denied until the conditions are resolved.
(page 572)
• Minimum machine free physical memory: The minimum amount of physical
memory (RAM) that needs to be available (as a percentage of the total
amount of physical memory on the machine). If the limit is met, all
requests from MicroStrategy Web products of any nature (log in, report
execution, search, folder browsing) are denied until the conditions are
resolved. (page 572)
• Enable single memory allocation governing: A check box that enables the
governor Maximum single allocation size. (page 563)
• Maximum single allocation size (MBytes): Prevents Intelligence Server
from granting a memory request that would exceed this limit. (page 563)
• Enable memory contract management: A check box that enables the
governors Minimum reserved memory (MB or %), Maximum use of virtual
address space (%), and Memory request idle time. (page 563)
• Minimum reserved memory (MB or %): The amount of system memory, in
either MB or a percent, that must be reserved for processes external to
Intelligence Server. (page 563)
• Maximum use of virtual address space (%): The maximum percent of the
process's virtual address space that Intelligence Server can use before
entering memory request idle mode. (page 563)
• Memory request idle time: The amount of time Intelligence Server denies
requests that may result in memory depletion. If Intelligence Server does
not return to acceptable memory conditions before the idle time is
reached, Intelligence Server shuts down. (page 563)
• Maximum RAM for Working Set cache (KBytes): The maximum amount of
memory that can be used for report instances referenced by messages in
the Working Set. (page 526)
• Maximum number of messages per user: The maximum number of History
(Inbox) messages that may exist in a user's History List at any time.
When the limit is reached, the oldest message is removed. (page 233)
• Message lifetime (days): Length of time before a History List message
expires and is automatically deleted. Set to -1 for messages to never
expire. (page 233)
• Repository type: Select File Based for History List messages to be
stored on disk in a file system, or Database Based for History List
messages to be stored in a database (recommended). (page 233)
Project configuration
These governors can be set per project. To access them, right-click the
project, select Project Configuration, then select the category as noted
below.
• Maximum number of elements to display: Prevents too many attribute
elements from being returned from the data warehouse at a time. (page 253)
• Intelligence Server elapsed time - Interactive reports (sec): The
amount of time an interactive (ad hoc) report request can take before it
is canceled. This accounts for total time spent resolving prompts,
waiting for autoprompts, waiting in the job queue, executing SQL,
analytical calculation, and preparing report results. (page 529)
• Intelligence Server elapsed time - Scheduled reports (sec): The amount
of time a scheduled report request can take before it is canceled. This
accounts for total time spent resolving prompts, waiting for autoprompts,
waiting in the job queue, executing SQL, analytical calculation, and
preparing report results. (page 529)
• Final result rows - Intelligent Cubes: The maximum number of rows that
may be returned to Intelligence Server for an Intelligent Cube request.
This setting is applied by the Query Engine when retrieving the results
from the database. This is the default for all reports in a project and
can be overridden for individual reports using VLDB settings. (page 546)
• Final result rows - data marts: The maximum number of rows that may be
returned to Intelligence Server for a data mart report request. This
setting is applied by the Query Engine when retrieving the results from
the database. This is the default for all reports in a project and can be
overridden for individual reports using VLDB settings. (page 546)
• Final result rows - all other reports: The maximum number of rows that
may be returned to Intelligence Server for a standard report request.
This setting is applied by the Query Engine when retrieving the results
from the database. This is the default for all reports in a project and
can be overridden for individual reports using VLDB settings. (page 546)
• All element browsing rows: The maximum number of rows that may be
retrieved from the data warehouse for an element request. (page 547)
• All intermediate result rows: The maximum number of rows that may be in
an intermediate result set used for analytical processing in Intelligence
Server. This is the default for all reports in a project and can be
overridden using VLDB settings for individual reports. (page 548)
• Memory consumption during SQL generation (MB): Maximum amount of memory
(in megabytes) that Intelligence Server can use for SQL generation. The
default is -1, which indicates no limit. (page 533)
• Jobs per user account: The maximum number of concurrent jobs per user
account and project. (page 531)
• Jobs per user session: The maximum number of concurrent jobs a user may
have during a session. (page 531)
• Executing jobs per user: The maximum number of concurrent jobs a single
user account may have executing in the project. If this condition is met,
additional jobs are placed in the queue until executing jobs finish.
(page 531)
• Jobs per project - interactive: The maximum number of concurrent
interactive (ad hoc) jobs that the project can process at a time.
(page 532)
• Jobs per project - scheduled: The maximum number of concurrent
scheduled jobs that the project can process at a time. (page 532)
• User sessions per project: The maximum number of user sessions
(connections) that are allowed in the project. When the limit is reached,
users cannot log in, except for the Administrator, who may be required to
disconnect current users or increase the governing setting. (page 522)
• Maximum History List subscriptions per user: The maximum number of
reports or documents that a user can be subscribed to for delivery to the
History List. (page 365)
• Maximum Cache Update subscriptions per user: The maximum number of
reports or documents that a user can be subscribed to for updating
caches. (page 365)
• Maximum email subscriptions per user: The maximum number of reports or
documents that a user can be subscribed to for delivery to an email
address (Distribution Services only). (page 365)
• Maximum file subscriptions per user: The maximum number of reports or
documents that a user can be subscribed to for delivery to a file
location (Distribution Services only). (page 365)
• Maximum print subscriptions per user: The maximum number of reports or
documents that a user can be subscribed to for delivery to a printer
(Distribution Services only). (page 365)
• Maximum Mobile subscriptions per user: The maximum number of reports or
documents that a user can be subscribed to for delivery to a Mobile
device (MicroStrategy Mobile only). (page 365)
• Datasets - Maximum RAM usage: The maximum amount of memory reserved for
the creation and storage of report and dataset caches. This setting
should be configured to at least the size of the largest cache file, or
that report will not be cached. (page 222)
• Datasets - Maximum number of caches: The maximum number of report and
dataset caches the project may have at a time. (page 217)
• Formatted documents - Maximum RAM usage: The maximum amount of memory
reserved for the creation and storage of document caches. This setting
should be configured to at least the size of the largest cache file, or
that document will not be cached. (page 222)
• Formatted documents - Maximum number of caches: The maximum number of
document caches the project may have at a time. (page 217)
• RAM Swap Multiplier: Controls how much memory is swapped to disk,
relative to the size of the cache being swapped into memory. For example,
if the RAM swap multiplier setting is 2 and the requested cache is
80 KBytes, 160 KBytes are swapped from memory to disk. (page 230)
• Never expire caches: Select this check box for caches to never
automatically expire. (page 231)
• Cache duration (Hours): The amount of time a result cache remains
valid. (page 231)
• Do not override cache expiration settings for reports containing
dynamic dates: Select this check box for report caches with dynamic dates
to expire in the same way as other report caches. (page 231)
• Server - Maximum RAM usage (MBytes): The amount of memory Intelligence
Server allocates for object caching. (page 266)
• Client - Maximum RAM usage (MBytes): The amount of memory Desktop
allocates for object caching. (page 266)
• Server - Maximum RAM usage (MBytes): The amount of memory Intelligence
Server allocates for element caching. (page 261)
• Client - Maximum RAM usage (MBytes): The amount of memory Desktop
allocates for element caching. (page 261)
• Re-run History List and Mobile subscriptions against the warehouse:
Select this check box to cause new subscriptions to create caches or
update existing caches by default when a report or document is executed
and that report/document is subscribed to the History List or a Mobile
device. (page 365)
• Re-run file, email, and print subscriptions against the warehouse:
Select this check box to cause new subscriptions to create caches or
update existing caches by default when a report or document is executed
and that report/document is subscribed to a file, email, or print device.
(page 365)
• Do not create or update matching caches: Select this check box to
prevent subscriptions from creating or updating caches by default.
(page 365)
• Keep document available for manipulation for History List subscriptions
only: Select this check box to retain a document or report that was
delivered to the History List for later manipulation. (page 365)
Database connection
This set of governors can be set by modifying a project source's database
instance, and then modifying either the job prioritization (the number of
connections) or the database connection. See the page noted below for
each governor for more details.
ODBC Settings
• Number of database connection threads: The sum of the Number of
database connections by priority: High, Medium, and Low allowed at a time
between Intelligence Server and the data warehouse (set on the database
instance's Job Prioritization tab). (page 535)
• Maximum cancel attempt time (sec): The maximum amount of time the Query
Engine waits for a successful attempt to cancel a query. (page 538)
• Maximum query execution time (sec): The maximum amount of time a single
pass of SQL may execute on the data warehouse. (page 538)
• Maximum connection attempt time (sec): The maximum amount of time
Intelligence Server waits to connect to the data warehouse. (page 538)
• Connection lifetime (sec): The amount of time an active database
connection thread remains open and cached on Intelligence Server.
(page 539)
• Connection idle timeout (sec): The amount of time an inactive database
connection thread remains cached until it is terminated. (page 540)
VLDB settings
These settings can be made in the VLDB properties for either reports or the
database instance. For information about accessing them, see the noted page
for each property in the table below. For complete details about all VLDB
properties, see the Details for all VLDB properties appendix in this guide.
• Intermediate row limit: Maximum number of rows that may be in an
intermediate table used by Intelligence Server. This setting overrides
the project's default setting Number of intermediate result rows.
(page 548)
• Results Set Row Limit: Maximum number of rows that can be in a report
result set. This setting overrides the project's default setting Number
of report result rows. (page 546)
• SQL time out (per pass): The amount of time in seconds any given SQL
pass can execute on the data warehouse. This can be set at the database
instance and report levels. (page 534)
• Maximum SQL size: Maximum size (in bytes) the SQL statement can be.
This can be set at the database instance level. (page 534)
PRE executes a separate report for each set of users with unique
personalization. Users may have reports executed under the context of
the corresponding Intelligence Server user if desired. Using this option,
security profiles defined in MicroStrategy Desktop are maintained.
However, if there are many users who all have unique personalization,
this option can place a large load on Intelligence Server.
For more detailed information about these options, refer to the Narrowcast
Server Application Designer Guide (specifically, see the chapter on Page
Personalization and Dynamic Subscriptions).
Information sources
• You can balance the load manually by creating multiple Information
Sources (ISs) or by using a single IS pointing to one Intelligence
Server, thereby designating it to handle all Narrowcast Server requests.
Introduction
This chapter provides guidance for finding and fixing trouble spots in the
system. While the material in the chapter does not go into great detail, it does
provide references to the relevant portions of this guide where the topic or
remedy is discussed in more detail. The discussions in this chapter are:
• Methodology for finding trouble spots, page 596
• Use the server state dump (the DSSErrors log file) (see Server state
dumps, page 608) to determine whether it was a:
• Tune the system as necessary (see Chapter 12, Tuning your System for
Best Performance)
Which components of the system are slow (use the “Execution Cycle
Breakdown” report) to see if reports can be designed differently
When the system is slowest and whether that relates to concurrency
(use the "Peak Time Period," "Average Execution Time vs. Number of
Sessions," and "Average Execution Time vs. Number of Jobs per User"
reports)
Whether scheduled reports are running during peak times (use the
“Scheduled Report Load” report) and if so, schedule them at off-peak
times
• The connection to the data warehouse may not be working or there may
be problems with the data warehouse (see Connecting to the data
warehouse, page 9)
• The number of result set rows for the report may have exceeded the
limit specified in the Project Configuration Editor or the VLDB
Properties Editor (see Troubleshooting subscription and report results,
page 634)
• Check that the correct port numbers are set if you are using firewalls in
your configuration (see Using firewalls, page 928)
You can access the Diagnostics tool either through MicroStrategy Desktop as
described below, or you can access it without logging into Desktop. To do the
latter, from the Start menu, select Programs, select MicroStrategy, select
Tools, and select Diagnostics Configuration.
If you save any changes to settings within the Diagnostics and
Performance Logging tool, you cannot automatically return to the
out-of-the-box settings. If you might want to return to the original
default settings at any time, record the default setup for your records.
If the Diagnostics menu option does not appear on the Tools menu, it has
not yet been enabled. To enable this option, from the Tools menu, select
Desktop Preferences. In the General category, in the Advanced
subcategory, select the Show Diagnostics Menu Option check box and
click OK.
For details about this editor while you are using it, press F1 to see the
online help.
2 Select one of two configurations to see the current default setup, as
follows:
• Machine Default: The components and counters displayed reflect the
client machine.
When you select the CastorServer instance, you can select whether
to use the default configuration. On the Diagnostics tab, this check
box is named Use Default Diagnostics Configuration; on the
Performance tab, this check box is called Use Default
Performance Configuration. This check box refers to the
Machine Default settings. No matter what you have changed and
saved on either tab when CastorServer instance is selected, if you
check the Use Default Configuration box, the system logs
whatever information is configured for Machine Default at run
time.
• For a detailed list of specific logging options for both tabs, see the
appendix Diagnostics and Performance Logging, page 961.
4 From the File menu, select Save. Your new settings are saved in the
registry, and Intelligence Server begins logging the information you
configured.
Diagnostics configuration
The Diagnostics and Performance Logging tool allows you to control two
types of diagnostics logging, represented by two tabs, Diagnostics
Configuration and Performance Configuration. The Diagnostics
Configuration tab in the Diagnostics and Performance Logging tool is where
you determine which specific system components you want to log diagnostics
for. For each component, you can select the log file that Intelligence Server
writes messages to.
• System log: The Event Viewer system log file. The Event Viewer is a
Windows tool, located by default in Administrative Tools. To access it,
from the Start menu, select Settings, select Control Panel, and choose
Administrative Tools.
Common customizations
• Error: This dispatcher logs the final message before an error occurs,
which can be important information to help detect the system component
and action that caused or preceded the error.
• Fatal: This dispatcher logs the final message before a fatal error occurs,
which can be important information to help detect the system component
and action that caused or preceded the server fatality.
• Info: This dispatcher logs every operation and manipulation that occurs
on the system.
Performance configuration
The Performance Configuration tab is where you determine which
performance counters to log, for example, what amount of time the CPU
takes to operate a given system function. On
the Performance Configuration tab you can also create a new log file which
Intelligence Server can write messages to for performance counter logging.
• Category: The operating system component or feature for which you can
log performance diagnostics messages.
When you select the log file to which information on performance counters is
recorded, you can determine how often data is recorded, in seconds, and
whether to persist the counters.
• If you want to create a new log file, from the File Name drop-down
box select New. The Log Destination Editor opens. See Creating a
new log file below for steps to create a log file.
2 From the Logging Frequency (sec) field, enter how often you want data
to be logged.
3 From the Persist Counters drop-down box, select whether you want
counters persisted, as follows:
• Yes: If the server is restarted, the system creates a separate log file
(using the same file name but with a 2 added) to continue logging
information.
4 When you are finished configuring the performance counter log file, click
Save on the toolbar. Your choices are saved for the selected log file.
Within the Diagnostics and Performance Logging tool, you can create your
own log files to gather and save information about system components and
performance counters. To create a custom log file, use the Log Destination
Editor.
1 In the Diagnostics and Performance Logging tool, from the Tools menu,
select Log Destinations. The Log Destination Editor opens.
3 In the File Name field, enter a name for your new log file.
4 In the Max File Size (KB) field, enter the size you want to limit your file
to. General guidelines for setting this number are as follows:
• If the Kernel XML API component is selected in the Diagnostics and
Performance Logging tool, the maximum file size should be set to no
lower than 2000 KB.
• To keep a longer history of information in this log file, you can enter
10,000 KB or more.
Log files are always appended to. When a file reaches its maximum
size, the file is backed up (with a .bak extension) and a new file is
created.
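A sketch of that rotation behavior follows. It is illustrative only; the
file name and size cap are examples, and this is not how Intelligence
Server implements the rotation internally:

    import os

    def append_log(path, line, max_bytes):
        # Once the file reaches its maximum size, back it up with a
        # .bak extension and start a new file; otherwise keep appending.
        if os.path.exists(path) and os.path.getsize(path) >= max_bytes:
            os.replace(path, path + ".bak")  # any previous .bak is replaced
        with open(path, "a") as f:
            f.write(line + "\n")

    append_log("Custom.log", "sample message", 2000 * 1024)  # 2000 KB cap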
5 From the File Type drop-down box, select whether you want this log file
to record diagnostics data or performance data, as follows:
• Diagnostics: If you select Diagnostics as the file type, your new log
file becomes available as a selection from the File Log column on the
Diagnostics Configuration tab, and can record information for any of
the component/dispatcher combinations.
6 When you are finished setting up your new log file, click Save. All log
files are saved to C:\Program Files\Common Files\MicroStrategy\Log. This
location is set during installation and cannot be changed.
All messages in the log files have the same format. Each entry has the
following parts:
The following sample is a simple log file that was generated from
MicroStrategy Web (ASP.NET) after running the report called Length of
Employment in the MicroStrategy Tutorial project. The bulleted line before
each entry explains what the log entry is recording.
• The entry below shows that Intelligence Server creates a report definition.
• Intelligence Server loads the report definition object named Length of Employment from the
metadata.
• Intelligence Server checks to see whether the report exists in the report cache.
• Intelligence Server checks for prompts and finds none in the report.
The MicroStrategy Monitor is a console tool that allows you to view logged
information in real time. The information that appears depends on the
components and other options selected in the Configuring what is logged:
Diagnostics and Performance Logging tool section above. The information
logged reflects where the Monitor was launched from, as follows:
• If the Monitor is launched from the client machine, the Monitor logs only
client information.
• If the Monitor is launched from the server machine, the Monitor logs only
server information.
• If the server and client are on the same machine, when the Monitor is
launched from this machine it logs both Desktop and server information.
The Monitor is a viewer-type tool and does not allow you to edit logged
information.
1 From your Start menu, select Program Files, then MicroStrategy, then
Tools, then select Monitor.
By default, error log files on the Web server machine are located in the
MstrWeb/WEB-INF/log/ directory.
In Web Universal, click View logs on the left side of the page.
For more information, see the online help accessible from the Administrator
Page in the Web product.
The SSD is logged into the DSSErrors.log file, located in the following
folder by default:
\Program Files\Common Files\MicroStrategy\Log\
The SSD contains information including the server and project configuration
settings, memory usage, schedule requests, user sessions, currently
executing jobs and processing unit states, and so on.
Each occurrence has information logged under the same process ID and
thread ID.
This section precedes the actual SSD and provides information on what
triggered the SSD. This includes memory depletion or an unknown exception
error.
This section provides a subset of Intelligence Server level settings as they are
defined in the Intelligence Server Configuration Editor (in Desktop,
right-click the project source, and select Configure MicroStrategy
Intelligence Server). The settings include:
• Server definition name
Review the above governor settings for unusually high values. This may help
you understand why memory depletions are occurring.
This section includes basic information related to the state and configuration
of projects. This shows settings that are defined in the Project Configuration
Editor, such as:
• Project name
• Cache settings
• Governor settings
• DBRole used
• DBConnection settings
Review the above settings for unusually high values which may contribute to
memory depletions. Also review the settings to ensure that the project is
configured as you expect it to be.
• Inbox settings
• Idle timeouts
• XML governors
Review the above governor settings for unusually high values. This may help
you understand why memory depletions are occurring. The memory and
XML governor settings are especially important in memory depletion cases.
MCM is specifically designed to help you avoid memory depletions. For more
information on MCM, see Memory Contract Manager, page 620.
The Callstack dump provides information on the functions being used at the
time the SSD was written. Similarly, the lockstack provides a list of active
locks. The Module info dump provides a list of files and their location
(Modules) loaded into memory by Intelligence Server.
• Private bytes
• Virtual bytes
This shows the memory profile of the Intelligence Server process and
machine. If any of these values are near their limit, memory may be a cause
of the problem.
Look for an unexpectedly high number of users or jobs. These may have
caused the problem.
• Reports
• Documents
Look for unexpected schedules that might be running at the time of the
dump. Schedules can add significant load to the system and contribute to
problems such as memory depletions.
• Connection name
• User name
• Login
The section provides information on the size of various loaded user Inboxes
and information related to the WorkingSet. Examples include:
• Working Set Result Pool memory consumption
Look for high values in any of these numbers, especially in cases of memory
depletion. They may also contribute to slow performance.
This section provides a snapshot of the state of executing jobs in more detail
than shown in the Job Monitor. This can be broken down into three
sub-sections. Each one is provided in the dump for each job:
• Details on job steps completed and currently executing, along with timing
These may be useful to see what the current load on Intelligence Server was,
as well as what was executing at the time of the error. If the error is due to a
specific report, the information here can help you reproduce it (to help
MicroStrategy Technical Support). It may also help you avoid the problem
while it is being fixed.
This section provides details on the various user sessions within Intelligence
Server at the time of shut down in more detail than shown in the User
Connection Monitor.
Look for an unusually high number of user sessions for the same user.
This section provides information about the states of the threads within each
processing unit in Intelligence Server. It also provides information on the
number of threads per PU and to what priority they are assigned.
Make sure the thread count is not unexpectedly high. This is most useful for
MicroStrategy Technical Support.
This is a very brief summary of the memory basics. For more detail on this,
see Memory, page 557.
Virtual memory is Physical memory (RAM) + Disk Page file (also called the
swap file). It is shared by all processes running on the machine including the
operating system.
Virtual bytes measures the use of the UAS. When virtual bytes reaches the
UAS limit, it causes a memory depletion.
The Commit Limit in Windows Task Manager is not equal to Virtual bytes.
Private bytes reflect virtual memory usage and they are a subset of allocated
virtual bytes.
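A sketch of a spot check on these quantities, assuming the third-party
psutil package is installed. The 2 GB figure is only a placeholder for a
32-bit process's address-space limit; substitute your own:

    import psutil

    UAS_LIMIT = 2 * 1024**3  # assumed address-space limit, in bytes

    for proc in psutil.process_iter(["name", "memory_info"]):
        name = proc.info["name"] or ""
        if name.lower().startswith("mstrsvr"):
            vms = proc.info["memory_info"].vms  # virtual bytes
            print("virtual bytes: %.0f MB (%.0f%% of assumed UAS limit)"
                  % (vms / 1024**2, 100 * vms / UAS_LIMIT))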
• How is the project being used? Are there very complex reports? Reporting
on very large data sets?
• Are the governor settings too high for Working set, XML cells, result set,
and so on?
To answer these questions, you must be familiar with the system and how it
is being used (Enterprise Manager reports will help you with this). But
perhaps most useful is to know what the system was doing when the memory
depletion occurred. To answer this question, use:
• The DssErrors.log file for details about what was happening when the
memory depletion occurred (see Reading log files, page 605)
• Your knowledge of the system and whether or not the top memory
consumers are heavily used
• If memory depletion does not seem to fall into one of the known
categories, page 620
• How much memory does Intelligence Server use when it starts up?,
page 559
• How does Intelligence Server use memory after it is running?, page 561
• Number of projects
Possible solutions:
• Reduce the Maximum RAM size for report caches (see Maximum RAM
usage, page 229, and below)
The cache lookup table is responsible for most memory problems related to
caching. It matches report requests to report caches and is loaded into
memory on each cluster node. Prompt answers tend to consume the most
memory. To get an idea of the size of the lookup table, calculate the
total size of all CachLKUP.idx files. Note that the cache lookup table's
memory consumption is not governed.
For more detailed information about report caches, see Result caches,
page 204.
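A sketch of that size estimate follows. The cache folder path is an
assumption; point it at your server's cache directory:

    from pathlib import Path

    # Sum the sizes of all CachLKUP.idx files under the cache folder.
    cache_root = Path(r"C:\Program Files\MicroStrategy\Caches")
    total = sum(p.stat().st_size for p in cache_root.rglob("CachLKUP.idx"))
    print("cache lookup tables: %.1f MB" % (total / 1024**2))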
Possible solutions:
Check the cache hit ratio using the Enterprise Manager report “Cache
Analysis.”
• Reduce the Maximum RAM size for report caches (see Maximum RAM
usage, page 229)
Working set
This feature’s memory use typically correlates with the number of open user
sessions and the size of reports the users run.
Possible solutions:
Many messages in the History List consume a lot of memory. When the user
logs in to the system, his or her Inbox is loaded into memory. Every user
logged in consumes more memory.
Possible solutions:
• Decrease the session idle timeouts so that the user’s Inbox can be
unloaded (see Governing active users, page 521)
• Limit the Maximum number of messages per user for the Inbox (see
Governing user resources, page 525).
You may notice that when a certain report is run, memory use spikes. This
could be caused by a number of factors:
Possible solutions:
• Redefine the report or split the report into multiple smaller reports
• Restrict the report’s maximum size so users cannot create large reports
(see Governing results delivery, page 550)
• Reduce the Maximum number of XML cells setting
Possible solutions:
• Have your users export using the “Plain text,” “Excel with plain text,” or
“CSV file format” settings, which use less memory than others
• Reduce the Maximum number of XML cells setting (see Governing
results delivery, page 550)
If memory depletion does not seem to fall into one of the known categories
• Use tools: DssErrors.log, Performance Monitor
To help protect Intelligence Server from reaching memory depletion, you can
enable the Memory Contract Manager. This is a built in tool that controls
whether or not certain job tasks are allowed to occur based on how much
memory they could consume. This does not guarantee that a memory
depletion will not happen, but decreases the chance of it. For more
information on this, see the following sections:
Troubleshooting authentication
This section gives you a list of things to check according to the type of
authentication you are using. Refer to the appropriate authentication section
below.
• LDAP authentication
To use Windows authentication with MicroStrategy Web, you must be
running Web or Web Universal on a Windows machine with Microsoft
IIS. Non-IIS web servers do not support Windows authentication.
[Kernel][Error] ConfigManager::GetServerDefSetting():
ServerDef not initialized: Long SettingId=75.
This may happen because the server definition has not been initialized
correctly.
LDAP authentication
LDAP authentication problems within Intelligence Server usually fall into
one of these categories:
• Authentication issues that include clear text and Secure Socket Layer
(SSL) authentication modes
These are discussed below. For flowcharts, more details, and answers to
frequently asked questions, see MicroStrategy Tech Note TN5300-722-0369.
The following list describes error messages that may appear in MicroStrategy
when trying to connect to the LDAP Server.
Cannot contact LDAP Server: If you receive a “Can’t connect to the LDAP
Server” error message, try to connect using the LDAP Server IP address. You
should also check to be sure the LDAP machine is running. Another
possibility is that the SSL certificate files are not valid. For additional SSL
troubleshooting, see the next section.
There are two modes of authentication against LDAP: Clear Text and SSL.
Depending on which you are using, answering the following questions may
help you reach a solution:
Can the LDAP user (different from the authentication user) log in?
You can test this with any LDAP browser.
Do the LDAP server side logs show success messages? The LDAP
administrator can access these logs.
The user’s DN is specified on the User Editor, Authentication tab in the User
distinguished name (DN) box.
Intelligence Server also allows LDAP groups to be imported. With this option
selected, all the groups to which the user belongs are also imported under the
LDAP Users group (similar to the imported user) when an LDAP user logs
in.
You cannot link MicroStrategy system groups (such as the Everyone
group) to an LDAP group.
The Synchronize at Login options for both users and groups cause
Intelligence Server to check, at the time of the next login, whether the user
and group information in the LDAP directory has changed.
The names and links between the two may or may not be synchronized
depending on whether the synchronize option is selected in combination
with whether users and groups are to be imported. For specific information
about this and to see a flowchart, see MicroStrategy Tech Note
TN5300-722-0369.
• There is no link persisted in the metadata and the user has the privileges
of a “Guest” user as long as the user is logged in. The user has the
privileges of the MicroStrategy Public/Guest group.
• The non-imported user does not have an Inbox, because the user is not
physically present in the metadata.
If users are imported into the metadata, they have their own Inbox and
personal folders. If users are not imported, regardless of whether they are
part of the LDAP Users or LDAP Public group, they do not have an Inbox.
Users that are not imported do not have personal folders and can only save
items in public folders if they have the correct privileges and permissions.
How can I assign security filters, security roles (privileges), or access control
(permissions) to individual LDAP users?
Security filters, security roles, and access control may be assigned to users
after they are imported into the MicroStrategy metadata, but this
information may not be assigned dynamically from information in the LDAP
repository.
May two different users have different LDAP links, but the same user name?
No. The MicroStrategy metadata may not contain two users with the same
login name or user name. If you attempt to create a user with the same user
login or user name, the import fails. Each user object in the MicroStrategy
metadata must have a unique user login and user name.
What happens if there are two users with similar descriptions in the LDAP
directory?
What happens if I import a User Group along with all its members in the LDAP
directory into MicroStrategy metadata and then assign a connection mapping to
the imported group?
The connection mapping of the imported user group to which the user
belongs will not readily apply to the user. For this to work, you will need to
manually assign the user as a member of the group after she or he has been
imported.
If you attempt to use an object that has been deleted by another user during
your session, you may receive an error message similar to the following:
'Object with ID '46E2C20D46100C9AFD5174BF58EB8D12' and type
26(Column) is not found in the metadata. It may have been
deleted.'
You can verify that the object no longer exists in the project by disconnecting
and reconnecting to the project.
The following table lists all the object types and object descriptions that
occur in the MicroStrategy metadata. You can refer to the type of the missing
object from the table and restrict your search only to that particular object.
This way you do not have to search through all the objects in a project.
Object Type   Object Classification     Object Description
0             DssTypeReserved           None
1             DssTypeFilter             Filter
2             DssTypeTemplate           Template
3             DssTypeReportDefinition   Report
4             DssTypeMetric             Metric
5             Unused                    None
6             DssTypeAutostyles         Autostyle
8             DssTypeFolder             Folder
9             Unused                    None
10            DssTypePrompt             Prompt
11            DssTypeFunction           Function
12            DssTypeAttribute          Attribute
13            DssTypeFact               Fact
14            DssTypeDimension          Hierarchy
16            Unused                    None
20            Unused                    None
32            DssTypeProject            Project
35            Unused                    None
38            Unused                    None
41            Unused                    None
43            DssTypeRole               Transformation
47            DssTypeConsolidation      Consolidation
Scan MD
Troubleshooting performance
Project performance
You may notice a project that takes longer to load than others, or you may see
the following error message in the server log file:
[DSS Engine] [Error]DSSSQLEngine: WARNING: Object
cache MaxCacheCount setting (200) is too small
relative to estimated schema size (461).
The project loading time involves retrieving a significant amount of data and
can be time-consuming depending on the project’s size, your hardware
configuration, and your network. If a project takes a long time to load or you
see the error message above, there are some things you can look at:
The error message shown above contains an estimated schema size.
Multiply this number by 10 and enter the result in the Maximum RAM
usage setting; for the message above, that is 461 × 10 = 4,610. This
setting may help optimize your project.
Raising the number for Maximum RAM usage may cause high
memory use, which may cause your machine to experience
problems. Be sure you understand the full ramifications of all
settings before you make significant changes to them.
• Turn off diagnostic tracing: If you have tracing turned on, turn it off to
ensure the project is loaded without logging any unnecessary information
that can slow down performance. (In Desktop, from the Tools menu,
select Diagnostics. Click Help for details on the various tracing options.)
You can use the Diagnostics and Performance Logging Tool to trace
the schema update process and determine whether you need to
increase the memory available to the object cache. In the Diagnostics
and Performance Logging Tool, enable the Metadata Server >
Transaction and Engine > Scope traces. For information about the
Diagnostics and Performance Logging Tool, see Appendix E,
Diagnostics and Performance Logging.
You can improve the response time of the Cache Monitor and the Intelligent
Cube Monitor on HP-UX v2 by changing the following settings on the
Intelligence Server machine (requires root access):
# kctune nfs_new_lock_code=1
# kctune nfs_async_read_avoidance_enabled=1
# kctune nfs_fine_grain_fs_lock=1
# kctune nfs_new_rnode_lock_code=1
# kctune nfs_wakeup_one=1
For the subscription to work, you can redesign the prompt so that it has a
default answer, or simply remove the prompt from the report.
See the Graphing appendix in the Advanced Reporting Guide for details on
many of the graph styles available in MicroStrategy and the specific
requirements for each.
If you change this setting, reports may continue to be returned and
displayed showing more than the maximum number of report result rows
allowed. This is because the setting applies only to reports created after it
has been changed; existing reports are not affected.
You can limit the number of report result rows on existing (and new) reports
individually, using the Results Set Row Limit VLDB property, which can be
specified for any report. For details on this VLDB property, see Results Set
Row Limit, page 689. For information on
accessing the VLDB Properties Editor, see Accessing and working with
VLDB properties, page 654.
This error message is displayed when the SQL string size exceeds the
maximum value set for the SQL/MDX string.
You can increase the maximum value for the SQL/MDX field in the Project
Configuration dialog box as follows:
5 Clear the Use default inherited value - (Default Settings) check box.
6 Increase the Maximum SQL/MDX Size value as required. You can enter
any number between 1 and 999999999. If you enter 0 or -1, the
Maximum SQL/MDX Size is set to the default value of 64000. This
default size may be different for different databases. It depends on the
database instance that you select.
You should enter a value that a certified ODBC driver can handle; a
large value can cause the report to fail in the ODBC driver. This is
dependent on the database type you are using.
If increasing the value of this VLDB property does not resolve the
issue, try simplifying the report. You can simplify a report by
removing attributes, metrics, and filters. Importing large sets of
elements for filters can often cause a large SQL/MDX size.
For more information, see Maximum SQL/MDX Size, page 688 in the SQL
Generation and Data Processing: VLDB Properties appendix of this guide.
This error message can result from an incorrect setting in the database
instance. If the database instance is using a Microsoft Excel file as a data
source and the database instance type is set to Generic DBMS, there is a
change in the syntax. This change in the syntax generates the error message.
2 Right-click the name of the database instance that you want to modify
and select Edit. The Database Instance dialog box opens.
4 Click OK.
In order for the change to take effect, you must restart the MicroStrategy
Intelligence Server that uses this database instance.
Troubleshooting subscriptions
• A subscription or schedule owner must have the Use permission and the
Browse permission for a contact, to see that specific contact in any list of
contacts.
Logon failure
When you attempt to start Intelligence Server, you may receive the following
error message:
Failed to start service
This issue can occur because the password for the account used to
automatically start Intelligence Server has changed. MicroStrategy Service
Manager does not automatically update the stored password, so you must
manually update this password.
Error code: -1
In all these cases, the error occurs when Intelligence Server is not activated
within the activation grace period.
In this case, the system date on the Intelligence Server machine is incorrect,
and is beyond the expiration date of the installation key. To start Intelligence
Server, correct the date on the Intelligence Server machine and then restart
the machine.
If powering on after a power failure, it does not matter what order the nodes
are powered back on. Once the nodes are on, use the Cluster Monitor to
determine which machine is the primary node, and then manage any caches.
In Desktop, the project source definition includes the server name. Desktop
only connects to a specific node, so you can control which server you are
connected to.
Nodes in the cluster can host a different set of projects from the same
metadata. The node to which a Desktop user connects is important because it
dictates which projects will be available to the user at that time.
If the cache is available on the local node, the Cache Monitor will increment
the hit count. If the cache is retrieved from another node, speed of response
can indicate whether a cache is hit. Statistics tables can provide additional
data on cache hits.
When an object is edited on one cluster node, the updated version ID of the
object is announced to the other nodes. This allows the other nodes to
invalidate the object if they have it in memory and retrieve a fresh copy from
the metadata. Therefore, in this instance there is no need to purge the object
cache.
If changes to an object are made in 2-tier (Direct) mode, those changes will
not be propagated to any Intelligence Servers connected to the metadata.
Additionally, if an object is modified from an Intelligence Server not in the
cluster but using the same metadata, the cluster nodes will not know of the
object change. In these cases, the object cache should be purged.
You can automate purging the element cache using MicroStrategy Command
Manager. See the MicroStrategy System Administration Guide, Volume 2.
For Intelligence Server 7.1 and later, the combined History List in memory is
a sum of all local files and is automatically synchronized. Therefore, you
cannot tell which pointers are physically located on which machine.
If powering on after a power failure, it does not matter what order the
machines are started. It is important to locate the machine that is the
primary node, so that cache management can be controlled. The primary
node is designated in the Cluster Monitor. See Managing your clustered
projects, page 504.
If statistics are being logged but some data is being lost, the load on your
system may be too high. For ways to decrease the system load, see Chapter
12, Tuning your System for Best Performance.
If you are logging statistics to a DB2 database, disabling the
“Application Using Threads” setting for the statistics DSN may
improve performance on AIX systems. For more information and
detailed instructions, see your database documentation.
For troubleshooting tips in cases where statistics suddenly stop being logged,
or are not being logged at all, see Statistics logging suddenly stops, page 646.
If statistics do not appear to be logged for your project, first verify that the
Intelligence Servers to be monitored are correctly logging information in the
statistics tables, and that these tables are correctly located within the
Enterprise Manager Warehouse. When the statistics tables are created using
the MicroStrategy Configuration Wizard, they must be created in the
database that will be used as the Enterprise Manager warehouse.
The following Structured Query Language (SQL) scripts can confirm whether
statistics are being recorded correctly. They should be executed using a
native layer query tool (for example, SQL+ for Oracle, Query Analyzer for
SQL Server).
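For example, a quick way to confirm that rows are arriving is to count them
before and after some known user activity. The sketch below assumes the
default statistics table names IS_SESSION_STATS and IS_REPORT_STATS;
verify these names against your own statistics schema before running it:

    -- Session rows; the count should grow as users log in.
    SELECT COUNT(*) AS session_rows FROM IS_SESSION_STATS;

    -- Report execution rows; the count should grow as reports run.
    SELECT COUNT(*) AS report_rows FROM IS_REPORT_STATS;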
For a detailed list of the statistics tables and columns, see the appendix
Enterprise Manager Data Model and Object Definitions, in the System
Administration Guide Volume 2.
• Intelligence Server has shut down while jobs were still executing.
Statistics are not logged for jobs that do not complete before Intelligence
Server shuts down. This applies to both a manual shutdown and to an
unexpected shutdown (crash).
To check this, expand the Administration section for the project source
that should be logging statistics. Select Database Instance Manager.
Right-click the statistics database instance and select Edit. Then verify
the database connection type, data source name (DSN), and default
database connection.
• If you are using single instance session logging and the specified project is
not one of the monitored projects, then no data for any monitored project
is logged.
If statistics logging for a project suddenly stops, one or more of the following
factors may be the cause:
• The database server hosting the statistics database may not be running.
• A heavy load on the statistics database may have caused statistics logging
to shut down.
• The login or password for the statistics database may have been modified.
At the end of your MicroStrategy installation, you were prompted to view the
MicroStrategy readme, which also includes release notes. You can access the
readme in the following ways:
Readme
Release notes
• Known issues: You can review any functionality that has been identified
as having a known issue for the related version. A description of the issue
is given, and in many cases workarounds are provided.
• Resolved issues: You can review any issues that have been resolved for
the related version.
• Troubleshooting: You can review a list of troubleshooting tips for each
MicroStrategy component. The troubleshooting tips may include a
description of the symptom of the issue, the cause of the issue, and a
resolution for the problem.
• Release notes
• White papers
• Newsletters
The troubleshooting documents that are included in the Knowledge Base are
created by MicroStrategy developers, engineers, and consultants. You can
find helpful tips and solutions for various MicroStrategy error codes and
issues.
The single field at the top of the support site is where you type a search word
or phrase to search the entire Knowledge Base. You can do a simple search
for keywords or phrases, or choose an Advanced Search at the top right and
include search criteria such as known issues, document status, last
modification date, and so on.
Use these best practices to improve the results of your Knowledge Base
search:
• If you are troubleshooting an error message or error code, you can copy
the error message into your Knowledge Base search. You may have to
enclose the error message in double quotes (“ ”) if it includes certain
characters.
• You can choose the software version in the advanced search options, but
you can also include the version in your search terms. For example, you
can type product line keywords such as 9.x, 8.1.2, and so on.
Customer Forums
The MicroStrategy Customer Forums is a group of message boards where
MicroStrategy customers can have open discussions on implementation
experiences, troubleshooting steps, and any fixes or best practices for
MicroStrategy products. You can post and respond to message threads. The
threads can help answer questions by pooling together one or more
experiences and solutions to an issue.
The Customer Forums are not meant to replace Technical Support, and,
while MicroStrategy employees monitor the forums from time to time, they
are not responsible for answering messages posted to the Customer Forums.
You can access the Customer Forums from the left side menu of the
Knowledge Base site, or from the following URL:
https://forums.microstrategy.com.
The Customer Forums also provide a search field at the top right, which you
can use to search for keywords, author, date, and so on.
Technical Support
MicroStrategy Technical Support helps to answer questions and
troubleshoot issues related to your MicroStrategy products. If the
troubleshooting resources described above do not provide you with a viable
solution to your problem, you can call Technical Support to help
troubleshoot your products. For more information on Technical Support and
how best to ensure a timely solution to your questions, see Technical
Support, page xxxiv in the Book Overview and Additional Resources of this
guide.
SQL GENERATION AND DATA PROCESSING: VLDB PROPERTIES
Introduction
Each VLDB property has two or more VLDB settings. VLDB settings are the
different options available for a VLDB property. For example, the Metric
Join Type VLDB property has two VLDB settings, Inner Join and Outer Join.
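For example, consider two metrics whose facts are stored in different tables.
The following sketch uses hypothetical f_sales and f_cost tables to show how
the two settings shape the join in the generated SQL:

    -- Inner Join: stores that have a sales row but no cost row are dropped.
    SELECT s.store_id, s.sales, c.cost
    FROM f_sales s
    JOIN f_cost c ON c.store_id = s.store_id;

    -- Outer Join: those stores are kept, with NULL for the missing metric.
    SELECT s.store_id, s.sales, c.cost
    FROM f_sales s
    LEFT OUTER JOIN f_cost c ON c.store_id = s.store_id;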
VLDB properties also help you configure and optimize your system. You can
use MicroStrategy for different types of data analysis on a variety of data
warehouse implementations. VLDB properties offer different configurations
to support or optimize your reporting and analysis requirements in the best
way.
For example, you may find that enabling the Set Operator Optimization
VLDB property provides a significant performance gain by utilizing set
operators, such as EXCEPT and INTERSECT, in place of complex subquery
conditions, as sketched below.
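A minimal illustration of the kind of rewrite this optimization performs, using
a hypothetical fact_sales table (the exact SQL MicroStrategy generates
depends on your report definition and database):

    -- Without set operators, a "sold product A but not product B"
    -- filter is expressed with a subquery:
    SELECT store_id FROM fact_sales WHERE product = 'A'
    AND store_id NOT IN
        (SELECT store_id FROM fact_sales WHERE product = 'B');

    -- With set operators, the same logic becomes a single EXCEPT
    -- (MINUS on Oracle):
    SELECT store_id FROM fact_sales WHERE product = 'A'
    EXCEPT
    SELECT store_id FROM fact_sales WHERE product = 'B';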
Order of precedence
VLDB properties can be set at multiple levels, providing flexibility in the way
you can configure your reporting environment. For example, you can choose
to apply a setting to an entire database instance or only to a single report
associated with that database instance.
The following diagram shows how VLDB properties that are set for one level
take precedence over those set for another.
[Diagram: the levels at which VLDB properties can be set (Report, Template,
Metric, Attribute or Transformation, Project, Database Instance), with arrows
showing the overwrite authority of each level.]
The arrows depict the overwrite authority of the levels, with the report level
having the greatest authority. For example, if a VLDB property is set one way
for a report and the same property is set differently for the database instance,
the report setting takes precedence.
Properties set at the report level override properties at every other level.
Properties set at the template level override those set at the metric level, the
database instance level, and the DBMS level, and so on.
• Upgrading the VLDB options for a particular database type, page 662
When you access the VLDB Properties Editor for a database instance, you see
the most complete set of the VLDB properties. However, not all properties
are available at the database instance level. The rest of the access methods
have a limited number of properties available depending on which properties
are supported for the selected object/level.
The list below describes every way to access the VLDB Properties Editor:
• Attribute: In the Attribute Editor, on the Tools menu, select VLDB
Properties.
• Metric: In the Metric Editor, on the Tools menu, point to Advanced
Settings, and then select VLDB Properties.
• Project: In the Project Configuration Editor, expand Project definition,
and select Advanced. In the Analytical Engine VLDB Properties area, click
Configure.
• Report: In the Report Editor or Report Viewer, on the Data menu, select
VLDB Properties.
• Template: In the Template Editor, on the Data menu, select VLDB
Properties.
• Transformation: In the Transformation Editor, on the Tools menu, select
VLDB Properties. Only one property (Transformation Role Processing) is
available at this level. All other VLDB properties must be accessed from
one of the other levels listed here.
• All VLDB properties at the DBMS level are used for initialization
and debugging only. You cannot modify a VLDB property at the
DBMS level.
• VLDB Settings list: Shows the list of folders into which the VLDB
properties are grouped. Expand a folder to see the individual properties.
The settings listed depend on the level at which the VLDB Properties
Editor was accessed (see the table above). For example, if you access the
VLDB Properties Editor from the project level, you only see Analytical
Engine properties.
• Options and Parameters box: Where you set or change the parameters
that affect the SQL syntax.
• SQL preview box: (Only appears for VLDB properties that directly
impact the SQL statement.) Shows a sample SQL statement and how it
changes when you edit a property.
When you change a property from its default, a check mark appears
on the folder in which the property is located and on the property
itself.
• Display the physical setting names alongside the names that appear in the
interface. The physical setting names can be useful when you are working
with MicroStrategy Technical Support to troubleshoot the effect of a
VLDB property.
• Display descriptions of the values for each setting. For example, if the
Distinguish Duplicated Rows property is set to False, its description is
“7.x client behavior, duplicated report rows will not be distinguished.”
• Hide all settings that are currently set to default values. This can be useful
if you want to see only those properties and their settings which have
been changed from the default.
The steps below show you how to create a VLDB settings report. A common
scenario for creating a VLDB settings report is to create a list of default VLDB
settings for the database or other data source you are connecting to, which is
described in Default VLDB settings for specific data sources, page 888.
1 Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on accessing the VLDB
Properties Editor, see Opening the VLDB Properties Editor, page 654.)
4 You can choose to have the report display or hide the information
described above, by selecting the appropriate check boxes.
5 You can copy the content in the report using the Ctrl+C keys on your
keyboard. Then paste the information into a text editor or word
processing program (such as Microsoft Word) using the Ctrl+V keys.
1 Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on object levels, see
Order of precedence, page 653.)
2 Modify the VLDB property you want to change. For use cases, examples,
sample code, and other information on every VLDB property, see Details
for all VLDB properties, page 664.
3 If necessary, you can ensure that a property is set to the default. At the
bottom of the Options and Parameters area for that property (on the
right), select the Use default inherited value check box. Next to this
check box name, information appears about what level the setting is
inheriting its default from.
4 Click Save and Close to save your changes and close the VLDB
Properties Editor.
5 You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.
1 Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on object levels, see
Order of precedence, page 653.)
2 From the Tools menu, select Show Advanced Settings. All advanced
properties display with the other properties.
3 Modify the VLDB property you want to change. For use cases, examples,
sample code, and other information on every VLDB property, see Details
for all VLDB properties, page 664.
4 If necessary, you can ensure that a property is set to the default. At the
bottom of the Options and Parameters area for that property (on the
right), select the Use default inherited value check box. Next to this
check box name, information appears about what level the setting is
inheriting its default from.
5 Click Save and Close to save your changes and close the VLDB
Properties Editor.
6 You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.
If you perform this procedure, any changes you may have made to any
or all VLDB properties displayed in the chosen view of the VLDB
Properties Editor will be lost. For details on which VLDB properties
are displayed depending on how you access the VLDB Properties
Editor, see Details for all VLDB properties, page 664.
1 Use either or both of the following methods to see your system’s VLDB
properties that are not set to default. You should know which VLDB
properties you will be affecting when you return properties to their
default settings:
• Generate a report listing VLDB properties that are not set to the
default settings. For steps, see Creating a VLDB settings report,
page 656, and select the check box named Do not show settings
with Default values.
• Display an individual VLDB property by viewing the VLDB property
whose default/non-default status you are interested in. (For steps, see
Viewing and changing VLDB properties, page 657.) At the bottom of
the Options and Parameters area for that property (on the right), you
can see whether the Use default inherited value check box is
selected. Next to this check box name, information appears about
what level the setting is inheriting its default from.
2 Open the VLDB Properties Editor to display the VLDB properties that
you want to set to their original defaults. (For information on object
levels, see Order of precedence, page 653.)
3 In the VLDB Properties Editor, you can identify any VLDB properties that
have had their default settings changed, because they are identified with a
check mark. The folder in which the property is stored has a check mark on
it (as shown on the Joins folder in the example image below), and the
property name itself has a check mark on it (as shown on the gear icon in
front of the Cartesian Join Warning property name in the second image
below).
4 From the Tools menu, select Set all values to default. See the warning
above if you are unsure about whether to set properties to the default.
5 In the confirmation window that appears, click Yes. All VLDB properties
that are displayed in the VLDB Properties Editor are returned to their
default settings.
6 Click Save and Close to save your changes and close the VLDB
Properties Editor.
7 You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.
• It loads updated properties for existing database types that are still
supported.
• It keeps properties for existing database types that are no longer
supported. If an existing database type does not have any updates, but the
properties for it have been removed, the process does not remove them
from your metadata.
Prerequisites
• You have upgraded your MicroStrategy environment, as described in the
Upgrade Guide.
3 Right-click any database instance and select Edit. The Database Instances
Editor opens.
5 Click Load to load all the available database types for a MicroStrategy
version.
6 Use the arrows to add any required database types by moving them from
the Available database types list to the Existing database types list.
For descriptions and examples of all VLDB properties and to see what
properties can be modified, see Details for all VLDB properties, page 664.
The VLDB properties are grouped into different property sets, depending on
their functionality:
• Limiting report rows, SQL size, and SQL time-out: Governing, page 687
The table below summarizes the Analytical Engine VLDB properties, and
includes a description of the issue or optimization that each property
addresses, its possible values, and its default value.
Display NULL On Top
Determines where NULL values appear when you sort data.
• Possible values: Display NULL values on bottom while sorting; Display
NULL values on top while sorting
• Default: Display NULL values on top while sorting

Distinguish Duplicated Rows
Determines how the Analytical Engine handles duplicate IDs and their
associated metric values. In MicroStrategy 7.x and earlier, only one row for
each ID is returned. In MicroStrategy 8.x and later, rows with the same ID
are returned as separate rows for data analysis. However, MicroStrategy
does not support IDs with multiple descriptions.
• Possible values: 7.x client behavior, duplicated report rows will not be
distinguished; Recommended setting for all 8.x projects, distinguish
duplicated report rows
• Default: 7.x client behavior, duplicated report rows will not be
distinguished

Evaluation Ordering
Determines the order in which the Analytical Engine resolves different
types of calculations. This is an advanced property.
• Possible values: 6.x Evaluation Order; 7.x Evaluation Order
• Default: 6.x Evaluation Order

Subtotal and Aggregation Compatibility
Determines whether subtotaling and aggregation are compatible with
MicroStrategy version 7.1 or version 7.2. If this is set for 7.1, the Subtotal
Dimensionality Aware setting is ignored.
• Possible values: 7.1 client sub-total behavior, 7.2 metric aggregations will
not be dimensionally aware; Recommended setting for all 7.2 projects
• Default: Recommended setting for all 7.2 projects

Subtotals over Consolidations Compatibility
Determines how consolidation elements are totaled. In MicroStrategy
version 7.2.x and earlier, totals include all related attribute elements, not
just those in the consolidation. In MicroStrategy 7.5 and later, you can set
the Analytical Engine to total values for only those elements that are part of
the consolidation. This property must be set at the project level.
• Possible values: Evaluate subtotals over consolidation elements and their
corresponding attribute elements (behavior for 7.2.x and earlier); Evaluate
subtotals over consolidation elements only (behavior for 7.5 and later)
• Default: Evaluate subtotals over consolidation elements and their
corresponding attribute elements (behavior for 7.2.x and earlier)
The Display NULL on Top property determines where NULL values appear
when you sort data. The default is to display the NULL values at the top of a
list of values when sorting.
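On databases that support explicit NULL ordering, the analogous SQL looks
like the following sketch (for illustration only, with a hypothetical
regional_sales table; the Analytical Engine applies this ordering itself when
it sorts the result set):

    -- NULL sales rows sort to the top of the list.
    SELECT region_name, sales
    FROM regional_sales
    ORDER BY sales NULLS FIRST;

    -- NULLS LAST would sort them to the bottom instead.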
The following data is used to illustrate the Distinguish Duplicated Rows
property:

Country   Year   Sales (thousands)
USA       2003   10
USA       2003   20
USA       2004   25
Germany   2003   15
Germany   2003   30
The combination of Country and Year returns two pairs of duplicate rows for
the year 2003. The Distinguish Duplicated Rows setting allows you to handle
data of this type in the following ways:
• In MicroStrategy 7.x and earlier versions, the duplicate rows are not
distinguished. Therefore, when a duplicate row is returned the Analytical
Engine returns only one of the rows. In this scenario, the data given in the
previous table may be returned in a report as shown below.
Country   Year   Sales (thousands)
USA       2003   10
USA       2004   25
Germany   2003   15
Since two duplicate rows are not returned for the report, the metric data
for the missing rows cannot be analyzed.
• In MicroStrategy 8.x, duplicate rows can be distinguished and included in
reports. This functionality allows you to analyze all of the metric values
for duplicate rows. For the example with Country, Year, and Sales (in
thousands), the report returns the data as shown in the first table in this
section, which displays all duplicate rows.
Evaluation Ordering
• In MicroStrategy 6.x and earlier versions, the evaluation order could not
be changed. The evaluation order for non-subtotal values was compound
metric, consolidation, and then metric limit. The evaluation order for
subtotal values was consolidation, subtotal, and then compound metric.
• MicroStrategy 7.x provides a new evaluation order and the ability to alter
the order. In MicroStrategy 7.x and later, the evaluation order for both
non-subtotal and subtotal values is compound metric, consolidation,
metric limit, and then subtotal. To change the evaluation order for
MicroStrategy 7.x and later, on the Report Editor menu bar, select Data
and then Report Data Options.
If the Evaluation Ordering VLDB setting is set to the 6.x evaluation
order, then all custom ordering is ignored.
If you are running in an environment with mixed client versions, then
your 7.1 clients get level-aware subtotals only. If you do not want your
7.1 or 7i clients to see the new level-aware subtotals, then you can
change this setting to FALSE. However, this also disables the
level-aware dynamic aggregation for 7i users. MicroStrategy
recommends that you leave the setting as TRUE.
Examples
Notice that the Dollar Sales by Quarter totals are simply a sum of the
quarterly totals.
Quarter   Category      Dollar Sales
Q1        Food          250
Q1        Electronics   625
Q2        Food          200
Q2        Electronics   600
In MicroStrategy 7i, the behavior produces the report shown below. Notice
that Dollar Sales by Quarter is showing correct subtotal numbers. The
Analytical Engine is aware of the metric’s level (Quarter) and therefore does
not do additional totaling.
When the Sub-Category attribute is removed, you get the results shown
below. Notice that the subtotals are still level-aware. This is not because the
SQL is re-executed, but because the 7i Analytical Engine is smart enough to
calculate the numbers correctly.
[Report columns: Quarter, Category, Dollar Sales, Dollar Sales by Quarter;
data rows not reproduced here.]
If you have turned off Subtotal Dimensionality Aware and you want dynamic
aggregation to have the same values as the subtotals, set this property to “7.1
client sub-total behavior, 7.2 metric aggregations will not be
dimensionality-aware”. This changes the above report to the report shown
below, after dynamic aggregation.
[Report columns: Quarter, Category, Dollar Sales, Dollar Sales by Quarter;
data rows not reproduced here.]
The default setting is “Recommended setting for all 7.2 projects.” However,
to see the subtotals the way they appeared in pre-7i versions, select “7.1 client
sub-total behavior, 7.2 metric aggregations will not be
dimensionality-aware.”
MicroStrategy 7i (7.2.x and later) has the ability to detect the level of a metric
and subtotal it accordingly. The Subtotal Dimensionality Aware property
allows you to choose between the 7.1 and earlier subtotaling behavior
(FALSE) and the 7.2.x and later subtotaling behavior (TRUE). MicroStrategy
recommends that you set this property to TRUE.
If this property is set to True, and a report contains a metric that is calculated
at a higher level than the report level, the subtotal of the metric is calculated
based on the metric’s level. For example, a report at the Quarter level
containing a yearly sales metric shows the yearly sales as the subtotal instead
of simply summing the rows on the report.
Example
The quarterly subtotal is calculated as 600, that is, a total of the Quarterly
Dollar Sales values. The yearly subtotal is calculated as 2400, the total of the
Yearly Dollar Sales values. This is how MicroStrategy 7.1 calculates the
subtotal.
The quarterly subtotal is still 600. Intelligence Server is aware of the level of
the Yearly Dollar Sales metric, so rather than simply adding the column
values, it correctly calculates the Yearly Dollar Sales total as 600.
– This VLDB property must be set at the project level for the
calculation to be performed correctly.
– The setting takes effect when the project is initialized, so after this
setting is changed you must reload the project or restart
Intelligence Server.
– After you enable this setting, you must enable subtotals at either
the consolidation level or the report level. If you enable subtotals
at the consolidation level, subtotals are available for all reports in
which the consolidation is used. (Consolidation Editor ->
Elements menu -> Subtotals -> Enabled.) If you enable subtotals
at the report level, subtotals for consolidations can be enabled on a
report-by-report basis. (Report Editor -> Report Data Options ->
Subtotals -> Yes. If Default is selected, the Analytical Engine
reverts to the Enabled/Disabled property as set on the
consolidation object itself.)
If the project is registered on an Intelligence Server version 7.5.x but is
accessed by clients using Desktop version 7.2.x or earlier, leave this
property setting on “Evaluate subtotals over consolidation elements
and their corresponding attribute elements.” Otherwise, metric values
may return as zeroes when Desktop 7.2.x users execute reports with
consolidations, or when they pivot in such reports.
Change this property from the default only when all Desktop clients
have upgraded to MicroStrategy version 7.5.x.
Level at which this property can be set: Project only
Example
The Total value is calculated for more elements than are displayed in the
Super Regions column. The Analytical Engine is including the following
elements in the calculation: East + (Northeast + Mid-Atlantic + Southeast) +
Central + (Central + South) + West + (Northwest + Southwest).
The Total value is now calculated for only the Super Regions consolidation
elements. The Analytical Engine is including only the following elements in
the calculation: East + Central + West.
Aggregate Table Validation
Defines whether dynamic sourcing is enabled or disabled for aggregate
tables.
• Possible values: Aggregate tables contain the same data as corresponding
detail tables and the aggregation function is SUM; Aggregate tables contain
either less data or more data than their corresponding detail tables and/or
the aggregation function is not SUM
• Default: Aggregate tables contain the same data as corresponding detail
tables and the aggregation function is SUM

Attribute Validation
Defines whether dynamic sourcing is enabled or disabled for attributes.
• Possible values: Attribute columns in fact tables and lookup tables do not
contain NULLs and all attribute elements in fact tables are present in
lookup tables; Attribute columns in fact tables or lookup tables may contain
NULLs and/or some attribute elements in fact tables are not present in
lookup tables
• Default: Attribute columns in fact tables and lookup tables do not contain
NULLs and all attribute elements in fact tables are present in lookup tables

Enable Cube Parse Log in SQL View
Defines whether the Intelligent Cube Parse log is displayed in the SQL View
of an Intelligent Cube. This log helps determine which reports use dynamic
sourcing to connect to the Intelligent Cube.
• Possible values: Disable Cube Parse Log in SQL View; Enable Cube Parse
Log in SQL View
• Default: Disable Cube Parse Log in SQL View

Enable Dynamic Sourcing for Report
Defines whether dynamic sourcing is enabled or disabled for reports.
• Possible values: Disable dynamic sourcing for report; Enable dynamic
sourcing for report
• Default: Enable dynamic sourcing for report

Enable Extended Mismatch Log in SQL View
Defines whether the extended mismatch log is displayed in the SQL View of
a report. The extended mismatch log helps determine why a metric prevents
the use of dynamic sourcing.
• Possible values: Disable Extended Mismatch Log in SQL View; Enable
Extended Mismatch Log in SQL View
• Default: Disable Extended Mismatch Log in SQL View

Enable Mismatch Log in SQL View
Defines whether the mismatch log is displayed in the SQL View of a report.
This log helps determine why a report that can use dynamic sourcing cannot
connect to a specific Intelligent Cube.
• Possible values: Disable Mismatch Log in SQL View; Enable Mismatch
Log in SQL View
• Default: Disable Mismatch Log in SQL View

Enable Report Parse Log in SQL View
Defines whether the Report Parse log is displayed in the SQL View of a
report. This log helps determine whether the report can use dynamic
sourcing to connect to an Intelligent Cube.
• Possible values: Disable Report Parse Log in SQL View; Enable Report
Parse Log in SQL View
• Default: Disable Report Parse Log in SQL View

Metric Validation
Defines whether dynamic sourcing is enabled or disabled for metrics.
• Possible values: Fact table does not contain NULLs for metric values; Fact
table may contain NULLs for metric values
• Default: Fact table does not contain NULLs for metric values

String Comparison Behavior
Defines whether dynamic sourcing is enabled or disabled for attributes that
are used in filter qualifications.
• Possible values: Use case insensitive string comparison with dynamic
sourcing; Do not allow any string comparison with dynamic sourcing
• Default: Use case insensitive string comparison with dynamic sourcing
Reports that use aggregate tables are available for dynamic sourcing by
default, but there are some data modeling conventions that should be
considered when using dynamic sourcing.
You can enable and disable dynamic sourcing for aggregate tables by
modifying the Aggregate Table Validation VLDB property. This VLDB
property has the following options:
• Aggregate tables contain either less data or more data than their
corresponding detail tables and/or the aggregation function is not
SUM: This option disables dynamic sourcing for aggregate tables. This
setting should be used if your aggregate tables are not modeled to support
dynamic sourcing. The use of an aggregation function other than Sum or
the mismatch of data in your aggregate tables with the rest of your data
warehouse can cause incorrect data to be returned to reports from
Intelligent Cubes through dynamic sourcing.
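For illustration, the following sketch (with hypothetical agg_region_sales
and fact_store_sales tables) shows an aggregate table that satisfies the first
option: it summarizes every detail row and uses only SUM.

    -- Safe for dynamic sourcing: same data as the detail table, SUM only.
    CREATE TABLE agg_region_sales AS
    SELECT region_id, SUM(sales) AS sales
    FROM fact_store_sales
    GROUP BY region_id;
    -- If this table were filtered to a subset of rows, or built with AVG
    -- or MAX instead of SUM, results served through dynamic sourcing
    -- could differ from the detail table, so the second option should
    -- be selected instead.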
You can disable dynamic sourcing individually for reports that use aggregate
tables or you can disable dynamic sourcing for all reports that use aggregate
tables within a project. While the definition of the VLDB property at the
project level defines a default for all reports in the project, any modifications
at the report level take precedence over the project level definition. For
information on defining a project-wide dynamic sourcing strategy, see the
OLAP Services Guide.
Attribute Validation
Attributes are available for dynamic sourcing by default, but there are some
data modeling conventions that should be considered when using dynamic
sourcing.
Two scenarios can cause attributes that use inner joins to return incorrect
data when dynamic sourcing is used:
• Attribute columns in fact tables or lookup tables contain NULLs
• All attribute elements in fact tables are not also present in lookup tables
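The following sketch (with hypothetical fact_sales and lu_customer tables)
shows the second scenario: an inner join silently drops fact rows whose
attribute elements are missing from the lookup table, so results computed in
the database and results served from an Intelligent Cube can disagree.

    SELECT l.customer_name, SUM(f.sales) AS sales
    FROM fact_sales f
    JOIN lu_customer l ON l.customer_id = f.customer_id
    GROUP BY l.customer_name;
    -- Fact rows whose customer_id has no match in lu_customer are
    -- excluded from these totals; SQL built with different join paths
    -- for an Intelligent Cube may treat those rows differently.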
You can enable and disable dynamic sourcing for attributes by modifying the
Attribute Validation VLDB property. This VLDB property has the following
options:
You can disable dynamic sourcing for attributes individually or you can
disable dynamic sourcing for all attributes within a project. While the
definition of the VLDB property at the project level defines a default for all
attributes in the project, any modifications at the attribute level take
precedence over the project level definition. For information on defining a
project-wide dynamic sourcing strategy, see the OLAP Services Guide.
Enable Cube Parse Log in SQL View is an advanced VLDB property that is
hidden by default. For information on how to display this property, see
Viewing and changing advanced VLDB properties, page 658.
The Intelligent Cube parse log helps determine which reports use dynamic
sourcing to connect to an Intelligent Cube, as well as why some reports
cannot use dynamic sourcing to connect to an Intelligent Cube. By default,
the Intelligent Cube parse log can only be viewed using the MicroStrategy
Diagnostics and Performance Logging tool. You can also allow this log to be
viewed in the SQL View of an Intelligent Cube.
• Disable Cube Parse Log in SQL View: This is the default option, which
allows the Intelligent Cube parse log to only be viewed using the
MicroStrategy Diagnostics and Performance Logging tool.
• Enable Cube Parse Log in SQL View: Select this option to allow the
Intelligent Cube parse log to be viewed in the SQL View of an Intelligent
Cube. This information can help determine which reports use dynamic
sourcing to connect to the Intelligent Cube.
You can enable dynamic sourcing for reports by modifying the Enable
Dynamic Sourcing for Report VLDB property. This VLDB property has the
following options:
You can enable dynamic sourcing for reports individually or you can enable
dynamic sourcing for all reports within a project. While the definition of the
VLDB property at the project level defines a default for all reports in the
project, any modifications at the report level take precedence over the project
level definition. For information on defining a project-wide dynamic
sourcing strategy, see the OLAP Services Guide.
The extended mismatch log helps determine why a metric prevents the use
of dynamic sourcing. This
information is listed for every metric that prevents the use of dynamic
sourcing. By default, the extended mismatch log can only be viewed using
the MicroStrategy Diagnostics and Performance Logging tool. You can also
allow this log to be viewed in the SQL View of a report.
The extended mismatch log can increase in size quickly and thus is
best suited for troubleshooting purposes.
The mismatch log helps determine why a report that can use dynamic
sourcing cannot connect to a specific Intelligent Cube. By default, the
mismatch log can only be viewed using the MicroStrategy Diagnostics and
Performance Logging tool. You can also allow this log to be viewed in the
SQL View of a report.
• Disable Mismatch Log in SQL View: This is the default option, which
allows the mismatch log to only be viewed using the MicroStrategy
Diagnostics and Performance Logging tool.
• Enable Mismatch Log in SQL View: Select this option to allow the
mismatch log to be viewed in the SQL View of a report. This information
can help determine why a report that can use dynamic sourcing cannot
connect to a specific Intelligent Cube.
The report parse log helps determine whether the report can use dynamic
sourcing to connect to an Intelligent Cube. By default, the report parse log
can only be viewed using the MicroStrategy Diagnostics and Performance
Logging tool. You can also allow this log to be viewed in the SQL View of a
report.
• Disable Report Parse Log in SQL View: This is the default option,
which allows the report parse log to only be viewed using the
MicroStrategy Diagnostics and Performance Logging tool.
• Enable Report Parse Log in SQL View: Select this option to allow the
report parse log to be viewed in the SQL View of a report . This
information can help determine whether the report can use dynamic
sourcing to connect to an Intelligent Cube.
Metric Validation
Metrics are available for dynamic sourcing by default, but there are some
data modeling conventions that should be considered when using dynamic
sourcing.
In general, if metrics use outer joins, accurate data can be returned to reports
from Intelligent Cubes through dynamic sourcing. However, if metrics use
inner joins, which is a more common join type, you should verify that the
metric data can be correctly represented through dynamic sourcing.
If the fact table that stores data for metrics includes NULL values for metric
data, this can cause metrics that use inner joins to return incorrect data when
dynamic sourcing is used. This scenario is uncommon.
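A small sketch of why this matters, using a hypothetical fact_sales table:
SQL aggregation functions skip NULLs, so counts and totals computed over
rows with NULL metric values can differ from what a metric joined over
those rows returns.

    SELECT COUNT(*) AS all_rows,          -- counts every row
           COUNT(sales) AS non_null_rows, -- NULL metric values are skipped
           SUM(sales) AS total_sales      -- SUM also ignores NULLs
    FROM fact_sales;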
You can enable and disable dynamic sourcing for metrics by modifying the
Metric Validation VLDB property. This VLDB property has the following
options:
• Fact table does not contain NULLs for metric data: This is the default
option for metrics, which enables metrics for dynamic sourcing.
• Fact table may contain NULLs for metric data: This option disables
dynamic sourcing for metrics unless outer joins are used for the metric.
This setting should be used if your metric data is not modeled to support
dynamic sourcing. The inclusion of NULLs in fact tables that contain your
metric data can cause incorrect data to be returned to reports from
Intelligent Cubes through dynamic sourcing.
You can disable dynamic sourcing for metrics individually or you can disable
dynamic sourcing for all metrics within a project. While the definition of the
VLDB property at the project level defines a default for all metrics in the
project, any modifications at the metric level take precedence over the
project level definition. For information on defining a project-wide dynamic
sourcing strategy, see the OLAP Services Guide.
To ensure that dynamic sourcing can return the correct results for attributes,
you must also verify that filtering on attributes achieves the same results
when executed against your database versus an Intelligent Cube.
Consider a filter qualification that filters on customers that have a last name
beginning with the letter h. If your database is case-sensitive and uses
uppercase letters for the first letter in a name, a filter qualification using a
lowercase h is likely to return no data. However, this same filter qualification
on the same data stored in an Intelligent Cube returns all customers that
have a last name beginning with the letter h.
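For example, on a case-sensitive database the two queries below (with a
hypothetical lu_customer table) return different rows; the case-insensitive
comparison applied by dynamic sourcing behaves like the second query.

    -- Returns no rows if last names are stored as 'Hanson', 'Hill', etc.
    SELECT customer_id, cust_last_name
    FROM lu_customer
    WHERE cust_last_name LIKE 'h%';

    -- Case-insensitive version.
    SELECT customer_id, cust_last_name
    FROM lu_customer
    WHERE UPPER(cust_last_name) LIKE 'H%';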
This is a good option if your database does not enforce case sensitivity. In
this scenario, dynamic sourcing returns the same results that would be
returned by the filter qualification if the report was executed against the
database.
You can modify this VLDB property for attributes individually or you can
modify it for all attributes within a project. While the definition of the VLDB
property at the project level defines a default for all attributes in the project,
any modifications at the attribute level take precedence over the project level
definition. For information on defining a project-wide dynamic sourcing
strategy, see the OLAP Services Guide.
Ignore Empty Result for Freeform SQL
The Ignore Empty Result for Freeform SQL VLDB property provides the
flexibility to display or hide warnings when a Freeform SQL statement
returns an empty result.
• Possible values: Do not turn off warnings for Freeform SQL statements
with empty results, such as updates; Turn off warnings for Freeform SQL
statements with empty results, such as updates
• Default: Do not turn off warnings for Freeform SQL statements with
empty results, such as updates
The Ignore Empty Result for Freeform SQL VLDB property provides the
flexibility to display or hide warnings when a Freeform SQL statement
returns an empty result.
• Do not turn off warnings for Freeform SQL statements with empty
results, such as updates: Select this option to allow warnings to be
displayed when a Freeform SQL statement causes a Freeform SQL report
to return an empty result. This is a good option if you use Freeform SQL
to return and display data with Freeform SQL reports.
• Turn off warnings for Freeform SQL statements with empty results,
such as updates: Select this option to hide all warnings when a
Freeform SQL statement causes a Freeform SQL report to return an
empty result. This is a good option if you commonly use Freeform SQL to
execute various SQL statements that are not expected to return any
report results. This prevents users from seeing a warning every time a
SQL statement is executed using Freeform SQL.
However, be aware that if you also use Freeform SQL to return and
display data with Freeform SQL reports, no warnings are displayed if the
report returns an empty result.
Intermediate Row Limit
The maximum number of rows returned to the server for each intermediate
pass. (0 = unlimited number of rows; -1 = use value from higher level.)
• Possible values: User-defined
• Default: -1 (Use value from higher level)

Results Set Row Limit
The maximum number of rows returned to the server for the final result set.
(0 = unlimited number of rows; -1 = use value from higher level.)
• Possible values: User-defined
• Default: -1 (Use value from higher level)

SQL Time Out (Per Pass)
Single SQL pass time-out in seconds. (0 = no time-out.)
• Possible values: User-defined
• Default: 0 (No limit)
The Intermediate Row Limit property is used to limit the number of rows of
data returned to the server from pure SELECT statements issued apart from
the final pass. Apart from the final pass, pure SELECT statements are usually
issued when intermediate results must be returned to the server rather than
stored in intermediate tables in the database. This property can be set at the
report level only.
The table below explains the possible values and their behavior:
Value: Behavior
-1: Use the value inherited from the higher level
0: No limit on the number of rows returned
Number: The maximum number of rows returned to the server for each intermediate pass
The Maximum SQL/MDX Size property specifies the SQL size (in bytes) on a
pass-by-pass basis. If the limit is exceeded, the report execution is
terminated and an error message is returned. The error message usually
mentions that a SQL/MDX string is longer than a corresponding limitation.
The limit you choose should be based on the size of the SQL string accepted
by your ODBC driver.
The table below explains the possible values and their behavior:
Value: Behavior
Number: The maximum SQL pass size (in bytes) is limited to the specified number.
Default: By selecting the check box Use default inherited value, the value is set to the default for the database type used for the related database instance. The default size varies depending on the database type.
Increasing the maximum to a large value can cause the report to fail in
the ODBC driver. This is dependent on the database type you are using.
The Results Set Row Limit property is used to limit the number of rows
returned from the final results set SELECT statements issued. This property
is report-specific.
When the report contains a custom group, this property is applied to each
element in the group. Therefore, the final result set displayed could be larger
than the predefined setting. For example, if you set the Result Set Row Limit
to 1,000, it means you want only 1,000 rows to be returned. Now apply this
setting to each element in the custom group. If the group has three elements
and each uses the maximum specified in the setting (1,000), the final report
returns 3,000 rows.
The table below explains the possible values and their behavior:
Value: Behavior
-1: Use the value inherited from the higher level
0: No limit on the number of rows returned
Number: The maximum number of rows returned to the server for the final result set
The SQL Time Out property is used to avoid lengthy intermediate passes. If
any pass of SQL runs longer than the set time (in seconds), the report
execution is terminated.
The table below explains the possible values and their behavior:
Value: Behavior
0: No time-out; a SQL pass can run indefinitely
Number: The maximum number of seconds a single SQL pass can run before the report execution is terminated
Allow Index on Metric
Description: Determines whether or not to allow the creation of indexes on fact or metric columns.
Possible values:
• Don’t allow the creation of indexes on metric columns
• Allow the creation of indexes on metric columns (if the Intermediate Table Index setting is set to create)
Default value: Don’t allow the creation of indexes on metric columns

Intermediate Table Index
Description: Determines whether and when to create an index for the intermediate table.
Possible values:
• Don't create an index
• Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem)
• Create Primary index (Teradata)/Partition key (DB2 UDB EEE)/Primary key (Red Brick, Tandem) and Secondary index on intermediate table
• Create table, insert into table, create index on intermediate table (all platforms other than Teradata, DB2 UDB EEE, Red Brick, and Tandem)
Default value: Don’t create an index

Primary Index Control
Description: Determines whether a primary key is created instead of a partitioning key for databases that support both types, such as UDB.
Possible values:
• Create primary key if the intermediate table index setting is set to create a primary index
• Create primary index (Teradata) or partitioning key (UDB) if the intermediate table index setting is set to create a primary index
Default value: Create primary key if the intermediate table index setting is set to create a primary index

Secondary Index Order
Description: Defines whether an index is created before or after inserting data into a table.
Possible values:
• Create index after inserting into table
• Create index before inserting into table
Default value: Create index after inserting into table

Secondary Index Type
Description: Defines what type of index is created for temporary table column indexing.
Possible values:
• Create Composite Index for Temporary Table Column Indexing
• Create Individual Indexes for Temporary Table Column Indexing
Default value: Create Composite Index for Temporary Table Column Indexing
The Allow Index on Metric property determines whether or not to use fact or
metric columns in index creation. You can see better performance in some
environments, especially in Teradata, when you add the fact or metric
columns to the index. Usually, indexes are created on attribute columns
only; with this setting enabled, all of the fact or metric columns are added
as well.
Example
This example is the same as the example above except that the last line of
code should be replaced with the following:
create index ZZT8L005YAGEA000_i on ZZT8L005YAGEA000
(CATEGORY_ID, REGION_ID, YEAR_ID, WJXBFS1, WJXBFS2)
Index Prefix
This property allows you to define the prefix to add to the beginning of the
CREATE INDEX statement when automatically creating indexes for
intermediate SQL passes.
For example, the index prefix you define appears in the CREATE INDEX
statement as shown below:
create index (index prefix)IDX_TEMP1 (STORE_ID, STORE_DESC)
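For instance, assuming the prefix is set to the hypothetical string MSTR_, the generated statement might look like the following:
create index MSTR_IDX_TEMP1 (STORE_ID, STORE_DESC)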
The Index Post String and Index Qualifier properties can be used to customize
the CREATE INDEX statement. Indexes can be created when the
Intermediate Table Type is set to Permanent Tables, Temporary Tables, or
Views (most platforms do not support indexes on views). These two settings
can be used to specify the type of index to be created and the storage
parameters as provided by the specific database platform. If the Index Post
String and Index Qualifier are set to a certain string, the Index Post String
and Index Qualifier are applied to all of the CREATE INDEX statements.
The Index Post String setting allows you to add a custom string to the end of
the CREATE INDEX statement.
Example (Teradata)
Index Post String = /* in tablespace1 */
create index IDX_TEMP1 (STORE_ID, STORE_DESC) /* in tablespace1 */
The Intermediate Table Index property is used to control the primary and
secondary indexes generated for platforms that support them. This property
is for permanent tables and temporary tables, where applicable.
Examples
• DB2/UDB
create table TEMP1(
STORE_ID INT,
STORE_DESC VARCHAR(20),
SALES FLOAT)
partitioning key(STORE_ID, STORE_DESC)
Create table, insert into table, create index on intermediate table (all
platforms other than Teradata, UDB EEE, RedBrick and Tandem)
create table TEMP1(
STORE_ID INT,
STORE_DESC VARCHAR(20),
SALES FLOAT)
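For this option, the engine populates the table before creating the index. A minimal hedged sketch of the remaining insert and index statements, where the source table STORE_FACT and the index name are illustrative:
insert into TEMP1
select a11.STORE_ID,
a11.STORE_DESC,
a11.SALES
from STORE_FACT a11

create index IDX_TEMP1 on TEMP1 (STORE_ID, STORE_DESC)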
The table below explains the possible values and their behavior:
Value: Behavior
Number: The maximum number of attribute ID columns to use with the wildcard
The Primary Index Control property determines the pattern for creating
primary keys and indexes.
• RedBrick and Tandem: When the Intermediate Table Index property is
set to create a primary index, then a primary key is created.
• DB2 UDB EEE: A partition key is used. However, DB2 users may want to
create a primary key rather than a partition key, and this setting allows
you to do this. UDB is currently the only platform that can use either
option. This changes the create table pattern.
Example (DB2)
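As a minimal hedged sketch, the create table pattern on DB2 UDB with the primary-key option selected might look like the following (column names are illustrative; the key columns are declared NOT NULL because DB2 requires this for primary keys):
create table TEMP1(
STORE_ID INT NOT NULL,
STORE_DESC VARCHAR(20) NOT NULL,
SALES FLOAT,
primary key (STORE_ID, STORE_DESC))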
The Secondary Index Order VLDB property allows you to define whether an
index is created before or after inserting data into a table. This VLDB
property has the following options:
• Create index after inserting into table: This default behavior creates
the index after inserting data into a table, which is a good option to
support most database and indexing strategies.
• Create index before inserting into table: This option creates the index
before inserting data into a table, which can improve performance for
some environments, including Sybase IQ. The type of index created can
be defined using the Secondary Index Type VLDB property, described
below.
The Secondary Index Type VLDB property allows you to define what type of
index is created for temporary table column indexing. This VLDB property
has the following options:
• Create Composite Index for Temporary Table Column Indexing
• Create Individual Indexes for Temporary Table Column Indexing
The table below summarizes the joins VLDB properties.

Attribute to join when key from neither side can be supported by the other side
Description: Controls whether tables are joined only on the common keys or on all common columns for each table.
Possible values:
• Join common key on both sides
• Join common attributes (reduced) on both sides
Default value: Join common key on both sides

Base Table Join for Template
Description: Controls whether two fact tables are directly joined together. If you choose Temp Table Join, the Analytical Engine calculates results independently from each fact table and places those results into two intermediate tables. These intermediate tables are then joined together.
Possible values:
• Temp table join
• Fact table join
Default value: Temp table join

Downward Outer Join Option
Description: Allows users to choose how to handle metrics which have a higher level than the template.
Possible values:
• Do not preserve all the rows for metrics higher than template level
• Preserve all the rows for metrics higher than template level w/o report filter
• Preserve all the rows for metrics higher than template level with report filter
• Do not do downward outer join for databases that support full outer join
• Do not do downward outer join for databases that support full outer join, and order temp tables in last pass by level
Default value: Do not preserve all the rows for metrics higher than template level

DSS Star Join
Description: Controls which lookup tables are included in the join against the fact table. For a partial star join, the Analytical Engine joins the lookup tables of all attributes present in either the template or the filter or metric level, if needed.
Possible values:
• No star join
• Partial star join
Default value: No star join

From Clause Order
Description: Determines whether to use the normal FROM clause order as generated by the Analytical Engine or to switch the order.
Possible values:
• Normal FROM clause order as generated by the engine
• Move last table in normal FROM clause order to the first
• Move MQ table in normal FROM clause order to the last (for RedBrick)
• Reverse FROM clause order as generated by the engine
Default value: Normal FROM clause order as generated by the engine

Lookup Table Join Order
Description: Determines how lookup tables are loaded for being joined.
Possible values:
• Partially based on attribute level (behavior prior to version 8.0.1)
• Fully based on attribute level. Lookup tables for lower level attributes are joined before those for higher level attributes
Default value: Partially based on attribute level (behavior prior to version 8.0.1)

Nested Aggregation Outer Joins
Description: Defines when outer joins are performed on metrics that are defined with nested aggregation functions.
Possible values:
• Do not perform outer join on nested aggregation
• Do perform outer join on nested aggregation when all formulas have the same level
• Do perform downward outer join on nested aggregation when all formulas can downward outer join to a common lower level
Default value: Do not perform outer join on nested aggregation

Preserve all final pass result elements
Description: Perform an outer join to the final result set in the final pass.
Possible values:
• Preserve common elements of final pass result table and lookup/relationship table
• Preserve all final result pass elements
• Preserve all elements of final pass result table with respect to lookup table but not relationship table
• Do not listen to per report level setting, preserve elements of final pass according to the setting at attribute level. If this choice is selected at attribute level, it will be treated as preserve common elements (i.e. choice 1)
Default value: Preserve common elements of final pass result table and lookup/relationship table

Preserve all lookup table elements
Description: Perform an outer join to the lookup table in the final pass.
Possible values:
• Preserve common elements of lookup and final pass result table
• Preserve lookup table elements joined to final pass result table based on fact table keys
• Preserve lookup table elements joined to final pass result table based on template attributes without filter
• Preserve lookup table elements joined to final pass result table based on template attributes with filter
Default value: Preserve common elements of lookup and final pass result table
The Attribute to join when key from neither side can be supported by the
other side is an advanced property that is hidden by default. For information
on how to display this property, see Viewing and changing advanced VLDB
properties.
• Join common key on both sides: Joins on tables only use columns that
are in each table and are also keys for each table.
• Join common attributes (reduced) on both sides: Joins can also use
common attribute ID columns that are not keys on both sides. The
following two scenarios illustrate when this option is required.
You have two different tables named Table1 and Table2. Both tables
share three ID columns for Year, Month, and Date along with other
columns of data. Table1 uses Year, Month, and Date as keys, while
Table2 uses only Year and Month as keys. Since the ID column for
Date is not a key for Table2, you must set this option to include Date
in the join along with Year and Month.
You have a table named Table1 that includes the columns for the
attributes Quarter, Month of Year, and Month. Since Month is a child
of Quarter and Month of Year, its ID column is used as the key for
Table1. There is also a temporary table named TempTable that
includes the columns for the attributes Quarter, Month of Year, and
Year, using all three ID columns as keys of the table. It is not possible
to join Table1 and TempTable unless you set this option because they
do not share any common keys. If you set this option, Table1 and
TempTable can join on the common attributes Quarter and Month of
Year.
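As a minimal hedged sketch, assuming the common ID columns are named QUARTER_ID and MONTH_OF_YEAR_ID, the join generated under the second option for the Table1/TempTable scenario might look like the following:
select a11.QUARTER_ID QUARTER_ID,
a11.MONTH_OF_YEAR_ID MONTH_OF_YEAR_ID,
pa1.YEAR_ID YEAR_ID
from Table1 a11
join TempTable pa1
on (a11.QUARTER_ID = pa1.QUARTER_ID and
a11.MONTH_OF_YEAR_ID = pa1.MONTH_OF_YEAR_ID)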
The Base Table Join for Template is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
changing advanced VLDB properties, page 658.
Caution must be taken when changing this setting since the results
can be different depending on the types of metrics on the report.
Example
pa1.CLEARANCESAL WJXBFS1,
pa2.COSTAMOUNT WJXBFS2
from #ZZTIS00H5D3SP000 pa1
left outer join #ZZTIS00H5D3SP001 pa2
on (pa1.MARKET_NBR = pa2.MARKET_NBR)
left outer join HARI_LOOKUP_MARKET a11
on (pa1.MARKET_NBR = a11.MARKET_NBR)
This property allows the MicroStrategy SQL Engine to use a new algorithm
for evaluating whether or not a Cartesian join is necessary. The new
algorithm can sometimes avoid a Cartesian join when the old algorithm
cannot. For backward compatibility, the default is the old algorithm. If you
see Cartesian joins that appear to be avoidable, use this property to
determine whether the engine’s new algorithm avoids the Cartesian join.
Traditionally, the outer join flag is ignored because M2 (at the Region level)
is higher than the report level of Store, and it is difficult to preserve all of
the stores for a metric at the Region level. However, you can preserve rows
for a metric at a higher level than the report. Since M2 is at the Region level
and the report only shows Store, it is impossible to preserve all regions for
M2 directly. Instead, a downward join pass is needed to find all stores that
belong to the regions in M2, so that a union is formed between these stores
and the stores in M1.
When performing a downward join, another issue arises. Even though all the
stores that belong to the region in M2 can be found, these stores may not be
those from which M2 is calculated. If a report filters on a subset of stores,
then M2 (if it is a filtered metric) is calculated only from those stores, and
aggregated to regions. When a downward join is done, either all the stores
that belong to the regions in M2 are included or only those stores that belong
to the regions in M2 and in the report filter. Hence, this property has three
options.
Example
Using the above example and applying a filter for Atlanta and Charlotte, the
Do not preserve all the rows for metrics higher than template level
option returns the following results. Charlotte does not appear because it
has no sales data in the fact table; the outer join flag on metrics higher than
template level is ignored.
Using Preserve all the rows for metrics higher than template level
without report filter returns the results shown below. Now Charlotte
appears because the outer join is used, and it has an inventory, but
Washington appears as well because it is in the Region, and the filter is not
applied.
Charlotte 300
Washington 300
Using Preserve all the rows for metrics higher than template level with
report filter produces the following results. Washington is filtered out but
Charlotte still appears because of the outer join.
Charlotte 300
For backward compatibility, the default is to ignore the outer join flag for
metrics higher than template level. This is the SQL Engine behavior for
MicroStrategy 6.x or lower, as well as for MicroStrategy 7.0 and 7.1.
The DSS Star Join property specifies whether a partial star join is performed
or not. A partial star join means the lookup table of a column is joined if and
only if a column is in the SELECT clause or involved in a qualification in the
WHERE clause of the SQL. In certain databases, for example, RedBrick and
Teradata, partial star joins can improve SQL performance if certain types of
indexes are maintained in the data warehouse. Notice that the lookup table
joined in a partial star join is not necessarily the same as the lookup table
defined in the attribute form editor. Any table that acts as a lookup table
rather than a fact table in the SQL and contains the column is considered a
feasible lookup table.
Examples
No Star Join
select distinct a11.PBTNAME PBTNAME
from STORE_ITEM_PTMAP a11
where a11.YEAR_ID in (1994)
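By contrast, a minimal hedged sketch of the same query with Partial star join selected, assuming a LOOKUP_YEAR lookup table exists for the Year attribute used in the qualification:
select distinct a11.PBTNAME PBTNAME
from STORE_ITEM_PTMAP a11
join LOOKUP_YEAR a12
on (a11.YEAR_ID = a12.YEAR_ID)
where a12.YEAR_ID in (1994)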
Examples
Move MQ Table in normal FROM clause order to the last (for RedBrick)
This setting is added primarily for RedBrick users. It moves the metric
qualification (MQ) table from its position in the normal FROM clause order
generated by the Engine to the last position in the FROM clause.
The Full Outer Join Support property specifies whether the database
platform supports full outer join syntax.
If this property is set to Support, the COALESCE function can be
included in the SQL query. If your database does not support the
COALESCE function, you should set this property to No support.
If this property is set to Support, then the Join Type VLDB property is
assumed to be Join 92, and any other setting in Join Type is ignored.
Examples
a11.YEAR_ID)
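A minimal hedged sketch of the full outer join pattern with COALESCE, using illustrative intermediate table names:
select coalesce(pa1.YEAR_ID, pa2.YEAR_ID) YEAR_ID,
pa1.WJXBFS1 WJXBFS1,
pa2.WJXBFS2 WJXBFS2
from ZZMD00 pa1
full outer join ZZMD01 pa2
on (pa1.YEAR_ID = pa2.YEAR_ID)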
Join Type
The Join Type property determines which ANSI join syntax pattern to use.
Some databases, such as Oracle, do not support the ANSI 92 standard yet.
Some databases, such as DB2, support both Join 89 and Join 92. Other
databases, such as Tandem and some versions of Teradata, have a mix of the
join standards and therefore need their own setting.
If the Full Outer Join Support VLDB property is set to YES, this
property is ignored.
Examples
Join 89
select a22.STORE_NBR STORE_NBR,
max(a22.STORE_DESC) STORE_DESC,
a21.CUR_TRN_DT CUR_TRN_DT,
sum(a21.REG_SLS_DLR) WJXBFS1
from STORE_DIVISION a21,
LOOKUP_STORE a22
where a21.STORE_NBR = a22.STORE_NBR
group by a22.STORE_NBR,
a21.CUR_TRN_DT
Join 92
select a21.CUR_TRN_DT CUR_TRN_DT,
a22.STORE_NBR STORE_NBR,
max(a22.STORE_DESC) STORE_DESC,
sum(a21.REG_SLS_DLR) WJXBFS1
from STORE_DIVISION a21
join LOOKUP_STORE a22
on (a21.STORE_NBR = a22.STORE_NBR)
group by a21.CUR_TRN_DT,
a22.STORE_NBR
a22.DEPARTMENT_NBR DEPARTMENT_NBR,
a21.CUR_TRN_DT CUR_TRN_DT
from LOOKUP_DAY a21,
LOOKUP_DEPARTMENT a22,
LOOKUP_STORE a23
select a21.MARKET_NBR MARKET_NBR,
max(a24.MARKET_DESC) MARKET_DESC,
sum((a22.COST_AMT * a23.TOT_SLS_DLR)) SUMTSC
from ZZOL00 a21
left outer join COST_STORE_DEP a22
on (a21.DEPARTMENT_NBR = a22.DEPARTMENT_NBR and
a21.CUR_TRN_DT = a22.CUR_TRN_DT and
a21.STORE_NBR = a22.STORE_NBR)
left outer join STORE_DEPARTMENT a23
on (a21.STORE_NBR = a23.STORE_NBR and
a21.DEPARTMENT_NBR = a23.DEPARTMENT_NBR and
a21.CUR_TRN_DT = a23.CUR_TRN_DT),
LOOKUP_MARKET a24
where a21.MARKET_NBR = a24.MARKET_NBR
group by a21.MARKET_NBR
This property determines how lookup tables are loaded for being joined. The
setting options are:
• Partially based on attribute level (behavior prior to version 8.0.1)
• Fully based on attribute level. Lookup tables for lower level attributes are
joined before those for higher level attributes
If you select the first (default) option, lookup tables are loaded for join in
alphabetic order.
If you select the second option, lookup tables are loaded for join based on
attribute levels, and joining is performed on the lowest level attribute first.
The Max Tables in Join property works together with the Max Tables in Join
Warning property. It specifies the maximum number of tables in a join. If the
maximum number of tables in a join (specified by the Max Tables In Join
property) is exceeded, then the Max Tables in Join Warning property decides
the course of action.
The table below explains the possible values and their behavior:
Value: Behavior
Number: The maximum number of tables allowed in a single join
The Max Tables in Join Warning property works in conjunction with the Max
Tables in Join property. If the maximum number of tables in a join (specified
by the Max Tables in Join property) is exceeded, then this property controls
the action taken. The options are to either continue or cancel the execution.
For the next two properties, consider the following simple example data.
Store table
Store_ID  Store_Desc
1  East
2  Central
3  South
6  North
Fact table
Store_ID  Year  Sales
1  2002  1000
2  2002  2000
3  2002  5000
1  2003  4000
2  2003  6000
3  2003  7000
4  2003  3000
5  2003  1500
The Fact table has data for Store IDs 4 and 5, but the Store table does not
have any entry for these two stores. On the other hand, notice that the North
Store does not have any entries in the Fact table. This data is used to show
examples of how the next two properties work.
The Nested Aggregation Outer Joins VLDB property allows you to define when
outer joins are performed on metrics that are defined with nested
aggregation functions. A nested aggregation function is when one
aggregation function is included within another aggregation function. For
example, Sum(Count(Expression)) uses nested aggregation because the
Count aggregation is calculated within the Sum aggregation (see the sketch
after the options below).
• Do not perform outer join on nested aggregation: Outer joins are not
used for metrics that use nested aggregation, even if the metric is defined
to use an outer join. This option reflects the behavior of all pre-9.0
MicroStrategy releases.
• Do perform outer join on nested aggregation when all formulas have
the same level: Outer joins are used for nested aggregation metrics when
all of the formulas are at the same level.
• Do perform downward outer join on nested aggregation when all
formulas can downward outer join to a common lower level: Downward
outer joins are used for nested aggregation metrics when all of the
formulas can be joined at a common lower level.
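A minimal hedged sketch of how a metric such as Sum(Count(Expression)) is typically evaluated in two passes, with illustrative table and column names:
select a11.ATTRIBUTE_ID ATTRIBUTE_ID,
count(a11.FACT_COL) WJXBFS1
into #TT1
from FACT_TABLE a11
group by a11.ATTRIBUTE_ID

select sum(pa1.WJXBFS1) WJXBFS2
from #TT1 pa1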
For an introduction to this property, see Preserving data using outer joins,
page 720. Preserve All Final Pass Result Elements is an advanced property
that is hidden by default. For information on how to display this property,
see Viewing and changing advanced VLDB properties, page 658.
The Preserve all final pass result elements settings listed below determine
how to outer join on the final result and the lookup and relationship tables:
• If you choose the Preserve all final result pass elements option, the
SQL Engine generates an outer join, and your report contains all of the
elements that are in the final result set. When this setting is turned on,
outer joins are generated for any joins from the fact table to the lookup
tables.
• If you choose the Preserve all elements of final pass result table with
respect to lookup table but not relationship table option, the SQL
Engine generates an inner join on all passes except the final pass; on the
final pass it generates an outer join.
• If you choose the Do not listen to per report level setting, preserve
elements of final pass according to the setting at attribute level. If
this choice is selected at attribute level, it will be treated as preserve
common elements (i.e. choice 1) option at the database instance,
report, or template level, the setting for this VLDB property at the
attribute level is used. This value should not be selected at the attribute
level. If you select this setting at the attribute level, the VLDB property is
set to the Preserve common elements of final pass result table and
lookup table option.
This setting is useful if you have only a few attributes that require
different join types. For example, if among the attributes in a report only
one needs to preserve elements from the final pass table, you can set the
VLDB property to Preserve all final pass result elements setting for
that one attribute. You can then set the report to the Do not listen setting
for the VLDB property. When the report is run, only the attribute set
differently causes an outer join in SQL. All other attribute lookup tables
will be joined using an equal join, which leads to better SQL performance.
Examples
The first two example results below are based on the Preserving data using
outer joins example above. The third example, for the Preserve all elements
of final pass result table with respect to lookup table but not
relationship table option, is a separate example designed to reflect the
increased complexity of that option’s behavior.
The “Preserve common elements of final pass result table and lookup table”
option returns the following results using the SQL below.
East 5000
Central 8000
South 12000
The “Preserve all final result pass elements” option returns the following
results using the SQL below. Notice that the data for Store_IDs 4 and 5 are
now shown.
East 5000
Central 8000
South 12000
3000
1500
Example 3: Preserve all elements of final pass result table with respect
to lookup table but not to relationship table
A report has Country, Metric 1, and Metric 2 on the template. The following
fact tables exist for each metric:
Fact Table (Metric 1)
CALLCENTER_ID  Fact 1
1  1000
2  2000
1  1000
2  2000
3  1000
4  1000
Fact Table (Metric 2)
EMPLOYEE_ID  Fact 2
1  5000
2  6000
1  5000
2  6000
3  5000
4  5000
5  1000
The SQL Engine performs three passes. In the first pass, the SQL Engine
calculates metric 1. The SQL Engine inner joins the “Fact Table (Metric 1)”
table above with the call center lookup table “LU_CALL_CTR” below:
CALLCENTER_ID COUNTRY_ID
1 1
2 1
3 2
The result is the intermediate table METRIC1_TEMPTABLE:
COUNTRY_ID  Metric 1
1  6000
2  1000
In the second pass, metric 2 is calculated. The SQL Engine inner joins the
“Fact Table (Metric 2)” table above with the employee lookup table
“LU_EMPLOYEE” below:
EMPLOYEE_ID  COUNTRY_ID
1  1
2  2
3  2
The result is the intermediate table METRIC2_TEMPTABLE:
COUNTRY_ID  Metric 2
1  10000
2  17000
In the third pass, the SQL Engine uses the following country lookup table,
“LU_COUNTRY”:
COUNTRY_ID COUNTRY_DESC
1 United States
3 Europe
The SQL Engine left outer joins the METRIC1_TEMPTABLE above and the
LU_COUNTRY table. The SQL Engine then left outer joins the
METRIC2_TEMPTABLE above and the LU_COUNTRY table. Finally, the
SQL Engine inner joins the results of the third pass to produce the final
results.
The “Preserve all elements of final pass result table with respect to lookup
table but not to relationship table” option returns the following results:
COUNTRY  Metric 1  Metric 2
2  1000  17000
For an introduction to this property, see Preserving data using outer joins,
page 720.
In MicroStrategy 7.1, this property was known as Final Pass Result
Table Outer Join to Lookup Table.
The Preserve All Lookup Table Elements property is used to show all
attribute elements that exist in the lookup table, even though there is no
corresponding fact in the result set. For example, your report contains Store
and Sum(Sales), and it is possible that a store does not have any sales at all.
However, you want to show all the store names in the final report, even those
stores that do not have sales. To do that, you must not rely on the stores in
the sales fact table. Instead, you must make sure that all the stores from the
lookup table are included in the final report. The SQL Engine needs to do a
left outer join from the lookup table to the fact table.
It is possible that there are multiple attributes on the template. To keep all
the attribute elements, the Analytical Engine needs to do a Cartesian join
between the involved attributes’ lookup tables before doing a left outer join
to the fact table.
Option 1: Preserve common elements of lookup and final pass result table.
This is the default option. The Analytical Engine does a normal (equal) join
to the lookup table.
Option 2: Preserve lookup table elements joined to final pass result table based
on fact table keys.
Sometimes the fact table level is not the same as the report or template level.
For example, a report contains Store, Month, and the Sum(Sales) metric, but
the fact table is at the level of Store, Day, and Item. There are two ways to
keep all the store and month elements:
• Do a left outer join first to keep all attribute elements at the Store, Day,
and Item level, then aggregate to the Store and Month level.
• Aggregate to the Store and Month level first, then do a left outer join to
keep all the store and month elements.
This option is for the first approach. In the example given previously, it
makes two SQL passes: pass 1 Cartesian joins the lookup tables at the Store,
Day, and Item level into an intermediate table (TT1); pass 2 left outer joins
TT1 to the fact table and aggregates to the Store and Month level. A sketch of
these passes follows the next paragraph.
The advantage of this approach is that you can do a left outer join and
aggregation in the same pass (pass 2). The disadvantage is that because you
do a Cartesian join with the lookup tables at a much lower level (pass 1), the
result of the Cartesian joined table (TT1) can be very large.
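A minimal hedged sketch of these two passes, with illustrative lookup and fact table names:
select a11.STORE_ID STORE_ID,
a12.DAY_ID DAY_ID,
a13.ITEM_ID ITEM_ID
into TT1
from LOOKUP_STORE a11
cross join LOOKUP_DAY a12
cross join LOOKUP_ITEM a13

select pa1.STORE_ID STORE_ID,
a14.MONTH_ID MONTH_ID,
sum(a15.SALES) WJXBFS1
from TT1 pa1
join LOOKUP_DAY a14
on (pa1.DAY_ID = a14.DAY_ID)
left outer join SALES_FACT a15
on (pa1.STORE_ID = a15.STORE_ID and
pa1.DAY_ID = a15.DAY_ID and
pa1.ITEM_ID = a15.ITEM_ID)
group by pa1.STORE_ID,
a14.MONTH_ID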
Option 3: Preserve lookup table elements joined to final pass result table based
on template attributes without filter.
This option corresponds to the second approach described above. Still using
the same example, it makes three SQL passes: pass 1 aggregates the fact table
to the Store and Month level; pass 2 Cartesian joins the Store and Month
lookup tables into an intermediate table (TT2); pass 3 left outer joins TT2 to
the result of pass 1.
This approach needs one more pass than the previous option, but the cross
join table (TT2) is usually smaller.
Option 4: Preserve lookup table elements joined to final pass result table based
on template attributes with filter.
This option is similar to Option 3. The only difference is that the report filter
is applied in the final pass (Pass 3). For example, a report contains Store,
Month, and Sum(Sales) with a filter of Year = 2002. You want to display
every store in every month in 2002, regardless of whether there are sales.
However, you do not want to show any months from other years (only the 12
months in year 2002). Option 4 resolves this issue.
Examples
The Preserve common elements of lookup and final pass result table
option simply generates a direct join between the fact table and the lookup
table. The results and SQL are as follows.
East 5000
Central 8000
South 12000
The “Preserve lookup table elements joined to final pass result table based on
fact keys” option creates a temp table that is a Cartesian join of all lookup
table key columns. Then the fact table is outer joined to the temp table. This
preserves all lookup table elements. The results and SQL are as below:
East 5000
Central 8000
South 12000
North
The “Preserve lookup table elements joined to final pass result table based on
template attributes without filter” option preserves the lookup table
elements by left outer joining to the final pass of SQL and only joins on
attributes that are on the template. For this example and the next, the filter
of “Store not equal to Central” is added. The results and SQL are as follows:
East 5000
Central
South 12000
North
pa1.WJXBFS1 WJXBFS1
from Store a11
left outer join #ZZT5X00003UOL000 pa1
on (a11.Store_id = pa1.Store_id)
The “Preserve lookup table elements joined to final pass result table based on
template attributes with filter” option is the newest option and is the same as
above, but you get the filter in the final pass. The results and SQL are as
follows:
East 5000
South 12000
North
MDX Add Fake Measure
Description: Determines how MDX cube reports that only include attributes are processed in order to improve performance in certain scenarios.
Possible values:
• Do not add a fake measure to an attribute-only MDX report
• Add a fake measure to an attribute-only MDX report
Default value: Add a fake measure to an attribute-only MDX report

MDX Add Non Empty
Description: Determines whether or not data is returned from rows that have null values.
Possible values:
• Do not add the non-empty keyword in the MDX select clause
• Add the non-empty keyword in the MDX select clause only if there are metrics on the report
• Always add the non-empty keyword in the MDX select clause
Default value: Add the non-empty keyword in the MDX select clause only if there are metrics on the report

MDX Cell Formatting
Description: Defines whether the metric values in MicroStrategy MDX cube reports inherit their value formatting from an MDX cube source.
Possible values:
• MDX metric values are formatted per column
• MDX metric values are formatted per cell
Default value: MDX metric values are formatted per column

MDX Level Number Calculation Method
Description: Determines whether level (from the bottom of the hierarchy up) or generation (from the top of the hierarchy down) should be used to populate the report results.
Possible values:
• Use actual level number
• Use generation number to calculate level number
Default value: Use actual level number

MDX Verify Limit Filter Literal Level
Description: Supports an MDX cube reporting scenario in which filters are created on attribute ID forms and metrics.
Possible values:
• Do not verify the level of literals in limit or filter expressions
• Verify the level of literals in limit or filter expressions
Default value: Do not verify the level of literals in limit or filter expressions
The default date format is DD.MM.YYYY. For example, the date of July 4,
1776 is represented as 04.07.1776.
MDX Add Non Empty is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and changing
advanced VLDB properties, page 658.
For information on how null values are handled when attributes from
different hierarchies are included on the same MDX cube report, see MDX
Non Empty Optimization, page 740.
• Do not add the non-empty keyword in the MDX select clause: When
this option is selected, data is returned from rows that contain data and
rows that have null metric values (similar to an outer join in SQL). The
null values are displayed on the MDX cube report.
• Add the non-empty keyword in the MDX select clause only if there
are metrics on the report: When this option is selected, and metrics are
included on an MDX cube report, data is not returned from the MDX
cube source when the default metric in the MDX cube source has null
data. Any data not returned is not included on MDX cube reports (similar
to an inner join in SQL). If no metrics are present on an MDX cube
report, then all values for the attributes are returned and displayed on the
MDX cube report.
• Always add the non-empty keyword in the MDX select clause: When
this option is selected, data is not returned from the MDX cube source
when a metric on the MDX cube report has null data. Any data not
returned is not included on MDX cube reports (similar to an inner join in
SQL).
With the MDX Cell Formatting VLDB property, you can specify for the
metric values in MicroStrategy MDX cube reports to inherit their value
formatting from an MDX cube source. This enables MicroStrategy MDX
cube reports to use the same data formatting available in your MDX cube
source. It also maintains a consistent view of your MDX cube source data in
MicroStrategy.
Inheriting value formats from your MDX cube source also enables you to
apply multiple value formats to a single MicroStrategy metric.
• MDX metric values are formatted per column: If you select this option,
MDX cube source formatting is not inherited. You can only apply a single
format to all metric values on an MDX cube report.
• MDX metric values are formatted per cell: If you select this option,
MDX cube source formatting is inherited. Metric value formats are
determined by the formatting that is available in the MDX cube source,
and metric values can have different formats.
For examples of using these options and steps to configure your MDX cube
sources properly, see the MDX Cube Reporting Guide.
This VLDB property is useful only for MDX cube reports that access an
Oracle Hyperion Essbase MDX cube source. To help illustrate the
functionality of the property, consider an unbalanced hierarchy with the
levels Products, Department, Category, SubCategory, Item, and SubItem.
The image below shows how this hierarchy is populated on a report in
MicroStrategy.
MDX Non Empty Optimization
This VLDB property determines how null values from an MDX cube source
are ignored using the non-empty keyword when attributes from different
hierarchies (dimensions) are included on the same MDX cube report.
Whether data is returned depends on the default measure within the MDX
cube source. Data is only displayed on an MDX cube report for rows in
which the default measure within the MDX cube source has data.
For example, an MDX cube report includes the rows shown below, where the
row for Music has null values for all the metrics:
Electronics  $2,500,000
Movies  $500,000
Music
When null values are ignored, the report returns only the rows with data:
Electronics  $2,500,000
Movies  $500,000
The row for Music is not displayed because all the metrics have null
values.
MDX Verify Limit Filter Literal Level is an advanced property that is hidden
by default. For information on how to display this property, see Viewing and
changing advanced VLDB properties, page 658.
This VLDB property supports a unique scenario when analyzing MDX cube
reports. An example of this scenario is provided below.
You have an MDX cube report that includes a low level attribute on the
report, along with some metrics. You create a filter on the attribute’s ID
form, where the ID is between two ID values. You also include a filter on a
metric. Consider an MDX cube report defined in this way.
When you run the report, you receive an error that alerts you that an
unexpected level was found in the result. This is because the filter on the
attribute’s ID form can include other levels due to the structure of ID values
in some MDX cube sources. When these other levels are included, the metric
filter cannot be evaluated correctly by default.
You can support this type of report by modifying the MDX Verify Limit Filter
Literal Level property. This VLDB property has the following options:
• Do not verify the level of literals in limit or filter expressions: This
default option leaves literal levels unverified, and the error described
above can occur for this scenario.
• Verify the level of literals in limit or filter expressions: This option
verifies the level of literals so that the metric filter can be evaluated
correctly for this scenario.
Default to Metric Name
Description: Allows you to choose whether you want to use the metric name as the column alias or whether to use a MicroStrategy-generated name.
Possible values:
• Do not use the metric name as the default metric column alias
• Use the metric name as the default metric column alias
Default value: Do not use the metric name as the default metric column alias

Integer Constant in Metric
Description: Determines whether to add a “.0” after the integer.
Possible values:
• Add “.0” to integer constant in metric expression
• Do not add “.0” to integer constant in metric expression
Default value: Add “.0” to integer constant in metric expression

Metric Join Type
Description: Type of join used in a metric.
Possible values:
• Inner Join
• Outer Join
Default value: Inner Join

Non-Agg Metric Optimization
Description: Influences the behavior for non-aggregation metrics by either optimizing for smaller temporary tables or for less fact table access.
Possible values:
• Optimized for less fact table access
• Optimized for smaller temp table
Default value: Optimized for less fact table access

NULL Check
Description: Indicates how to handle arithmetic operations with NULL values.
Possible values:
• Do nothing
• Check for NULL in all queries
• Check for NULL in temp table join only
Default value: Check for NULL in temp table join only

Separate COUNT DISTINCT
Description: Indicates how to handle COUNT (and other aggregation functions) when DISTINCT is present in the SQL.
Possible values:
• One pass
• Multiple count distinct, but count expression must be the same
• Multiple count distinct, but only one count distinct per pass
• No count distinct, use select distinct and count(*) instead
Default value: No count distinct, use select distinct and count(*) instead

Transformation Role Processing
Description: Indicates how to handle the transformation dates calculation.
Possible values:
• 7.1 style. Apply transformation to all applicable attributes
• 7.2 style. Only apply transformation to highest common child when it is applicable to multiple attributes
Default value: 7.1 style. Apply transformations to all applicable attributes

Zero Check
Description: Indicates how to handle division by zero.
Possible values:
• Do nothing
• Check for zero in all queries
• Check for zero in temp table join only
Default value: Check for zero in all queries
• Use Temp Table as set in the Fallback Table Type setting: When this
option is set, the table creation type follows the option selected in the
VLDB property Fallback Table Type. The SQL Engine reads the Fallback
Table Type VLDB setting and determines whether to create the
intermediate table as a true temporary table or a permanent table.
In most cases, the default Fallback Table Type VLDB setting is
Temporary table. However, for a few databases, like UDB for 390,
this option is set to Permanent table. These databases have their
Intermediate Table Type defaulting to True Temporary Table, so
you set their Fallback Table Type to Permanent. If you see
permanent table creation and you want the absolute
non-aggregation metric to use a True Temporary table, set the
Fallback Table Type to Temporary table on the report as well.
Examples
Use Sub-query
select a11.CLASS_NBR CLASS_NBR,
a12.CLASS_DESC CLASS_DESC,
sum(a11.TOT_SLS_QTY) WJXBFS1
from DSSADMIN.MARKET_CLASS a11,
DSSADMIN.LOOKUP_CLASS a12
where a11.CLASS_NBR = a12.CLASS_NBR
and (((a11.MARKET_NBR)
in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21
where s21.STORE_NBR in (3, 2, 1)))
and ((a11.MARKET_NBR)
in (select min(c11.MARKET_NBR)
from DSSADMIN.LOOKUP_MARKET c11
where ((c11.MARKET_NBR)
in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21
where s21.STORE_NBR in (3, 2, 1))))))
group by a11.CLASS_NBR,
a12.CLASS_DESC
Examples
If your database platform does not support COUNT on concatenated
strings, the Count Compound Attribute property should be disabled.
COUNT(column) Support
Examples
Use COUNT(column)
select a11.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
count(distinct a11.COST_AMT) COUNTDISTINCT
from HARI_COST_STORE_DEP a11
join HARI_LOOKUP_STORE a12
on (a11.STORE_NBR = a12.STORE_NBR)
group by a11.STORE_NBR
Use COUNT(*)
select a11.STORE_NBR STORE_NBR,
a11.COST_AMT WJXBFS1
into #ZZTIS00H5JWDA000
from HARI_COST_STORE_DEP a11

select distinct STORE_NBR,
WJXBFS1
into #ZZTIS00H5JWOT001
from #ZZTIS00H5JWDA000

select pa2.STORE_NBR STORE_NBR,
max(a11.STORE_DESC) STORE_DESC,
count(*) WJXBFS1
from #ZZTIS00H5JWOT001 pa2
join HARI_LOOKUP_STORE a11
on (pa2.STORE_NBR = a11.STORE_NBR)
group by pa2.STORE_NBR
Default to Metric Name allows you to choose whether you want to use the
metric name or a MicroStrategy-generated name as the column alias. When
metric names are used, only the first 20 standard characters are used. If you
have different metrics whose names start with the same 20 characters, it is
hard to differentiate between them, because the truncated aliases are always
the same. The Default to Metric Name option does not work for some
international customers.
If you choose to use the metric name and the metric name begins with
a number, the letter M is attached to the beginning of the name during
SQL generation. For example, a metric named 2003Revenue is
renamed M2003Revenue. This occurs because Teradata does not
allow a leading number in a metric name.
If you select the option Use the metric name as the default metric column
alias, you should also set the maximum metric alias size. See Max Metric
Alias Size below for information on setting this option.
Examples
Do not use the metric name as the default metric column alias
insert into ZZTSU006VT7PO000
select a11.[MONTH_ID] AS MONTH_ID,
a11.[ITEM_ID] AS ITEM_ID,
a11.[EOH_QTY] AS WJXBFS1
from [INVENTORY_Q4_2003] a11,
[LU_MONTH] a12,
[LU_ITEM] a13
where a11.[MONTH_ID] = a12.[MONTH_ID] and
a11.[ITEM_ID] = a13.[ITEM_ID]
and (a13.[SUBCAT_ID] in (25)
and a12.[QUARTER_ID] in (20034))
Max Metric Alias Size is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and changing
advanced VLDB properties, page 658.
Max Metric Alias Size allows you to set the maximum size of the metric alias
string. This is useful for databases that only accept a limited number of
characters for column names.
You should set the maximum metric alias size to fewer characters than your
database’s limit. This is because, in certain instances, such as when two
column names are identical, the SQL engine adds one or more characters to
one of the column names during processing to be able to differentiate
between the names. Identical column names can develop when column
names are truncated.
For example, suppose your database rejects any column name that is more
than 30 characters, you set this VLDB property to limit the maximum metric
alias size to 30 characters, and two metric names share the same first 30
characters. The system limits the names to 30 characters based on the VLDB
option you set, which means that the truncated metric aliases for both
columns are identical. The SQL engine therefore adds a 1 to one of the names
to differentiate them. That name is then 31 characters long, and so the
database rejects it.
Therefore, in this example you should use this feature to set the maximum
metric alias size to fewer than 30 (perhaps 25), to allow room for the SQL
engine to add one or two characters during processing in case the first 25
characters of any of your metric names are the same.
Metric Join Type is used to determine how to combine the result of one
metric with that of other metrics. When this property is set to Outer Join, all
the result rows of this metric are kept when combining results with other
metrics. If there is only one metric on the report, this property is ignored.
• At the metric level, it can be set in either the VLDB Properties Editor or
from the Metric Editor’s Tools menu by choosing Metric Join Type.
The setting is applied in all the reports that include this metric.
• At the report level, it can be set from the Report Editor’s Data menu, by
pointing to Report Data Options, and choosing Metric Join Type. This
setting overrides the setting at the metric level and is applied only for the
currently selected report.
There is a related but separate property called Formula Join Type that can
also be set at the metric level. This property is used to determine how to
combine the result set together within this metric. This normally happens
when a metric formula contains multiple facts that cause the Analytical
Engine to use multiple fact tables. As a result, sometimes it needs to calculate
different components of one metric in different intermediate tables and then
combine them. This property can only be set in the Metric Editor from the
Tools menu, by pointing to Advanced Settings, and then choosing
Formula Join Type.
Both Metric Join Type and Formula Join Type are used in the Analytical
Engine to join multiple intermediate tables in the final pass. The actual logic
is also affected by another VLDB property, Full Outer Join Support. When
this property is set to YES, it means the corresponding database supports full
outer join (92 syntax). In this case, the joining of multiple intermediate
tables makes use of outer join syntax directly (left outer join, right outer join,
or full outer join, depending on the setting on each metric/table). However, if
Full Outer Join Support is NO, then left outer joins are used to simulate a
full outer join. This is done by taking a union of the IDs of the multiple
intermediate tables that need an outer join and then left outer joining the
union table to each intermediate table, so this approach generates more
passes. This approach was also used by MicroStrategy 6.x and earlier. A
sketch of the simulation follows.
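A minimal hedged sketch of this simulation for two intermediate metric tables, using illustrative temporary table names:
select STORE_NBR
into #UNION1
from #TT1
union
select STORE_NBR
from #TT2

select u1.STORE_NBR STORE_NBR,
pa1.WJXBFS1 WJXBFS1,
pa2.WJXBFS2 WJXBFS2
from #UNION1 u1
left outer join #TT1 pa1
on (u1.STORE_NBR = pa1.STORE_NBR)
left outer join #TT2 pa2
on (u1.STORE_NBR = pa2.STORE_NBR)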
Also note that when the metric level is higher than the template level, the
Metric Join Type property is normally ignored, unless you enable the
Downward Outer Join Option VLDB property described earlier in this
section.
Examples
The following example first creates a fairly large temporary table, but then
never touches the fact table again.
select a11.REGION_NBR REGION_NBR,
a11.REGION_NBR REGION_NBR0,
a12.MONTH_ID MONTH_ID,
a11.DIVISION_NBR DIVISION_NBR,
a11.CUR_TRN_DT CUR_TRN_DT,
a11.TOT_SLS_DLR WJXBFS1
into ZZNB00
from REGION_DIVISION a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
The following example does not create the large temporary table but must
query the fact table twice.
select a11.REGION_NBR REGION_NBR,
a12.MONTH_ID MONTH_ID,
min(a11.CUR_TRN_DT) WJXBFS1
into ZZOP00
from REGION_DIVISION a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
group by a11.REGION_NBR,
a12.MONTH_ID
NULL Check
Transformable AggMetric
For example, you create two metrics. The first metric, referred to as Metric1,
uses an expression of Sum(Fact) {~+, Attribute+}, where Fact is a
fact in your project and Attribute is an attribute in your project used to
define the level of Metric1. The second metric, referred to as Metric2, uses an
expression of Avg(Metric1){~+}. Since both metrics use aggregation
functions, Metric2 uses nested aggregation.
Including Metric2 on a report can return incorrect results for the following
scenario:
• A transformation shortcut metric is defined on Metric2.
This property can be set at the metric level only.
Example
You have a report with Week, Sales, and Last Year Sales on the template,
filtered by Month. The default behavior is to calculate the Last Year Sales
with the following SQL. Notice that the date transformation is done for
Month and Week.
The new behavior applies transformation only to the highest common child
when it is applicable to multiple attributes. The SQL is shown in the
following syntax. Notice that the date transformation is done only at the Day
level, because Day is the highest common child of Week and Month. So the
days are transformed, and then you filter for the correct Month, and then
Group by Week.
insert into ZZT6T02D01
select a12.DAT_YYYYWW DAT_YYYYWW,
sum(a11.SALES) SALESLY
from FT1 a11
join TRANS_DAY a12
on (a11.DAT_YYYYMMDD = a12.DAT_YYYYMMLYT)
where a12.DAT_YYYYMM in (200311)
group by a12.DAT_YYYYWW
Zero Check
Cleanup Post Statement 1-5
Description: Appends string after final drop statement.
Possible values: User-defined
Default value: NULL

Data mart SQL to be executed after data mart creation
Description: SQL statements included after the CREATE statement used to create the data mart.
Possible values: User-defined
Default value: NULL

Data mart SQL to be executed before inserting data
Description: SQL statements included before the INSERT statement used to insert data into the data mart.
Possible values: User-defined
Default value: NULL

Data mart SQL to be executed prior to data mart creation
Description: SQL statements included before the CREATE statement used to create the data mart.
Possible values: User-defined
Default value: NULL

Drop Database Connection
Description: Defines whether the database connection is dropped after user-defined SQL is executed on the database.
Possible values:
• Drop database connection after running user-defined SQL
• Do not drop database connection after running user-defined SQL
Default value: Drop database connection after running user-defined SQL

Insert Post Statement 1-5
Description: SQL statements issued after create, after first insert, only for explicit temp table creation. For the first four statements, each contains a single SQL statement. The last statement can contain multiple SQL statements concatenated by “;”.
Possible values: User-defined
Default value: NULL

Insert Pre Statement 1-5
Description: SQL statements issued after create, before first insert, only for explicit temp table creation. For the first four statements, each contains a single SQL statement. The last statement can contain multiple SQL statements concatenated by “;”.
Possible values: User-defined
Default value: NULL

Table Post Statement 1-5
Description: SQL statements issued after creating a new table and insert. For the first four statements, each contains a single SQL statement. The last statement can contain multiple SQL statements concatenated by “;”.
Possible values: User-defined
Default value: NULL

Table Pre Statement 1-5
Description: SQL statements issued before creating a new table. For the first four statements, each contains a single SQL statement. The last statement can contain multiple SQL statements concatenated by “;”.
Possible values: User-defined
Default value: NULL
You can insert the following syntax into strings to populate dynamic
information by the SQL Engine:
• !!! inserts column names, separated by commas (can be used in Table
Pre/Post and Insert Pre/Mid statements).
• !r inserts the report GUID, the unique identifier for the report object that
is also available in the Enterprise Manager application (can be used in all
Pre/Post statements).
• !z inserts the project GUID, the unique identifier for the project (can be
used in all Pre/Post statements).
• !s inserts the user session GUID, the unique identifier for the user’s
session that is also available in the Enterprise Manager application (can
be used in all Pre/Post statements).
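For example, a minimal hedged sketch of a Report Pre Statement that logs the session and report GUIDs to a hypothetical AUDIT_TABLE:
insert into AUDIT_TABLE (SESSION_GUID, REPORT_GUID)
values ('!s', '!r')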
The table below shows the location of some of the most important
VLDB/DSS settings in a Structured Query Language (SQL) query structure.
If the properties in the table are set, the values replace the corresponding tag
in the query:
Query structure
<1>
<2>
CREATE <3> TABLE <4> <5><table name> <6>
(<fields' definition>)
<7>
<8>
<9>(COMMIT)
<10>
INSERT INTO <5><table name><11>
SELECT <12> <fields list>
FROM <tables list>
The Commit after Final Drop property (<21>) is sent to the warehouse
even if the SQL View for the report does not show it.
The Cleanup Post Statement property allows you to insert your own SQL
string after the final DROP statement. There are five settings, numbered 1-5.
Each text string entered in Cleanup Post Statement 1 through Cleanup Post
Statement 4 is executed separately as a single statement. To execute more
than 5 statements, insert multiple statements in Cleanup Post Statement 5,
separating each statement with a “;”. The SQL Engine then breaks it into
individual statements using “;” as the separator and executes the statements
separately.
Example
select A1.STORE_NBR,
max(A1.STORE_DESC)
from LOOKUP_STORE A1
where A1.STORE_NBR = 1
group by A1.STORE_NBR
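A minimal hedged sketch of where a Cleanup Post Statement lands in the generated SQL, assuming Cleanup Post Statement 1 is set to the statement above and ZZMD00 is an illustrative intermediate table name:
drop table ZZMD00

/* Cleanup Post Statement 1, executed after the final drop */
select A1.STORE_NBR,
max(A1.STORE_DESC)
from LOOKUP_STORE A1
where A1.STORE_NBR = 1
group by A1.STORE_NBR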
The Data mart SQL to be executed after data mart creation VLDB property
allows you to define SQL statements that are included after data mart
creation. These SQL statements are included after the CREATE statement for
the data mart table. This allows you to customize the statement used to
create data marts.
The Data mart SQL to be executed before inserting data VLDB property
allows you to define SQL statements issued before inserting data into a data
mart. These SQL statements are included before the INSERT statement for
the data mart table. This allows you to customize the statement used to insert
data into data marts.
The Data mart SQL to be executed prior to data mart creation VLDB
property allows you to define SQL statements that are included before data
mart creation. These SQL statements are included before the CREATE
statement for the data mart table. This allows you to customize the statement
used to create data marts.
The Drop Database Connection VLDB property allows you to define whether
the database connection is dropped after user-defined SQL is executed on
the database. This VLDB property has the following options:
• Drop database connection after running user-defined SQL
• Do not drop database connection after running user-defined SQL
Custom post-report SQL statements can be defined using the Report Post
Statement VLDB properties, which are described in Report Post Statement,
page 782.
Including SQL statements prior to element browsing requests can allow you
to define the priority of element browsing requests to be higher or lower than
the priority for report requests. You can also include any other SQL
statements required to better support element browsing requests. You can
include multiple statements to be executed, separated by a semicolon (;).
The SQL Engine then executes the statements separately.
The Insert Mid Statement property is used to insert your own custom SQL
strings between the first INSERT INTO SELECT statement and subsequent
INSERT INTO SELECT statements inserting data into the same table. There
are five settings in total, numbered 1-5. Each text string entered in Insert Mid
Statement 1 through Insert Mid Statement 4 is executed separately as a
single statement. To execute more than 5 statements, you can put multiple
statements in Insert Mid Statement 5, separating each statement with a “;”.
The SQL Engine then breaks it into individual statements using “;” as the
separator and executes the statements separately.
Examples
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H5YEPO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR
This property is used to insert your custom SQL statements after CREATE
and after the first INSERT INTO SELECT statement for explicit temp table
creation. There are five settings, numbered 1-5. Each text string entered in
Insert Post Statement 1 through Insert Post Statement 4 is executed
separately as a single statement. To execute more than 5 statements, insert
multiple statements in Insert Post Statement 5, separating each statement
with a “;”. The SQL Engine then breaks it into individual statements using “;”
as the separator and executes the statements separately.
Example
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H601PO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
The Insert Pre Statement property is used to insert your custom SQL
statements after CREATE but before the first INSERT INTO SELECT
statement for explicit temp table creation. There are five settings, numbered
1-5. Each text string entered in Insert Pre Statement 1 through Insert Pre
Statement 4 is executed separately as a single statement. To execute more
than 5 statements, insert multiple statements in Insert Pre Statement 5,
separating each statement with a “;”. The SQL Engine then breaks it into
individual statements using “;” as the separator and executes the statements
separately.
Examples
sum(a11.TOT_SLS_DLR) TOTALSALES
into ZZTIS00H60BPO000
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
The Report Post Statement property is used to insert custom SQL statements
after the final SELECT statement but before the DROP statements. There are
five settings, numbered 1-5. Each text string entered in Report Post
Statement 1 through Report Post Statement 4 is executed separately as a
single statement. To execute more than 5 statements, insert multiple
statements in Report Post Statement 5, separating each statement with a “;”.
The SQL Engine then breaks them into individual statements using “;” as the
separator and executes the statements separately.
Example
A3.COL3
from TABLE1 A1,
TABLE2 A2,
TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5
select A1.STORE_NBR,
max(A1.STORE_DESC)
from LOOKUP_STORE
Where A1 A1.STORE_NBR = 1
group by A1.STORE_NBR
The Report Pre Statement property is used to insert custom SQL statements
at the beginning of the Report SQL. There are five settings, numbered 1-5.
Each text string entered in Report Pre Statement 1 through Report Pre
Statement 4 is executed separately as a single statement. To execute more
than 5 statements, insert multiple statements in Report Pre Statement 5,
separating each statement with a “;”. The SQL Engine then breaks them into
individual statements using “;” as the separator and executes the statements
separately.
Example
TABLE5 A2,
TABLE6 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5
select A1.STORE_NBR,
max(A1.STORE_DESC)
from LOOKUP_STORE
Where A1 A1.STORE_NBR = 1
group by A1.STORE_NBR
The Table Post Statement property is used to insert custom SQL statements
after the CREATE TABLE and INSERT INTO statements. There are five
settings, numbered 1-5. Each text string entered in Table Post Statement 1
through Table Post Statement 4 is executed separately as a single statement.
To execute more than 5 statements, insert multiple statements in Table Post
Statement 5, separating each statement with a “;”. The SQL Engine then
breaks them into individual statements using “;” as the separator and
executes the statements separately. This property is applicable when the
Intermediate Table Type VLDB property is set to Permanent or Temporary
table or Views. The custom SQL is applied to every intermediate table or
view.
Example
a11.STORE_NBR STORE_NBR
into #ZZTIS00H63PMQ000
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
The Table Pre Statement property is used to insert custom SQL statements
before the CREATE TABLE statement. There are five settings, numbered 1-5.
Each text string entered in Table Pre Statement 1 through Table Pre
Statement 4 is executed separately as a single statement. To execute more
than 5 statements, insert multiple statements in Table Pre Statement 5,
separating each statement with a “;”. The SQL Engine then breaks them into
individual statements using “;” as the separator and executes the statements
separately. This property is applicable when the Intermediate Table Type
VLDB property is set to Permanent or Temporary table or Views. The custom
SQL is applied to every intermediate table or view.
Example
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
Optimizing queries
The list below summarizes the Query Optimizations VLDB properties; the
default option for each property is noted. Additional details about each
property, including examples where necessary, are provided in the sections
following the list.
• Additional Final Pass Option: Determines whether the Engine calculates an aggregation function and a join in a single pass or in separate passes in the SQL. Options: Final pass CAN do aggregation and join lookup tables in one pass (default); One additional final pass only to join lookup tables.
• Apply Filter Options: Indicates during which pass the report filter is applied. Options: Apply filter only to passes touching warehouse tables (default); Apply filter to passes touching warehouse tables and last join pass, if it does a downward join from the temp table level to the template level; Apply filter to passes touching warehouse tables and last join pass.
• Count Distinct with Partitions: Determines how distinct counts of values are retrieved from partitioned tables. Options: Do not select distinct elements for each partition (default); Select distinct elements for each partition.
• Custom Group Banding Count Method: Helps optimize custom group banding when using the Count Banding method. You can choose to use the standard method that uses the Analytical Engine or database-specific syntax, or you can choose to use case statements or temp tables. Options: Treat banding as normal calculation (default); Use standard case statement syntax; Insert band range to database and join with metric value.
• Custom Group Banding Points Method: Helps optimize custom group banding when using the Points Banding method, with the same choices as above. Options: Treat banding as normal calculation (default); Use standard case statement syntax; Insert band range to database and join with metric value.
• Custom Group Banding Size Method: Helps optimize custom group banding when using the Size Banding method, with the same choices as above. Options: Treat banding as normal calculation (default); Use standard case statement syntax; Insert band range to database and join with metric value.
• Data population for Intelligent Cubes: Defines if and how Intelligent Cube data is normalized to save memory resources. Options: Do not normalize Intelligent Cube data; Normalize Intelligent Cube data in Intelligence Server (default); Normalize Intelligent Cube data in database using Intermediate Table Type; Normalize Intelligent Cube data in database using Fallback Type; Normalize Intelligent Cube data based on dimensions with attribute lookup filtering; Normalize Intelligent Cube data based on dimensions with no attribute lookup filtering.
• Data population for reports: Defines if and how report data is normalized to save memory resources. Options: Do not normalize report data (default); Normalize report data in Intelligence Server; Normalize report data in database using Intermediate Table Type; Normalize report data in database using Fallback Table Type; Normalize report data based on dimensions with attribute lookup filtering.
• Engine Attribute Role Options: Enables or disables the Analytical Engine's ability to treat attributes defined on the same column with the same expression as attribute roles. Options: Enable Engine Attribute Role feature; Disable Engine Attribute Role feature (default).
• MD Partition Prequery Option: Allows you to choose how to handle prequerying the metadata partition. Options: Use count(*) in prequery (default); Use constant in prequery.
• Multiple data source support: Defines which technique to use to support multiple data sources in a project. Options: Use MultiSource Option to access multiple data sources (default); Use database gateway support to access multiple data sources.
• Rank Method if DB Ranking Not Used: Determines how calculation ranking is performed. Options: Use ODBC ranking (MSTR 6 method) (default); Analytical engine performs rank.
• Set Operator Optimization: Allows you to use set operators in sub queries to combine multiple filter qualifications. Set operators are only supported by certain database platforms and with certain sub query types. Options: Disable Set Operator Optimization (default); Enable Set Operator Optimization (if supported by database and Sub Query Type).
• Sub Query Type: Allows you to determine the type of subquery used in engine-generated SQL. Options: WHERE EXISTS (SELECT * ...); WHERE EXISTS (SELECT col1, col2...); WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...) for multiple columns IN; WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...); Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery (default); WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2 ...) for multiple columns IN; Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery.
• Unrelated Filter Options: Determines whether the Analytical Engine should keep or remove the unrelated filter. Options: Remove unrelated filter; Keep unrelated filter (default).
• WHERE Clause Driving Table: Determines the table used for qualifications in the WHERE clause. Options: Use lookup table; Use fact table (default).
The Additional Final Pass Option determines whether the Engine calculates
an aggregation function and a join in a single pass or in separate passes in
the SQL.
It is recommended that you use this property on reports. You must update
the metadata to see the property populated in the metadata.
Example
The following SQL example was created using SQL Server metadata and
warehouse.
From the above warehouse structure, define the following schema objects:
• Salary = Avg(Salary_Dept){~+}
Pass1
select pa1.Mgr_Id Mgr_Id,
max(a11.Mgr_Desc) Mgr_Desc,
avg(pa1.WJXBFS1) WJXBFS1
from #ZZTUW0200LXMD000 pa1
join dbo.Emp_Mgr a11
on (pa1.Mgr_Id = a11.Mgr_Id)
group by pa1.Mgr_Id
Pass2
drop table #ZZTUW0200LXMD000
The problem in the SQL pass above is that the join condition and the
aggregation function are in a single pass. The SQL joins the
ZZTUW0200LXMD000 table to the Emp_Mgr table on column Mgr_Id, but
Mgr_Id is not the primary key of the Emp_Mgr table. Therefore, there
are many rows in the Emp_Mgr table with the same Mgr_Id. This
results in a repeated data problem.
Clearly, if the aggregation and the join are not performed in the same
pass, this problem does not occur.
To resolve this problem, select the option One additional final pass only to
join lookup tables in the VLDB Properties Editor. With this option selected,
the report, when executed, generates the following SQL:
Pass0
select a12.Mgr_Id Mgr_Id,
a11.Dept_Id Dept_Id,
sum(a11.Salary) WJXBFS1
into #ZZTUW01006IMD000
from dbo.Emp_Dept_Salary a11
join dbo.Emp_Mgr a12
on (a11.Emp_Id = a12.Emp_Id)
group by a12.Mgr_Id,
a11.Dept_Id
Pass1
select pa1.Mgr_Id Mgr_Id,
avg(pa1.WJXBFS1) WJXBFS1
into #ZZTUW01006IEA001
from #ZZTUW01006IMD000 pa1
group by pa1.Mgr_Id
Pass2
select distinct pa2.Mgr_Id Mgr_Id,
a11.Mgr_Desc Mgr_Desc,
pa2.WJXBFS1 WJXBFS1
from #ZZTUW01006IEA001 pa2
join dbo.Emp_Mgr a11
on (pa2.Mgr_Id = a11.Mgr_Id)
Pass3
drop table #ZZTUW01006IMD000
Pass4
drop table #ZZTUW01006IEA001
In this SQL, the Engine calculates the aggregation function, which is the
Average function, in a separate pass and performs the join operation in
another pass.
The Apply Filter property has three settings. The common element of all
three settings is that report filters must be applied whenever a warehouse
table is accessed. The settings are
• Apply filter only to passes touching warehouse tables: (default) The
filter is applied only in passes that access warehouse tables; it is not
reapplied in the final join pass.
• Apply filter to passes touching warehouse tables and last join pass,
if it does a downward join from the temporary table level to the
template level: The filter is applied in the final pass if it is a downward
join. For example, you have Store, Region Sales, and Region Cost on the
report, with the filter “store=1.” The intermediate passes calculate the
total sales and cost for Region 1 (to which Store 1 belongs). In the final
pass, a downward join is done from the Region level to the Store level,
using the relationship table LOOKUP_STORE. If the “store = 1” filter in
this pass is not applied, stores that belong to Region 1 are included on the
report. However, you usually expect to see only Store 1 when you use the
filter “store=1.” So, in this situation, you should choose this option to
make sure the filter is applied in the final pass.
• Apply filter to passes touching warehouse tables and last join pass:
The filter in the final pass is always applied, even though it is not a
downward join. This option should be used for special types of data
modeling. For example, you have Region, Store Sales, and Store Cost on
the report, with the filter “Year=2002.” This looks like a normal report
and the final pass joins from Store to Region level. But the schema is
abnormal: certain stores do not always belong to the same region,
perhaps due to rezoning. For example, Store 1 belongs to Region 1 in
2002, and belongs to Region 2 in 2003. To solve this problem, put an
additional column Year in LOOKUP_STORE so that you have the
following data.
Store  Region  Year
1      1       2002
1      2       2003
...
Apply the filter Year=2002 to your report. This filter must be applied in the
final pass to find the correct store-region relationship, even though the final
pass is a normal join instead of a downward join.
Two other VLDB properties, Downward Outer Join Option and Preserve All
Lookup Table Elements, have an option to apply the filter. If you choose
those options, then the filter is applied accordingly, regardless of what the
value of Apply Filter Option is.
The Count Distinct with Partitions property determines how distinct counts
of values are retrieved from partitioned tables. Selecting distinct
elements for each partition can improve performance by reducing the size of
the partition tables before they are combined for the final count distinct
calculation. However, returning the distinct elements of each partition as
an additional step means that a larger number of rows results in more
traffic between Intelligence Server and the database.
The Custom Group Banding Count Method helps optimize custom group
banding when using the Count Banding method. You have the following
options:
• Treat banding as normal calculation: (default) The Analytical Engine
performs the banding as a normal calculation.
• Use standard case statement syntax: Select this option to use case
statements within your database to perform the custom group banding.
• Insert band range to database and join with metric value: Select this
option to use temporary tables to perform the custom group banding.
Examples
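The SQL below is a minimal sketch of the case statement approach; the
intermediate table ZZMQ01, its columns, and the band boundaries are
hypothetical. Each metric value is assigned a band number directly in the
database:
select pa1.CUSTOMER_ID CUSTOMER_ID,
(case
when pa1.WJXBFS1 < 2500 then 1
when pa1.WJXBFS1 < 5000 then 2
else 3
end) DA55
from ZZMQ01 pa1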
The Custom Group Banding Points Method helps optimize custom group
banding when using the Points Banding method. You can choose to use the
standard method that uses the Analytical Engine or database-specific syntax,
or you can choose to use case statements or temp tables.
Examples
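The following sketch shows the band range approach; the band table ZZBAND01,
the intermediate table ZZMQ01, and their values are hypothetical. The band
boundaries are inserted into the database and joined with the metric value:
insert into ZZBAND01 values (1, 0, 2500)
insert into ZZBAND01 values (2, 2500, 5000)
select pa1.CUSTOMER_ID CUSTOMER_ID,
b1.BAND_ID DA55
from ZZMQ01 pa1
join ZZBAND01 b1
on (pa1.WJXBFS1 >= b1.BAND_LO and pa1.WJXBFS1 < b1.BAND_HI)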
The Custom Group Banding Size Method helps optimize custom group
banding when using the Size Banding method. You can choose to use the
standard method that uses the Analytical Engine or database-specific syntax,
or you can choose to use case statements or temp tables.
Examples
select (case
when (pa3.WJXBFS1 >= 0 and pa3.WJXBFS1 < .2) then 1
when (pa3.WJXBFS1 >= .2 and pa3.WJXBFS1 < .4) then 2
when (pa3.WJXBFS1 >= .4 and pa3.WJXBFS1 < .6) then 3
when (pa3.WJXBFS1 >= .6 and pa3.WJXBFS1 < .8) then 4
when (pa3.WJXBFS1 >= .8 and pa3.WJXBFS1 <= 1) then 5
end) as DA57
from ZZMQ002 pa3
The Data population for Intelligent Cubes VLDB property allows you to
define if and how Intelligent Cube data is normalized to save memory
resources.
You can avoid this duplication of data by normalizing the Intelligent Cube
data. In this scenario, the South region description information would only
be stored once even though the region contains five stores. While this saves
memory resources, the act of normalization requires some processing time.
This VLDB property provides the following options to determine if and how
Intelligent Cube data is normalized:
This is a good option if you publish your Intelligent Cubes at times when
Intelligence Server use is low. Normalization can then be performed
without affecting your user community. You can use schedules to support
this strategy. For information on using schedules to publish Intelligent
Cubes, see the OLAP Services Guide.
If you used this option in 9.0.0 and have upgraded to the most recent
version of MicroStrategy, it is recommended that you use a different
Intelligent Cube normalization technique. If the user account for the
data warehouse has permissions to create tables, switch to the option
Normalize Intelligent Cube data in the database. This option is
described below. If the user account does not have permissions to
create tables, switch to the option Normalize Intelligent Cube data
in Intelligence Server.
Normalize Intelligent Cube data in the database: This database
normalization is a good option if attribute data and fact data are
stored in the same table.
To use this option, the user account for the database must have
permissions to create tables.
To use this option, the user account for the database must have
permissions to create tables.
To use this option, the user account for the database must have
permissions to create tables. Additionally, using this option can return
different results than the other Intelligent Cube normalization
techniques. For information on these differences, see Data differences
when normalizing Intelligent Cube data using direct loading below.
Data differences when normalizing Intelligent Cube data using direct loading
The option Direct loading of dimensional data and filtered fact data can
return different results than the other Intelligent Cube normalization
techniques in certain scenarios. Some of these scenarios and the effect that
they have on using direct loading for Intelligent Cube normalization are
described below:
• There are extra rows of data in fact tables that are not available in the
attribute lookup table. In this case the VLDB property Preserve all final
pass result elements (see Preserve all final pass result elements,
page 722) determines how to process the data. The only difference
between direct loading and the other normalization options is that the
option Preserve all final result pass elements and the option Preserve all
elements of final pass result table with respect to lookup table but not
relationship table both preserve the extra rows by adding them to the
lookup table.
• There are extra rows of data in the attribute lookup tables that are not
available in the fact tables. With direct loading, these extra rows are
included. For other normalization techniques, the VLDB property
Preserve all lookup table elements (see Preserve all lookup table
elements, page 728) determines whether or not to include these rows.
The Data population for reports VLDB property allows you to define if and
how report data is normalized to save memory resources.
When a report is executed, the description information for the attributes (all
data mapped to non-ID attribute forms) included on the report is repeated
for every row. For example, a report includes the attributes Region and
Store, with each region having one or more stores. Without performing
normalization, the description information for the Region attribute would be
repeated for every store. If the South region included five stores, then the
information for South would be repeated five times.
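For example, without normalization the report data in memory repeats the
Region description on every row (the data below is illustrative only):
Region  Store
South   Store 1
South   Store 2
South   Store 3
South   Store 4
South   Store 5
With normalization, the description South is stored once, and each store
row carries only the Region ID.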
You can avoid this duplication of data by normalizing the report data. In this
scenario, the South region description information would only be stored
once even though the region contains five stores. While this saves memory
resources, the act of normalization requires some processing time. This
VLDB property provides the following options to determine if and how
report data is normalized:
• The other options available for report data normalization all perform the
normalization within the database. Therefore, these are all good options
if the memory resources of Intelligence Server must be conserved.
If you used this option in 9.0.0 and have upgraded to the most recent
version of MicroStrategy, it is recommended that you use a different
report data normalization technique. If the user account for the data
warehouse has permissions to create tables, switch to the option
Normalize report data in the database. This option is described
below. If the user account does not have permissions to create tables,
switch to the option Normalize report data in Intelligence Server.
To use this option, the user account for the database must have
permissions to create tables.
To use this option, the user account for the database must have
permissions to create tables.
Dimensionality Model
• Use relational model: For all projects, Use relational model is the
default value. With the Use relational model setting, all the
dimensionality (level) resolution is based on the relationship between
attributes.
• Use dimensional model: The Use dimensional model setting is for cases
where attribute relationship dimensionality (level) resolution is different
from dimension-based resolution. There are very few cases when the
setting needs to be changed to Use dimensional model. The following
situations may require the Use dimensional model setting:
Metric Conditionality: You have a report with the Year attribute and
the “Top 3 Stores Dollar Sales” metric on the template and the filters
Store, Region, and Year. Therefore, the metric has a metric
conditionality of “Top 3 Stores.” In MicroStrategy 7.x and later, metric
conditions are set to remove related report filter elements by default;
therefore, with the above report the filters on Store and Region are
ignored, because they are related to the metric conditionality. Year is
not removed because it is not related to the metric conditionality. In
MicroStrategy 6.x and earlier, all filters were used; in MicroStrategy
7.x and later, if you set this property to Use dimensional model, the
filters are not ignored. Note that if you change the default of the
Remove related report filter element option in advanced
conditionality, the Use dimensional model setting does not make a
difference in the report. For more information regarding this
advanced setting, see the Metrics chapter in the Advanced Reporting
Guide.
Metric Dimensionality Resolution: In MicroStrategy 6.x and earlier,
every attribute belonged to a dimension. MicroStrategy 7.x and later
does not have the concept of dimension, but instead has the concept of
metric level. For a project upgraded from 6.x to 7.x, the dimension
information is kept in the metadata. Attributes created in 7.x do not
have this information. For example, you have a report that contains
the Year attribute and the metric “Dollar Sales by Geography.” The
metric is defined with the dimensionality of Geography, which means
the metric is calculated at the level of whatever Geography attribute is
on the template. If there is no attribute on the template that is a
member of the Geography dimension, with MicroStrategy 6.x or
lower, the metric then defaults the metric dimensionality to the
highest level in the Geography dimension, for example Country or All
Geography. In MicroStrategy 7.x and later, the metric dimensionality
is ignored, and therefore defaults to the report level or the level that is
defined for the report. The MicroStrategy 6.x and earlier behavior also
occurs if you set this VLDB property to Use dimensional model and
if the attribute was originally created under a MicroStrategy 6.x
product.
Market and State are both parents of Store. A report has the attributes
Market and State and a Dollar Sales metric with report level
dimensionality. In MicroStrategy 7.x and later, with the Use relational
model setting, the report level (metric dimensionality level) is Market
and State. To choose the best fact table to use to produce this report,
the Analytical Engine considers both of these attributes. With
MicroStrategy 6.x and earlier, or with the Use dimensional model
setting in MicroStrategy 7.x and later, Store is used as the metric
dimensionality level and for determining the best fact table to use.
This is because Store is the highest common descendent between the
two attributes.
The Attribute Role feature was implemented in MicroStrategy 7i. The Engine
Attribute Role Options property allows you to share an actual physical table
to define multiple schema objects. There are two approaches for this feature:
• The first approach is a procedure called table aliasing, where you can
define multiple logical tables in the schema that point to the same
physical table, and then define different attributes and facts on these
logical tables. Table aliasing provides you a little more control and is best
when upgrading or when you have a complex schema. Table aliasing is
described in detail in the MicroStrategy Project Design Guide.
• The second approach is called Engine Attribute Role. With this approach,
rather than defining multiple logical tables, you only need to define
multiple attributes and facts on the same table. The MicroStrategy
Engine automatically detects “multiple roles” of certain attributes and
splits the table into multiple tables internally. There is a limit on the
number of tables into which a table can split. This limit is known as the
Attribute Role limit. This limit is hard coded to 128 tables. If you are a
new MicroStrategy user starting with 7i or later, it is suggested that you
use the automatic detection (Engine Attribute Role) option.
The Analytical Engine splits tables according to the following rules:
• If two attributes are defined on the same column from the same table,
have the same expression, and are not related, it is implied that they are
playing different roles and must be in different tables after the split.
• If two attributes are related to each other, they must stay in the same
table after the split.
Given the diversity of data modeling in projects, the above algorithm cannot
be guaranteed to split tables correctly in all situations. Thus, this property is
added in the VLDB properties to turn the Engine Attribute Role on or off.
When the feature is turned off, the table splitting procedure is bypassed.
select a1.fact_1
from FT1 a1 join LU_DAY a2 on (a1.order_day=a2.day)
join LU_DAY a3 on (a1.ship_day = a3.day)
where a2.year = 2002 and
a3.year = 2003
Note that LU_DAY appears twice in the SQL, playing different “roles.” Also,
note that in this example, the Analytical Engine does not split table FT1
because “Ship Day” and “Order Day” are defined on different columns.
In contrast, suppose fact table FT1 contains columns “day” and “fact_1,”
and “Ship Day” and “Order Day” are defined on column “day.” The Analytical
Engine detects that these
two attributes are defined on the same column and therefore splits FT1 into
FT1(1) and FT1(2), with FT1(1) containing “Ship Day” and “Fact 1”, and FT(2)
containing “Order Day” and “Fact 1.” If you put “Ship Day” and “Order Day”
on the template, as well as a metric calculating “Fact 1,” the Analytical Engine
cannot find such a fact. Although externally, FT1 contains all the necessary
attributes and facts, internally, “Fact 1” only exists on either “Ship Day” or
“Order Day,” but not both. In this case, to make the report work (although
still incorrectly), you should turn OFF the Engine Attribute Role feature.
There is also a limit on the number of tables into which a given table can
be split internally. If your schema exceeds this limit, you should turn the
Engine Attribute Role feature OFF and use table aliasing instead.
There are multiple ways to generate a SELECT statement that checks for the
data, but the performance of the query can differ depending on the platform.
The default value for this property is: “select count(*) …” for all database
platforms, except UDB, which uses “select distinct 1…”
The Multiple data source support VLDB property allows you to choose which
technique to use to support multiple data sources in a project. This VLDB
property has the following options:
• Use MultiSource Option to access multiple data sources:
MultiSource Option is used to access multiple data sources in a project.
• Use database gateway support to access multiple data sources: You can
specify a secondary database instance for a table, which is used to
support database gateways. For example, in your environment you might
have a gateway between two databases such as an Oracle database and a
DB2 database. One of them is the primary database and the other is the
secondary database. The primary database receives all SQL requests and
passes them to the correct database.
The OLAP function support VLDB property defines whether OLAP functions
support backwards compatibility or reflect enhancements to OLAP function
logic. The backwards-compatible behavior does not correctly use multiple
passes for nested or sibling metrics that use OLAP functions. It also does
not correctly apply attributes in the SortBy and BreakBy parameters.
The Rank Method property determines which method to use for ranking
calculations. There are three methods for ranking data, and in some cases,
this property is ignored. The logic is as follows:
2 If the database supports the Rank function, then the ranking is done in
the database.
3 If neither of the above criteria is met, then the Rank Method property
setting is used.
A GROUP BY clause can be omitted when the level of the SELECT clause is
identical to the level of the FROM clause. For example, a SQL statement
that only includes the ID column for the Store attribute in the SELECT
clause and only includes the lookup table for the Store attribute in the
FROM clause does not include any Group By conditions.
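For example, in the following sketch the statement selects only the Store
ID from the Store lookup table; because the SELECT level equals the FROM
level, no GROUP BY is generated:
select a11.STORE_NBR STORE_NBR
from LOOKUP_STORE a11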
The Remove Report Tables For Outer Joins property determines whether an
optimization for outer join processing is enabled or disabled.
However, if you sort or rank report results and some of the values used
for the sort or rank are identical, you may encounter different sort or rank
orders depending on whether you disable or enable this optimization. To
preserve current sorting or ranking orders on identical values, you may
want to disable this optimization.
Set Operator Optimization is not applied to the following types of
qualifications:
• Relationship qualifications
• Metric qualifications at the same level, which are combined into one
set qualification before being applied to the final result pass. This
is more efficient than using a set operator. Consult MicroStrategy
Tech Note TN5200-802-0535 for more details.
Along with the restrictions described above, SQL set operators also depend
on the subquery type and the database platform. For more information on
sub query type, see Sub Query Type, page 832. Set Operator Optimization
can be used with the following sub query types:
If either of the two sub query types that use fallback actions performs a
fallback, Set Operator Optimization is not applied.
Database Intersect Intersect ALL Except Except ALL Union Union ALL
Tandem No No No No No No
The Set Operator Optimization property provides you with the following
options:
• Disable Set Operator Optimization: Operators such as IN and AND NOT
are used in SQL sub queries with multiple filter qualifications.
• Enable Set Operator Optimization (if supported by database and Sub
Query Type): Set operators such as INTERSECT and EXCEPT are used to
combine multiple filter qualifications, when the database platform and
sub query type support them.
The SQL Global Optimization property provides access to level options you
can use to determine whether and how SQL queries are optimized.
The default option for this VLDB property has changed in 9.0.0. For
information on this change, see Upgrading from pre-9.0.x versions of
MicroStrategy, page 831.
You can set the following SQL Global Optimization options to determine the
extent to which SQL queries are optimized:
• Level 0: No optimization: SQL passes are not optimized.
• Level 1: Remove Unused and Duplicate Passes: SQL passes that duplicate
other passes, or whose results are never used, are removed.
• Level 2: Level 1 + Merge Passes with different SELECT: Level 1
optimization takes place, and SQL passes that differ only in their
SELECT lists are consolidated.
• Level 3: Level 2 + Merge Passes, which only hit DB Tables, with
different WHERE: Level 2 optimization takes place, and SQL passes
against database tables with different WHERE clauses are consolidated.
• Level 4: Level 2 + Merge All Passes with Different WHERE: This is the
default level. Level 2 optimization takes place as described above, and all
SQL passes with different WHERE clauses are consolidated when it is
appropriate to do so. While Level 3 only consolidates SQL statements that
access database tables, this option also considers SQL statements that
access temporary tables, derived tables, and common table expressions.
This example demonstrates how some SQL passes are redundant and
therefore removed when the Level 1 or Level 2 SQL Global Optimization
option is selected.
• Year attribute
• Region attribute
• SQL Pass 1: Retrieves the set of categories that satisfy the metric
qualification
SELECT a11.CATEGORY_ID CATEGORY_ID
into #ZZTRH02012JMQ000
FROM YR_CATEGORY_SLS a11
GROUP BY a11.CATEGORY_ID
HAVING sum(a11.TOT_DOLLAR_SALES) > 1000000.0
• SQL Pass 2: Final pass that selects the related report data, but does not
use the results of the first SQL pass
SELECT a13.YEAR_ID YEAR_ID,
a12.REGION_ID REGION_ID,
max(a14.REGION_NAME) REGION_NAME,
sum((a11.TOT_DOLLAR_SALES - a11.TOT_COST))
WJXBFS1
If you select either the Level 1: Remove Unused and Duplicate Passes or
Level 2: Level 1 + Merge Passes with different SELECT option, only one
SQL pass—the second SQL pass described above—is generated because it is
sufficient to satisfy the query on its own. By selecting either option, you
reduce the number of SQL passes from two to one, which can potentially
decrease query time.
Sometimes, two or more passes contain SQL that can be consolidated into a
single SQL pass, as shown in the example below. In such cases, you can select
the Level 2: Level 1 + Merge Passes with different SELECT option to
combine multiple passes from different SELECT statements.
sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
into [ZZTI10200U2MD000]
FROM [CITY_CTR_SLS] a11,
[LU_CALL_CTR] a12
WHERE a11.[CALL_CTR_ID] = a12.[CALL_CTR_ID]
GROUP BY a12.[REGION_ID]
• SQL Pass 3: Final pass that calculates Metric 3 = Metric 1/Metric 2 and
displays the result
SELECT pa11.[REGION_ID] AS REGION_ID,
a13.[REGION_NAME] AS REGION_NAME,
pa11.[WJXBFS1] AS WJXBFS1,
IIF(ISNULL((pa11.[WJXBFS1] / IIF(pa12.[WJXBFS1]
= 0, NULL,
pa12.[WJXBFS1]))), 0,
(pa11.[WJXBFS1] / IIF(pa12.[WJXBFS1] = 0,
NULL,pa12.[WJXBFS1]))) AS WJXBFS2
FROM [ZZTI10200U2MD000] pa11,
[ZZTI10200U2MD001] pa12,
[LU_REGION] a13
WHERE pa11.[REGION_ID] = pa12.[REGION_ID] and
pa11.[REGION_ID] = a13.[REGION_ID]
Because SQL passes 1 and 2 contain almost exactly the same code, they can
be consolidated into one SQL pass. The aggregation in the SELECT list is
the only unique characteristic of each pass; therefore, Pass 1 and 2 can be
combined into just one pass. Pass 3 remains essentially as it is, now
reading the single merged intermediate table.
You can achieve this type of optimization by selecting the Level 2: Level 1 +
Merge Passes with different SELECT option. The SQL that results from
this level of SQL optimization is as follows:
Pass 1:
SELECT a12.[REGION_ID] AS REGION_ID,
count(a11.[CALL_CTR_ID]) AS WJXBFS1,
sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS2
into [ZZTI10200U2MD000]
FROM [CITY_CTR_SLS] a11,
[LU_CALL_CTR] a12
WHERE a11.[CALL_CTR_ID] = a12.[CALL_CTR_ID]
GROUP BY a12.[REGION_ID]
Pass 2:
SELECT pa11.[REGION_ID] AS REGION_ID,
a13.[REGION_NAME] AS REGION_NAME,
pa11.[WJXBFS1] AS WJXBFS1,
IIF(ISNULL((pa11.[WJXBFS1] / IIF(pa11.[WJXBFS2] = 0, NULL,
pa11.[WJXBFS2]))), 0,
(pa11.[WJXBFS1] / IIF(pa11.[WJXBFS2] = 0, NULL,
pa11.[WJXBFS2]))) AS WJXBFS2
FROM [ZZTI10200U2MD000] pa11,
[LU_REGION] a13
WHERE pa11.[REGION_ID] = a13.[REGION_ID]
Sometimes, two or more passes contain SQL with different where clauses
that can be consolidated into a single SQL pass, as shown in the example
below. In such cases, you can select the Level 3: Level 2 + Merge Passes,
which only hit DB Tables, with different WHERE option or the Level 4:
Level 2 + Merge All Passes with Different WHERE option to combine
multiple passes with different WHERE clauses.
• Quarter attribute
• Metric 1 = Web Sales (Calculates sales for the web call center)
• Metric 2 = Non-Web Sales (Calculates sales for all non-web call centers)
Pass 1
create table ZZMD00 (
QUARTER_ID SHORT,
WJXBFS1 DOUBLE)
Pass 2
insert into ZZMD00
select a12.[QUARTER_ID] AS QUARTER_ID,
sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
from [DAY_CTR_SLS] a11,
[LU_DAY] a12
where a11.[DAY_DATE] = a12.[DAY_DATE]
and a11.[CALL_CTR_ID] in (18)
group by a12.[QUARTER_ID]
Pass 3
create table ZZMD01 (
QUARTER_ID SHORT,
WJXBFS1 DOUBLE)
Pass 4
insert into ZZMD01
select a12.[QUARTER_ID] AS QUARTER_ID,
sum(a11.[TOT_DOLLAR_SALES]) AS WJXBFS1
from [DAY_CTR_SLS] a11,
[LU_DAY] a12
where a11.[DAY_DATE] = a12.[DAY_DATE]
and a11.[CALL_CTR_ID] not in (18)
group by a12.[QUARTER_ID]
Pass 5
select pa11.[QUARTER_ID] AS QUARTER_ID,
a13.[QUARTER_DESC] AS QUARTER_DESC0,
pa11.[WJXBFS1] AS WJXBFS1,
pa12.[WJXBFS1] AS WJXBFS2
from [ZZMD00] pa11,
[ZZMD01] pa12,
[LU_QUARTER] a13
where pa11.[QUARTER_ID] = pa12.[QUARTER_ID] and
pa11.[QUARTER_ID] = a13.[QUARTER_ID]
Pass 2 calculates the Web Sales and Pass 4 calculates all non-Web Sales.
Because SQL passes 2 and 4 contain almost exactly the same SQL, they can
be consolidated into one SQL pass. The call center qualification in the
WHERE clause is the only unique characteristic of each pass; therefore,
Pass 2 and 4 can be combined into just one pass.
You can achieve this type of optimization by selecting the Level 3: Level 2 +
Merge Passes, which only hit DB Tables, with different WHERE option
or the Level 4: Level 2 + Merge All Passes with Different WHERE option.
The SQL that results from this level of SQL optimization is as follows:
Pass 1
create table ZZT6C00009GMD000 (
QUARTER_ID SHORT,
WJXBFS1 DOUBLE,
GODWFLAG1_1 LONG,
WJXBFS2 DOUBLE,
GODWFLAG2_1 LONG)
Pass 2
insert into ZZT6C00009GMD000
select a12.[QUARTER_ID] AS QUARTER_ID,
sum(iif(a11.[CALL_CTR_ID] in (18), a11.[TOT_DOLLAR_SALES],
NULL)) AS WJXBFS1,
max(iif(a11.[CALL_CTR_ID] in (18), 1, 0)) AS GODWFLAG1_1,
sum(iif(a11.[CALL_CTR_ID] not in (18), a11.[TOT_DOLLAR_SALES],
NULL)) AS WJXBFS2,
max(iif(a11.[CALL_CTR_ID] not in (18), 1, 0)) AS GODWFLAG2_1
from [DAY_CTR_SLS] a11,
[LU_DAY] a12
where a11.[DAY_DATE] = a12.[DAY_DATE]
group by a12.[QUARTER_ID]
Pass 3
select pa12.[QUARTER_ID] AS QUARTER_ID,
a13.[QUARTER_DESC] AS QUARTER_DESC0,
pa12.[WJXBFS1] AS WJXBFS1,
pa12.[WJXBFS2] AS WJXBFS2
from [ZZT6C00009GMD000] pa12,
[LU_QUARTER] a13
where pa12.[QUARTER_ID] = a13.[QUARTER_ID]
and (pa12.[GODWFLAG1_1] = 1
and pa12.[GODWFLAG2_1] = 1)
The default option for the SQL Global Optimization VLDB property changed
in MicroStrategy 9.0.0. In pre-9.0.x versions of MicroStrategy, the default
option for this VLDB property was Level 2: Level 1 + Merge Passes with
different SELECT. Starting with MicroStrategy 9.0.0, the default option for
this VLDB property is Level 4: Level 2 + Merge All Passes with Different
WHERE.
When projects are upgraded to 9.0.x, if you have defined this VLDB property
to use the default setting, this new default is applied. This change improves
performance for the majority of reporting scenarios. However, the new
default can cause certain reports to become unresponsive or fail with
time-out errors. For example, reports that contain custom groups or a large
number of conditional metrics may encounter performance issues with this
new default.
To resolve this issue for a report, after completing an upgrade, modify the
SQL Global Optimization VLDB property for the report to use the option
Level 2: Level 1 + Merge Passes with different SELECT.
The Sub Query Type property tells the Analytical Engine what type of syntax
to use when generating a subquery. A subquery is a secondary SELECT
statement in the WHERE clause of the primary SQL statement.
The Sub Query Type property is database specific, due to the fact that
different databases have different syntax support for subqueries. Some
databases can have improved query building and performance depending on
the subquery type used. For example, it is more efficient to use a subquery
that only selects the needed columns rather than selecting every column.
Subqueries can also be more efficient by using the IN clause rather than
using the EXISTS function.
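For example, the two subquery styles look as follows; the fact table
STORE_FACT and the intermediate table ZZT1 are hypothetical:
/* WHERE EXISTS (SELECT * ...) */
select a11.STORE_NBR
from STORE_FACT a11
where exists
(select * from ZZT1 s1
where s1.STORE_NBR = a11.STORE_NBR)
/* WHERE COL1 IN (SELECT s1.COL1 ...) */
select a11.STORE_NBR
from STORE_FACT a11
where a11.STORE_NBR in
(select s1.STORE_NBR from ZZT1 s1)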
Database defaults:
DB2 UDB: Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery
Microsoft Access 2000/2002/2003: Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery
Microsoft Excel 2000/2003: Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery
RedBrick: WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2...) for multiple columns IN
Teradata: Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery
Notice that some options have a fallback action. In some scenarios, the
selected option does not work, so the SQL Engine must fall back to an
approach that always works. The typical scenario for falling back is when
multiple columns are needed in the IN list, but the database supports
neither multiple columns in the IN list nor correlated subqueries.
For a further discussion of the Sub Query Type VLDB property, refer
to MicroStrategy Tech Note TN5200-75x-0539.
Examples
• No attributes on the report grid or the Report Objects of the report are
related to the transformation’s member attribute. For example, if a
transformation is defined on the attribute Year of the Time hierarchy, no
attributes in the Time hierarchy can be included on the report grid or
Report Objects.
• The filter of the report does contain attributes that are related to the
transformation’s member attribute. For example, if a transformation is
defined on the attribute Year of the Time hierarchy, a filter on another
attribute in the Time hierarchy is included on the report.
Example
Statement 1
on (a13.SUBCAT_ID = a14.SUBCAT_ID)
join LU_CATEGORY a15
on (a14.CATEGORY_ID = a15.CATEGORY_ID)
where a12.DAY_DATE = '08/31/2001'
group by a14.CATEGORY_ID
Statement 2
MicroStrategy contains the logic to ignore filters that are not related to the
template attributes, to avoid unnecessary Cartesian joins. However, there are
some cases where a relationship is created that the Engine should not ignore.
The Unrelated Filter Options property tells the Analytical Engine to remove
or keep unrelated filters. It allows unrelated filters to be applied in report
resolution, but not in all cases.
Examples
• Template: Year
In this case, the filter is removed regardless of this VLDB property setting,
assuming there is no relationship between Country and Year defined in the
schema.
• Report Filters
• Template: Year
For the setting to work, filter FL02 can come from joint element list (as
above), Metric Qualification/Relationship Filter with Country and Quarter as
the output levels, or Report as Filter with Country and Quarter on the
template of Report as Filter.
The Where Clause Driving Table property tells the Analytical Engine what
type of column is preferred in a qualification of a WHERE clause when
generating SQL. One SQL pass usually joins fact tables and lookup tables on
certain ID columns. When a qualification is defined on such a column, the
Analytical Engine can use the column in either the fact table or the lookup
table. In certain databases, like Teradata and RedBrick, a qualification on the
lookup table can achieve better performance. By setting the Where Clause
Driving Table property to Use Lookup Table, the Analytical Engine always
tries to pick the column from the lookup table.
If Use lookup table is selected, but there is no lookup table in the
FROM clause for the column being qualified on, the Analytical Engine
does not add the lookup table to the FROM clause. To make sure that
a qualification is done on a lookup table column, the DSS Star Join
property should be set to use Partial star join.
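For example, given a pass that joins a fact table a21 to LOOKUP_STORE a22
on STORE_NBR, the qualification can be written against either table (a
sketch; the fact table name is illustrative):
/* Use fact table (default) */
where a21.STORE_NBR = 1
/* Use lookup table */
where a22.STORE_NBR = 1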
The list below summarizes additional VLDB properties that customize how
SELECT and INSERT statements are generated; the default option for each
property is noted.
• Attribute Form Selection Option for Intermediate Pass: Allows you to choose whether to select attribute forms that are on the template in the intermediate pass (if available). Options: Select ID form only (default); Select ID and other forms if they are on template and available in existing join tree.
• Attribute Selection Option for Intermediate Pass: Allows you to choose whether to select additional attributes (usually parent attributes) needed on the template, as long as they are in the current join tree and their child attributes have already been selected by the Attribute Form Selection Option for Intermediate Pass. Options: Select only the attributes needed (default); Select other attributes in the current join tree if they are on template and their child attributes have already been selected.
• Constant Column Mode: Allows you to choose whether to use a GROUP BY and how the GROUP BY should be constructed when working with a column that is a constant. Options: Pure select, no group by (default); Use max, no group by; Group by column (expression); Group by alias; Group by position.
• Custom Group Interaction With the Report Filter: Allows you to define how a report filter interacts with a custom group. Options: No interaction - static custom group (default); Apply report filter to custom group; Apply report filter to custom group, but ignore related elements from the report filter.
• Datamart Column Order: Allows you to determine the order in which datamart columns are created. Options: Columns created in order based on attribute weight (default); Columns created in order in which they appear on the template.
• Default Attribute Weight: Determines how attributes that are not in the attribute weights list are treated. Options: Lowest weight (default); Highest weight.
• Disable Prefix in WH Partition Table: Allows you to choose whether or not to use the prefix in partition queries. The prefix is always used with pre-queries. Options: Use prefix in both warehouse partition pre-query and partition query (default); Use prefix in warehouse partition prequery but not in partition query.
• Long integer support: Determines whether to map long integers of a certain length as BigInt data types when MicroStrategy creates tables in a database. Options: Do not use BigInt (default); Up to 18 digits; Up to 19 digits.
• Merge Same Metric Expression Option: Determines how to handle metrics that have the same definition. Options: Merge same metric expression (default); Do not merge same metric expression.
• Select Post String: Defines the custom SQL string to be appended to all SELECT statements, for example, FOR FETCH ONLY. Options: User-defined. Default: NULL.
• SQL Date Format: Sets the format for dates in engine-generated SQL. Options: User-defined. Default: yyyy-mm-dd.
• SQL Decimal Separator: Used to change the decimal separator in SQL statements from a decimal point to a comma, for international database users. Options: Use “.” as decimal separator (ANSI standard) (default); Use “,” as decimal separator.
• UNION Multiple INSERT: Allows the Analytical Engine to UNION multiple INSERT statements into the same temporary table. Options: Do not use UNION (default); Use UNION.
Example
A report template contains the attributes Region and Store, and metrics M1
and M2. M1 uses the fact table FT1, which contains Store_ID, Store_Desc,
Region_ID, Region_Desc, and F1. M2 uses the fact table FT2, which contains
Store_ID, Store_Desc, Region_ID, Region_Desc, and F2. With the normal
SQL Engine algorithm, the intermediate pass that calculates M1 selects
Store_ID and F1, the intermediate pass that calculates M2 selects Store_ID
and F2. Then the final pass joins these two intermediate tables together. But
that is not enough. Since Region is on the template, it should join upward to
the region level and find the Region_Desc form. This can be done by joining
either FT1 or FT2 in the final pass. So with the original algorithm, either FT1
or FT2 is being accessed twice. If these tables are big, and they usually are,
the performance can be very slow. On the other hand, if Store_ID,
Store_Desc, Region_ID, and Region_Desc are picked up in the intermediate
passes, FT1 or FT2 does not need to be joined in the final pass, thus
boosting performance.
For this reason, the following two properties are available in
MicroStrategy: the Attribute Form Selection Option for Intermediate Pass
and the Attribute Selection Option for Intermediate Pass, both described
above. Each property has two values, and the default behavior is the
original algorithm: the SQL Engine does not join additional tables to
select more attributes or forms, so for intermediate passes, the number of
tables to be joined is the same as when the property is disabled.
The Bulk Insert String property inserts the string provided in front of the
INSERT statement. For Teradata, this property is set to “;” to increase query
performance. The string is appended only for the INSERT INTO SELECT
statements and not the INSERT INTO VALUES statement that is generated
by the Analytical Engine. Since the string is appended for the INSERT INTO
SELECT statement, this property takes effect only during explicit,
permanent, or temporary table creation.
Example
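A minimal sketch for Teradata, assuming the property is set to “;” and
using hypothetical table names; the string is prepended to the INSERT INTO
SELECT statement:
;insert into ZZTIS00A1
select a11.STORE_NBR, sum(a11.TOT_SLS_DLR)
from STORE_ITEM a11
group by a11.STORE_NBR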
Constant Column Mode allows you to choose whether or not to use a GROUP
BY and how the GROUP BY should be constructed when working with a
column that is a constant. The GROUP BY can be constructed with the
column, alias, position numbers, or column expression. Most users do not
need to change this setting. It is available to be used with the new Generic
DBMS object and if you want to use a different GROUP BY method when
working with constant columns.
Examples
GROUP BY alias
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q1_2002 a11
group by a11.QUARTER_ID, XKYCGT
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q2_2002 a11
group by a11.QUARTER_ID, XKYCGT
GROUP BY position
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 0 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q1_2002 a11
group by a11.QUARTER_ID, 2
insert into ZZTP00
select a11.QUARTER_ID QUARTER_ID, 1 XKYCGT,
sum(a11.REG_SLS_DLR) WJXBFS1
from SALES_Q2_2002 a11
group by a11.QUARTER_ID, 2
The Custom Group Interaction With the Report Filter VLDB property allows
you to define how a report filter interacts with a custom group.
In this scenario, the report filter is evaluated after the custom group. If the
same customer that has a total of $7,500 only had $2,500 in 2007, then the
report would only display $2,500 for that customer. However, the customer
would still be in the $5,000 to $10,000 in revenue range because the custom
group did not account for the report filter.
You can define report filter and custom group interaction to avoid this
scenario. This VLDB property has the following options:
• No interaction - static custom group: (default) The custom group is
evaluated without regard to the report filter.
• Apply report filter to custom group: The report filter is applied when
the custom group is evaluated.
• Apply report filter to custom group, but ignore related elements from
the report filter: The report filter is applied when the custom group is
evaluated, except for filter elements that are related to the custom
group.
For information on custom groups and defining these options for a custom
group, see the Advanced Reporting Guide.
Datamart Column Order
This property allows you to determine the order in which datamart columns
are created when you configure a datamart from the information in the
columns and rows of a report.
Date Pattern
The Date Pattern property is used to add or alter a syntax pattern for
handling date columns.
Default Attribute Weight
You can access the attribute weights list from the Project
Configuration Editor. In the Project Configuration Editor, collapse
Report Definition and select SQL generation. From the Attribute
weights section, select Modify to open the attribute weights list.
The attribute weights list allows you to change the order of attributes used in
the SELECT clause of a query. For example, suppose the Region attribute is
placed higher on the attribute weights list than the Customer State attribute.
When the SQL for a report containing both attributes is generated, Region is
referenced in the SQL before Customer State. However, suppose another
attribute, Quarter, also appears on the report template but is not included in
the attribute weights list.
In this case, you can select either of the following options within the Default
Attribute Weight property to determine whether Quarter is considered
highest or lowest on the attribute weights list:
• Lowest: (default) When you select this option, those attributes not in the
attribute weights list are treated as the lightest weight. Using the example
above, with this setting selected, Quarter is considered to have a lighter
attribute weight than the other two attributes. Therefore, it is referenced
after Region and Customer State in the SELECT statement.
• Highest: When you select this option, those attributes not in the attribute
weights list are treated as the highest weight. Using the example above,
with this setting selected, Quarter is considered to have a higher attribute
weight than the other two attributes. Therefore, it is referenced before
Region and Customer State in the SELECT statement.
For those projects that need their own prefix in the PBT, the MicroStrategy
6.x approach (using the DDBSOURCE column) no longer works due to
architectural changes. The solution is to store the prefix along with the PBT
name in the column PBTNAME of the partition mapping table. So instead of
storing PBT1, PBT2, and so on, you can put in DB1.PBT1, DB2.PBT2, and so
on. This effectively adds a different prefix to different PBTs by treating the
entire string as the partition base table name.
The solution above works in most cases but does not work if the PMT needs
its own prefix. For example, if the PMT has the prefix “DB0.”, the prequery
works fine. However, in the partition query, this prefix is added to what is
stored in the PBTNAME column, so it gets DB0.DB1.PBT1, DB0.DB1.PBT2,
and so on. This is not what you want to happen. This new VLDB property is
used to disable the prefix in the WH partition table. When this property is
turned on, the partition query no longer shares the prefix from the PMT.
Instead, the PBTNAME column (DB1.PBT1, DB2.PBT2, and so on) is used as
the full PBT name.
Even when this property is turned ON, the partition prequery still
applies a prefix, if there is one.
• If there is COUNT (DISTINCT …) and the database does not support it,
the Analytical Engine does a SELECT DISTINCT pass and then a
COUNT(*) pass, as shown in the sketch after this list.
• If for certain selected column data types, the database does not allow
DISTINCT or GROUP BY, the Analytical Engine does not do it.
• If the select level is the same as the table key level and the table’s true key
property is selected, the Analytical Engine does not issue a DISTINCT.
When none of the above conditions are met, the Analytical Engine uses this
property.
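For example, when the database does not support COUNT (DISTINCT …), the
two-pass fallback looks like the following sketch (the table names are
hypothetical):
/* Pass 1: SELECT DISTINCT */
select distinct a11.STORE_NBR
into ZZTD01
from STORE_FACT a11
/* Pass 2: COUNT(*) over the distinct rows */
select count(*)
from ZZTD01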
GROUP BY ID Attribute
The code fragment following each description replaces the section named
group by ID in the following sample SQL statement.
select a22.STORE_NBR STORE_NBR,
a22.MARKET_NBR * 10 MARKET_ID,
sum(a21.REG_SLS_DLR) WJXBFS1
from STORE_DIVISION a21
join LOOKUP_STORE a22
on (a21.STORE_NBR = a22.STORE_NBR)
where a22.STORE_NBR = 1
group by a22.STORE_NBR, group by ID
If you do not want non-ID columns in the GROUP BY, you can choose to use a MAX when
the column is selected so that it is not used in the GROUP BY.
Examples
Use Max
select a11.MARKET_NBR MARKET_NBR,
max(a14.MARKET_DESC) MARKET_DESC,
a11.CLASS_NBR CLASS_NBR,
max(a13.CLASS_DESC) CLASS_DESC,
a12.YEAR_ID YEAR_ID,
max(a15.YEAR_DESC) YEAR_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from MARKET_CLASS a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join LOOKUP_CLASS a13
on (a11.CLASS_NBR = a13.CLASS_NBR)
join LOOKUP_MARKET a14
on (a11.MARKET_NBR = a14.MARKET_NBR)
join LOOKUP_YEAR a15
on (a12.YEAR_ID = a15.YEAR_ID)
group by a11.MARKET_NBR, a11.CLASS_NBR,
a12.YEAR_ID
Use Group by
select a11.MARKET_NBR MARKET_NBR,
a14.MARKET_DESC MARKET_DESC,
a11.CLASS_NBR CLASS_NBR,
a13.CLASS_DESC CLASS_DESC,
a12.YEAR_ID YEAR_ID,
a15.YEAR_DESC YEAR_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from MARKET_CLASS a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join LOOKUP_CLASS a13
on (a11.CLASS_NBR = a13.CLASS_NBR)
join LOOKUP_MARKET a14
on (a11.MARKET_NBR = a14.MARKET_NBR)
join LOOKUP_YEAR a15
on (a12.YEAR_ID = a15.YEAR_ID)
group by a11.MARKET_NBR,
a14.MARKET_DESC,
a11.CLASS_NBR,
a13.CLASS_DESC,
a12.YEAR_ID,
a15.YEAR_DESC
The Insert Post String property allows you to define a custom string to be
inserted at the end of the INSERT statements.
Example
Insert into TABLENAME
select A1.COL1, A2.COL2, A3.COL3
from TABLE1 A1, TABLE2 A2, TABLE3 A3
/* Insert Post String */
The Insert Table Option property allows you to define a custom string to be
inserted after the table name in the insert statements. This is analogous to
table option.
Example
Insert into TABLENAME /* Insert Table Option */
select A1.COL1, A2.COL2, A3.COL3
from TABLE1 A1, TABLE2 A2, TABLE3 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5
With this VLDB property you can determine whether long integers are
mapped to a BigInt data type when MicroStrategy creates tables in the
database. A datamart is an example of a MicroStrategy feature that requires
MicroStrategy to create tables in a database.
When long integers from databases are integrated into MicroStrategy, the
BigDecimal data type is used to define the data in MicroStrategy. Long
integers can be of various database data types such as Number, Decimal, and
BigInt.
In the case of BigInt, when data that uses the BigInt data type is integrated
into MicroStrategy as a BigDecimal, this can cause a data type mismatch
when MicroStrategy creates a table in the database. MicroStrategy does not
use the BigInt data type by default when creating tables. This can cause a
data type mismatch between the originating database table that contained
the BigInt and the database table created by MicroStrategy.
You can use the following VLDB settings to support BigInt data types:
• Do not use BigInt: Long integers are not mapped as BigInt data types
when MicroStrategy creates tables in the database. This is the default
behavior.
If you use BigInt data types, this can cause a data type mismatch between
the originating database table that contained the BigInt and the database
table created by MicroStrategy.
This setting is a good option if you can ensure that your BigInt data uses
no more than 18 digits. The maximum number of digits that a BigInt can
use is 19. With this option, if your database contains BigInt data that uses
all 19 digits, it is not mapped as a BigInt data type when MicroStrategy
creates a table in the database.
However, using this setting requires you to manually modify the column
data type mapped to your BigInt data. You can achieve this by creating a
column alias for the column of data in the Attribute Editor or Fact Editor
in MicroStrategy. The column alias must have a data type of BigDecimal,
a precision of 18, and a scale of zero. For steps to create a column alias to
modify a column data type, see the MicroStrategy Project Design Guide.
• Up to 19 digits: Long integers that have up to 19 digits are converted into
BigInt data types.
However, this option can cause an overflow error if you have long
integers that use all 19 digits and whose values are greater than the
maximum allowed for a BigInt (9,223,372,036,854,775,807).
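To make the difference concrete, the following is a sketch (the table and
column names are hypothetical, and the non-BigInt mapping shown assumes
the BigDecimal precision of 18 described above) of a table MicroStrategy
might create under each setting:
/* Do not use BigInt (default) */
create table DM_ORDERS (
ORDER_ID DECIMAL(18, 0))
/* Up to 19 digits */
create table DM_ORDERS (
ORDER_ID BIGINT)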
The Max Digits in Constant property controls the number of significant digits
that get inserted into columns during Analytical Engine inserts. This is only
applicable to real numbers and not to integers.
Examples
Database-specific settings
Database     Max Digits in Constant
SQL Server   28
Teradata     18
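As a minimal sketch (the table name and values are hypothetical), with Max
Digits in Constant set to 6, a real number inserted by the Analytical Engine
would be limited to six significant digits:
insert into ZZTMP00 values (1, 12.3457)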
The Merge Same Metric Expression Option VLDB property allows you to
determine whether the SQL Engine should merge metrics that have the same
definition, or whether it should process the metrics separately.
The Select Post String property allows you to define a custom string to be
inserted at the end of all SELECT statements generated by the Analytical
Engine.
To include a post string only on the final SELECT statement you should use
the Select Statement Post String VLDB property, which is described in Select
Statement Post String, page 860.
Example
The SQL statement shown below displays an example of where the Select
Post String and Select Statement Post String VLDB properties would include
their SQL statements.
with gopa1 as
(select a12.REGION_ID REGION_ID
from CITY_CTR_SLS a11
join LU_CALL_CTR a12
on (a11.CALL_CTR_ID = a12.CALL_CTR_ID)
group by a12.REGION_ID
having sum(a11.TOT_UNIT_SALES) = 7.0
/* select post string */ )
select a11.REGION_ID REGION_ID,
a14.REGION_NAME REGION_NAME0,
sum(a11.TOT_DOLLAR_SALES) Revenue
from STATE_SUBCATEG_REGION_SLS a11
join gopa1 pa12
on (a11.REGION_ID = pa12.REGION_ID)
join LU_REGION a14
on (a11.REGION_ID = a14.REGION_ID)
group by a11.REGION_ID,
a14.REGION_NAME
/* select statement post string */
The Select Statement Post String VLDB property allows you to define a
custom SQL string to be inserted at the end of the final SELECT statement.
This can be helpful if you use common table expressions with an IBM DB2
database. These common table expressions do not support certain custom
SQL strings. This VLDB property allows you to apply the custom SQL string
to only the final SELECT statement which does not use a common table
expression.
Example
The SQL statement shown below displays an example of where the Select
Post String and Select Statement Post String VLDB properties include their
SQL statements.
with gopa1 as
(select a12.REGION_ID REGION_ID
from CITY_CTR_SLS a11
join LU_CALL_CTR a12
on (a11.CALL_CTR_ID = a12.CALL_CTR_ID)
group by a12.REGION_ID
having sum(a11.TOT_UNIT_SALES) = 7.0
/* select post string */ )
select a11.REGION_ID REGION_ID,
a14.REGION_NAME REGION_NAME0,
sum(a11.TOT_DOLLAR_SALES) Revenue
from STATE_SUBCATEG_REGION_SLS a11
join gopa1 pa12
on (a11.REGION_ID = pa12.REGION_ID)
join LU_REGION a14
on (a11.REGION_ID = a14.REGION_ID)
group by a11.REGION_ID,
a14.REGION_NAME
/* select statement post string */
The SQL Data Format property specifies the format of the date string literal
in the SQL statements when date-related qualifications are present in the
report.
Example
Database   Date format
Default    yyyy-mm-dd
Oracle     dd-mmm-yy
Teradata   yyyy/mm/dd
The SQL Decimal Separator property specifies whether a “.” or “,” is used as a
decimal separator. This property is used for non-English databases that use
commas as the decimal separator.
For example, with the comma separator, a decimal constant appears in the
generated SQL as 123,45 rather than 123.45.
SQL Hint
The SQL Hint property is used for the Oracle SQL Hint pattern. This string is
placed after the SELECT word in the Select statement. This property can be
used to insert any SQL string that makes sense after the SELECT in a Select
statement, but it is provided specifically for Oracle SQL Hints.
Example
SQL Hint = /* FULL */
Select /* FULL */ A1.STORE_NBR
From LOOKUP_STORE A1
Where A1.STORE_NBR = 1
Group by A1.STORE_NBR
The SQL Timestamp Format property allows you to determine the format of
the timestamp literal accepted in the WHERE clause. This is a
database-specific property; some examples are shown in the table below.
Example
Default yyyy-mm-dd hh:nn:ss
The Union Multiple Insert property allows the Analytical Engine to UNION
multiple INSERT statements into the same temporary table. This is a
database-specific property. Some databases do not support the use of
UNION in this way; database-specific default settings are provided for the
databases listed below, and a sketch of the combined SQL follows the list.
• SQL Server
• Teradata
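For illustration (a sketch; the table and column names are hypothetical),
instead of issuing two separate INSERT statements,
insert into ZZTMP00 select STORE_NBR from TABLE1
insert into ZZTMP00 select STORE_NBR from TABLE2
the Analytical Engine can issue a single statement:
insert into ZZTMP00
select STORE_NBR from TABLE1
union
select STORE_NBR from TABLE2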
Commit After Final Drop: determines whether to issue a COMMIT statement
after the final DROP statement. Options: No Commit after the final Drop
statement; Commit after the final Drop statement. Default: No Commit after
the final Drop statement.
Drop Temp Table Method: determines when to drop an intermediate object.
Options: Drop after final pass; Do nothing; Truncate table then drop after
final pass. Default: Drop after final pass.
Fallback Table Type: determines the type of table that is generated if the
Analytical Engine cannot generate a derived table or common table. Options:
Permanent table; True temporary table. Default: Permanent table.
Table Creation Type: determines the method to create an intermediate
table. Options: Explicit table; Implicit table. Default: Explicit table.
To populate dynamic information by the Analytical Engine, you can insert
special syntax into the Table Prefix and Table Space strings.
Alias Pattern
The Alias Pattern property allows you to alter the pattern for aliasing column
names. Most databases do not need this pattern, because their column
aliases simply follow the column name with only a space between them.
However, Microsoft Access needs an AS between the column name and the
given column alias. This pattern is automatically set for Microsoft Access
users. This property is provided for customers using the Generic DBMS
object because some databases may need the AS or another pattern for
column aliasing.
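For illustration (a sketch reusing a column from earlier examples), the same
select-list item under the default pattern and under the AS pattern:
select a11.STORE_NBR STORE_NBR from LOOKUP_STORE a11
select a11.STORE_NBR AS STORE_NBR from LOOKUP_STORE a11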
Attribute ID Constraint
This property is available at the attribute level. You can access this property
by opening the Attribute Editor, selecting the Tools menu, then choosing
VLDB Properties.
When creating intermediate tables in the explicit mode, you can specify the
NOT NULL/NULL constraint during the table creation phase. This takes
effect only when permanent or temporary tables are created in the explicit
table creation mode. Furthermore, it applies only to the attribute columns in
the intermediate tables.
Example
NOT NULL
create table ZZTIS003HHUMQ000 (
DEPARTMENT_NBR NUMBER(10, 0) NOT NULL,
STORE_NBR NUMBER(10, 0) NOT NULL)
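For comparison, a sketch of the same table when the NOT NULL constraint
is not applied (mirroring the example above):
create table ZZTIS003HHUMQ000 (
DEPARTMENT_NBR NUMBER(10, 0),
STORE_NBR NUMBER(10, 0))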
The Character Column Option and National Character Column Option VLDB
properties allow you to support the character sets used in Teradata. Teradata
allows character sets to be defined on a column-by-column basis. For
example, one column in Teradata may use a Unicode character set, while
another column uses a Latin character set.
MicroStrategy uses two sets of data types to support multiple character sets.
The Char and VarChar data types are used to support a character set. The
NChar and NVarChar data types are used to support a different character set
than the one supported by Char and VarChar. The NChar and NVarChar data
types are commonly used to support the Unicode character set while Char
and VarChar data types are used to support another character set.
You can support the character sets in your Teradata database using these
VLDB properties:
• The Character Column Option VLDB property defines the character set
used for columns that use the MicroStrategy Char or VarChar data types.
If left empty, these data types use the default character set for the
Teradata database user.
• The National Character Column Option VLDB property defines the
character set used for columns that use the MicroStrategy NChar or
NVarChar data types. If you use the Unicode character set and it is not
the default character set for the Teradata database user, you should
define the NChar and NVarChar data types to use the Unicode character
set.
For example, your Teradata database uses the Latin and Unicode character
sets, and the default character set for your Teradata database is Latin. In this
scenario you should leave Character Column Option empty so that it uses the
default of Latin. You should also define National Character Column Option as
CHARACTER SET UNICODE so that NChar and NVarChar data types support
the Unicode data for your Teradata database.
To extend this example, assume that your Teradata database uses the Latin
and Unicode character sets, but the default character set for your Teradata
database is Unicode. In this scenario you should leave National Character
Column Option empty so that it uses the default of Unicode. You should also
define Character Column Option as CHARACTER SET LATIN so that Char and
VarChar data types support the Latin data for your Teradata database.
The Character Column Option and National Character Column Option VLDB
properties can also support the scenario where two character sets are used,
and Unicode is not one of these character sets. For this scenario, you can use
these two VLDB properties to define which MicroStrategy data types support
the character sets of your Teradata database.
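As an illustration of where these strings appear in the SQL (a sketch; the
table and column names are hypothetical), with Character Column Option set
to CHARACTER SET LATIN and National Character Column Option set to
CHARACTER SET UNICODE, a created table could look like:
create table ZZTMP01 (
STORE_NBR INTEGER,
STORE_DESC VARCHAR(50) CHARACTER SET LATIN,
STORE_DESC_INTL VARCHAR(50) CHARACTER SET UNICODE)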
Column Pattern
The Column Pattern property allows you to alter the pattern for column
names. Most databases do not need this pattern altered. However, if you are
using a case-sensitive database and need to add double quotes around the
column name, this property allows you to do that.
Example
The standard column pattern is #0.#1. If double quotes are needed, the
pattern changes to:
"#0.#1"
The Commit After Final Drop property determines whether or not to issue a
COMMIT statement after the final DROP statement.
Commit Level
The Commit Level property is used to issue COMMIT statements after the
Data Definition Language (DDL) and Data Manipulation Language (DML)
statements. When this property is used in conjunction with the INSERT MID
Statement, INSERT PRE Statement, or TABLE POST Statement VLDB
properties, the COMMIT is issued before any of the custom SQL passes
specified in the statements are executed. The only DDL statement that is
issued after the COMMIT is the explicit CREATE TABLE statement. A
COMMIT is also issued after DROP TABLE statements, even though DROP
TABLE is a DDL statement.
The only DML statement that is issued after the COMMIT is the INSERT
INTO TABLE statement. If the property is set to Post DML, the COMMIT is
not issued after each individual INSERT INTO VALUES statement; instead,
it is issued after all the INSERT INTO VALUES statements are executed.
The Post DDL COMMIT only shows up if the Intermediate Table Type VLDB
property is set to Permanent tables or Temporary tables and the Table
Creation Type VLDB property is set to Explicit mode.
The Post DML COMMIT only shows up if the Intermediate Table Type VLDB
property is set to Permanent tables, Temporary tables, or Views.
Examples
No Commit
create table ZZTIS00H8L8MQ000 (
DEPARTMENT_NBR NUMBER(10, 0),
STORE_NBR NUMBER(10, 0)) tablespace users
Post DDL
create table ZZTIS00H8LCMQ000 tablespace users as
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
commit
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8M3MQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
This option can also be used along with the MultiSource Option feature,
which allows you to access multiple databases in one MicroStrategy
project. You can define your secondary database instances to disallow
CREATE and INSERT statements so that all information is only inserted
into the primary database instance. For information on the MultiSource
Option feature, see the Project Design Guide.
You can also use this option to avoid the creation of temporary tables on
databases for various performance or security purposes.
This option does not control the SQL that can be created and
executed against a database using Freeform SQL and Query
Builder reports.
The Drop Temp Table Method property specifies whether the intermediate
tables, permanent tables, temporary tables, and views are to be dropped at
the end of report execution. Dropping the tables can lock catalog tables and
affect performance, so dropping the tables manually in a batch process when
the database is less active can result in a performance gain. The trade-off is
space on the database server. If tables are not dropped, the tables remain on
the database server using space until the database administrator drops them.
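As a sketch of the SQL each option issues at the end of report execution (the
intermediate table name is hypothetical):
/* Drop after final pass */
drop table ZZMD00
/* Truncate table then drop after final pass */
truncate table ZZMD00
drop table ZZMD00
/* Do nothing: the table remains until it is dropped manually */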
Intermediate Table Type
The Intermediate Table Type property determines the type of intermediate
table that the SQL Engine generates. Some reports require functionality,
such as partitioning, outer joins, and analytical functions, that the report
cannot execute using derived tables, common table expressions, or views. If this is
the case, the Fallback Table Type VLDB property (described above) is used to
execute the report. The common table expression is only supported by UDB
DB2 and Oracle 9i. The temporary table syntax is specific to each platform.
This property can have a major impact on the performance of the report.
Permanent tables are usually less optimal. Derived tables and common table
expressions usually perform well, but they do not work in all cases and for all
databases. True temporary tables also usually perform well, but not all
databases support them. The default setting is permanent tables, because it
works for all databases in all situations. However, based on your database
type, this setting is automatically changed to what is generally the most
optimal option for that platform, although other options could prove to be
more optimal on a report-by-report basis.
To help support the use of common table expressions and derived tables, you
can also use the Maximum SQL Passes Before FallBack and Maximum
Tables in FROM Clause Before FallBack VLDB properties. These properties
(described in Maximum SQL Passes Before FallBack, page 884 and
Maximum Tables in FROM Clause Before FallBack, page 885) allow you to
define when a report is too complex to use common table expressions or
derived tables, in which case the fallback table type is used instead.
Examples
Permanent Table
create table ZZIS03CT00 (
DEPARTMENT_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0))
Derived Table
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HSTORE_DEPARTMENT a11
join (select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
) pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HLOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HLOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
Temporary Table
declare global temporary table session.ZZIS03CU00(
DEPARTMENT_NBR DECIMAL(10, 0),
STORE_NBR DECIMAL(10, 0)) on commit preserve rows not
logged
Views
create view ZZIS03CV00 (DEPARTMENT_NBR, STORE_NBR) as
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
The Maximum SQL Passes Before FallBack VLDB property allows you to
define reports to use common table expressions or derived tables while also
using temporary or permanent tables for complex reports.
Using common table expressions or derived tables can often provide good
performance for reports. However, some production environments have
shown better performance when using temporary tables for reports that
require multi-pass SQL.
To support the use of the best table type for each type of report, you can use
the Maximum SQL Passes Before FallBack VLDB property to define how
many passes are allowed for a report that uses intermediate tables. If a report
uses more passes than are defined in this VLDB property, the table type
defined in the Fallback Table Type VLDB property (see Fallback Table Type,
page 878) is used rather than the table type defined in the Intermediate
Table Type VLDB property (see Intermediate Table Type, page 879).
For example, you define the Intermediate Table Type VLDB property to use
derived tables for the entire database instance. This default is then used for
all reports within that database instance. You also define the Fallback Table
Type VLDB property to use temporary tables as the fallback table type. For
your production environment, you define the Maximum SQL Passes Before
FallBack VLDB property to use the fallback table type for all reports that use
more than five passes.
A report is executed. The report requires six passes of SQL to return the
required report results. Usually this type of report would use derived tables,
as defined by the Intermediate Table Type VLDB property. However, since it
uses more passes than the limit defined in the Maximum SQL Passes Before
FallBack VLDB property, it must use the fallback table type. Since the
Fallback Table Type VLDB property is defined as temporary tables, the
report uses temporary tables to perform the multi-pass SQL and return the
report results.
Using common table expressions or derived tables can often provide good
performance for reports. However, some production environments have
shown better performance when using temporary tables for reports that
require joining a large number of database tables.
To support the use of the best table type for each type of report, you can use
the Maximum Tables in FROM Clause Before FallBack VLDB property to
define how many tables are allowed in a
From clause for a report that uses intermediate tables. If a report uses more
tables in a From clause than are defined in this VLDB property, the table type
defined in the Fallback Table Type VLDB property is used rather than the
table type defined in the Intermediate Table Type VLDB property (see
Intermediate Table Type, page 879).
For example, you define the Intermediate Table Type VLDB property to use
derived tables for the entire database instance. This default is then used for
all reports within that database instance. You also define the Fallback Table
Type VLDB property to use temporary tables as the fallback table type. For
your production environment, you define the Maximum Tables in FROM
Clause Before FallBack VLDB property to use the fallback table type for all
reports that use more than seven tables in a From clause.
A report is executed. The report requires a SQL statement that includes nine
tables in the From clause. Usually this type of report would use derived
tables, as defined by the Intermediate Table Type VLDB property. However,
since it uses more tables in the From clause than the limit defined in the
Maximum Tables in FROM Clause Before FallBack VLDB property, it must
use the fallback table type. Since the Fallback Table Type VLDB property is
defined as temporary tables, the report uses temporary tables to perform the
SQL statement and return the report results.
For a description of this VLDB property, see Character Column Option and
National Character Column Option, page 868.
The Table Creation Type property tells the SQL Engine whether to create
tables implicitly or explicitly. Some databases do not support implicit
creation, so this is a database-specific setting.
Examples
Explicit table
create table TEMP1 (
STORE_NBR INTEGER,
TOT_SLS DOUBLE,
PROMO_SLS DOUBLE)
Implicit table
create table TEMP1 as
select a21.STORE_NBR STORE_NBR,
(sum(a21.REG_SLS_DLR) + sum(a21.PML_SLS_DLR)) TOT_SLS,
sum(a21.PML_SLS_DLR) PROMO_SLS
from STORE_DIVISION a21
where a21.STORE_NBR = 1
group by a21.STORE_NBR
These properties can be used to customize the CREATE TABLE SQL syntax
for any platform. All of these properties are reflected in the SQL statement
only if the Intermediate Table Type VLDB property is set to Permanent
Table. Customizing a CREATE TABLE statement is only possible for a
permanent table. For all other valid Intermediate Table Type VLDB settings,
the SQL does not reflect the values set for these properties. The location of
each property in the CREATE TABLE statement is given below.
create /* Table Qualifier */ table /*Table
Descriptor*//* Table Prefix */ZZTIS003RB6MD000 /*Table
Option*/ (
STORE_NBR NUMBER,
CLEARANCESAL DOUBLE)
/* Table Space */
/* Create PostString */
For platforms like Teradata and DB2 UDB 6.x and 7.x versions, the Primary
Index or the Partition Key SQL syntax is placed between the Table Space and
Create Post String VLDB properties.
You can determine the default options for each VLDB property for a database
by performing the steps below. This provides an accurate list of default VLDB
properties for your third-party data source for the version of MicroStrategy
that you are using.
Prerequisites
• You have a user account with administrative privileges.
3 From the File menu, point to New, and select Database Instance. The
Database Instances Editor opens.
4 In the Database instance name field, type a descriptive name for the
database instance.
6 Click OK to exit the Database Instances Editor and save the database
instance.
7 Right-click the new database instance that you created and select VLDB
Properties. The VLDB Properties Editor opens.
8 From the Tools menu, ensure that Show Advanced Settings is selected.
9 From the Tools menu, select Create VLDB Settings Report. The VLDB
Settings Report dialog box opens.
A VLDB settings report can be created to display current VLDB
settings for database instances, attributes, metrics, and other
objects in your project. For information on creating a VLDB
settings report for other purposes, see Creating a VLDB settings
report, page 656.
10 Select the Show descriptions of setting values check box. This displays
the descriptive information of each default VLDB property setting in the
VLDB settings report.
11 The VLDB settings report now displays all the default settings for the data
source. You can copy the content in the report using the Ctrl+C keys on
your keyboard, then paste the information into a text editor or word
processing program (such as Microsoft Word) using the Ctrl+V keys.
12 Once you are finished reviewing and copying the VLDB settings report,
click the close button to close the VLDB Settings Report dialog box.
13 From the File menu, select Close to close the VLDB Properties Editor.
14 You can then either delete the database instance that you created earlier,
or modify it to connect to your data source.
B
PERMISSIONS AND PRIVILEGES
Introduction
When you edit an object’s access control list using the object’s Properties
dialog box, you can assign a predefined grouping of permissions, or you can
create a custom grouping. For a more detailed discussion of permissions, see
Controlling access to objects: Permissions, page 53. The table below lists the
predefined groupings and the specific permissions each one grants.
Grouping: View. Grants permission to access the object for viewing only.
Permissions granted: Browse, Read, Use, Execute.
Grouping: Full Control. Grants all permissions for the object. Permissions
granted: all are granted.
Grouping: Denied All. Explicitly denies all permissions for the object; none
of the permissions are assigned. Note: This grouping overrides any
permissions the user may inherit from any other sources. Permissions
granted: none; all are denied.
Grouping: Default. Neither grants nor denies permissions; all permissions
are assigned to "Default." Permissions granted: none.
Grouping: Custom. Allows the user or group to create a custom combination
of permissions. Permissions granted: custom choice.
Permission: Read. View the object's definition in the appropriate editor, and
view the object's access control list.
Permission: Use. Use the object when creating or modifying other objects.
For example, the Use permission on a metric allows a user to create a report
containing that metric. For more information, see Permissions and
report/document execution, page 58.
As with other objects in the system, you can create an ACL for a server object
that determines what system administration permissions are assigned to
which users. These permissions are different from the ones for other objects
(see table below) and determine what capabilities a user has for a specific
server. For example, you can configure a user to act as an administrator on
one server, but as an ordinary user on another. To do this, you must modify
the ACL for each server definition object by right-clicking the
Administration icon, selecting Properties, and then selecting the Security
tab.
The table below lists the groupings available for server objects, the
permissions each one grants, and the tasks each allows you to perform on the
server.
Grouping: Default. Permissions granted: all permissions that are assigned to
"Default". Allows you to perform any task on that server.
Grouping: Custom... Permissions granted: custom choice. Allows you to
perform the tasks your custom selections allow.
Privilege availability
A privilege with the note “Server level only” can only be granted at the project
source level. It cannot be granted for a specific project.
Certain privileges are marked with asterisks in the tables below, for the
following reasons:
• Privileges marked with * are included only if you have OLAP Services
installed as part of Intelligence Server.
• Privileges marked with ** are included only if you have Report Services
installed.
• Privileges marked with *** are included only if you have Distribution
Services installed.
These privileges are unavailable (greyed out) in the User Editor if you do not
have a license for the appropriate MicroStrategy product. To determine your
license information, use License Manager to check whether OLAP Services,
Report Services, or Distribution Services are available.
License Manager counts any user who has any of these privileges, but none
of the Web Analyst or Web Professional privileges, as a Web Reporter user.
All MicroStrategy Web users that are licensed for MicroStrategy Report
Services may view and interact with a document in Flash Mode. Certain
interactions in Flash Mode have additional licensing requirements.
Web change user preferences: change some characteristics of page
appearance and report results
Web change view mode: toggle between grid, graph, and grid & graph, hide
or show predefined totals, and reset reports
Web drill and link: use Links to access related data not shown in the original
report results
Web object search: search for reports, documents, folders, filters, or
templates
Web re-execute report against warehouse: re-execute a report, hitting the
warehouse rather than the server cache. If Intelligence Server caching is
turned off and this privilege is not granted, the re-execute button is removed.
Web sort: sort report data by clicking on sort icons in column headings
Web subscribe to History List: subscribe to periodic execution of reports
and view their results via the History List. Note: A user with this privilege is
also considered to have the Schedule Request privilege in Common
Privileges.
Web switch page-by elements: switch page-by elements for objects in the
Page axis
Web use locked headers: use the Lock Grid Headers feature
Web view History List: view the History List; access, sort, or remove reports
and documents from the History List; and refresh or clear the History List
* Web add/remove units to/from grid in document in View mode: add units
to or remove units from an existing grid report in a Report Services
document when in View mode
* Web create derived metrics: create new calculations based on other
metrics already on a base report
* Web use Report Objects window: use the Report Objects panel
* Web use View filter editor: add or modify the View filter for a report
Web add to History List: add reports or documents to the History List
(requires the Web simultaneous execution privilege)
Web advanced drilling: access advanced drill mode through the More
Options link on the report results page
Web choose attribute form display: use the Attribute Forms dialog, see
attribute forms in the Report Objects list, see the Attribute Forms context
menu options, and pivot attribute forms
Web create new report: access the Create Report folder and design reports,
and run new reports from the folder where he or she has saved the report
definition
Web edit notes: add and edit notes that have been added to a report or
document
Web pivot report: move rows and columns up or down and left or right,
pivot from rows to columns and vice versa, and move metrics, attributes,
custom groups, and consolidations to the Page axis
Web report details: access report and document information by clicking the
Report details link on the report, History List, or Wait page
Web report SQL: view the SQL code for the report
Web save to Shared Reports: save reports and documents to the Shared
Reports folder
Web simple graph formatting: control all of the functionality found on the
first tab of the graph formatting editor as well as the graph formatting
toolbar
Web use basic Threshold editor: use the Basic Threshold Editor
These privileges are granted in addition to those in the Web Reporter and
Web Analyst groups. License Manager counts any user who has any of these
privileges as a Web Professional user.
* Web define Intelligent Cube report: create a report that uses an Intelligent
Cube as a data source
* Web save derived elements: save stand-alone derived elements, separate
from the report
** Web document design: create a document page, access design mode for
documents, and perform WYSIWYG editing of documents in view mode.
Note: This privilege is required to define conditional formatting.
** Web manage document datasets: add datasets to and remove datasets
from a Report Services document. Note: The user must have the Web
document design privilege as well.
Web define advanced report options: set the available and default Run and
Export modes for a report
Web define MDX cube report: define a new report that accesses an MDX
cube, and see the MDX Cube option in the Create Report dialog box
Web format grid and graph: change the formats of grid and graph reports.
In a grid report, the Format grid field is displayed; in a graph report, the
Graph type and Graph sub-type fields are displayed.
Web modify the list of report objects (use Object Browser - all objects): use
the Object Browser when viewing a report in view or design mode. This
determines whether the user is a report designer or a report creator. A
report designer is a user who can build new reports based on any object in
the project. A report creator can work only within the parameters of a
pre-designed report that has been set up by a report designer. For more
information on this, see the Advanced Reporting Guide.
Web set column widths: modify the column widths and row height for a grid
report
Web subscribe others: view available addresses for all users, and add other
MicroStrategy users to a report or document subscription
Web use design mode: modify the report using design mode
Web use filter editor: add or modify the report filter for a report
Web use prompt editor: use the prompt editor, and create or modify prompts
Common privileges
The predefined MicroStrategy Web Reporter and Desktop Analyst user
groups are assigned the common privileges by default.
* Drill within Intelligent Cube: drill within an Intelligent Cube, so no SQL is
executed. A user who has this privilege and executes a drill that can be
resolved through OLAP Services does not generate and execute SQL against
the warehouse.
View notes: view notes that have been added to a report or document
Use Office: make requests and execute reports through the MicroStrategy
Office interface
** Mobile View Document: view Report Services documents in the
MicroStrategy Mobile client, and create Mobile document subscriptions in
MicroStrategy Web
Create and edit transmitters and devices (server level only): use the
Transmitter Editor and Device Editor
Use link to History List in email: receive an email subscription with a link to
a History List, and use the Data And Link To History List and Link To
History List options when creating an email subscription
Use send now and send a preview now: preview and send subscriptions
immediately
Execute multiple source report: execute a report that uses a DBInstance
other than the project's primary DBInstance
Import table from multiple sources: import a table from a DBInstance other
than the project's primary DBInstance
* Use Report Objects window: view and use the Report Objects window
Drill and link: use Links to access related data not shown in the original
report results
Re-execute report against warehouse: re-execute a report, hitting the
warehouse rather than the server cache. If Intelligence Server caching is
turned off and this privilege is not granted, the re-execute button is removed.
Send to e-mail: use the Send to e-mail option in the Report editor
Use Data Explorer: use the Data Explorer in the Object Browser
Use Report Data Options: use the Report Data Options feature
Use Report Editor: access the "New" option in the Report Editor, and create
new reports. Note: If a user has this privilege but not the Use design view
privilege (in Desktop Designer privileges), she can still create new reports
from templates, but the "Blank report" option is not available.
Use Search Editor: use the search feature on all editors and Desktop
* Define Intelligent Cube report: create a report that uses an Intelligent
Cube as a data source
* Save derived elements: save stand-alone derived elements, separate from
the report
*** Use bulk export editor: use the Bulk Export Editor to define a bulk
export report
Define Freeform SQL report: define a new report using Freeform SQL, and
see the Freeform SQL icon in the Create Report dialog box
Define MDX cube report: define a new report that accesses an MDX cube
Define Query Builder report: define a new Query Builder report that
accesses an external data source, and see the Query Builder icon in the
Create Report dialog box
Modify the list of report objects (use Object Browser): add objects to a
report that are not currently displayed in the Report Objects window. This
determines whether the user is a report designer or a report creator. A
report designer is a user who can build new reports based on any object in
the project. A report creator can work only within the parameters of a
pre-designed report that has been set up by a report designer. This privilege
is required to edit the report filter and the report limit. For more
information on these features, see the Advanced Reporting Guide.
Use Find and Replace dialog: use the Find and Replace dialog box
Use Formatting Editor: use the formatting editor for consolidations, custom
groups, and reports
Use Metric Editor: use the Metric Editor. Among other tasks, this privilege
allows the user to import DMX (Data Mining Services) predictive metrics.
Use project documentation: use the project documentation feature to print
object definitions
Architect privileges
These privileges correspond to the functionality available to users of the
Architect product, that is, project designers.
Bypass schema objects security access checks: modify schema objects
without having the necessary permissions for each object. For example,
users with this privilege can update the schema or use the Warehouse
Catalog Browser without having administrator privileges.
Use Architect editors: use the editors within Architect (for example, the
Attribute, Fact, Hierarchy, and Table editors). Note: This privilege is
required to work with logical views.
Use Object Manager: use Object Manager to migrate objects between
projects. Note: A user cannot log into a project source in Object Manager
unless she has this privilege on at least one project within that project
source. Additionally, a user cannot open a project in Object Manager unless
she has this privilege on that project.
Use Command Manager (server level only): use Command Manager to run
and manage scripts
Administration privileges
These privileges control access to the Administration features listed just
below the project source’s name. They also control access to options in the
Administration menu.
Administer caches: have full control over report, document, element, and
object caches in a project
Administer cluster (server level only): add or remove nodes in a cluster, and
set the default cluster membership
Administer subscriptions: create, edit, and delete subscriptions, and
schedule administrative tasks
Assign security filters: grant or revoke a security filter to another user for a
project
Assign security roles: grant or revoke a security role to another user for a
project
Audit change journal: use the Change Journal Monitor; view the change
history for all objects the user has Browse access for
Bypass all object security access checks: have full control over all objects
regardless of the object access permissions granted to the user, and grant or
revoke the Bypass All Object Security Access Checks privilege to other users.
Note: This privilege is inherently granted for use of Object Manager, Project
Duplicator, and Project Mover, if you have the appropriate privileges to use
those tools.
Configure caches: view and set the report, document, element, and object
caching properties
Configure change journaling:
• project level: enable or disable project level auditing, and purge project
audit entries
• server level: enable or disable auditing for all projects and configuration
objects, and purge audit entries for all projects and configuration objects
Configure connection map: view, set, and refresh the connection map for a
project
Configure governing:
• project level: view and set project governing settings
• server level: view and set governing settings for all projects and for
Intelligence Server
Configure history lists (server level only): view and set the History List
properties
Configure project basic: view and change project settings, and use the
MDUpdate utility
Configure project data source: view and change the project's primary data
source, and add and remove database instances for use in data marts,
Freeform SQL, Query Builder, and MDX
Configure server basic (server level only): create, view, and change the
server definition
Create and edit contacts and addresses (server level only): create and
modify contacts and addresses
Create and edit database instances and connections (server level only):
create and modify database instances and database connections, and set the
number of database threads for each database instance and the
prioritization map of each database instance
Create and edit schedules and events (server level only): create and modify
schedules and events
Create and edit users and groups (server level only): create and modify
users and user groups. Note: To enable or disable users you must have the
Enable User privilege. To grant or revoke privileges you must have the
Grant/Revoke Privilege privilege.
Enable Intelligence Server administration from Web: in Web, access the
Intelligence Server Administration page
Link users and groups to external accounts (server level only): link
MicroStrategy users and groups to users and groups from sources such as
Windows NT, LDAP, or a database
Load and unload project: load and unload projects, and configure which
projects are loaded on which cluster nodes at startup
Monitor caches: use the Cache Monitor; view information for all caches in a
project
Monitor cluster (server level only): use the Cluster view of the System
Maintenance Monitor
Monitor cubes: use the Intelligent Cube Monitor; view information for all
Intelligent Cubes in a project
Monitor database connections (server level only): use the Database
Connection Monitor; view information for all database connections in a
project
Monitor History Lists: use the History List Monitor; view information for all
History List messages in a project
Monitor jobs: use the Job Monitor; view information for all jobs in a project
Monitor project: use the Project view of the System Maintenance Monitor
Monitor subscriptions: use the Subscription Manager; view information for
all subscriptions in a project. Note: Scheduled administrative tasks are only
visible if the user has the privilege corresponding to the administrative task.
Monitor user connections: use the User Connection Monitor; view
information for all user connections in a project
Server performance counter monitoring: this privilege is deprecated; server
performance can be monitored through Enterprise Manager
Web administration: access the MicroStrategy Web Administrator page and
assign Project defaults
The default privileges that are automatically granted for these out-of-the-box
security roles are listed below. For information about security roles, see
Defining sets of privileges: Security roles, page 65.
• LDAP Public/Guest
• LDAP Users
• Public/Guest
• Warehouse Users
This ensures that all users who were able to access Desktop in previous
versions can continue to do so.
All privileges in the Web Reporter privilege group (see Web Reporter
privileges, page 896).
All privileges in the Web Analyst privilege group (see Web Analyst
privileges, page 898).
• Web Drill And Link (in Web Reporter)
• Web Simultaneous Execution (in Web Reporter)
• Web View History List (in Web Reporter)
• Create Application Objects (in Common Privileges)
• Schedule Request (in Common Privileges)
• Use Distribution Services (in Distribution Services)
Some of these privileges are also inherited from the groups that
the Web Analyst group is a member of.
• The MicroStrategy Web Professional group grants the following
privileges:
• Web Drill And Link (in Web Reporter)
• Web Simultaneous Execution (in Web Reporter)
• Web View History List (in Web Reporter)
• Create Application Objects (in Common Privileges)
• Schedule Request (in Common Privileges)
• Use Distribution Services (in Distribution Services)
Some of these privileges are also inherited from the groups that
the Web Professional group is a member of.
The default privileges that are automatically granted for the groups that are
members of the System Monitors group are listed below. Unless otherwise
specified, all privileges are from the Administration privilege group (see
Administration privileges, page 908).
User Administrators
C
ADMINISTERING MICROSTRATEGY WEB
Introduction
The MicroStrategy security model enables you to set up user groups that can
have subgroups within them, thus creating a hierarchy.
See your license agreement as you determine how each user is assigned to a
given privilege set. MicroStrategy Web products provide three Web editions
(Professional, Analyst, Reporter), defined by the privilege set assigned to
each.
Assigning privileges outside those designated for each edition changes the
user’s edition. For example, if you assign to a user in a Web Reporter group a
privilege available only to a Web Analyst, MicroStrategy considers the user to
be a Web Analyst user.
Within any edition, privileges can be removed for specific users or user
groups. For more information about security and privileges, see Chapter 2,
Setting Up User Security.
License Manager enables you to perform a self-audit of your user base and,
therefore, helps you understand how your licenses are being used. For more
information, see Auditing and updating licenses, page 189.
If you have the appropriate privileges, you can find the link to the
Administrator page on the MicroStrategy Web or Web Universal home page.
• Use Microsoft IIS and Windows security to limit access to the page file
The link to the Administrator page appears only if one of the following
criteria is true:
• You are logged in to a project and have the Web administration privilege.
For steps on how to assign this privilege to a user, see the Desktop online
help.
In the J2EE version, the Administrator page is a servlet and access to the
servlet is controlled using the Web and application servers. The default
location of the Administrator servlet varies depending on the platform you
are using. For details, see the MicroStrategy Installation and Configuration
Guide.
In MicroStrategy Web Universal, when using the J2EE version, users must
have the proper user ID and password to access the Administrator servlet
(mstrWebAdmin). Consult the documentation for your particular Web and
application servers for information about file-level security requirements
and security roles.
Any changes you make to the project defaults become the default settings for
the current project or for all Web projects if you select the Apply to all
projects on the current Intelligence Server (server name) option from
the drop-down list.
The project defaults include user preference options, which each user can
override, and other project default settings accessible only to the
administrator.
For information on the History List settings, see Saving report results:
History List, page 233.
• When users who are setting User preferences click Load Default Values,
the project default values that the administrator set on the Project
defaults pages are loaded.
The settings are not saved until you click Apply. If you select the Apply to all
projects on the current Intelligence Server (server name) from the
drop-down menu, the settings are applied to all projects, not just the one you
are currently configuring.
Users can change the individual settings for their user preference options by
accessing them via the Preferences link at the top of the Web page. However,
you can set what default values the users see for these options. To do this,
click the Preferences link, then click the Project defaults link on the
left-hand side of the page (under the “Preferences Level” heading).
You can then set the defaults for several categories, including the following:
• General
• Folder Browsing
• Grid display
• Graph display
• History List
• Export
• Print (PDF)
• Drill mode
• Prompts
• Report Services
Each category comprises its own page and includes related settings that are
accessible only to users with the Web administration privilege. For details on
each setting, see the MicroStrategy Web online help for the Web
Administrator.
For more detailed information about this, see the MicroStrategy Installation
and Configuration Guide.
Modify the file contents so the corresponding two lines are as follows:
TransactionEngineLocation=machine_name:\\Subscription Engine\\build\\server
TransactionEngineLocation=MACHINE_NAME:/Subscription Engine/build/server
2 Share the folder where the Subscription Engine is installed for either the
local Administrators group or for the account under which the
Subscription Administrator service runs. This folder must be
shared as Subscription Engine.
You should ensure that the password for this account does not expire.
If the Subscription Engine machine’s drive is shared and unshared
multiple times, the following Windows message displays: “System
Error: The network name was deleted.”
Using firewalls
A firewall enforces an access control policy between two systems: it blocks
certain network traffic while permitting other traffic. The means by which
this is accomplished vary widely; firewalls can be implemented in hardware,
in software, or in a combination of the two.
In many environments, and for a variety of reasons, you may wish to put a
firewall between your Web servers and the Intelligence Server or cluster.
This does not pose any problems for the MicroStrategy system, but there are
some things you need to know to ensure that the system functions as
expected.
Another common place for a firewall is between the Web clients and the Web
servers. The following diagram shows how a MicroStrategy system might
look with firewalls in both of these locations:
[Diagram: a Web client connects through an external firewall to the
MicroStrategy Web server; the Web server connects through an internal
firewall to Intelligence Server, which connects to the metadata database and
the MicroStrategy data warehouse]
Regardless of how you choose to implement your firewall(s), you must make
sure that the client Web browsers can communicate with MicroStrategy Web
products, that MicroStrategy Web products can communicate with
Intelligence Server, and vice versa. To do this, certain communication ports
must be open on the server machines and the firewalls must allow Web
server and Intelligence Server communications to go through on those ports.
Most firewalls have some way to specify this. Consult the documentation that
came with your firewall solution for details.
You can change this port number if you wish. See the procedure To change
the port through which MicroStrategy Web and Intelligence Server
communicate, page 929, to learn how.
change it for both the Web servers and the Intelligence Servers. The port
numbers on both sides must match.
If you are using clusters, you must make sure that all machines in
the Web server cluster can communicate with all machines in the
Intelligence Server cluster.
2 In Desktop, log in to the project source that connects to the server whose
port you want to change.
4 On the Intelligence Server Options tab, type the port number you wish to
use in the Port Number box. Save your changes.
8 On the Connection tab, enter the new port number and click OK to save
your changes.
You must update this port number for all project sources in your
system that connect to this Intelligence Server.
It probably is not connected because the MicroStrategy Web
product does not yet know the new port number you assigned to
Intelligence Server.
12 In the Port box, type the port number you wish to use. This port number
must match the port number you set for Intelligence Server. An entry of 0
means use port 34952 (the default).
13 Click Save to save your changes. You can now connect to Intelligence
Server again.
If the port numbers for your MicroStrategy Web product and
Intelligence Server do not match, you get an error when the
MicroStrategy Web product tries to connect to Intelligence Server.
For detailed steps for any of the high-level steps listed above, see the Desktop
online help.
Setting up and using SSL between Web browsers and MicroStrategy Web
products is a process that is totally external to MicroStrategy products. You
do not need to do anything special for MicroStrategy Web products to use
SSL. The following steps describe the high-level process for enabling SSL on
your Web server.
To use SSL
1 Obtain a server certificate for your Web server from a certificate
authority.
2 Once you have the certificate, install it on your Web server and enable
SSL. Refer to your Web server documentation for information on
installing the certificate and configuring the Web server to use SSL.
Once you have a secure Web server, all you need to do is use https:// to
ensure secure communication between your Web clients and the Web server.
For example, the URL to access MicroStrategy Web in your environment is
http://machine_name/microstrategy/asp, where machine_name is
the name of the Web server. If you enable SSL, you would use the following
URL instead: https://machine_name/microstrategy/asp
Using cookies
A cookie is a piece of information that is sent to your Web browser—along
with an HTML page—when you access a Web site or page. When a cookie
arrives, your browser saves this information to a file on your hard drive.
When you return to the site or page, some of the stored information is sent
back to the Web server, along with your new request. This information is
usually used to remember details about what a user did on a particular site or
page for the purpose of providing a more personal experience for the user.
For example, you have probably visited a site such as Amazon.com and found
that the site recognizes you. It may know that you have been there before,
when you last visited, and maybe even what you were looking at the last time
you visited.
MicroStrategy Web products use cookies for a wide variety of purposes. In
fact, they use them for so many things that the applications cannot work
without them. Cookies are used to hold information about user sessions,
preferences,
available projects, language settings, window sizes, and so on. For a complete
and detailed reference of all cookies used in MicroStrategy Web and
MicroStrategy Web Universal, see Appendix D, MicroStrategy Web Cookies.
The sections below describe the cookie related settings available in each
product.
Disable cookies: The application does not store any cookies. This
means that no settings are stored in cookies; instead, the application
stores persistable settings (for example, the open and close state of a
view filter) in the metadata. To make your application highly secure,
you can select this option.
If you enable cookies, you also have the option to enable or disable
individual cookie-related settings.
Using encryption
Encryption is the translation of data into a coded form for security purposes.
The most common use of encryption is to protect information sent across a
network, so that a malicious user cannot gain anything by intercepting the
communication. Information stored in or written to a file is also sometimes
encrypted. The SSL technology described earlier is one example of an
encryption technology.
2 At the top of the page or in the column on the left, click Security to see
the security settings.
4 Click Save to save your changes. Now all communication between the
Web server and Intelligence Server is encrypted.
Another potential security risk is a malicious user gaining access to the physical machine that hosts the Web application. For this reason, you should make sure that the machine is in a secure location and
that you restrict access to the files stored on it using the standard file-level
security offered by the operating system.
For example, with Microsoft IIS, by default only the “Internet guest user”
needs access to the virtual directory. This is the account under which all file
access occurs for Web applications. In this case, the Internet guest user
needs the following privileges to the virtual directory: read, write, read and
execute, list folder contents, and modify.
However, only the administrator of the Web server should have these
privileges to the Admin folder in which the Web Administrator pages are
located. When secured in this way, if users attempt to access the
Administrator page, the application prompts them for the machine’s
administrator login ID and password.
In addition to the file-level security for the virtual directory and its contents,
the Internet guest user also needs full control privileges to the Log folder in
the MicroStrategy Common Files, located by default in C:\Program
Files\Common Files\MicroStrategy. This ensures that any
application errors that occur while a user is logged in can be written to the
log files.
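To confirm that the account running the Web application can actually write to the Log folder, a simple write test can help. The following is a minimal sketch (not part of the product), assuming the Log folder sits under the default Common Files path above; run it under the same account as the Web application (for example, the Internet guest user):

import os
import tempfile

# Assumed default Log folder location, per the paragraph above
log_folder = r"C:\Program Files\Common Files\MicroStrategy\Log"

try:
    # Try to create and then delete a scratch file in the Log folder
    fd, path = tempfile.mkstemp(dir=log_folder)
    os.close(fd)
    os.remove(path)
    print("Write access to", log_folder, "confirmed")
except OSError as err:
    print("Write access missing:", err)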
The file-level security described above is all taken care of for you when
you install the ASP.NET version of MicroStrategy Web using
Microsoft IIS. These details are just provided for your information.
If you are using the J2EE version of MicroStrategy Web Universal, you may be using a different Web server, but most Web servers have similar security
requirements. Consult the documentation for your particular Web server for
information about file-level security requirements.
[Architecture diagram: the Web browser communicates with the Web server over HTTP or HTTPS; the Web server communicates with MicroStrategy Intelligence Server over TCP/IP; Intelligence Server communicates over ODBC with the data warehouse database and with the metadata database. The metadata password is encrypted in the NT registry.]
There are three settings related to session time-out that may affect
MicroStrategy Web users.
<sessionState
    mode="InProc"
    stateConnectionString="tcpip=127.0.0.1:42424"
    sqlConnectionString="data source=127.0.0.1;user id=sa;password="
    cookieless="false"
    timeout="20"
/>
The timeout attribute specifies the session time-out in minutes (20 by default). This setting does not affect MicroStrategy Web Universal, since Web Universal does not use the .NET architecture.
This setting does not automatically reconnect the .NET session object.
The following table demonstrates how these settings interact in various combinations.
[Table columns: Intelligence Server time-out | web.config time-out | Allow automatic login if session is lost | User idle time | Result]
• You can modify certain settings on the MicroStrategy Web server machine or in the application for best performance. Details for MicroStrategy Web and Web Universal follow:
• Increase the server machine's Java Virtual Machine heap size; a sample setting follows this list. For information on doing this, see MicroStrategy Tech Note TN5000-071-0001.
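For example, on Apache Tomcat the heap size is commonly raised through the JAVA_OPTS environment variable. This is only an illustration with assumed values; follow the Tech Note above for the procedure that applies to your application server:

# Assumed example for a UNIX/Linux Tomcat installation: 512 MB initial
# heap and a 1 GB maximum heap for the application server JVM
export JAVA_OPTS="-Xms512m -Xmx1024m"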
Also, see the documentation for your particular Web application server
for additional tuning information. In general, these are the things you can
do:
D
MICROSTRATEGY WEB COOKIES
Introduction
This appendix provides detailed information for all cookies used in MicroStrategy Web and MicroStrategy Web Universal.
Session information
MSTRSsn_<Server_name>_<Project_Name>_<Port_Number>
If, for any reason, this cookie is not present at request time, the application first redirects execution to the login page, signs the user back in, and then redirects the flow of the application to the page originally requested. This happens because the session ID is lost. When the flow is redirected to the login page, if the user logged in using standard authentication, the login page prompts the user for his or her password. If the user logged in using either Windows or Guest authentication, the application automatically logs the user back in; the user will not even notice it.
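This redirect-on-missing-cookie behavior can be pictured with a small sketch. The following is illustrative only (assumed names throughout, including a cookie name built from the pattern above); it is not MicroStrategy code:

# Minimal sketch: redirect to the login page when the session cookie is missing
from flask import Flask, redirect, request

app = Flask(__name__)
# Assumed cookie name following the MSTRSsn_<Server>_<Project>_<Port> pattern
SESSION_COOKIE = "MSTRSsn_myserver_MyProject_34952"

@app.route("/page/<name>")
def page(name):
    if SESSION_COOKIE not in request.cookies:
        # Session ID lost: send the user to the login page and remember
        # the originally requested page so the flow can return to it
        return redirect("/login?target=/page/" + name)
    return "rendered page: " + name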
Cookie subkeys
Subkey Purpose
PWD Stores the password used to log in. An Administrator Preference determines
whether the password is encrypted or not (not encrypted by default). The
password is saved here if the user has not chosen to save the password on the
Web browser.
Prv1 Stores the first set of User Privileges as a hexadecimal value (denoted by &H); a decoding sketch follows this table. Possible values are as follows:
PRIVILEGE_WEBADMINISTRATOR = &H1
PRIVILEGE_WEBUSER = &H2
PRIVILEGE_WEBVIEWHISTORYLIST = &H4
PRIVILEGE_WEBREPORTMANIPULATIONS = &H8
PRIVILEGE_WEBCREATENEWREPORT = &H10
PRIVILEGE_WEBOBJECTSEARCH = &H20
PRIVILEGE_WEBCHANGEUSEROPTIONS = &H40
PRIVILEGE_WEBSAVEREPORT = &H80
PRIVILEGE_WEBDRILLANYWHERE = &H100
PRIVILEGE_WEBEXPORT = &H200
PRIVILEGE_WEBPRINTMODE = &H400
PRIVILEGE_WEBDELETE = &H800
PRIVILEGE_WEBPUBLISH = &H1000
PRIVILEGE_WEBREPORTDETAILS = &H2000
PRIVILEGE_WEBREPORTSQL = &H4000
PRIVILEGE_WEBADDHISTORYLIST = &H00008000
PRIVILEGE_WEBCHANGEVIEWMODE = &H10000
PRIVILEGE_WEBDRILL = &H20000
PRIVILEGE_WEBDRILLONMETRICS = &H40000
PRIVILEGE_WEBCHANGESTYLE = &H80000
PRIVILEGE_WEBSCHEDULING = &H100000
PRIVILEGE_WEBSIMULTANEOUSEXECUTION = &H200000
PRIVILEGE_WEBSORT = &H400000
PRIVILEGE_WEBSWITCHPAGEBY = &H800000
PRIVILEGE_WEBSAVETEMPLATEFILTER = &H1000000
PRIVILEGE_WEBFILTERONSELECTION = &H2000000
PRIVILEGE_WEBUSEREPORTFILTEREDITOR = &H4000000
PRIVILEGE_WEBCREATEDERIVEDMETRICS = &H8000000
PRIVILEGE_WEBMODIFYSUBTOTALS = &H10000000
PRIVILEGE_WEBUSEREPORTOBJECTSWINDOW = &H20000000
Prv2 Stores the second set of User Privileges as a hexadecimal value (denoted by &H).
Possible values are as follows:
PRIVILEGE_WEBFORMATTINGEDITOR = &H40000001
PRIVILEGE_WEBSCHEDULEEMAIL = &H40000002
PRIVILEGE_WEBSENDNOW = &H40000004
PRIVILEGE_WEBMODIFYREPORTLIST = &H40000008
PRIVILEGE_WEBUSEDESIGNMODE = &H40000010
PRIVILEGE_WEBALIASOBJECTS = &H40000020
PRIVILEGE_WEBCONFIGURETOOLBARS = &H40000040
PRIVILEGE_WEBUSEQUERYFILTEREDITOR = &H40000080
PRIVILEGE_WEBREEXECUTEREPORTAGAINSTWH = &H40000100
PRIVILEGE_WEBSIMPLEGRAPHFORMATTING = &H40000200
PRIVILEGE_WEBUSELOCKEDHEADERS = &H40000400
PRIVILEGE_WEBSETCOLUMNWIDTHS = &H40000800
Lng Indicates the locale ID associated with the Intelligence Server session.
ReturnURL Stores the URL Query String of the last report, document, or folder visited or
executed.
ReturnName Stores the metadata object name of the last report, document, or folder visited or
executed.
StartPageURL Stores the URL Query String of the default start page for the Intelligence Server
project.
StartPageName Stores the Web Application Page or Feature name of the default start page for the
project. For example, Shared Reports, My Reports, and so on.
WrkSet Indicates if the user has privileges to use the Working Set.
Template Stores the DssObjectId of the last template chosen by the user (by clicking on it
while browsing folders inside the Web application).
Filter Stores the DssObjectId of the last filter chosen by the user (by clicking on it
while browsing folders inside the Web application).
ShowFilterTemplate Indicates whether to allow the user to execute filter and template combinations.
ShRptsID The ID of the folder used for the "Shared Reports" folder.
ShRprtsName The name of the folder used for the "Shared Reports" folder.
Stores the last user ID that successfully logged in to the application. This is a permanent cookie. It is only used at login time to preset the user ID on the login page. If the cookie is not present, the user ID text box on the login page could be empty.
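The Prv1 and Prv2 subkeys above are bit fields: each privilege contributes one bit to the stored hexadecimal value. The following is a minimal sketch of decoding such a value; only a few of the Prv1 constants from the table are repeated here, so extend the map as needed:

# Decode a Prv1-style privilege bitmask into privilege names
PRV1_FLAGS = {
    0x1:     "PRIVILEGE_WEBADMINISTRATOR",
    0x2:     "PRIVILEGE_WEBUSER",
    0x4:     "PRIVILEGE_WEBVIEWHISTORYLIST",
    0x200:   "PRIVILEGE_WEBEXPORT",
    0x20000: "PRIVILEGE_WEBDRILL",
}

def decode_privileges(prv1_hex):
    value = int(prv1_hex, 16)  # the subkey stores a hexadecimal value
    return [name for bit, name in PRV1_FLAGS.items() if value & bit]

# 0x20206 = 0x20000 + 0x200 + 0x4 + 0x2
print(decode_privileges("20206"))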
Project information
MSTRPrj_<Server_name>_<Project_Name>
Cookie subkeys
Subkey Purpose
PWD Stores the password used to log in if the user decided to save the password
in the Web browser. The value of this cookie is never encrypted.
Disp_Mode Indicates whether or not the user had the outline mode option in the report page selected the last time he or she used the application.
Current language
Lng
Indicates the locale ID used in the current or last visited project. This is a
temporary cookie. This cookie is created at login time. If the cookie is lost in the middle of a session, the application recovers this information from the session information cookie.
GUI settings
MstrWeb
This cookie stores information specific to the look and feel of the application.
This is a temporary cookie. If the cookie is not present at request time, the settings it holds revert to their default values.
Cookie subkeys
Subkey Purpose
HelpSection Indicates whether the help section on the left toolbar is visible or not:
• 0 means the help section is not shown to the user.
• 1 means the help section is shown to the user.
Stores the ID and name of autostyles that are stored in the 'My objects'
folder. This cookie is created when a user logs into the MicroStrategy Web
application.
Stores the ID and name of default autostyles that are created when you
upgrade a project to version 7.2.x and that are stored in the 'Autostyles'
folder. This cookie is created when a user logs into the MicroStrategy Web
application.
Connection information
ConnectionInfo
This cookie is used for caching the information about the Intelligence Servers
to which the Web Server can potentially connect and the projects each
Intelligence Server contains. This is a temporary cookie. The subkeys are a
set of four different cookies. If this cookie is not received at request time, it is
created from the Clusters collection in the XML API.
Cookie subkeys
Subkey Purpose
P<n>ALIAS Stores the project alias of the project. This alias is set via the Preferences
page.
This cookie stores the user Preferences common to all the projects and
Intelligence Servers. This is a permanent cookie. This cookie is read at login
time. If the cookie is not received, the application assumes global user
preferences have been saved when caching the preferences.
Cached preferences
CurrUsrOpt
When the user enters the application and logs in to a specific project, all the
applicable preferences, either from the Administrator Preferences, Project
Defaults, or the User Preferences are combined and cached in this cookie.
This is a temporary cookie. The application assumes all cookies are cached at login time. If the information contained in this cookie is lost, the application might behave unpredictably, depending on the preference being read: all check box preferences behave as FALSE or cleared, and numeric preferences might raise an error.
Preferences
For reference, this section lists the different preferences and their
corresponding codes.
• Temporary cookies are remembered only while the user keeps the same browser session open. The information saved in these cookies is used for all projects that a user might access in the same browser session. These cookies are deleted when the browser is closed.
• Project cookies are similar to temporary cookies, but they differ from project to project. These cookies are deleted when a user logs out of MicroStrategy Web (even though the browser may remain open).
Permanent cookies
Server locale
sLoc
This cookie holds the identifier of the locale that the application uses by default when rendering HTML content. It is used by the Admin servlet and the Session Manager to display the appropriate localized interface to the user. It should be a copy of the actual preference saved in the metadata and is overwritten if the value found in the metadata differs from the one stored in the cookie.
Remove jobs
rj
Stores whether or not the user wants to remove, at logout time, the jobs requested during a session. This is used each time the user logs in to a new project. It should be a copy of the actual preference saved in the metadata and is overwritten at each login if the value found in the metadata differs from the one stored in the cookie.
Cancel requests
cr
Stores whether or not the user wants to cancel, at logout time, the jobs requested during a session that are still pending execution. This is used each time the user logs in to a new project. It should be a copy of the actual preference saved in the metadata and is overwritten at each login if the value found in the metadata differs from the one stored in the cookie.
ft
The name of the cookie is fixed, regardless of how the toolbar beans are
named. There is only one cookie representing both the grid and graph
toolbars.
ob
lTbar
related
Help display
hlp
Determines if the Help section is to be shown. This is used by all pages that
show the Need Help link (most pages in the application). All pages that have
a collapsible help section have the same value to ensure consistency (if it is
hidden on one page, it is also hidden for the rest of the pages).
rt
pivB
Determines whether or not the pivot buttons are displayed on the grid. It is used
by ReportGridDisplayCell, which is called from
ReportTransformGrid. It is also mapped to a formal parameter in
pageConfig.xml.
sSrt
Determines whether or not the sort buttons are displayed on the grid. It is used by ReportGridDisplayCell, which is called from ReportTransformGrid. It is also mapped to a formal parameter in pageConfig.xml.
avf
Determines whether changes to the view filter are applied to the grid/graph
immediately. If it is ON, the update is applied automatically. If it is OFF, the
update is applied after the user clicks Apply. It is used by
Debug flags
dbf
This cookie is used to store the debug flags, which determine the debug
information to include when transforming a bean. The information is
rendered as a comment in the HTML and it usually includes the bean's XML,
formal parameter values, and request keys. This cookie is set by adding
debug=value in the URL.
Page-by display
pbs
This is the cookie for the browser setting that determines whether or not to
display the page-by section. The ReportFrameBean modifies this cookie
each time the user toggles the display.
urf
This is the cookie for the browser setting that determines whether or not to
display the report filter section. The ReportFrameBean modifies this
cookie each time the user toggles the display.
uvf
This is the cookie for the browser setting that determines whether or not to
display the view filter section. The ReportFrameBean modifies this cookie
each time the user toggles the display.
iFrame display
iFrameVisible
This cookie indicates whether or not the hidden IFrame used for report manipulations is visible to the user on the HTML pages.
Temporary cookies
Features state
mstrFeat
Used to save the state of the Features component on a cookie. This cookie is
only used when the Administrator has allowed users to save this type of
information in user cookies instead of session variables.
mstrSmgr
Project cookies
lpn
Saves the name of the page that was last accessed successfully. It is used by the Cancel feature so the application knows which page to return to if the user cancels an action.
lps
Saves the minimal state of the beans on the page that was last accessed successfully. It is used by the Cancel feature to know the state of the objects on the page to return to if the user cancels an action.
Return to page
rtn
Name of the page that is the target of a “Return To” link. This is used by the Return To feature so the application knows which page to return to when the user clicks this link.
Return to state
rts
Saves the minimal state of the beans on the page that is the target of a “Return To” link. It is used by the Return To feature to know the state of the objects on the page to return to when a user clicks this link.
Return to title
rtt
Saves the title of the page that is the target of a “Return To” link. It is used by
the Return To feature for the application to know the title of the page to be
displayed as the link label.
Last folder
lf
Name of the browser setting that represents the last folder visited.
lsf
Name of the browser setting that represents the last system folder visited.
lpnDM
Name of the browser setting that represents the last page name visited for
Design Mode.
lpsDM
Name of the browser setting that represents the state of the last page visited for Design Mode.
Filter ID
fid
Template ID
tid
E
DIAGNOSTICS AND PERFORMANCE LOGGING
Feature: Custom Group Editor
MicroStrategy Software Components:
• DSS CommonDialogsLib
• DSS CommonEditorControlsLib
• DSS Components
• DSS DateLib
• DSS EditorContainer
• DSS EditorManager
• DSS EditorSupportLib
• DSS ExpressionboxLib
• DSS FilterLib
• DSS FTRContainerLib
• DSS ObjectsSelectorLib
• DSS PromptEditorsLib
• DSS PromptsLib
Trace Level: All the components perform Function Level Tracing except for DSS Components, which can also perform Explorer and Component Tracing.

Feature: HTML Document Editor
MicroStrategy Software Components:
• DSS CommonDialogsLib
• DSS Components
• DSS DocumentEditor
• DSS EditorContainer
• DSS EditorManager
Trace Level: All the components perform Function Level Tracing except for DSS Components, which can also perform Explorer and Component Tracing.
GLOSSARY
access control list (ACL): A list of users and groups and the access permissions that each has for an object.

child dependency: Occurs when an object uses other objects in its definition.

concurrent users: Users who execute reports or otherwise use the system at the same time.
See also:
• login ID
• password

element cache: Most-recently used lookup table elements that are stored in memory on Intelligence Server or MicroStrategy Desktop machines so they can be retrieved more quickly.

encryption: The translation of data into a sort of secret code for security purposes.

history cache: Report results saved for future reference via the History List by a specific user.

History List: A folder where users put report results for future reference.

idle time: The time during which a user stops actively using a session, for example, not using the project and not creating or executing reports.

inbox synchronization: The process of synchronizing inboxes across all nodes in the cluster so that all the nodes contain the same History List messages.

Intelligent Cube: A data structure containing data from the data warehouse that is stored in memory. Executing a report against an Intelligent Cube is faster and causes less database load than executing the report against the data warehouse. Intelligent Cubes are part of the OLAP Services add-on for Intelligence Server.

Lightweight Directory Access Protocol (LDAP): An open standard Internet protocol running over TCP/IP and designed to maintain and work with large directory services. An LDAP directory can be used to centrally manage users in a MicroStrategy environment by implementing LDAP authentication.

matching cache: Report results retained for the purpose of being reused by the same report requests later on.

matching-history cache: A matching cache with at least one History List message referencing it.

memory request idle mode: The mode in which Intelligence Server denies requests for memory until its memory usage drops below the low watermark.

message lifetime: Determines how long (set in days) messages can exist in a user’s History List.

permissions: Define the degree of control users have over individual objects.

private bytes: The current number of bytes a process has allocated that cannot be shared with other processes.

report instance: A container for all objects and information needed and produced during report execution, including templates, filters, prompt answers, generated SQL, report results, and so on. It is the only object referenced when executing a report, being passed from one special server to another as execution progresses.

system prompt: A special type of prompt that does not require an answer from the user. A system prompt is answered automatically by the system. For example, the User Login system prompt is answered automatically with the login name of the user who runs the report. System prompts can be used in filters and metric expressions.

user profile: What the user is doing when he or she is logged in to the system.

virtual bytes: The limit associated with Intelligence Server’s virtual address space allocation; it is the committed address space (memory actually being used by a process) plus the reserved address space (memory reserved for potential use by a process).

virtual memory: The amount of physical memory (RAM) plus the disk page file (also called the swap file).

VLDB property: A group of settings used to control SQL syntax or behavior for different DBMS platforms. VLDB properties initialize the SQL generation standards for each DBMS platform and allow you to optimize SQL generation for your data warehouse configuration.

VLDB settings: Settings that affect the way MicroStrategy Intelligence Server interacts with the data warehouse to take advantage of the unique optimizations that different databases offer. Each VLDB property has two or more VLDB settings.

XML cache: A report cache in XML format that is created and available for use on the Web.
L

latency and project failover 507
LDAP
    anonymous/guest users 134
    authentication 106
    binding authentication 132
    clear text 111
    connection pooling 142
    Connectivity Wizard 113
    database passthrough 135
    defined on 106
    directory defined on 106
    group search filters 117
    host 111
    importing at login 121
    importing in batch 122
    importing users and groups 119
    IP address 111
    linking a Windows login 136
    linking users and groups 130
    list of groups search filter 124
    list of users search filter 123
    password comparison only 132
    port 111
    SDK 108
    SDK on UNIX/Linux 110
    search root 114
    server defined on 106
    SSL 111
    SSL certificates 112
    synchronization schedules 141
    troubleshooting 145
    user group 50
    user search filters 116
LDAP authentication
    FAQ 626
    troubleshooting 622
LDAP user group 50
    governing 526
license
    auditing 192
    check 187
    CPU 188
    managing 189
    out of compliance 188
    time to run license check 187
    updating 193
    verifying named user 186
license compliance, CPU affinity and 195
License Manager 190
Lightweight Directory Access Protocol. See LDAP.
Linux
    install LDAP SDK 110
load balancing defined on 483
loaded project mode 456
loading
    project 453, 459
    project defaults 924
    report cache from disk 467
    saved report cache 218
local cache file 487
locking a project 303
Log Destination Editor 604
log file
    creating 604
    diagnostics 605
    location 605
    reading 605
    viewing in the Monitor 607
logging configuration 598
login ID, using to restrict element caching 260
low watermark defined on 567
LWM. See low watermark.

X

XML
    cache 209
    converting to HTML for Web 41
    delta and reports 526
    drill path, governing 553
    drill path, personalized 61
    MCM and 562
    Project Merge Wizard 302
    results delivery and 550
    size 551
XML cache defined on 209
    Cache Monitor 466