#***********************************************************************
# Example INIT.ORA file for data-warehousing applications
#***********************************************************************
# This file is provided by Oracle Corporation to help you customize
# your RDBMS installation for your data warehousing or
# business intelligence application. The primary purpose of this file
# is to provide a starting point for parameter settings in a
# data warehouse. This file is designed for release 9.0.
#
# All of the parameters in this file are discussed in more detail
# in the documentation. This file should be used for guidance
# on the setting of parameters, not as a reference for parameters.
#
# 'Data warehousing' is an expansive term. In general, this parameter
# file assumes that a data warehouse is characterized by:
#   - end-users execute only queries (rather than updates)
#   - end-user queries often examine large amounts of data
#   - data-loading and updating is managed in separate operations;
#     often, data modifications occur during batch operations, although
#     some data warehouses receive new data throughout the day
#
# Some parameter settings are generic to any data-warehouse application.
# Other parameters depend upon the size of the data warehouse; different
# settings are provided for these parameters, for the following categories
# of data warehouses:
#   Category   Size of raw data   CPUs     Memory
#   DEMO       <1GB               1        128MB
#   SMALL      <100GB             4        ~1GB
#   MEDIUM     100-500GB          4-12     4-10GB
#   LARGE      >500GB             12-16+   >10GB
# 'Raw data' refers to the size of the actual data, and does not
# include index space or temp space.
#
# The uncommented parameters in this init.ora file are configured for
# a 'demo' system. These parameters are suitable for using the 'Sales
# History' schema (a sample data warehouse schema, which is included
# on the Oracle9i CD), which is used throughout Oracle's documentation
# and training related to data warehousing. Most customers will be
# able to install and run this schema on a single-CPU workstation.
#
# More detailed information on all parameters can be found in the
# documentation.
#
# This parameter file provides initial guidelines for the configuration
# parameters of a data warehouse. Using these guidelines, you should
# be able to achieve good performance for a wide variety of data
# warehouse applications. However, further tuning of these parameters
# for a specific application may yield improved performance.
#
# INSTRUCTIONS: Edit this file and the other INIT files at your site,
# either by using the values provided here or by providing your own.
# If you are using Oracle Real Application Clusters, place an IFILE=
# line into each instance-specific INIT file that points at this file.
#***********************************************************************
# Database parameters
#***********************************************************************

# Database blocks should be large in data warehouses. This improves
# performance for operations involving large amounts of data.
db_block_size = 8192

# For a large data warehouse, db_files should be set to a large value.
#db_files = 1000

#***********************************************************************
# Memory parameters
#***********************************************************************
# In a data warehouse, the majority of physical memory will be
# allocated for one of the following two purposes:
#   Runtime memory: used for sorting and hashing data during query
#     processing (governed by the parameter pga_aggregate_target)
#   Data caching: used to accelerate performance by avoiding disk
#     accesses (governed by the parameter db_cache_size)
# Additionally, a significant amount of memory may need to be
# allocated for:
#   Shared pool: used for storing shared memory constructs
#     (governed by the parameter shared_pool_size)
#   Large pool: used during parallel-execution processing
#     (governed by the parameter large_pool_size)
#
# Memory is managed globally. The DBA should first determine how much
# memory is available for Oracle to use. Then, the DBA should choose
# memory parameters so that
#   pga_aggregate_target + db_cache_size + shared_pool_size + large_pool_size
# is roughly equal to the amount of memory available for the Oracle
# database.
#
# For example, suppose that a DBA is managing a small data mart. The
# data mart server has 1GB of physical memory. The DBA has determined
# that 500M of memory will be used by the operating system and other
# applications, so that 500M is available for Oracle. The DBA may
# choose the following settings:
#   shared_pool_size     = 50M
#   pga_aggregate_target = 200M
#   db_cache_size        = 200M
#   large_pool_size      = <default>
# The total memory utilization is 450M plus a system-determined value
# for the large pool.
#
# The following sections discuss each of these memory-related
# parameters in more detail. These examples assume that the data
# warehouse server has 1GB, 8GB, and 16GB of memory respectively for
# the small, medium, and large configurations.
#
# Runtime memory (the memory used for sorting and hashing during
# query execution) is automatically and globally managed when the
# pga_aggregate_target parameter is set. For data warehouse workloads
# which involve sorts and joins of large volumes of data, the
# pga_aggregate_target should be set to a large value.
# pga_aggregate_target should, in general, be equal to 20-80% of the
# available memory, depending on the workload. The values below assume
# a mixed data-warehouse workload. This parameter (introduced in
# Oracle9i) replaces all of the

# following parameters: hash_area_size, sort_area_size,
# create_bitmap_area_size, and bitmap_merge_area_size.
pga_aggregate_target = 30M          #DEMO
#pga_aggregate_target = 200M        #SMALL
#pga_aggregate_target = 3000M       #MEDIUM
#pga_aggregate_target = 6000M       #LARGE
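
# As a rough illustration (not part of the original template), PGA
# memory usage under pga_aggregate_target can be observed from the
# V$PGASTAT view, for example:
#   SELECT name, value FROM v$pgastat;
# Statistics such as 'total PGA allocated' (exact statistic names can
# vary by release) show whether the configured target matches the
# memory actually consumed by the workload.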

# The database cache is also a globally-managed portion of memory. The
# database cache should be set to a large value for data warehouse
# workloads which involve short-running queries and/or the access of
# small tables and indexes. db_cache_size should, in general, be equal
# to 20-80% of the available memory, depending on the workload. The
# values below assume a mixed data-warehouse workload.

db_cache_size = 30M                 #DEMO
#db_cache_size = 200M               #SMALL
#db_cache_size = 3000M              #MEDIUM
#db_cache_size = 6000M              #LARGE
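
# A hedged sketch (not part of the original template): if the
# db_cache_advice parameter is set to ON, the buffer cache advisory
# view can help validate the size chosen above, for example:
#   SELECT size_for_estimate, estd_physical_reads
#   FROM   v$db_cache_advice
#   WHERE  name = 'DEFAULT';
# Sharply lower estimated physical reads at larger estimate sizes
# suggest that the cache would benefit from more memory.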

# Shared pool size should be, in general, equal to 5-10% of the
# available memory. Data warehouses typically do not require as much
# memory for shared pool as OLTP systems.
shared_pool_size = 20M              #DEMO
#shared_pool_size = 50M             #SMALL
#shared_pool_size = 400M            #MEDIUM
#shared_pool_size = 800M            #LARGE

# The default for large_pool_size should be appropriate for most
# environments. The Large Pool is used for several purposes. In a data
# warehouse, the majority of the space in the Large Pool will be used
# for parallel-execution internal message buffers. The amount of
# memory required by parallel execution is proportional to the product
# of the number of concurrent parallel-execution users and the square
# of the number of CPUs.
#
# The documentation describes in detail how to estimate the default
# size of the Large Pool, and the conditions under which this
# parameter should be set explicitly. Here are some very general
# estimates of the amount of memory required for the Large Pool based
# on the number of CPUs:
#    4 cpus:  5M  (with parallel_threads_per_cpu = 4)
#    8 cpus:  5M  (with parallel_threads_per_cpu = 2)
#    8 cpus: 20M  (with parallel_threads_per_cpu = 4)
#   16 cpus: 20M  (with parallel_threads_per_cpu = 2)
#   32 cpus: 80M  (with parallel_threads_per_cpu = 2)
#
# The Large Pool is only used for parallel-execution message buffers
# when parallel_automatic_tuning is enabled. If
# parallel_automatic_tuning is not utilized, then parallel-execution
# message buffers are stored in the shared pool, and the
# shared_pool_size parameter should be adjusted appropriately.

#***********************************************************************
# Parallel Execution parameters
#***********************************************************************
# Parallel execution parameters were greatly simplified in Oracle8i.
# Data warehouses developed on older releases of Oracle may use
# different init.ora parameters. While these older parameters continue
# to be supported, the parameters below are recommended for all new
# data warehouses, and should be considered when upgrading data
# warehouses from previous releases.

# Setting parallel_automatic_tuning will result in the database
# configuring itself to support parallel execution.
parallel_automatic_tuning = true

# This parameter determines the default number of parallel execution
# processes. Typically, 2 parallel processes per CPU provide good
# performance. However, for systems with a smaller number of CPUs or
# for systems in which the IO subsystem is slow relative to the
# CPUs, more parallel processes may be desired and the value of this
# parameter may be increased.
parallel_threads_per_cpu = 4        #SMALL
#parallel_threads_per_cpu = 2 or 4  #MEDIUM
#parallel_threads_per_cpu = 2       #LARGE

#***********************************************************************
# Optimizer and query parameters
#***********************************************************************
# All data warehouses should use the cost-based optimizer. All basic
# data warehouse performance features, such as star-query support,
# hash joins, parallel execution, and bitmap indexes, are only
# accessible via the cost-based optimizer.
optimizer_mode = choose

# When using a star schema, set this parameter to true.
star_transformation_enabled = true

#***********************************************************************
# IO parameters
#***********************************************************************
# Multiblock reads allow the database to retrieve multiple database
# blocks in a single IO. In general, a high multiblock read count
# provides better performance, particularly for operations on large
# volumes of data. Oracle supports IOs of up to 1MB on many platforms.
# Disk striping will also affect the value for multiblock read count,
# since the stripe size should ideally be a multiple of the IO size.
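
# As a worked illustration (not part of the original template), the IO
# size of a multiblock read is roughly db_block_size multiplied by
# db_file_multiblock_read_count. With the block size used in this file:
#   8192 bytes * 16 = 128KB per multiblock read
#   8192 bytes * 64 = 512KB per multiblock read
# Both fall below the 1MB limit mentioned above; on striped volumes,
# the stripe size would ideally be a multiple of one of these values.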

# If you are gathering optimizer system statistics (see DBMSSTAT.SQL
# for more information), then you should set this parameter to a high
# value.
#db_file_multiblock_read_count = 64

# If you are not gathering optimizer system statistics, then you
# should set this parameter to a lower value.
db_file_multiblock_read_count = 16
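
# A minimal sketch (not part of the original template) of gathering
# optimizer system statistics with the DBMS_STATS package from
# SQL*Plus, assuming a representative workload runs between the two
# calls:
#   EXECUTE DBMS_STATS.GATHER_SYSTEM_STATS('START');
#   -- ... run a representative workload ...
#   EXECUTE DBMS_STATS.GATHER_SYSTEM_STATS('STOP');
# See DBMSSTAT.SQL and the documentation for the full interface.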

#***********************************************************************
# Materialized view parameters
#***********************************************************************
# This parameter enables the use of materialized views for improved
# query performance.
query_rewrite_enabled = true

# This parameter determines the degree to which Oracle enforces
# integrity rules during query rewrite. In most data-warehouse
# environments, 'trusted' is the appropriate setting.
query_rewrite_integrity = trusted
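
# A hedged sketch (not part of the original template) of a summary
# that query rewrite could use; the sales and times tables are assumed
# to follow the 'Sales History' sample schema mentioned above:
#   CREATE MATERIALIZED VIEW sales_by_month_mv
#     BUILD IMMEDIATE
#     REFRESH COMPLETE
#     ENABLE QUERY REWRITE
#   AS
#   SELECT t.calendar_month_desc, SUM(s.amount_sold) AS amount
#   FROM   sales s, times t
#   WHERE  s.time_id = t.time_id
#   GROUP BY t.calendar_month_desc;
# With query_rewrite_enabled = true, queries that aggregate sales by
# month can be transparently rewritten to use this materialized view.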

#***********************************************************************
# Compatibility
#***********************************************************************
# When building a new application, both compatible and
# optimizer_features_enabled should be set to the current release to
# take advantage of all new features. If you are upgrading an existing
# application to Oracle9i, then you may want to consider setting one
# or both of these parameters to an earlier release.
#compatible = 9.0
#optimizer_features_enabled = 9.0

#***********************************************************************
# Other Parameters
#***********************************************************************
# This section lists other parameters that, although not specific to
# data warehousing, are required for any Oracle database. By
# uncommenting these parameters, this parameter file can be used as a
# complete stand-alone init.ora file.

#db_name = MY_DB_NAME

# Define at least two control files by default
#control_files = (ora_control1, ora_control2)

DG_BROKER_START=TRUE
